CERN Courier: Reporting on international high-energy physics
https://cerncourier.com/

Quantum simulators in high-energy physics
https://cerncourier.com/a/quantum-simulators-in-high-energy-physics/ Wed, 09 Jul 2025

Enrique Rico Ortega and Sofia Vallecorsa explain how quantum computing will allow physicists to model complex dynamics, from black-hole evaporation to neutron-star interiors.

In 1982 Richard Feynman posed a question that challenged computational limits: can a classical computer simulate a quantum system? His answer: not efficiently. The complexity of the computation increases rapidly, rendering realistic simulations intractable. To understand why, consider the basic units of classical and quantum information.

A classical bit can exist in one of two states: |0> or |1>. A quantum bit, or qubit, exists in a superposition α|0> + β|1>, where α and β are complex amplitudes with real and imaginary parts. This superposition is the core feature that distinguishes qubits from classical bits. While a classical bit is either |0> or |1>, a qubit can be a blend of both at once. This is what gives quantum computers their immense parallelism – and also their fragility.
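
In code, a qubit's state can be represented as a normalised two-component complex vector. A minimal sketch (the amplitudes below are arbitrary illustrative choices):

```python
import numpy as np

# A qubit state alpha|0> + beta|1> as a normalised complex 2-vector.
# These particular amplitudes are chosen purely for illustration.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = np.array([alpha, beta])

# Born rule: measurement probabilities are |alpha|**2 and |beta|**2,
# and they must sum to one for a normalised state.
prob_0, prob_1 = abs(psi[0]) ** 2, abs(psi[1]) ** 2
print(prob_0, prob_1)  # ~0.5 each for this choice
assert np.isclose(prob_0 + prob_1, 1.0)
```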

The difference becomes profound with scale. Two classical bits have four possible states, and are always in just one of them at a time. Two qubits simultaneously encode a complex-valued superposition of all four states.

Resources scale exponentially. N classical bits encode N boolean values, but N qubits encode 2^N complex amplitudes. Simulating 50 qubits with double-precision real numbers for each part of the complex amplitudes would require more than a petabyte of memory, beyond the reach of even the largest supercomputers.
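
The scaling is easy to make concrete. A short sketch of the memory needed to hold a full state vector, assuming 16 bytes per complex amplitude (two double-precision floats):

```python
# Memory needed for the full state vector of N qubits: 2**N complex
# amplitudes at 16 bytes each (two double-precision floats).
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 50):
    print(n, "qubits:", state_vector_bytes(n) / 1e15, "PB")
# 50 qubits -> roughly 18 PB, beyond any classical supercomputer.
```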

Direct mimicry

Feynman proposed a different approach to quantum simulation. If a classical computer struggles, why not use one quantum system to emulate the behaviour of another? This was the conceptual birth of the quantum simulator: a device that harnesses quantum mechanics to solve quantum problems. For decades, this visionary idea remained in the realm of theory, awaiting the technological breakthroughs that are now rapidly bringing it to life. Today, progress in quantum hardware is driving two main approaches: analog and digital quantum simulation, in direct analogy to the history of classical computing.

Optical tweezers

In analog quantum simulators, the physical parameters of the simulator directly correspond to the parameters of the quantum system being studied. Think of it like a wind tunnel for aeroplanes: you are not calculating air resistance on a computer but directly observing how air flows over a model.

A striking example of an analog quantum simulator traps excited Rydberg atoms in precise configurations using highly focused laser beams known as “optical tweezers”. Rydberg atoms have one electron excited to an energy level far from the nucleus, giving them an exaggerated electric dipole moment that leads to tunable long-range dipole–dipole interactions – an ideal setup for simulating particle interactions in quantum field theories (see “Optical tweezers” figure).

The positions of the Rydberg atoms discretise the space inhabited by the quantum fields being modelled. At each point in the lattice, the local quantum degrees of freedom of the simulated fields are embodied by the internal states of the atoms. Dipole–dipole interactions simulate the dynamics of the quantum fields. This technique has been used to observe phenomena such as string breaking, where the force between particles pulls so strongly that the vacuum spontaneously creates new particle–antiparticle pairs. Such quantum simulations model processes that are notoriously difficult to calculate from first principles using classical computers (see “A philosophical dimension” panel).

Universal quantum computation

Digital quantum simulators operate much like classical digital computers, though using quantum rather than classical logic gates. While classical logic manipulates classical bits, quantum logic manipulates qubits. Because quantum logic gates obey the Schrödinger equation, they preserve information and are reversible, whereas most classical gates, such as “AND” and “OR”, are irreversible. Many quantum gates have no classical equivalent, because they manipulate phase, superposition or entanglement – a uniquely quantum phenomenon in which two or more qubits share a combined state. In an entangled system, the state of each qubit cannot be described independently of the others, even if they are far apart: the global description of the quantum state is more than the combination of the local information at every site.
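
Reversibility follows from unitarity, which can be checked directly on a gate's matrix. A small illustration using the CNOT gate (written in the standard computational basis):

```python
import numpy as np

# Quantum logic gates are unitary (U† U = I), hence reversible.
# Example: the CNOT gate in the basis |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

assert np.allclose(CNOT.conj().T @ CNOT, np.eye(4))  # unitary
assert np.allclose(CNOT @ CNOT, np.eye(4))           # its own inverse

# By contrast, classical AND maps (0,0), (0,1) and (1,0) all to 0:
# the input cannot be recovered from the output, so it is irreversible.
```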

A philosophical dimension

The discretisation of space by quantum simulators echoes the rise of lattice QCD in the 1970s and 1980s. Confronted with the non-perturbative nature of the strong interaction, Kenneth Wilson introduced a method to discretise spacetime, enabling numerical solutions to quantum chromodynamics beyond the reach of perturbation theory. Simulations on classical supercomputers have since deepened our understanding of quark confinement and hadron masses, catalysed advances in high-performance computing, and inspired international collaborations. Lattice QCD has become an indispensable tool in particle physics (see “Fermilab’s final word on muon g-2”).

In classical lattice QCD, the discretisation of spacetime is just a computational trick – a means to an end. But in quantum simulators this discretisation becomes physical. The simulator is a quantum system governed by the same fundamental laws as the target theory.

This raises a philosophical question: are we merely modelling the target theory or are we, in a limited but genuine sense, realising it? If an array of neutral atoms faithfully mimics the dynamical behaviour of a specific gauge theory, is it “just” a simulation, or is it another manifestation of that theory’s fundamental truth? Feynman’s original proposal was, in a sense, about using nature to compute itself. Quantum simulators bring this abstract notion into concrete laboratory reality.

By applying sequences of quantum logic gates, a digital quantum computer can model the time evolution of any target quantum system. This makes them flexible and scalable in pursuit of universal quantum computation – logic able to run any algorithm allowed by the laws of quantum mechanics, given enough qubits and sufficient time. Universal quantum computing requires only a small subset of the many quantum logic gates that can be conceived, for example Hadamard, T and CNOT. The Hadamard gate creates a superposition: |0> → (|0> + |1>)/√2. The T gate applies a 45° phase rotation: |1> → e^{iπ/4}|1>. And the CNOT gate entangles qubits by flipping a target qubit if a control qubit is |1>. These three suffice to prepare any quantum state from a trivial reference state: |ψ> = U_1 U_2 U_3 … U_N |0000…000>.
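
As a sketch, these gates can be written as matrices and applied to a reference state; the example below prepares an entangled Bell state from |00> using the Hadamard and CNOT gates named above:

```python
import numpy as np

# The universal gate set named above, written as matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: superposition
T = np.diag([1, np.exp(1j * np.pi / 4)])       # T: 45-degree phase
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

# Prepare a Bell state from |00>: apply H to the first qubit, then CNOT.
ket00 = np.zeros(4, dtype=complex)
ket00[0] = 1
bell = CNOT @ np.kron(H, np.eye(2)) @ ket00
print(np.round(bell, 3))  # amplitudes of (|00> + |11>)/sqrt(2)
```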

Trapped ions

To bring frontier physics problems within the scope of current quantum computing resources, the distinction between analog and digital quantum simulations is often blurred. The complexity of simulations can be reduced by combining digital gate sequences with analog quantum hardware that aligns with the interaction patterns relevant to the target problem. This is feasible as quantum logic gates usually rely on native interactions similar to those used in analog simulations. Rydberg atoms are a common choice. Alongside them, two other technologies are becoming increasingly dominant in digital quantum simulation: trapped ions and superconducting qubit arrays.

Trapped ions offer the greatest control. Individual charged ions can be suspended in free space using electromagnetic fields. Lasers manipulate their quantum states, inducing interactions between them. Trapped-ion systems are renowned for their high fidelity (meaning operations are accurate) and long coherence times (meaning they maintain their quantum properties for longer), making them excellent candidates for quantum simulation (see “Trapped ions” figure).

Superconducting qubit arrays promise the greatest scalability. These tiny circuits of superconducting material act as qubits when cooled to extremely low temperatures and manipulated with microwave pulses. This technology is at the forefront of efforts to build quantum simulators and digital quantum computers for universal quantum computation (see “Superconducting qubits” figure).

The noisy intermediate-scale quantum era

Despite rapid progress, these technologies are at an early stage of development and face three main limitations.

The first problem is that qubits are fragile. Interactions with their environment quickly compromise their superposition and entanglement, making computations unreliable. Preventing “decoherence” is one of the main engineering challenges in quantum technology today.

The second challenge is that quantum logic gates have low fidelity. Over a long sequence of operations, errors accumulate, corrupting the result.

Finally, quantum simulators currently have a very limited number of qubits – typically only a few hundred. This is far fewer than what is needed for high-energy physics (HEP) problems.

Superconducting qubits

This situation is known as the noisy intermediate-scale quantum (NISQ) era: we are no longer doing proof-of-principle experiments with a few tens of qubits, but neither can we control thousands of them. These limitations mean that current digital simulations are often restricted to “toy” models, such as QED simplified to have just one spatial and one time dimension. Even with these constraints, small-scale devices have successfully reproduced non-perturbative aspects of the theories in real time and have verified the preservation of fundamental physical principles such as gauge invariance, the symmetry that underpins the fundamental forces of the Standard Model.

Quantum simulators may chart a similar path to classical lattice QCD, but with even greater reach. Lattice QCD struggles with real-time evolution and finite-density physics due to the infamous “sign problem”, wherein quantum interference between classically computed amplitudes causes exponentially worsening signal-to-noise ratios. This renders some of the most interesting problems unsolvable on classical machines.

Quantum simulators do not suffer from the sign problem because they evolve naturally in real time, just like the physical systems they emulate. This promises to open new frontiers such as the simulation of early-universe dynamics, black-hole evaporation and the dense interiors of neutron stars.
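
The sign problem can be caricatured in a few lines: averaging a rapidly oscillating phase over random configurations gives a mean that shrinks exponentially while the per-sample noise stays of order one. A toy sketch, with a Gaussian "action" chosen purely for illustration:

```python
import numpy as np

# Toy model of the sign problem: estimate <exp(i*S)> by Monte Carlo,
# where S is a fluctuating action (Gaussian here, purely illustrative).
# The true mean is exp(-width**2 / 2): it shrinks exponentially as the
# fluctuations grow, while the per-sample noise stays O(1), so the
# signal-to-noise ratio collapses.
rng = np.random.default_rng(seed=1)
n_samples = 100_000

for width in (1.0, 2.0, 4.0):
    S = rng.normal(0.0, width, n_samples)
    estimate = np.mean(np.exp(1j * S))
    print(width, abs(estimate), np.exp(-width ** 2 / 2))
```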

Quantum simulators will powerfully augment traditional theoretical and computational methods, offering profound insights when Feynman diagrams become intractable, when dealing with real-time dynamics and when the sign problem renders classical simulations exponentially difficult. Just as the lattice revolution required decades of concerted community effort to reach its full potential, so will the quantum revolution, but the fruits will again transform the field. As the aphorism attributed to Mark Twain goes: history never repeats itself, but it often rhymes.

Quantum information

One of the most exciting and productive developments in recent years is the unexpected, yet profound, convergence between HEP and quantum information science (QIS). For a long time these fields evolved independently. HEP explored the universe’s smallest constituents and grandest structures, while QIS focused on harnessing quantum mechanics for computation and communication. One of the pioneers in studying the interface between these fields was John Bell, a theoretical physicist at CERN.

Just as the lattice revolution needed decades of concerted community effort to reach its full potential, so will the quantum revolution

HEP and QIS are now deeply intertwined. As quantum simulators advance, there is a growing demand for theoretical tools that combine the rigour of quantum field theory with the concepts of QIS. For example, tensor networks were developed in condensed-matter physics to represent highly entangled quantum states, and have now found surprising applications in lattice gauge theories and “holographic dualities” between quantum gravity and quantum field theory. Another example is quantum error correction – a vital QIS technique to protect fragile quantum information from noise, and now a major focus for quantum simulation in HEP.

This cross-disciplinary synthesis is not just conceptual; it is becoming institutional. Initiatives like the US Department of Energy’s Quantum Information Science Enabled Discovery (QuantISED) programme, CERN’s Quantum Technology Initiative (QTI) and Europe’s Quantum Flagship are making substantial investments in collaborative research. Quantum algorithms will become indispensable for theoretical problems just as quantum sensors are becoming indispensable to experimental observation (see “Sensing at quantum limits”).

The result is the emergence of a new breed of scientist: one equally fluent in the fundamental equations of particle physics and the practicalities of quantum hardware. These “hybrid” scientists are building the theoretical and computational scaffolding for a future where quantum simulation is a standard, indispensable tool in HEP. 

Four ways to interpret quantum mechanics
https://cerncourier.com/a/four-ways-to-interpret-quantum-mechanics/ Wed, 09 Jul 2025

Carlo Rovelli describes the major schools of thought on how to make sense of a purely quantum world.

One hundred years after its birth, quantum mechanics is the foundation of our understanding of the physical world. Yet debates on how to interpret the theory – especially the thorny question of what happens when we make a measurement – remain as lively today as during the 1930s.

The latest recognition of the fertility of studying the interpretation of quantum mechanics was the award of the 2022 Nobel Prize in Physics to Alain Aspect, John Clauser and Anton Zeilinger. The motivation for the prize pointed out that the bubbling field of quantum information, with its numerous current and potential technological applications, largely stems from the work of John Bell at CERN in the 1960s and 1970s, which in turn was motivated by the debate on the interpretation of quantum mechanics.

The majority of scientists use a textbook formulation of the theory that distinguishes the quantum system being studied from “the rest of the world” – including the measuring apparatus and the experimenter, all described in classical terms. Used in this orthodox manner, quantum theory describes how quantum systems react when probed by the rest of the world. It works flawlessly.

Sense and sensibility

The problem is that the rest of the world is quantum mechanical as well. There are of course regimes in which the behaviour of a quantum system is well approximated by classical mechanics. One may even be tempted to think that this suffices to solve the difficulty. But this leaves us in the awkward position of having a general theory of the world that only makes sense under special approximate conditions. Can we make sense of the theory in general?

Today, variants of four main ideas stand at the forefront of efforts to make quantum mechanics more conceptually robust. They are known as physical collapse, hidden variables, many worlds and relational quantum mechanics. Each appears to me to be viable a priori, but each comes with a conceptual price to pay. The latter two may be of particular interest to the high-energy community as the first two do not appear to fit well with relativity.

Probing physical collapse

The idea of the physical collapse is simple: we are missing a piece of the dynamics. There may exist a yet-undiscovered physical interaction that causes the wavefunction to “collapse” when the quantum system interacts with the classical world in a measurement. The idea is empirically testable. So far, all laboratory attempts to find violations of the textbook Schrödinger equation have failed (see “Probing physical collapse” figure), and some models for these hypothetical new dynamics have been ruled out by measurements.

The second possibility, hidden variables, follows on from Einstein’s belief that quantum mechanics is incomplete. It posits that its predictions are exactly correct, but that there are additional variables describing what is going on, besides those in the usual formulation of the theory: the reason why quantum predictions are probabilistic is our ignorance of these other variables.

The work of John Bell shows that the dynamics of any such theory will have some degree of non-locality (see “Non-locality” image). In the non-relativistic domain, there is a good example of a theory of this sort, which goes under the name of de Broglie–Bohm, or pilot-wave, theory. This theory has non-local but deterministic dynamics capable of reproducing the predictions of non-relativistic quantum-particle dynamics. As far as I am aware, all existing theories of this kind break Lorentz invariance, and the extension of hidden-variable theories to quantum-field-theoretical domains appears cumbersome.

Relativistic interpretations

Let me now come to the two ideas that are naturally closer to relativistic physics. The first is the many-worlds interpretation – a way of making sense of quantum theory without either changing its dynamics or adding extra variables. It is described in detail in this edition of CERN Courier by one of its leading contemporary proponents (see “The minimalism of many worlds“), but the main idea is the following: being a genuine quantum system, the apparatus that makes a quantum measurement does not collapse the superposition of possible measurement outcomes – it becomes a quantum superposition of the possibilities, as does any human observer.

Non-locality

If we observe a singular outcome, says the many-worlds interpretation, it is not because one of the probabilistic alternatives has actualised in a mysterious “quantum measurement”. Rather, it is because we have split into a quantum superposition of ourselves, and we just happen to be in one of the resulting copies. The world we see around us is thus only one of the branches of a forest of parallel worlds in the overall quantum state of everything. The price to pay to make sense of quantum theory in this manner is to accept the idea that the reality we see is just a branch in a vast collection of possible worlds that include innumerable copies of ourselves.

Relational interpretations are the most recent of the four kinds mentioned. They similarly avoid physical collapse or hidden variables, but do so without multiplying worlds. They stay closer to the orthodox textbook interpretation, but with no privileged status for observers. The idea is to think of quantum theory in a manner closer to the way it was initially conceived by Born, Jordan, Heisenberg and Dirac: namely in terms of transition amplitudes between observations rather than quantum states evolving continuously in time, as emphasised by Schrödinger’s wave mechanics (see “A matter of taste” image).

Observer relativity

The alternative to taking the quantum state as the fundamental entity of the theory is to focus on the information that an arbitrary system can have about another arbitrary system. This information is embodied in the physics of the apparatus: the position of its pointer variable, the trace in a bubble chamber, a person’s memory or a scientist’s logbook. After a measurement, these physical quantities “have information” about the measured system as their value is correlated with a property of the observed systems.

Quantum theory can be interpreted as describing the relative information that systems can have about one another. The quantum state is interpreted as a way of coding the information about a system available to another system. What looks like a multiplicity of worlds in the many-worlds interpretation becomes nothing more than a mathematical accounting of possibilities and probabilities.

A matter of taste

The relational interpretation reduces the content of the physical theory to be about how systems affect other systems. This is like the orthodox textbook interpretation, but made democratic. Instead of a preferred classical world, any system can play a role that is a generalisation of the Copenhagen observer. Relativity teaches us that velocity is a relative concept: an object has no velocity by itself, but only relative to another object. Similarly, quantum mechanics, interpreted in this manner, teaches us that all physical variables are relative. They are not properties of a single object, but ways in which an object affects another object.

The QBism version of the interpretation restricts its attention to observing systems that are rational agents: they can use observations and make probabilistic predictions about the future. Probability is interpreted subjectively, as the expectation of a rational agent. The relational interpretation proper does not accept this restriction: it considers the information that any system can have about any other system. Here, “information” is understood in the simple physical sense of correlation described above.

Like many worlds – to which it is not unrelated – the relational interpretation does not add new dynamics or new variables. Unlike many worlds, it does not ask us to think about parallel worlds either. The conceptual price to pay is a radical weakening of a strong form of realism: the theory does not give us a picture of a unique objective sequence of facts, but only perspectives on the reality of physical systems, and how these perspectives interact with one another. Only quantum states of a system relative to another system play a role in this interpretation. The many-worlds interpretation is very close to this. It supplements the relational interpretation with an overall quantum state, interpreted realistically, achieving a stronger version of realism at the price of multiplying worlds. In this sense, the many worlds and relational interpretations can be seen as two sides of the same coin.

Every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics

I have only sketched here the most discussed alternatives, and have tried to be as neutral as possible in a field of lively debates in which I have my own strong bias (towards the fourth solution). Empirical testing, as I have mentioned, can only test the physical collapse hypothesis.

There is nothing wrong, in science, in using different pictures for the same phenomenon. Conceptual flexibility is itself a resource. Specific interpretations often turn out to be well adapted to specific problems. In quantum optics it is sometimes convenient to think that there is a wave undergoing interference, as well as a particle that follows a single trajectory guided by the wave, as in the pilot-wave hidden-variable theory. In quantum computing, it is convenient to think that different calculations are being performed in parallel in different worlds. My own field of loop quantum gravity treats spacetime regions as quantum processes: here, the relational interpretation merges very naturally with general relativity, because spacetime regions themselves become quantum processes, affecting each other.

Richard Feynman famously wrote that “every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics. He knows that they are all equivalent, and that nobody is ever going to be able to decide which one is right at that level, but he keeps them in his head, hoping that they will give him different ideas for guessing.” I think that this is where we are, in trying to make sense of our best physical theory. We have various ways to make sense of it. We do not yet know which of these will turn out to be the most fruitful in the future.

Sensing at quantum limits
https://cerncourier.com/a/sensing-at-quantum-limits/ Wed, 09 Jul 2025

Quantum sensors have become important tools in low-energy particle physics. Michael Doser explores opportunities to exploit their unparalleled precision at higher energies.

Atomic energy levels. Spin orientations in a magnetic field. Resonant modes in cryogenic, high-quality-factor radio-frequency cavities. The transition from superconducting to normal conducting, triggered by the absorption of a single infrared photon. These are all simple yet exquisitely sensitive quantum systems with discrete energy levels. Each can serve as the foundation for a quantum sensor – instruments that detect single photons, measure individual spins or record otherwise imperceptible energy shifts.

Over the past two decades, quantum sensors have taken on leading roles in the search for ultra-light dark matter and in precision tests of fundamental symmetries. Examples include the use of atomic clocks to probe whether Earth is sweeping through oscillating or topologically structured dark-matter fields, and cryogenic detectors to search for electric dipole moments – subtle signatures that could reveal new sources of CP violation. These areas have seen rapid progress, as challenges related to detector size, noise, sensitivity and complexity have been steadily overcome, opening new phase space in which to search for physics beyond the Standard Model. Could high-energy particle physics benefit next?

Low-energy particle physics

Most of the current applications of quantum sensors are at low energies, where their intrinsic sensitivity and characteristic energy scales align naturally with the phenomena being probed. For example, within the Project 8 experiment at the University of Washington, superconducting sensors are being developed to tackle a longstanding challenge: to distinguish the tiny mass of the neutrino from zero (see “Quantum-noise limited” image). Inward-looking phased arrays of quantum-noise-limited microwave receivers allow spectroscopy of cyclotron radiation from beta-decay electrons as they spiral in a magnetic field. The shape of the endpoint of the spectrum is sensitive to the mass of the neutrino and such sensors have the potential to be sensitive to neutrino masses as low as 40 meV.
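
The underlying relation is the relativistic cyclotron frequency f = eB/(2πγm_e), so the emitted microwave frequency encodes the electron's kinetic energy. A sketch with an illustrative 1 T field (not necessarily the experiment's actual field value):

```python
import math

# Cyclotron frequency of a trapped beta-decay electron:
#   f = e * B / (2 * pi * gamma * m_e)
# so the emitted microwave frequency encodes the kinetic energy.
E_CHARGE = 1.602176634e-19   # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg
M_E_EV = 510998.95           # electron rest energy, eV

def cyclotron_freq_hz(kinetic_energy_ev: float, b_tesla: float) -> float:
    gamma = 1.0 + kinetic_energy_ev / M_E_EV
    return E_CHARGE * b_tesla / (2 * math.pi * gamma * M_E)

# Near the tritium beta-decay endpoint (~18.6 keV), in an
# illustrative 1 T field:
print(cyclotron_freq_hz(18.6e3, 1.0) / 1e9, "GHz")  # ~27 GHz
```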

Quantum-noise limited

Beyond the Standard Model, superconducting sensors play a central role in the search for dark matter. At the lowest mass scales (peV to meV), experiments search for ultralight bosonic dark-matter candidates such as axions and axion-like particles (ALPs) through excitations of the vacuum field inside high-quality-factor microwave and millimetre-wave cavities (see “Quantum sensitivity” image). In the meV range, light-shining-through-wall experiments aim to reveal brief oscillations into weakly coupled hidden-sector particles such as dark photons or ALPs, and may employ quantum sensors for detecting reappearing photons, depending on the detection strategy. In the MeV to sub-GeV mass range, superconducting sensors are used to detect individual photons and phonons in cryogenic scintillators, enabling sensitivity to dark-matter interactions via electron recoils. At higher masses, reaching into the GeV regime, superfluid helium detectors target nuclear recoils from heavier dark-matter particles such as WIMPs.

These technologies also find broad application beyond fundamental physics. For example, in superconducting and other cryogenic sensors, the ability to detect single quanta with high efficiency and ultra-low noise is essential. The same capabilities are the technological foundation of quantum communication.

Raising the temperature

While many superconducting quantum sensors require ultra-low temperatures of a few mK, some spin-based quantum sensors can function at or near room temperature. Spin-based sensors, such as nitrogen-vacancy (NV) centres in diamonds and polarised rubidium atoms, are excellent examples.

NV centres are defects in the diamond lattice where a missing carbon atom – the vacancy – is adjacent to a lattice site where a carbon atom has been replaced by a nitrogen atom. The electronic spin states in NV centres have unique energy levels that can be probed by laser excitation and detection of spin-dependent fluorescence.

Researchers are increasingly exploring how quantum-control techniques can be integrated into high-energy-physics detectors

Rubidium is promising for spin-based sensors because it has unpaired electrons. In the presence of an external magnetic field, its atomic energy levels are split by the Zeeman effect. When optically pumped with laser light, spin-polarised “dark” sublevels – those not excited by the light – become increasingly populated. These aligned spins precess in magnetic fields, forming the basis of atomic magnetometers and other quantum sensors.
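
The precession rate is the Larmor frequency f = γ·B. A sketch using the approximate Rb-87 ground-state gyromagnetic ratio of about 7 Hz per nanotesla (treat the exact value as an assumption of this sketch):

```python
# Larmor precession underlies the atomic magnetometer: optically
# pumped spins precess about an applied field at f = gamma * B.
# The gyromagnetic ratio below is the approximate Rb-87 ground-state
# value (~7 Hz/nT); its precision here is illustrative.
GAMMA_RB87_HZ_PER_NT = 6.996

def larmor_freq_hz(b_nanotesla: float) -> float:
    return GAMMA_RB87_HZ_PER_NT * b_nanotesla

# An Earth-strength field (~50,000 nT) gives a precession signal of a
# few hundred kHz; an axion-induced effective field would appear as a
# tiny time-varying shift on top of it.
print(larmor_freq_hz(50_000) / 1e3, "kHz")  # ~350 kHz
```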

Being exquisite magnetometers, both devices make promising detectors for ultralight bosonic dark-matter candidates such as axions. Fermion spins may interact with spatial or temporal gradients of the axion field, leading to tiny oscillating energy shifts. The coupling of axions to gluons could also show up as an oscillating nuclear electric dipole moment. These interactions could manifest as oscillating energy-level shifts in NV centres, or as time-varying NMR-like spin precession signals in the atomic magnetometers.

Large-scale detectors

The situation is completely different in high-energy physics detectors, which require numerous interactions between a particle and a detector. Charged particles cause many ionisation events, and when a neutral particle interacts it produces charged particles that result in similarly numerous ionisations. Even if quantum control were possible within individual units of a massive detector, the number of individual quantum sub-processes to be monitored would exceed the possibilities of any realistic device.

Increasingly, however, researchers are exploring how quantum-control techniques – such as manipulating individual atoms or spins using lasers or microwaves – can be integrated into high-energy-physics detectors. These methods could enhance detector sensitivity, tune detector response or enable entirely new ways of measuring particle properties. While these quantum-enhanced or hybrid detection approaches are still in their early stages, they hold significant promise.

Quantum dots

Quantum dots are nanoscale semiconductor crystals – typically a few nanometres in diameter – that confine charge carriers (electrons and holes) in all three spatial dimensions. This quantum confinement leads to discrete, atom-like energy levels and results in optical and electronic properties that are highly tunable with size, shape and composition. Originally studied for their potential in optoelectronics and biomedical imaging, quantum dots have more recently attracted interest in high-energy physics due to their fast scintillation response, narrow-band emission and tunability. Their emission wavelength can be precisely controlled through nanostructuring, making them promising candidates for engineered detectors with tailored response characteristics.
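
The size-tunability can be estimated from the idealised particle-in-a-box model, where the level spacing scales as 1/L². A rough sketch (the effective-mass ratio used is a hypothetical, material-dependent parameter):

```python
import math

# Particle-in-a-box estimate of quantum confinement: the ground-state
# energy of a cubic dot of side L scales as 1/L**2, which is why the
# emission colour is tunable via dot size.  The effective-mass ratio
# (0.1 of the free-electron mass) is a hypothetical value.
HBAR = 1.054571817e-34   # reduced Planck constant, J s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # J per eV

def ground_state_ev(box_nm: float, eff_mass_ratio: float = 0.1) -> float:
    L = box_nm * 1e-9
    m = eff_mass_ratio * M_E
    # (1,1,1) level of a 3D cubic box: 3 * (pi*hbar/L)**2 / (2m)
    return 3 * (math.pi * HBAR / L) ** 2 / (2 * m) / EV

for size_nm in (3.0, 5.0, 8.0):
    print(size_nm, "nm ->", round(ground_state_ev(size_nm), 3), "eV")
```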

Chromatic calorimetry

Though their radiation hardness remains to be established, engineering their composition, geometry, surface and size can yield very narrow-band (20 nm) emitters across the optical spectrum and into the infrared. Quantum dots such as these could enable the design of a “chromatic calorimeter”: a stack of quantum-dot layers, each tuned to emit at a distinct wavelength, for example red in the first layer, orange in the second and progressing through the visible spectrum to violet. Each layer would absorb higher-energy photons quite broadly but emit light in a narrow spectral band. The intensity of each colour would then correspond to the energy absorbed in that layer, while the emission wavelength would encode the position of energy deposition, revealing the shower shape (see “Chromatic calorimetry” figure). Because each layer is optically distinct, hermetic isolation would be unnecessary, reducing the overall material budget.
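A minimal sketch of the readout logic described above: because each layer is tagged by its emission band, a single spectral measurement doubles as a longitudinal shower profile. The layer wavelengths and the light yield below are hypothetical placeholders, not parameters of any built device:

```python
# Sketch of chromatic-calorimeter readout: map each measured emission band
# back to its layer, converting photon counts to deposited energy per layer.
# Layer order, wavelengths and light yield are hypothetical round numbers.
LAYERS = [  # (depth index, centre emission wavelength in nm)
    (0, 620),  # red, first layer hit by the incoming particle
    (1, 590),  # orange
    (2, 570),  # yellow
    (3, 530),  # green
    (4, 450),  # blue
    (5, 410),  # violet
]

def shower_profile(band_intensity, light_yield=1000.0):
    """Convert per-band photon counts to energy per layer (GeV), assuming a
    flat, hypothetical light yield of `light_yield` photons per GeV."""
    return [(depth, band_intensity.get(wl, 0) / light_yield)
            for depth, wl in LAYERS]

# Example: a shower peaking in the middle layers of the stack.
counts = {620: 2000, 590: 8000, 570: 12000, 530: 9000, 450: 3000, 410: 500}
for depth, energy in shower_profile(counts):
    print(f"layer {depth}: {energy:.1f} GeV")
```

Summing the per-layer energies recovers the total deposited energy, while their distribution in depth traces the shower shape, which is exactly the extra information a conventional single-colour scintillator stack would need segmentation to provide.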

Rather than improving the energy resolution of existing calorimeters, quantum dots could provide additional information on the shape and development of particle showers if embedded in existing scintillators. Initial simulations and beam tests by CERN’s Quantum Technology Initiative (QTI) support the hypothesis that the spectral intensity of quantum-dot emission can carry information about the energy and species of incident particles. Ongoing work aims to explore their capabilities and limitations.

Beyond calorimetry, quantum dots embedded within solid semiconductor matrices, such as gallium arsenide, could form a novel class of “photonic trackers”. Scintillation light from electronically tunable quantum dots could be collected by photodetectors integrated directly on top of the same thin semiconductor structure, as in the DoTPiX concept. Combining a highly compact, radiation-tolerant scintillating pixel geometry with intrinsic signal amplification and a minimal material budget, photonic trackers could provide a scintillation-light-based alternative to traditional charge-based particle trackers.

Living on the edge

Low temperatures also offer opportunities at scale – and cryogenic operation is a well-established technique in both high-energy and astroparticle physics, with liquid argon (boiling point 87 K) widely used in time projection chambers and some calorimeters, and some dark-matter experiments using liquid helium (boiling point 4.2 K) to reach even lower temperatures. A range of solid-state detectors, including superconducting sensors, operate effectively at these temperatures and below, and offer significant advantages in sensitivity and energy resolution.

Single-photon phase transitions

Transition-edge sensors (TESs) operate in the narrow temperature range where a superconducting film undergoes a rapid transition from zero resistance to finite values. When a particle deposits energy in a TES, it slightly raises the temperature; because the transition is extremely steep, even a tiny temperature change leads to a detectable resistance change, allowing precise calorimetry. Magnetic microcalorimeters (MMCs) achieve comparable performance by a different route: the energy deposit warms a paramagnetic sensor whose magnetisation changes sharply with temperature, and that change is read out by a SQUID magnetometer.
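The principle can be captured in a few lines. The toy model below stands in for the steep superconducting transition with a logistic curve; the critical temperature, transition width and heat capacity are assumed, illustrative values rather than parameters of any real device:

```python
# Toy transition-edge sensor: a logistic R(T) curve stands in for the steep
# superconducting transition, showing how a tiny energy deposit becomes a
# measurable resistance step. Tc, width and heat capacity are assumed values.
import math

def tes_resistance(T, Tc=0.1, width=0.001, Rn=0.1):
    """~0 ohm well below Tc, normal resistance Rn well above, steep between."""
    return Rn / (1.0 + math.exp(-(T - Tc) / width))

def pulse_height(E_keV, T0=0.1, C=1.0e-12):
    """Resistance step for a deposit of E_keV, with heat capacity C in J/K:
    the deposit raises the temperature by dT = E / C."""
    dT = E_keV * 1.602e-16 / C  # 1 keV = 1.602e-16 J
    return tes_resistance(T0 + dT) - tes_resistance(T0)

print(f"1 keV deposit -> resistance step of {pulse_height(1.0)*1e3:.1f} mOhm")
```

For these numbers a 1 keV deposit warms the sensor by only a fraction of a millikelvin, yet produces a resistance step of a few milliohms, which is the essence of why biasing on the transition edge gives such fine energy resolution.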

Functioning at millikelvin temperatures, TESs provide much higher energy resolution than solid-state detectors made from high-purity germanium crystals, which work by collecting electron–hole pairs created when ionising radiation interacts with the crystal lattice. TESs are increasingly used in high-resolution X-ray spectroscopy of pionic, muonic or antiprotonic atoms, and in photon detection for observational astronomy, despite the technical challenges associated with maintaining ultra-low operating temperatures.

By contrast, superconducting nanowire and microwire single-photon detectors (SNSPDs and SMSPDs) register only a change in state – from superconducting to normal conducting – allowing them to operate at higher temperatures than traditional low-temperature sensors. When made from high–critical-temperature (Tc) superconductors, operation at temperatures as high as 10 K is feasible, while maintaining excellent sensitivity to energy deposited by charged particles and ultrafast switching times on the order of a few picoseconds. Recent advances include the development of large-area devices with up to 400,000 micron-scale pixels (see “Single-photon phase transitions” figure), fabrication of high-Tc SNSPDs and successful beam tests of SMSPDs. These technologies are promising candidates for detecting milli-charged particles – hypothetical particles arising in “hidden sector” extensions of the Standard Model – or for high-rate beam monitoring at future colliders.

Rugged, reliable and reproducible

Quantum sensor-based experiments have vastly expanded the phase space that has been searched for new physics. This is just the beginning of the journey, as larger-scale efforts build on the initial gold rush and new quantum devices are developed, perfected and brought to bear on the many open questions of particle physics.

Partnering with neighbouring fields such as quantum computing, quantum communication and manufacturing is of paramount importance

To fully profit from their potential, a vigorous R&D programme is needed to scale up quantum sensors for future detectors. Ruggedness, reliability and reproducibility are key – as well as establishing “proof of principle” for the numerous imaginative concepts that have already been conceived. Challenges range from access to test infrastructures, to standardised test protocols for fair comparisons. In many cases, the largest challenge is to foster an open exchange of ideas given the numerous local developments that are happening worldwide. Finding a common language to discuss developments in different fields that at first glance may have little in common, builds on a willingness to listen, learn and exchange.

The European Committee for Future Accelerators (ECFA) detector R&D roadmap provides a welcome framework for addressing these challenges collaboratively through the Detector R&D (DRD) collaborations established in 2023 and now coordinated at CERN. Quantum sensors and emerging technologies are covered within the DRD5 collaboration, which ties together 112 institutes worldwide, many of them leaders in their particular field. Only a third stem from the traditional high-energy physics community.

These efforts build on the widespread expertise and enthusiastic efforts at numerous institutes and tie in with the quantum programmes being spearheaded at high-energy-physics research centres, among them CERN’s QTI. Partnering with neighbouring fields such as quantum computing, quantum communication and manufacturing is of paramount importance. The best approach may prove to be “targeted blue-sky research”: a willingness to explore completely novel concepts while keeping their ultimate usefulness for particle physics firmly in mind.

The post Sensing at quantum limits appeared first on CERN Courier.

Feature Quantum sensors have become important tools in low-energy particle physics. Michael Doser explores opportunities to exploit their unparalleled precision at higher energies. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_QSENSING_ADMX.jpg
A new probe of radial flow https://cerncourier.com/a/a-new-probe-of-radial-flow/ Tue, 08 Jul 2025 20:23:57 +0000 https://cerncourier.com/?p=113587 The ATLAS and ALICE collaborations have announced the first results of a new way to measure the “radial flow” of quark–gluon plasma.

The post A new probe of radial flow appeared first on CERN Courier.

Radial-flow fluctuations

The ATLAS and ALICE collaborations have announced the first results of a new way to measure the “radial flow” of quark–gluon plasma (QGP). The two analyses offer a fresh perspective into the fluid-like behaviour of QCD matter under extreme conditions, such as those that prevailed after the Big Bang. The measurements are highly complementary, with ALICE drawing on their detector’s particle-identification capabilities and ATLAS leveraging the experiment’s large rapidity coverage.

At the Large Hadron Collider, lead–ion collisions produce matter at temperatures and densities so high that quarks and gluons momentarily escape their confinement within hadrons. The resulting QGP is believed to have filled the universe during its first few microseconds, before cooling and fragmenting into mesons and baryons. In the laboratory, these streams of particles allow researchers to reconstruct the dynamical evolution of the QGP, which has long been known to transform anisotropies of the initial collision geometry into anisotropic momentum distributions of the final-state particles.

Compelling evidence

Differential measurements of the azimuthal distributions of produced particles over the last decades have provided compelling evidence that the outgoing momentum distribution reflects a collective response driven by initial pressure gradients. The isotropic expansion component, typically referred to as radial flow, has instead been inferred from the slope of particle spectra (see figure 1). Despite its fundamental role in driving the QGP fireball, radial flow lacked a differential probe comparable to those of its anisotropic counterparts.

ATLAS measurements of radial flow

That situation has now changed. The ALICE and ATLAS collaborations recently employed the novel observable v0(pT) to investigate radial flow directly. Their independent results demonstrate, for the first time, that the isotropic expansion of the QGP in heavy-ion collisions exhibits clear signatures of collective behaviour. The isotropic expansion of the QGP and its azimuthal modulations ultimately depend on the hydrodynamic properties of the QGP, such as shear or bulk viscosity, and can thus be measured to constrain them.

Traditionally, radial flow has been inferred from the slope of pT-spectra, with the pT-integrated radial flow extracted via fits to “blast wave” models. The newly introduced differential observable v0(pT) captures fluctuations in spectral shape across pT bins. v0(pT) retains differential sensitivity, since it is defined as the correlation (technically the normalised covariance) between the fraction of particles in a given pT-interval and the mean transverse momentum of the collision products within a single event, [pT]. Roughly speaking, a fluctuation that raises [pT] increases the fractional yield at high pT, producing a positive v0(pT) there; the corresponding depletion of the fractional yield at low pT makes v0(pT) negative. A pseudorapidity gap between the measurement of mean pT and the particle yields is used to suppress short-range correlations and isolate the long-range, collective signal. Previous studies observed event-by-event fluctuations in [pT], related to radial flow over a wide pT range and quantified by the coefficient v0ref, but they could not establish whether these fluctuations were correlated across different pT intervals – a crucial signature of collective behaviour.
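A toy Monte Carlo makes the construction concrete. In the sketch below a common “radial-flow boost” rescales all particle momenta in an event coherently; correlating the fractional yield in a pT bin with the event-mean [pT] then reproduces the expected sign pattern. This illustrates the idea only – the binning, weights and normalisation of the real ATLAS/ALICE estimator differ:

```python
# Toy illustration of the v0(pT) construction: correlate, event by event, the
# fractional particle yield in a pT bin with the event-mean [pT]. An upward
# [pT] fluctuation should give a positive signal at high pT, negative at low.
import random
random.seed(1)

def toy_event():
    """Events share one exponential spectrum, but a random 'radial-flow boost'
    rescales every particle's pT coherently, the kind of collective,
    event-wide fluctuation that v0 is designed to pick out."""
    boost = random.gauss(1.0, 0.05)
    return [boost * random.expovariate(1.0 / 0.7) for _ in range(500)]

events = [toy_event() for _ in range(2000)]
mean_pts = [sum(ev) / len(ev) for ev in events]
M = sum(mean_pts) / len(mean_pts)

def v0(pt_lo, pt_hi):
    """Normalised covariance between the fractional yield in [pt_lo, pt_hi)
    and the event-mean pT, across events."""
    fracs = [sum(pt_lo <= pt < pt_hi for pt in ev) / len(ev) for ev in events]
    F = sum(fracs) / len(fracs)
    cov = sum((f - F) * (m - M) for f, m in zip(fracs, mean_pts)) / len(events)
    return cov / F if F > 0 else 0.0

print("low  pT bin:", v0(0.0, 0.5))  # negative: yield fraction drops when [pT] rises
print("high pT bin:", v0(2.0, 4.0))  # positive: yield fraction grows when [pT] rises
```

Because the boost acts on the whole event, the correlation survives even when the two quantities are measured on disjoint sets of particles, which is the collectivity test the pseudorapidity gap implements in the real analyses.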

Origins

The ATLAS collaboration performed a measurement of v0(pT) in the 0.5 to 10 GeV range, identifying three signatures of the collective origin of radial flow (see figure 2). First, correlations between the particle yield at fixed pT and the event-wise mean [pT] in a reference interval show that the two-particle radial flow factorises into single-particle coefficients as v0(pT) × v0ref for pT < 4 GeV, independent of the reference choice (left panel). Second, the data display no dependence on the rapidity gap between correlated particles, suggesting a long-range effect intrinsic to the entire system (middle panel). Finally, the centrality dependence of the ratio v0(pT)/v0ref follows a consistent trend from head-on to peripheral collisions, effectively cancelling initial geometry effects and supporting the interpretation of a collective QGP response (right panel). At higher pT, a decrease in v0(pT) and a splitting with respect to centrality suggest the onset of non-thermal effects such as jet quenching. This may reveal fluctuations in jet energy loss – an area warranting further investigation.

ALICE measurements of radial flow

Using more than 80 million collisions at a centre-of-mass energy of 5.02 TeV, ALICE extracted v0(pT) for identified pions, kaons and protons across a broad range of centralities. ALICE observes v0(pT) to be negative at low pT, reflecting the influence of mean-pT fluctuations on the spectral shape (see figure 3). The data display a clear mass ordering at low pT, from protons to kaons to pions, consistent with expectations from collective radial expansion. This mass ordering reflects the greater “push” heavier particles experience in the rapidly expanding medium. The picture changes above 3 GeV, where protons have larger v0(pT) values than pions and kaons, perhaps indicating the contribution of recombination processes in hadron production.

The results demonstrate that the isotropic expansion of the QGP in heavy-ion collisions exhibits clear signatures of collective behaviour

The two collaborations’ measurements of the new v0(pT) observable highlight its sensitivity to the bulk-transport properties of the QGP medium. Comparisons with hydrodynamic calculations show that v0(pT) varies with bulk viscosity and the speed of sound, but that it has a weaker dependence on shear viscosity. Hydrodynamic predictions reproduce the data well up to about 2 GeV, but diverge at higher momenta. The deviation of non-collective models like HIJING from the data underscores the dominance of final-state, hydrodynamic-like effects in shaping radial flow.

These results advance our understanding of one of the most extreme regimes of QCD matter, strengthening the case for the formation of a strongly interacting, radially expanding QGP medium in heavy-ion collisions. Differential measurements of radial flow offer a new tool to probe this fluid-like expansion in detail, establishing its collective origin and complementing decades of studies of anisotropic flow.

News The ATLAS and ALICE collaborations have announced the first results of a new way to measure the “radial flow” of quark–gluon plasma. https://cerncourier.com/wp-content/uploads/2025/07/ATLAS-PHOTO-2018-001-1.png
Neutron stars as fundamental physics labs https://cerncourier.com/a/neutron-stars-as-fundamental-physics-labs/ Tue, 08 Jul 2025 20:12:41 +0000 https://cerncourier.com/?p=113630 Fifty experts on nuclear physics, particle physics and astrophysics met at CERN from 9 to 13 June to discuss how to use extreme environments as precise laboratories for fundamental physics.

The post Neutron stars as fundamental physics labs appeared first on CERN Courier.

Neutron stars are truly remarkable systems. They pack between one and two times the mass of the Sun into a radius of about 10 kilometres. Teetering on the edge of gravitational collapse into a black hole, they exhibit some of the strongest gravitational forces in the universe. They feature densities in excess of those of atomic nuclei. And due to these high densities they produce weakly interacting particles such as neutrinos. Fifty experts on nuclear physics, particle physics and astrophysics met at CERN from 9 to 13 June to discuss how to use these extreme environments as precise laboratories for fundamental physics.

Perhaps the most intriguing open question surrounding neutron stars is what is actually inside them. Clearly they are primarily composed of neutrons, but many theories suggest that other forms of matter should appear in the highest density regions near the centre of the star, including free quarks, hyperons and kaon or pion condensates. Diverse data can constrain these hypotheses, including astronomical inferences of the masses and radii of neutron stars, observations of the mergers of neutron stars by LIGO, and baryon production patterns and correlations in heavy-ion collisions at the LHC. Theoretical consistency is critical here. Several talks highlighted the importance of low-energy nuclear data to understand the behaviour of nuclear matter at low densities, though also emphasising that at very high densities and energies any description should fall within the realm of QCD – a theory that beautifully describes the dynamics of quarks and gluons at the LHC.

Another key question for neutron stars is how fast they cool. This depends critically on their composition. Quarks, hyperons, nuclear resonances, pions or muons would each lead to different channels to cool the neutron star. Measurements of the temperatures and ages of neutron stars might thereby be used to learn about their composition.

Research into neutron stars has progressed so rapidly in recent years that it allows key tests of fundamental physics

The workshop revealed that research into neutron stars has progressed so rapidly in recent years that it allows key tests of fundamental physics, including searches for particles beyond the Standard Model such as the axion: a very light and weakly coupled dark-matter candidate that was initially postulated to explain the “strong CP problem” of why strong interactions are identical for particles and antiparticles. The workshop allowed particle theorists to appreciate the various possible uncertainties in their theoretical predictions and propagate them into new channels that may allow sharper tests of axions and other weakly interacting particles. An intriguing question that the workshop left open is whether the canonical QCD axion could condense inside neutron stars.

While many uncertainties remain, the workshop revealed that the field is open and exciting, and that upcoming observations of neutron stars, including neutron-star mergers or the next galactic supernova, hold unique opportunities to understand fundamental questions from the nature of dark matter to the strong CP problem.

Meeting report Fifty experts on nuclear physics, particle physics and astrophysics met at CERN from 9 to 13 June to discuss how to use extreme environments as precise laboratories for fundamental physics. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_FN_Neutron.jpg
The battle of the Big Bang https://cerncourier.com/a/the-battle-of-the-big-bang/ Tue, 08 Jul 2025 20:05:48 +0000 https://cerncourier.com/?p=113664 Battle of the Big Bang provides an entertaining update on the collective obsessions and controlled schizophrenias in cosmology, writes Will Kinney.

The post The battle of the Big Bang appeared first on CERN Courier.

As Arthur Koestler wrote in his seminal 1959 work The Sleepwalkers, “The history of cosmic theories … may without exaggeration be called a history of collective obsessions and controlled schizophrenias; and the manner in which some of the most important individual discoveries were arrived at, reminds one more of a sleepwalker’s performance than an electronic brain’s.” Koestler’s trenchant observation about the state of cosmology in the first half of the 20th century is perhaps even more true of cosmology in the first half of the 21st, and Battle of the Big Bang: The New Tales of Our Cosmic Origins provides an entertaining – and often refreshingly irreverent – update on the state of current collective obsessions and controlled schizophrenias in cosmology’s effort to understand the origin of the universe. The product of a collaboration between a working cosmologist (Afshordi) and a science communicator (Halper), Battle of the Big Bang tells the story of our modern efforts to comprehend the nature of the first moments of time, back to the moment of the Big Bang and even before.

Rogues gallery

The story told by the book combines lucid explanations of a rogues’ gallery of modern cosmological theories, some astonishingly successful, others less so, interspersed with anecdotes culled from Halper’s numerous interviews with key players in the game. These stories of the real people behind the theories add humanistic depth to the science, and the balance between Halper’s engaging storytelling and Afshordi’s steady-handed illumination of often esoteric scientific ideas is mostly a winning combination; the book is readable, without sacrificing too much scientific depth. In this respect, Battle of the Big Bang is reminiscent of Dennis Overbye’s 1991 Lonely Hearts of the Cosmos. As with Overbye’s account of the famous conference-banquet fist fight between Rocky Kolb and Gary Steigman, there is no shortage here of renowned scientists behaving like children, and the “mean girls of cosmology” angle makes for an entertaining read. The story of University of North Carolina professor Paul Frampton getting catfished by cocaine smugglers posing as model Denise Milani and ending up in an Argentine prison, for example, is not one you see coming.

Battle of the Big Bang: The New Tales of Our Cosmic Origins

A central conflict propelling the narrative is the longstanding feud between Andrei Linde and Alan Guth, both originators of the theory of cosmological inflation, and Paul Steinhardt, also an originator of the theory who later transformed into an apostate and bitter critic of the theory he helped establish.

Inflation – a hypothesised period of exponential cosmic expansion by more than 26 orders of magnitude that set the initial conditions for the hot Big Bang – is the gorilla in the room, a hugely successful theory that over the past several decades has racked up win after win when confronted by modern precision cosmology. Inflation is rightly considered by most cosmologists to be a central part of the “standard” cosmology, and its status as a leading theory inevitably makes it a target of critics like Steinhardt, who argue that inflation’s inherent flexibility means that it is not a scientific theory at all. Inflation is introduced early in the book, and for the remainder, Afshordi and Halper ably lead the reader through a wild mosaic of alternative theories to inflation: multiverses, bouncing universes, new universes birthed from within black holes, extra dimensions, varying light speed and “mirror” universes with reversed time all make appearances, a dizzying inventory of our most recent collective obsessions and schizophrenias.

In the later chapters, Afshordi describes some of his own efforts to formulate an alternative to inflation, and it is here that the book is at its strongest; the voice of a master of the craft confronting his own unconscious assumptions and biases makes for compelling reading. I have known Niayesh as a friend and colleague for more than 20 years. He is a fearlessly creative theorist with deep technical skill, but he has the heart of a rebel and a poet, and I found myself wishing that the book gave his unique voice more room to shine, instead of burying it beneath too many mundane pop-science tropes; the book could have used more of the science and less of the “science communication”. At times the pop-culture references come so thick that the reader feels as if he is having to shake them off his leg.

Compelling arguments

Anyone who reads science blogs or follows science on social media is aware of the voices, some of them from within mainstream science and many from further out on the fringe, arguing that modern theoretical physics suffers from a rigid orthodoxy that serves to crowd out worthy alternative ideas to understand problems such as dark matter, dark energy and the unification of gravity with quantum mechanics. This has been the subject of several books such as Lee Smolin’s The Trouble with Physics and Peter Woit’s Not Even Wrong. A real value in Battle of the Big Bang is to provide a compelling counterargument to that pessimistic narrative. In reality, ambitious scientists like nothing better than overturning a standard paradigm, and theorists have put the standard model of cosmology in the cross hairs with the gusto of assassins gunning for John Wick. Despite – or perhaps because of – its focus on conflict, this book ultimately paints a picture of a vital and healthy scientific process, a kind of controlled chaos, ripe with wild ideas, full of the clash of egos and littered with the ashes of failed shots at glory.

What the book is not is a reliable scholarly work on the history of science. Not only was the manuscript rather haphazardly copy-edited (the renowned Mount Palomar telescope, for example, is not “two hundred foot”, but in fact 200 inches), but the historical details are sometimes smoothed over to fit a coherent narrative rather than presented in their actual messy accuracy. While I do not doubt the anecdote of David Spergel saying “we’re dead”, referring to cosmic strings when data from the COBE satellite was first released, it was not COBE that killed cosmic strings. The blurry vision of COBE could accommodate either strings or inflation as the source of fluctuations in the cosmic microwave background (CMB), and it took a clearer view to make the distinction. The final nail in the coffin came from BOOMERanG nearly a decade later, with the observation of the second acoustic peak in the CMB. And it was not, as claimed here, BOOMERanG that provided the first evidence for a flat geometry to the cosmos; that happened a few years earlier, with the Saskatoon and CAT experiments.

Afshordi and Halper ably lead the reader through a wild mosaic of alternative theories to inflation

The book makes a point of the premature death of Dave Wilkinson, when in fact he died at age 67, not (as is implied in the text) in his 50s. Wilkinson – who was my freshman physics professor – was a great scientist and a gifted teacher, and it is appropriate to memorialise him, but he had a long and productive career.

Besides these points of detail, there are some more significant omissions. The book relates the story of how the Ukrainian physicist Alex Vilenkin, blacklisted from physics and working as a zookeeper in Kharkiv, escaped the Soviet Union. Vilenkin moved to SUNY Buffalo, where I am currently a professor, because he had mistaken Mendel Sachs, a condensed matter theorist, for Ray Sachs, who originally predicted fluctuations in the CMB. It’s a funny story, and although the authors note that Vilenkin was blacklisted for refusing to be an informant for the KGB, they omit the central context that he was Jewish, one of many Jews banished from academic life by Soviet authorities who escaped the stifling anti-Semitism of the Soviet Union for scientific freedom in the West. This history resonates today in light of efforts by some scientists to boycott Israeli institutes and even blacklist Israeli colleagues. Unlike the minutiae of CMB physics, this matters, and Battle of the Big Bang should have been more careful to tell the whole story.

Review Battle of the Big Bang provides an entertaining update on the collective obsessions and controlled schizophrenias in cosmology, writes Will Kinney. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_Rev_Steinhardt.jpg
Quantum theory returns to Helgoland https://cerncourier.com/a/quantum-theory-returns-to-helgoland/ Tue, 08 Jul 2025 20:01:35 +0000 https://cerncourier.com/?p=113617 The takeaway from Helgoland 2025 was that the foundations of quantum mechanics, though strongly built on Helgoland 100 years ago, remain open to interpretation.

The post Quantum theory returns to Helgoland appeared first on CERN Courier.

In June 1925, Werner Heisenberg retreated to the German island of Helgoland seeking relief from hay fever and the conceptual disarray of the old quantum theory. On this remote, rocky outpost in the North Sea, he laid the foundations of matrix mechanics. Later, his “island epiphany” would pass through the hands of Max Born, Wolfgang Pauli, Pascual Jordan and several others, and become the first mature formulation of quantum theory. From 9 to 14 June 2025, almost a century later, hundreds of researchers gathered on Helgoland to mark the anniversary – and to deal with pressing and unfinished business.

Alfred D Stone (Yale University) called upon participants to challenge the folklore surrounding quantum theory’s birth. Philosopher Elise Crull (City College of New York) drew overdue attention to Grete Hermann, who hinted at entanglement before it had a name and anticipated Bell in identifying a flaw in von Neumann’s no-go theorem, which had been taken as proof that hidden-variable theories are impossible. Science writer Philip Ball questioned Heisenberg’s epiphany itself: he didn’t invent matrix mechanics in a flash, claims Ball, nor immediately grasp its relevance, and it took months, and others, to see his contribution for what it was (see “Lend me your ears” image).

Building on a strong base

A clear takeaway from Helgoland 2025 was that the foundations of quantum mechanics, though strongly built on Helgoland 100 years ago, nevertheless remain open to interpretation, and any future progress will depend on excavating them directly (see “Four ways to interpret quantum mechanics“).

Does the quantum wavefunction represent an objective element of reality or merely an observer’s state of knowledge? On this question, Helgoland 2025 could scarcely have been more diverse. Christopher Fuchs (UMass Boston) passionately defended quantum Bayesianism, which recasts the Born probability rule as a consistency condition for rational agents updating their beliefs. Wojciech Zurek (Los Alamos National Laboratory) presented the Darwinist perspective, for which classical objectivity emerges from redundant quantum information encoded across the environment. Although Zurek himself maintains a more agnostic stance, his decoherence-based framework is now widely embraced by proponents of many-worlds quantum mechanics (see “The minimalism of many worlds“).

The foundations of quantum mechanics remain open to interpretation, and any future progress will depend on excavating them directly

Markus Aspelmeyer (University of Vienna) made the case that a signature of gravity’s long-speculated quantum nature may soon be within experimental reach. Building on the “gravitational Schrödinger’s cat” thought experiment proposed by Feynman in the 1950s, he described how placing a massive object in a spatial superposition could entangle a nearby test mass through their gravitational interaction. Such a scenario would produce correlations that are inexplicable by classical general relativity alone, offering direct empirical evidence that gravity must be described quantum-mechanically. Realising this type of experiment requires ultra-low pressures and cryogenic temperatures to suppress decoherence, alongside extremely low-noise measurements of gravitational effects at short distances. Recent advances in optical and optomechanical techniques for levitating and controlling nanoparticles suggest a path forward – one that could bring evidence for quantum gravity not from black holes or the early universe, but from laboratories on Earth.
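The scale of the effect can be estimated on the back of an envelope. For two masses each held in a spatial superposition, the phase accumulated between a pair of branches at separation d is of order G m₁ m₂ t/(ħ d); the masses, separations and times below are assumed, illustrative values, not the parameters of any specific proposal:

```python
# Back-of-envelope estimate for the gravitationally induced phase in the
# Feynman-style experiment sketched above. Each pair of superposition branches
# at separation d accumulates a phase ~ G m1 m2 t / (hbar d); the measurable
# entanglement signal comes from the difference between branch separations.
# All masses, distances and times below are assumed, illustrative values.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.0546e-34  # reduced Planck constant, J s

def grav_phase(m1, m2, d, t):
    """Phase (radians) accumulated over time t at branch separation d."""
    return G * m1 * m2 * t / (HBAR * d)

# Two ~1e-14 kg nanoparticles, branch separations 100 um vs 300 um, 1 s:
m = 1e-14
dphi = grav_phase(m, m, 100e-6, 1.0) - grav_phase(m, m, 300e-6, 1.0)
print(f"relative phase ~ {dphi:.2f} rad")
```

Even for these tiny masses the relative phase comes out at a sizeable fraction of a radian, which is why the experimental bottleneck is not the size of the phase but keeping decoherence and competing forces below it.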

Information insights

Quantum information was never far from the conversation. Isaac Chuang (MIT) offered a reconstruction of how Heisenberg might have arrived at the principles of quantum information, had his inspiration come from Shannon’s Mathematical Theory of Communication. He recast his original insights into three broad principles: observations act on systems; local and global perspectives are in tension; and the order of measurements matters. Starting from these ingredients, one could in principle recover the structure of the qubit and the foundations of quantum computation. Taking the analogy one step further, he suggested that similar tensions between memorisation and generalisation – or robustness and adaptability – may one day give rise to a quantum theory of learning.

Helgoland 2025 illustrated just how much quantum mechanics has diversified since its early days. No longer just a framework for explaining atomic spectra, the photoelectric effect and black-body radiation, it is at once a formalism describing high-energy particle scattering, a handbook for controlling the most exotic states of matter, the foundation for information technologies now driving national investment plans, and a source of philosophical conundrums that, after decades at the margins, has once again taken centre stage in theoretical physics.

The post Quantum theory returns to Helgoland appeared first on CERN Courier.

Exceptional flare tests blazar emission models https://cerncourier.com/a/exceptional-flare-tests-blazar-emission-models/ Tue, 08 Jul 2025 19:51:49 +0000 https://cerncourier.com/?p=113571 A new analysis of BL Lacertae by NASA’s Imaging X-ray Polarimetry Explorer sheds light on the emission mechanisms of active galactic nuclei.

The post Exceptional flare tests blazar emission models appeared first on CERN Courier.

Active galactic nuclei (AGNs) are extremely energetic regions at the centres of galaxies, powered by accretion onto a supermassive black hole. Some AGNs launch plasma outflows moving near light speed. Blazars are a subclass of AGNs whose jets are pointed almost directly at Earth, making them appear exceptionally bright across the electromagnetic spectrum. A new analysis of an exceptional flare of BL Lacertae by NASA’s Imaging X-ray Polarimetry Explorer (IXPE) has now shed light on their emission mechanisms.

The spectral energy distribution of blazars generally has two broad peaks. The low-energy peak from radio to X-rays is well explained by synchrotron radiation from relativistic electrons spiralling in magnetic fields, but the origin of the higher-energy peak from X-rays to γ-rays is a longstanding point of contention, with two classes of models, dubbed hadronic and leptonic, vying to explain it.

Model signatures

In hadronic models, high-energy emission is produced by protons, either through synchrotron radiation or via photo-hadronic interactions that generate secondary particles. Hadronic models predict that X-ray polarisation should be as high as that in the optical and millimetre bands, even in complex jet structures.

Leptonic models are powered by inverse Compton scattering, wherein relativistic electrons “upscatter” low-energy photons, boosting them to higher energies with low polarisation. Leptonic models can be further subdivided by the source of the inverse-Compton-scattered photons. If initially generated by synchrotron radiation in the AGN (synchrotron self-Compton, SSC), modest polarisation (~50%) is expected due to the inherent polarisation of synchrotron photons, with further reductions if the emission comes from inhomogeneous or multiple emitting regions. If initially generated by external sources (external Compton, EC), isotropic photon fields from the surrounding structures are expected to average out their polarisation.

IXPE launched on 9 December 2021, seeking to resolve such questions. It is designed to have 100-fold better sensitivity to the polarisation of X-rays in astrophysical sources than the last major X-ray polarimeter, which was launched half a century ago (CERN Courier July/August 2022 p10). In November 2023, it participated in a coordinated multiwavelength campaign spanning the radio, millimetre, optical and X-ray bands that targeted the blazar BL Lacertae, whose X-ray emission arises mostly from the high-energy component, with its low-energy synchrotron component mainly at infrared energies. The campaign captured an exceptional flare, providing a rare opportunity to test competing emission models.

Optical telescopes recorded a peak optical polarisation of 47.5 ± 0.4%, the highest ever measured in a blazar. The short-mm (1.3 mm) polarisation also rose to about 10%, with both bands showing similar trends in polarisation angle. IXPE measured no significant polarisation in the 2 to 8 keV X-ray band, placing a 3σ upper limit of 7.4%.

The striking contrast between the high polarisation in optical and mm bands, and a strict upper limit in X-rays, effectively rules out all single-zone and multi-region hadronic models. Had these processes dominated, the X-ray polarisation would have been comparable to the optical. Instead, the observations strongly support a leptonic origin, specifically the SSC model with a stratified or multi-zone jet structure that naturally explains the low X-ray polarisation.
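The exclusion logic can be sketched in a few lines. The polarisation levels assigned to each model class below are illustrative stand-ins for the classes described above, not fitted values from the IXPE analysis:

```python
# Schematic version of the polarisation diagnostic: compare each model class's
# expected X-ray polarisation with the IXPE upper limit. The predicted levels
# are illustrative stand-ins, not numbers from the published analysis.
optical_pol = 47.5   # per cent, measured peak optical polarisation
xray_limit = 7.4     # per cent, IXPE 3-sigma upper limit (2-8 keV)

predictions = {
    "hadronic (proton-dominated)": optical_pol,  # X-ray pol comparable to optical
    "leptonic SSC, multi-zone": 3.0,             # depolarised by many emitting regions
}

for model, pred in predictions.items():
    verdict = "allowed" if pred < xray_limit else "ruled out"
    print(f"{model:28s} ~{pred:4.1f}%  -> {verdict}")
```

Polarisation alone does not separate every leptonic sub-case; as described above, the multi-zone SSC picture is the one favoured by the full multiwavelength dataset.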

A key feature of the flare was the rapid rise and fall of optical polarisation

A key feature of the flare was the rapid rise and fall of optical polarisation. Initially, it was low, of order 5%, and aligned with the jet direction, suggesting the dominance of poloidal or turbulent fields. A sharp increase to nearly 50%, while retaining alignment, indicates the sudden injection of a compact, toroidally dominated magnetic structure.

The authors of the analysis propose a “magnetic spring” model wherein a tightly wound toroidal field structure is injected into the jet, temporarily ordering the magnetic field and raising the optical polarisation. As the structure travels outward, it relaxes, likely through kink instabilities, causing the polarisation to decline over about two weeks. This resembles an elastic system, briefly stretched and then returning to equilibrium.

A magnetic spring would also explain the multiwavelength flaring. The injection boosted the total magnetic field strength, triggering an unprecedented mm-band flare powered by low-energy electrons with long cooling times. The modest rise in mm-wavelength polarisation (green points) suggests emission from a large, turbulent region. Meanwhile, optical flaring (black points) was suppressed due to the rapid synchrotron cooling of high-energy electrons, consistent with the observed softening of the optical spectrum. No significant γ-ray enhancement was observed, as these photons originate from the same rapidly cooling electron population.

Turning point

These findings mark a turning point in high-energy astrophysics. The data definitively favour leptonic emission mechanisms in BL Lacertae during this flare, ruling out efficient proton acceleration and thus any associated high-energy neutrino or cosmic-ray production. The ability of the jet to sustain nearly 50% polarisation across parsec scales implies a highly ordered, possibly helical magnetic field extending far from the supermassive black hole.

The results cement polarimetry as a definitive tool in identifying the origin of blazar emission. The dedicated Compton Spectrometer and Imager (COSI) γ-ray polarimeter is soon set to complement IXPE at even higher energies when launched by NASA in 2027. Coordinated campaigns will be crucial for probing jet composition and plasma processes in AGNs, helping us understand the most extreme environments in the universe.

Fermilab’s final word on muon g-2 https://cerncourier.com/a/fermilabs-final-word-on-muon-g-2/ Tue, 08 Jul 2025 19:40:43 +0000 https://cerncourier.com/?p=113549 In parallel, theorists have published an updated Standard Model prediction based purely on lattice QCD.

The post Fermilab’s final word on muon g-2 appeared first on CERN Courier.

Fermilab’s Muon g-2 collaboration has given its final word on the magnetic moment of the muon. The new measurement agrees closely with a significantly revised Standard Model (SM) prediction. Though the experimental measurement will likely now remain stable for several years, theorists expect to make rapid progress to reduce uncertainties and resolve tensions underlying the SM value. One of the most intriguing anomalies in particle physics is therefore severely undermined, but not yet definitively resolved.

The muon g-2 anomaly dates back to the late 1990s and early 2000s, when measurements at Brookhaven National Laboratory (BNL) uncovered a possible discrepancy with theoretical predictions of the so-called muon anomaly, aμ = (g-2)/2. aμ expresses the magnitude of quantum loop corrections to the leading-order prediction of the Dirac equation, which multiplies the classical gyromagnetic ratio of fundamental fermions by a “g-factor” of precisely two. Loop corrections of aμ ~ 0.1% quantify the extent to which virtual particles emitted by the muon further increase the strength of its interaction with magnetic fields. Were measurements to be shown to deviate from SM predictions, this would indicate the influence of virtual fields beyond the SM.

Move on up

In 2013, the BNL experiment’s magnetic storage ring was transported from Long Island, New York, to Fermilab in Batavia, Illinois. After years of upgrades and improvements, the new experiment began in 2017. It now reports a final precision of 127 parts per billion (ppb), bettering the experiment’s design precision of 140 ppb, and a factor of four more sensitive than the BNL result.

“First and foremost, an increase in the number of stored muons allowed us to reduce our statistical uncertainty to 98 ppb compared to 460 ppb for BNL,” explains co-spokesperson Peter Winter of Argonne National Laboratory, “but a lot of technical improvements to our calorimetry, tracking, detector calibration and magnetic-field mapping were also needed to improve on the systematic uncertainties from 280 ppb at BNL to 78 ppb at Fermilab.”

This formidable experimental precision throws down the gauntlet to the theory community

The final Fermilab measurement is (116592070.5 ± 11.4 (stat.) ± 9.1 (syst.) ± 2.1 (ext.)) × 10⁻¹¹, fully consistent with the previous BNL measurement. This formidable precision throws down the gauntlet to the Muon g-2 Theory Initiative (TI), which was founded to achieve an international consensus on the theoretical prediction.

The calculation is difficult, featuring contributions from all sectors of the SM (CERN Courier March/April 2025 p21). The TI published its first whitepaper in 2020, reporting aμ = (116591810 ± 43) × 10⁻¹¹, based exclusively on a data-driven analysis of cross-section measurements at electron–positron colliders (WP20). In May, the TI updated its prediction, publishing a value aμ = (116592033 ± 62) × 10⁻¹¹, statistically incompatible with the previous prediction at the level of three standard deviations, and with an increased uncertainty of 530 ppb (WP25). The new prediction is based exclusively on numerical SM calculations. This was made possible by rapid progress in the use of lattice QCD to control the dominant source of uncertainty, which arises due to the contribution of so-called hadronic vacuum polarisation (HVP). In HVP, the photon representing the magnetic field interacts with the muon during a brief moment when a virtual photon erupts into a difficult-to-model cloud of quarks and gluons.
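The agreement between measurement and prediction can be cross-checked with back-of-the-envelope arithmetic. The sketch below combines the quoted uncertainties in quadrature, a simplification that ignores any correlations:

```python
import math

# Values quoted above, in units of 1e-11.
exp_val = 116592070.5
exp_err = math.sqrt(11.4**2 + 9.1**2 + 2.1**2)  # stat, syst, ext in quadrature

wp25_val, wp25_err = 116592033.0, 62.0  # lattice-QCD-based prediction
wp20_val, wp20_err = 116591810.0, 43.0  # earlier data-driven prediction

def pull(exp, e_err, th, t_err):
    """Discrepancy in units of the combined standard deviation."""
    return (exp - th) / math.sqrt(e_err**2 + t_err**2)

print(f"Experiment vs WP25: {pull(exp_val, exp_err, wp25_val, wp25_err):.1f} sigma")
print(f"Experiment vs WP20: {pull(exp_val, exp_err, wp20_val, wp20_err):.1f} sigma")
```

Against WP25 the measurement agrees to well within one standard deviation (about 0.6σ), whereas the same measurement compared with WP20 would sit at a tension of roughly 5.7σ – illustrating how much the revised prediction changed the picture.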

Significant shift

“The switch from using the data-driven method for HVP in WP20 to lattice QCD in WP25 results in a significant shift in the SM prediction,” confirms Aida El-Khadra of the University of Illinois, chair of the TI, who believes that it is not unreasonable to expect significant error reductions in the next couple of years. “There still are puzzles to resolve, particularly around the experimental measurements that are used in the data-driven method for HVP, which prevent us, at this point in time, from obtaining a new prediction for HVP in the data-driven method. This means that we also don’t yet know if the data-driven HVP evaluation will agree or disagree with lattice–QCD calculations. However, given the ongoing dedicated efforts to resolve the puzzles, we are confident we will soon know what the data-driven method has to say about HVP. Regardless of the outcome of the comparison with lattice QCD, this will yield profound insights.”

We are making plans to improve experimental precision beyond the Fermilab experiment

On the experimental side, attention now turns to the Muon g-2/EDM experiment at J-PARC in Tokai, Japan. While the Fermilab experiment used the “magic gamma” method first employed at CERN in the 1970s to cancel the effect of electric fields on spin precession in a magnetic field (CERN Courier September/October 2024 p53), the J-PARC experiment seeks to control systematic uncertainties by exercising particularly tight control of its muon beam. In the Japanese experiment, antimatter muons will be captured by atomic electrons to form muonium, ionised using a laser, and reaccelerated for a traditional precession measurement with sensitivity to both the muon’s magnetic moment and its electric dipole moment (CERN Courier July/August 2024 p8).

“We are making plans to improve experimental precision beyond the Fermilab experiment, though their precision is quite tough to beat,” says spokesperson Tsutomu Mibe of KEK. “We also plan to search for the electric dipole moment of the muon with an unprecedented precision of roughly 10⁻²¹ e cm, improving the sensitivity of the last results from BNL by a factor of 70.”

With theoretical predictions from high-order loop processes expected to be of the order of 10⁻³⁸ e cm, any observation of an electric dipole moment would be a clear indication of new physics.

“Construction of the experimental facility is currently ongoing,” says Mibe. “We plan to start data taking in 2030.”

STAR hunts QCD critical point https://cerncourier.com/a/star-hunts-qcd-critical-point/ Tue, 08 Jul 2025 19:38:28 +0000 https://cerncourier.com/?p=113561 The STAR collaboration at BNL has narrowed the search for a long-sought-after “critical point” in the still largely conjectural phase diagram of QCD.

The post STAR hunts QCD critical point appeared first on CERN Courier.

Phases of QCD

Just as water takes the form of ice, liquid or vapour, QCD matter exhibits distinct phases. But while the phase diagram of water is well established, the QCD phase diagram remains largely conjectural. The STAR collaboration at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) recently completed a new beam-energy scan (BES-II) of gold–gold collisions. The results narrow the search for a long-sought-after “critical point” in the QCD phase diagram.

“BES-II precision measurements rule out the existence of a critical point in the regions of the QCD phase diagram accessed at LHC and top RHIC energies, while still allowing the possibility at lower collision energies,” says Bedangadas Mohanty of the National Institute of Science Education and Research in India, who co-led the analysis. “The results refine earlier BES-I indications, now with much reduced uncertainties.”

At low temperatures and densities, quarks and gluons are confined within hadrons. Heating QCD matter leads to the formation of a deconfined quark–gluon plasma (QGP), while increasing the density at low temperatures is expected to give rise to more exotic states such as colour superconductors. Above a certain threshold in baryon density, the transition from hadron gas to QGP is expected to be first-order – a sharp, discontinuous change akin to water boiling. As density decreases, this boundary gives way to a smooth crossover where the two phases blend. A hypothetical critical point marks the shift between these regimes, much like the endpoint of the liquid–gas coexistence line in the phase diagram of water (see “Phases of QCD” figure).

Heavy-ion collisions offer a way to observe this phase transition directly. At the Large Hadron Collider, the QGP created in heavy-ion collisions transitions smoothly to a hadronic gas as it cools, but the lower energies explored by RHIC probe the region of the phase diagram where the critical point may lie.

To search for possible signatures of a critical point, the STAR collaboration measured gold–gold collisions at centre-of-mass energies between 7.7 and 27 GeV per nucleon pair. The collaboration reports that their data deviate from frameworks that do not include a critical point, including the hadronic transport model, thermal models with canonical ensemble treatment, and hydrodynamic approaches with excluded-volume effects. Depending on the choice of observable and non-critical baseline model, the significance of the deviations ranges from two to five standard deviations, with the largest effects seen in head-on collisions when using peripheral collisions as a reference.

“None of the existing theoretical models fully reproduce the features observed in the data,” explains Mohanty. “To interpret these precision measurements, it is essential that dynamical model calculations that include critical-point physics be developed.” The STAR collaboration is now mapping lower energies and higher baryon densities using a fixed target (FXT) mode, wherein a 1 mm gold foil sits 2 cm below the beam axis.

“The FXT data are a valuable opportunity to explore QCD matter at high baryon density,” says Mohanty. “Data taking will conclude later this year when RHIC transitions to the Electron–Ion Collider. The Compressed Baryonic Matter experiment at FAIR in Germany will then pick up the study of the QCD critical point towards the end of the 2020s.”

Double plasma progress at DESY https://cerncourier.com/a/double-plasma-progress-at-desy/ Tue, 08 Jul 2025 19:33:57 +0000 https://cerncourier.com/?p=113556 New developments tackle two of the biggest challenges in plasma-wave acceleration: beam quality and bunch rate.

The post Double plasma progress at DESY appeared first on CERN Courier.

What if, instead of using tonnes of metal to accelerate electrons, they were to “surf” on a wave of charge displacements in a plasma? This question, posed in 1979 by Toshiki Tajima and John Dawson, planted the seed for plasma wakefield acceleration (PWA). Scientists at DESY now report some of the first signs that PWA is ready to compete with traditional accelerators at low energies. The results tackle two of the biggest challenges in PWA: beam quality and bunch rate.

“We have made great progress in the field of plasma acceleration,” says Andreas Maier, DESY’s lead scientist for plasma acceleration, “but this is an endeavour that has only just started, and we still have a bit of homework to do to get the system integrated with the injector complexes of a synchrotron, which is our final goal.”

Riding a wave

PWA has the potential to radically miniaturise particle accelerators. Plasma waves are generated when a laser pulse or particle beam ploughs through a millimetres-long hydrogen-filled capillary, displacing electrons and creating a wake of alternating positive and negative charge regions behind it. The process is akin to flotsam and jetsam being accelerated in the wake of a speedboat, and the plasma “wakefields” can be thousands of times stronger than the electric fields in conventional accelerators, allowing particles to gain hundreds of MeV in just a few millimetres. But beam quality and intensity are significant challenges in such narrow confines.
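The quoted field strengths follow from simple arithmetic. The round numbers below are illustrative (a 200 MeV gain over 5 mm), not measurements from a specific experiment:

```python
# Order-of-magnitude gradient comparison; the numbers are illustrative
# round figures, not measured values from the article.
energy_gain_eV = 200e6   # "hundreds of MeV"
length_m = 5e-3          # "a few millimetres"
plasma_gradient = energy_gain_eV / length_m   # volts per metre

conventional_gradient = 30e6  # ~30 MV/m, typical of normal-conducting RF cavities

print(f"plasma wakefield: {plasma_gradient / 1e9:.0f} GV/m")
print(f"ratio to RF:      {plasma_gradient / conventional_gradient:.0f}x")
```

Even with conservative round numbers, the gradient comes out at tens of GV/m, over a thousand times the conventional RF figure.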

In a first study, a team from the LUX experiment at DESY and the University of Hamburg demonstrated, for the first time, a two-stage correction system to dramatically reduce the energy spread of accelerated electron beams. The first stage stretches the longitudinal extent of the beam from a few femtoseconds to several picoseconds using a series of four zigzagging bending magnets called a magnetic chicane. Next, a radio-frequency cavity reduces the energy variation to below 0.1%, bringing the beam quality in line with conventional accelerators.

“We basically trade beam current for energy stability,” explains Paul Winkler, lead author of a recent publication on active energy compression. “But for the intended application of a synchrotron injector, we would need to stretch the electron bunches anyway. As a result, we achieved performance levels so far only associated with conventional accelerators.”
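A toy longitudinal phase-space model illustrates the two-stage scheme. It assumes a purely linear time–energy correlation (chirp), an idealised chicane that simply scales the bunch length, and an RF kick that cancels 99% of the correlation; all parameters are invented for illustration, not LUX machine values:

```python
import random, statistics

random.seed(1)

# Toy longitudinal phase space in arbitrary units; delta = relative
# energy offset dE/E. All parameters are illustrative only.
n = 10_000
t = [random.gauss(0.0, 1.0) for _ in range(n)]              # few-fs bunch (unit sigma)
delta = [0.01 * ti + random.gauss(0.0, 5e-4) for ti in t]   # 1% time-correlated chirp
                                                            # plus 0.05% uncorrelated spread
initial_spread = statistics.pstdev(delta)

# Stage 1: the magnetic chicane stretches the bunch ~1000x (fs -> ps),
# leaving the energy distribution untouched.
t = [1000.0 * ti for ti in t]

# Stage 2: an RF cavity run near zero-crossing removes the time-correlated
# chirp; assume it cancels 99% of the correlation (a real cavity is imperfect).
rf_slope = 0.99 * 0.01 / 1000.0
delta = [di - rf_slope * ti for ti, di in zip(t, delta)]
final_spread = statistics.pstdev(delta)

print(f"energy spread: {initial_spread:.2%} -> {final_spread:.3%}")
```

In this sketch the correlated 1% spread collapses to the small uncorrelated residue, below the 0.1% level quoted above.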

But producing high-quality beams is only half the battle. To make laser-driven PWA a practical proposition, bunches must be accelerated not just once a second, like at LUX, but hundreds or thousands of times per second. This has now been demonstrated by KALDERA, DESY’s new high-power laser system (see “Beam quality and bunch rate” image).

“Already, on the first try, we were able to accelerate 100 electron bunches per second,” says principal investigator Manuel Kirchen, who emphasises the complementarity of the two advances. The team now plans to scale up the energy and deploy “active stabilisation” to improve beam quality. “The next major goal is to demonstrate that we can continuously run the plasma accelerators with high stability,” he says.

With the exception of CERN’s AWAKE experiment (CERN Courier May/June 2024 p25), almost all plasma-wakefield accelerators are designed with medical or industrial applications in mind. Medical applications are particularly promising as they require lower beam energies and place less demanding constraints on beam quality. Advances such as those reported by LUX and KALDERA raise confidence in this new technology and could eventually open the door to cheaper and more portable X-ray equipment, allowing medical imaging and cancer therapy to take place in university labs and hospitals.

Plotting the discovery of Higgs pairs on Elba https://cerncourier.com/a/plotting-the-discovery-of-higgs-pairs-on-elba/ Tue, 08 Jul 2025 19:31:41 +0000 https://cerncourier.com/?p=113648 150 physicists convened on Elba from 11 to 17 May for the Higgs Pairs 2025 workshop.

The post Plotting the discovery of Higgs pairs on Elba appeared first on CERN Courier.

Precise measurements of the Higgs self-coupling and its effects on the Higgs potential will play a key role in testing the validity of the Standard Model (SM). 150 physicists discussed the required experimental and theoretical manoeuvres on the serene island of Elba from 11 to 17 May at the Higgs Pairs 2025 workshop.

The conference mixed updates on theoretical developments in Higgs-boson pair production, searches for new physics in the scalar sector, and the most recent results from Run 2 and Run 3 of the LHC. Among the highlights was the first Run 3 analysis released by ATLAS on the search for di-Higgs production in the bbγγ final state – a particularly sensitive channel for probing the Higgs self-coupling. This result builds on earlier Run 2 analyses and demonstrates significantly improved sensitivity, now comparable to the full Run 2 combination of all channels. These gains were driven by the use of new b-tagging algorithms, improved mass resolution through updated analysis techniques, and the availability of nearly twice the dataset.

Complementing this, CMS presented the first search for ttHH production – a rare process that would provide additional sensitivity to the Higgs self-coupling and Higgs–top interactions. Alongside this, ATLAS presented the first experimental searches for triple Higgs boson production (HHH), one of the rarest processes predicted by the SM. Work on more traditional final states such as bbττ and bbbb is ongoing at both experiments, and continues to benefit from improved reconstruction techniques and larger datasets.

Beyond current data, the workshop featured discussions of the latest combined projection study by ATLAS and CMS, prepared as part of the input to the upcoming European Strategy Update. It extrapolates results of the Run 2 analyses to expected conditions of the High-Luminosity LHC (HL-LHC), estimating future sensitivities to the Higgs self-coupling and di-Higgs cross-section in scenarios with vastly higher luminosity and upgraded detectors. Under these assumptions, the combined sensitivity of ATLAS and CMS to di-Higgs production is projected to reach a significance of 7.6σ, firmly establishing the process. 

These projections provide crucial input for analysis strategy planning and detector design for the next phase of operations at the HL-LHC. Beyond the HL-LHC, efforts are already underway to design experiments at future colliders that will enhance sensitivity to the production of Higgs pairs, and offer new insights into electroweak symmetry breaking.

New frontiers in science in the era of AI https://cerncourier.com/a/new-frontiers-in-science-in-the-era-of-ai/ Tue, 08 Jul 2025 19:25:45 +0000 https://cerncourier.com/?p=113671 New Frontiers in Science in the Era of AI arrives with a clear mission: to contextualise AI within the long arc of scientific thought and current research frontiers.

The post New frontiers in science in the era of AI appeared first on CERN Courier.

New Frontiers in Science in the Era of AI

At a time when artificial intelligence is more buzzword than substance in many corners of public discourse, New Frontiers in Science in the Era of AI arrives with a clear mission: to contextualise AI within the long arc of scientific thought and current research frontiers. This book is not another breathless ode to ChatGPT or deep learning, nor a dry compilation of technical papers. Instead, it’s a broad and ambitious survey, spanning particle physics, evolutionary biology, neuroscience and AI ethics, that seeks to make sense of how emerging technologies are reshaping not only the sciences but knowledge and society more broadly.

The book’s chapters, written by established researchers from diverse fields, aim to avoid jargon while attracting non-specialists, without compromising depth. The book offers an insight into how physics remains foundational across scientific domains, and considers the social, ethical and philosophical implications of AI-driven science.

The first section, “New Physics World”, will be the most familiar terrain for physicists. Ugo Moschella’s essay, “What Are Things Made of? The History of Particles from Thales to Higgs”, opens with a sweeping yet grounded narrative of how metaphysical questions have persisted alongside empirical discoveries. He draws a bold parallel between the ancient idea of mass emerging from a cosmic vortex and the Higgs mechanism, a poetic analogy that holds surprising resonance. Thales, who lived roughly from 624 to 545 BCE, proposed that water is the fundamental substance out of which all others are formed. Following his revelation, Pythagoras and Empedocles added three more items to complete the list of the elements: earth, air and fire. Aristotle added a fifth element: the “aether”. The physical foundation of the standard cosmological model of the ancient world is then rooted in the Aristotelian conceptions of movement and gravity, argues Moschella. His essay lays the groundwork for future chapters that explore entanglement, computation and the transition from thought experiments to quantum technology and AI.

A broad and ambitious survey spanning particle physics, evolutionary biology, neuroscience and AI ethics

The second and third sections venture into evolutionary genetics, epigenetics (the study of heritable changes in gene expression) and neuroscience – areas more peripheral to physics, but timely nonetheless. Contributions by Eva Jablonka, evolutionary theorist and geneticist from Tel Aviv University, and Telmo Pievani, a biologist from the University of Padua, explore the biological implications of gene editing, environmental inheritance and self-directed evolution, as well as the ever-blurring boundaries between what is considered “natural” versus “artificial”. The authors propose that the human ability to edit genes is itself an evolutionary agent – a novel and unsettling idea, as this would be an evolution driven by a will and not by chance. Neuroscientist Jason D Runyan reflects compellingly on free will in the age of AI, blending empirical work with philosophical questions. These chapters enrich the central inquiry of what it means to be a “knowing agent”: someone who acts on nature according to their own will, influenced by biological, cognitive and social factors. For physicists, the lesson may be less about adopting specific methods and more about recognising how their own field’s assumptions – about determinism, emergence or complexity – are echoed and challenged in the life sciences.

Perspectives on AI

The fourth section, “Artificial Intelligence Perspectives”, most directly addresses the book’s central theme. The quality, scientific depth and rigour are not equally distributed between these chapters, but they are stimulating nonetheless. Topics range from the role of open-source AI in student-led AI projects at CERN’s IdeaSquare to real-time astrophysical discovery. Michael Coughlin and colleagues’ chapter on accelerated AI in astrophysics stands out for its technical clarity and relevance – a solid entry point for physicists curious about AI beyond popular discourse. Absent is an in-depth treatment of current AI applications in high-energy physics, such as anomaly detection in LHC triggers or generative models for simulation. Given the book’s CERN affiliations, this omission is surprising and leaves out some of the most active intersections of AI and high-energy physics (HEP) research.

Even as AI expands our modelling capacity, the epistemic limits of human cognition may remain permanent

The final sections address cosmological mysteries and the epistemological limits of human cognition. David H Wolpert’s epilogue, “What Can We Know About That Which We Cannot Even Imagine?”, serves as a reminder that even as AI expands our modelling capacity, the epistemic limits of human cognition – including conceptual blind spots and unprovable truths – may remain permanent. This tension is not a contradiction but a sobering reflection on the intrinsic boundaries of scientific – and more widely human – knowledge.

This eclectic volume is best read as a reflective companion to one’s own work. For advanced students, postdocs and researchers open to thinking beyond disciplinary boundaries, the book is an enriching, if at times uneven, read.

To a professional scientist, the book may occasionally seem to romanticise interdisciplinary exchange, without fully engaging with the real methodological difficulties of translating complex concepts between specialised fields. Topics including the limitations of current large-language models, the reproducibility crisis in AI research, and the ethical risks of data-driven surveillance would have benefited from deeper treatment. Ethical questions in HEP may be less prominent in the public eye, but still exist. To mention a few, there are the environmental impact of large-scale facilities, the question of spending a substantial amount of public money on such mega-science projects, the potential dual-use concerns of the technologies developed, the governance of massive international collaborations and data transparency. These deserve more attention, and the book could have explored them more thoroughly.

A timely snapshot

Still, the book doesn’t pretend to be exhaustive. Its strength lies in curating diverse voices and offering a timely snapshot of science, as well as shedding light on ethical and philosophical questions associated with science that are less frequently discussed.

There is a vast knowledge gap in today’s society. Researchers often become so absorbed in their specific domains that they lose sight of their work’s broader philosophical and societal context and the need to explain it to the public. Meanwhile, public misunderstanding of science, and the resulting confusion between fact, theory and opinion, is growing. This gulf provides fertile ground for political manipulation and ideological extremism. New Frontiers in Science in the Era of AI has the immense merit of trying to bridge that gap. The editors and contributors deserve credit for producing a work of both scientific and societal relevance.

The post New frontiers in science in the era of AI appeared first on CERN Courier.

]]>
Review New Frontiers in Science in the Era of AI arrives with a clear mission: to contextualise AI within the long arc of scientific thought and current research frontiers. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_Rev_Frontiers.jpg
Quantum culture https://cerncourier.com/a/quantum-culture/ Tue, 08 Jul 2025 19:24:12 +0000 https://cerncourier.com/?p=113653 Kanta Dihal explores why quantum mechanics captures the imagination of writers – and how ‘quantum culture’ affects the public understanding of science.

The post Quantum culture appeared first on CERN Courier.

]]>
Kanta Dihal

How has quantum mechanics influenced culture in the last 100 years?

Quantum physics offers an opportunity to make the impossible seem plausible. For instance, if your superhero dies dramatically but the actor is still on the payroll, you have a few options available. You could pretend the hero miraculously survived the calamity of the previous instalment. You could also pretend the events of the previous instalment never happened. And then there is Star Wars: “Somehow, Palpatine returned.”

These days, however, quantum physics tends to come to the rescue. Because quantum physics offers the wonderful option to maintain that all previous events really happened, and yet your hero is still alive… in a parallel universe. Much is down to the remarkable cultural impact of the many-worlds interpretation of quantum physics, which has been steadily growing in fame (or notoriety) since Hugh Everett introduced it in 1957.

Is quantum physics unique in helping fiction authors make the impossible seem possible?

Not really! Before the “quantum” handwave, there was “nuclear”: think of Dr Manhattan from Watchmen, or Godzilla, as expressions of the utopian and dystopian expectations of that newly discovered branch of science. Before nuclear, there was electricity, with Frankenstein’s monster as perhaps its most important product. We can go all the way back to the invention of hydraulics in the ancient world, which led to an explosion of tales of liquid-operated automata – early forms of artificial intelligence – such as the bronze soldier Talos in ancient Greece. We have always used our latest discoveries to dream of a future in which our ancient tales of wonder could come true.

Is the many-worlds interpretation the most common theory used in science fiction inspired by quantum mechanics?

Many-worlds has become Marvel’s favourite trope. It allows them to expand on an increasingly entangled web of storylines that borrow from a range of remakes and reboots, as well as introducing gender and racial diversity into old stories. Marvel may have mainstreamed this interpretation, but the viewers of the average blockbuster may not realise exactly how niche it is, and how many alternatives there are. With many interpretations vying for acceptance, every once in a while a brave social scientist ventures to survey quantum physicists’ preferences. These studies tend to confirm the dominance of the Copenhagen interpretation, with its collapse of the wavefunction rather than the branching universes characteristic of the Everett interpretation. In a 2016 study, for instance, only 6% of quantum physicists claimed that Everett was their favourite interpretation. In 2018 I looked through a stack of popular quantum-physics books published between 1980 and 2017, and found that more than half of these books endorse the many-worlds interpretation. A non-physicist might be forgiven for thinking that quantum physicists are split between two equal-sized enemy camps of Copenhagenists and Everettians.

What makes the many-worlds interpretation so compelling?

Answering this brings us to a fundamental question that fiction has enjoyed exploring since humans first told each other stories: what if? “What if the Nazis won the Second World War?” is pretty much an entire genre by itself these days. Before that, there were alternate histories of the American Civil War and many other key historical events. This means that the many-worlds interpretation fits smoothly into an existing narrative genre. It suggests that these alternate histories may be real, that they are potentially accessible to us and simply happening in a different dimension. Even the specific idea of branching alternative universes existed in fiction before Hugh Everett applied it to quantum mechanics. One famous example is the 1941 short story The Garden of Forking Paths by the Argentinian writer Jorge Luis Borges, in which a writer tries to create a novel in which everything that could happen, happens. His story anticipated the many-worlds interpretation so closely that Bryce DeWitt used an extract from it as the epigraph to his 1973 edited collection The Many-Worlds Interpretation of Quantum Mechanics. But the most uncanny example is, perhaps, Andre Norton’s science-fiction novel The Crossroads of Time, from 1956 – published when Everett was writing his thesis. In her novel, a group of historians invents a “possibility worlds” theory of history. The protagonist, Blake Walker, discovers that this theory is true when he meets a group of men from a parallel universe who are on the hunt for a universe-travelling criminal. Travelling with them, Blake ends up in a world where Hitler won the Battle of Britain. Of course, in fiction, only worlds in which a significant change has taken place are of any real interest to the reader or viewer. (Blake also visits a world inhabited by metal dinosaurs.) The truly uncountable number of slightly different universes implied by Everett’s theory is extremely difficult to get our heads around. Nonetheless, our storytelling mindsets have long primed us for a fascination with the many-worlds interpretation.

Have writers put other interpretations to good use?

For someone who really wants to put their physics degree to use in their spare time, I’d recommend the works of Greg Egan: although his novel Quarantine uses the controversial conscious collapse interpretation, he always ensures that the maths checks out. Egan’s attitude towards the scientific content of his novels is best summed up by a quote on his blog: “A few reviewers complained that they had trouble keeping straight [the science of his novel Incandescence]. This leaves me wondering if they’ve really never encountered a book that benefits from being read with a pad of paper and a pen beside it, or whether they’re just so hung up on the idea that only non-fiction should be accompanied by note-taking and diagram-scribbling that it never even occurred to them to do this.”

What other quantum concepts are widely used and abused?

We have Albert Einstein to thank for the extremely evocative description of quantum entanglement as “spooky action at a distance”. As with most scientific phenomena, a catchy nickname such as this one is extremely effective for getting a concept to stick in the popular imagination. While Einstein himself did not initially believe quantum entanglement could be a real phenomenon, as it would violate local causality, we now have both evidence and applications of entanglement in the real world, most notably in quantum cryptography. But in science fiction, the most common application of quantum entanglement is in faster-than-light communication. In her 1966 novel Rocannon’s World, Ursula K Le Guin describes a device called the “ansible”, which interstellar travellers use to instantaneously communicate with each other across vast distances. Her term was so influential that it now regularly appears in science fiction as a widely accepted name for a faster-than-light communications device, the same way we have adopted the word “robot” from the 1920 play R.U.R. by Karel Čapek.

Fiction may get the science wrong, but that is often because the story it tries to tell existed long before the science

How were cultural interpretations of entanglement influenced by the development of quantum theory?

It wasn’t until the 1970s that no-signalling theorems conclusively proved that entanglement correlations, while instantaneous, cannot be controlled or used to send messages. Explaining why is a lot more complex than communicating the notion that observing a particle here has an effect on a particle there. Once again, quantum physics seemingly provides just enough scientific justification to resolve an issue that has plagued science fiction ever since the speed of light was discovered: how can we travel through space, exploring galaxies, settling on distant planets, if we cannot communicate with each other? This same line of thought has sparked another entanglement-related invention in fiction: what if we can send not just messages but also people, or even entire spaceships, across vast distances faster than light using entanglement? Conveniently, quantum physicists had come up with another extremely evocative term that fit this idea perfectly: quantum teleportation. Real quantum teleportation only transfers information. But the idea of teleportation is so deeply embedded in our storytelling past that we can’t help extrapolating it. From stories of gods that could appear anywhere at will to tales of portals that lead to strange new worlds, we have always felt limited by the speeds of travel we have managed to achieve – and once again, the speed of light seems to be a hard limit that quantum teleportation might be able to get us around. In his 1999 novel Timeline, Michael Crichton sends a group of researchers back in time using quantum teleportation, and the videogame Half-Life 2 contains teleportation devices that similarly seem to work through quantum entanglement.

What quantum concepts have unexplored cultural potential?

Clearly, interpretations other than many worlds have a PR problem, so is anyone willing to write a chart topper based on the relational interpretation or QBism? More generally, I think that any question we do not yet have an answer to, or any theory that remains untestable, is a potential source for an excellent story. Richard Feynman famously said, “I think I can safely say that nobody understands quantum mechanics.” Ironically, it is precisely because of this that quantum physics has become such a widespread building block of science fiction: it is just hard enough to understand, just unresolved and unexplained enough to keep our hopes up that one day we might discover that interstellar communication or inter-universe travel might be possible. Few people would choose the realities of theorising over these ancient dreams. That said, the theorising may never have happened without the dreams. How many of your colleagues are intimately acquainted with the very science fiction they criticise for having unrealistic physics? We are creatures of habit and convenience held together by stories, physicists no less than everyone else. This is why we come up with catchy names for theories, and stories about dead-and-alive cats. Fiction may often get the science wrong, but that is often because the story it tries to tell existed long before the science.

The post Quantum culture appeared first on CERN Courier.

]]>
Opinion Kanta Dihal explores why quantum mechanics captures the imagination of writers – and how ‘quantum culture’ affects the public understanding of science. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_INT_dihal_feature.jpg
A scientist in sales https://cerncourier.com/a/a-scientist-in-sales/ Tue, 08 Jul 2025 19:22:15 +0000 https://cerncourier.com/?p=113683 Massimiliano Pindo discusses opportunities for high-energy physicists in marketing and sales.

The post A scientist in sales appeared first on CERN Courier.

]]>
Massimiliano Pindo

The boundary between industry and academia can feel like a chasm. Opportunity abounds for those willing to bridge the gap.

Massimiliano Pindo began his career working on silicon pixel detectors at the DELPHI experiment at the Large Electron–Positron Collider. While at CERN, Pindo developed analytical and technical skills that would later become crucial in his career. But despite his passion for research, doubts clouded his hopes for the future.

“I wanted to stay in academia,” he recalls. “But at that time, it was getting really difficult to get a permanent job.” Pindo moved from his childhood home in Milan to Geneva, before eventually moving back in with his parents while applying for his next research grant. “The golden days of academia where people got a fixed position immediately after a postdoc or PhD were over.”

The path forward seemed increasingly unstable, defined by short-term grants, constant travel and an inability to plan long-term. There was always a constant stream of new grant applications, but permanent contracts were few and far between. With competition increasing, job stability seemed further and further out of reach. “You could make a decent living,” Pindo says, “but the real problem was you could not plan your life.”

Translatable skills

Faced with the unpredictability of academic work, Pindo transitioned into industry – a leap that eventually led him to his current role as marketing and sales director at Renishaw, France, a global engineering and scientific technology company. Pindo was confident that his technical expertise would provide a strong foundation for a job beyond academia, and indeed he found that “hard” skills such as analytical thinking, problem-solving and a deep understanding of technology, which he had honed at CERN alongside soft skills such as teamwork, languages and communication, translated well to his work in industry.

“When you’re a physicist, especially a particle physicist, you’re used to breaking down complex problems, selecting what is really meaningful amongst all the noise, and addressing these issues directly,” Pindo says. His experience in academia gave him the confidence that industry challenges would pale in comparison. “I was telling myself that in the academic world, you are dealing with things that, at least on paper, are more complex and difficult than what you find in industry.”

Initially, these technical skills helped Pindo become a device engineer for a hardware company, before making the switch to sales. The gradual transition from academia to something more hands-on allowed him to really understand the company’s product on a technical level, which made him a more desirable candidate when transitioning into marketing.

“When you are in B2B [business-to-business] mode and selling technical products, it’s always good to have somebody who has technical experience in the industry,” explains Pindo. “You have to have a technical understanding of what you’re selling, to better understand the problems customers are trying to solve.”

However, this experience also allowed him to recognise gaps in his knowledge. As he began gaining more responsibility in his new, more business-focused role, Pindo decided to go back to university and get an MBA. During the programme, he was able to familiarise himself with the worlds of human resources, business strategy and management – skills that aren’t typically the focus in a physics lab.

Pindo’s journey through industry hasn’t been a one-way ticket out of academia. Today, he still maintains a foothold in the academic world, teaching strategy as an affiliated professor at the Sorbonne. “In the end you never leave the places you love,” he says. “I got out through the door – now I’m getting back in through the window!”

Transitioning between industry and academia was not entirely seamless. Misconceptions loomed on both sides, and it took Pindo a while to find a balance between the two.

“There is a stereotype that scientists are people who can’t adapt to industrial environments – that they are too abstract, too theoretical,” Pindo explains. “People think scientists are always in the clouds, disconnected from reality. But that’s not true. The science we make is not the science of cartoons. Scientists can be people who plan and execute practical solutions.”

The misunderstanding, he says, goes both ways. “When I talk to alumni still in academia, many think that industry is a nightmare – boring, routine, uninteresting. But that’s also false,” Pindo says. “There’s this wall of suspicion. Academics look at industry and think, ‘What do they want? What’s the real goal? Are they just trying to make more money?’ There is no trust.”

Tight labour markets

For Pindo, this divide is frustrating and entirely unnecessary. Now with years of experience navigating both worlds, he envisions a more fluid connection between academia and industry – one that leverages the strengths of both. “Industry is currently facing tight labour markets for highly skilled talent, and academia doesn’t have access to the money and practical opportunities that industry can provide,” says Pindo. “Both sides need to work together.”

To bridge this gap, Pindo advocates a more open dialogue and a revolving door between the two fields – one that allows both academics and industry professionals to move fluidly back and forth, carrying their expertise across boundaries. Both sides have much to gain from shared knowledge and collaboration. One way to achieve this, he suggests, is through active participation in alumni networks and university events, which can nurture lasting relationships and mutual understanding. If more professionals embraced this mindset, it could help alleviate the very instability that once pushed him out of academia, creating a landscape where the boundaries between science and industry blur to the benefit of both.

“Everything depends on active listening. You always have to learn from the person in front of you, so give them the chance to speak. We have a better world to build, and that comes only from open dialogue and communication.”

The post A scientist in sales appeared first on CERN Courier.

]]>
Careers Massimiliano Pindo discusses opportunities for high-energy physicists in marketing and sales. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_CAR_Pindo_feature.jpg
Hadronic decays confirm long-lived Ωc0 baryon https://cerncourier.com/a/hadronic-decays-confirm-long-lived-%cf%89c0-baryon/ Tue, 08 Jul 2025 19:19:39 +0000 https://cerncourier.com/?p=113601 A new LHCb analysis of hadronic decays confirms that the Ωc0 baryon lives longer than once thought.

The post Hadronic decays confirm long-lived Ω<sub>c</sub><sup>0</sup> baryon appeared first on CERN Courier.

]]>
LHCb figure 1

In 2018 and 2019, the LHCb collaboration published surprising measurements of the Ξc0 and Ωc0 baryon lifetimes, which were inconsistent with previous results and overturned the established hierarchy between the two. A new analysis of their hadronic decays now confirms this observation, promising insights into the dynamics of baryons.

The Λc+, Ξc+, Ξc0 and Ωc0 baryons – each composed of one charm and two lighter up, down or strange quarks – are the only ground-state singly charmed baryons that decay predominantly via the weak interaction. The main contribution to this process comes from the charm quark transitioning into a strange quark, with the other constituents acting as passive spectators. Consequently, at leading order, their lifetimes should be the same. Differences arise from higher-order effects, such as W-boson exchange between the charm and spectator quarks and quantum interference between identical particles, known as “Pauli interference”. Charm-hadron lifetimes are more sensitive to these effects than those of beauty hadrons, because the charm quark is lighter than the bottom quark, making charm baryons a promising testing ground for studying them.
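
Schematically, these lifetime differences are organised as a heavy-quark expansion in inverse powers of the charm-quark mass (a standard textbook form, not a formula from the LHCb analysis):

```latex
\Gamma(\mathcal{B}_c) \,=\, \Gamma_{\mathrm{decay}}
  \,+\, \frac{c_5\,\langle O_5\rangle}{m_c^{2}}
  \,+\, \frac{c_6\,\langle O_6\rangle}{m_c^{3}} \,+\, \cdots
```

The spectator-dependent effects (W-boson exchange and Pauli interference) first enter through the 1/mc3 term, whose matrix elements differ between the four baryons and so break the leading-order degeneracy; the relatively small charm mass makes these corrections far larger than in beauty decays.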

Measurements of the Ξc0 and Ωc0 lifetimes prior to the start of the LHCb experiment resulted in the PDG averages shown in figure 1. The first LHCb analysis, using charm baryons produced in semi-leptonic decays of beauty baryons, was in tension with the established values, giving an Ωc0 lifetime four times larger than the previous average. The inconsistencies were later confirmed by another LHCb measurement, using an independent data set with charm baryons produced directly (prompt) in pp collisions (CERN Courier July/August 2021 p17). These results changed the ordering of the four single-charm baryons when arranged according to their lifetimes, triggering a scientific discussion on how to treat higher-order effects in decay rate calculations.

Using the full Run 1 and 2 datasets, LHCb has now measured the Ξc0 and Ωc0 lifetimes with a third independent data sample, based on fully reconstructed Ξb− → Ξc0(→ pK−K−π+)π− and Ωb− → Ωc0(→ pK−K−π+)π− decays. The selection of these hadronic decay chains exploits the long lifetime of the beauty baryons, such that the selection efficiency is almost independent of the charm baryon decay time. To cancel out the small remaining acceptance effects, the measurement is normalised to the kinematically and topologically similar B− → D0(→ K+K−π+π−)π− channel, minimising the uncertainties with only a small additional correction from simulation.

The signal decays are separated from the remaining background by fits to the Ξc0π− and Ωc0π− invariant mass spectra, providing 8260 ± 100 Ξc0 and 355 ± 26 Ωc0 candidates. The decay time distributions are obtained with two independent methods: by determining the yield in each of a specific set of decay time intervals, and by employing a statistical technique that uses the covariance matrix from the fit to the mass spectra. The two methods give consistent results, confirming LHCb’s earlier measurements. Combining the three measurements from LHCb, while accounting for their correlated uncertainties, gives τ(Ξc0) = 150.7 ± 1.6 fs and τ(Ωc0) = 274.8 ± 10.5 fs. These new results will serve as experimental guidance on how to treat higher-order effects in weak baryon decays, particularly regarding the approach-dependent sign and magnitude of Pauli interference terms.
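
The combination step can be illustrated with a toy version of the standard recipe for averaging correlated measurements, the best linear unbiased estimator (BLUE). The numbers below are invented for illustration; this is a sketch, not LHCb analysis code:

```python
# Toy BLUE combination of two correlated measurements (illustrative only).

def blue2(x1, s1, x2, s2, rho):
    """Combine x1 +/- s1 and x2 +/- s2 with correlation coefficient rho.

    The estimate w1*x1 + w2*x2 (with w1 + w2 = 1) is chosen to minimise
    the variance, which fixes the weights in closed form for two inputs.
    """
    denom = s1 ** 2 + s2 ** 2 - 2.0 * rho * s1 * s2
    w1 = (s2 ** 2 - rho * s1 * s2) / denom
    w2 = 1.0 - w1
    xhat = w1 * x1 + w2 * x2
    var = w1 ** 2 * s1 ** 2 + w2 ** 2 * s2 ** 2 + 2.0 * w1 * w2 * rho * s1 * s2
    return xhat, var ** 0.5

# Uncorrelated, equal-precision inputs reduce to the plain average:
print(blue2(150.0, 2.0, 152.0, 2.0, 0.0))  # (151.0, ~1.41)
```

With non-zero correlation the weights shift away from the naive inverse-variance average, which is why the correlated uncertainties between the three LHCb data samples must be tracked explicitly in the combination.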

The post Hadronic decays confirm long-lived Ω<sub>c</sub><sup>0</sup> baryon appeared first on CERN Courier.

]]>
News A new LHCb analysis of hadronic decays confirms that the Ωc0 baryon lives longer than once thought. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_EF_LHCb_feature.jpg
Decoding the Higgs mechanism with vector bosons https://cerncourier.com/a/decoding-the-higgs-mechanism-with-vector-bosons/ Tue, 08 Jul 2025 19:18:25 +0000 https://cerncourier.com/?p=113595 The CMS collaboration jointly analysed all vector boson scattering channels.

The post Decoding the Higgs mechanism with vector bosons appeared first on CERN Courier.

]]>
CMS figure 1

The discovery of the Higgs boson at the LHC in 2012 provided strong experimental support for the Brout–Englert–Higgs mechanism of spontaneous electroweak symmetry breaking (EWSB) as predicted by the Standard Model. The EWSB explains how the W and Z bosons, the mediators of the weak interaction, acquire mass: their longitudinal polarisation states emerge from the Goldstone modes of the Higgs field, linking the mass generation of vector bosons directly to the dynamics of the process.
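
In the Standard Model picture described here, the gauge-boson masses follow directly from the Higgs vacuum expectation value v ≈ 246 GeV and the electroweak couplings g and g′ (standard textbook relations, not results from the CMS analysis):

```latex
m_W \,=\, \tfrac{1}{2}\,g\,v, \qquad
m_Z \,=\, \tfrac{1}{2}\sqrt{g^{2}+g'^{2}}\;v, \qquad
\frac{m_W}{m_Z} \,=\, \cos\theta_W
```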

Yet, its ultimate origins remain unknown and the Standard Model may only offer an effective low-energy description of a more fundamental theory. Exploring this possibility requires precise tests of how EWSB operates, and vector boson scattering (VBS) provides a particularly sensitive probe. In VBS, two electroweak gauge bosons scatter off one another. The cross section remains finite at high energies only because there is an exact cancellation between the pure gauge-boson interactions and the Higgs-boson mediated contributions, an effect analogous to the role of the Z boson propagator in WW production at electron–positron colliders. Deviations from the expected behaviour could signal new dynamics, such as anomalous couplings, strong interactions in the Higgs sector or new particles at higher energy scales.

This result lays the groundwork for future searches for new physics hidden within the electroweak sector

VBS interactions are among the rarest observed so far at the LHC, with cross sections as low as one femtobarn. To disentangle them from the background, researchers rely on the distinctive experimental signature of two high-energy jets in the forward detector regions produced by the initial quarks that radiate the bosons, with minimal hadronic activity between them. Using the full data set from Run 2 of the LHC at a centre-of-mass energy of 13 TeV, the CMS collaboration carried out a comprehensive set of VBS measurements across several production modes: WW (with both same and opposite charges), WZ and ZZ, studied in five final states where both bosons decay leptonically and in two semi-leptonic configurations where one boson decays into leptons and the other into quarks. To enhance sensitivity further, the data from all the measurements have now been combined in a single joint fit, with a complete treatment of uncertainty correlations and a careful handling of events selected by more than one analysis. 

All modes, one analysis

To account for possible deviations from the expected predictions, each process is characterised by a signal strength parameter (μ), defined as the ratio of the measured production rate to the cross section predicted by the Standard Model. A value of μ near unity indicates consistency with the Standard Model, while significant deviations may suggest new physics. The results, summarised in figure 1, display good agreement with the Standard Model predictions: all measured signal strengths are consistent with unity within their respective uncertainties. A mild excess with respect to the leading-order theoretical predictions is observed across several channels, highlighting the need for more accurate modelling, in particular for the measurements that have reached a level of precision where systematic effects dominate. By presenting the first evidence for all charged VBS production modes from a single combined statistical analysis, this CMS result lays the groundwork for future searches for new physics hidden within the electroweak sector.
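
For a single counting channel, the idea behind μ can be sketched with a toy calculation; the numbers are invented for illustration, not CMS data. For n observed events with expected signal s and background b, maximising the Poisson likelihood for n ~ Poisson(μs + b) gives μs + b = n, hence a simple closed form:

```python
import math

def mu_hat(n_obs, s_exp, b_exp):
    """Maximum-likelihood estimate of the signal strength mu
    for a single counting channel: n ~ Poisson(mu*s + b)."""
    return (n_obs - b_exp) / s_exp

def mu_stat_error(n_obs, s_exp):
    """Gaussian-limit statistical uncertainty, sqrt(n)/s."""
    return math.sqrt(n_obs) / s_exp

# Toy channel: 100 expected background, 25 expected SM signal, 130 observed.
print(mu_hat(130, 25.0, 100.0), mu_stat_error(130, 25.0))  # 1.2 and ~0.46
```

A toy value such as 1.2 ± 0.46 is consistent with unity, the kind of mild excess described above; the real CMS measurement instead performs a simultaneous fit across all channels with full systematic correlations.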

The post Decoding the Higgs mechanism with vector bosons appeared first on CERN Courier.

]]>
News The CMS collaboration jointly analysed all vector boson scattering channels. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_EF_CMS_feature.jpg
Slovenia, Ireland and Chile tighten ties with CERN https://cerncourier.com/a/slovenia-ireland-and-chile-tighten-ties-with-cern/ Tue, 08 Jul 2025 19:16:38 +0000 https://cerncourier.com/?p=113567 Slovenia becomes CERN’s 25th Member State, and Ireland and Chile have signed agreements to become Associate Member States.

The post Slovenia, Ireland and Chile tighten ties with CERN appeared first on CERN Courier.

]]>
Slovenia became CERN’s 25th Member State on 21 June, formalising a relationship of over 30 years. Full membership confers voting rights in the CERN Council and opportunities for Slovenian enterprises and citizens.

“Slovenia’s full membership in CERN is an exceptional recognition of our science and researchers,” said Igor Papič, Slovenia’s Minister of Higher Education, Science and Innovation. “Furthermore, it reaffirms and strengthens Slovenia’s reputation as a nation building its future on knowledge and science. Indeed, apart from its beautiful natural landscapes, knowledge is the only true natural wealth of our country. For this reason, we have allocated record financial resources to science, research and innovation. Moreover, we have enshrined the obligation to increase these funds annually in the Scientific Research and Innovation Activities Act.”

“On behalf of the CERN Council, I warmly welcome Slovenia as the newest Member State of CERN,” said Costas Fountas, president of the CERN Council. “Slovenia has a longstanding relationship with CERN, with continuous involvement of the Slovenian science community over many decades in the ATLAS experiment in particular.”

On 8 and 16 May, respectively, Ireland and Chile signed agreements to become Associate Member States of CERN, pending the completion of national ratification processes. They join Türkiye, Pakistan, Cyprus, Ukraine, India, Lithuania, Croatia, Latvia and Brazil as Associate Members – a status introduced by the CERN Council in 2010. In this period, the Organization has also concluded international cooperation agreements with Qatar, Sri Lanka, Nepal, Kazakhstan, the Philippines, Thailand, Paraguay, Bosnia and Herzegovina, Honduras, Bahrain and Uruguay.

The post Slovenia, Ireland and Chile tighten ties with CERN appeared first on CERN Courier.

]]>
News Slovenia becomes CERN’s 25th Member State, and Ireland and Chile have signed agreements to become Associate Member States. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_NA_slovenia.jpg
Advances in very-high-energy astrophysics https://cerncourier.com/a/advances-in-very-high-energy-astrophysics/ Tue, 08 Jul 2025 19:14:12 +0000 https://cerncourier.com/?p=113677 Advances in Very High Energy Astrophysics summarises the progress made by the third generation of imaging atmospheric Cherenkov telescopes.

The post Advances in very-high-energy astrophysics appeared first on CERN Courier.

]]>
Advances in Very High Energy Astrophysics: The Science Program of the Third Generation IACTs for Exploring Cosmic Gamma Rays

Imaging atmospheric Cherenkov telescopes (IACTs) are designed to detect very-high-energy gamma rays, enabling the study of a range of both galactic and extragalactic gamma-ray sources. By capturing Cherenkov light from gamma-ray-induced air showers, IACTs help trace the origins of cosmic rays and probe fundamental physics, including questions surrounding dark matter and Lorentz invariance. Since the first gamma-ray source detection by the Whipple telescope in 1989, the field has rapidly advanced through instruments like HESS, MAGIC and VERITAS. Building on these successes, the Cherenkov Telescope Array Observatory (CTAO) represents the next generation of IACTs, with greatly improved sensitivity and energy coverage. The northern CTAO site on La Palma is already collecting data, and major infrastructure development is now underway at the southern site in Chile, where telescope construction is set to begin soon.

Considering the looming start to CTAO telescope construction, Advances in Very High Energy Astrophysics, edited by Reshmi Mukherjee of Barnard College and Roberta Zanin, from the University of Barcelona, is very timely. World-leading experts tackle the almost impossible task of summarising the progress made by the third-generation IACTs: HESS, MAGIC and VERITAS.

The range of topics covered is vast, spanning the last 20 years of progress in IACT instrumentation, data-analysis techniques, all aspects of high-energy astrophysics, cosmic-ray astrophysics and gamma-ray cosmology. The authors are necessarily selective, so the depth in each area is limited, but I believe that the essential concepts were properly introduced and the most important highlights captured. The primary focus of the book lies in discussions surrounding gamma-ray astronomy and high-energy physics, cosmic rays and ongoing research into dark matter.

It appears, however, that the individual chapters were all written independently of each other by different authors, leading to some duplications. Source classes and high-energy radiation mechanisms are introduced multiple times, sometimes with different terminology and notation in the different chapters, which could lead to confusion for novices in the field. But though internal coordination could have been improved, a positive aspect of this independence is that each chapter is self-contained and can be read on its own. I recommend the book to emerging researchers looking for a broad overview of this rapidly evolving field.

The post Advances in very-high-energy astrophysics appeared first on CERN Courier.

]]>
Review Advances in Very High Energy Astrophysics summarises the progress made by the third generation of imaging atmospheric Cherenkov telescopes. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_Rev_Advances_feature.jpg
Hadrons in Porto Alegre https://cerncourier.com/a/hadrons-in-porto-alegre/ Tue, 08 Jul 2025 19:11:51 +0000 https://cerncourier.com/?p=113636 The 16th International Workshop on Hadron Physics welcomed 135 physicists to the Federal University of Rio Grande do Sul in Porto Alegre, Brazil.

The post Hadrons in Porto Alegre appeared first on CERN Courier.

]]>
The 16th International Workshop on Hadron Physics (Hadrons 2025) welcomed 135 physicists to the Federal University of Rio Grande do Sul (UFRGS) in Porto Alegre, Brazil. Delayed by four months due to a tragic flood that devastated the city, the triennial conference took place from 10 to 14 March, maintaining, despite adversity, its long tradition as a forum for collaboration among Brazilian and international researchers at different stages of their careers.

The workshop’s scientific programme included field theoretical approaches to QCD, the behaviour of hadronic and quark matter in astrophysical contexts, hadronic structure and decays, lattice QCD calculations, recent experimental developments in relativistic heavy-ion collisions, and the interplay of strong and electroweak forces within the Standard Model.

Fernanda Steffens (University of Bonn) explained how deep-inelastic-scattering experiments and theoretical developments are revealing the internal structure of the proton. Kenji Fukushima (University of Tokyo) addressed the theoretical framework and phase structure of strongly interacting matter, with particular emphasis on the QCD phase diagram and its relevance to heavy-ion collisions and neutron stars. Chun Shen (Wayne State University) presented a comprehensive overview of the state-of-the-art techniques used to extract the transport properties of quark–gluon plasma from heavy-ion collision data, emphasising the role of Bayesian inference and machine learning in constraining theoretical models. Li-Sheng Geng (Beihang University) explored exotic hadrons through the lens of hadronic molecules, highlighting symmetry multiplets such as pentaquarks, the formation of multi-hadron states and the role of femtoscopy in studying unstable particle interactions.

This edition of Hadrons was dedicated to the memory of two individuals who left a profound mark on the Brazilian hadronic-physics community: Yogiro Hama, a distinguished senior researcher and educator whose decades-long contributions were foundational to the development of the field in Brazil, and Kau Marquez, an early-career physicist whose passion for science remained steadfast despite her courageous battle with spinal muscular atrophy. Both were remembered with deep admiration and respect, not only for their scientific dedication but also for their personal strength and impact on the community.

Its mission is to cultivate a vibrant and inclusive scientific environment

Since its creation in 1988, the Hadrons workshop has played a central role in developing Brazil’s scientific capacity in particle and nuclear physics. Its structure facilitates close interaction between master’s and doctoral students, and senior researchers, thus enhancing both technical training and academic exchange. This model continues to strengthen the foundations of research and collaboration throughout the Brazilian scientific community.

This is the main event for the Brazilian particle- and nuclear-physics communities, reflecting a commitment to advancing research in this highly interactive field. By circulating the venue across multiple regions of Brazil, each edition further renews its mission to cultivate a vibrant and inclusive scientific environment. This edition was closed by a public lecture on QCD by Tereza Mendes (University of São Paulo), who engaged local students with the foundational questions of strong-interaction physics.

The next edition of the Hadrons series will take place in Bahia in 2028.

The post Hadrons in Porto Alegre appeared first on CERN Courier.

]]>
Meeting report The 16th International Workshop on Hadron Physics welcomed 135 physicists to the Federal University of Rio Grande do Sul in Porto Alegre, Brazil. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_FN_Hadrons.jpg
Muons under the microscope in Cincinnati https://cerncourier.com/a/muons-under-the-microscope-in-cincinnati/ Tue, 08 Jul 2025 19:11:11 +0000 https://cerncourier.com/?p=113641 The 23rd edition of Flavor Physics and CP Violation (FPCP) attracted 100 physicists to Cincinnati, USA, from 2 to 6 June 2025.

The post Muons under the microscope in Cincinnati appeared first on CERN Courier.

]]>
The 23rd edition of Flavor Physics and CP Violation (FPCP) attracted 100 physicists to Cincinnati, USA, from 2 to 6 June 2025. The conference reviews recent experimental and theoretical developments in CP violation, rare decays, Cabibbo–Kobayashi–Maskawa matrix elements, heavy-quark decays, flavour phenomena in charged leptons and neutrinos, and the interplay between flavour physics and high-pT physics at the LHC.

The highlight of the conference was new results on the muon magnetic anomaly. The Muon g-2 experiment at Fermilab released its final measurement of aμ = (g-2)/2 on 3 June, while the conference was in progress, reaching a precision of 127 ppb on the published value. This uncertainty is more than four times smaller than that reported by the previous experiment. One week earlier, on 27 May, the Muon g-2 Theory Initiative published their second calculation of the same quantity, following that published in summer 2020. A major difference between the two calculations is that the earlier one used experimental data and the dispersion integral to evaluate the hadronic contribution to aμ, whereas the update uses a purely theoretical approach based on lattice QCD. The strong tension between the earlier calculation and experiment is no longer present: the new calculation is compatible with the experimental results. Thus, no new physics discovery can be claimed, though the reason for the difference between the two theoretical approaches must be understood (see “Fermilab’s final word on muon g-2“).
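The quoted precision gain can be sanity-checked with simple arithmetic. A minimal sketch, where the roughly 540 ppb relative precision assumed for the previous (Brookhaven E821) measurement is an assumption not stated in the text:

```python
# Back-of-the-envelope check of the precision improvement quoted above.
# 127 ppb is from the text; 540 ppb for the earlier Brookhaven E821
# measurement is an assumption used here for illustration only.
fermilab_ppb = 127  # final Fermilab Muon g-2 relative precision
bnl_ppb = 540       # assumed precision of the previous (BNL) experiment

improvement = bnl_ppb / fermilab_ppb
print(f"Uncertainty reduced by a factor of ~{improvement:.1f}")  # ~4.3
```

With these assumed numbers the factor comes out at about 4.3, consistent with the article's "more than four times smaller".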

The MEG II collaboration presented an important update to their limit on the branching fraction for the lepton-flavour-violating decay μ → eγ. Their new upper bound of 1.5 × 10–13 is determined from data collected in 2021 and 2022. The experiment recorded additional data from 2023 to 2024 and expects to continue data taking for two more years. These data will be sensitive to a branching fraction four to five times smaller than the current limit.

LHCb, Belle II, BESIII and NA62 all discussed recent results in quark flavour physics. Highlights include the first measurement of CP violation in a baryon decay by LHCb and improved limits on CP violation in D-meson decay to two pions by Belle II. With more data, the latter measurements could potentially show that the observed CP violation in charm is from a non-Standard-Model source. 

The Belle II collaboration now plans to collect a sample between 5 to 10 ab–1 by the early 2030s before undergoing an upgrade to collect a 30 to 50 ab–1 sample by the early 2040s. LHCb plan to run to the end of the High-Luminosity LHC and collect 300 fb–1. LHCb recorded almost 10 fb–1 of data last year – more than in all their previous running, and now with a fully software-based trigger with much higher efficiency than the previous hardware-based first-level trigger. Future results from Belle II and the LHCb upgrade are eagerly anticipated.

The 24th FPCP conference will be held from 18 to 22 May 2026 in Bad Honnef, Germany. 

The post Muons under the microscope in Cincinnati appeared first on CERN Courier.

]]>
Meeting report The 23rd edition of Flavor Physics and CP Violation (FPCP) attracted 100 physicists to Cincinnati, USA, from 2 to 6 June 2025. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_FN_FPCP.jpg
A new phase for the FCC https://cerncourier.com/a/a-new-phase-for-the-fcc/ Tue, 08 Jul 2025 19:09:25 +0000 https://cerncourier.com/?p=113623 FCC Week 2025 took place in Vienna from 19 to 23 May.

The post A new phase for the FCC appeared first on CERN Courier.

]]>
FCC Week 2025 gathered more than 600 participants from 34 countries together in Vienna from 19 to 23 May. The meeting was the first following the submission of the FCC’s feasibility study to the European Strategy for Particle Physics (CERN Courier May/June 2025 p9). Comprising three volumes – covering physics and detectors, accelerators and infrastructure, and civil engineering and sustainability – the study represents the most comprehensive blueprint to date for a next-generation collider facility. The next phase will focus on preparing a robust implementation strategy, via technical design, cost assessment, environmental planning and global engagement.

CERN Director-General Fabiola Gianotti said the integral FCC programme offers unparalleled opportunities to explore physics at the shortest distances, and noted growing support and enthusiasm for the programme within the community. That enthusiasm is reflected in the growing collaboration: the FCC collaboration now includes 162 institutes from 38 countries, with 28 new Memoranda of Understanding signed in the past year. These include new partnerships in Latin America, Asia and Ukraine, as well as Statements of Intent from the US and Canada. The FCC vision has also gained visibility in high-level policy dialogues, including the Draghi report on European competitiveness. Scientific plenaries and parallel sessions highlighted updates on simulation tools, rare-process searches and strategies to probe beyond the Standard Model. Detector R&D has progressed significantly, with prototyping, software development and AI-driven simulations advancing rapidly.

In accelerator design, developments included updated lattice and optics concepts involving global “head-on” compensation (using opposing beam interactions) and local chromaticity corrections (to the dependence of beam optics on particle energy). Refinements were also presented to injection schemes, beam collimation and the mitigation of collective effects. A central tool in these efforts is the Xsuite simulation platform, whose capabilities now include spin tracking and modelling based on real collider environments such as SuperKEKB.

Technical innovations also came to the fore. The superconducting RF system for FCC-ee includes 400 MHz Nb/Cu cavities for low-energy operation and 800 MHz Nb cavities for higher-energy modes. The introduction of reverse-phase operation and new RF source concepts – such as the tristron, with energy efficiencies above 90% (CERN Courier May/June 2025 p30) – represent major design advances.

Design developments

Vacuum technologies based on ultrathin NEG coating and discrete photon stops, as well as industrialisation strategies for cost control, are under active development. For FCC-hh, high-field magnet R&D continues on both Nb3Sn prototypes and high-temperature superconductors.

Sessions on technical infrastructure explored everything from grid design, cryogenics and RF power to heat recovery, robotics and safety systems. Sustainability concepts, including renewable energy integration and hydrogen storage, showcased the project’s interdisciplinary scope and long-term environmental planning.

FCC Week 2025 extended well beyond the conference venue, turning Vienna into a vibrant hub for public science outreach

The Early Career Researchers forum drew nearly 100 participants for discussions on sustainability, governance and societal impact. The session culminated in a commitment to inclusive collaboration, echoed by the quote from Austrian-born artist, architect and environmentalist Friedensreich Hundertwasser (1928–2000): “Those who do not honour the past lose the future. Those who destroy their roots cannot grow.”

This spirit of openness and public connection also defined the week’s city-wide engagement. FCC Week 2025 extended well beyond the conference venue, turning Vienna into a vibrant hub for public science outreach. In particular, the “Big Science, Big Impact” session – co-organised with the Austrian Federal Economic Chamber (WKO) – highlighted CERN’s broader role in economic development. Daniel Pawel Zawarczynski (WKO) shared examples of small and medium enterprise growth and technology transfer, noting that CERN participation can open new markets, from tunnelling to aerospace. Economist Gabriel Felbermayr referred to a recent WIFO analysis indicating a benefit-to-cost ratio for the FCC greater than 1.2 under conservative assumptions. The FCC is not only a tool for discovery, observed Johannes Gutleber (CERN), but also a platform enabling technology development, open software innovation and workforce training.

The FCC awards celebrate the creativity, rigour and passion that early-career researchers bring to the programme. This year, Tsz Hong Kwok (University of Zürich) and Audrey Piccini (CERN) won poster prizes, Sara Aumiller (TU München) and Elaf Musa (DESY) received innovation awards, and Ivan Karpov (CERN) and Nicolas Vallis (PSI) were honoured with paper prizes sponsored by Physical Review Accelerators and Beams. As CERN Council President Costas Fountas reminded participants, the FCC is not only about pushing the frontiers of knowledge, but also about enabling a new generation of ideas, collaborations and societal progress.

The post A new phase for the FCC appeared first on CERN Courier.

]]>
Meeting report FCC Week 2025 took place in Vienna from 19 to 23 May. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_FN_FCC.jpg
Mary K Gaillard 1939–2025 https://cerncourier.com/a/mary-k-gaillard-1939-2025/ Tue, 08 Jul 2025 19:05:22 +0000 https://cerncourier.com/?p=113693 Mary K Gaillard, a key figure in the development of the Standard Model of particle physics, passed away on 23 May 2025.

The post Mary K Gaillard 1939–2025 appeared first on CERN Courier.

]]>
Mary K Gaillard, a key figure in the development of the Standard Model of particle physics, passed away on 23 May 2025. She was born in 1939 to a family of academics who encouraged her inquisitiveness and independence. She graduated in 1960 from Hollins College, a small college in Virginia, where her physics professor recognised her talent, helping her get jobs in the Ringuet laboratory at l’École Polytechnique during a junior year abroad and for two summers at the Brookhaven National Laboratory. In 1961 she obtained a master’s degree from Columbia University and in 1968 a doctorate in theoretical physics from the University of Paris at Orsay. Mary K was a research scientist with the French CNRS and a visiting scientist at CERN for most of the 1970s. From 1981 until she retired in 2009, she was a senior scientist at the Lawrence Berkeley National Laboratory and a professor of physics at the University of California at Berkeley, where she was the first woman in the department.

Mary K was a theoretical physicist of great power, gifted both with a deep physical intuition and a very high level of technical mastery. She used her gifts to great effect and made many important contributions to the development of the Standard Model of elementary particle physics that was established precisely during the course of her career. She pursued her love of physics with powerful determination, in the face of overt discrimination that went well beyond what may still exist today. She fought these battles and produced beautiful, important physics, all while raising three children as a devoted mother.

Undeniable impact

After obtaining her master’s degree at Columbia, Mary K accompanied her first husband, Jean-Marc Gaillard, to Paris, where she was rebuffed in many attempts to obtain a position in an experimental group. She next tried and failed, multiple times, to find an advisor in theoretical physics, which she actually preferred to experimental physics but had not pursued because it was regarded as an even more unlikely career for a woman. Eventually, and fortunately for the development of elementary particle physics, Bernard d’Espagnat agreed to supervise her doctoral research at the University of Paris. While she quickly succeeded in producing significant results in her research, respect and recognition were still slow to come. She suffered many slights from a culture that could not understand or countenance the possibility of a woman theoretical physicist and put many obstacles in her way. Respect and recognition did finally come in appropriate measure, however, by virtue of the undeniable impact of her work.

Her contributions to the field are numerous. During an intensely productive period in the mid-1970s, she completed a series of projects that established the framework for the decades to follow that would culminate in the Standard Model. Famously, during a one-year visit to Fermilab in 1973, using the known properties of the “strange” K mesons, she successfully predicted the mass scale of the fourth “charm” quark a few months prior to its discovery. Back at CERN a few years later, she also predicted, in the framework of grand unified theories, the mass of the fifth “bottom” quark – a successful though still speculative prediction. Other impactful work, extracting the experimental consequences of theoretical constructs, laid down the paths that were followed to experimentally validate the charm-quark discovery and to search for the Higgs boson required to complete the Standard Model. Another key contribution showed how “jets”, streams of particles created in high-energy accelerators, could be identified as manifestations of the “gluon” carriers of the strong force of the Standard Model.

In the 1980s in Berkeley, when the Superconducting Super Collider and the Large Hadron Collider were under discussion, she showed that they could successfully uncover the mechanism of electroweak symmetry breaking required to understand the Standard Model weak force, even if it was “dynamical” – an experimentally much more challenging possibility than breaking by a Higgs boson. For the remainder of her career, she focused principally on work to address issues that are still unresolved by the Standard Model. Much of this research involved “supersymmetry” and its extension to encompass the gravitational force, theoretical constructs that originated in the work of her second husband, the late Bruno Zumino, who also moved from CERN to Berkeley.

Mary K’s accomplishments were recognised by numerous honorary societies and awards, including the National Academy of Sciences, the American Academy of Arts and Sciences, and the J. J. Sakurai Prize for Theoretical Particle Physics of the American Physical Society. She served on numerous governmental and academic advisory panels, including six years on the National Science Board. She tells her own story in a memoir, A Singularly Unfeminine Profession, published in 2015. Mary K Gaillard will surely be remembered when the final history of elementary particle physics is written.

The post Mary K Gaillard 1939–2025 appeared first on CERN Courier.

]]>
News Mary K Gaillard, a key figure in the development of the Standard Model of particle physics, passed away on 23 May 2025. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_OBITS_Gaillard.jpg
Fritz Caspers 1950–2025 https://cerncourier.com/a/fritz-caspers-1950-2025/ Tue, 08 Jul 2025 19:03:51 +0000 https://cerncourier.com/?p=113696 Friedhelm “Fritz” Caspers, a master of beam cooling, passed away on 12 March 2025.

The post Fritz Caspers 1950–2025 appeared first on CERN Courier.

]]>
Friedhelm “Fritz” Caspers, a master of beam cooling, passed away on 12 March 2025.

Born in Bonn, Germany in 1950, Fritz studied electrical engineering at RWTH Aachen. He joined CERN in 1981, first as a fellow and then as a staff member. During the 1980s Fritz contributed to stochastic cooling in CERN’s antiproton programme. In the team of Georges Carron and Lars Thorndahl, he helped devise ultra-fast microwave stochastic cooling systems for the then new antiproton cooler ring. He also initiated the development of power field-effect transistors that are still operational today in CERN’s Antiproton Decelerator ring. Fritz conceived novel geometries for pickups and kickers, such as slits cut into ground plates, as now used for the GSI FAIR project, and meander-type electrodes. From 1988 to 1995, Fritz was responsible for all 26 stochastic-cooling systems at CERN. In 1990 he became a senior member of the Institute of Electrical and Electronics Engineers (IEEE), and was later named an IEEE Life Fellow.

Pioneering diagnostics

In the mid-2000s, Fritz proposed enamel-based clearing electrodes and initiated pertinent collaborations with several German companies. At about the same time, he carried out ultrasound diagnostics on soldered junctions on LHC interconnects. Among the roughly 1000 junctions measured, he and his team found a single non-conform junction. In 2008 Fritz suggested non-elliptical superconducting crab cavities for the HL-LHC. He also proposed and performed pioneering electron-cloud diagnostics and mitigation using microwaves. For the LHC, he predicted a “magnetron effect”, where coherently radiating cloud electrons might quench the LHC magnets at specific values of their magnetic field. His advice was highly sought after on laboratory-impedance measurements and electromagnetic interference.

Throughout the past three decades, Fritz was active and held in high esteem not only at CERN but all around the world. For example, he helped develop the stochastic cooling systems for GSI in Darmstadt, Germany, where his main contact was Fritz Nolden. He contributed to the construction and commissioning of stochastic cooling for GSI’s Experimental Storage Ring, including the successful demonstration of the stochastic cooling of heavy ions in 1997. Fritz also helped develop the stochastic cooling of rare isotopes for the RI Beam Factory project at RIKEN, Japan.

He helped develop the power field-effect transistors still operational today in CERN’s AD ring

Fritz was a long-term collaborator of IMP Lanzhou at the Chinese Academy of Sciences (CAS). In 2015, stochastic cooling was commissioned at the Cooling Storage Ring with his support. Always kind and willing to help anyone who needed him, Fritz also provided valuable suggestions and hands-on experience with impedance measurements for IMP’s HIAF project, especially the titanium-alloy-loaded thin-wall vacuum chamber and magnetic-alloy-loaded RF cavities. In 2021, Fritz was elected as a Distinguished Scientist of the CAS President’s International Fellowship Initiative and awarded the Dieter Möhl Award by the International Committee for Future Accelerators for his contributions to beam cooling.

In 2013, the axion dark-matter research centre IBS-CAPP was established at KAIST, Korea. For this new institute, Fritz proved to be just the right lecturer. Every spring, he visited Korea for a week of intensive lectures on RF techniques, noise measurements and much more. His lessons, which were open to scientists from all over Korea, transformed Korean researchers from RF amateurs into professionals, and his contributions helped propel IBS–CAPP to the forefront of research.

Fritz was far more than just a brilliant scientist. He was a generous mentor, a trusted colleague and a dear friend who lit up a room when he entered, and his absence will be deeply felt by all of us who had the privilege of knowing him. Always on the hunt for novel ideas, Fritz was a polymath and a fully open-minded scientist. His library at home was a visit into the unknown, containing “dark matter”, as we often joked. We will remember Fritz as a gentleman who was full of inspiration for the young and the not-so-young alike. His death is a loss to the whole accelerator world.

The post Fritz Caspers 1950–2025 appeared first on CERN Courier.

]]>
News Friedhelm “Fritz” Caspers, a master of beam cooling, passed away on 12 March 2025. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_OBITS_Caspers.jpg
Sandy Donnachie 1936–2025 https://cerncourier.com/a/sandy-donnachie-1936-2025/ Tue, 08 Jul 2025 19:03:05 +0000 https://cerncourier.com/?p=113702 A particle theorist and scientific leader.

The post Sandy Donnachie 1936–2025 appeared first on CERN Courier.

]]>
Sandy Donnachie, a particle theorist and scientific leader, passed away on 7 April 2025.

Born in 1936 and raised in Kilmarnock, Scotland, Sandy received his BSc and PhD degrees from the University of Glasgow before taking up a lectureship at University College London in 1963. He was a CERN research associate from 1965 to 1967, and then senior lecturer at the University of Glasgow until 1969, when he took up a chair at the University of Manchester and played a leading role in developing the scientific programme at NINA, the electron synchrotron at the nearby Daresbury National Laboratory. Sandy then served as head of the Department of Physics and Astronomy at the University from 1989 to 1994, and as dean of the Faculty of Science and Engineering from 1994 to 1997. He had a formidable reputation – if a staff member or student asked to see him, he would invite them to come at 8 a.m., to test whether what they wanted to discuss was truly important.

Sandy played a leading role in the international scientific community, maintaining strong connections with CERN throughout his career, as scientific delegate to the CERN Council from 1989 to 1994, chair of the SPS committee from 1988 to 1992, and member of the CERN Scientific Policy Committee from 1988 to 1993. In the UK, he chaired the Nuclear Physics Board from 1989 to 1993, and served as a member of the Science and Engineering Research Council from 1989 to 1994. He also served as an associate editor for Physical Review Letters from 2010 to 2016. In recognition of his leadership and scientific contributions, he was awarded the UK’s Institute of Physics Glazebrook Medal in 1997.

The “Donnachie–Landshoff pomeron” is known to all those working in the field

Sandy is perhaps best known for his body of work with Peter Landshoff on elastic and diffractive scattering: the “Donnachie–Landshoff pomeron” is known to all those working in the field. The collaboration began half a century ago and when email became available, they were among its early and most enthusiastic users. Sandy only knew Fortran and Peter only knew C, but somehow they managed to collaborate and together wrote more than 50 publications, including a book Pomeron Physics and QCD with Günter Dosch and Otto Nachtmann published in 2004. The collaboration lasted until, so sadly, Sandy was struck with Parkinson’s disease and was no longer able to use email. Earlier in his career, Sandy had made significant contributions to the field of low-energy hadron scattering, in particular through a collaboration with Claud Lovelace, which revealed many hitherto unknown baryon states in pion–nucleon scattering, and through a series of papers on meson photoproduction, initially with Graham Shaw and then with Frits Berends and other co-workers.

Throughout his career, Sandy was notable for his close collaborations with experimental physics groups, including a long association with the Omega Photon Collaboration at CERN, with whom he co-authored 27 published papers. He and Shaw also produced three books, culminating in Electromagnetic Interactions and Hadronic Structure with Frank Close, which was published in 2007.

In his leisure time, Sandy was a great lover of classical music and a keen sailor, golfer and country walker.

The post Sandy Donnachie 1936–2025 appeared first on CERN Courier.

]]>
News A particle theorist and scientific leader. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_OBITS_Donnachie.jpg
Fritz A Ferger 1933–2025 https://cerncourier.com/a/fritz-a-ferger-1933-2025/ Tue, 08 Jul 2025 19:01:59 +0000 https://cerncourier.com/?p=113699 A multi-talented engineer who had a significant impact on the technical development and management of CERN.

The post Fritz A Ferger 1933–2025 appeared first on CERN Courier.

]]>
Fritz Ferger, a multi-talented engineer who had a significant impact on the technical development and management of CERN, passed away on 22 March 2025.

Born in Reutlingen, Germany, on 5 April 1933, Fritz obtained his electrical engineering degree in Stuttgart and a doctorate at the University of Grenoble. With a contract from General Electric in his pocket, he visited CERN, curious about the 25 GeV Proton Synchrotron, whose construction was receiving the finishing touches in the late 1950s. He met senior CERN staff and was offered a contract that he, impressed by the visit, accepted in early 1959.

Fritz’s first assignment was the development of a radio-frequency (RF) accelerating cavity for a planned fixed-field alternating-gradient (FFAG) accelerator. This was abandoned in early 1960 in favour of the study of a 2 × 25 GeV proton–proton collider, the Intersecting Storage Rings (ISR). As a first step, the CERN Electron Storage and Accumulation Ring (CESAR) was constructed to test high-vacuum technology and RF accumulation schemes; Fritz designed and constructed the RF system. With CESAR in operation, he moved on to the construction and tests of the high-power RF system of the ISR, a project that was approved in 1965.

After the smooth running-in of the ISR, and after being responsible for the General Engineering Group for a while, he became division leader of the ISR in 1974, a position he held until 1982. Under his leadership the ISR unfolded its full potential, with proton beam currents up to 50 A and a luminosity 35 times the design value, leading CERN to acquire the confidence that colliders were the way to go. Thanks to his foresight, the development of new technologies for the accelerator was encouraged, including superconducting quadrupoles and pumping by cryo- and getter surfaces. Both were applied on a grand scale in LEP and are still essential for the LHC today.

Under his ISR leadership CERN acquired the confidence that colliders were the way to go

When the resources of the ISR Division were refocussed on LEP in 1983, Fritz became the leader of the Technical Inspection and Safety Commission. This absorbed the activities of the previous health and safety groups, but its main task was to scrutinise the LEP project from all technical and safety aspects. Fritz’s responsibility widened considerably when he became leader of the Technical Support Division in 1986. All of the CERN civil engineering, the tunnelling for the 27 km circumference LEP ring, its auxiliary tunnels, the concreting of the enormous caverns for the experiments and the construction of a dozen surface buildings were in full swing and brought to a successful conclusion in the following years. New buildings on the Meyrin site were added, including the attractive Building 40 for the large experimental groups, in which he took particular pride. At the same time, and under pressure to reduce expenditure, he had to manage several difficult outsourcing contracts.

When he retired in 1997, he could look back on almost 40 years dedicated to CERN; his scientific and technical competence paired with exceptional organisational and administrative talent. We shall always remember him as an exacting colleague with a wide range of interests, and as a friend, appreciated for his open and helpful attitude.

We grieve his loss and offer our sincere condolences to his widow Catherine and their daughters Sophie and Karina.

The post Fritz A Ferger 1933–2025 appeared first on CERN Courier.

News A multi-talented engineer who had a significant impact on the technical development and management of CERN. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_OBITS_Ferger.jpg
The minimalism of many worlds https://cerncourier.com/a/the-minimalism-of-many-worlds/ Wed, 02 Jul 2025 11:29:05 +0000 https://cerncourier.com/?p=113491 David Wallace argues for the ‘decoherent view’ of quantum mechanics, where at the fundamental level there is neither probability nor wavefunction collapse.

The post The minimalism of many worlds appeared first on CERN Courier.

]]>
Physicists have long been suspicious of the “quantum measurement problem”: the supposed puzzle of how to make sense of quantum mechanics. Everyone agrees (don’t they?) on the formalism of quantum mechanics (QM); any additional discussion of the interpretation of that formalism can seem like empty words. And Hugh Everett III’s infamous “many-worlds interpretation” looks more dubious than most: not just unneeded words but unneeded worlds. Don’t waste your time on words or worlds; shut up and calculate.

But the measurement problem has driven more than philosophy. Questions of how to understand QM have always been entangled, so to speak, with questions of how to apply and use it, and even how to formulate it; the continued controversies about the measurement problem are also continuing controversies in how to apply, teach and mathematically describe QM. The Everett interpretation emerges as the natural reading of one strategy for doing QM, which I call the “decoherent view” and which has largely supplanted the rival “lab view”, and so – I will argue – the Everett interpretation can and should be understood not as a useless adjunct to modern QM but as part of the development in our understanding of QM over the past century.

The view from the lab

The lab view has its origins in the work of Bohr and Heisenberg, and it takes the word “observable” that appears in every QM textbook seriously. In the lab view, QM is not a theory like Newton’s or Einstein’s that aims at an objective description of an external world subject to its own dynamics; rather, it is essentially, irreducibly, a theory of observation and measurement. Quantum states, in the lab view, do not represent objective features of a system in the way that (say) points in classical phase space do: they represent the experimentalist’s partial knowledge of that system. The process of measurement is not something to describe within QM: ultimately it is external to QM. And the so-called “collapse” of quantum states upon measurement represents not a mysterious stochastic process but simply the updating of our knowledge upon gaining more information.

Valued measurements

The lab view has led to important physics. In particular, the “positive operator-valued measure” idea, central to many aspects of quantum information, emerges most naturally from the lab view. So do the many extensions, total and partial, to QM of concepts initially from the classical theory of probability and information. Indeed, in quantum information more generally it is arguably the dominant approach. Yet outside that context, it faces severe difficulties. Most notably: if quantum mechanics describes not physical systems in themselves but some calculus of measurement results, if a quantum system can be described only relative to an experimental context, what theory describes those measurement results and experimental contexts themselves?

Dynamical probes

One popular answer – at least in quantum information – is that measurement is primitive: no dynamical theory is required to account for what measurement is, and the idea that we should describe measurement in dynamical terms is just another Newtonian prejudice. (The “QBist” approach to QM fairly unapologetically takes this line.)

One can criticise this answer on philosophical grounds, but more pressingly: that just isn’t how measurement is actually done in the lab. Experimental kit isn’t found scattered across the desert (each device perhaps stamped by the gods with the self-adjoint operator it measures); it is built using physical principles (see “Dynamical probes” figure). The fact that the LHC measures the momentum and particle spectra of various decay processes, for instance, is something established through vast amounts of scientific analysis, not something simply posited. We need an account of experimental practice that allows us to explain how measurement devices work and how to build them.

Perhaps this was viable in the 1930s, but today measurement devices rely on quantum principles

Bohr had such an account: quantum measurements are to be described through classical mechanics. The classical is ineliminable from QM precisely because it is to classical mechanics we turn when we want to describe the experimental context of a quantum system. To Bohr, the quantum–classical transition is a conceptual and philosophical matter as much as a technical one, and classical ideas are unavoidably required to make sense of any quantum description.

Perhaps this was viable in the 1930s. But today it is not only the measured systems but the measurement devices themselves that essentially rely on quantum principles, beyond anything that classical mechanics can describe. And so, whatever the philosophical strengths and weaknesses of this approach – or of the lab view in general – we need something more to make sense of modern QM, something that lets us apply QM itself to the measurement process.

Practice makes perfect

We can look to physics practice to see how. As von Neumann glimpsed, and Everett first showed clearly, nothing prevents us from modelling a measurement device itself inside unitary quantum mechanics. When we do so, we find that the measured system becomes entangled with the device, so that (for instance) if a measured atom is in a weighted superposition of spins with respect to some axis, then after the measurement the device is in a similarly weighted superposition of readout values.

Origins

In principle, this courts infinite regress: how is that new superposition to be interpreted, save by a still-larger measurement device? In practice, we simply treat the mod-squared amplitudes of the various readout values as probabilities, and compare them with observed frequencies. This sounds a bit like the lab view, but there is a subtle difference: these probabilities are understood not with respect to some hypothetical measurement, but as the actual probabilities of the system being in a given state.

Of course, if we could always understand mod-squared amplitudes that way, there would be no measurement problem! But interference precludes this. Set up, say, a Mach–Zehnder interferometer, with a particle beam split in two and then re-interfered, and two detectors after the re-interference (see “Superpositions are not probabilities” figure). We know that if either of the two paths is blocked, so that any particle detected must have gone along the other path, then each of the two outcomes is equally likely: for each particle sent through, detector A fires with 50% probability and detector B with 50% probability. So whichever path the particle went down, we get A with 50% probability and B with 50% probability. And yet we know that if the interferometer is properly tuned and both paths are open, we can get A with 100% probability or 0% probability or anything in between. Whatever microscopic superpositions are, they are not straightforwardly probabilities of classical goings-on.
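The interferometer argument above can be made arithmetically explicit with complex amplitudes. The following is a minimal sketch, assuming one common 50/50 beam-splitter convention (amplitudes (a0, a1) map to ((a0 + i·a1)/√2, (i·a0 + a1)/√2)); the detector labels follow the text:

```python
import math

def beam_splitter(a0, a1):
    """One 50/50 beam splitter acting on the two path amplitudes."""
    s = 1 / math.sqrt(2)
    return s * (a0 + 1j * a1), s * (1j * a0 + a1)

# Both paths open: the amplitudes re-interfere at the second beam splitter.
a0, a1 = beam_splitter(1, 0)             # after the first splitter
d0, d1 = beam_splitter(a0, a1)           # after recombination
p_A, p_B = abs(d0) ** 2, abs(d1) ** 2    # 0.0 and 1.0: every particle reaches B

# Path 1 blocked: only the path-0 amplitude survives (probability 1/2),
# and each detector then fires half the time.
e0, e1 = beam_splitter(a0, 0)
survive = abs(a0) ** 2                   # 0.5
q_A, q_B = abs(e0) ** 2 / survive, abs(e1) ** 2 / survive   # 0.5 and 0.5
```

Tuning the relative phase between the open arms moves p_A anywhere between 0 and 1, which is exactly the behaviour a purely probabilistic reading of the superposition cannot reproduce.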

Unfeasible interference

But macroscopic superpositions are another matter. There, interference is unfeasible (good luck re-interfering the two states of Schrödinger’s cat); nothing formally prevents us from treating mod-squared amplitudes like probabilities.

And decoherence theory has given us a clear understanding of just why interference is invisible in large systems, and more generally when we can and cannot get away with treating mod-squared amplitudes as probabilities. As the work of Zeh, Zurek, Gell-Mann, Hartle and many others (drawing inspiration from Everett and from work on the quantum/classical transition as far back as Mott) has shown, decoherence – that is, the suppression of interference – is simply an aspect of non-equilibrium statistical mechanics. The large-scale, collective degrees of freedom of a quantum system, be it the needle on a measurement device or the centre-of-mass of a dust mote, are constantly interacting with a much larger number of small-scale degrees of freedom: the short-wavelength phonons inside the object itself; the ambient light; the microwave background radiation. We can still find autonomous dynamics for the collective degrees of freedom, but because of the constant transfer of information to the small scale, the coherence of any macroscopic superposition rapidly bleeds into microscopic degrees of freedom, where it is dynamically inert and in practice unmeasurable.

Emergence and scale

Decoherence can be understood in the familiar language of emergence and scale separation. Quantum states are not fundamentally probabilistic, but they are emergently probabilistic. That emergence occurs because for macroscopic systems, the timescale by which energy is transferred from macroscopic to residual degrees of freedom is very long compared to the timescale of the macroscopic system’s own dynamics, which in turn is very long compared to the timescale by which information is transferred. (To take an extreme example, information about the location of the planet Jupiter is recorded very rapidly in the particles of the solar wind, or even the photons of the cosmic background radiation, but Jupiter loses only an infinitesimal fraction of its energy to either.) So the system decoheres very rapidly, but having done so it can still be treated as autonomous.

On this decoherent view of QM, there is ultimately only the unitary dynamics of closed systems; everything else is a limiting or special case. Probability and classicality emerge through dynamical processes that can be understood through known techniques of physics: understanding that emergence may be technically challenging but poses no problem of principle. And this means that the decoherent view can address the lab view’s deficiencies: it can analyse the measurement process quantum mechanically; it can apply quantum mechanics even in cosmological contexts where the “measurement” paradigm breaks down; it can even recover the lab view within itself as a limited special case. And so it is the decoherent view, not the lab view, that – I claim – underlies the way quantum theory is for the most part used in the 21st century, including in its applications in particle physics and cosmology (see “Two views of quantum mechanics” table).

Two views of quantum mechanics

Quantum phenomenon | Lab view | Decoherent view
Dynamics | Unitary (i.e. governed by the Schrödinger equation) only between measurements | Always unitary
Quantum/classical transition | Conceptual jump between fundamentally different systems | Purely dynamical: classical physics is a limiting case of quantum physics
Measurements | Cannot be treated internal to the formalism | Just one more dynamical interaction
Role of the observer | Conceptually central | Just one more physical system

But if the decoherent view is correct, then at the fundamental level there is neither probability nor wavefunction collapse; nor is there a fundamental difference between a microscopic superposition like those in interference experiments and a macroscopic superposition like Schrödinger’s cat. The differences are differences of degree and scale: at the microscopic level, interference is manifest; as we move to larger and more complex systems it hides away more and more effectively; in practice it is invisible for macroscopic systems. But even if we cannot detect the coherence of the superposition of a live and dead cat, it does not thereby vanish. And so according to the decoherent view, the cat is simultaneously alive and dead in the same way that the superposed atom is simultaneously in two places. We don’t need a change in the dynamics of the theory, or even a reinterpretation of the theory, to explain why we don’t see the cat as alive and dead at once: decoherence has already explained it. There is a “live cat” branch of the quantum state, entangled with its surroundings to an ever-increasing degree; there is likewise a “dead cat” branch; the interference between them is rendered negligible by all that entanglement.

Many worlds

At last we come to the “many worlds” interpretation: for when we observe the cat ourselves, we too enter a superposition of seeing a live and a dead cat. But these “worlds” are not added to QM as exotic new ontology: they are discovered, as emergent features of collective degrees of freedom, simply by working out how to use QM in contexts beyond the lab view and then thinking clearly about its content. The Everett interpretation – the many-worlds theory – is just the decoherent view taken fully seriously. Interference explains why superpositions cannot be understood simply as parameterising our ignorance; unitarity explains how we end up in superpositions ourselves; decoherence explains why we have no awareness of it.

Superpositions are not probabilities

(Forty-five years ago, David Deutsch suggested testing the Everett interpretation by simulating an observer inside a quantum computer, so that we could recohere them after they made a measurement. Then, it was science fiction; in this era of rapid progress on AI and quantum computation, perhaps less so!)

Could we retain the decoherent view and yet avoid any commitment to “worlds”? Yes, but only in the same sense that we could retain general relativity and yet refuse to commit to what lies behind the cosmological event horizon: the theory gives a perfectly good account of the other Everett worlds, and the matter beyond the horizon, but perhaps epistemic caution might lead us not to overcommit. But even so, the content of QM includes the other worlds, just as the content of general relativity includes beyond-horizon physics, and we will only confuse ourselves if we avoid even talking about that content. (Thus Hawking, who famously observed that when he heard about Schrödinger’s cat he reached for his gun, was nonetheless happy to talk about Everettian branches when doing quantum cosmology.)

Alternative views

Could there be a different way to make sense of the decoherent view? Never say never; but the many-worlds perspective results almost automatically from simply taking that view as a literal description of quantum systems and how they evolve, so any alternative would have to be philosophically subtle, taking a different and less literal reading of QM. (Perhaps relationalism, discussed in this issue by Carlo Rovelli, see “Four ways to interpret quantum mechanics“, offers a way to do it, though in many ways it seems more a version of the lab view. The physical collapse and hidden variables interpretations modify the formalism, and so fall outside either category.)

The Everett interpretation is just the decoherent view taken fully seriously

Does the apparent absurdity, or the ontological extravagance, of the Everett interpretation force us, as good scientists, to abandon many-worlds, or if necessary the decoherent view itself? Only if we accept some scientific principle that throws out theories that are too strange or that postulate too large a universe. But physics accepts no such principle, as modern cosmology makes clear.

Are there philosophical problems for the Everett interpretation? Certainly: how are we to think of the emergent ontology of worlds and branches; how are we to understand probability when all outcomes occur? But problems of this kind arise across all physical theories. Probability is philosophically contested even apart from Everett, for instance: is it frequency, rational credence, symmetry or something else? In any case, these problems pose no barrier to the use of Everettian ideas in physics.

The case for the Everett interpretation is that it is the conservative, literal reading of the version of quantum mechanics we actually use in modern physics, and there is no scientific pressure for us to abandon that reading. We could, of course, look for alternatives. Who knows what we might find? Or we could shut up and calculate – within the Everett interpretation.

Discovering the neutrino sky https://cerncourier.com/a/discovering-the-neutrino-sky/ Mon, 19 May 2025 08:01:22 +0000 https://cerncourier.com/?p=113109 Lu Lu looks forward to the next two decades of neutrino astrophysics, exploring the remarkable detector concepts needed to probe ultra-high energies from 1 EeV to 1 ZeV.

The post Discovering the neutrino sky appeared first on CERN Courier.

]]>
Lake Baikal, the Mediterranean Sea and the deep, clean ice at the South Pole: trackers. The atmosphere: a calorimeter. Mountains and even the Moon: targets. These will be the tools of the neutrino astrophysicist in the next two decades. Potentially observable energies dwarf those of the particle physicist doing repeatable experiments, rising up to 1 ZeV (10²¹ eV) for some detector concepts.

The natural accelerators of the neutrino astrophysicist are also humbling. Consider, for instance, the extraordinary relativistic jets emerging from the supermassive black hole in Messier 87 – an accelerator that stretches for about 5000 light years, or roughly 315 million times the distance from the Earth to the Sun.

Alongside gravitational waves, high-energy neutrinos have opened up a new chapter in astronomy. They point to the most extreme events in the cosmos. They can escape from regions where high-energy photons are attenuated by gas and dust, such as NGC 1068, the first steady neutrino emitter to be discovered (see “The neutrino sky” figure). Their energies can rise orders of magnitude above 1 PeV (10¹⁵ eV), where the universe becomes opaque to photons due to pair production with the cosmic microwave background. Unlike charged cosmic rays, they are not deflected by magnetic fields, preserving their original direction.
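The photon horizon mentioned here can be estimated from the pair-production threshold γγ → e⁺e⁻: for a head-on collision, s = 4E₁E₂ ≥ (2mₑ)², so E₁ ≳ mₑ²/E₂. A rough back-of-the-envelope sketch (the CMB photon energy used is a typical value, not an exact one):

```python
# Head-on pair-production threshold on a typical CMB photon.
m_e = 0.511e6     # electron mass, eV
E_cmb = 6.4e-4    # typical CMB photon energy, eV (order ~2.7 kT at 2.7 K)
E_thresh = m_e ** 2 / E_cmb   # ~4e14 eV, i.e. a few hundred TeV
```

Photons not far above this threshold already pair-produce on the CMB, which is why the gamma-ray sky closes off around the PeV scale while neutrinos travel unimpeded.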

Breaking into the exascale calls for new thinking

High-energy neutrinos therefore offer a unique window into some of the most profound questions in modern physics. Are there new particles beyond the Standard Model at the highest energies? What acceleration mechanisms allow nature to propel them to such extraordinary energies? And is dark matter implicated in these extreme events? With the observation of a 220 (+570/−110) PeV neutrino confounding the limits set by prior observatories and opening up the era of ultra-high-energy neutrino astronomy (CERN Courier March/April 2025 p7), the time is ripe for a new generation of neutrino detectors on an even grander scale (see “Thinking big” table).

A cubic-kilometre ice cube

Detecting high-energy neutrinos is a serious challenge. Though the neutrino–nucleon cross section increases a little less than linearly with neutrino energy, the flux of cosmic neutrinos drops as the inverse square of the energy or faster, reducing the event rate by nearly an order of magnitude per decade. A cubic-kilometre-scale detector is required to measure cosmic neutrinos beyond 100 TeV, and the Earth itself starts to become opaque as energies rise beyond a PeV or so, at which point the odds of a neutrino being absorbed as it passes through the planet are roughly even, depending on the direction of the event.
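The scaling argument can be made concrete with illustrative power laws (the indices below are assumptions for the sketch, not fits): take σ ∝ E^0.36 for the high-energy neutrino–nucleon cross section and Φ ∝ E⁻² for the flux. The event rate per decade of energy then falls as 10^(0.36−2+1) ≈ 0.23 per decade, and faster for steeper spectra:

```python
import math

def rate_per_decade(E_lo, gamma=2.0, alpha=0.36, n=10000):
    """Relative event rate in [E_lo, 10*E_lo]: integrate sigma*flux in log E."""
    total = 0.0
    for i in range(n):
        E = 10 ** (math.log10(E_lo) + (i + 0.5) / n)       # midpoint in log E
        total += (E ** alpha) * (E ** -gamma) * E           # dN/dlogE ~ sigma*flux*E
    return total / n

ratio = rate_per_decade(1e15) / rate_per_decade(1e14)                        # ~0.23
steep = rate_per_decade(1e15, gamma=2.5) / rate_per_decade(1e14, gamma=2.5)  # ~0.07
```

Each decade up in energy costs a factor of four to ten in rate, which is why the instrumented volume must grow so dramatically with the target energy.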

Thinking big

The journey of cosmic neutrino detection began off the coast of the Hawaiian Islands in the 1980s, led by John Learned of the University of Hawaii at Mānoa. The DUMAND (Deep Underwater Muon And Neutrino Detector) project sought to use both an array of optical sensors to measure Cherenkov light and acoustic detectors to measure the pressure waves generated by energetic particle cascades in water. It was ultimately cancelled in 1995 due to engineering difficulties related to deep-sea installation, data transmission over long underwater distances and sensor reliability under high pressure.

The next generation of cubic-kilometre-scale neutrino detectors built on DUMAND’s experience. The IceCube Neutrino Observatory has pioneered neutrino astronomy at the South Pole since 2011, probing energies from 10 GeV to 100 PeV, and is now being joined by experiments under construction such as KM3NeT in the Mediterranean Sea, which observed the 220 PeV candidate, and Baikal–GVD in Lake Baikal, the deepest lake on Earth. All three experiments watch for the deep inelastic scattering of high-energy neutrinos, using optical sensors to detect Cherenkov photons emitted by secondary particles.

Exascale from above

A decade of data-taking from IceCube has been fruitful. The Milky Way has been observed in neutrinos for the first time. A neutrino candidate event has been observed that is consistent with the Glashow resonance – the resonant production in the ice of a real W boson by a 6.3 PeV electron–antineutrino – confirming a longstanding prediction from 1960. Neutrino emission has been observed from supermassive black holes in NGC 1068 and TXS 0506+056. A diffuse neutrino flux has been discovered beyond 10 TeV. Neutrino mixing parameters have been measured. And flavour ratios have been constrained: due to the averaging of neutrino oscillations over cosmological distances, significant deviations from a 1:1:1 ratio of electron, muon and tau neutrinos could imply new physics such as the violation of Lorentz invariance, non-standard neutrino interactions or neutrino decay.
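The 6.3 PeV Glashow energy quoted above follows from requiring the neutrino–electron centre-of-mass energy to sit on the W mass: for an atomic electron at rest, s = 2mₑE_ν = m_W², so E_ν = m_W²/(2mₑ). A quick numerical check (masses are standard PDG values):

```python
# Resonant W production on an electron at rest: E_nu = m_W^2 / (2 m_e).
m_W = 80.379e9    # W boson mass, eV
m_e = 0.511e6     # electron mass, eV
E_res = m_W ** 2 / (2 * m_e)   # ~6.3e15 eV = 6.3 PeV
```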

The sensitivity and global coverage of water-Cherenkov neutrino observatories are set to increase still further. The Pacific Ocean Neutrino Experiment (P-ONE) aims to establish a cubic-kilometre-scale deep-sea neutrino telescope off the coast of Canada; IceCube will expand the volume of its optical array by a factor eight; and the TRIDENT and HUNT experiments, currently being prototyped in the South China Sea, may offer the largest detector volumes of all. These detectors will improve sky coverage, enhance angular resolution, and increase statistical precision in the study of neutrino sources from 1 TeV to 10 PeV and above.

Breaking into the exascale calls for new thinking.

Into the exascale

Optical Cherenkov detectors have been exceptionally successful in establishing neutrino astronomy. However, the attenuation of optical photons in water and ice limits the horizontal spacing of photodetectors to a few hundred metres at most, constraining the scalability of the technology. To achieve sensitivity to ultra-high energies measured in EeV (10¹⁸ eV), an instrumented area of order 100 km² would be required. Constructing an optical-based detector on such a scale is impractical.

Earth skimming

One solution is to exchange the tracking volume of IceCube and its siblings with a larger detector that uses the atmosphere as a calorimeter: the deposited energy is sampled on the Earth’s surface.

The Pierre Auger Observatory in Argentina epitomises this approach. If IceCube is presently the world’s largest detector by volume, the Pierre Auger Observatory is the world’s largest detector by area. Over an area of 3000 km², 1660 water Cherenkov detectors and 24 fluorescence telescopes sample the particle showers generated when cosmic rays with energies beyond 10 EeV strike the atmosphere, producing billions of secondary particles. Among the showers it detects are surely events caused by ultra-high-energy neutrinos, but how might they be identified?

Out on a limb

One of the most promising approaches is to filter events based on where the air shower reaches its maximum development in the atmosphere. Cosmic rays tend to interact after traversing much less atmosphere than neutrinos, since the weakly interacting neutrinos have a much smaller cross-section than the hadronically interacting cosmic rays. In some cases, tau neutrinos can even skim the Earth’s atmospheric edge or “limb” as seen from space, interacting to produce a strongly boosted tau lepton that emerges from the rock (unlike an electron) to produce an upward-going air shower when it decays tens of kilometres later – though not so much later (unlike a muon) that it has escaped the atmosphere entirely. This signature is not possible for charged cosmic rays. So far, Auger has detected no neutrino candidate events of either topology, imposing stringent upper limits on the ultra-high-energy neutrino flux that are compatible with limits set by IceCube. The AugerPrime upgrade, soon expected to be fully operational, will equip each surface detector with scintillator panels and improved electronics.

Pole position

Experiments in space are being developed to detect these rare showers with an even larger instrumentation volume. POEMMA (Probe of Extreme Multi-Messenger Astrophysics) is a proposed satellite mission designed to monitor the Earth’s atmosphere from orbit. Two satellites equipped with fluorescence and Cherenkov detectors will search for ultraviolet photons produced by extensive air showers (see “Exascale from above” figure). EUSO-SPB2 (Extreme Universe Space Observatory on a Super Pressure Balloon 2) will test the same detection methods from the vantage point of high-atmosphere balloons. These instruments can help distinguish cosmic rays from neutrinos by identifying shallow showers and up-going events.

Another way to detect ultra-high-energy neutrinos is by using mountains and valleys as natural neutrino targets. This Earth-skimming technique also primarily relies on tau neutrinos, as the tau leptons produced via deep inelastic scattering in the rock can emerge from Earth’s crust and decay within the atmosphere to generate detectable particle showers in the air.

The Giant Radio Array for Neutrino Detection (GRAND) aims to detect radio signals from these tau-induced air showers using a large array of radio antennas spread over thousands of square kilometres (see “Earth skimming” figure). GRAND is planned to be deployed in multiple remote, mountainous locations, with the first site in western China, followed by others in South America and Africa. The Tau Air-Shower Mountain-Based Observatory (TAMBO) has been proposed to be deployed on the face of the Colca Canyon in the Peruvian Andes, where an array of scintillators will detect the electromagnetic signals from tau-induced air showers.

Another proposed strategy that builds upon the Earth-skimming principle is the Trinity experiment, which employs an array of Cherenkov telescopes to observe nearby mountains. Ground-based air Cherenkov detectors are known for their excellent angular resolution, allowing for precise pointing to trace back to the origin of the high-energy primary particles. Trinity is a proposed system of 18 wide-field Cherenkov telescopes optimised for detecting neutrinos in the 10 PeV–1000 PeV energy range from the direction of nearby mountains – an approach validated by experiments such as Ashra–NTA, deployed on Hawaii’s Big Island utilising the natural topography of the Mauna Loa, Mauna Kea and Hualālai volcanoes.

Diffuse neutrino landscape

All these ultra-high-energy experiments detect particle showers as they develop in the atmosphere, whether from above, below or skimming the surface. But “Askaryan” detectors operate deep within the ice of the Earth’s poles, where both the neutrino interaction and detection occur.

In 1962 Soviet physicist Gurgen Askaryan reasoned that electromagnetic showers must acquire a net negative charge excess as they develop, due to the Compton scattering of photons off atomic electrons and the ionisation of atoms by charged particles in the shower. As the charged shower propagates faster than the phase velocity of light in the medium, it should emit radiation in a manner analogous to Cherenkov light. However, there are key differences: Cherenkov radiation is typically incoherent and emitted by individual charged particles, while Askaryan radiation is coherent, being produced by a macroscopic buildup of charge, and is significantly stronger at radio frequencies. The Askaryan effect was experimentally confirmed at SLAC in 2001.
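The coherent/incoherent distinction is a simple scaling: N radiators emitting in phase produce a field N times larger, hence N² times the power, while random phases average to a power of only ~N. A toy-model sketch (unit-amplitude radiators, not a shower simulation):

```python
import cmath
import math
import random

def total_power(n, coherent, rng):
    """|sum of n unit fields|^2, with aligned or random phases."""
    if coherent:
        field = complex(n, 0)   # all radiators in phase: |field|^2 = n^2
    else:
        field = sum(cmath.exp(1j * rng.uniform(0, 2 * math.pi))
                    for _ in range(n))
    return abs(field) ** 2

rng = random.Random(0)
n = 10_000
p_coherent = total_power(n, True, rng)    # exactly n**2
p_random = total_power(n, False, rng)     # fluctuates around n
```

At radio wavelengths longer than the transverse size of the shower, the net charge excess radiates in phase, which is why Askaryan pulses are strongest in the radio band.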

Optimised arrays

Because the attenuation length of radio waves is an order of magnitude longer than that of optical photons, it becomes feasible to detect Askaryan signals with arrays of radio antennas far sparser than the compact optical arrays used in deep-ice Cherenkov detectors. Such detectors are optimised to cover thousands of square kilometres, with typical energy thresholds beyond 100 PeV.

The Radio Neutrino Observatory in Greenland (RNO-G) is a next-generation in-ice radio detector currently under construction on the ~3 km-thick ice sheet above central Greenland, operating at frequencies in the 150–700 MHz range. RNO-G will consist of a sparse array of 35 autonomous radio detector stations, each separated by 1.25 km, making it the first large-scale radio neutrino array in the northern hemisphere.

Moon skimming

In the southern hemisphere, the proposed IceCube-Gen2 will complement the aforementioned eightfold expanded optical array with a radio component covering a remarkable 500 km². The cold Antarctic ice provides an optimal medium for radio detection, with radio attenuation lengths of roughly 2 km facilitating cost-efficient instrumentation of the large volumes needed to measure the low ultra-high-energy neutrino flux. The radio array will combine in-ice omnidirectional antennas 150 m below the surface with high-gain antennas at a depth of 15 m and upward-facing antennas on the surface to veto the cosmic-ray background.

The IceCube-Gen2 radio array will have the sensitivity to probe features of the astrophysical neutrino spectrum beyond the PeV scale, addressing the tension between upper limits from Auger and IceCube, and KM3NeT’s 220 (+570/−110) PeV neutrino candidate – the sole ultra-high-energy neutrino yet observed. Extrapolating that event to an isotropic, diffuse flux, IceCube should have detected 75 events in the 72–2600 PeV energy range over its operational period. However, no events have been observed above 70 PeV.

Perhaps the most ambitious way to observe ultra-high-energy neutrinos is to use the Moon as a target

If the detected KM3NeT event has a neutrino energy of around 100 PeV, it could originate from the same astrophysical sources responsible for accelerating ultra-high-energy cosmic rays. In this case, interactions between accelerated protons and ambient photons from starlight or synchrotron radiation would produce pions that decay into ultra-high-energy neutrinos. Alternatively, if its true energy is closer to 1 EeV, it is more likely cosmogenic: arising from the Greisen–Zatsepin–Kuzmin process, in which ultra-high-energy cosmic rays interact with cosmic microwave background photons, producing a Δ-resonance that decays into pions and ultimately neutrinos. IceCube-Gen2 will resolve the spectral shape from PeV to 10 EeV and differentiate between these two possible production mechanisms (see “Diffuse neutrino landscape” figure).
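The cosmogenic energy scale invoked here can be estimated from the Δ-resonance condition: for a head-on proton–photon collision, s = m_p² + 4E_pE_γ ≈ m_Δ², so E_p ≈ (m_Δ² − m_p²)/(4E_γ). A rough sketch with a typical CMB photon energy (an order-of-magnitude estimate; the conventional ~5 × 10¹⁹ eV GZK cutoff arises from the higher-energy tail of the CMB spectrum):

```python
# Head-on Delta-resonance threshold for protons on the CMB.
m_p = 0.938e9       # proton mass, eV
m_delta = 1.232e9   # Delta(1232) mass, eV
E_cmb = 6.4e-4      # typical CMB photon energy, eV
E_gzk = (m_delta ** 2 - m_p ** 2) / (4 * E_cmb)   # ~2.5e20 eV
```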

Moonshots

Remarkably, the Radar Echo Telescope (RET) is exploring the use of radar to actively probe the ice for transient signals. Unlike Askaryan-based detectors, which passively listen for radio pulses generated by charge imbalances in particle cascades, RET beams a radar signal into the ice and watches for reflections off the ionisation caused by particle showers. SLAC’s T576 experiment demonstrated the concept in the lab in 2022 by observing a radar echo from a beam of high-energy electrons scattering off a plastic target. RET has now been deployed in Greenland, where it seeks echoes from down-going cosmic rays as a proof of concept.

Full-sky coverage

Perhaps the most ambitious way to observe ultra-high-energy neutrinos foresees using the Moon as a target. When neutrinos with energies above 100 EeV interact near the rim of the Moon, they can induce particle cascades that generate coherent Askaryan radio emission which could be detectable on Earth (see “Moon skimming” figure). Observations could be conducted from Earth-based radio telescopes or from satellites orbiting the Moon to improve detection sensitivity. Lunar Askaryan detectors could potentially be sensitive to neutrinos up to 1 ZeV (1021 eV). No confirmed detections have been reported so far.

Neutrino network

Proposed neutrino observatories are distributed across the globe – a necessary requirement for full sky coverage, given the Earth is not transparent to ultra-high-energy neutrinos (see “Full-sky coverage” figure). A network of neutrino telescopes ensures that transient astrophysical events can always be observed as the Earth rotates. This is particularly important for time-domain multi-messenger astronomy, enabling coordinated observations with gravitational wave detectors and electromagnetic counterparts. The ability to track neutrino signals in real time will be key to identifying the most extreme cosmic accelerators and probing fundamental physics at ultra-high energies.

The post Discovering the neutrino sky appeared first on CERN Courier.

Feature Lu Lu looks forward to the next two decades of neutrino astrophysics, exploring the remarkable detector concepts needed to probe ultra-high energies from 1 EeV to 1 ZeV. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_NEUTRINOS_sky.jpg
Accelerators on autopilot https://cerncourier.com/a/accelerators-on-autopilot/ Mon, 19 May 2025 07:57:43 +0000 https://cerncourier.com/?p=113076 Verena Kain highlights four ways machine learning is making the LHC more efficient.

The post Accelerators on autopilot appeared first on CERN Courier.

The James Webb Space Telescope and the LHC

Particle accelerators can be surprisingly temperamental machines. Expertise, specialisation and experience are needed to maintain their performance. Nonlinear and resonant effects keep accelerator engineers and physicists up late into the night. With so many variables to juggle and fine-tune, even the most seasoned experts will be stretched by future colliders. Can artificial intelligence (AI) help?

Proposed solutions take inspiration from space telescopes. The two fields have been jockeying to innovate since the Hubble Space Telescope launched with minimal automation in 1990. In the 2000s, multiple space missions tested AI for fault detection and onboard decision-making, before the LHC took a notable step forward for colliders in the 2010s by incorporating machine learning (ML) in trigger decisions. Most recently, the James Webb Space Telescope launched in 2021 using AI-driven autonomous control systems for mirror alignment, thermal balancing and scheduling science operations with minimal intervention from the ground. The new Efficient Particle Accelerators project at CERN, which I have led since its approval in 2023, is now rolling out AI at scale across CERN’s accelerator complex (see “Dynamic and adaptive” image).

AI-driven automation will only become more necessary in the future. As well as being unprecedented in size and complexity, future accelerators will have to navigate new constraints such as fluctuating energy availability from intermittent sources like wind and solar power, requiring highly adaptive and dynamic machine operation. This represents a step change in complexity and scale. A new equipment-integration paradigm would automate accelerator operation, equipment maintenance, fault analysis and recovery. Every item of equipment will need to be fully digitalised and able to auto-configure, auto-stabilise, auto-analyse and auto-recover. And, as in a driverless car, layers of instrumentation and software must be added for safe and efficient performance.


The final consideration is full virtualisation. Space telescopes are famously inaccessible once deployed; a machine like the Future Circular Collider (FCC) would present similar challenges. Given the scale and number of components, on-site human intervention should be treated as a last resort – or perhaps designed out entirely. This requires a new approach: equipment must be engineered for autonomy from the outset – with built-in margins, high reliability, modular designs and redundancy. Emerging technologies like robotic inspection, automated recovery systems and digital twins will play a central role in enabling this. A digital twin – a real-time, data-driven virtual replica of the accelerator – can be used to train and constrain control algorithms, test scenarios safely and support predictive diagnostics. Combined with differentiable simulations and layered instrumentation, these tools will make autonomous operation not just feasible, but optimal.

The field is moving fast. Recent advances allow us to rethink how humans interact with complex machines – not by tweaking hardware parameters, but by expressing intent at a higher level. Generative pre-trained transformers, a class of large language models, open the door to prompting machines with concepts rather than step-by-step instructions. While further R&D is needed for robust AI copilots, tailor-made ML models have already become standard tools for parameter optimisation, virtual diagnostics and anomaly detection across CERN’s accelerator landscape.

Progress is diverse. AI can reconstruct LHC bunch profiles using signals from wall current monitors, analyse camera images to spot anomalies in the “dump kickers” that safely remove beams, or even identify malfunctioning beam-position monitors. In the following, I identify four different types of AI that have been successfully deployed across CERN’s accelerator complex. They are merely the harbingers of a whole new way of operating CERN’s accelerators.

1. Beam steering with reinforcement learning

In 2020, LINAC4 became the new first link in the LHC’s modernised proton accelerator chain – and quickly became an early success story for AI-assisted control in particle accelerators.

Small deviations in a particle beam’s path within the vacuum chamber can have a significant impact, including beam loss, equipment damage or degraded beam quality. Beams must stay precisely centred in the beampipe to maintain stability and efficiency. But their trajectory is sensitive to small variations in magnet strength, temperature, radiofrequency phase and even ground vibrations. Worse still, errors typically accumulate along the accelerator, compounding the problem. Beam-position monitors (BPMs) provide measurements at discrete points – often noisy – while steering corrections are applied via small dipole corrector magnets, typically using model-based correction algorithms.

Beam steering

In 2019, the reinforcement learning (RL) algorithm normalised advantage function (NAF) was trained online to steer the H⁻ beam in the horizontal plane of LINAC4 during commissioning. In RL, an agent learns by interacting with its environment and receiving rewards that guide it toward better decisions. NAF uses a neural network to model the so-called Q-function that estimates these rewards, and uses it to continuously refine its control policy.
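NAF’s trick is to constrain the advantage part of the Q-function to a quadratic form, so the best action can be read off in closed form. The numpy sketch below illustrates only this quadratic structure; the numbers are invented, and in the real agent μ, P and V are outputs of the trained neural network:

```python
import numpy as np

def naf_q(a, mu, P, V):
    """Quadratic NAF form: Q(s, a) = V(s) - 0.5 (a - mu)^T P (a - mu).

    P is positive definite, so Q is maximised exactly at a = mu(s):
    the greedy corrector settings are available in closed form, which
    is what makes NAF practical for continuous control problems.
    """
    d = a - mu
    return V - 0.5 * d @ P @ d

# Toy state for a three-corrector steering problem (invented numbers).
mu = np.array([0.2, -0.1, 0.05])                    # predicted optimal kicks
L = np.array([[1.0, 0, 0], [0.3, 0.8, 0], [0.1, 0.2, 0.9]])
P = L @ L.T                                         # positive definite
V = 1.0                                             # state value

assert naf_q(mu, mu, P, V) == V                     # maximum at a = mu
assert naf_q(mu + 0.1, mu, P, V) < V                # any other action is worse
```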

Initially, the algorithm required many attempts to find an effective strategy, and in early iterations it occasionally worsened the beam trajectory, but as training progressed, performance improved rapidly. Eventually, the agent achieved a final trajectory with an RMS deviation better than the 1 mm goal (see “Beam steering” figure).

This experiment demonstrated that RL can learn effective control policies for accelerator-physics problems within a reasonable amount of time. The agent was fully trained after about 300 iterations, or 30 minutes of beam time, making online training feasible. Since 2019, the use of AI techniques has expanded significantly across accelerator labs worldwide, targeting more and more problems that don’t have any classical solution. At CERN, tools such as GeOFF (Generic Optimisation Framework and Frontend) have been developed to standardise and scale these approaches throughout the accelerator complex.

2. Efficient injection with Bayesian optimisation

Bayesian optimisation (BO) is a global optimisation technique that uses a probabilistic model to find the optimal parameters of a system by balancing exploration and exploitation, making it ideal for expensive or noisy evaluations. A game-changing example of its use is the record-breaking LHC ion run in 2024. BO was extensively used all along the ion chain, and made a significant difference in LEIR (the low-energy ion ring, the first synchrotron in the chain) and in the Super Proton Synchrotron (SPS, the last accelerator before the LHC). In LEIR, most processes are no longer manually optimised, but the multi-turn injection process is still non-trivial and depends on various longitudinal and transverse parameters from its injector LINAC3.

Quick recovery

In heavy-ion accelerators, particles are injected in a partially stripped charge state and must be converted to higher charge states at different stages for efficient acceleration. In the LHC ion injector chain, the stripping foil between LINAC3 and LEIR raises the charge of the lead ions from Pb27+ to Pb54+. A second stripping foil, between the PS and SPS, fully ionises the beam to Pb82+ ions for final acceleration toward the LHC. These foils degrade over time due to thermal stress, radiation damage and sputtering, and must be remotely exchanged using a rotating wheel mechanism. Because each new foil has slightly different stripping efficiency and scattering properties, beam transmission must be re-optimised – a task that traditionally required expert manual tuning.

In 2024 it was successfully demonstrated that BO with embedded physics constraints can efficiently optimise the 21 most important parameters between LEIR and the LINAC3 injector. Following a stripping foil exchange, the algorithm restored the accumulated beam intensity in LEIR to better than nominal levels within just a few dozen iterations (see “Quick recovery” figure).
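The core BO loop is compact enough to sketch: fit a probabilistic surrogate to the points measured so far, then sample where the expected improvement is highest. The numpy-only illustration below uses a Gaussian-process surrogate; the one-parameter “transmission” curve is invented, standing in for the 21-parameter LEIR problem, and the kernel and its length scale are arbitrary choices:

```python
import numpy as np
from math import erf, sqrt, pi

def kernel(a, b, ls=0.3):
    """Squared-exponential covariance between two 1D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """Gaussian-process posterior mean and variance at query points Xs."""
    Kinv = np.linalg.inv(kernel(X, X) + noise * np.eye(len(X)))
    Ks = kernel(X, Xs)
    mu = Ks.T @ Kinv @ y
    var = 1.0 - np.einsum("ij,ji->i", Ks.T @ Kinv, Ks)
    return mu, np.clip(var, 1e-12, None)

def expected_improvement(mu, var, best):
    """EI acquisition: balances exploitation (mu) and exploration (var)."""
    s = np.sqrt(var)
    z = (mu - best) / s
    Phi = 0.5 * (1.0 + np.array([erf(v / sqrt(2)) for v in z]))
    phi = np.exp(-0.5 * z**2) / sqrt(2 * pi)
    return (mu - best) * Phi + s * phi

def transmission(x):
    """Hidden toy objective, standing in for the accumulated LEIR intensity."""
    return np.exp(-8.0 * (x - 0.63) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 3)                  # three initial random settings
y = transmission(X)
grid = np.linspace(0, 1, 200)
for _ in range(15):                       # a few dozen iterations suffice
    mu, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, var, y.max()))]
    X, y = np.append(X, x_next), np.append(y, transmission(x_next))

print(f"best setting found: {X[np.argmax(y)]:.2f}")   # close to the true 0.63
```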

This example shows how AI can now match or outperform expert human tuning, significantly reducing recovery time, freeing up operator bandwidth and improving overall machine availability.

3. Adaptively correcting the 50 Hz ripple

In high-precision accelerator systems, even tiny perturbations can have significant effects. One such disturbance is the 50 Hz ripple in power supplies – small periodic fluctuations in current that originate from the electrical grid. While these ripples were historically only a concern for slow-extracted proton beams sent to fixed-target experiments, 2024 revealed a broader impact.

SPS intensity

In the SPS, adaptive Bayesian optimisation (ABO) was deployed to control this ripple in real time. ABO extends BO by learning the objective not only as a function of the control parameters, but also as a function of time, which then allows continuous control through forecasting.

The algorithm generated shot-by-shot feed-forward corrections to inject precise counter-noise into the voltage regulation of one of the quadrupole magnet circuits. This approach was already in use for the North Area proton beams, but in summer 2024 it was discovered that even for high-intensity proton beams bound for the LHC, the same ripple could contribute to beam losses at low energy.
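The feed-forward principle behind such a correction can be sketched in a few lines: fit the amplitude and phase of the 50 Hz component by linear least squares, then inject its negative as counter-noise. This is an illustrative toy with an invented ripple amplitude, not the deployed SPS control code:

```python
import numpy as np

def fit_ripple(t, signal, f=50.0):
    """Least-squares fit of a sinusoid at frequency f: a*sin + b*cos."""
    A = np.column_stack([np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)])
    coeff, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return A @ coeff                        # reconstructed 50 Hz component

rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.2, 2000)             # ten mains periods
ripple = 3e-3 * np.sin(2 * np.pi * 50 * t + 0.7)   # invented grid ripple (a.u.)
measured = ripple + 2e-4 * rng.standard_normal(t.size)

counter = -fit_ripple(t, measured)          # feed-forward counter-noise
residual = ripple + counter                 # what the beam would still see
assert np.abs(residual).max() < 0.1 * np.abs(ripple).max()
```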

Thanks to existing ML frameworks, prior experience with ripple compensation and available hardware for active noise injection, the fix could be implemented quickly. While the gains for protons were modest – around 1% improvement in losses – the impact for LHC ion beams was far more dramatic. Correcting the 50 Hz ripple increased ion transmission by more than 15%. ABO is therefore now active whenever ions are accelerated, improving transmission and supporting the record beam intensity achieved in 2024 (see “SPS intensity” figure).

4. Predicting hysteresis with transformers

Another outstanding issue in today’s multi-cycling synchrotrons with iron-dominated electromagnets is correcting for magnetic hysteresis – a phenomenon where the magnetic field depends not only on the current but also on its cycling history. Cumbersome mitigation strategies include playing dummy cycles and manually re-tuning parameters after each change in magnetic history.

SPS hysteresis

While phenomenological hysteresis models exist, their accuracy is typically insufficient for precise beam control. ML offers a path forward, especially when supported by high-quality field measurement data. Recent work using temporal fusion transformers – a deep-learning architecture designed for multivariate time-series prediction – has demonstrated that ML-based models can accurately predict field deviations from the programmed transfer function across different SPS magnetic cycles (see “SPS hysteresis” figure). This hysteresis model is now used in the SPS control room to provide feed-forward corrections – pre-emptive adjustments to magnet currents based on the predicted magnetic state – ensuring field stability without waiting for feedback from beam measurements and manual adjustments.
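The history dependence that such a model must capture can be illustrated with the classical “play” operator, a building block of Preisach-type hysteresis models. This is a toy stand-in, not the SPS model itself:

```python
def play_operator(currents, width=0.5, state=0.0):
    """Toy 'play' hysteresis operator: the output follows the input but
    lags inside a dead band of the given width, so the final value
    depends on the excitation history, not just the final input."""
    out = []
    for i in currents:
        state = min(max(state, i - width), i + width)
        out.append(state)
    return out

# Two cycles ending at the *same* current of 1, via different histories.
up_down = play_operator([0, 1, 2, 3, 2, 1])[-1]   # ramp to 3, back down to 1
up_only = play_operator([0, 1])[-1]               # ramp straight up to 1
assert up_down != up_only    # same current, different "field": hysteresis
```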

A blueprint for the future

With the Efficient Particle Accelerators project, CERN is developing a blueprint for the next generation of autonomous equipment. This includes concepts for continuous self-analysis, anomaly detection and new layers of “Internet of Things” instrumentation that support auto-configuration and predictive maintenance. The focus is on making it easier to integrate smart software layers. Full results are expected by the end of LHC Run 3, with robust frameworks ready for deployment in Run 4.


The goal is ambitious: to reduce maintenance effort by at least 50% wherever these frameworks are applied. This is based on a realistic assumption – already today, about half of all interventions across the CERN accelerator complex are performed remotely, a number that continues to grow. With current technologies, many of these could be fully automated.

Together, these developments will not only improve the operability and resilience of today’s accelerators, but also lay the foundation for CERN’s future machines, where human intervention during operation may become the exception rather than the rule. AI is set to transform how we design, build and operate accelerators – and how we do science itself. It opens the door to new models of R&D, innovation and deep collaboration with industry. 

Powering into the future https://cerncourier.com/a/powering-into-the-future/ Mon, 19 May 2025 07:55:18 +0000 https://cerncourier.com/?p=113089 Nuria Catalan Lasheras and Igor Syratchev explain why klystrons are strategically important to the future of the field – and how CERN plans to boost their efficiency above 90%.

The post Powering into the future appeared first on CERN Courier.

The Higgs boson is the most intriguing and unusual object yet discovered by fundamental science. There is no higher experimental priority for particle physics than building an electron–positron collider to produce it copiously and study it precisely. Given the importance of energy efficiency and cost effectiveness in the current geopolitical context, this gives unique strategic importance to developing a humble technology called the klystron – a technology that will consume the majority of site power at every major electron–positron collider under consideration, but which has historically only achieved 60% energy efficiency.

The klystron was invented in 1937 by two American brothers, Russell and Sigurd Varian. The Varians wanted to improve aircraft radar systems. At the time, there was a growing need for better high-frequency amplification to detect objects at a distance using radar, a critical technology in the lead-up to World War II.

The Varians’ RF source operated around 3.2 GHz, or a wavelength of about 9.4 cm, in the microwave region of the electromagnetic spectrum. At the time, this was an extraordinarily high frequency – conventional vacuum tubes struggled beyond 300 MHz. Microwave wavelengths promised better resolution, less noise, and the ability to penetrate rain and fog. Crucially, antennas could be small enough to fit on ships and planes. But the source was far too weak for radar.


The Varians’ genius was to invent a way to amplify the electromagnetic signal by up to 30 dB, or a factor of 1000. The US and British military used the klystron for airborne radar, submarine detection of U-boats in the Atlantic and naval gun targeting beyond visual range. Radar helped win the Battle of Britain, the Battle of the Atlantic and Pacific naval battles, making surprise attacks harder by giving advance warning. Winston Churchill called radar “the secret weapon of WWII”, and the klystron was one of its enabling technologies.

With its high gain and narrow bandwidth, the klystron was the first practical microwave amplifier and became foundational in radio-frequency (RF) technology. This was the first time anyone had efficiently amplified microwaves with stability and directionality. Klystrons have since been used in satellite communication, broadcasting and particle accelerators, where they power the resonant RF cavities that accelerate the beams. Klystrons are therefore ubiquitous in medical, industrial and research accelerators – and not least in the next generation of Higgs factories, which are central to the future of high-energy physics.

Klystrons and the Higgs

Hadron colliders like the LHC tend to be circular. Their fundamental energy limit is given by the maximum strength of the bending magnets and the circumference of the tunnel. A handful of RF cavities repeatedly accelerate beams of protons or ions after hundreds or thousands of bending magnets force the beams to loop back through them.

Operating principle

Thanks to their clean and precisely controllable collisions, all Higgs factories under consideration are electron–positron colliders. Electron–positron colliders can be either circular or linear in construction. The dynamics of circular electron–positron colliders are radically different as the particles are 2000 times lighter than protons. The strength required from the bending magnets is relatively low for any practical circumference; however, the energy of the particles must be continually replenished, as they radiate away energy in the bends through synchrotron radiation, requiring hundreds of RF cavities. RF cavities are equally important in the linear case. Here, all the energy must be imparted in a single pass, with each cavity accelerating the beam only once, requiring hundreds or even thousands of RF cavities.

Either way, 50 to 60% of the total energy consumed by an electron–positron collider is used for RF acceleration, compared to a relatively small fraction in a hadron collider. Efficiently powering the RF cavities is of paramount importance to the energy efficiency and cost effectiveness of the facility as a whole. RF acceleration is therefore of far greater significance at electron–positron colliders than at hadron colliders.

From a pen to a mid-size car

RF cavities cannot simply be plugged into the wall. These finely tuned resonant structures must be excited by RF power – an alternating microwave electromagnetic field that is supplied through waveguides at the appropriate frequency. Due to the geometry of resonant cavities, this excites an on-axis oscillating electrical field. Particles that arrive when the electrical field has the right direction are accelerated. For this reason, particles in an accelerator travel in bunches separated by a long distance, during which the RF field is not optimised for acceleration.

CLIC klystron

Despite the development of modern solid-state amplifiers, the Varians’ klystron is still the most practical technology to generate RF when the power required is at the MW level. Klystrons can be as small as a pen or as large and heavy as a mid-size car, depending on the frequency and power required. Linear colliders use higher frequencies, which support higher gradients and make the linac shorter, whereas a circular collider does not need high gradients, as the energy to be restored each turn is smaller.

Klystrons fall under the general classification of vacuum tubes – fully enclosed miniature electron accelerators with their own source, accelerating path and “interaction region” where the RF field is produced. Their name is derived from the Greek verb describing the action of waves crashing against the seashore. In a klystron, RF power is generated when electrons crash against a decelerating electric field.

Every klystron contains at least two cavities: an input and an output. The input cavity is powered by a weak RF source that must be amplified. The output cavity generates the strongly amplified RF signal generated by the klystron. All this comes encapsulated in an ultra-high vacuum volume inside the field of a solenoid for focusing (see “Operating principle” figure).


Inside the klystron, electrons leave a heated cathode and are accelerated by a high voltage applied between the cathode and the anode. As they are being pushed forward, a small input RF signal is applied to the input cavity, either accelerating or decelerating the electrons according to their time of arrival. After a long drift, late-emitted accelerated electrons catch up with early-emitted decelerated electrons, intersecting with those that did not see any net accelerating force. This is called velocity bunching.

A second, passive accelerating cavity is placed at the location where maximum bunching occurs. Though of a comparable design, this cavity behaves in an inverse fashion to those used in particle accelerators. Rather than converting the energy of an electromagnetic field into the kinetic energy of particles, the kinetic energy of particles is converted into RF electromagnetic waves. This process can be enhanced by the presence of other passive cavities in between the already mentioned two, as well as by several iterations of bunching and de-bunching before reaching the output cavity. Once decelerated, the spent beam finishes its life in a dump or a water-cooled collector.
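Velocity bunching is simple enough to reproduce numerically: give a uniform electron stream a small sinusoidal velocity modulation and let it drift ballistically, and the density modulation grows, peaks and then washes out again. The toy model below works in normalised units and ignores space charge, which is precisely what makes real klystrons hard:

```python
import numpy as np

def bunching_factor(phase, v, drift):
    """|<exp(i*phase)>| after a ballistic drift: 0 = uniform, 1 = point bunch."""
    return abs(np.mean(np.exp(1j * (phase + v * drift))))

N = 10_000
phase0 = np.linspace(0, 2 * np.pi, N, endpoint=False)   # uniform beam
v = 1.0 + 0.05 * np.sin(phase0)      # small input-cavity velocity modulation

drifts = np.linspace(0, 60, 200)
b = [bunching_factor(phase0, v, d) for d in drifts]
d_peak = drifts[int(np.argmax(b))]   # where the output cavity belongs
assert b[0] < 1e-3 and max(b) > 0.5  # bunching builds up from nothing...
assert b[-1] < max(b)                # ...then de-bunches again further on
```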

Optimising efficiency

Klystrons are ultimately RF amplifiers with a very high gain, of the order of 30 to 60 dB, and a very narrow bandwidth. They can be built at any frequency from a few hundred MHz to tens of GHz, but each operates within a very small range of frequencies called the bandwidth. Once broadcasting moved to wider-bandwidth vacuum tubes, particle accelerators became a small market for high-power klystrons. Most klystrons for science are manufactured by a handful of companies, which offer a limited number of models that have been in operation for decades. Their frequency, power and duty cycle may not correspond to the specifications of a new accelerator being considered – and in most cases, little or no thought has been given to energy efficiency or carbon footprint.

Battling space charge

When searching for suitable solutions for the next particle-physics collider, however, optimising the energy efficiency of klystrons and other devices that will determine the final energy bill and CO2 emissions is a task of the utmost importance. Therefore, nearly a decade ago, RF experts at CERN and the University of Lancaster began the High-Efficiency Klystron (HEK) project to maximise beam-to-RF efficiency: the fraction of the power contained in the klystron’s electron beam that is converted into RF power by the output cavity.

The complexity of klystrons resides in the very nonlinear fields to which the electrons are subjected. In the cathode and the first stages of electrostatic acceleration, the collective effect of “space-charge” forces between the electrons determines the strongly nonlinear dynamics of the beam. The same is true when the bunching tightens along the tube, with mutual repulsion between the electrons preventing optimal bunching at the output cavity.

For this reason, designing klystrons is not susceptible to simple analytical calculations. Since 2017, CERN has developed a code called KlyC that simulates the beam along the klystron channel and optimises parameters such as frequency and distance between cavities 100 to 1000 times faster than commercial 3D codes. KlyC is available in the public domain and is being used by an ever-growing list of labs and industrial partners.

Perveance

The main characteristic of a klystron is an obscure magnitude inherited from electron-gun design called perveance. For small perveances, space-charge forces are small, due to either high energy or low intensity, making bunching easy. For large perveances, space-charge forces oppose bunching, lowering beam-to-RF efficiency. High-power klystrons require large currents and therefore high perveances. One way to produce highly efficient, high-power klystrons is therefore for multiple cathodes to generate multiple low-perveance electron beams in a “multi-beam” (MB) klystron.
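Perveance is conventionally defined as K = I/V^(3/2) and quoted in microperveance units. A short sketch of the multi-beam rationale, with invented but representative numbers:

```python
def microperveance(current_a, voltage_v):
    """Beam perveance K = I / V**1.5, in units of 1e-6 A/V^1.5."""
    return current_a / voltage_v**1.5 * 1e6

# Invented but representative numbers for a MW-class tube: 20 A at 115 kV.
k_single = microperveance(20.0, 115e3)          # ~0.5 microperveance

# Splitting the same current over 8 beamlets divides the per-beam perveance,
# weakening space charge and easing bunching (the multi-beam rationale).
k_per_beam = microperveance(20.0 / 8, 115e3)
assert abs(k_per_beam - k_single / 8) < 1e-9
```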

High-luminosity gains

Overall, there is an almost linear dependence between perveance and efficiency. Thanks to the efforts made in recent years, high-efficiency klystrons are now outperforming industrial klystrons by 10% in efficiency for all values of perveance, and approaching the ultimate theoretical limit (see “Battling space charge” figure).

One of the first designs to be brought to life was based on the E37113, a pulsed klystron with 6 MW peak power working in the X-band at 12 GHz, commercialised by CANON ETD. This klystron is currently used in the test facility at CERN for validating CLIC RF prototypes, which could greatly benefit from a larger power. As part of a collaboration with CERN, CANON ETD built a new tube, according to the design optimised at CERN, to reach a beam-to-RF efficiency of 57% instead of the original 42% (see “CLIC klystron” image and CERN Courier September/October 2022 p9).

As its interfaces with the high-voltage (HV) source and solenoid were kept identical, one can now benefit from 8 MW of RF power for the same energy consumption as before. As changes in the manufacturing of the tube channel are just a small fraction of the manufacture of the instrument, its price should not increase considerably, even if more accurate production methods are required.

In pursuit of power

Towards an FCC klystron

Another successful example of re-designing a tube for high efficiency is the TH2167 – the klystron behind the LHC, which is manufactured by Thales. Originally exhibiting a beam-to-RF efficiency of 60%, it was re-designed by the CERN team to gain 10% and reach 70% efficiency, while again using the same HV source and solenoid. The tube prototype has been built and is currently at CERN, where it has demonstrated the capacity to generate 350 kW of RF power with the same input energy as previously required to produce 300 kW. This power will be decisive when dealing with the higher intensity beam expected after the LHC luminosity upgrade. And all this again for a price comparable to previous models (see “High-luminosity gains” image).

The quest for the highest efficiency is not over yet. The CERN team is currently working on a design that could power the proposed Future Circular Collider (FCC). With about a hundred accelerating cavities, the electron and positron beams will need to be replenished with 100 MW of RF power, making energy efficiency imperative.


Although the same tube in use for the LHC, now boosted to 70% efficiency, could be used to power the FCC, CERN is working towards a vacuum tube that could reach an efficiency over 80%. A two-stage multi-beam klystron was initially designed that was capable of reaching 86% efficiency and generating 1 MW of continuous-wave power (see “Towards an FCC klystron” figure).

Motivated by recent changes in FCC parameters, we have rediscovered an old device called a tristron, which is not a conventional klystron but a “gridded tube” where the electron-beam bunching mechanism is different. Tristrons have a lower power gain but much greater flexibility. Simulations have confirmed that they can reach efficiencies as high as 90%. This could be a disruptive technology with applications well beyond accelerators. Manufacturing a prototype is an excellent opportunity for knowledge transfer from fundamental research to industrial applications.

Charting DESY’s future https://cerncourier.com/a/charting-desys-future/ Mon, 19 May 2025 07:34:51 +0000 https://cerncourier.com/?p=113176 DESY’s new chair, Beate Heinemann, reflects on the laboratory’s evolving role in science and society – from building next-generation accelerators to navigating Europe’s geopolitical landscape.

The post Charting DESY’s future appeared first on CERN Courier.

How would you describe DESY’s scientific culture?

DESY is a large laboratory with just over 3000 employees. It was founded 65 years ago as an accelerator lab, and at its heart it remains one, though what we do with the accelerators has evolved over time. It is fully funded by Germany.

In particle physics, DESY has performed many important studies, for example to understand the charm quark following the November Revolution of 1974. The gluon was discovered here in the late 1970s. In the 1980s, DESY ran the first experiments to study B mesons, laying the groundwork for core programmes such as LHCb at CERN and the Belle II experiment in Japan. In the 1990s, the HERA accelerator focused on probing the structure of the proton, which, incidentally, was the subject of my PhD, and those results have been crucial for precision studies of the Higgs boson.

Over time, DESY has become much more than an accelerator and particle-physics lab. Even in the early days, it used what is called synchrotron radiation, the light emitted when electrons change direction in the accelerator. This light is incredibly useful for studying matter in detail. Today, our accelerators are used primarily for this purpose: they generate X-rays that image tiny structures, for example viruses.

DESY’s culture is shaped by its very engaged and loyal workforce. People often call themselves “DESYians” and strongly identify with the laboratory. At its heart, DESY is really an engineering lab. You need an amazing engineering workforce to be able to construct and operate these accelerators.

Which of DESY’s scientific achievements are you most proud of?

The discovery of the gluon is, of course, an incredible achievement, but actually I would say that DESY’s greatest accomplishment has been building so many cutting-edge accelerators: delivering them on time, within budget, and getting them to work as intended.

Take the PETRA accelerator, for example – an entirely new concept when it was first proposed in the 1970s. The decision to build it was made in 1975; construction was completed by 1978; and by 1979 the gluon was discovered. So in just four years, we went from approving a 2.3 km accelerator to making a fundamental discovery, something that is absolutely crucial to our understanding of the universe. That’s something I’m extremely proud of.

I’m also very proud of the European X-ray Free-Electron Laser (XFEL), completed in 2017 and now fully operational. Before that, in 2005 we launched the world’s first free-electron laser, FLASH, and of course in the 1990s HERA, another pioneering machine. Again and again, DESY has succeeded in building large, novel and highly valuable accelerators that have pushed the boundaries of science.

What can we look forward to during your time as chair?

We are currently working on 10 major projects in the next three years alone! PETRA III will be running until the end of 2029, but our goal is to move forward with PETRA IV, the world’s most advanced X-ray source. Securing funding for that first, and then building it, is one of my main objectives. In Germany, there’s a roadmap process, and by July this year we’ll know whether an independent committee has judged PETRA IV to be one of the highest-priority science projects in the country. If all goes well, we aim to begin operating PETRA IV in 2032.

Our FLASH soft X-ray facility is also being upgraded to improve beam quality, and we plan to relaunch it in early September. That will allow us to serve more users and deliver better beam quality, increasing its impact.

In parallel, we’re contributing significantly to the HL-LHC upgrade. More than 100 people at DESY are working on building trackers for the ATLAS and CMS detectors, and parts of the forward calorimeter of CMS. That work needs to be completed by 2028.

Hunting axions

Astroparticle physics is another growing area for us. Over the next three years we’re completing telescopes for the Cherenkov Telescope Array and building detectors for the IceCube upgrade. For the first time, DESY is also constructing a space camera for the satellite UltraSat, which is expected to launch within the next three years.

At the Hamburg site, DESY is diving further into axion research. We’re currently running the ALPS II experiment, which has a fascinating “light shining through a wall” setup. Normally, of course, light can’t pass through something like a thick concrete wall. But in ALPS II, light inside a magnet can convert into an axion, a hypothetical dark-matter particle that can travel through matter almost unhindered. On the other side, another magnet converts the axion back into light. So, it appears as if the light has passed through the wall, when in fact it was briefly an axion. We started the experiment last year. As with most experiments, we began carefully, because not everything works at once, but two more major upgrades are planned in the next two years, and that’s when we expect ALPS II to reach its full scientific potential.

We’re also developing additional axion experiments. One of them, in collaboration with CERN, is called BabyIAXO. It’s designed to look for axions from the Sun, where you have both light and magnetic fields. We hope to start construction before the end of the decade.

Finally, DESY also has a strong and diverse theory group. Their work spans many areas, and it’s exciting to see what ideas will emerge from them over the coming years.

How does DESY collaborate with industry to deliver benefits to society?

We already collaborate quite a lot with industry. The beamlines at PETRA, in particular, are of strong interest. For example, BioNTech conducted some of its research for the COVID-19 vaccine here. We also have a close relationship with the Fraunhofer Society in Germany, which focuses on translating basic research into industrial applications. They famously developed the MP3 format, for instance. Our collaboration with them is quite structured, and there have also been several spinoffs and start-ups based on technology developed at DESY. Looking ahead, we want to significantly strengthen our ties with industry through PETRA IV. With much higher data rates and improved beam quality, it will be far easier to obtain results quickly. Our goal is for 10% of PETRA IV’s capacity to be dedicated to industrial use. Furthermore, we are developing a strong ecosystem for innovation on the campus and the surrounding area, with DESY in the centre, called the Science City Hamburg Bahrenfeld.

What’s your position on “dual use” research, which could have military applications?

The discussion around dual-use research is complicated. Personally, I find the term “dual use” a bit odd – almost any high-tech equipment can be used for both civilian and military purposes. Take a transistor for example, which has countless applications, including military ones, but it wasn’t invented for that reason. At DESY, we’re currently having an internal discussion about whether to engage in projects that relate to defence. This is part of an ongoing process where we’re trying to define under what conditions, if any, DESY would take on targeted projects related to defence. There are a range of views within DESY, and I think that diversity of opinion is valuable. Some people are firmly against this idea, and I respect that. Honestly, it’s probably how I would have felt 10 or 20 years ago. But others believe DESY should play a role. Personally, I’m open to it.

If our expertise can help people defend themselves and our freedom in Europe, that’s something worth considering. Of course, I would love to live in a world without weapons, where no one attacks anyone. But if I were attacked, I’d want to be able to defend myself. I prefer to work on shields, not swords, like in Asterix and Obelix, but, of course, it’s never that simple. That’s why we’re taking time with this. It’s a complex and multifaceted issue, and we’re engaging with experts from peace and security research, as well as the social sciences, to help us understand all dimensions. I’ve already learned far more about this than I ever expected to. We hope to come to a decision on this later this year.

You are DESY’s first female chair. What barriers do you think still exist for women in physics, and how can institutions like DESY address them?

There are two main barriers, I think. The first is that, in my opinion, society at large still discourages girls from going into maths and science.

Certainly in Germany, if you stopped a hundred people on the street, I think most of them would still say that girls aren’t naturally good at maths and science. Of course, there are always exceptions: you do find great teachers and supportive parents who go against this narrative. I wouldn’t be here today if I hadn’t received that kind of encouragement.

That’s why it’s so important to actively counter those messages. Girls need encouragement from an early age, they need to be strengthened and supported. On the encouragement side, DESY is quite active. We run many outreach activities for schoolchildren, including a dedicated school lab. Every year, more than 13,000 school pupils visit our campus. We also take part in Germany’s “Zukunftstag”, where girls are encouraged to explore careers traditionally considered male-dominated, and boys do the same for fields seen as female-dominated.

The second challenge comes later, at a different career stage, and it has to do with family responsibilities. Often, family work still falls more heavily on women than men in many partnerships. That imbalance can hold women back, particularly during the postdoc years, which tend to coincide with the time when many people are starting families. It’s a tough period, because you’re trying to advance your career.

Workplaces like DESY can play a role in making this easier. We offer good childcare options, flexibility with home–office arrangements, and even shared leadership positions, which help make it more manageable to balance work and family life. We also have mentoring programmes. One example is dynaMENT, where female PhD students and postdocs are mentored by more senior professionals. I’ve taken part in that myself, and I think it’s incredibly valuable.

Do you have any advice for early-career women physicists?

If I could offer one more piece of advice, it’s about building a strong professional network. That’s something I’ve found truly valuable. I’m fortunate to have a fantastic international network, both male and female colleagues, including many women in leadership positions. It’s so important to have people you can talk to, who understand your challenges, and who might be in similar situations. So if you’re a student, I’d really recommend investing in your network. That’s very important, I think.

What are your personal reflections on the next-generation colliders?

Our generation has a responsibility to understand the electroweak scale and the Higgs boson. These questions have been around for almost 90 years, since 1935 when Hideki Yukawa explored the idea that forces might be mediated by the exchange of massive particles. While we’ve made progress, a true understanding is still out of reach. That’s what the next generation of machines is aiming to tackle.

The problem, of course, is cost. All the proposed solutions are expensive, and it is very challenging to secure investments for such large-scale projects, even though the return on investment from big science is typically excellent: these projects drive innovation, build high-tech capability and create a highly skilled workforce.

From a scientific point of view, the FCC is the most comprehensive option. As a Higgs factory, it offers a broad and strong programme to analyse the Higgs and electroweak gauge bosons. But who knows if we’ll be able to afford it? And it’s not just about money. The timeline and the risks also matter. The FCC feasibility report was just published and is still under review by an expert committee. I’d rather not comment further until I’ve seen the full information. I’m part of the European Strategy Group and we’ll publish a new report by the end of the year. Until then, I want to understand all the details before forming an opinion.

It’s good to have other options too. The muon collider is not yet as technically ready as the FCC or linear collider, but it’s an exciting technology and could be the machine after next. Another could be using plasma-wakefield acceleration, which we’re very actively working on at DESY. It could enable us to build high-energy colliders on a much smaller scale. This is something we’ll need, as we can’t keep building ever-larger machines forever. Investing in accelerator R&D to develop these next-gen technologies is crucial.

Still, I really hope there will be an intermediate machine in the near future, a Higgs factory that lets us properly explore the Higgs boson. There are still many mysteries there. I like to compare it to an egg: you have to crack it open to see what’s inside. And that’s what we need to do with the Higgs.

One thing that is becoming clearer to me is the growing importance of Europe. With the current uncertainties in the US, which are already affecting health and climate research, we can’t assume fundamental research will remain unaffected. That’s why Europe’s role is more vital than ever.

I think we need to build more collaborations between European labs. Sharing expertise, especially through staff exchanges, could be particularly valuable in engineering, where we need a huge number of highly skilled professionals to deliver billion-euro projects. We’ve got one coming up ourselves, and the technical expertise for that will be critical.

I believe science has a key role to play in strengthening Europe, not just culturally, but economically too. It’s an area where we can and should come together.

The post Charting DESY’s future appeared first on CERN Courier.

]]>
Opinion DESY’s new chair, Beate Heinemann, reflects on the laboratory’s evolving role in science and society – from building next-generation accelerators to navigating Europe’s geopolitical landscape. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_INT_Heinemann.jpg
Clean di-pions reveal vector mesons https://cerncourier.com/a/clean-di-pions-reveal-vector-mesons/ Mon, 19 May 2025 07:32:21 +0000 https://cerncourier.com/?p=113155 LHCb has isolated a precisely measured, high-statistics sample of di-pions.

The post Clean di-pions reveal vector mesons appeared first on CERN Courier.

]]>
LHCb figure 1

Heavy-ion collisions usually have very high multiplicities due to colour flow and multiple nucleon interactions. However, when the ions are separated by greater than about twice their radii in so-called ultra-peripheral collisions (UPC), electromagnetically induced interactions dominate. In these colour-neutral interactions, the ions remain intact and a central system with few particles is produced, whose summed transverse momentum, being the Fourier transform of the distance between the ions, is typically less than 100 MeV/c.
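The sub-100 MeV/c scale quoted above can be checked with a back-of-the-envelope uncertainty-principle estimate, pT ~ ħc/b, where b is the impact parameter. A minimal sketch, assuming the empirical nuclear-radius formula R = r0·A^(1/3) with illustrative constants (not values from the article):

```python
# Rough transverse-momentum scale in ultra-peripheral collisions:
# the coherently produced system has pT ~ hbar*c / b, where b is the
# impact parameter, at least twice the nuclear radius for a UPC event.

HBARC = 197.327  # hbar*c in MeV fm

def nuclear_radius(A, r0=1.2):
    """Empirical nuclear radius R = r0 * A^(1/3), in fm."""
    return r0 * A ** (1 / 3)

R_pb = nuclear_radius(208)   # ~7.1 fm for lead
b_min = 2 * R_pb             # grazing ultra-peripheral geometry
pt_scale = HBARC / b_min     # MeV/c

print(f"R_Pb ~ {R_pb:.1f} fm, pT scale ~ {pt_scale:.0f} MeV/c")
```

For lead (A = 208) this gives a scale of roughly 14 MeV/c, comfortably below the 100 MeV/c bound quoted for UPC events.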

In the photoproduction of vector mesons, a photon, radiated from one of the ions, fluctuates into a virtual vector meson long before it reaches the target and then interacts with one or more nucleons in the other ion. The production of ρ mesons has been measured at the LHC by ALICE in PbPb and XeXe collisions, while J/ψ mesons have been measured in PbPb collisions by ALICE, CMS and LHCb. Now, LHCb has isolated a precisely measured, high-statistics sample of di-pions with backgrounds below 1% in which several vector mesons are seen.

Figure 1 shows the invariant mass distribution of the pions, and the fit to the data requires contributions from the ρ meson, continuum ππ, the ω meson and two higher mass resonances at about 1.35 and 1.80 GeV, consistent with excited ρ mesons. The higher structure was also discernible in previous measurements by STAR and ALICE. Since its discovery in 1961, the ρ meson has proved challenging to describe because of its broad width and because of interference effects. More data in the di-pion channel, particularly when practically background-free down almost to production threshold, are therefore welcome. These data may help with hadronic corrections to the prediction of muon g-2: the dip and bump structure at high masses seen by LHCb is qualitatively similar to that observed by BaBar in e⁺e⁻ → π⁺π⁻ scattering (CERN Courier March/April 2025 p21). From the invariant mass spectrum, LHCb has measured the cross-sections for ρ, ω, ρ′ and ρ′′ as a function of rapidity in photoproduction on lead nuclei.
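Fits of this kind are built from resonance amplitudes summed coherently with a continuum. As an illustration of the central ingredient only (not LHCb's actual fit model), here is a relativistic Breit-Wigner amplitude for the ρ(770), with PDG-like placeholder parameters rather than the fitted values:

```python
# Sketch of a relativistic Breit-Wigner amplitude, the basic building
# block of a di-pion mass fit. The real LHCb fit additionally includes
# a continuum pi+pi- term, omega interference and excited-rho states;
# the mass and width below are illustrative PDG-like values in GeV.

M_RHO, G_RHO = 0.775, 0.149  # rho(770) mass and width, GeV

def breit_wigner(m, m0=M_RHO, gamma=G_RHO):
    """Relativistic Breit-Wigner amplitude at di-pion mass m (GeV)."""
    return m0 * gamma / complex(m0 ** 2 - m ** 2, -m0 * gamma)

peak = abs(breit_wigner(M_RHO)) ** 2  # intensity on resonance: exactly 1
tail = abs(breit_wigner(1.2)) ** 2    # falls off away from the peak
print(f"on-peak intensity {peak:.2f}, at 1.2 GeV {tail:.3f}")
```

In a full fit such amplitudes are added coherently before squaring, which is what produces the interference dips and bumps mentioned above.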

Naively, photoproduction on the nucleus should simply scale with the number of nucleons relative to photoproduction on the proton, and can be calculated in the impulse approximation, which takes into account only the nuclear form factor and neglects all other potential nuclear effects.

However, nuclear shadowing, caused by multiple interactions as the meson passes through the nucleus, leads to a suppression (CERN Courier January/February 2025 p31). In addition, there may be further non-linear QCD effects at play.

Elastic re-scattering is usually described through a Glauber calculation that takes account of multiple elastic scatters. This is extended in the GKZ model using Gribov’s formalism to include inelastic scatters. The inset in figure 1 shows the measured differential cross-section for the ρ meson as a function of rapidity for LHCb data compared to the GKZ prediction, to a prediction for the STARlight generator, and to ALICE data at central rapidities. Additional suppression due to nuclear effects is observed above that predicted by GKZ.

The post Clean di-pions reveal vector mesons appeared first on CERN Courier.

]]>
News LHCb has isolated a precisely measured, high-statistics sample of di-pions. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_EF-LHCb_feature.jpg
European strategy update: the community speaks https://cerncourier.com/a/european-strategy-update-the-community-speaks/ Mon, 19 May 2025 07:18:23 +0000 https://cerncourier.com/?p=113032 A total of 263 submissions range from individual to national perspectives.

The post European strategy update: the community speaks appeared first on CERN Courier.

]]>
Community input themes of the European Strategy process

The deadline for submitting inputs to the 2026 update of the European Strategy for Particle Physics (ESPP) passed on 31 March. A total of 263 submissions, ranging from individual to national perspectives, express the priorities of the high-energy physics community (see “Community inputs” figure). These inputs will be distilled by expert panels in preparation for an Open Symposium that will be held in Venice from 23 to 27 June (CERN Courier March/April 2025 p11).

Launched by the CERN Council in March 2024, the stated aim of the 2026 update to the ESPP is to develop a visionary and concrete plan that greatly advances human knowledge in fundamental physics, in particular through the realisation of the next flagship project at CERN. The community-wide process, which is due to submit recommendations to Council by the end of the year, is also expected to prioritise alternative options to be pursued if the preferred project turns out not to be feasible or competitive.

“We are heartened to see so many rich and varied contributions, in particular the national input and the various proposals for the next large-scale accelerator project at CERN,” says strategy secretary Karl Jakobs of the University of Freiburg, speaking on behalf of the European Strategy Group (ESG). “We thank everyone for their hard work and rigour.”

Two proposals for flagship colliders are at an advanced stage: a Future Circular Collider (FCC) and a Linear Collider Facility (LCF). As recommended in the 2020 strategy update, a feasibility study for the FCC was released on 31 March, describing a 91 km-circumference infrastructure that could host an electron–positron Higgs and electroweak factory followed by an energy-frontier hadron collider at a later stage. Inputs for an electron–positron LCF cover potential starting configurations based on Compact Linear Collider (CLIC) or International Linear Collider (ILC) technologies. It is proposed that the latter LCF could be upgraded using CLIC, Cool Copper Collider, plasma-wakefield or energy-recovery technologies and designs. Other proposals outline a muon collider and a possible plasma-wakefield collider, as well as potential “bridging” projects to a future flagship collider. Among the latter are LEP3 and LHeC, which would site an electron–positron and an electron–proton collider, respectively, in the existing LHC tunnel. For the LHeC, an additional energy-recovery linac would need to be added to CERN’s accelerator complex.

Future choices

In probing beyond the Standard Model and more deeply studying the Higgs boson and its electroweak domain, next-generation colliders will pick up where the High-Luminosity LHC (HL-LHC) leaves off. In a joint submission, the ATLAS and CMS collaborations presented physics projections which suggest that the HL-LHC will be able to: observe the H → µ⁺µ⁻ and H → Zγ decays of the Higgs boson; observe Standard Model di-Higgs production; and measure the Higgs’ trilinear self-coupling with a precision better than 30%. The joint document also highlights the need for further progress in high-precision theoretical calculations aligned with the demands of the HL-LHC and serves as important input to the discussion on the choice of a future collider at CERN.

Neutrinos and cosmic messengers, dark matter and the dark sector, strong interactions and flavour physics also attracted many inputs, allowing priorities in non-collider physics to complement collider programmes. Underpinning the community’s physics aspirations are numerous submissions in the categories of accelerator science and technology, detector instrumentation and computing. Progress in these technologies is vital for the realisation of a post-LHC collider, which was also reflected by the recommendation of the 2020 strategy update to define R&D roadmaps. The scientific and technical inputs will be reviewed by the Physics Preparatory Group (PPG), which will conduct comparative assessments of the scientific potential of various proposed projects against defined physics benchmarks.

Key to the ESPP 2026 update are 57 national and national-laboratory submissions, including some from outside Europe. Most identify the FCC as the preferred project to succeed the LHC. If the FCC is found to be unfeasible, many national communities propose that a linear collider at CERN should be pursued, while taking into account the global context: a 250 GeV linear collider may not be competitive if China decides to proceed with a Circular Electron Positron Collider at a comparable energy on the anticipated timescale, potentially motivating a higher energy electron–positron machine or a proton–proton collider instead.

Complex process

In its review, the ESG will take the physics reach of proposed colliders as well as other factors into account. This complex process will be undertaken by seven working groups, addressing: national inputs; diversity in European particle physics; project comparison; implementation of the strategy and deliverability of large projects; relations with other fields of physics; sustainability and environmental impact; public engagement, education, communication and social and career aspects for the next generation; and knowledge and technology transfer. “The ESG and the PPG have their work cut out and we look forward to further strong participation by the full community, in particular at the Open Symposium,” says Jakobs.

A briefing book prepared by the PPG based on the community input and discussions at the Open Symposium will be submitted to the ESG by the end of September for consideration during a five-day-long drafting session, which is scheduled to take place from 1 to 5 December. The CERN Council will then review the final ESG recommendations ahead of a special session to be held in Budapest in May 2026.

The post European strategy update: the community speaks appeared first on CERN Courier.

]]>
News A total of 263 submissions range from individual to national perspectives. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_NA_ESPP.png
Machine learning in industry https://cerncourier.com/a/machine-learning-in-industry/ Mon, 19 May 2025 07:10:04 +0000 https://cerncourier.com/?p=113165 Antoni Shtipliyski offers advice on how early-career researchers can transition into machine-learning roles in industry.

The post Machine learning in industry appeared first on CERN Courier.

]]>
Antoni Shtipliyski

In the past decade, machine learning has surged into every corner of industry, from travel and transport to healthcare and finance. For early-career researchers, who have spent their PhDs and postdocs coding, a job in machine learning may seem a natural next step.

“Scientists often study nature by attempting to model the world around us into mathematical models and computer code,” says Antoni Shtipliyski, engineering manager at Skyscanner. “But that’s only one part of the story if the aim is to apply these models to large-scale research questions or business problems. A completely orthogonal set of challenges revolves around how people collaborate to build and operate these systems. That’s where the real work begins.”

Used to large-scale experiments and collaborative problem solving, particle physicists are uniquely well-equipped to step into machine-learning roles. Shtipliyski worked on upgrades for the level-1 trigger system of the CMS experiment at CERN, before leaving to lead the machine-learning operations team in one of the biggest travel companies in the world.

Effective mindset

“At CERN, building an experimental detector is just the first step,” says Shtipliyski. “To be useful, it needs to be operated effectively over a long period of time. That’s exactly the mindset needed in industry.”

During his time as a physicist, Shtipliyski gained multiple skills that continue to help him at work today, but there were also a number of other areas he had to develop to succeed in machine learning in industry. One critical gap in a physicist’s portfolio, he notes, is that many people interpret machine-learning careers as purely algorithmic development and model training.

“At Skyscanner, my team doesn’t build models directly,” he says. “We look after the platform used to push and serve machine-learning models to our users. We oversee the techno-social machine that delivers these models to travellers. That’s the part people underestimate, and where a lot of the challenges lie.”

An important factor for physicists transitioning out of academia is to understand the entire lifecycle of a machine-learning project. This includes not only developing an algorithm, but deploying it, monitoring its performance, adapting it to changing conditions and ensuring that it serves business or user needs.

“In practice, you often find new ways that machine-learning models surprise you,” says Shtipliyski. “So having flexibility and confidence that the evolved system still works is key. In physics we’re used to big experiments like CMS being designed 20 years before being built. By the time it’s operational, it’s adapted so much from the original spec. It’s no different with machine-learning systems.”

This ability to live with ambiguity and work through evolving systems is one of the strongest foundations physicists can bring. But large complex systems cannot be built alone, so companies will be looking for examples of soft skills: teamwork, collaboration, communication and leadership.

“Most people don’t emphasise these skills, but I found them to be among the most useful,” Shtipliyski says. “Learning to write and communicate yourself is incredibly powerful. Being able to clearly express what you’re doing and why you’re doing it, especially in high-trust environments, makes everything else easier. It’s something I also look for when I do hiring.”

Industry may not offer the same depth of exploration as academia, but it does offer something equally valuable: breadth, variety and a dynamic environment. Work evolves fast, deadlines come more readily and teams are constantly changing.

“In academia, things tend to move more slowly. You’re encouraged to go deep into one specific niche,” says Shtipliyski. “In industry, you often move faster and are sometimes more shallow. But if you can combine the depth of thought from academia with the breadth of experience from industry, that’s a winning combination.”

Applied skills

For physicists eyeing a career in machine learning, the best first step is to familiarise themselves with the tools and practices for building and deploying models. Show that you can take the skills developed in academia and apply them to other environments. This tells recruiters that you have a willingness to learn, and is a simple but effective way of demonstrating commitment to a project from start to finish, beyond your assigned work.

“People coming from physics or mathematics might want to spend more time on implementation,” says Shtipliyski. “Even if you follow a guided walkthrough online, or complete classes on Coursera, going through the whole process of implementing things from scratch teaches you a lot. This puts you in a position to reason about the big picture and shows employers your willingness to stretch yourself, to make trade-offs and to evaluate your work critically.”

A common misconception is that practising machine learning outside of academia is somehow less rigorous or less meaningful. But in many ways, it can be more demanding.

“Scientific development is often driven by arguments of beauty and robustness. In industry, there’s less patience for that,” he says. “You have to apply it to a real-world domain – finance, travel, healthcare. That domain shapes everything: your constraints, your models, even your ethics.”

Shtipliyski emphasises that the technical side of machine learning is only one half of the equation. The other half is organisational: helping teams work together, navigate constraints and build systems that evolve over time. Physicists would benefit from exploring different business domains to understand how machine learning is used in different contexts. For example, GDPR constraints make privacy a critical issue in healthcare and tech. Learning how government funding is distributed throughout each project, as well as understanding how to build a trusting relationship between the funding agencies and the team, is equally important.

“A lot of my day-to-day work is just passing information, helping people build a shared mental model,” he says. “Trust is earned by being vulnerable yourself, which allows others to be vulnerable in turn. Once that happens, you can solve almost any problem.”

Taking the lead

Particle physicists are used to working in high-stakes, international teams, so this collaborative mindset is engrained in their training. But many may not have had the opportunity to lead, manage or take responsibility for an entire project from start to finish.

“In CMS, I did not have a lot of say due to the complexity and scale of the project, but I was able to make meaningful contributions in the validation and running of the detector,” says Shtipliyski. “But what I did not get much exposure to was the end-to-end experience, and that’s something employers really want to see.”

This does not mean you need to be a project manager to gain leadership experience. Early-career researchers have the chance to up-skill when mentoring a newcomer, help improve the team’s workflow in a proactive way, or network with other physicists and think outside the box.

“Even if you just shadow an existing project, if you can talk confidently about what was done, why it was done and how it might be done differently – that’s huge.”

Many early-career researchers hesitate prior to leaving academia. They worry about making the “wrong” choice, or being labelled as a “finance person” or “tech person” as soon as they enter another industry. This is something Shtipliyski struggled to reckon with, but eventually realised that such labels do not define you.

“It was tough at CERN trying to anticipate what comes next,” he admits. “I thought that I could only have one first job. What if it’s the wrong one? But once a scientist, always a scientist. You carry your experiences with you.”

Shtipliyski quickly learnt that industry operates under a different set of rules: everyone comes from a different background, and the level of expertise differs depending on the person you speak to next. Having faced intense imposter syndrome at CERN, where he shared spaces with world-leading experts, he found that industry offered a more level playing field.

“In academia, there’s a kind of ladder: the longer you stay, the better you get. In industry, it’s not like that,” says Shtipliyski. “You can be the dedicated expert in the room, even if you’re new. That feels really empowering.”

Industry rewards adaptability as much as expertise. For physicists stepping beyond academia, the challenge is not abandoning their training, but expanding it – learning to navigate ambiguity, communicate clearly and understand the full lifecycle of real-world systems. Harnessing a scientist’s natural curiosity, and demonstrating flexibility, allows the transition to become less about leaving science behind, and more about discovering new ways to apply it.

“You are the collection of your past experiences,” says Shtipliyski. “You have the freedom to shape the future.”

The post Machine learning in industry appeared first on CERN Courier.

]]>
Careers Antoni Shtipliyski offers advice on how early-career researchers can transition into machine-learning roles in industry. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_CAR_Shtipliyski_feature.jpg
DESI hints at evolving dark energy https://cerncourier.com/a/desi-hints-at-evolving-dark-energy/ Fri, 16 May 2025 16:57:24 +0000 https://cerncourier.com/?p=113047 The new data could indicate a deviation from the ΛCDM model.

The post DESI hints at evolving dark energy appeared first on CERN Courier.

]]>
The dynamics of the universe depend on a delicate balance between gravitational attraction from matter and the repulsive effect of dark energy. A universe containing only matter would eventually slow down its expansion due to gravitational forces and possibly recollapse. However, observations of Type Ia supernovae in the late 1990s revealed that our universe’s expansion is in fact accelerating, requiring the introduction of dark energy. The standard cosmological model, called the Lambda Cold Dark Matter (ΛCDM) model, provides an elegant and robust explanation of cosmological observations by including normal matter, cold dark matter (CDM) and dark energy. It is the foundation of our current understanding of the universe.

Cosmological constant

In ΛCDM, Λ refers to the cosmological constant – a parameter introduced by Albert Einstein to counter the effect of gravity in his pursuit of a static universe. With the knowledge that the universe is accelerating, Λ is now used to quantify this acceleration. An important parameter that describes dark energy, and therefore influences the evolution of the universe, is its equation-of-state parameter, w. This value relates the pressure dark energy exerts on the universe, p, to its energy density, ρ, via p = wρ. Within ΛCDM, w is –1 and ρ is constant – a combination that has to date explained observations well. However, new results by the Dark Energy Spectroscopic Instrument (DESI) put these assumptions under increasing stress.
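The connection between w and cosmic acceleration can be made explicit with the Friedmann acceleration equation – a textbook relation (in units with c = 1), not part of the DESI analysis itself:

```latex
\frac{\ddot{a}}{a} \;=\; -\frac{4\pi G}{3}\left(\rho + 3p\right)
\;=\; -\frac{4\pi G}{3}\,\rho\,\left(1 + 3w\right)
```

A component with w < –1/3 therefore drives acceleration (ä > 0), and a cosmological constant, with w = –1, does so with an energy density ρ that stays constant as the universe expands.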

These new results are part of the second data release (DR2) from DESI. Mounted on the Nicholas U Mayall 4-metre telescope at Kitt Peak National Observatory in Arizona, DESI is optimised to measure the spectra of a large number of objects in the sky simultaneously. Joint observations are possible thanks to 5000 optical fibres controlled by robots, which continuously optimise the focal plane of the detector. Combined with a highly efficient processing pipeline, this allows DESI to perform detailed simultaneous spectroscopic measurements, resulting in a catalogue of distances to objects based on their velocity-induced shift in wavelength, or redshift. For its first data release, DESI used 6 million such redshifts, allowing it to show that w was several sigma away from its expected value of –1 (CERN Courier May/June 2024 p11). For DR2, 14 million measurements are used, enough to provide strong hints of w changing with time.

The first studies of the expansion rate of the universe were based on redshift measurements of local objects, such as supernovae. As the objects are relatively close, they provide data on the acceleration at small redshifts. An alternative method is to use the cosmic microwave background (CMB), which allows for measurements of the evolution of the early universe through complex imprints left on the current distribution of the CMB. The significantly smaller expansion rate measured through the CMB compared to local measurements resulted in a “Hubble tension”, prompting novel measurements to resolve or explain the observed difference (CERN Courier March/April 2025 p28). One such attempt comes from DESI, which aims to provide a detailed 3D map of the universe focusing on the distance between galaxies to measure the expansion (see “3D map” figure).

Tension with ΛCDM

The 3D map produced by DESI can be used to study the evolution of the universe, as it holds imprints of small fluctuations in the density of the early universe. These density fluctuations have been studied through their imprint on the CMB; they also left imprints in the distribution of baryonic matter up until recombination. The variations in baryonic density grew over time into the varying densities of galaxies and other large-scale structures observed today.

The regions originally containing higher baryon densities are now those with larger densities of galaxies. Exactly how the matter-density fluctuations evolved into variations in galaxy densities throughout the universe depends on a range of parameters from the ΛCDM model, including w. The detailed map of the universe produced by DESI, which contains a range of objects with redshifts up to 2.5, can therefore be fitted against the ΛCDM model.

Among other studies, the latest DESI data were combined with CMB observations and fitted to the ΛCDM model. This works relatively well, although it requires a lower matter-density parameter than found from CMB data alone. However, the resulting cosmological parameters give a poor match to the low-redshift supernova data. Similarly, fitting the ΛCDM model to the supernova data results in poor agreement with both the DESI and CMB data, putting some strain on the ΛCDM model. Things do not improve significantly when adding freedom to these analyses by allowing w to differ from –1.

The new data release provides significant evidence of a deviation from the ΛCDM model

An adaptation of the ΛCDM model that agrees with all three datasets requires w to evolve with redshift, or time. The implications for the acceleration of the universe based on these results are shown in the “Tension with ΛCDM” figure, which shows the deceleration parameter q of the expansion of the universe as a function of redshift; q < 0 implies an accelerating universe. In the ΛCDM model, acceleration increases with time, as redshift approaches 0. The DESI data suggest that the acceleration of the universe started earlier, but is currently weaker than that predicted by ΛCDM.
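For reference, the deceleration parameter plotted in such figures is the standard dimensionless combination of the scale factor a and its derivatives – a textbook definition, not specific to the DESI analysis:

```latex
q \;\equiv\; -\frac{\ddot{a}\,a}{\dot{a}^{2}},
\qquad
q \;=\; \tfrac{1}{2}\,\Omega_{\mathrm{m}} \;+\; \tfrac{1}{2}\left(1 + 3w\right)\Omega_{\mathrm{DE}}
```

The second expression holds for a spatially flat universe containing matter and dark energy; in ΛCDM today (w = –1, Ω_m ≈ 0.3, Ω_DE ≈ 0.7) it gives q ≈ –0.55, i.e. an accelerating expansion.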

Although this model matches the data well, a theoretical explanation is difficult. In particular, the data implies that w(z) was below –1, which translates into an energy density that increases with the expansion; however, the energy density seems to have peaked at a redshift of 0.45 and is now decreasing.

Overall, the new data release provides significant evidence of a deviation from the ΛCDM model. The exact significance depends on the specific analysis and on which datasets are combined; however, all such studies give similar results. As no 5σ discrepancy has been found yet, there is no reason to discard ΛCDM, though this could change with two more years of DESI data, along with data from the European Euclid mission, the Vera C Rubin Observatory and the Nancy Grace Roman Space Telescope. Each will provide new insights into the expansion at various redshifts.

]]>
News The new data could indicate a deviation from the ΛCDM model. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_NA-DESI.jpg
FCC feasibility study complete https://cerncourier.com/a/fcc-feasibility-study-complete/ Fri, 16 May 2025 16:40:37 +0000 https://cerncourier.com/?p=113038 The final report of a study investigating the technical and financial feasibility of a Future Circular Collider at CERN was released on 31 March.

The post FCC feasibility study complete appeared first on CERN Courier.

]]>
The final report of a detailed study investigating the technical and financial feasibility of a Future Circular Collider (FCC) at CERN was released on 31 March. Building on a conceptual design study conducted between 2014 and 2018, the three-volume report is authored by over 1400 scientists and engineers in more than 400 institutes worldwide, and covers aspects of the project ranging from civil engineering to socioeconomic impact. As recommended in the 2020 update to the European Strategy for Particle Physics (ESPP), it was completed in time to serve as an input to the ongoing 2026 update to the ESPP (see “European strategy update: the community speaks“).

The FCC is a proposed collider infrastructure that could succeed the LHC in the 2040s. Its scientific motivation stems from the discovery in 2012 of the final particle of the Standard Model (SM), the Higgs boson, with a mass of just 125 GeV, and the wealth of precision measurements and exploratory searches during 15 years of LHC operations that have excluded many signatures of new physics at the TeV scale. The report argues that the FCC is particularly well equipped to study the Higgs and associated electroweak sectors in detail and that it provides a broad and powerful exploratory tool that would push the limits of the unknown as far as possible.

The report describes how the FCC will seek to address key domains formulated in the 2013 and 2020 ESPP updates, including: mapping the properties of the Higgs and electroweak gauge bosons with accuracies orders of magnitude better than today to probe the processes that led to the emergence of the Brout–Englert–Higgs field’s nonzero vacuum expectation value; ensuring a comprehensive and accurate campaign of precision electroweak, quantum chromodynamics, flavour and top-quark measurements sensitive to tiny deviations from the SM, probing energy scales far beyond the direct kinematic reach; improving by orders of magnitude the sensitivity to rare and elusive phenomena at low energies, including the possible discovery of light particles with very small couplings such as those relevant to the search for dark matter; and increasing by at least an order of magnitude the direct discovery reach for new particles at the energy frontier.

This technology has significant potential for industrial and societal applications

The FCC research programme outlines two possible stages: an electron–positron collider (FCC-ee) running at several centre-of-mass energies to serve as a Higgs, electroweak and top-quark factory, followed at a later stage by a proton–proton collider (FCC-hh) operating at an unprecedented collision energy. An FCC-ee with four detectors is judged to be “the electroweak, Higgs and top factory project with the highest luminosity proposed to date”, able to produce 6 × 10¹² Z bosons, 2.4 × 10⁸ W pairs, almost 3 × 10⁶ Higgs bosons, and 2 × 10⁶ top-quark pairs over 15 years of operations. Its versatile RF system would enable flexibility in the running sequence, states the report, allowing experimenters to move between physics programmes and scan through energies at ease. The report also outlines how the FCC-ee injector offers opportunities for other branches of science, including the production of spatially coherent photon beams with a brightness several orders of magnitude higher than any existing or planned light source.

The estimated cost of the construction of the FCC-ee is CHF 15.3 billion. This investment, which would be distributed over a period of about 15 years starting from the early 2030s, includes civil engineering, technical infrastructure, electron and positron accelerators, and four detectors.

Ready for construction

The report describes how key FCC-ee design approaches, such as a double-ring layout, top-up injection with a full-energy booster, a crab-waist collision scheme, and precise energy calibration, have been demonstrated at several previous or presently operating colliders. The FCC-ee is thus “technically ready for construction” and is projected to deliver four-to-five orders of magnitude higher luminosity per unit electrical power than LEP. During operation, its energy consumption is estimated to vary from 1.1 to 1.8 TWh/y depending on the operation mode, compared to CERN’s current consumption of about 1.3 TWh/y. Decarbonised energy, including an ever-growing contribution from renewable sources, would be the main source of energy for the FCC. Ongoing technology R&D aims at further increasing FCC-ee’s energy efficiency (see “Powering into the future”).

Assuming 14 T Nb3Sn magnet technology as a baseline design, a subsequent hadron collider with a centre-of-mass energy of 85 TeV entering operation in the early 2070s would extend the energy frontier by a factor six and provide an integrated luminosity five to 10 times higher than that of the HL-LHC during 25 years of operation. With four detectors, FCC-hh would increase the mass reach of direct searches for new particles to several tens of TeV, probing a broad spectrum of beyond-the-SM theories and potentially identifying the sources of any deviations found in precision measurements at FCC-ee, especially those involving the Higgs boson. An estimated sample of more than 20 billion Higgs bosons would allow the absolute determination of its couplings to muons, to photons, to the top quark and to Zγ below the percent level, while di-Higgs production would bring the uncertainty on the Higgs self-coupling below the 5% level. FCC-hh would also significantly advance understanding of the hot QCD medium by enabling lead–lead and other heavy-ion collisions at unprecedented energies, and could be configured to provide electron–proton and electron–ion collisions, says the report.

The FCC-hh design is based on LHC experience and would leverage a substantial amount of the technical infrastructure built for the first FCC stage. Two hadron injector options are under study involving a superconducting machine in either the LHC or SPS tunnel. For the purpose of a technical feasibility analysis, a reference scenario based on 14 T Nb3Sn magnets cooled to 1.9 K was considered, yielding 2.4 MW of synchrotron radiation and a power consumption of 360 MW or 2.3 TWh/y – a comparable power consumption to FCC-ee.

FCC-hh’s power consumption might be reduced below 300 MW if the magnet temperature can be raised to 4.5 K. Outlining the potential use of high-temperature superconductors for 14 to 20 T dipole magnets operating at temperatures between 4.5 K and 20 K, the report notes that such technology could either extend the centre-of-mass energy of FCC-hh to 120 TeV or lead to significantly improved operational sustainability at the same collision energy. “The time window of more than 25 years opened by the lepton-collider stage is long enough to bring that technology to market maturity,” says FCC study leader Michael Benedikt (CERN). “High-temperature superconductors have significant potential for industrial and societal applications, and particle accelerators can serve as pilots for market uptake, as was the case with the Tevatron and the LHC for NbTi technology.”

Society and sustainability

The report details the concepts and paths to keep the FCC’s environmental footprint low while boosting new technologies to benefit society and developing territorial synergies such as energy reuse. The civil construction process for FCC-ee, which would also serve FCC-hh, is estimated to result in about 500,000 tCO2(eq) over a period of 10 years, which the authors say corresponds to approximately one-third of the carbon budget of the Paris Olympic Games. A socio-economic impact assessment of the FCC integrating environmental aspects throughout its entire lifecycle reveals a positive cost–benefit ratio, even under conservative assumptions and adverse implementation conditions.

The actual journey towards the realisation of the FCC starts now

A major achievement of the FCC feasibility study has been the development of the layout and placement of the collider ring and related infrastructure, which have been optimised for scientific benefit while taking into account territorial compatibility, environmental and construction constraints, and cost. No fewer than 100 scenarios were developed and analysed before settling on the preferred option: a ring circumference of 90.7 km with shaft depths ranging between 200 and 400 m, with eight surface sites and four experiments. Throughout the study, CERN has been accompanied by its host states, France and Switzerland, working with entities at the local, regional and national levels to ensure a constructive dialogue with territorial stakeholders.

The final report of the FCC feasibility study together with numerous referenced technical documents have been submitted to the ongoing ESPP 2026 update, along with studies of alternative projects proposed by the community. The CERN Council may take a decision around 2028.

“After four years of effort, perseverance and creativity, the FCC feasibility study was concluded on 31 March 2025,” says Benedikt. “The actual journey towards the realisation of the FCC starts now and promises to be at least as fascinating as the successive steps that brought us to the present state.”

]]>
News The final report of a study investigating the technical and financial feasibility of a Future Circular Collider at CERN was released on 31 March. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_NA-FCC.jpg
Gravitational remnants in the sky https://cerncourier.com/a/gravitational-remnants-in-the-sky/ Fri, 16 May 2025 16:36:48 +0000 https://cerncourier.com/?p=113216 Relic Gravitons, by Massimo Giovannini of INFN Milan Bicocca, offers a timely and authoritative guide to one of the most exciting frontiers in modern cosmology and particle physics.

The post Gravitational remnants in the sky appeared first on CERN Courier.

]]>
Astrophysical gravitational waves have revolutionised astronomy; the eventual detection of cosmological gravitons promises to open an otherwise inaccessible window into the universe’s earliest moments. Such a discovery would offer profound insights into the hidden corners of the early universe and physics beyond the Standard Model. Relic Gravitons, by Massimo Giovannini of INFN Milan Bicocca, offers a timely and authoritative guide to one of the most exciting frontiers in modern cosmology and particle physics.

Giovannini is an esteemed scholar and household name in the fields of theoretical cosmology and early-universe physics. He has written influential research papers, reviews and books on cosmology, providing detailed discussions on several aspects of the early universe. He also authored 2008’s A Primer on the Physics of the Cosmic Microwave Background – a book most cosmologists are very familiar with.

In Relic Gravitons, Giovannini provides a comprehensive exploration of recent developments in the field, striking a remarkable balance between clarity, physical intuition and rigorous mathematical formalism. As such, it serves as an excellent reference – equally valuable for both junior researchers and seasoned experts seeking depth and insight into theoretical cosmology and particle physics.

Relic Gravitons opens with an overview of cosmological gravitons, offering a broad perspective on gravitational waves across different scales and cosmological epochs, while drawing parallels with the electromagnetic spectrum. This graceful introduction sets the stage for a well-contextualised and structured discussion.

Gravitational rainbow

Relic gravitational waves from the early universe span 30 orders of magnitude, from attohertz to gigahertz. Their wavelengths are constrained from above by the Hubble radius, setting a lower frequency bound of 10⁻¹⁸ Hz. At the lowest frequencies, measurements of the cosmic microwave background (CMB) provide the most sensitive probe of gravitational waves. In the nanohertz range, pulsar timing arrays serve as powerful astrophysical detectors. At intermediate frequencies, laser and atomic interferometers are actively probing the spectrum. At higher frequencies, only wide-band interferometers such as LIGO and Virgo currently operate, primarily within the audio band spanning from a few hertz to several kilohertz.

Relic Gravitons

The theoretical foundation begins with a clear and accessible introduction to tensor modes in flat spacetime, followed by spherical harmonics and polarisations. With these basics in place, tensor modes in curved spacetime are also explored, before progressing to effective action, the quantum mechanics of relic gravitons and effective energy density. This structured progression builds a solid framework for phenomenological applications.

The second part of the book is about the signals of the concordance paradigm, which includes discussions of Sakharov oscillations and of short, intermediate and long wavelengths, before entering the technical interludes of the next section. Here, Giovannini emphasises that because the evolution of the comoving Hubble radius is uncertain, the spectral energy density and other observables require approximate methods. The chapter expands on conventional results using the Wentzel–Kramers–Brillouin approach, which is particularly useful when early-universe dynamics deviate from standard inflation.

Phenomenological implications are discussed in the final section, starting with the lowest-frequency domain. Giovannini then examines the intermediate and high-frequency ranges. The concordance paradigm suggests that large-scale inhomogeneities originate from quantum mechanics, where travelling waves transform into standing waves. The penultimate chapter addresses the hot topic of the “quantumness” of relic gravitons, before diving into the conclusion. The book finishes with five appendices covering all sorts of useful topics, from notation to related topics in general relativity and cosmic perturbations.

Relic Gravitons is a must-read for anyone intrigued by the gravitational-wave background and its unparalleled potential to unveil new physics, and an invaluable resource for those seeking to explore the unknown corners of particle physics and cosmology.

]]>
Review Relic Gravitons, by Massimo Giovannini of INFN Milan Bicocca, offers a timely and authoritative guide to one of the most exciting frontiers in modern cosmology and particle physics. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_Rev_GreenBank.jpg
Colour information diffuses in Frankfurt https://cerncourier.com/a/colour-information-diffuses-in-frankfurt/ Fri, 16 May 2025 16:35:40 +0000 https://cerncourier.com/?p=113057 The 31st Quark Matter conference was the best attended in the series’ history, with more than 1000 participants.

The post Colour information diffuses in Frankfurt appeared first on CERN Courier.

]]>
Quark Matter 2025

The 31st Quark Matter conference took place from 6 to 12 April at Goethe University in Frankfurt, Germany. This edition of the world’s flagship conference for ultra-relativistic heavy-ion physics was the best attended in the series’ history, with more than 1000 participants.

A host of experimental measurements and theoretical calculations targeted fundamental questions in many-body QCD. These included the search for a critical point along the QCD phase diagram, the extraction of the properties of the deconfined quark–gluon plasma (QGP) medium created in heavy-ion collisions, and the search for signatures of the formation of this deconfined medium in smaller collision systems.

Probing thermalisation

New results highlighted the ability of the strong force to thermalise the out-of-equilibrium QCD matter produced during the collisions. Thermalisation can be probed by taking advantage of spatial anisotropies in the initial collision geometry which, due to the rapid onset of strong interactions at early times, result in pressure gradients across the system. These pressure gradients in turn translate into a momentum-space anisotropy of produced particles in the bulk, which can be experimentally measured by taking a Fourier transform of the azimuthal distribution of final-state particles with respect to a reference event axis.
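The Fourier decomposition described above is conventionally written as follows – standard heavy-ion notation, not reproduced from any particular conference presentation:

```latex
\frac{dN}{d\varphi} \;\propto\; 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\left[n\left(\varphi - \Psi_n\right)\right]
```

Here φ is the azimuthal angle of a final-state particle, Ψ_n the orientation of the reference event axis for the n-th harmonic, and the coefficients v_n quantify the momentum-space anisotropy; the second-order coefficient v₂, known as elliptic flow, is the quantity measured for charm hadrons.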

An area of active experimental and theoretical interest is to quantify the degree to which heavy quarks, such as charm and beauty, participate in this collective behaviour, which informs on the diffusion properties of the medium. The ALICE collaboration presented the first measurement of the second-order coefficient of the momentum anisotropy of charm baryons in Pb–Pb collisions, showing significant collective behaviour and suggesting that charm quarks undergo some degree of thermalisation. This collective behaviour appears to be stronger in charm baryons than charm mesons, following similar observations for light flavour.

A host of measurements and calculations targeted fundamental questions in many-body QCD

Due to the nature of thermalisation and the long hydrodynamic phase of the medium in Pb–Pb collisions, signatures of the microscopic dynamics giving rise to the thermalisation are often washed out in bulk observables. However, local excitations of the hydrodynamic medium, caused by the propagation of a high-energy jet through the QGP, can offer a window into such dynamics. Due to coupling to the coloured medium, the jet loses energy to the QGP, which in turn re-excites the thermalised medium. These excited states quickly decay and dissipate, and the local perturbation can partially thermalise. This results in a correlated response of the medium in the direction of the propagating jet, the distribution of which allows measurement of the thermalisation properties of the medium in a more controlled manner.

In this direction, the CMS collaboration presented the first measurement of an event-wise two-point energy–energy correlator, for events containing a Z boson, in both pp and Pb–Pb collisions. The two-point correlator represents the energy-weighted cross section of the angle between particle pairs in the event and can separate out QCD effects at different scales, as these populate different regions in angular phase space. In particular, the correlated response of the medium is expected to appear at large angles in the correlator in Pb–Pb collisions.
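Schematically, a two-point energy–energy correlator of this type weights each particle pair by the product of its energies and bins pairs by angular separation – a generic illustrative form; the exact definition and normalisation used by CMS may differ:

```latex
\mathrm{EEC}(\Delta\phi) \;\sim\; \sum_{i \neq j} \frac{E_i\,E_j}{E_{\mathrm{tot}}^{2}}\,\delta\!\left(\Delta\phi - \Delta\phi_{ij}\right)
```

Hard, collimated structures populate small angular separations, while the correlated response of the medium to a traversing jet is expected to appear at larger angles.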

The use of a colourless Z boson, which does not interact in the QGP, allows CMS to compare events with similar initial virtuality scales in pp and Pb–Pb collisions, without incurring biases due to energy loss in the QCD probes. The collaboration showed modifications in the two-point correlator at large angles, from pp to Pb–Pb collisions, alluding to a possible signature of the correlated response of the medium to the traversing jets. Such measurements can help guide models into capturing the relevant physical processes underpinning the diffusion of colour information in the medium.

Looking to the future

The next edition of this conference series will take place in 2027 in Jeju, South Korea, and the results presented there should notably include the latest complement of measurements from the upgraded Run 3 detectors at the LHC and the newly commissioned sPHENIX detector at RHIC. New collision systems like O–O at the LHC will help shed light on many of the properties of the QGP, including its thermalisation, by varying the lifetime of the pre-equilibrium and hydrodynamic phases in the collision evolution.

]]>
Meeting report The 31st Quark Matter conference was the best attended in the series’ history, with more than 1000 participants. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_FN_Quark_feature.jpg
PhyStat turns 25 https://cerncourier.com/a/phystat-turns-25/ Fri, 16 May 2025 16:31:48 +0000 https://cerncourier.com/?p=112707 On 16 January, physicists and statisticians met in the CERN Council Chamber to celebrate 25 years of the PhyStat series of conferences, workshops and seminars.

The post PhyStat turns 25 appeared first on CERN Courier.

]]>
Confidence intervals

On 16 January, physicists and statisticians met in the CERN Council Chamber to celebrate 25 years of the PhyStat series of conferences, workshops and seminars, which bring together physicists, statisticians and scientists from related fields to discuss, develop and disseminate methods for statistical data analysis and machine learning.

The special symposium heard from the founder and primary organiser of the PhyStat series Louis Lyons (Imperial College London and University of Oxford), who together with Fred James and Yves Perrin initiated the movement with the “Workshop on Confidence Limits” in January 2000. According to Lyons, the series was to bring together physicists and statisticians, a philosophy that has been followed and extended throughout the 22 PhyStat workshops and conferences, as well as numerous seminars and “informal reviews”. Speakers called attention to recognition from the Royal Statistical Society’s pictorial timeline of statistics, starting with the use of averages by Hippias of Elis in 450 BC and culminating with the 2012 discovery of the Higgs boson with 5σ significance.

Lyons and Bob Cousins (UCLA) offered their views on the evolution of statistical practice in high-energy physics, starting in the 1960s bubble-chamber era, strongly influenced by the 1971 book Statistical Methods in Experimental Physics by W T Eadie et al., its 2006 second edition by symposium participant Fred James (CERN), as well as Statistics for Nuclear and Particle Physics (1985) by Louis Lyons – reportedly the most stolen book from the CERN library. Both Lyons and Cousins noted the interest of the PhyStat community not only in practical solutions to concrete problems but also in foundational questions in statistics, with the focus on frequentist methods setting high-energy physics somewhat apart from the Bayesian approach more widely used in astrophysics.

Giving his view of the PhyStat era, ATLAS physicist and director of the University of Wisconsin Data Science Institute Kyle Cranmer emphasised the enormous impact that PhyStat has had on the field, noting important milestones such as the ability to publish full likelihood models through the statistical package RooStats, the treatment of systematic uncertainties with profile-likelihood ratio analyses, methods for combining analyses, and the reuse of published analyses to place constraints on new physics models. Regarding the next 25 years, Cranmer predicted the increasing use of methods that have emerged from PhyStat, such as simulation-based inference, and pointed out that artificial intelligence (the elephant in the room) could drastically alter how we use statistics.
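The profile-likelihood ratio mentioned here, central to quoting discovery significances such as the Higgs 5σ, has the standard form – textbook notation, not taken from the symposium talks:

```latex
\lambda(\mu) \;=\; \frac{L\!\left(\mu,\,\hat{\hat{\theta}}(\mu)\right)}{L\!\left(\hat{\mu},\,\hat{\theta}\right)}
```

Here μ is the signal-strength parameter of interest and θ the nuisance parameters encoding systematic uncertainties; single hats denote the unconditional maximum-likelihood estimates, double hats the conditional estimates at fixed μ, and the test statistic –2 ln λ(μ) underpins significance calculations of the kind used in the 2012 Higgs discovery.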

Statistician Mikael Kuusela (CMU) noted that PhyStat workshops have provided important two-way communication between the physics and statistics communities, citing simulation-based inference as an example where many key ideas were first developed in physics and later adopted by statisticians. In his view, the use of statistics in particle physics has emerged as “phystatistics”, a proper subfield with distinct problems and methods.

Another important feature of the PhyStat movement has been to encourage active participation and leadership by younger members of the community. With its 25th anniversary, the torch is now passed from Louis Lyons to Olaf Behnke (DESY), Lydia Brenner (NIKHEF) and a younger team, who will guide PhyStat into the next 25 years and beyond.

The post PhyStat turns 25 appeared first on CERN Courier.

]]>
Meeting report On 16 January, physicists and statisticians met in the CERN Council Chamber to celebrate 25 years of the PhyStat series of conferences, workshops and seminars. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_FN_phystat_feature.jpg
Gaseous detectors school at CERN https://cerncourier.com/a/gaseous-detectors-school-at-cern/ Fri, 16 May 2025 16:29:04 +0000 https://cerncourier.com/?p=112717 DRD1 is a new worldwide collaborative framework of more than 170 institutes focused on R&D for gaseous detectors.

The post Gaseous detectors school at CERN appeared first on CERN Courier.

]]>
How do wire-based detectors compare to resistive-plate chambers? How well do micropattern gaseous detectors perform? Which gas mixtures optimise operation? How will detectors meet the challenges of future, more powerful accelerators?

Thirty-two students attended the first DRD1 Gaseous Detectors School at CERN last November. The EP-DT Gas Detectors Development (GDD) lab hosted academic lectures and varied hands-on laboratory exercises. Students assembled their own detectors, learnt about their operating characteristics and explored radiation-imaging methods with state-of-the-art readout approaches – all under the instruction of more than 40 distinguished lecturers and tutors, including renowned scientists, pioneers of innovative technologies and emerging experts.

DRD1 is a new worldwide collaborative framework of more than 170 institutes focused on R&D for gaseous detectors. The collaboration focuses on knowledge sharing and scientific exchange, in addition to the development of novel gaseous detector technologies to address the needs of future experiments. This instrumentation school, initiated in DRD1’s first year, marks the start of a series of regular training events for young researchers that will also serve to exchange ideas between research groups and encourage collaboration.

The school will take place annually, with future editions hosted at different DRD1 member institutes to reach students from a number of regions and communities.

The post Gaseous detectors school at CERN appeared first on CERN Courier.

]]>
Meeting report DRD1 is a new worldwide collaborative framework of more than 170 institutes focused on R&D for gaseous detectors. https://cerncourier.com/wp-content/uploads/2025/03/CCMayJun25_FN_DRD1.jpg
Planning for precision at Moriond https://cerncourier.com/a/planning-for-precision-at-moriond/ Fri, 16 May 2025 16:26:44 +0000 https://cerncourier.com/?p=113063 Particle physics today benefits from a wealth of high-quality data at the same time as powerful new ideas are boosting the accuracy of theoretical predictions.

The post Planning for precision at Moriond appeared first on CERN Courier.

]]>
Since 1966 the Rencontres de Moriond has been one of the most important conferences for theoretical and experimental particle physicists. The Electroweak Interactions and Unified Theories session of the 59th edition attracted about 150 participants to La Thuile, Italy, from 23 to 30 March, to discuss electroweak, Higgs-boson, top-quark, flavour, neutrino and dark-matter physics, and the field’s links to astrophysics and cosmology.

Particle physics today benefits from a wealth of high-quality data at the same time as powerful new ideas are boosting the accuracy of theoretical predictions. These are particularly important while the international community discusses future projects, basing projections on current results and technology. The conference heard how theoretical investigations of specific models and “catch all” effective field theories are being sharpened to constrain a broader spectrum of possible extensions of the Standard Model. Theoretical parametric uncertainties are being greatly reduced by collider precision measurements and lattice QCD. Perturbative calculations of short-distance amplitudes are reaching percent-level precision, while hadronic long-distance effects are being investigated in B-, D- and K-meson decays as well as in the modelling of collider events.

Comprehensive searches

Throughout Moriond 2025 we heard how a broad spectrum of experiments at the LHC, B factories, neutrino facilities, and astrophysical and cosmological observatories are planning upgrades to search for new physics at both low- and high-energy scales. Several fields promise qualitative progress in understanding nature in the coming years. Neutrino experiments will measure the neutrino mass hierarchy and CP violation in the neutrino sector. Flavour experiments will exclude or confirm flavour anomalies. Searches for QCD axions and axion-like particles will seek hints to the solution of the strong CP problem and possible dark-matter candidates.

The Standard Model has so far been confirmed to be the theory that describes physics at the electroweak scale (up to a few hundred GeV) to a remarkable level of precision. All the particles predicted by the theory have been discovered, and the consistency of the theory has been proven with high precision, including all calculable quantum effects. No direct evidence of new physics has been found so far. Still, big open questions remain that the Standard Model cannot answer, from understanding the origin of neutrino masses and their hierarchy, to identifying the origin and nature of dark matter and dark energy, and explaining the dynamics behind the baryon asymmetry of the universe.

Several fields promise qualitative progress in understanding nature in the coming years

The discovery of the Higgs boson has been crucial to confirming the Standard Model as the theory of particle physics at the electroweak scale, but it does not explain why the scalar Brout–Englert–Higgs (BEH) potential takes the form of a Mexican hat, why the electroweak scale is set by a Higgs vacuum expectation value of 246 GeV, or the nature of the Yukawa interactions that couple the BEH field to quarks and leptons and produce their bizarre hierarchy of masses. Gravity is also not a component of the Standard Model, and a unified theory escapes us.

At the LHC today, the ATLAS and CMS collaborations are delivering Run 1 and 2 results with beyond-expectation accuracy on Higgs-boson properties and electroweak precision measurements. Projections for the high-luminosity phase of the LHC are being updated and Run 3 analyses are in full swing. The LHCb collaboration presented a flavour-physics milestone for the first time at Moriond 2025: the first observation of CP violation in baryon decays. The collaboration also reported the first results from its rebuilt Run 3 detector, which features triggerless readout and a full software trigger.

Several talks presented scenarios of new physics that could be revealed in today’s data given theoretical guidance of sufficient accuracy. These included models with light weakly interacting particles, vector-like fermions and additional scalar particles. Other talks discussed how revisiting established quantum properties such as entanglement with fresh eyes could offer unexplored avenues to new theoretical paradigms and overlooked new-physics effects.

The post Planning for precision at Moriond appeared first on CERN Courier.

]]>
Meeting report Particle physics today benefits from a wealth of high-quality data at the same time as powerful new ideas are boosting the accuracy of theoretical predictions. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_FN_moriond.jpg
Pinpointing polarisation in vector-boson scattering https://cerncourier.com/a/pinpointing-polarisation-in-vector-boson-scattering/ Fri, 16 May 2025 16:20:59 +0000 https://cerncourier.com/?p=113145 Interactions involving longitudinally polarised W and Z bosons provide a stringent test of the SM.

The post Pinpointing polarisation in vector-boson scattering appeared first on CERN Courier.

]]>
In the Standard Model (SM), W and Z bosons acquire mass and longitudinal polarisation through electroweak (EW) symmetry breaking, where the Brout–Englert–Higgs mechanism transforms Goldstone bosons into their longitudinal components. One of the most powerful ways to probe this mechanism is through vector-boson scattering (VBS), a rare process represented in figure 1, where two vector bosons scatter off each other. At high (TeV-scale) energies, interactions involving longitudinally polarised W and Z bosons provide a stringent test of the SM. Without the Higgs boson’s couplings to these polarisation states, their interaction rates would grow uncontrollably with energy, eventually violating unitarity, indicating a complete breakdown of the SM.

Measuring the polarisation of same electric charge (same sign) W-boson pairs in VBS directly tests the predicted EW interactions at high energies through precision measurements. Furthermore, beyond-the-SM scenarios predict modifications to VBS, some affecting specific polarisation states, rendering such measurements valuable avenues for uncovering new physics.

ATLAS figure 2

Using the full proton–proton collision dataset from LHC Run 2 (2015–2018, 140 fb–1 at 13 TeV), the ATLAS collaboration recently published the first evidence for longitudinally polarised W bosons in the electroweak production of same-sign W-boson pairs in final states including two same-sign leptons (electrons or muons) and missing transverse momentum, along with two jets (EW W±W±jj). This process is categorised by the polarisation states of the W bosons: fully longitudinal (WL±WL±jj), mixed (WL±WT±jj), and fully transverse (WT±WT±jj). Measuring the polarisation states is particularly challenging due to the rarity of the VBS events, the presence of two undetected neutrinos, and the absence of a single kinematic variable that efficiently distinguishes between polarisation states. To overcome this, deep neural networks (DNNs) were trained to exploit the complex correlations between event kinematic variables that characterise different polarisations. This approach enabled the separation of the fully longitudinal WL±WL±jj from the combined WT±W±jj (WL±WT±jj plus WT±WT±jj) processes as well as the combined WL±W±jj (WL±WL±jj plus WL±WT±jj) from the purely transverse WT±WT±jj contribution.

To measure the production of WL±WL±jj and WL±W±jj processes, a first DNN (inclusive DNN) was trained to distinguish EW W±W±jj events from background processes. Variables such as the invariant mass of the two highest-energy jets provide strong discrimination for this classification. In addition, two independent DNNs (signal DNNs) were trained to extract polarisation information, separating either WL±WL±jj from WT±W±jj or WL±W±jj from WT±WT±jj, respectively. Angular variables, such as the azimuthal angle difference between the leading leptons and the pseudorapidity difference between the leading and subleading jets, are particularly sensitive to the scattering angles of the W bosons, enhancing the separation power of the signal DNNs. Each DNN is trained using up to 20 kinematic variables, leveraging correlations among them to improve sensitivity.
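As a minimal illustration of this kind of multivariate classification (not the ATLAS implementation, which uses deep neural networks with up to 20 kinematic inputs), the sketch below trains a logistic-regression classifier, the zero-hidden-layer limit of a neural network, to separate two toy event classes using two invented "angular" variables. All variable names, distributions and numbers here are hypothetical:

```python
import math
import random

random.seed(42)

def make_events(n, mean_dphi, mean_deta, label):
    """Toy events: two kinematic variables drawn from class-dependent
    Gaussians (stand-ins for e.g. a lepton azimuthal-angle difference
    and a jet pseudorapidity difference)."""
    return [((random.gauss(mean_dphi, 1.0), random.gauss(mean_deta, 1.0)), label)
            for _ in range(n)]

# Hypothetical training sample: "longitudinal-like" (1) vs "transverse-like" (0).
events = make_events(500, 1.0, 2.0, 1) + make_events(500, 2.5, 4.0, 0)
random.shuffle(events)

# Train by stochastic gradient descent on the cross-entropy loss.
w = [0.0, 0.0]
b = 0.0
lr = 0.05
for _ in range(200):
    for (x1, x2), y in events:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        grad = p - y              # d(loss)/d(logit)
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

def score(x1, x2):
    """Classifier output in [0, 1]; higher = more 'longitudinal-like'."""
    return 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))

correct = sum((score(x1, x2) > 0.5) == (y == 1) for (x1, x2), y in events)
accuracy = correct / len(events)
```

In the real analysis the classifier output itself, not a single cut, is used: the full shape of the DNN score distribution enters the fit described below.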

The signal DNN distributions, within each inclusive DNN region, were used to extract the WL±WL±jj and WL±W±jj polarisation fractions through two independent maximum-likelihood fits. The excellent separation between the WL±W±jj and WT±WT±jj processes can be seen in figure 2 for the WL±W±jj fit, with better separation achieved at higher signal-DNN scores, shown on the x-axis. An observed (expected) significance of 3.3 (4.0) standard deviations was obtained for WL±W±jj, providing the first evidence of same-sign WW production with at least one of the W bosons longitudinally polarised. No significant excess of events consistent with WL±WL±jj production was observed, leading to the most stringent 95% confidence-level upper limits to date on the WL±WL±jj cross section: 0.45 (0.70) fb observed (expected).
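The meaning of such a significance can be illustrated with a toy one-bin counting experiment, using the standard asymptotic formula for the discovery significance of an excess over a known background. The numbers below are invented for illustration and bear no relation to the ATLAS fit, which uses the full DNN score distributions:

```python
import math

def poisson_significance(n_obs, b):
    """Asymptotic discovery significance for a one-bin counting
    experiment with known expected background b: the square root of
    twice the log-likelihood ratio between the best-fit and
    background-only Poisson hypotheses."""
    if n_obs <= b:
        return 0.0
    return math.sqrt(2.0 * (n_obs * math.log(n_obs / b) - (n_obs - b)))

# Hypothetical example: 25 events observed over an expected background of 10
# corresponds to a significance of roughly 4 standard deviations.
z = poisson_significance(25, 10)
```

In practice the LHC experiments perform binned maximum-likelihood fits over many such bins at once, profiling systematic uncertainties, but the one-bin formula captures the basic logic.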

There is still much to understand about the electroweak sector of the Standard Model, and the measurement presented in this article remains limited by the size of the available data sample. The techniques developed in this analysis open new avenues for studying W- and Z-boson polarisation in VBS processes during the LHC Run 3 and beyond.

The post Pinpointing polarisation in vector-boson scattering appeared first on CERN Courier.

]]>
News Interactions involving longitudinally polarised W and Z bosons provide a stringent test of the SM. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_EF-ATLAS1.jpg
Particle Cosmology and Astrophysics https://cerncourier.com/a/particle-cosmology-and-astrophysics/ Fri, 16 May 2025 16:10:30 +0000 https://cerncourier.com/?p=113221 In Particle Cosmology and Astrophysics, Dan Hooper captures the rapid developments in particle cosmology over the past three decades.

The post Particle Cosmology and Astrophysics appeared first on CERN Courier.

]]>
Particle Cosmology and Astrophysics

In 1989, Rocky Kolb and Mike Turner published The Early Universe – a seminal book that offered a comprehensive introduction to the then-nascent field of particle cosmology, laying the groundwork for a generation of physicists to explore the connections between the smallest and largest scales of the universe. Since then, the interfaces between particle physics, astrophysics and cosmology have expanded enormously, fuelled by an avalanche of new data from ground-based and space-borne observatories.

In Particle Cosmology and Astrophysics, Dan Hooper follows in their footsteps, providing a much-needed update that captures the rapid developments of the past three decades. Hooper, now a professor at the University of Wisconsin–Madison, addresses the growing need for a text that introduces the fundamental concepts and synthesises the vast array of recent discoveries that have shaped our current understanding of the universe.

Hooper’s textbook opens with 75 pages of “preliminaries”, covering general relativity, cosmology, the Standard Model of particle physics, thermodynamics and high-energy processes in astrophysics. Each of these disciplines is typically introduced in a full semester of dedicated study, supported by comprehensive texts. For example, students seeking a deeper understanding of high-energy phenomena are likely to benefit from consulting Longair’s High Energy Astrophysics or Sigl’s Astroparticle Physics. Similarly, those wishing to advance their knowledge in particle physics will find that more detailed treatments are available in Griffiths’ Introduction to Elementary Particles or Peskin and Schroeder’s An Introduction to Quantum Field Theory, to mention just a few textbooks recommended by the author.

A much-needed update that captures the rapid developments of the past three decades

By distilling these complex subjects into just enough foundational content, Hooper makes the field accessible to those who have been exposed to only a fraction of the standard coursework. His approach provides an essential stepping stone, enabling students to embark on research in particle cosmology and astrophysics with a well calibrated introduction while still encouraging further study through more specialised texts.

Part II, “Cosmology”, follows a similarly pragmatic approach, providing an updated treatment that parallels Kolb and Turner while incorporating a range of topics that have, in the intervening years, become central to modern cosmology. The text now covers areas such as cosmic microwave background (CMB) anisotropies, the evidence for dark matter and its potential particle candidates, the inflationary paradigm, and the evidence and possible nature of dark energy.

Hooper doesn’t shy away from complex subjects, even when they resist simple expositions. The discussion on CMB anisotropies serves as a case in point: anyone who has attempted to condense this complex topic into a few graduate lectures is aware of the challenge in maintaining both depth and clarity. Instead of attempting an exhaustive technical introduction, Hooper offers a qualitative description of the evolution of density perturbations and how one extracts cosmological parameters from CMB observations. This approach, while not substituting for the comprehensive analysis found in texts such as Dodelson’s Modern Cosmology or Baumann’s Cosmology, provides students with a valuable overview that successfully charts the broad landscape of modern cosmology and illustrates the interconnectedness of its many subdisciplines.

Part III, “Particle Astrophysics”, contains a selection of topics that largely reflect the scientific interests of the author, a renowned expert in the field of dark matter. Some colleagues might raise an eyebrow at the book devoting 10 pages each to entire fields such as cosmic rays, gamma rays and neutrino astrophysics, and 50 pages to dark-matter candidates and searches. Others might argue that a book titled Particle Cosmology and Astrophysics is incomplete without detailing the experimental techniques behind the extraordinary advances witnessed in these fields and without at least a short introduction to the booming field of gravitational-wave astronomy. But the truth is that, in the author’s own words, particle cosmology and astrophysics have become “exceptionally multidisciplinary,” and it is impossible in a single textbook to do complete justice to domains that intersect nearly all branches of physics and astronomy. I would also contend that it is not only acceptable but indeed welcome for authors to align the content of their work with their own scientific interests, as this contributes to the diversity of textbooks and offers more choice to lecturers who wish to supplement a standard curriculum with innovative, interdisciplinary perspectives.

Ultimately, I recommend the book as a welcome addition to the literature and an excellent introductory textbook for graduate students and junior scientists entering the field.

The post Particle Cosmology and Astrophysics appeared first on CERN Courier.

]]>
Review In Particle Cosmology and Astrophysics, Dan Hooper captures the rapid developments in particle cosmology over the past three decades. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_Rev_hooper_feature.jpg
ALICE measures a rare Ω baryon https://cerncourier.com/a/alice-measures-a-rare-%cf%89-baryon/ Fri, 16 May 2025 16:08:24 +0000 https://cerncourier.com/?p=113150 These results will improve the theoretical description of excited baryons.

The post ALICE measures a rare Ω baryon appeared first on CERN Courier.

]]>
ALICE figure 1

Since the discovery of the electron and proton over 100 years ago, physicists have observed a “zoo” of different types of particles. While some of these particles have been fundamental, like neutrinos and muons, many are composite hadrons consisting of quarks bound together by the exchange of gluons. Studying the zoo of hadrons – their compositions, masses, lifetimes and decay modes – allows physicists to understand the details of the strong interaction, one of the fundamental forces of nature.

The Ω(2012) was discovered by the Belle Collaboration in 2018. The ALICE collaboration recently reported the observation of a signal consistent with the Ω(2012), with a significance of 15σ, in proton–proton (pp) collisions at a centre-of-mass energy of 13 TeV. This is the first observation of the Ω(2012) by another experiment.

While the details of its internal structure are still up for debate, the Ω(2012) consists, at minimum, of three strange quarks bound together. It is a heavier, excited version of the ground-state Ω baryon discovered in 1964, which also contains three strange quarks. Multiple theoretical models predicted a spectrum of excited Ω baryons, with some calling for a state with a mass around 2 GeV. Following the discovery of the Ω(2012), theoretical work has attempted to describe its internal structure, with hypotheses including a simple three-quark baryon or a hadronic molecule.

Using a sample of a billion pp collisions, ALICE has measured the decay of Ω(2012) baryons to ΞK0S pairs. After travelling a few centimetres, these hadrons decay in turn, eventually producing a proton and four charged pions that are tracked by the ALICE detector.

ALICE’s measurements of the mass and width of the Ω(2012) are consistent with Belle’s, with superior precision on the mass. ALICE has also confirmed the rather narrow width of around 6 MeV, which indicates that the Ω(2012) is fairly long-lived for a particle that decays via the strong interaction. Belle’s and ALICE’s width measurements also lend support to the conclusion that the Ω(2012) has a spin-parity configuration of JP = 3/2.

ALICE also measured the number of Ω(2012) decays to ΞK0S pairs. By comparing this to the total Ω(2012) yield based on statistical thermal model calculations, ALICE has estimated the absolute branching ratio for the Ω(2012) → ΞK0 decay. A branching ratio is the probability of decay to a given mode. The ALICE results indicate that Ω(2012) undergoes two-body (ΞK) decays more than half the time, disfavouring models of the Ω(2012) structure that require large branching ratios for three-body decays.
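The branching-ratio logic can be sketched as a back-of-the-envelope calculation. All numbers below are hypothetical; the real measurement corrects for detector acceptance and takes the total yield from statistical thermal-model calculations:

```python
# Toy branching-ratio estimate: observed decays in one mode, corrected for
# detection efficiency, divided by the estimated total yield of the parent.
n_observed = 1200.0    # hypothetical raw count of reconstructed decays
efficiency = 0.04      # hypothetical detection efficiency for this mode
total_yield = 60000.0  # hypothetical total parent yield (thermal model)

n_corrected = n_observed / efficiency        # true decays in this mode
branching_ratio = n_corrected / total_yield  # fraction of all decays
# With these invented numbers, half of all decays go to this mode.
```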

The present ALICE results will help to improve the theoretical description of the structure of excited baryons. They can also serve as baseline measurements in searches for modifications of Ω-baryon properties in nucleus–nucleus collisions. In the future, Ω(2012) bary­ons may also serve as new probes to study the strangeness enhancement effect observed in proton–proton and nucleus–nucleus collisions.

The post ALICE measures a rare Ω baryon appeared first on CERN Courier.

]]>
News These results will improve the theoretical description of excited baryons. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_EF-ALICE_feature.jpg
Exographer https://cerncourier.com/a/exographer/ Fri, 16 May 2025 15:45:52 +0000 https://cerncourier.com/?p=113226 Exographer puts you in the shoes of a scientist with a barrage of apparatus to investigate the world, writes our reviewer.

The post Exographer appeared first on CERN Courier.

]]>
Exographer

Try lecturing physics students on the excitement of subatomic particle discovery, and you might inspire several future physicists. Lecture physics to a layperson, and you might get a completely different response. Not everyone can be excited about particle physics through lectures alone. Sometimes video games can help.

Exographer, the brainchild of Raphael Granier de Cassagnac (CERN Courier March/April 2025 p48), puts you in the shoes of an investigator in a world where scientists are fascinated by what their planet is made of, and have built a barrage of apparatus to investigate it. Your role is to traverse this beautiful realm and solve puzzles that may lead to future discoveries, encountering frustration and excitement along the way.

The puzzles are neither nerve-racking nor too difficult, but solving each one brings immense satisfaction, much like the joy of discoveries in particle physics. These eureka moments make up for the hundreds of times when you fell to your death because you forgot to use the item that could have saved you.

The most important part of the game is taking pictures, particularly inside particle detectors. These reveal the tracks of particles, reminiscent of Feynman diagrams. It’s your job to figure out what particles leave these tracks. Is it a known particle? Is it new? Can we add it to our collection?

I am sure that the readers of CERN Courier will be familiar with particle discoveries throughout the past century, but as a particle physicist I still found awe and joy in rediscovering them whilst playing the game. It feels like walking through a museum, with each apparatus you encounter more sophisticated than the last. The game also hides an immensely intriguing lore of scientists from our own world. Curious gamers who spend extra time unravelling these stories are rewarded with various achievements.

All in all, this game is a nice introduction to the world of particle-physics discovery – an enjoyable puzzle/platformer game you should try, regardless of whether or not you are a physicist. 

The post Exographer appeared first on CERN Courier.

]]>
Review Exographer puts you in the shoes of a scientist with a barrage of apparatus to investigate the world, writes our reviewer. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_Rev_exographer_feature.jpg
Tau leptons from light resonances https://cerncourier.com/a/tau-leptons-from-light-resonances/ Fri, 16 May 2025 15:40:37 +0000 https://cerncourier.com/?p=113136 Among the fundamental particles, tau leptons occupy a curious spot.

The post Tau leptons from light resonances appeared first on CERN Courier.

]]>
CMS figure 1

Among the fundamental particles, tau leptons occupy a curious spot. They participate in the same sort of reactions as their lighter lepton cousins, electrons and muons, but their large mass means that they can also decay into a shower of pions and they interact more strongly with the Higgs boson. In many new-physics theories, Higgs-like particles – beyond that of the Standard Model – are introduced in order to explain the mass hierarchy or as possible portals to dark matter.

Because of their large mass, tau leptons are especially useful in searches for new physics. However, identifying taus is challenging, as in most cases they decay into a final state of one or more pions and an undetected neutrino. A crucial step in the identification of a tau lepton in the CMS experiment is the hadrons-plus-strips (HPS) algorithm. In the standard CMS reconstruction, a minimum momentum threshold of 20 GeV is imposed, such that the taus have enough momentum to make their decay products fall into narrow cones. However, this requirement reduces sensitivity to low-momentum taus. As a result, previous searches for a Higgs-like resonance φ decaying into two tau leptons required a φ-mass of more than 60 GeV.

CMS figure 2

The CMS experiment has now been able to extend the φ-mass range down to 20 GeV. To improve sensitivity to low-momentum tau decays, machine learning is used to determine a dynamic cone algorithm that expands the cone size as needed. The new algorithm, which requires one tau to decay into a muon and two neutrinos and the other into hadrons and a neutrino, is implemented in the CMS Scouting trigger system. Scouting extends CMS’s reach into previously inaccessible phase space by retaining only the most relevant information about the event, and thus facilitating much higher event rates.

The sensitivity of the new algorithm is so high that even the upsilon (Υ) meson, a bound state of the bottom quark and its antiquark, can be seen. Figure 1 shows the distribution of the mass of the visible tau decay products (Mvis), in this case a muon from one tau lepton and either one or three pions from the other. A clear resonance structure is visible at Mvis = 6 GeV, in agreement with the expectation for the Υ meson. The peak is not at the actual mass of the Υ meson (9.46 GeV) due to the presence of neutrinos in the decay. While Υ → ττ decays have been observed at electron–positron colliders, this marks the first evidence at a hadron collider and serves as an important benchmark for the analysis.

Given the high sensitivity of the new algorithm, CMS performed a search for a possible resonance in the range between 20 and 60 GeV using the data recorded in the years 2022 and 2023, and set competitive exclusion limits (see figure 2). For the 2024 and 2025 data taking, the algorithm was further improved, enhancing the sensitivity even more.

The post Tau leptons from light resonances appeared first on CERN Courier.

]]>
News Among the fundamental particles, tau leptons occupy a curious spot. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_EF-CMS_feature.jpg
Walter Oelert 1942–2024 https://cerncourier.com/a/walter-oelert-1942-2024/ Fri, 16 May 2025 15:36:26 +0000 https://cerncourier.com/?p=113189 Walter Oelert, founding spokesperson of COSY-11 and an experimentalist of rare foresight in the study of antimatter, passed away on 25 November 2024.

The post Walter Oelert 1942–2024 appeared first on CERN Courier.

]]>
Walter Oelert

Walter Oelert, founding spokesperson of COSY-11 and an experimentalist of rare foresight in the study of antimatter, passed away on 25 November 2024.

Walter was born in Dortmund on 14 July 1942. He studied physics in Hamburg and Heidelberg, achieving his diploma on solid-state detectors in 1969 and his doctoral thesis on transfer reactions on samarium isotopes in 1973. He spent the years from 1973 to 1975 working on transfer reactions of rare-earth elements as a postdoc in Pittsburgh under Bernie Cohen, after which he continued his nuclear-physics experiments at the Jülich cyclotron.

With the decision to build the “Cooler Synchrotron” (COSY) at Forschungszentrum Jülich (FZJ), he terminated his work on transfer reactions, summarised it in a review article, and switched to the field of medium-energy physics. At the end of 1985 he began a research stay at CERN, contributing to the PS185 and JETSET (PS202) experiments at the antiproton storage ring LEAR, while also collaborating with Swedish partners at the CELSIUS synchrotron in Uppsala. In 1986 he completed his habilitation at Ruhr University Bochum, where he was granted an APL professorship in 1996.

With the experience gained at CERN, Oelert proposed the construction of the international COSY-11 experiment as spokesperson, leading the way on studies of threshold production with full acceptance for the reaction products. From first data in 1996, COSY-11 operated successfully for 11 years, producing important results in several meson-production channels.

At CERN, Walter proposed the production of antihydrogen in the interaction of the antiproton beam with a xenon cluster target – the last experiment before the shutdown of LEAR. The experiment was performed in 1995, resulting in the production of nine antihydrogen atoms. This result was an important factor in the decision by CERN management to build the Antiproton Decelerator (AD). In order to continue antihydrogen studies, he received substantial support from Jülich for a partnership in the new ATRAP experiment, which aimed at CPT-violation studies through antihydrogen spectroscopy.

Walter retired in 2008, but kept active in antiproton activities at the AD for more than 10 years, during which time he was affiliated with the Johannes Gutenberg University of Mainz. He was one of the main driving forces on the way to the Extra Low ENergy Antiproton ring (ELENA), which was finally built within time and financial constraints, and drastically improved the performance of the antimatter experiments. He also received a number of honours, notably the Merentibus Medal of the Jagiellonian University of Kraków, and was elected as an external member of the Polish Academy of Arts and Sciences.

Walter’s personality – driven, competent, visionary, inspiring, open minded and caring – was the type of glue that made proactive, successful and happy collaborations.

The post Walter Oelert 1942–2024 appeared first on CERN Courier.

]]>
News Walter Oelert, founding spokesperson of COSY-11 and an experimentalist of rare foresight in the study of antimatter, passed away on 25 November 2024. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_Obits_Oelert_feature.jpg
Grigory Vladimirovich Domogatsky 1941–2024 https://cerncourier.com/a/grigory-vladimirovich-domogatsky-1941-2024/ Fri, 16 May 2025 15:34:38 +0000 https://cerncourier.com/?p=113195 Grigory Vladimirovich Domogatsky, spokesman of the Baikal Neutrino Telescope project, passed away on 17 December 2024 at the age of 83.

The post Grigory Vladimirovich Domogatsky 1941–2024 appeared first on CERN Courier.

]]>
Grigory Vladimirovich Domogatsky, spokesman of the Baikal Neutrino Telescope project, passed away on 17 December 2024 at the age of 83.

Born in Moscow in 1941, Domogatsky obtained his PhD in 1970 from Moscow Lomonosov University and then worked at the Moscow Lebedev Institute. There, he studied the processes of the interaction of low-energy neutrinos with matter and neutrino emission during the gravitational collapse of stars. His work was essential for defining the scientific programme of the Baksan Neutrino Observatory. Already at that time, he had put forward the idea of a network of underground detectors to register neutrinos from supernovae, a programme realised decades later by the current SuperNova Early Warning System, SNEWS. Together with his co-author Dmitry Nadyozhin, he showed that neutrinos released in star collapses are drivers in the formation of isotopes such as Li-7, Be-8 and B-11 in the supernova shell, and that these processes play an important role in cosmic nucleosynthesis.

In 1980 Domogatsky obtained his doctor of science (equivalent to the Western habilitation) and in the same year became the head of the newly founded Laboratory of Neutrino Astrophysics at High Energies at the Institute for Nuclear Research of the Russian Academy of Sciences, INR RAS. The central goal of this laboratory was, and is, the construction of an underwater neutrino telescope in Lake Baikal, a task to which he devoted all his life from that point on. He created a team of enthusiastic young experimentalists, starting site explorations in the following year and obtaining first physics results with test configurations later in the 1980s. At the end of the 1980s, the plan for a neutrino telescope comprising about 200 photomultipliers (NT200) was born, and realised together with German collaborators in the 1990s. The economic crisis following the breakdown of the Soviet Union would surely have ended the project if not for Domogatsky’s unshakable will and strong leadership. With the partial configuration of the project deployed in 1994, first neutrino candidates were identified in 1996: the proof of concept for underwater neutrino telescopes had been delivered.

He shaped the image of the INR RAS and the field of neutrino astronomy

NT200 was shut down a decade ago, by which time a new cubic-kilometre telescope in Lake Baikal was already under construction. This project was christened Baikal–GVD, with GVD standing for Gigaton Volume Detector – though the letters could equally well denote Domogatsky’s initials. Thus far it has reached about half the size of the IceCube neutrino telescope at the South Pole.

Domogatsky was born to a family of artists and was surrounded by an artistic atmosphere whilst growing up. His grandfather was a famous sculptor, his father a painter, woodcrafter and book illustrator. His brother followed in his father’s footsteps, while Grigory himself married Svetlana, an art historian. He possessed an outstanding literary, historical and artistic education, and all who met him were struck by his knowledge, his old-fashioned noblesse and his intellectual charm.

Domogatsky was a corresponding member of the Russian Academy of Sciences and the recipient of many prestigious awards, most notably the Bruno Pontecorvo Prize and the Pavel Cherenkov Prize. With his leadership in the Baikal project, Grigory Domogatsky shaped the scientific image of the INR RAS and the field of neutrino astronomy. He will be remembered as a carefully weighing scientist, as a person of incredible stamina, and as the unforgettable father figure of the Baikal project.


]]>
Elena Accomando 1965–2025 https://cerncourier.com/a/elena-accomando-1965-2025/ Fri, 16 May 2025 15:01:24 +0000 https://cerncourier.com/?p=113203 Elena Accomando, a distinguished collider phenomenologist, passed away on 7 January 2025.

The post Elena Accomando 1965–2025 appeared first on CERN Courier.

]]>
Elena Accomando

Elena Accomando, a distinguished collider phenomenologist, passed away on 7 January 2025.

Elena received her laurea in physics from the Sapienza University of Rome in 1993, followed by a PhD from the University of Torino in 1997. Her early career included postdoctoral positions at Texas A&M University and the Paul Scherrer Institute, as well as a staff position at the University of Torino. In 2009 she joined the University of Southampton as a lecturer, earning promotions to associate professor in 2018 and professor in 2022.

Elena’s research focused on the theory and phenomenology of particle physics at colliders, searching for new forces and exotic supersymmetric particles at the Large Hadron Collider. She explored a wide range of Beyond the Standard Model (BSM) scenarios at current and future colliders. Her work included studies of new gauge bosons such as the Z′, extra-dimensional models, and CP-violating effects in BSM frameworks, as well as dark-matter scattering on nuclei and quantum corrections to vector-boson scattering. She was also one of the authors of “WPHACT”, a Monte Carlo event generator developed for four-fermion physics at electron–positron colliders, which remains a valuable tool for precision studies. Elena investigated novel signatures in decays of the Higgs boson, aiming to uncover deviations from Standard Model expectations, and was known for connecting theory with experimental applications, proposing phenomenological strategies that were both realistic and impactful. She was well known as a research collaborator at CERN and other international institutions.

She authored the WPHACT Monte Carlo event generator that remains a valuable tool for precision studies

Elena played an integral role in shaping the academic community at Southampton and was greatly admired as a teacher. Her remarkable professional achievements were paralleled by strength and optimism in the face of adversity. Despite her long illness, she remained a positive presence, planning ahead for her work and her family. Her colleagues and students remember her as a brilliant scientist, an inspiring mentor and a warm and compassionate person. She will also be missed by her longstanding colleagues from the CMS collaboration at Rutherford Appleton Laboratory.

Elena is survived by her devoted husband, Francesco, and their two daughters.


]]>
Shoroku Ohnuma 1928–2024 https://cerncourier.com/a/shoroku-ohnuma-1928-2024/ Fri, 16 May 2025 14:51:11 +0000 https://cerncourier.com/?p=113207 Shoroku Ohnuma, who made significant contributions to accelerator physics in the US and Japan, passed away on 4 February 2024, at the age of 95.

The post Shoroku Ohnuma 1928–2024 appeared first on CERN Courier.

]]>
Shoroku Ohnuma

Shoroku Ohnuma, who made significant contributions to accelerator physics in the US and Japan, passed away on 4 February 2024, at the age of 95.

Born on 19 April 1928, in Akita Prefecture, Japan, Ohnuma graduated from the University of Tokyo’s Physics Department in 1950. After studying with Yoichiro Nambu at Osaka University, he came to the US as a Fulbright scholar in 1953, obtaining his doctorate from the University of Rochester in 1956. He maintained a lifelong friendship with neutrino astrophysicist Masatoshi Koshiba, who received his degree from Rochester in the same period. When Nambu won the Nobel Prize in Physics, the Japanese national newspaper Asahi Shimbun published a photo showing Ohnuma with Koshiba, Richard Feynman and Nambu – Ohnuma would often joke that he was the only one pictured who did not win a Nobel.

Ohnuma spent three years doing research at Yale University before returning to Japan to teach at Waseda University. In 1962 he returned to the US with his wife and infant daughter Keiko to work on linear accelerators at Yale. In 1970 he joined the Fermi National Accelerator Laboratory (FNAL), where he contributed significantly to the completion of the Tevatron before moving to the University of Houston in 1986, where he worked on the Superconducting Super Collider (SSC). While he claimed to have moved to Texas because his work at FNAL was done, he must have had high hopes for the SSC, which the first Bush administration slated to be built in Dallas in 1989. Young researchers who worked with him, including me, made up an energetic but inexperienced working team of accelerator researchers. With many FNAL-linked people such as Helen Edwards in the leadership of SSC, we frequently invited professor Ohnuma to Dallas to review the overall design. He was a mentor to me for more than 35 years after our work together at the Texas Accelerator Center in 1988.

Ohnuma reviewed accelerator designs and educated students and young researchers in the US and Japan

After Congress cancelled the SSC in 1993, Ohnuma continued his research at the University of Houston until 1999. Starting in the late 1990s, he visited the JHF, later J-PARC, accelerator group led by Yoshiharu Mori at the University of Tokyo’s Institute for Nuclear Study almost every year. As a member of JHF’s first International Advisory Committee, he reviewed the accelerator design and educated students and young researchers, whom he considered his grandchildren. Indeed, his guidance had grown gentler and more grandfatherly.

In 2000, in semi-retirement, Ohnuma settled at the University of Hawaii, where he continued to frequent the campus most weekdays until his death. Even after the loss of his wife in 2021, he continued walking every day, taking a bus to the university, doing volunteer work at a senior facility, and visiting the Buddhist temple every Sunday. His interest in Zen Buddhism had grown after retirement, and he resolved to copy the Heart Sutra a thousand times on rice paper, with the sumi brush and ink prepared from scratch. We were entertained by his panic at having nearly achieved his goal too soon before his death. The Heart Sutra is a foundational text in Zen Buddhism, chanted on every formal occasion. Undertaking to copy it 1000 times exemplified his considerable tenacity and dedication. Whatever he undertook in the way of study, he was unhurried and unworried, optimistic and cheerful, and persistent.


]]>
Leading the industry in Monte Carlo simulations for accelerator applications https://cerncourier.com/a/leading-the-industry-in-monte-carlo-simulations-for-accelerator-applications/ Mon, 12 May 2025 14:07:13 +0000 https://cerncourier.com/?p=113263 Particle-beam technology has wide applications in science and industry. Specifically, high-energy x-ray production is being investigated for FLASH radiotherapy, 14 MeV neutrons are being produced for fusion energy production, and compact electron accelerators are being built for medical-device sterilisation. In each instance it is critical to guarantee that the particle beam is delivered to the end […]

The post Leading the industry in Monte Carlo simulations for accelerator applications appeared first on CERN Courier.

]]>
Figure 1

Particle-beam technology has wide applications in science and industry. Specifically, high-energy x-ray production is being investigated for FLASH radiotherapy, 14 MeV neutrons are being produced for fusion energy, and compact electron accelerators are being built for medical-device sterilisation. In each instance it is critical to guarantee that the particle beam is delivered to the end user with the correct makeup, and also to ensure that technicians and sensitive equipment are shielded from the secondary particles created in scattering interactions. There is no precise way to predict the random walk of any individual particle as it encounters materials and alloys of different shapes within a complicated apparatus. Monte Carlo methods simulate the random paths of many millions of independent particles, revealing the tendencies of these particles in aggregate. Assessing shielding effectiveness is particularly challenging computationally, as the very nature of shielding means simulations produce low particle rates.

Figure 2

A common technique for shielding calculations takes these random-walk simulations a step further by applying variance-reduction techniques. Variance-reduction techniques introduce biases into the simulation in a controlled way, increasing the number of particles emerging from the shielding while remaining faithful to the overall conservation of particle flux. In some regions within the shielding, particles are split into independent “daughter” particles with independent pathways but some common history. Each daughter is assigned a weight value so that the overall flux of particles is kept constant. In this way, it is possible to predict the behaviour of a one-in-a-million event without having to simulate one million particle trajectories. The performance of these techniques is shown in figure 2.
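As an illustration of the weight bookkeeping behind particle splitting (a toy sketch, not RadiaSoft’s implementation), consider a one-dimensional purely absorbing slab: at each splitting surface a surviving particle is replaced by two daughters, each carrying half its weight, so the expected scored flux stays unbiased.

```python
import math
import random

def transmission_with_splitting(sigma=1.0, thickness=5.0, n=20_000,
                                planes=4, split=2, seed=1):
    """Toy 1D shielding Monte Carlo: estimate the fraction of particles
    crossing a purely absorbing slab, using particle splitting.
    At each of `planes` equally spaced surfaces, a surviving particle is
    replaced by `split` daughters of weight w/split, so the expected
    total weight (the particle flux) is conserved."""
    rng = random.Random(seed)
    seg = thickness / (planes + 1)
    p_seg = math.exp(-sigma * seg)      # survival probability per segment
    scored = 0.0
    for _ in range(n):
        stack = [(0, 1.0)]              # (segments crossed, statistical weight)
        while stack:
            k, w = stack.pop()
            if rng.random() < p_seg:    # particle survives the next segment
                k += 1
                if k == planes + 1:
                    scored += w         # escaped through the far face: tally weight
                else:                   # splitting surface: spawn daughters
                    stack.extend([(k, w / split)] * split)
    return scored / n

estimate = transmission_with_splitting()
exact = math.exp(-5.0)                  # analytic transmission exp(-sigma * t)
print(f"MC estimate {estimate:.5f} vs exact {exact:.5f}")
```

Because the daughters share their parent’s history up to the splitting surface, many more of them reach the deep-shielding tally than in an analogue simulation with the same number of source particles, while the weighted estimate remains unbiased.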

Figure 3

These kinds of simulations take on new importance with the global race to develop fusion reactors for energy production. Materials will be exposed to conditions they’ve never seen before, mere feet from the fusion reactions that sustain stars. It is imperative to understand the neutron flux from fusion reactions and how it affects critical components in the sustained operation of fusion facilities if they are to meet our ever-growing energy needs. Monte Carlo simulation packages are capable of both distributed-memory (MPI) and shared-memory (OpenMP) parallel computation on the world’s largest supercomputers, engaging hundreds of thousands of cores at once. This enables simulations of billions of particle histories. Together with variance reduction, these powerful simulation tools enable precise estimation of particle fluxes in even the most deeply shielded regions.

RadiaSoft offers modelling of neutron radiation transport, with parallel computation and variance-reduction capabilities, through Sirepo, its browser-based interface. Examples of fusion tokamak simulations can be seen above. RadiaSoft is also available for comprehensive consultation in x-ray production, radiation shielding and dose-delivery simulations across a wide range of applications.


]]>
Advertising feature https://cerncourier.com/wp-content/uploads/2025/05/CCMarApr25_RADIASOFT_advertorial_feature.jpg
An international year like no other https://cerncourier.com/a/an-international-year-like-no-other/ Thu, 03 Apr 2025 09:41:02 +0000 https://cerncourier.com/?p=112713 The International Year of Quantum inaugural event was organised at UNESCO Headquarters in Paris in February 2025.

The post An international year like no other appeared first on CERN Courier.

]]>
Last June, the United Nations and UNESCO proclaimed 2025 the International Year of Quantum (IYQ): here is why it really matters.

Everything started a century ago, when scientists like Niels Bohr, Max Planck and Wolfgang Pauli, but also Albert Einstein, Erwin Schrödinger and many others, came up with ideas that would revolutionise our description of the subatomic world. This is when physics transitioned from being a deterministic discipline to a mostly probabilistic one, at least when we look at subatomic scales. Brave predictions of weird behaviours started to attract the attention of an increasingly large part of the scientific community, and continued to appear decade after decade. The most popular are particle entanglement, the superposition of states and the tunnelling effect. These are also some of the most impactful quantum effects in terms of the technologies that emerged from them.

One hundred years on, the scientific community is somewhat acclimatised to observing and measuring the probabilistic nature of particles and quanta. Lasers, MRI and even sliding doors would not exist without the pioneering studies of quantum mechanics. However, it is widely held that today we are on the edge of a second quantum revolution.

“International years” are proclaimed to raise awareness, focus global attention, encourage cooperation and mobilise resources towards a certain topic or research domain. The International Year of Quantum also aims to invert the approach taken with artificial intelligence (AI), a technology that came along faster than any attempt to educate and prepare the layperson for its adoption. As we know, this is creating a lot of scepticism towards AI, which is often felt to be too complex and to leave its users with a sense of lost control.

The second quantum revolution has begun and we are at the dawn of future powerful applications

The second quantum revolution has begun in recent years and, while we are rapidly moving from simply using the properties of the quantum world to controlling individual quantum systems, we are still at the dawn of future powerful applications. Some quantum sensors are already in use, and quantum cryptography is quite well understood. However, quantum bits require further study, and the exploration of other quantum fields has barely begun.

Unlike AI, we still have time to push for a more inclusive approach to the development of new technology. During the international year, hundreds of events, workshops and initiatives will emphasise the role of global collaboration in the development of accessible quantum technologies. Through initiatives like the Quantum Technology Initiative (QTI) and the Open Quantum Institute (OQI), CERN is actively contributing not only to scientific research but also to promoting the advancement of its applications for the benefit of society.

The IYQ inaugural event was organised at UNESCO Headquarters in Paris in February 2025. At CERN, this year’s public event season is devoted to the quantum year, and will present talks, performances, a film festival and more. The full programme is available at visit.cern/events.


]]>
CMS observes top–antitop excess https://cerncourier.com/a/cms-observes-top-antitop-excess-2/ Wed, 02 Apr 2025 10:20:07 +0000 https://cerncourier.com/?p=112962 The signal could be caused by a quasi-bound top–antitop meson commonly called "toponium".

The post CMS observes top–antitop excess appeared first on CERN Courier.

]]>
Threshold excess

CERN’s Large Hadron Collider continues to deliver surprises. While searching for additional Higgs bosons, the CMS collaboration may have instead uncovered evidence for the smallest composite particle yet observed in nature – a “quasi-bound” hadron made up of the most massive and shortest-lived fundamental particle known to science and its antimatter counterpart. The findings, which do not yet constitute a discovery claim and could also be susceptible to other explanations, were reported this week at the Rencontres de Moriond conference in the Italian Alps.

Almost all of the Standard Model’s shortcomings motivate the search for additional Higgs bosons. Their properties are usually assumed to be simple. Much as the 125 GeV Higgs boson discovered in 2012 appears to interact with each fundamental fermion with a strength proportional to the fermion’s mass, theories postulating additional Higgs bosons generally expect them to couple more strongly to heavier quarks. This puts the singularly massive top quark at centre stage. If an additional Higgs boson has a mass greater than about 345 GeV and can therefore decay to a top quark–antiquark pair, this should dominate the way it decays inside detectors. Hunting for bumps in the invariant mass spectrum of top–antitop pairs is therefore often considered to be the key experimental signature of additional Higgs bosons above the top–antitop production threshold.

The CMS experiment has observed just such a bump. Intriguingly, however, it is located at the lower limit of the search, right at the top-quark pair production threshold itself, leading CMS to also consider an alternative hypothesis long considered difficult to detect: a top–antitop quasi-bound state known as toponium (see “Threshold excess” figure).

The toponium hypothesis is very exciting as we previously did not expect to be able to see it at the LHC

“When we started the project, toponium was not even considered as a background to this search,” explains CMS physics coordinator Andreas Meyer (DESY). “In our analysis today we are only using a simplified model for toponium – just a generic spin-0 colour-singlet state with a pseudoscalar coupling to top quarks. The toponium hypothesis is very exciting as we previously did not expect to be able to see it at the LHC.”

Though other explanations can’t be ruled out, CMS finds the toponium hypothesis to be sufficient to explain the observed excess. The size of the excess is consistent with the latest theoretical estimate of the cross section to produce pseudoscalar toponium of around 6.4 pb.

“The cross section we obtain for our simplified hypothesis is 8.8 pb with an uncertainty of about 15%,” explains Meyer. “One can infer that this is significantly above five sigma.”

The smallest hadron

If confirmed, toponium would be the final example of quarkonium – a term for quark–antiquark states formed from heavy charm, bottom and perhaps top quarks. Charmonium (charm–anticharm) mesons were discovered at SLAC and Brookhaven National Laboratory in the November Revolution of 1974. Bottomonium (bottom–antibottom) mesons were discovered at Fermilab in 1977. These heavy quarks move relatively slowly compared to the speed of light, allowing the strong interaction to be modelled by a static potential as a function of the separation between them. When the quarks are far apart, the potential is proportional to their separation due to the self-interacting gluons forming an elongating flux tube, yielding a constant force of attraction. At close separations, the potential is due to the exchange of individual gluons and is Coulomb-like in form, and inversely proportional to separation, leading to an inverse-square force of attraction. This is the domain where compact quarkonium states are formed, in a near perfect QCD analogy to positronium, wherein an electron and a positron are bound by photon exchange. The Bohr radii of the ground states of charmonium and bottomonium are approximately 0.3 fm and 0.2 fm, and bottomonium is thought to be the smallest hadron yet discovered. Given its larger mass, toponium’s Bohr radius would be an order of magnitude smaller.
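As a back-of-the-envelope sketch (the article does not spell out the formulas), the two regimes described above are commonly combined in the Cornell parametrisation of the static potential, and the positronium analogy then sets the size of the bound state:

```latex
V(r) \;=\; -\frac{4}{3}\,\frac{\alpha_s}{r} \;+\; \sigma r,
\qquad
a \;\simeq\; \frac{1}{C_F\,\alpha_s\,\mu},
\quad C_F = \tfrac{4}{3}, \quad \mu = \frac{m_q}{2}
```

Since the reduced mass μ grows with the quark mass while αs runs only slowly, the Bohr radius shrinks roughly as 1/mq, in line with the order-of-magnitude reduction quoted for toponium relative to bottomonium.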

Angular analysis

For a long time it was thought that toponium bound states were unlikely to be detected in hadron–hadron collisions. The top quark is the most massive and the shortest-lived of the known fundamental particles. It decays into a bottom quark and a real W boson in the time it takes light to travel just 0.1 fm, leaving little time for a hadron to form. Toponium would be unique among quarkonia in that its decay would be triggered by the weak decay of one of its constituent quarks rather than the annihilation of its constituent quarks into photons or gluons. Toponium is expected to decay at twice the rate of the top quark itself, with a width of approximately 3 GeV.

CMS first saw a 3.5 sigma excess in a 2019 search studying the mass range above 400 GeV, based on 35.9 fb−1 of proton–proton collisions at 13 TeV from 2016. Now armed with 138 fb–1 of collisions from 2016 to 2018, the collaboration extended the search down to the top–antitop production threshold at 345 GeV. Searches are complicated by the possibility that quantum interference between background and Higgs signal processes could generate an experimentally challenging peak–dip structure with a more or less pronounced bump.

“The signal reported by CMS, if confirmed, could be due either to a quasi-bound top–antitop meson, commonly called ‘toponium’, or possibly an elementary spin-zero boson such as appears in models with additional Higgs bosons, or conceivably even a combination of the two,” says theorist John Ellis of King’s College London. “The mass of the lowest-lying toponium state can be calculated quite accurately in QCD, and is expected to lie just below the nominal top–antitop threshold. However, this threshold is smeared out by the short lifetime of the top quark, as well as the mass resolution of an LHC detector, so toponium would appear spread out as a broad excess of events in the final states with leptons and jets that generally appear in top decays.”

Quantum numbers

An important task of the analysis is to investigate the quantum numbers of the signal. It could be a scalar particle, like the Higgs boson discovered in 2012, or a pseudoscalar particle – a different type of spin-0 object with odd rather than even parity. To measure its spin-parity, CMS studied the angular correlations of the top-quark-pair decay products, which retain information on the original quantum state. The decays bear all the experimental hallmarks of a pseudoscalar particle, consistent with toponium (see “Angular analysis” figure) or the pseudoscalar Higgs bosons common to many theories featuring extended Higgs sectors.

“The toponium state produced at the LHC would be a pseudoscalar boson, whose decays into these final states would have characteristic angular distributions, and the excess of events reported by CMS exhibits the angular correlations expected for such a pseudoscalar state,” explains Ellis. “Similar angular correlations would be expected in the decays of an elementary pseudoscalar boson, whereas scalar-boson decays would exhibit different angular correlations that are disfavoured by the CMS analysis.”

Whatever the true cause of the excess, the analyses reflect a vibrant programme of sensitive measurements at the LHC – and the possibility of a timely discovery

Two main challenges now stand in the way of definitively identifying the nature of the excess. The first is to improve the modelling of the creation of top-quark pairs at the LHC, including the creation of bound states at the threshold. The second challenge is to obtain consistency with the ATLAS experiment. “ATLAS had similar studies in the past but with a more conservative approach on the systematic uncertainties,” says ATLAS physics coordinator Fabio Cerutti (LBNL). “This included, for example, larger uncertainties related to parton showers and other top-modelling effects. To shed more light on the CMS observation, be it a new boson, a top quasi-bound state, or some limited understanding of the modelling of top–antitop production at threshold, further studies are needed on our side. We have several analysis teams working on that. We expect to have new results with improved modelling of the top-pair production at threshold and additional variables sensitive to both a new pseudo-scalar boson or a top quasi-bounded state very soon.”

Whatever the true cause of the excess, the analyses reflect a vibrant programme of sensitive measurements at the LHC – and the possibility of a timely discovery.

“Discovering toponium 50 years after the November Revolution would be an unanticipated and welcome golden anniversary present for its charmonium cousin that was discovered in 1974,” concludes Ellis. “The prospective observation and measurement of the vector state of toponium in e⁺e⁻ collisions around 350 GeV have been studied in considerable theoretical detail, but there have been rather fewer studies of the observability of pseudoscalar toponium at the LHC. In addition to the angular correlations observed by CMS, the effective production cross section of the observed threshold effect is consistent with non-relativistic QCD calculations. More detailed calculations will be desirable for confirmation that another quarkonium family member has made its appearance, though the omens are promising.”


]]>
The Hubble tension https://cerncourier.com/a/the-hubble-tension/ Wed, 26 Mar 2025 15:22:42 +0000 https://cerncourier.com/?p=112638 Vivian Poulin asks if the tension between a direct measurement of the Hubble constant and constraints from the early universe could be resolved by new physics.

The post The Hubble tension appeared first on CERN Courier.

]]>

Just like particle physics, cosmology has its own standard model. It is also powerful in prediction, and brings new mysteries and profound implications. The first was the realisation in 1917 that a homogeneous and isotropic universe must be expanding. This led Einstein to modify his general theory of relativity by introducing a cosmological constant (Λ) to counteract gravity and achieve a static universe – an act he labelled his greatest blunder when Edwin Hubble provided observational proof of the universe’s expansion in 1929. Sixty-nine years later, Saul Perlmutter, Adam Riess and Brian Schmidt went further. Their observations of Type Ia supernovae (SN Ia) showed that the universe’s expansion was accelerating. Λ was revived as “dark energy”, now estimated to account for 68% of the total energy density of the universe.

On large scales the dominant motion of galaxies is the Hubble flow, the expansion of the fabric of space itself

The second dominant component of the model emerged not from theory but from 50 years of astrophysical sleuthing. From the “missing mass problem” in the Coma galaxy cluster in the 1930s to anomalous galaxy-rotation curves in the 1970s, evidence built up that additional gravitational heft was needed to explain the formation of the large-scale structure of galaxies that we observe today. The 1980s therefore saw the proposal of cold dark matter (CDM), now estimated to account for 27% of the energy density of the universe, and actively sought by diverse experiments across the globe and in space.

Dark energy and CDM supplement the remaining 5% of normal matter to form the ΛCDM model. ΛCDM is a remarkable six-parameter framework that models 13.8 billion years of cosmic evolution from quantum fluctuations during an initial phase of “inflation” – a hypothesised expansion of the universe by 26 to 30 orders of magnitude in roughly 10⁻³⁶ seconds at the beginning of time. ΛCDM successfully models cosmic microwave background (CMB) anisotropies, the large-scale structure of the universe, and the redshifts and distances of SN Ia. It achieves this despite big open questions: the nature of dark matter, the nature of dark energy and the mechanism for inflation.

The Hubble tension

Cosmologists are eager to guide beyond-ΛCDM model-building efforts by testing its end-to-end predictions, and the model now seems to be failing the most important test: predicting the expansion rate of the universe.

One of the main predictions of ΛCDM is the average energy density of the universe today. This determines its current expansion rate, otherwise known as the Hubble constant (H0). The most precise ΛCDM prediction comes from a fit to CMB data from ESA’s Planck satellite (operational 2009 to 2013), which yields H0 = 67.4 ± 0.5 km/s/Mpc. This can be tested against direct measurements in our local universe, revealing a surprising discrepancy (see “The Hubble tension” figure).

At sufficiently large distances, the dominant motion of galaxies is the Hubble flow – the expansion of the fabric of space itself. Directly measuring the expansion rate of the universe calls for fitting the increase in the recession velocity of galaxies deep within the Hubble flow as a function of distance. The gradient is H0.
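As a numerical sketch of this fit, the gradient can be recovered by a least-squares fit through the origin of recession velocity against distance. The galaxy distances, velocities and scatter below are invented for illustration, not real survey data:

```python
import numpy as np

# Toy illustration: recover H0 as the gradient of recession velocity
# versus distance for galaxies deep in the Hubble flow.
rng = np.random.default_rng(0)
true_h0 = 70.0                                 # km/s/Mpc, assumed for this sketch
distances = rng.uniform(100, 600, 50)          # Mpc, well inside the Hubble flow
velocities = true_h0 * distances + rng.normal(0, 200, 50)  # peculiar-velocity scatter

# Least-squares fit of v = H0 * d through the origin: H0 = sum(v*d) / sum(d*d)
h0_fit = np.sum(velocities * distances) / np.sum(distances**2)
print(f"fitted H0 = {h0_fit:.1f} km/s/Mpc")
```

With realistic peculiar-velocity scatter of a few hundred km/s, the fitted gradient converges on the true expansion rate once enough galaxies are included.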

Receding supernovae

While high-precision spectroscopy allows recession velocity to be precisely measured using the redshifts (z) of atomic spectra, it is more difficult to measure the distance to astrophysical objects. Geometrical methods such as parallax are imprecise at large distances, but “standard candles” with somewhat predictable luminosities such as cepheids and SN Ia allow distance to be inferred using the inverse-square law. Cepheids are pulsating post-main-sequence stars whose radius and observed luminosity oscillate over a period of one to 100 days, driven by the ionisation and recombination of helium in their outer layers, which increases opacity and traps heat; their period increases with their true luminosity. Before going supernova, SN Ia were white dwarf stars in binary systems; when the white dwarf accretes enough mass from its companion star, runaway carbon fusion produces a nearly standardised peak luminosity for a period of one to two weeks. Only SN Ia are deep enough in the Hubble flow to allow precise measurements of H0. When cepheids are observable in the same galaxies, they can be used to calibrate the luminosity of the SN Ia.
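The inverse-square-law step can be made concrete with a toy calculation. The luminosity and flux values below are invented round numbers, not real supernova measurements:

```python
import math

# Standard-candle principle: if the intrinsic luminosity L is known,
# the observed flux F gives the distance via the inverse-square law:
#   F = L / (4*pi*d^2)   =>   d = sqrt(L / (4*pi*F))
L = 1.0e36             # assumed peak luminosity, watts (illustrative)
F = 8.0e-15            # measured flux, watts per square metre (illustrative)
d_metres = math.sqrt(L / (4 * math.pi * F))
d_mpc = d_metres / 3.0857e22   # metres per megaparsec
print(f"inferred distance ~ {d_mpc:.0f} Mpc")
```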

Distance ladder

At present, the main driver of the Hubble tension is a 2022 measurement of H0 by the SH0ES (Supernova H0 for the Equation of State) team led by Adam Riess. As the SN Ia luminosity is not known from first principles, SH0ES built a “distance ladder” to calibrate the luminosity of 42 SN Ia within 37 host galaxies. The SN Ia are calibrated against intermediate-distance cepheids, and the cepheids are calibrated against four nearby “geometric anchors” whose distance is known through a geometric method (see “Distance ladder” figure). The geometric anchors are: Milky Way parallaxes from ESA’s Gaia mission; detached eclipsing binaries in the Large and Small Magellanic Clouds (LMC and SMC); and the “megamaser” galaxy host NGC4258, where water molecules in the accretion disk of a supermassive black hole emit Doppler-shifting microwave maser photons.

The great strength of the SH0ES programme is its use of NASA and ESA’s Hubble Space Telescope (HST, 1990–) at all three rungs of the distance ladder, bypassing the need for cross-calibration between instruments. SN Ia can be calibrated out to 40 Mpc. As a result, in 2022 SH0ES used measurements of 300 or so high-z SN Ia deep within the Hubble flow to measure H0 = 73.04 ± 1.04 km/s/Mpc. This is in more than 5σ tension with Planck’s ΛCDM prediction of 67.4 ± 0.5 km/s/Mpc.

Baryon acoustic oscillation

The sound horizon

The value of H0 obtained from fitting Planck CMB data has been shown to be robust in two key ways.

First, Planck data can be bypassed by combining CMB data from NASA’s WMAP probe (2001–2010) with observations by ground-based telescopes. WMAP in combination with the Atacama Cosmology Telescope (ACT, 2007–2022) yields H0 = 67.6 ± 1.1 km/s/Mpc. WMAP in combination with the South Pole Telescope (SPT, 2007–) yields H0 = 68.2 ± 1.1 km/s/Mpc. Second, and more intriguingly, CMB data can be bypassed altogether.

In the early universe, Compton scattering between photons and electrons was so prevalent that the universe behaved as a plasma. Quantum fluctuations from the era of inflation propagated like sound waves until the era of recombination, when the universe had cooled sufficiently for CMB photons to escape the plasma when protons and electrons combined to form neutral atoms. This propagation of inflationary perturbations left a characteristic scale known as the sound horizon in both the acoustic peaks of the CMB and in “baryon acoustic oscillations” (BAOs) seen in the large-scale structure of galaxy surveys (see “Baryon acoustic oscillation” figure). The sound horizon is the distance travelled by sound waves in the primordial plasma.

While the SH0ES measurement relies on standard candles, ΛCDM predictions rely instead on using the sound horizon as a “standard ruler” against which to compare the apparent size of BAOs at different redshifts, and thereby deduce the expansion rate of the universe. Under ΛCDM, the only two free parameters entering the computation of the sound horizon are the baryon density and the dark-matter density. Planck evaluates both by studying the CMB, but they can be obtained independently of the CMB by combining BAO measurements of the dark-matter density with Big Bang nucleosynthesis (BBN) measurements of the baryon density (see “Sound horizon” figure). The latest measurement by the Dark Energy Spectroscopic Instrument in Arizona (DESI, 2021–) yields H0 = 68.53 ± 0.80 km/s/Mpc, in 3.4σ tension with SH0ES and fully independent of Planck.

Sound horizon

The next few years will be crucial for understanding the Hubble tension, and may decide the fate of the ΛCDM model. ACT, SPT and the Simons Observatory in Chile (2024–) will release new CMB data. DESI, the Euclid space telescope (2023–) and the forthcoming LSST wide-field optical survey in Chile will release new galaxy surveys. “Standard siren” measurements from gravitational waves with electromagnetic counterparts may also contribute to the debate, although the original excitement has dampened with a lack of new events after GW170817. More accurate measurements of the age of the oldest objects may also provide an important new test. If H0 increases, the age of the universe decreases, and the SH0ES measurement favours less than 13.1 billion years at 2σ significance.
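The scaling behind that last statement can be sketched with the “Hubble time” 1/H0, a rough proxy for the age of the universe (the true ΛCDM age requires integrating over the full expansion history, so these numbers are only indicative):

```python
# A larger H0 implies a younger universe: 1/H0 sets the characteristic
# age scale. Unit conversions below use standard values.
KM_PER_MPC = 3.0857e19     # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16 # seconds in one gigayear

def hubble_time_gyr(h0_km_s_mpc):
    """Return the Hubble time 1/H0 in gigayears."""
    return KM_PER_MPC / h0_km_s_mpc / SECONDS_PER_GYR

print(hubble_time_gyr(67.4))   # Planck value -> about 14.5 Gyr
print(hubble_time_gyr(73.04))  # SH0ES value  -> about 13.4 Gyr
```

The higher SH0ES value shortens the age scale by roughly a billion years, which is why it sits in tension with age estimates of the oldest objects.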

The SH0ES measurement is also being checked directly. A key approach is to test the three-step calibration by seeking alternative intermediate standard candles besides cepheids. One candidate is the peak-luminosity “tip” of the red giant branch (TRGB) caused by the sudden start of helium fusion in low-mass stars. The TRGB is bright enough to be seen in distant galaxies that host SN Ia, though at distances smaller than that of cepheids.

Settling the debate

In 2019 the Carnegie–Chicago Hubble Program (CCHP) led by Wendy Freedman and Barry Madore calibrated SN Ia using the TRGB within the LMC and NGC4258 to determine H0 = 69.8 ± 0.8 (stat) ± 1.7 (syst) km/s/Mpc. An independent reanalysis including authors from the SH0ES collaboration later reported H0 = 71.5 ± 1.8 (stat + syst) km/s/Mpc. The difference between the results suggests that updated measurements with the James Webb Space Telescope (JWST) may settle the debate.

James Webb Space Telescope

Launched into space on 25 December 2021, JWST is perfectly adapted to improve measurements of the expansion rate of the universe thanks to its improved capabilities in the near-infrared band, where the impact of dust is reduced (see “Improved resolution” figure). Its four-times-better spatial resolution has already been used to re-observe a subsample of the 37 host galaxies home to the 42 SN Ia studied by SH0ES, as well as the geometric anchor NGC4258.

So far, all observations are in good agreement with the previous HST measurements. SH0ES used JWST observations to obtain up to a factor of 2.5 reduction in the dispersion of the period–luminosity relation for cepheids, with no indication of a bias in HST measurements. Most importantly, they were able to exclude, at 8σ significance, the confusion of cepheids with other stars as being responsible for the Hubble tension.

Meanwhile, the CCHP team provided new measurements based on three distance indicators: cepheids, the TRGB and a new “population based” method using the J-region of the asymptotic giant branch (JAGB) of carbon-rich stars, for which the magnitude of the mode of the luminosity function can serve as a distance indicator (see the last three rows of “The Hubble tension” figure).

Galaxies used to measure the Hubble constant

The new CCHP results suggest that cepheids may show a bias compared to the JAGB and TRGB methods. This conclusion was rapidly challenged by SH0ES, who identified a missing source of uncertainty and argued that the sample of SN Ia within hosts with primary distance indicators is too small to provide competitive constraints: they claim that sample variations of order 2.5 km/s/Mpc could explain why the JAGB and TRGB yield a lower value. Agreement may be reached when JWST has observed a larger sample of galaxies – across both teams, 19 of the 37 calibrated by SH0ES have been remeasured so far, plus the geometric anchor NGC 5468 (see “The usual suspects” figure).

At this stage, no single systematic error seems likely to fully explain the Hubble tension, and the problem is more severe than it appears. When calibrated, SN Ia and BAOs constrain not only H0, but the entire redshift range out to z ~ 1. This imposes strong constraints on any new physics introduced in the late universe. For example, recent DESI results suggest that the dynamics of dark energy at late times may not be exactly that of a cosmological constant, but the behaviour needed to reconcile Planck and SH0ES is strongly excluded.

Comparison of JWST and HST views

Rather than focusing on the value of the expansion rate, most proposals now focus on altering the calibration of either SN Ia or BAOs. For example, an unknown systematic error could alter the luminosity of SN Ia in our local vicinity, but we have no indication that their magnitude changes with redshift, and this solution appears to be very constrained.

The most promising solution appears to be that some new physics may have altered the value of the sound horizon in the early universe. As the sound horizon is used to calibrate both the CMB and BAOs, reducing it by 10 Mpc could match the value of H0 favoured by SH0ES (see “Sound horizon” figure). This can be achieved either by increasing the redshift of recombination or the energy density in the pre-recombination universe, giving the sound waves less time to propagate.

The best motivated models invoke additional relativistic species in the early universe such as a sterile neutrino or a new type of “dark radiation”. Another intriguing possibility is that dark energy played a role in the pre-recombination universe, boosting the expansion rate at just the right time. The wide variety and high precision of the data make it hard to find a simple mechanism that is not strongly constrained or finely tuned, but existing models have some of the right features. Future data will be decisive in testing them.

The post The Hubble tension appeared first on CERN Courier.

]]>
Feature Vivian Poulin asks if the tension between a direct measurement of the Hubble constant and constraints from the early universe could be resolved by new physics. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_coverpiccrop.jpg
Do muons wobble faster than expected? https://cerncourier.com/a/do-muons-wobble-faster-than-expected/ Wed, 26 Mar 2025 15:08:49 +0000 https://cerncourier.com/?p=112616 With a new measurement imminent, the Courier explores the experimental results and theoretical calculations used to predict ‘muon g-2’ – one of particle physics’ most precisely known quantities and the subject of a fast-evolving anomaly.

The post Do muons wobble faster than expected? appeared first on CERN Courier.

]]>
Vacuum fluctuation

Fundamental charged particles have spins that wobble in a magnetic field. This is just one of the insights that emerged from the equation Paul Dirac wrote down in 1928. Almost 100 years later, calculating how much they wobble – their “magnetic moment” – strains the computational sinews of theoretical physicists to a level rarely matched. The challenge is to sum all the possible ways in which the quantum fluctuations of the vacuum affect their wobbling.

The particle in question here is the muon. Discovered in cosmic rays in 1936, muons are more massive but ephemeral cousins of the electron. Their greater mass is expected to amplify the effect of any undiscovered new particles shimmering in the quantum haze around them, and measurements have disagreed with theoretical predictions for nearly 20 years. This suggests a possible gap in the Standard Model (SM) of particle physics, potentially providing a glimpse of deeper truths beyond it.

In the coming weeks, Fermilab is expected to present the final results of a seven-year campaign to measure this property, reducing uncertainties to a remarkable one part in 10¹⁰ on the magnetic moment of the muon, and 0.1 parts per million on the quantum corrections. Theorists are racing to match this with an updated prediction of comparable precision. The calculation is in good shape, except for the incredibly unusual eventuality that the muon briefly emits a cloud of quarks and gluons at just the moment it absorbs a photon from the magnetic field. But in quantum mechanics all possibilities count all the time, and the experimental precision is such that the fine details of “hadronic vacuum polarisation” (HVP) could be the difference between reinforcing the SM and challenging it.

Quantum fluctuations

The Dirac equation predicts that fundamental spin s = ½ particles have a magnetic moment given by g(eħ/2m)s, where the gyromagnetic ratio (g) is precisely equal to two. For the electron, this remarkable result was soon confirmed by atomic spectroscopy, before more precise experiments in 1947 indicated a deviation from g = 2 of a few parts per thousand. Expressed as a = (g-2)/2, the shift was a surprise and was named the magnetic anomaly or the anomalous magnetic moment.

Quantum fluctuation

This marked the beginning of an enduring dialogue between experiment and theory. It became clear that a relativistic field theory like the developing quantum electrodynamics (QED) could produce quantum fluctuations, shifting g from two. In 1948, Julian Schwinger calculated the first correction to be a = α/2π ≈ 0.00116, aligning beautifully with the 1947 experimental results. The emission and absorption of a virtual photon creates a cloud around the electron, altering its interaction with the external magnetic field (see “Quantum fluctuation” figure). Soon, other particles were seen to influence the calculation – and any undiscovered particles beyond the SM would do so too. Their existence might be revealed by a discrepancy between the SM prediction for a particle’s anomalous magnetic moment and its measured value.
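Schwinger’s leading correction is simple arithmetic to reproduce from the fine-structure constant:

```python
import math

# Schwinger's leading QED correction to the magnetic anomaly: a = alpha / (2*pi),
# using the standard value of the fine-structure constant.
alpha = 1 / 137.035999   # fine-structure constant (dimensionless)
a_schwinger = alpha / (2 * math.pi)
print(f"a = {a_schwinger:.7f}")  # ~0.00116, as quoted in the text
```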

As noted, the muon is an even more promising target than the electron, as its sensitivity to physics beyond QED is generically enhanced by the square of the ratio of their masses: a factor of around 43,000. In 1957, inspired by Tsung-Dao Lee and Chen-Ning Yang’s proposal that parity is violated in the weak interaction, Richard Garwin, Leon Lederman and Marcel Weinrich studied the decay of muons brought to rest in a magnetic field at the Nevis cyclotron at Columbia University. As well as showing that parity is broken in both pion and muon decays, they found g to be close to two for muons by studying their “precession” in the magnetic field as their spins circled around the field lines.
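The quoted factor of around 43,000 follows directly from the measured lepton masses:

```python
# New-physics contributions to the anomalous magnetic moment generically
# scale as the lepton mass squared, so the muon gains a factor (m_mu/m_e)^2
# in sensitivity over the electron.
m_e = 0.51099895     # electron mass, MeV/c^2
m_mu = 105.6583755   # muon mass, MeV/c^2
enhancement = (m_mu / m_e) ** 2
print(round(enhancement))  # about 43,000, as quoted
```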

Precision

This iconic experiment was the prototype of muon-precession projects at CERN (see CERN Courier September/October 2024 p53), later at Brookhaven National Laboratory and now Fermilab (see “Precision” figure). By the end of the Brookhaven project, a disagreement between the measured value of “aμ” – the subscript indicating g-2 for the muon rather than the electron – and the SM prediction was too large to ignore, motivating the present round of measurements at Fermilab and rapidly improving theory refinements.

g-2 and the Standard Model

Today, a prediction for aμ must include the effects of all three of the SM’s interactions and all of its elementary particles. The leading contributions are from electrons, muons and tau leptons interacting electromagnetically. These QED contributions can be computed in an expansion where each successive term contributes only around 1% of the previous one. QED effects have been computed to fifth order, yielding an extraordinary precision of 0.9 parts per billion – significantly more precise than needed to match measurements of the muon’s g-2, though not the electron’s. It took over half a century to achieve this theoretical tour de force.

The weak interaction gives the smallest contribution to aμ, a million times less than QED. These contributions can also be computed in an expansion. Second order suffices. All SM particles except gluons need to be taken into account.

Gluons are responsible for the strong interaction and appear in the third and last set of contributions. These are described by QCD and are called “hadronic” because quarks and gluons form hadrons at the low energies relevant for the muon g-2 (see “Hadronic contributions” figure). HVP is the largest, though 10,000 times smaller than the corrections due to QED. “Hadronic light-by-light scattering” (HLbL) is a further 100 times smaller due to the exchange of an additional photon. The challenge is that the strong-interaction effects cannot be approximated by a perturbative expansion. QCD is highly nonlinear and different methods are needed.

Data or the lattice?

Even before QCD was formulated, theorists sought to subdue the wildness of the strong force using experimental data. In the case of HVP, this triggered experimental investigations of e⁺e⁻ annihilation into hadrons and later hadronic tau–lepton decays. Though apparently disparate, the production of hadrons in these processes can be related to the clouds of virtual quarks and gluons that are responsible for HVP.

Hadronic contributions

A more recent alternative makes use of massively parallel numerical simulations to directly solve the equations of QCD. To compute quantities such as HVP or HLbL, “lattice QCD” requires hundreds of millions of processor-core hours on the world’s largest supercomputers.

In preparation for Fermilab’s first measurement in 2021, the Muon g-2 Theory Initiative, spanning more than 120 collaborators from over 80 institutions, was formed to provide a reference SM prediction that was published in a 2020 white paper. The HVP contribution was obtained with a precision of a few parts per thousand using a compilation of measurements of e⁺e⁻ annihilation into hadrons. The HLbL contribution was determined from a combination of data-driven and lattice–QCD methods. Though even more complex to compute, HLbL is needed only to 10% precision, as its contribution is smaller.

After summing all contributions, the prediction of the 2020 white paper sits over five standard deviations below the most recent experimental world average (see “Landscape of muon g-2” figure). Such a deviation would usually be interpreted as a discovery of physics beyond the SM. However, in 2021 the result of the first lattice calculation of the HVP contribution with a precision comparable to that of the data-driven white paper was published by the Budapest–Marseille–Wuppertal collaboration (BMW). The result, labelled BMW 2020 as it was uploaded to the preprint archive the previous year, is much closer to the experimental average (green band on the figure), suggesting that the SM may still be in the race. The calculation relied on methods developed by dozens of physicists since the seminal work of Tom Blum (University of Connecticut) in 2002 (see CERN Courier May/June 2021 p25).

Landscape of muon g-2

In 2020, the uncertainties on the data-driven and lattice-QCD predictions for the HVP contribution were still large enough that both could be correct, but BMW’s 2021 paper showed them to be explicitly incompatible in an “intermediate-distance window” accounting for approximately 35% of the HVP contribution, where lattice QCD is most reliable.

This disagreement was the first sign that the 2020 consensus had to be revised. To move forward, the sources of the various disagreements – more numerous now – and the relative limitations of the different approaches must be understood better. Moreover, uncertainty on HVP already dominated the SM prediction in 2020. As well as resolving these discrepancies, its uncertainty must be reduced by a factor of three to fully leverage the coming measurement from Fermilab. Work on the HVP is therefore even more critical than before, as elsewhere the theory house is in order: Sergey Volkov (KITP) recently verified the fifth-order QED calculation of Tatsumi Aoyama, Toichiro Kinoshita and Makiko Nio, identifying an oversight not numerically relevant at current experimental sensitivities; new HLbL calculations remain consistent; and weak contributions have already been checked and are precise enough for the foreseeable future.

News from the lattice

Since BMW’s 2020 lattice results, a further eight lattice-QCD computations of the dominant up-and-down-quark (u + d) contribution to HVP’s intermediate-distance window have been performed with similar precision, with four also including all other relevant contributions. Agreement is excellent and the verdict is clear: the disagreement between the lattice and data-driven approaches is confirmed (see “Intermediate window” figure).

Intermediate window

Work on the short-distance window (about 10% of the HVP contribution) has also advanced rapidly. Seven computations of the u + d contribution have appeared, with four including all other relevant contributions. No significant disagreement is observed.

The long-distance window (around 55% of the total) is by far the most challenging, with the largest uncertainties. In recent weeks three calculations of the dominant u + d contribution have appeared, by the RBC–UKQCD, Mainz and FHM collaborations. Though some differences are present, none can be considered significant for the time being.

With all three windows cross-validated, the Muon g-2 Theory Initiative is combining results to obtain a robust lattice–QCD determination of the HVP contribution. The final uncertainty should be slightly below 1%, still quite far from the 0.2% ultimately needed.

The BMW–DMZ and Mainz collaborations have also presented new results for the full HVP contribution to aμ, and the RBC–UKQCD collaboration, which first proposed the multi-window approach, is also in a position to make a full calculation. (The corresponding result in the “Landscape of muon g-2” figure combines contributions reported in their publications.) Mainz obtained a result with 1% precision using the three windows described above. BMW–DMZ divided its new calculation into five windows and replaced the lattice–QCD computation of the longest distance window – “the tail”, encompassing just 5% of the total – with a data-driven result. This pragmatic approach allows a total uncertainty of just 0.46%, with the collaboration showing that all e⁺e⁻ datasets contributing to this long-distance tail are entirely consistent. This new prediction differs from the experimental measurement of aμ by only 0.9 standard deviations.

These new lattice results, which have not yet been published in refereed journals, make the disagreement with the 2020 data-driven result even more blatant. However, the analysis of the annihilation of e⁺e⁻ into hadrons is also evolving rapidly.

News from electron–positron annihilation

Many experiments have measured the cross-section for e⁺e⁻ annihilation to hadrons as a function of centre-of-mass energy (√s). The dominant contribution to a data-driven calculation of aμ, and over 70% of its uncertainty budget, is provided by the e⁺e⁻ → π⁺π⁻ process, in which the final-state pions are produced via the ρ resonance (see “Two-pion channel” figure).

The most recent measurement, by the CMD-3 energy-scan experiment in Novosibirsk, obtained a cross-section on the peak of the ρ resonance that is larger than all previous ones, significantly changing the picture in the π+π channel. Scrutiny by the Theory Initiative has identified no major problem.

Two-pion channel

CMD-3’s approach contrasts with that used by KLOE, BaBar and BESIII, which study e⁺e⁻ annihilation with a hard photon emitted from the initial state (radiative return) at facilities with fixed √s. BaBar has innovated by calibrating the luminosity of the initial-state radiation using the μ⁺μ⁻ channel and using a unique “next-to-leading-order” approach that accounts for extra radiation from either the initial or the final state – a necessary step at the required level of precision.

In 1997, Ricard Alemany, Michel Davier and Andreas Höcker proposed an alternative method that employs τ⁻ → π⁻π⁰ν decay while requiring some additional theoretical input. The decay rate has been precisely measured as a function of the two-pion invariant mass by the ALEPH and OPAL experiments at LEP, as well as by the Belle and CLEO experiments at B factories, under very different conditions. The measurements are in good agreement. ALEPH offers the best normalisation and Belle the best shape measurement.

KLOE and CMD-3 differ by more than five standard deviations on the ρ peak, precluding a combined analysis of e⁺e⁻ → π⁺π⁻ cross-sections. BaBar and τ data lie between them. All measurements are in good agreement at low energies, below the ρ peak. BaBar, CMD-3 and τ data are also in agreement above the ρ peak. To help clarify this unsatisfactory situation, in 2023 BaBar performed a careful study of radiative corrections to e⁺e⁻ → π⁺π⁻. That study points to the possible underestimate of systematic uncertainties in radiative-return experiments that rely on Monte Carlo simulations to describe extra radiation, as opposed to the in situ studies performed by BaBar.

The future

While most contributions to the SM prediction of the muon g-2 are under control at the level of precision required to match the forthcoming Fermilab measurement, in trying to reduce the uncertainties of the HVP contribution to a commensurate degree, theorists and experimentalists shattered a 20 year consensus. This has triggered an intense collective effort that is still in progress.

The prospect of testing the limits of the SM through high-precision measurements generates considerable impetus

New analyses of e⁺e⁻ annihilation are underway at BaBar, Belle II, BES III and KLOE, experiments are continuing at CMD-3, and Belle II is also studying τ decays. At CERN, the longer-term “MUonE” project will extract HVP by analysing how muons scatter off electrons – a very challenging endeavour given the unusual accuracy required, both in the control of experimental systematic uncertainties and, theoretically, in the radiative corrections.

At the same time, lattice-QCD calculations have made enormous progress in the last five years and provide a very competitive alternative. The fact that several groups are involved with somewhat independent techniques is allowing detailed cross checks. The complementarity of the data-driven and lattice-QCD approaches should soon provide a reliable value for the g-2 theoretical prediction at unprecedented levels of precision.

There is still some way to go to reach that point, but the prospect of testing the limits of the SM through high-precision measurements generates considerable impetus. A new white paper is expected in the coming weeks. The ultimate aim is to reach a level of precision in the SM prediction that allows us to fully leverage the potential of the muon anomalous magnetic moment in the search for new fundamental physics, in concert with the final results of Fermilab’s Muon g-2 experiment and the projected Muon g-2/EDM experiment at J-PARC in Japan, which will implement a novel technique.

The post Do muons wobble faster than expected? appeared first on CERN Courier.

]]>
Feature With a new measurement imminent, the Courier explores the experimental results and theoretical calculations used to predict ‘muon g-2’ – one of particle physics’ most precisely known quantities and the subject of a fast-evolving anomaly. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_MUON-top_feature.jpg
Educational accelerator open to the public https://cerncourier.com/a/educational-accelerator-open-to-the-public/ Wed, 26 Mar 2025 14:37:38 +0000 https://cerncourier.com/?p=112590 What better way to communicate accelerator physics to the public than using a functioning particle accelerator?

The post Educational accelerator open to the public appeared first on CERN Courier.

]]>
What better way to communicate accelerator physics to the public than using a functioning particle accelerator? From January, visitors to CERN’s Science Gateway were able to witness a beam of protons being accelerated and focused before their very eyes. Its designers believe it to be the first working proton accelerator to be exhibited in a museum.

“ELISA gives people who visit CERN a chance to really see how the LHC works,” says Science Gateway’s project leader Patrick Geeraert. “This gives visitors a unique experience: they can actually see a proton beam in real time. It then means they can begin to conceptualise the experiments we do at CERN.”

The model accelerator is inspired by a component of LINAC 4 – the first stage in the chain of accelerators used to prepare beams of protons for experiments at the LHC. Hydrogen is injected into a low-pressure chamber and ionised; a one-metre-long RF cavity then accelerates the protons to 2 MeV, and they pass through a thin vacuum-sealed window into the open air. In dim light, the protons ionise gas molecules in the air, producing visible light that lets members of the public follow the beam’s progress (see “Accelerating education” figure).

ELISA – the Experimental Linac for Surface Analysis – will also be used to analyse the composition of cultural artefacts, geological samples and objects brought in by members of the public. This is an established application of low-energy proton accelerators: for example, a particle accelerator is hidden 15 m below the famous glass pyramids of the Louvre in Paris, though it is almost 40 m long and not freely accessible to the public.

“The proton-beam technique is very effective because it has higher sensitivity and lower backgrounds than electron beams,” explains applied physicist and lead designer Serge Mathot. “You can also perform the analysis in the ambient air, instead of in a vacuum, making it more flexible and better suited to fragile objects.”

For ELISA’s first experiment, researchers from the Australian Nuclear Science Technology Organisation and from Oxford’s Ashmolean Museum have proposed a joint research project about the optimisation of ELISA’s analysis of paint samples designed to mimic ancient cave art. The ultimate goal is to work towards a portable accelerator that can be taken to regions of the world that don’t have access to proton beams.

The post Educational accelerator open to the public appeared first on CERN Courier.

]]>
News What better way to communicate accelerator physics to the public than using a functioning particle accelerator? https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_NA_accelerator.jpg
Game on for physicists https://cerncourier.com/a/game-on-for-physicists/ Wed, 26 Mar 2025 14:35:42 +0000 https://cerncourier.com/?p=112787 Raphael Granier de Cassagnac discusses opportunities for particle physicists in the gaming industry.

The post Game on for physicists appeared first on CERN Courier.

]]>
Raphael Granier de Cassagnac and Exographer

“Confucius famously may or may not have said: ‘When I hear, I forget. When I see, I remember. When I do, I understand.’ And computer-game mechanics can be inspired directly by science. Study it well, and you can invent game mechanics that allow you to engage with and learn about your own reality in a way you can’t when simply watching films or reading books.”

So says Raphael Granier de Cassagnac, a research director at France’s CNRS and Ecole Polytechnique, as well as a member of the CMS collaboration at CERN. Granier de Cassagnac is also the creative director of Exographer, a science-fiction computer game that draws on concepts from particle physics and is available on Steam, Switch, PlayStation 5 and Xbox.

“To some extent, it’s not too different from working at a place like CMS, which is also a super complicated object,” explains Granier de Cassagnac. Developing a game often requires graphic artists, sound designers, programmers and science advisors. To keep a detector like CMS running, you need engineers, computer scientists, accelerator physicists and funding agencies. And that’s to name just a few. Even if you are not the primary game designer or principal investigator, understanding the fundamentals is crucial to keep the project running efficiently.

Root skills

Most physicists already have some familiarity with structured programming and data handling, which eases the transition into game development. Just as tools like ROOT and Geant4 serve as libraries for analysing particle collisions, game engines such as Unreal, Unity or Godot provide a foundation for building games, with prebuilt functionality that can be adapted and refined into the desired game mechanics.

“Physicists are trained to have an analytical mind, which helps when it comes to organising a game’s software,” explains Granier de Cassagnac. “The engine is merely one big library, and you never have to code anything super complicated, you just need to know how to use the building blocks you have and code in smaller sections to optimise the engine itself.”

While coding is an essential skill for game production, it is not enough to create a compelling game. Game design demands storytelling, character development and world-building. Structure, coherence and the ability to guide an audience through complex information are also required.

“Some games are character-driven, others focus more on the adventure or world-building,” says Granier de Cassagnac. “I’ve always enjoyed reading science fiction and playing role-playing games like Dungeons and Dragons, so writing for me came naturally.”

Entrepreneurship and collaboration are also key skills, as it is increasingly rare for developers to create games independently. Universities and startup incubators can provide valuable support through funding and mentorship. Incubators can help connect entrepreneurs with industry experts, and bridge the gap between scientific research and commercial viability.

“Managing a creative studio and a company, as well as selling the game, was entirely new for me,” recalls Granier de Cassagnac. “While working at CMS, we always had long deadlines and low pressure. Physicists are usually not prepared for the speed of the industry at all. Specialised offices in most universities can help with valorisation – taking scientific research and putting it on the market. You cannot forget that your academic institutions are still part of your support network.”

Though challenging to break into, opportunity abounds for those willing to upskill

The industry is fiercely competitive, with more games being released than players can consume, but a well-crafted game with a unique vision can still break through. A common mistake made by first-time developers is releasing their game too early. No matter how innovative the concept or engaging the mechanics, a game riddled with bugs frustrates players and damages its reputation. Even with strong marketing, a rushed release can lead to negative reviews and refunds – sometimes sinking a project entirely.

“In this industry, time is money and money is time,” explains Granier de Cassagnac. But though challenging to break into, opportunity abounds for those willing to upskill, with the gaming industry worth almost $200 billion a year and reaching more than three billion players worldwide by Granier de Cassagnac’s estimation. The most important aspects for making a successful game are originality, creativity, marketing and knowing the engine, he says.

“Learning must always be part of the process; without it we cannot improve,” adds Granier de Cassagnac, referring to his own upskilling for the company’s next project, which will be even more ambitious in its scientific coverage. “In the next game we want to explore the world as we know it, from the Big Bang to the rise of technology. We want to tell the story of humankind.”

The post Game on for physicists appeared first on CERN Courier.

]]>
Careers Raphael Granier de Cassagnac discusses opportunities for particle physicists in the gaming industry. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_CAREERS_Garnier_feature.jpg
The beauty of falling https://cerncourier.com/a/the-beauty-of-falling/ Wed, 26 Mar 2025 14:34:00 +0000 https://cerncourier.com/?p=112815 Kurt Hinterbichler reviews Claudia de Rham's first-hand and personal glimpse into the life of a theoretical physicist and the process of discovery.

The post The beauty of falling appeared first on CERN Courier.

]]>
The Beauty of Falling

A theory of massive gravity is one in which the graviton, the particle that is believed to mediate the force of gravity, has a small mass. This contrasts with general relativity, our current best theory of gravity, which predicts that the graviton is exactly massless. In 2011, Claudia de Rham (Imperial College London), Gregory Gabadadze (New York University) and Andrew Tolley (Imperial College London) revitalised interest in massive gravity by uncovering the structure of the best possible (in a technical sense) theory of massive gravity, now known as the dRGT theory, after these authors.

Claudia de Rham has now written a popular book on the physics of gravity. The Beauty of Falling is an enjoyable and relatively quick read: a first-hand and personal glimpse into the life of a theoretical physicist and the process of discovery.

De Rham begins by setting the stage with the breakthroughs that led to our current paradigm of gravity. The Michelson–Morley experiment and special relativity, Einstein’s description of gravity as geometry leading to general relativity and its early experimental triumphs, black holes and cosmology are all described in accessible terms using familiar analogies. De Rham grips the reader by weaving in a deeply personal account of her own life and upbringing, illustrating what inspired her to study these ideas and pursue a career in theoretical physics. She has led an interesting life, from growing up in various parts of the world, to learning to dive and fly, to training as an astronaut and coming within a hair’s breadth of becoming one. Her account of the training and selection process for European Space Agency astronauts is fascinating, and worth the read in its own right.

Moving closer to the present day, de Rham discusses the detection of gravitational waves at gravitational-wave observatories such as LIGO, the direct imaging of black holes by the Event Horizon Telescope, and the evidence for dark matter and the accelerating expansion of the universe with its concomitant cosmological constant problem. As de Rham explains, this latter discovery underlies much of the interest in massive gravity; there remains the lingering possibility that general relativity may need to be modified to account for the observed accelerated expansion.

In the second part of the book, de Rham warns us that we are departing from the realm of well tested and established physics, and entering the world of more uncertain ideas. A pet peeve of mine is popular accounts that fail to clearly make this distinction, a temptation to which this book does not succumb. 

Here, the book offers something that is hard to find: a first-hand account of the process of thought and discovery in theoretical physics. When reading the latest outrageously overhyped clickbait headlines coming out of the world of fundamental physics, it is easy to get the wrong impression about what theoretical physicists do. This part of the book illustrates how ideas come about: by asking questions of established theories and tugging on their loose threads, we uncover new mathematical structures and, in the process, gain a deeper understanding of the structures we have.

Massive gravity, the focus of this part of the book, is a prime example: by starting with a basic question, “does the graviton have to be massless?”, a new structure was revealed. This structure may or may not have any direct relevance to gravity in the real world, but even if it does not, our study of it has significantly enhanced our understanding of the structure of general relativity. And, as has occurred countless times before with intriguing mathematical structures, it may ultimately prove useful for something completely different and unforeseen – something that its originators did not have even remotely in mind. Here, de Rham offers invaluable insights both into uncovering a new theoretical structure and what happens next, as the results are challenged and built upon by others in the community.

The post The beauty of falling appeared first on CERN Courier.

]]>
Review Kurt Hinterbichler reviews Claudia de Rham's first-hand and personal glimpse into the life of a theoretical physicist and the process of discovery. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_REV_Beauty_feature.jpg
CMS peers inside heavy-quark jets https://cerncourier.com/a/cms-peers-inside-heavy-quark-jets/ Wed, 26 Mar 2025 14:31:07 +0000 https://cerncourier.com/?p=112764 The CMS collaboration has shed light on the role of the quark mass in parton showers.

The post CMS peers inside heavy-quark jets appeared first on CERN Courier.

]]>
CMS figure 1

Ever since quarks and gluons were discovered, scientists have been gathering clues about their nature and behaviour. When quarks and gluons – collectively called partons – are produced at particle colliders, they shower to form jets – sprays of composite particles called hadrons. The study of jets has been indispensable towards understanding quantum chromodynamics (QCD) and the description of the final state using parton shower models. Recently, particular focus has been on the study of the jet substructure, which provides further input about the modelling of parton showers.

Jets initiated by the heavy charm (c-jets) or bottom (b-jets) quarks provide insight into the role of the quark mass as an additional energy scale in QCD calculations. Heavy-flavour jets are not only used to test QCD predictions; they are also a key part of the study of other particles, such as the top quark and the Higgs boson. Understanding the internal structure of heavy-quark jets is thus crucial both for the identification of these heavier objects and for the interpretation of QCD properties. One such property is the presence of a “dead cone” around the heavy quark, within which collinear gluon emissions are suppressed in the direction of motion of the quark.
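
The size of the dead cone follows from a standard perturbative-QCD result (a textbook estimate, not a number quoted in the CMS analysis):

```latex
% Gluon radiation off a heavy quark Q is suppressed inside a cone of
% half-angle \theta_{dc} around the quark direction, set by its mass and energy:
\theta_{\mathrm{dc}} \simeq \frac{m_Q}{E_Q}
% e.g. a 20 GeV b quark (m_b \approx 4.2~\mathrm{GeV}) has
% \theta_{dc} \approx 0.2~\mathrm{rad}, while for a c quark
% (m_c \approx 1.3~\mathrm{GeV}) the dead cone is correspondingly narrower.
```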

CMS has shed light on the role of the quark mass in the parton shower with two new results focusing on c- and b-jets, respectively. Heavy-flavour hadrons in these jets are typically long-lived, and decay at a small but measurable distance from the primary interaction vertex. In c-jets, the D0 meson is reconstructed in the K∓π± decay channel by combining pairs of charged hadrons that do not appear to come from the primary interaction vertex. In the case of b-jets, a novel technique is employed: instead of reconstructing the b hadron in a given decay channel, its charged decay daughters are identified using a multivariate analysis. In both cases, the decay daughters are replaced by the mother hadron in the jet constituents.

CMS has shed light on the role of the quark mass in the parton shower

Jets are reconstructed by clustering particles in a pairwise manner, leading to a clustering tree that mimics the parton-shower process. Substructure techniques are then employed to decompose the jet into two subjets, corresponding to the heavy quark and a gluon emitted from it. Two such algorithms are soft drop and late-kT, which select the first and last emission in the jet clustering tree, respectively, capturing different aspects of the QCD shower. Looking at the angle between the two subjets (see figure 1), denoted Rg for soft drop and θ for late-kT, demonstrates the dead-cone effect, as the small-angle emissions of b-jets (left) and c-jets (right) are suppressed compared to the inclusive jet case. The effect is captured better by the late-kT algorithm than by soft drop in the case of c-jets.
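
The grooming logic can be illustrated with a toy sketch (this is not the CMS implementation: the clustering tree is reduced to a pre-extracted declustering sequence, and `z_cut`, `beta` and `R0` follow the usual soft-drop conventions):

```python
# Toy soft-drop grooming on a declustering sequence.
# Each step of the jet's clustering tree is summarised by (z, delta_R):
# the momentum fraction of the softer subjet and the angle between the
# two subjets, ordered from widest-angle splitting to most collinear.

def soft_drop(declusterings, z_cut=0.1, beta=0.0, R0=0.4):
    """Return the first splitting that passes the soft-drop condition
    z > z_cut * (delta_R / R0)**beta, walking inward from the root."""
    for z, delta_r in declusterings:
        if z > z_cut * (delta_r / R0) ** beta:
            return z, delta_r  # groomed splitting; its angle is Rg
    return None  # no splitting survives the grooming

def late_kt(declusterings):
    """The late-kT algorithm instead keeps the *last* splitting in
    the tree, i.e. the most collinear emission."""
    return declusterings[-1] if declusterings else None

# Example: three splittings, wide-angle to collinear.
steps = [(0.02, 0.35), (0.15, 0.20), (0.30, 0.05)]
print(soft_drop(steps))  # first splitting with z above the cut
print(late_kt(steps))    # last (most collinear) splitting
```

With `beta = 0` the condition reduces to a flat cut `z > z_cut`, so the soft first splitting is groomed away while late-kT always probes the end of the shower, which is why the two algorithms expose different parts of the dead-cone region.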

These measurements serve to refine the tuning of Monte Carlo event generators relating to the heavy-quark mass and strong coupling. Identifying the onset of the dead cone in the vacuum also opens up possibilities for substructure studies in heavy-ion collisions, where emissions induced by the strongly interacting quark–gluon plasma can be isolated.

The post CMS peers inside heavy-quark jets appeared first on CERN Courier.

]]>
News The CMS collaboration has shed light on the role of the quark mass in parton showers. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_EF_CMS_feature.jpg
Salam’s dream visits the Himalayas https://cerncourier.com/a/salams-dream-visits-the-himalayas/ Wed, 26 Mar 2025 14:28:34 +0000 https://cerncourier.com/?p=112728 The BCVSPIN programme aims to facilitate interactions between researchers from Bangladesh, China, Vietnam, Sri Lanka, Pakistan, India and Nepal and the broader international community.

The post Salam’s dream visits the Himalayas appeared first on CERN Courier.

]]>
After winning the Nobel Prize in Physics in 1979, Abdus Salam wanted to bring world-class physics research opportunities to South Asia. This was the beginning of the BCSPIN programme, encompassing Bangladesh, China, Sri Lanka, Pakistan, India and Nepal. The goal was to provide scientists in South and Southeast Asia with new opportunities to learn from leading experts about developments in particle physics, astroparticle physics and cosmology. Together with Jogesh Pati, Yu Lu and Qaisar Shafi, Salam initiated the programme in 1989. This first edition was hosted by Nepal. Vietnam joined in 2009 and BCSPIN became BCVSPIN. Over the years, the conference has been held as far afield as Mexico.

The most recent edition attracted more than 100 participants to the historic Hotel Shanker in Kathmandu, Nepal, from 9 to 13 December 2024. The conference aimed to facilitate interactions between researchers from BCVSPIN countries and the broader international community, covering topics such as collider physics, cosmology, gravitational waves, dark matter, neutrino physics, particle astrophysics, physics beyond the Standard Model and machine learning. Participants ranged from renowned professors from across the globe to aspiring students.

Speaking of aspiring students, the main event was preceded by the BCVSPIN-2024 Masterclass in Particle Physics and Workshop in Machine Learning, hosted at Tribhuvan University from 4 to 6 December. The workshop provided 34 undergraduate and graduate students from around Nepal with a comprehensive introduction to particle physics, high-energy physics (HEP) experiments and machine learning. In addition to lectures, the workshop engaged students in hands-on sessions, allowing them to experience real research by exploring core concepts and applying machine-learning techniques to data from the ATLAS experiment. The students’ enthusiasm was palpable as they delved into the intricacies of particle physics and machine learning. The interactive sessions were particularly engaging, with students eagerly participating in discussions and practical exercises. Highlights included a special talk on artificial intelligence (AI) and a career development session focused on crafting CVs, applications and research statements. These sessions ensured participants were equipped with both academic insights and practical guidance. The impact on students was profound, as they gained valuable skills and networking opportunities, preparing them for future careers in HEP.

The BCVSPIN conference officially started the following Monday. In the spirit of BCVSPIN, the first plenary session featured a talk on the status and prospects of HEP in Nepal, providing valuable insights for both locals and newcomers to the initiative. Then, the latest and the near-future physics highlights of experiments such as ATLAS, ALICE and CMS, as well as Belle, DUNE and IceCube, were showcased. From physics performance such as ATLAS nailing b-tagging with graph neural networks, to the most elaborate measurement of the W boson mass by CMS, not to mention ProtoDUNE’s runs exceeding expectations, the audience were offered comprehensive reviews of the recent breakthroughs on the experimental side. The younger physicists willing to continue or start hardware efforts surely appreciated the overview and schedule of the different upgrade programmes. The theory talks covered, among others, dark-matter models, our dear friend the neutrino and the interactions between the two. A special talk on AI invited the audience to reflect on what AI really is and how – in the midst of the ongoing revolution – it impacts the field of physics and physicists themselves. Overviews of long-term future endeavours such as the Electron–Ion Collider and the Future Circular Collider concluded the programme.

BCVSPIN offers younger scientists precious connections with physicists from the international community

A special highlight of the conference was a public lecture “Oscillating Neutrinos” by the 2015 Nobel Laureate Takaaki Kajita. The event was held near the historical landmark of Patan Durbar Square, in the packed auditorium of the Rato Bangala School. This centre of excellence is known for its innovative teaching methods and quality instruction. More than half the room was filled with excited students from schools and universities, eager to listen to the keynote speaker. After a very pedagogical introduction explaining the “problem of solar neutrinos”, Kajita shared his insights on the discovery of neutrino oscillations and its implications for our understanding of the universe. His presentation included historical photographs of the experiments in Kamioka, Japan, as well as his participation at BCVSPIN in 1994. After encouraging the students to become scientists and answering as many questions as time allowed, he was swept up in a crowd of passionate Nepali youth, thrilled to be in the presence of such a renowned physicist.

The BCVSPIN initiative has changed the landscape of HEP in South and Southeast Asia. With participation made affordable for students, it is a stepping stone for the younger generation of scientists, offering them precious connections with physicists from the international community.

The post Salam’s dream visits the Himalayas appeared first on CERN Courier.

]]>
Meeting report The BCVSPIN programme aims to facilitate interactions between researchers from Bangladesh, China, Vietnam, Sri Lanka, Pakistan, India and Nepal and the broader international community. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_FN_BCVSPIN.jpg
CDF addresses W-mass doubt https://cerncourier.com/a/cdf-addresses-w-mass-doubt/ Wed, 26 Mar 2025 14:24:15 +0000 https://cerncourier.com/?p=112584 Ongoing cross-checks at the Tevatron experiment reinforce its 2022 measurement of the mass of the W boson, which stands seven standard deviations above the Standard Model prediction

The post CDF addresses W-mass doubt appeared first on CERN Courier.

]]>
The CDF II experiment

It’s tough to be a lone dissenting voice, but the CDF collaboration is sticking to its guns. Ongoing cross-checks at the Tevatron experiment reinforce its 2022 measurement of the mass of the W boson, which stands seven standard deviations above the Standard Model (SM) prediction. All other measurements are statistically compatible with the SM, though slightly higher, including the most recent by the CMS collaboration at the LHC, which almost matched CDF’s stated precision of 9.4 MeV (CERN Courier November/December 2024 p7).

With CMS’s measurement came fresh scrutiny for the CDF collaboration, which had established one of the most interesting anomalies in fundamental science – a higher-than-expected W mass might reveal the presence of undiscovered heavy virtual particles. Particular scrutiny focused on the quoted momentum resolution of the CDF detector, which the collaboration claims exceeds the precision of any other collider detector by more than a factor of two. A new analysis by CDF verifies the stated accuracy of 25 parts per million by constraining possible biases using a large sample of cosmic-ray muons.

“The publication lays out the ‘warts and all’ of the tracking aspect and explains why the CDF measurement should be taken seriously despite being in disagreement with both the SM and silicon-tracker-based LHC measurements,” says spokesperson David Toback of Texas A&M University. “The paper should be seen as required reading for anyone who truly wants to understand, without bias, the path forward for these incredibly difficult analyses.”

The 2022 W-mass measurement exclusively used information from CDF’s drift chamber – a descendant of the multiwire proportional chamber invented at CERN by Georges Charpak in 1968 – and discarded information from its inner silicon vertex detector as it offered only marginal improvements to momentum resolution. The new analysis by CDF collaborator Ashutosh Kotwal of Duke University studies possible geometrical defects in the experiment’s drift chamber that could introduce unsuspected biases in the measured momenta of the electrons and muons emitted in the decays of W bosons.

“Silicon trackers have replaced wire-based technology in many parts of modern particle detectors, but the drift chamber continues to hold its own as the technology of choice when high accuracy is required over large tracking volumes for extended time periods in harsh collider environments,” opines Kotwal. “The new analysis demonstrates the efficiency and stability of the CDF drift chamber and its insensitivity to radiation damage.”

The CDF II detector operated at Fermilab’s Tevatron collider from 1999 to 2011. Its cylindrical drift chamber was coaxial with the colliding proton and antiproton beams, and immersed in an axial 1.4 T magnetic field. A helical fit to the hits recorded in the chamber yielded the track parameters.

The post CDF addresses W-mass doubt appeared first on CERN Courier.

]]>
News Ongoing cross-checks at the Tevatron experiment reinforce its 2022 measurement of the mass of the W boson, which stands seven standard deviations above the Standard Model prediction https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_NA_CDF_feature.jpg
Boost for compact fast radio bursts https://cerncourier.com/a/boost-for-compact-fast-radio-bursts/ Wed, 26 Mar 2025 14:21:58 +0000 https://cerncourier.com/?p=112596 New results from the CHIME telescope support the hypothesis that fast radio bursts originate in close proximity to the turbulent magnetosphere of a central engine.

The post Boost for compact fast radio bursts appeared first on CERN Courier.

]]>
Fast radio bursts (FRBs) are short but powerful bursts of radio waves that are believed to be emitted by dense astrophysical objects such as neutron stars or black holes. They were discovered by Duncan Lorimer and his student David Narkevic in 2007 while studying archival data from the Parkes radio telescope in Australia. Since then, more than a thousand FRBs have been detected, both within and beyond the Milky Way. These bursts usually last only a few milliseconds but can release enormous amounts of energy: an FRB detected in 2022 gave off more energy in a millisecond than the Sun does in 30 years. The exact mechanism underlying their creation, however, remains a mystery.
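
An order-of-magnitude check of that comparison (a rough sketch assuming only the standard solar luminosity of about 3.8 × 10²⁶ W; the burst parameters are inferred from the claim, not taken from the detection paper):

```python
# Rough check: energy the Sun radiates in 30 years, and the power
# implied by releasing a comparable energy in a single millisecond.

L_SUN = 3.8e26        # solar luminosity, W (assumed standard value)
YEAR = 3.156e7        # seconds per year

e_sun_30yr = L_SUN * 30 * YEAR   # total solar output over 30 years, J
frb_power = e_sun_30yr / 1e-3    # same energy emitted in 1 ms -> W

print(f"Sun, 30 yr:        {e_sun_30yr:.1e} J")
print(f"Implied FRB power: {frb_power:.1e} W")
```

The implied peak power is roughly a trillion times the Sun’s steady luminosity, which is why millisecond-duration bursts visible across hundreds of millions of light-years demand such compact, extreme sources.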

Inhomogeneities caused by the presence of gas and dust in the interstellar medium scatter the radio waves coming from an FRB. This creates a stochastic interference pattern on the signal, called scintillation – a phenomenon akin to the twinkling of stars. In a recent study, astronomer Kenzie Nimmo and her colleagues used scintillation data from FRB 20221022A to constrain the size of its emission region. FRB 20221022A is a 2.5 millisecond burst from a galaxy about 200 million light-years away. It was detected on 22 October 2022 by the Canadian Hydrogen Intensity Mapping Experiment Fast Radio Burst project (CHIME/FRB).

The CHIME telescope is currently the world’s leading FRB detector, discovering an average of three new FRBs every day. It consists of four stationary 20 m-wide and 100 m-long semi-cylindrical paraboloidal reflectors with a focal length of 5 m (see “Right on CHIME” figure). Each axis carries 256 dual-polarisation feeds, giving the telescope a field of view of more than 200 square degrees. With a wide bandwidth, high sensitivity and a high-performance correlator to pinpoint where in the sky signals are coming from, CHIME is an excellent instrument for the detection of FRBs. The feeds receive radio waves in the frequency range of 400 to 800 MHz.

Two main classes of models compete to explain the emission mechanisms of FRBs. Near-field models hypothesise that emission occurs in close proximity to the turbulent magnetosphere of a central engine, while far-away models hypothesise that emission occurs in relativistic shocks that propagate out to large radial distances. Nimmo and her team measured two distinct scintillation scales in the frequency spectrum of FRB 20221022A: one originating from its host galaxy or local environment, and another from a scattering site within the Milky Way. By using these scattering sites as astrophysical lenses, they were able to constrain the size of the FRB’s emission region to better than 30,000 km. This emission size contradicted expectations from far-away models. It is more consistent with an emission process occurring within or just beyond the magnetosphere of a central compact object – the first clear evidence for the near-field class of models.

Additionally, FRB 20221022A’s detection paper notes a striking change in the burst’s polarisation angle – an “S-shaped” swing covering about 130° – over a mere 2.5 milliseconds. They interpret this as the emission beam physically sweeping across our line of sight, much like a lighthouse beam passing by an observer, and conclude that it hints at a magnetospheric origin of the emission, as highly magnetised regions can twist or shape how radio waves are emitted. The scintillation studies by Nimmo et al. independently support this conclusion, narrowing the possible sources and mechanisms that power FRBs. Moreover, they highlight the potential of the scintillation technique to explore the emission mechanisms in FRBs and understand their environments.

The field of FRB physics looks set to grow by leaps and bounds. CHIME can already identify host galaxies for FRBs, but an “outrigger” programme using similar detectors geographically displaced from the main telescope at the Dominion Radio Astrophysical Observatory near Penticton, British Columbia, aims to strengthen its localisation capabilities to a precision of tens of milliarcseconds. CHIME recently finished deploying its third outrigger telescope in northern California.

The post Boost for compact fast radio bursts appeared first on CERN Courier.

]]>
News New results from the CHIME telescope support the hypothesis that fast radio bursts originate in close proximity to the turbulent magnetosphere of a central engine. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_NA_Chime.jpg
Charm jets lose less energy https://cerncourier.com/a/charm-jets-lose-less-energy/ Wed, 26 Mar 2025 14:17:29 +0000 https://cerncourier.com/?p=112750 New results from the ALICE collaboration highlight the quark-mass and colour-charge dependence of energy loss in the quark-gluon plasma.

The post Charm jets lose less energy appeared first on CERN Courier.

]]>
ALICE figure 1

Collisions between lead ions at the LHC generate the hottest and densest system ever created in the laboratory. Under these extreme conditions, quarks and gluons are no longer confined inside hadrons but instead form a quark–gluon plasma (QGP). Being heavier than the more abundantly produced light quarks, charm quarks play a special role in probing the plasma since they are created in the collision before the plasma is formed and interact with the plasma as they traverse the collision zone. Charm jets, which are clusters of particles originating from charm quarks, have been investigated for the first time by the ALICE collaboration in Pb–Pb collisions at the LHC, using D0 mesons (which contain a charm quark) as tags.

The primary interest lies in measuring the extent of energy loss experienced by different types of particles as they traverse the plasma, referred to as “in-medium energy loss”. This energy loss depends on the particle type and mass, varying between quarks and gluons. Due to their larger mass, charm quarks at low transverse momentum do not reach the speed of light and lose substantially less energy than light quarks through both collisional and radiative processes, as gluon radiation by massive quarks is suppressed: the so-called “dead-cone effect”. Additionally, gluons, which carry a larger colour charge than quarks, experience greater energy loss in the QGP, as quantified by the Casimir factors CA = 3 for gluons and CF = 4/3 for quarks. This combination of mass and colour-charge effects makes the charm quark an ideal probe of QGP properties, and one that ALICE is well suited to study.
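
At leading order, the colour-charge hierarchy can be summarised by the ratio of the Casimir factors (a standard perturbative-QCD expectation rather than a result of this measurement):

```latex
% Average radiative energy loss scales with the colour charge of the parton,
% so gluons are expected to lose roughly twice as much energy as quarks:
\frac{\langle \Delta E_g \rangle}{\langle \Delta E_q \rangle}
  \simeq \frac{C_A}{C_F} = \frac{3}{4/3} = \frac{9}{4}
```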

ALICE measured the production yield of charm jets tagged with fully reconstructed D0 mesons (D0 → K−π+) in central Pb–Pb collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair during LHC Run 2. The results are reported in terms of the nuclear modification factor (RAA), which is the ratio of the particle production rate in Pb–Pb collisions to that in proton–proton collisions, scaled by the number of binary nucleon–nucleon collisions. A measured nuclear modification factor of unity would indicate the absence of final-state effects.
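
The nuclear modification factor described above is defined in the usual way:

```latex
% Yield in Pb-Pb relative to the binary-scaled pp expectation,
% as a function of transverse momentum p_T:
R_{AA}(p_T) = \frac{1}{\langle N_{\mathrm{coll}} \rangle}\,
              \frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}{\mathrm{d}N_{pp}/\mathrm{d}p_T}
% R_{AA} = 1 in the absence of final-state (medium) effects;
% R_{AA} < 1 signals suppression, e.g. from in-medium energy loss.
```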

The results, shown in figure 1, show a clear suppression (RAA < 1) for both charm jets and inclusive jets (that mainly originate from light quarks and gluons) due to energy loss. Importantly, the charm jets exhibit less suppression than the inclusive jets within the transverse momentum range of 20 to 50 GeV, which is consistent with mass and colour-charge dependence.

The measured results are compared with theoretical model calculations that include mass effects in the in-medium energy loss. Among the different models, LIDO incorporates both the dead-cone effect and the colour-charge effects, which are essential for describing the energy-loss mechanisms. Consequently, it shows reasonable agreement with experimental data, reproducing the observed hierarchy between charm jets and inclusive jets.

The present finding provides a hint of the flavour-dependent energy loss in the QGP, suggesting that charm jets lose less energy than inclusive jets. This highlights the quark-mass and colour-charge dependence of the in-medium energy-loss mechanisms.

The post Charm jets lose less energy appeared first on CERN Courier.

]]>
News New results from the ALICE collaboration highlight the quark-mass and colour-charge dependence of energy loss in the quark-gluon plasma. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_EF_ALICE_feature.jpg
Chamonix looks to CERN’s future https://cerncourier.com/a/chamonix-looks-to-cerns-future/ Wed, 26 Mar 2025 14:15:37 +0000 https://cerncourier.com/?p=112738 CERN’s accelerator and experimental communities converged on Chamonix to chart a course for the future.

The post Chamonix looks to CERN’s future appeared first on CERN Courier.

]]>
The Chamonix Workshop 2025, held from 27 to 30 January, brought together CERN’s accelerator and experimental communities to reflect on achievements, address challenges and chart a course for the future. As the discussions made clear, CERN is at a pivotal moment. The past decade has seen transformative developments across the accelerator complex, while the present holds significant potential and opportunity.

The workshop opened with a review of accelerator operations, supported by input from December’s Joint Accelerator Performance Workshop. Maintaining current performance levels requires an extraordinary effort across all the facilities. Performance data from the ongoing Run 3 shows steady improvements in availability and beam delivery. These results are driven by dedicated efforts from system experts, operations teams and accelerator physicists, all working to ensure excellent performance and high availability across the complex.

Electron clouds parting

Attention is now turning to Run 4 and the High-Luminosity LHC (HL-LHC) era. Several challenges have been identified, including the demand for high-intensity beams, radiofrequency (RF) power limitations and electron-cloud effects. In the latter case, synchrotron-radiation photons strike the beam-pipe walls, releasing electrons which are then accelerated by proton bunches, triggering a cascading electron-cloud buildup. Measures to address these issues will be implemented during Long Shutdown 3 (LS3), ensuring CERN’s accelerators continue to meet the demands of its diverse physics community.

LS3 will be a crucial period for CERN. In addition to the deployment of the HL-LHC and major upgrades to the ATLAS and CMS experiments, it will see a widespread programme of consolidation, maintenance and improvements across the accelerator complex to secure future exploitation over the coming decades.

Progress on the HL-LHC upgrade was reviewed in detail, with a focus on key systems – magnets, cryogenics and beam instrumentation – and on the construction of critical components such as crab cavities. The next two years will be decisive, with significant system testing scheduled to ensure that these technologies meet ambitious performance targets.

Planning for LS3 is already well advanced. Coordination between all stakeholders has been key to aligning complex interdependencies, and the experienced teams are making strong progress in shaping a resource-loaded plan. The scale of LS3 will require meticulous coordination, but it also represents a unique opportunity to build a more robust and adaptable accelerator complex for the future. Looking beyond LS3, CERN’s unique accelerator complex is well positioned to support an increasingly diverse physics programme. This diversity is one of CERN’s greatest strengths, offering complementary opportunities across a wide range of fields.

The high demand for beam time at ISOLDE, n_TOF, AD-ELENA and the North and East Areas underscores the need for a well-balanced approach that supports a broad range of physics. The discussions highlighted the importance of balancing these demands while ensuring that the full potential of the accelerator complex is realised.

Future opportunities such as those highlighted by the Physics Beyond Colliders study will be shaped by discussions being held as part of the update of the European Strategy for Particle Physics (ESPP). Defining the next generation of physics programmes entails striking a careful balance between continuity and innovation, and the accelerator community will play a central role in setting the priorities.

A forward-looking session at the workshop focused on the Future Circular Collider (FCC) Feasibility Study and the next steps. The physics case was presented alongside updates on territorial implementation and civil-engineering investigations and plans. How the FCC-ee injector complex would fit into the broader strategic picture was examined in detail, along with the goals and deliverables of the pre-technical design report (pre-TDR) phase that is planned to follow the Feasibility Study’s conclusion.

While the FCC remains a central focus, other future projects were also discussed in the context of the ESPP update. These include mature linear-collider proposals, the potential of a muon collider and plasma wakefield acceleration. Development of key technologies, such as high-field magnets and superconducting RF systems, will underpin the realisation of future accelerator-based facilities.

The next steps – preparing for Run 4, implementing the LS3 upgrade programmes and laying the groundwork for future projects – are ambitious but essential. CERN’s future will be shaped by how well we seize these opportunities.

The shared expertise and dedication of CERN’s personnel, combined with a clear strategic vision, provide a solid foundation for success. The path ahead is challenging, but with careful planning, collaboration and innovation, CERN’s accelerator complex will remain at the heart of discovery for decades to come.

The post Chamonix looks to CERN’s future appeared first on CERN Courier.

]]>
Meeting report CERN’s accelerator and experimental communities converged on Chamonix to chart a course for the future. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_FN_Chamonix.jpg
The triggering of tomorrow https://cerncourier.com/a/the-triggering-of-tomorrow/ Wed, 26 Mar 2025 14:14:12 +0000 https://cerncourier.com/?p=112724 The third TDHEP workshop explored how triggers can cope with high data rates.

The post The triggering of tomorrow appeared first on CERN Courier.

]]>
The third edition of Triggering Discoveries in High Energy Physics (TDHEP) attracted 55 participants to Slovakia’s High Tatras mountains from 9 to 13 December 2024. The workshop is the only conference dedicated to triggering in high-energy physics, and follows previous editions in Jammu, India in 2013 and Puebla, Mexico in 2018. Given the upcoming High-Luminosity LHC (HL-LHC) upgrade, discussions focused on how trigger systems can be enhanced to manage high data rates while preserving physics sensitivity.

Triggering systems play a crucial role in filtering the vast amounts of data generated by modern collider experiments. A good trigger design selects features in the event sample that greatly enrich the proportion of the desired physics processes in the recorded data. The key considerations are timing and selectivity. Timing has long been at the core of experiment design – detectors must capture data at the appropriate time to record an event. Selectivity has been a feature of triggering for almost as long. Recording an event makes demands on running time and data-acquisition bandwidth, both of which are limited.

Evolving architecture

Thanks to detector upgrades and major changes in the cost and availability of fast data links and storage, the past 10 years have seen LHC triggers evolve away from hardware-based decisions using coarse-grained information.

Detector upgrades mean higher granularity and better time resolution, improving the precision of the trigger algorithms and the ability to resolve the problem of having multiple events in a single LHC bunch crossing (“pileup”). Such upgrades allow more precise initial-level hardware triggering, bringing the event rate down to a level where events can be reconstructed for further selection via high-level trigger (HLT) systems.

To take advantage of modern computer architecture more fully, HLTs use both graphics processing units (GPUs) and central processing units (CPUs) to process events. In ALICE and LHCb this leads to essentially triggerless access to all events, while in ATLAS and CMS hardware selections are still important. All HLTs now use machine learning (ML) algorithms, with the ATLAS and CMS experiments even considering their use at the first hardware level.

ATLAS and CMS are primarily designed to search for new physics. At the end of Run 3, upgrades to both experiments will significantly enhance granularity and time resolution to handle the high-luminosity environment of the HL-LHC, which will deliver up to 200 interactions per LHC bunch crossing. Both experiments achieved efficient triggering in Run 3, but higher luminosities, difficult-to-distinguish physics signatures, upgraded detectors and increasingly ambitious physics goals call for advanced new techniques. The step change will be significant. At HL-LHC, the first-level hardware trigger rate will increase from the current 100 kHz to 1 MHz in ATLAS and 760 kHz in CMS. The price to pay is increasing the latency – the time delay between input and output – to 10 µs in ATLAS and 12.5 µs in CMS.
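
These accept rates imply that only a small fraction of collisions can be kept at the first level. A back-of-the-envelope sketch (the 40 MHz nominal LHC bunch-crossing rate is an assumption, not a figure quoted above):

```python
# Back-of-the-envelope look at the first-level trigger budgets quoted
# above. The 40 MHz bunch-crossing rate is the nominal LHC value and is
# an assumption, not a figure given in the text.

BUNCH_CROSSING_RATE_HZ = 40e6  # nominal LHC crossing rate (assumption)

accept_rates_hz = {
    "current (Run 3)": 100e3,
    "HL-LHC ATLAS": 1e6,
    "HL-LHC CMS": 760e3,
}

for name, rate in accept_rates_hz.items():
    print(f"{name}: keeps {rate / BUNCH_CROSSING_RATE_HZ:.2%} of crossings")

# A longer latency means the front-end electronics must buffer
# proportionally more bunch crossings while the decision is formed:
for latency_s in (10e-6, 12.5e-6):
    depth = latency_s * BUNCH_CROSSING_RATE_HZ
    print(f"latency {latency_s * 1e6:.1f} µs -> buffer ~{depth:.0f} crossings")
```

The tenfold rate increase thus corresponds to keeping a few per cent of crossings rather than a fraction of a per cent, at the cost of buffers several hundred crossings deep.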

The proposed trigger systems for ATLAS and CMS are predominantly FPGA-based, employing highly parallelised processing to crunch huge data streams efficiently in real time. Both will be two-level triggers: a hardware trigger followed by a software-based HLT. The ATLAS hardware trigger will utilise full-granularity calorimeter and muon signals in the global-trigger-event processor, using advanced ML techniques for real-time event selection. In addition to calorimeter and muon data, CMS will introduce a global track trigger, enabling real-time tracking at the first trigger level. All information will be integrated within the global-correlator trigger, which will extensively utilise ML to enhance event selection and background suppression.

Substantial upgrades

The other two big LHC experiments already implemented substantial trigger upgrades at the beginning of Run 3. The ALICE experiment is dedicated to studying the strong interactions of the quark–gluon plasma – a state of matter in which quarks and gluons are not confined in hadrons. The detector was upgraded significantly for Run 3, including the trigger and data-acquisition systems. The ALICE continuous readout can cope with 50 kHz for lead ion–lead ion (PbPb) collisions and several MHz for proton–proton (pp) collisions. In PbPb collisions the full data is continuously recorded and stored for offline analysis, while for pp collisions the data is filtered.

Unlike in Run 2, where the hardware trigger reduced the data rate to several kHz, Run 3 uses an online software trigger that is a natural part of the common online–offline computing framework. The raw data from detectors is streamed continuously and processed in real time using high-performance FPGAs and GPUs. ML plays a crucial role in the heavy-flavour software trigger, which is one of the main physics interests. Boosted decision trees are used to identify displaced vertices from heavy quark decays. The full chain from saving raw data in a 100 PB buffer to selecting events of interest and removing the original raw data takes about three weeks and was fully employed last year.
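
The kind of selection a boosted decision tree performs here can be illustrated with a toy sketch (assuming NumPy and scikit-learn are available; the two features and their distributions are invented for illustration and are not the actual ALICE trigger inputs):

```python
# Toy sketch: train a boosted decision tree to separate "displaced"
# heavy-flavour-like vertices from prompt background. All features and
# distributions below are invented for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000

# Prompt background: vertices consistent with the primary interaction point.
background = np.column_stack([
    rng.exponential(0.02, n),          # toy decay length (cm)
    np.abs(rng.normal(0.0, 0.01, n)),  # toy track impact parameter (cm)
])
# Heavy-flavour-like signal: visibly displaced vertices.
signal = np.column_stack([
    rng.exponential(0.10, n),
    np.abs(rng.normal(0.03, 0.01, n)),
])

X = np.vstack([background, signal])
y = np.concatenate([np.zeros(n), np.ones(n)])

bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3)
bdt.fit(X, y)
print(f"training accuracy on the toy sample: {bdt.score(X, y):.2f}")
```

In a real trigger the trained model is applied online to reconstructed candidates, keeping only events whose BDT score exceeds a threshold tuned to the available output bandwidth.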

The LHCb experiment focuses on precision measurements in heavy-flavour physics. A typical example is measuring the probability of a particle decaying via a particular channel. In Run 2 the hardware trigger tended to saturate in many hadronic channels as the instantaneous luminosity increased. To solve this issue for Run 3, a high-level software trigger was developed that can handle 30 MHz event readout with a 4 TB/s data flow. A GPU-based partial event reconstruction and primary selection of displaced tracks and vertices (HLT1) reduces the output data rate to 1 MHz. The calibration and detector alignment (embedded into the trigger system) are calculated during data taking just after HLT1 and feed full-event reconstruction (HLT2), which reduces the output rate to 20 kHz. This represents 10 GB/s written to disk for later analysis.
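
The quoted figures can be turned into rough reduction factors and implied average event sizes; a minimal sketch using only the numbers in the text (the derived event sizes are back-of-the-envelope averages, not LHCb specifications):

```python
# Rough arithmetic on the LHCb Run 3 trigger chain, using only the
# figures quoted in the text.

readout_rate_hz = 30e6   # full event readout into the software trigger
readout_bw_bps = 4e12    # 4 TB/s entering HLT1
hlt1_rate_hz = 1e6       # output of GPU-based HLT1
hlt2_rate_hz = 20e3      # output of full reconstruction in HLT2
disk_bw_bps = 10e9       # 10 GB/s written to disk

print(f"HLT1 rate reduction:    x{readout_rate_hz / hlt1_rate_hz:.0f}")
print(f"HLT2 rate reduction:    x{hlt1_rate_hz / hlt2_rate_hz:.0f}")
print(f"overall rate reduction: x{readout_rate_hz / hlt2_rate_hz:.0f}")
print(f"bandwidth reduction:    x{readout_bw_bps / disk_bw_bps:.0f}")

# Implied average event sizes at either end of the chain:
print(f"raw event:    ~{readout_bw_bps / readout_rate_hz / 1e3:.0f} kB")
print(f"stored event: ~{disk_bw_bps / hlt2_rate_hz / 1e3:.0f} kB")
```

Note that the stored events are larger on average than the raw ones: the 20 kHz written to disk carries the full reconstruction output rather than raw detector data alone.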

Away from the LHC, trigger requirements differ considerably. Contributions from other areas covered heavy-ion physics at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC), fixed-target physics at CERN and future experiments at the Facility for Antiproton and Ion Research at GSI Darmstadt and Brookhaven’s Electron–Ion Collider (EIC). NA62 at CERN and STAR at RHIC both use conventional trigger strategies to arrive at their final event samples. The forthcoming CBM experiment at FAIR and the ePIC experiment at the EIC deal with high intensities but aim for “triggerless” operation.

Requirements were reported to be even more diverse in astroparticle physics. The Pierre Auger Observatory combines local and global trigger decisions at three levels to manage the problem of trigger distribution and data collection over 3000 km2 of fluorescence and Cherenkov detectors.

These diverse requirements will lead to new approaches being taken, and evolution as the experiments are finalised. The third edition of TDHEP suggests that innovation in this field is only set to accelerate.

The post The triggering of tomorrow appeared first on CERN Courier.

]]>
Meeting report The third TDHEP workshop explored how triggers can cope with high data rates. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_FN_TDHEP.jpg
Space oddities https://cerncourier.com/a/space-oddities/ Wed, 26 Mar 2025 14:11:01 +0000 https://cerncourier.com/?p=112823 In his new popular book, Harry Cliff tackles the thorny subject of anomalies in fundamental science.

The post Space oddities appeared first on CERN Courier.

]]>
Space Oddities

Space Oddities takes readers on a journey through the mysteries of modern physics, from the smallest subatomic particles to the vast expanse of stars and space. Harry Cliff – an experimental particle physicist at Cambridge University – unravels some of the most perplexing anomalies challenging the Standard Model (SM), with behind-the-scenes scoops from eight different experiments. The most intriguing stories concern lepton universality and the magnetic moment of the muon.

Theory predicts an extremely precise value for the muon’s magnetic moment, experimentally verified to an astonishing 11 significant figures. Over the last few years, however, measurements have suggested a slight discrepancy – the devil lying in the 12th digit. Measurements at Fermilab in 2021 disagreed with theory predictions at the 4σ level. Not enough to cause a “scientific earthquake”, as Cliff puts it, but enough to suggest that new physics might be at play.

Just as everything seemed to be edging towards a new discovery, Cliff introduces the “villains” of the piece. Groundbreaking lattice-QCD predictions from the Budapest–Marseille–Wuppertal collaboration were published on the same day as a new measurement from Fermilab. If correct, these would destroy the anomaly by contradicting the data-driven theory consensus. (“Yeah, bullshit,” said one experimentalist to Cliff when it was put to him that the timing wasn’t intended to steal the experiment’s thunder.) The situation is still unresolved, though many new theoretical predictions have been made and a new theoretical consensus is imminent (see “Do muons wobble faster than expected”). Regardless of the outcome, Cliff emphasises that this research will pave the way for future discoveries, and none of it should be taken for granted – even if the anomaly disappears.

“One of the challenging aspects of being part of a large international project is that your colleagues are both collaborators and competitors,” Cliff notes. “When it comes to analysing the data with the ultimate goal of making discoveries, each research group will fight to claim ownership of the most interesting topics.”

This spirit of spurring collaborator-competitors on to greater heights of precision is echoed throughout Cliff’s own experience of working in the LHCb collaboration, where he studies “lepton universality”. All three lepton flavours – electron, muon and tau – should interact almost identically, except for small differences due to their masses. However, over the past decade several experimental results suggested that this theory might not hold in B-meson decays, where muons seemed to be appearing less frequently than electrons. If confirmed, this would point to physics beyond the SM.

Having been involved himself in a complementary but less sensitive analysis of B-meson decay channels involving strange quarks, Cliff recalls the emotional rollercoaster experienced by some of the key protagonists: the “RK” team from Imperial College London. After a year of rigorous testing, RK unblinded a sanity check of their new computational toolkit: a reanalysis of the prior measurement that yielded a perfectly consistent R value of 0.72 with an uncertainty of about 0.08, upholding a 3σ discrepancy. Now was the time to put the data collected since then through the same pasta machine: if it agreed, the tension between the SM and their overall measurement would cross the 5σ threshold. After an anxious wait while the numbers were crunched, the team received the results for the new data: 0.93 with an uncertainty of 0.09.

“Dreams of a major discovery evaporated in an instant,” recalls Cliff. “Anyone who saw the RK team in the CERN cafeteria that day could read the result from their faces.” The lead on the RK team, Mitesh Patel, told Cliff that they felt “emotionally train wrecked”.

With both results combined, the ratio averaged out to 0.85 ± 0.06, just shy of 3σ away from unity. While the experimentalists were deflated, Cliff notes that for theorists this result may have been more exciting than the initial anomaly, as it was easier to explain using new particles or forces. “It was as if we were spying the footprints of a great, unknown beast as it crashed about in a dark jungle,” writes Cliff.

Space Oddities is a great defence of irrepressible experimentation. Even “failed” anomalies are far from useless: if they evaporate, the effort required to investigate them pushes the boundaries of experimental precision, enhances collaboration between scientists across the world, and refines theoretical frameworks. Through retellings and interviews, Cliff helps the public experience the excitement of near breakthroughs, the heartbreak of failed experiments, and the dynamic interactions between theoretical and experimental physicists. Thwarting myths that physicists are cold, calculating figures working in isolation, Cliff sheds light on a community driven by curiosity, ambition and (healthy) competition. His book is a story of hope that one day we might make the right mistake and escape the claustrophobic clutches of the SM.

“I’ve learned so much from my mistakes,” read a poster above Cliff’s undergraduate tutor’s desk. “I think I’ll make another.”

The post Space oddities appeared first on CERN Courier.

]]>
Review In his new popular book, Harry Cliff tackles the thorny subject of anomalies in fundamental science. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_REV_Space_feature.jpg
Probing the quark–gluon plasma in Nagasaki https://cerncourier.com/a/probing-the-quark-gluon-plasma-in-nagasaki/ Wed, 26 Mar 2025 14:08:03 +0000 https://cerncourier.com/?p=112733 The 12th edition of the International Conference on Hard and Electromagnetic Probes attracted over 300 physicists to Nagasaki, Japan.

The post Probing the quark–gluon plasma in Nagasaki appeared first on CERN Courier.

]]>
The 12th edition of the International Conference on Hard and Electromagnetic Probes attracted 346 physicists to Nagasaki, Japan, from 22 to 27 September 2024. Delegates discussed the recent experimental and theoretical findings on perturbative probes of the quark–gluon plasma (QGP) – a hot and deconfined state of matter formed in ultrarelativistic heavy-ion collisions.

The four main LHC experiments played a prominent role at the conference, presenting a large set of newly published results from studies performed on data collected during LHC Run 2, as well as several new preliminary results performed on the new data samples from Run 3.

Jet modifications

A number of significant results on the modification of jets in heavy-ion collisions were presented. Splitting functions characterising the evolution of parton showers are expected to be modified in the presence of the QGP, providing experimental access to the medium properties. A more differential look at these modifications was presented through a correlated measurement of the shared momentum fraction and opening angle of the first splitting satisfying the “soft drop” condition in jets. Additionally, energy–energy correlators have recently emerged as promising observables where the properties of jet modification in the medium might be imprinted at different scales on the observable.

The first measurements of the two-particle energy–energy correlators in p–Pb and Pb–Pb collisions were presented, showing modifications in both the small- and large-angle correlations for both systems compared to pp collisions. A long-sought-after effect of energy exchange between the jet and the medium is a correlated response of the medium in the jet direction. For the first time, measurements of hadron–boson correlations in events containing photons or Z bosons showed a clear depletion of the bulk medium in the direction of the Z boson, providing direct evidence of a medium response correlated to the propagating back-to-back jet. In pp collisions, the first direct measurement of the dead cone of beauty quarks, using novel machine-learning methods to reconstruct the beauty hadron from partial decay information, was also shown.

Several new results from studies of particle production in ultraperipheral heavy-ion collisions were discussed. These studies allow us to investigate the possible onset of gluon saturation at low Bjorken-x values. In this context, new results of charm photoproduction, with measurements of incoherent and coherent J/ψ mesons, as well as of D0 mesons, were released. Photonuclear production cross-sections of di-jets, covering a large interval of photon energies to scan over different regions of Bjorken-x, were also presented. These measurements pave the way for setting constraints on the gluon component of nuclear parton distribution functions at low Bjorken-x values, over a wide Q2 range, in the absence of significant final-state effects.

During the last few years, a significant enhancement of charm and beauty-baryon production in proton–proton collisions was observed, compared to measurements in e+e− and ep collisions. These observations have challenged the assumption of the universality of heavy-quark fragmentation across different collision systems. Several intriguing measurements on this topic were released at the conference. In addition to an extended set of charm meson-to-meson and baryon-to-meson production yield ratios, the first measurements of the production of Σc0,++(2520) relative to Σc0,++(2455) at the LHC, obtained exploiting the new Run 3 data samples, were discussed. New insights on the structure of the exotic χc1(3872) state and its hadronisation mechanism were garnered by measuring the ratio of its production yield to that of ψ(2S) mesons in hadronic collisions.

Additionally, strange-to-non-strange production-yield ratios for charm and beauty mesons as a function of the collision multiplicity were released, pointing toward an enhanced strangeness production in a higher colour-density environment. Several theoretical approaches implementing modified hadronisation mechanisms with respect to in-vacuum fragmentation have proven to be able to reproduce at least part of the measurements, but a comprehensive description of the heavy-quark hadronisation, in particular for the baryonic sector, is still to be reached.

A glimpse into the future of the experimental opportunities in this field was also provided. A new and intriguing set of physics observables for a complete characterisation of the QGP with hard probes will become accessible with the planned upgrades of the ALICE, ATLAS, CMS and LHCb detectors, both during the next long LHC shutdown and in the more distant future. New experiments at CERN, such as NA60+, or in other facilities like the Electron–Ion Collider in the US and J-PARC-HI in Japan, will explore higher-density regions of the QCD–matter phase diagram.

The next edition of this conference series is scheduled to be held in Nashville, US, from 1 to 5 June 2026.

The post Probing the quark–gluon plasma in Nagasaki appeared first on CERN Courier.

]]>
Meeting report The 12th edition of the International Conference on Hard and Electromagnetic Probes attracted over 300 physicists to Nagasaki, Japan. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_FN_HP2024.jpg
Encounters with artists https://cerncourier.com/a/encounters-with-artists/ Wed, 26 Mar 2025 13:45:55 +0000 https://cerncourier.com/?p=112793 Over the past 10 years, Mónica Bello facilitated hundreds of encounters between artists and scientists as curator of the Arts at CERN programme.

The post Encounters with artists appeared first on CERN Courier.

]]>
Why should scientists care about art?

Throughout my experiences in the laboratory, I have seen how art is an important part of a scientist’s life. By being connected with art, scientists recognise that their activities are very embedded in contemporary culture. Science is culture. Through art and dialogues with artists, people realise how important science is for society and for culture in general. Science is an important cultural pillar in our society, and these interactions bring scientists meaning.

Are science and art two separate cultures?

Today, if you ask anyone: “What is nature?” they describe everything in scientific terms. The way you describe things, the mysteries of your research: you are actually answering the questions that are present in everyone’s life. In this case, scientists have a sense of responsibility. I think art helps to open this dialogue from science into society.

Do scientists have a responsibility to communicate their research?

All of us have a social responsibility in everything we produce. Ideas don’t belong to anyone, so it’s a collective endeavour. I think that scientists don’t have the responsibility to communicate the research themselves, but that their research cannot be isolated from society. I think it’s a very joyful experience to see that someone cares about what you do.

Why should artists care about science?

If you go to any academic institution, there’s always a scientific component, very often also a technological one. A scientific aspect of your life is always present. This is happening because we’re all on the same course. It’s a consequence of this presence of science in our culture. Artists have an important role in our society, and they help to spark conversations that are important to everyone. Sometimes it might seem as though they are coming from a very individual lens, but in fact they have a very large reach and impact. Not immediately, not something that you can count with data, but there is definitely an impact. Artists open these channels for communicating and thinking about a particular aspect of science, which is difficult to see from a scientific perspective. Because in any discipline, it’s amazing to see your activity from the eyes of others.

A few years back we did a little survey, and most of the scientists thought that by spending time with artists, they took a step back to think about their research from a different lens, and this changed their perspective. They thought of this as a very positive experience. So I think art is not only about communicating to the public, but about exploring the personal synergies of art and science. This is why artists are so important.

Do experimental and theoretical physicists have different attitudes towards art?

Typically, we think that theorists are much more open to artists, but I don’t agree. In my experience at CERN, I have found many engineers and experimental physicists to be highly theoretical. Both value artistic perspectives and their ability to consider questions and scientific ideas in an unconventional way. Experimental physicists would emphasise engagement with instruments and data, while theoretical physicists would focus on conceptual abstraction.

By being with artists, many experimentalists feel that they have the opportunity to talk about things beyond their research. For example, we often talk about the “frontiers of knowledge”. When asked about this, experimentalists or theoretical physicists might tell us about something other than particle physics – like neuroscience, or the brain and consciousness. A scientist is a scientist. They are very curious about everything.

Do these interactions help to blur the distinction between art and science?

Well, here I’m a bit radical because I know that creativity is something we define. Creativity and curiosity are the parameters and competencies that make up artists and scientists. But to become a scientist or an artist you need years of training – it’s not that you can become one just because you are a curious and creative person.

Chroma VII work of art

Not many people can chat about particle physics, but scientists very often chat with artists. I saw artists speaking for hours with scientists about the Higgs field. When you see two people speaking about the same thing, but with different registers, knowledge and background, it’s a precious moment.

When facilitating these discussions between physicists and artists, we don’t speak only about physics, but about everything that worries them. Through that, grows a sort of intimacy that often becomes something else: a friendship. This is the point at which a scientist stops being an information point for an artist and becomes someone who deals with big questions alongside an artist – who is also a very knowledgeable and curious person. This is a process rich in contrast, and you get many interesting surprises out of these interactions.

But even in this moment, they are still artists and scientists. They don’t become this blurred figure that can do anything.

Can scientific discovery exist without art?

That’s a very tricky question. I think that art is a component of science, therefore science cannot exist without art – without the qualities that the artist and scientist have in common. To advance science, you have to create a question that needs to be answered experimentally.

Did discoveries in quantum mechanics affect the arts?

Everything is subject to quantum mechanics. Maybe what it changed was an attitude towards uncertainty: what we see and what we think is there. There was an increased sense of doubt and general uncertainty in the arts.

Do art and science evolve together or separately?

I think there have been moments of convergence – you can clearly see it in any of the avant-garde movements. The same applies to literature; for example, modernist writers showed a keen interest in science. Poets such as T S Eliot approached poetry with a clear resonance of the first scientific revolutions of the century. There are references to the contributions of Faraday, Maxwell and Planck. You can tell these artists and poets were informed and eager to follow what science was revealing about the world.

You can also note the influence of science in music, as physicists get a better understanding of the physical aspects of sound and matter. Physics became less about viewing the world through a lens, and instead focused on the invisible: the vibrations of matter, electricity, the innermost components of materials. At the end of the 19th and 20th centuries, these examples crop up constantly. It’s not just representing the world as you see it through a particular lens, but being involved in the phenomena of the world and these uncensored realities.

From the 1950s to the 1970s you can see these connections in every single moment. Science is very present in the work of artists, but my feeling is that we don’t have enough literature about it. We really need to conduct more research on this connection between humanities and science.

What are your favourite examples of art influencing science?

Feynman diagrams are one example. Feynman was amazing – a prodigy. Many people before him tried to represent things that escaped our intuition visually and failed. We also have the Pauli Archives here at CERN. Pauli was not the most popular father of quantum mechanics, but he was determined to not only understand mathematical equations but to visualise them, and share them with his friends and colleagues. This sort of endeavour goes beyond just writing – it is about the possibility of creating a tangible experience. I think scientists do that all the time by building machines, and then by trying to understand these machines statistically. I see that in the laboratory constantly, and it’s very revealing because usually people might think of these statistics as something no one cares about – that the visuals are clumsy and nerdy. But they’re not.

Even Leonardo da Vinci was known as a scientist and an artist, but his anatomical sketches were not discovered until hundreds of years after his other works. Newton was also paranoid about expressing his true scientific theories because of the social standards and politics of the time. His views were unorthodox, and he did not want to ruin his prestigious reputation.

Today’s culture also influences how we interpret history. We often think of Aristotle as a philosopher, yet he is also recognised for contributions to natural history. The same with Democritus, whose ideas laid foundations for scientific thought.

So I think that opening laboratories to artists is very revealing about the influence of today’s culture on science.

When did natural philosophy branch out into art and science?

I believe it was during the development of the scientific method: observation, analysis and the evolution of objectivity. The departure point was definitely when we developed a need to be objective. It took centuries to get where we are now, but I think there is a clear division: a line with philosophy, natural philosophy and natural history on one side, and modern science on the other. Today, I think art and science have different purposes. They convene at different moments, but there is always this detour. Some artists are very scientific minded, and some others are more abstract, but they are both bound to speculate massively.

It’s really good news for everyone that labs want to include non-scientists

For example, at our Arts at CERN programme we have had artists who were interested in niche scientific aspects. Erich Berger, an artist from Finland, was interested in designing a detector, and scientists whom he met kept telling him that he would need to calibrate the detector. The scientist and the artist here had different goals. For the scientist, the most important thing is that the detector has precision in the greatest complexity. And for the artist, it’s not. It’s about the process of creation, not the analysis.

Do you think that science is purely an objective medium while art is a subjective one?

No. It’s difficult to define subjectivity and objectivity. But art can be very objective. Artists create artefacts to convey their intended message. It’s not that these creations are standing alone without purpose. No, we are beyond that. Now art seeks meaning that is, in this context, grounded in scientific and technological expertise.

How do you see the future of art and science evolving?

There are financial threats to both disciplines. We are still in this moment where things look a bit bleak. But I think our programme is pioneering, because many scientific labs are developing their own arts programmes inspired by the example of Arts at CERN. This is really great, because unless you are in a laboratory, you don’t see what doing science is really about. We usually read science in the newspapers or listen to it on a podcast – everything is very much oriented to the communication of science, but making science is something very specific. It’s really good news for everyone that laboratories want to include non-scientists. Arts at CERN works mostly with visual artists, but you could imagine filmmakers, philosophers, those from the humanities, poets or almost anyone at all, depending on the model that one wants to create in the lab.

The post Encounters with artists appeared first on CERN Courier.

Opinion Over the past 10 years, Mónica Bello facilitated hundreds of encounters between artists and scientists as curator of the Arts at CERN programme. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_INT_Bello.jpg
Breaking new ground in flavour universality https://cerncourier.com/a/breaking-new-ground-in-flavour-universality/ Wed, 26 Mar 2025 13:43:49 +0000 https://cerncourier.com/?p=112758 A new result from the LHCb collaboration further tightens constraints on the lepton-flavour-universality violation in rare B decays.

The post Breaking new ground in flavour universality appeared first on CERN Courier.

LHCb figure 1

A new result from the LHCb collaboration supports the hypothesis that the rare decays B± → K±e+e− and B± → K±µ+µ− occur at the same rate, further tightening constraints on the magnitude of lepton flavour universality (LFU) violation in rare B decays. The new measurement is the most precise to date in the high-q² region and the first of its kind at a hadron collider.

LFU is an accidental symmetry of the Standard Model (SM). Under LFU, each generation of lepton ℓ± (electron, muon and tau lepton) is equally likely to interact with the W boson in decay processes such as B± → K±ℓ+ℓ−. This symmetry leads to the prediction that the ratio of branching fractions for these decay channels should be unity except for kinematic effects due to the different masses of the charged leptons. The most straightforward ratio to measure is that between the muon and electron decay modes, known as RK. Any significant deviation from RK = 1 could only be explained by the existence of new physics (NP) particles that preferentially couple to one lepton generation over another, violating LFU.
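Written out (a standard definition consistent with the description above, not a formula quoted in the article), the ratio compares the muon- and electron-mode branching fractions:

```latex
% Ratio of branching fractions; the SM predicts unity up to small
% kinematic corrections from the charged-lepton masses
R_K = \frac{\mathcal{B}(B^\pm \to K^\pm \mu^+ \mu^-)}
           {\mathcal{B}(B^\pm \to K^\pm e^+ e^-)} \, , \qquad
R_K^{\text{SM}} \simeq 1 \, .
```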

B± → K±ℓ+ℓ− decays are a powerful probe for virtual NP particles. These decays involve an underlying b → s quark transition – an example of a flavour-changing neutral current (FCNC). FCNC transitions are extremely rare in the SM, as they occur only through higher-order Feynman diagrams. This makes them particularly sensitive to contributions from NP particles, which could significantly alter the characteristics of the decays. In this case, the mass of the NP particles could be much larger than can be produced directly at the LHC. “Indirect” searches for NP, such as measuring the precisely predicted ratio RK, can probe mass scales beyond the reach of direct-production searches with current experimental resources.

The new measurement is the most precise to date in the high-q² region

In the decay process B± → K±ℓ+ℓ−, the final-state leptons can also originate from an intermediate resonant state, such as a J/ψ or ψ(2S). These resonant channels occur through tree-level Feynman diagrams. Their contributions significantly outnumber the non-resonant FCNC processes and are not expected to be affected by NP. RK is therefore measured in ranges of dilepton invariant mass-squared (q²), which exclude these resonances, to preserve sensitivity to potential NP effects in FCNC processes.

The new result from the LHCb collaboration measures RK in the high-q² region, above the ψ(2S) resonance. The high-q² region data has a different composition of backgrounds compared to the low-q² data, leading to different strategies for their rejection and modelling, and different systematic effects. With RK expected to be unity in all domains in the SM, low-q² and high-q² measurements offer powerfully complementary constraints on the magnitude of LFU-violating NP in rare B decays.

The new measurement of RK agrees with the SM prediction of unity and is the most precise to date in the high-q² region (figure 1). It complements a refined analysis below the J/ψ resonance published by LHCb in 2023, which also reported RK consistent with unity. Both results use the complete proton–proton collision data collected by LHCb from 2011 to 2018. They lay the groundwork for even more precise measurements with data from Run 3 and beyond.

News A new result from the LHCb collaboration further tightens constraints on the lepton-flavour-universality violation in rare B decays. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_EF_LHCb_feature.jpg
A new record for precision on B-meson lifetimes https://cerncourier.com/a/a-new-record-for-precision-on-b-meson-lifetimes/ Wed, 26 Mar 2025 13:24:30 +0000 https://cerncourier.com/?p=112771 As direct searches for physics beyond the Standard Model continue to push frontiers at the LHC, the b-hadron physics sector remains a crucial source of insight for testing established theoretical models.

The post A new record for precision on B-meson lifetimes appeared first on CERN Courier.

ATLAS figure 1

As direct searches for physics beyond the Standard Model continue to push frontiers at the LHC, the b-hadron physics sector remains a crucial source of insight for testing established theoretical models.

The ATLAS collaboration recently published a new measurement of the B0 lifetime using B0 → J/ψK*0 decays from the entire Run-2 dataset it has recorded at 13 TeV. The result improves the precision of previous world-leading measurements by the CMS and LHCb collaborations by a factor of two.

Studies of b-hadron lifetimes probe our understanding of the weak interaction. The lifetimes of b-hadrons can be systematically computed within the heavy-quark expansion (HQE) framework, where b-hadron observables are expressed as a perturbative expansion in inverse powers of the b-quark mass.
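Schematically (a generic textbook form of the HQE with placeholder coefficients c_i and operator matrix elements ⟨O_i⟩, not a formula taken from the ATLAS paper), the expansion reads:

```latex
% Corrections start at 1/m_b^2; there is no 1/m_b term
\Gamma(H_b) = \Gamma_0 \left( 1
            + \frac{c_2 \langle O_2 \rangle}{m_b^2}
            + \frac{c_3 \langle O_3 \rangle}{m_b^3} + \cdots \right)
```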

ATLAS measures the “effective” B0 lifetime, which represents the average decay time incorporating effects from mixing and CP contributions, as τ(B0) = 1.5053 ± 0.0012 (stat.) ± 0.0035 (syst.) ps. The result is consistent with previous measurements published by ATLAS and other experiments, as summarised in figure 1. It also aligns with theoretical predictions from HQE and lattice QCD, as well as with the experimental world average.

The analysis benefitted from the large Run-2 dataset and a refined trigger selection, enabling the collection of an extensive sample of 2.5 million B0 → J/ψK*0 decays. Events with a J/ψ meson decaying into two muons with sufficient transverse momentum are cleanly identified in the ATLAS Muon Spectrometer by the first-level hardware trigger. In the next-level software trigger, exploiting the full detector information, these muons are then combined with two tracks measured by the Inner Detector, ensuring they originate from the same vertex.

The B0-meson lifetime is determined through a two-dimensional unbinned maximum-likelihood fit, utilising the measured B0-candidate mass and decay time, and accounting for both signal and background components. The limited hadronic particle-identification capability of ATLAS requires careful modelling of the significant backgrounds from other processes that produce J/ψ mesons. The sensitivity of the fit is increased by estimating the uncertainty of the decay-time measurement provided by the ATLAS tracking and vertexing algorithms on a per-candidate basis. The resulting lifetime measurement is limited by systematic uncertainties, with the largest contributions arising from the correlation between B0 mass and lifetime, and ambiguities in modelling the mass distribution. 
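To illustrate the idea of an unbinned maximum-likelihood lifetime fit (a minimal one-dimensional toy with a pure exponential model and no resolution, background or mass dimension – not the ATLAS analysis, which fits mass and decay time simultaneously with per-candidate uncertainties), one could write, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy dataset: exponential decay times with an assumed "true" lifetime.
# The value 1.5053 ps is simply the central value quoted in the text,
# reused here as the simulation input.
rng = np.random.default_rng(1)
true_tau = 1.5053                       # ps
t = rng.exponential(true_tau, 100_000)  # simulated per-candidate decay times

def nll(tau):
    """Negative log-likelihood for an exponential decay-time PDF (1/tau) e^{-t/tau}."""
    return t.size * np.log(tau) + t.sum() / tau

# Unbinned fit: minimise the NLL over the lifetime parameter.
fit = minimize_scalar(nll, bounds=(0.1, 10.0), method="bounded")
tau_hat = fit.x  # maximum-likelihood estimate of the lifetime
```

For a pure exponential the maximum-likelihood estimate coincides with the sample mean of the decay times; the numerical fit is shown because the real analysis adds mass, background and resolution terms to the likelihood, where no closed form exists.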

ATLAS combined its measurement with the average decay width (Γs) of the light and heavy Bs-meson mass eigenstates, also measured by ATLAS, to determine the ratio of decay widths as Γd/Γs = 0.9905 ± 0.0022 (stat.) ± 0.0036 (syst.) ± 0.0057 (ext.). The result is consistent with unity and provides a stringent test of QCD predictions, which also support a value near unity.
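Since a lifetime is the inverse of the corresponding total decay width, the quoted ratio follows schematically from combining the two ATLAS measurements (glossing over the subtleties of effective versus eigenstate-averaged widths):

```latex
\Gamma_d = \frac{1}{\tau(B^0)} \, , \qquad
\frac{\Gamma_d}{\Gamma_s} = \frac{1}{\tau(B^0)\,\Gamma_s}
```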

News As direct searches for physics beyond the Standard Model continue to push frontiers at the LHC, the b-hadron physics sector remains a crucial source of insight for testing established theoretical models. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_EF_ATLAS_feature.jpg
Beyond Bohr and Einstein https://cerncourier.com/a/beyond-bohr-and-einstein/ Wed, 26 Mar 2025 13:22:32 +0000 https://cerncourier.com/?p=112808 Jim Al-Khalili reviews Quantum Drama, a new book by physicist and science writer Jim Baggott and the late historian of science John L Heilbron.

The post Beyond Bohr and Einstein appeared first on CERN Courier.

When I was an undergraduate physics student in the mid-1980s, I fell in love with the philosophy of quantum mechanics. I devoured biographies of the greats of early-20th-century atomic physics – physicists like Bohr, Heisenberg, Schrödinger, Pauli, Dirac, Fermi and Born. To me, as I was struggling with the formalism of quantum mechanics, there seemed to be something so exciting, magical even, about that era, particularly those wonder years of the mid-1920s when its mathematical framework was being developed and the secrets of the quantum world were revealing themselves.

I went on to do a PhD in nuclear reaction theory, which meant I spent most of my time working through mathematical derivations, becoming familiar with S-matrices, Green’s functions and scattering amplitudes, scribbling pages of angular-momentum algebra and coding in Fortran 77. And I loved that stuff. There certainly seemed to be little time for worrying about what was really going on inside atomic nuclei. Indeed, I was learning that even the notion of something “really going on” was a vague one. My generation of theoretical physicists were still being very firmly told to “shut up and calculate”, as many adherents of the Copenhagen school of quantum mechanics were keen to advocate. To be fair, so much progress has been made over the past century, in nuclear and particle physics, quantum optics, condensed-matter physics and quantum chemistry, that philosophical issues were seen as an unnecessary distraction. I recall one senior colleague, frustrated by my abiding interest in interpretational matters, admonishing me with: “Jim, an electron is an electron is an electron. Stop trying to say more about it.” And there certainly seemed to be very little in the textbooks I was reading about unresolved issues arising from such topics as the EPR (Einstein–Podolsky–Rosen) paradox and the measurement problem, let alone any analysis of the work of Hugh Everett and David Bohm, who were regarded as mavericks. The Copenhagen hegemony ruled supreme.

What I wasn’t aware of until later in my career was that a community of physicists had indeed continued to worry and think about such matters. These physicists were doing more than just debating and philosophising – they were slowly advancing our understanding of the quantum world. Experimentalists such as Alain Aspect, John Clauser and Anton Zeilinger were devising ingenious experiments in quantum optics – all three were awarded the Nobel Prize only in 2022, for their work on tests of John Bell’s famous inequality, which says a lot about how we are only now acknowledging their contribution. Meanwhile, theorists such as Wojciech Zurek, Erich Joos, Dieter Zeh, Abner Shimony and Asher Peres, to name just a few, were formalising ideas on entanglement and decoherence theory. It is certainly high time that quantum-mechanics textbooks – even advanced undergraduate ones – contained these new insights.

Quantum Drama

All of which brings me to Quantum Drama, a new popular-science book and collaboration between the physicist and science writer Jim Baggott and the late historian of science John L Heilbron. In terms of level, the book is at the higher end of the popular-science market and, as such, will probably be of most interest to, for example, readers of CERN Courier. If I have a criticism of the book it is that its level is not consistent. For it tries to be all things. On occasion, it has wonderful biographical detail, often of less well-known but highly deserving characters. It is also full of wit and new insights. But then sometimes it can get mired in technical detail, such as in the lengthy descriptions of the different Bell tests, which I imagine only professional physicists are likely to fully appreciate.

Having said that, the book is certainly timely. This year the world celebrates the centenary of quantum physics, since the publication of the momentous papers of Heisenberg and Schrödinger on matrix and wave mechanics, in 1925 and 1926, respectively. Progress in quantum information theory and in the development of new quantum technologies is also gathering pace right now, with the promise of quantum computers, quantum sensing and quantum encryption getting ever closer. This all provides an opportunity for the philosophy of quantum mechanics to finally emerge from the shadows into mainstream debate again.

A new narrative

So, what makes Quantum Drama stand out from other books that retell the story of quantum mechanics? Well, I would say that most historical accounts tend to focus only on that golden age between 1900 and 1927, which came to an end at the Solvay Conference in Brussels and those well-documented few days when Einstein and Bohr had their debate about what it all means. While these two giants of 20th-century physics make the front cover of the book, Quantum Drama takes the story on beyond that famous conference. Other accounts, both popular and scholarly, tend to push the narrative that Bohr won the argument, leaving generations of physicists with the idea that the interpretational issues had been resolved – apart, that is, from the odd dissenting voices from the likes of Everett or Bohm who tried, unsuccessfully it was argued, to put a spanner in the Copenhagen works. All the real progress in quantum foundations after 1927, or so we were told, was in the development of quantum field theories, such as QED and QCD, the excitement of high-energy physics and the birth of the Standard Model, with the likes of Murray Gell-Mann and Steven Weinberg replacing Heisenberg and Schrödinger at centre stage. Quantum Drama takes up the story after 1927, showing that there has been a lively, exciting and ongoing dispute over what it all means, long after the death of those two giants of physics. In fact, the period up to Solvay 1927 is all dealt with in Act I of the book. The subtitle puts it well: From the Bohr–Einstein Debate to the Riddle of Entanglement.

The Bohr–Einstein debate is still very much alive and kicking

All in all, Quantum Drama delivers something remarkable, for it shines a light on all the muddle, complexity and confusion surrounding a century of debate about the meaning of quantum mechanics and the famous “Copenhagen spirit”, treating the subject with thoroughness and genuine scholarship, and showing that the Bohr–Einstein debate is still very much alive and kicking.

Review Jim Al-Khalili reviews Quantum Drama, a new book by physicist and science writer Jim Baggott and the late historian of science John L Heilbron. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_REV_Bell.jpg
Guido Barbiellini 1936–2024 https://cerncourier.com/a/guido-barbiellini-1936-2024/ Wed, 26 Mar 2025 13:21:08 +0000 https://cerncourier.com/?p=112839 Guido Barbiellini Amidei, who passed away on 15 November 2024, made fundamental contributions to both particle physics and astrophysics.

The post Guido Barbiellini 1936–2024 appeared first on CERN Courier.

Guido Barbiellini

Guido Barbiellini Amidei, who passed away on 15 November 2024, made fundamental contributions to both particle physics and astrophysics.

In 1959 Guido earned a degree in physics from Rome University with a thesis on electron bremsstrahlung in monocrystals under Giordano Diambrini, a skilled experimentalist and excellent teacher. Another key mentor was Marcello Conversi, spokesperson for one of the detectors at the Adone electron–positron collider at INFN Frascati, where Guido became a staff member and developed the first luminometer based on small-angle electron–positron scattering – a technique still used today. Together with Shuji Orito, he also built the first double-tagging system for studying gamma-ray collisions.

Guido later spent several years at CERN, collaborating with Carlo Rubbia, first on the study of K-meson decays at the Proton Synchrotron and then on small-angle proton–proton scattering at the Intersecting Storage Rings. In 1974 he proposed an experiment in a new field for him: neutrino-electron scattering, a fundamental but extremely rare phenomenon known from a handful of events seen in Gargamelle. To distinguish electromagnetic showers from hadronic ones, the CHARM collaboration built a “light” calorimeter made of 150 tonnes of Carrara marble. From 1979 to 1983, 200 electron–neutrino scattering events were recorded.

In 1980 Guido remarked to his friend Ugo Amaldi: “Why don’t we start our own collaboration for LEP instead of joining others?” This suggestion sparked the genesis of the DELPHI collaboration, in which Guido played a pivotal role in defining its scientific objectives and overseeing the construction of the barrel electromagnetic calorimeter. He also contributed significantly to the design of the luminosity monitors. Above all, Guido was a constant driving force within the experiment, offering innovative ideas for fundamental physics during the transition to LEP’s higher-energy phase, and engaging tirelessly with both young students and senior colleagues.

Guido’s insatiable scientific curiosity also extended to CP symmetry violation. In 1989 he co-organised a workshop, with Konrad Kleinknecht and Walter Hoogland, exploring the possibility of an electron–positron ϕ-factory to study CP violation in neutral kaon decays. Two of his papers, with Claudio Santoni, laid the groundwork for constructing the DAΦNE collider in Frascati.

The year 1987 was a turning point for Guido. Firstly, he became a professor at the University of Trieste. Secondly, the detection of neutrinos produced by Supernova 1987A inspired a letter, published in Nature in collaboration with Giuseppe Cocconi, in which it was established that neutrinos have a charge smaller than 10⁻¹⁷ elementary charges. Thirdly, Guido presented a new idea to mount silicon detectors (which he had encountered through work done in DELPHI by Bernard Hyams and Peter Weilhammer) on the International Space Station or a spacecraft to detect cosmic rays and their showers, which led to a seminal paper.

At the beginning of the 1990s, an international collaboration for a large NASA space mission focused on gamma-ray astrophysics (initially named GLAST) began to form, led by SLAC scientists. Guido was among the first proponents and later was the national representative of many INFN groups. The mission, later renamed Fermi, was launched in 2008 and continues to produce significant insights in topics ranging from neutron stars and black holes to dark-matter annihilation.

Beyond GLAST, Guido was captivated by the application of silicon sensors to a new programme of small space missions initiated by the Italian Space Agency. The AGILE gamma-ray astrophysics mission, for which Guido was co-principal investigator, was conceived and approved during this period. Launched in 2007, AGILE made numerous discoveries over nearly 17 years, including identifying the origin of hadronic cosmic rays in supernova remnants and discovering novel, rapid particle acceleration phenomena in the Crab Nebula.

Guido’s passion for physics made him inexhaustible. He always brought fresh insights and thoughtful judgments, fostering a collaborative environment that enriched all the projects he took part in. He was not only a brilliant physicist but also a true gentleman of calm and mild manners, widely appreciated as a teacher and as director of INFN Trieste. Intellectually free and always smiling, he conveyed determination and commitment with grace and a profound dedication to nurturing young talents. He will be deeply missed.

News Guido Barbiellini Amidei, who passed away on 15 November 2024, made fundamental contributions to both particle physics and astrophysics. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_Obits_Barbiellini_feature.jpg
Meinhard Regler 1941–2024 https://cerncourier.com/a/meinhard-regler-1941-2024/ Wed, 26 Mar 2025 13:20:13 +0000 https://cerncourier.com/?p=112845 Meinhard Regler, an expert in detector development and software analysis, passed away on 22 September 2024 at the age of 83.

The post Meinhard Regler 1941–2024 appeared first on CERN Courier.

Meinhard Regler

Meinhard Regler, an expert in detector development and software analysis, passed away on 22 September 2024 at the age of 83.

Born and raised in Vienna, Meinhard studied physics at the Technical University Vienna (TUW) and completed his master’s thesis on deuteron acceleration in a linac at CERN. In 1966 he joined the newly founded Institute of High Energy Physics (HEPHY) of the Austrian Academy of Sciences. He settled in Geneva to participate in a counter experiment at the CERN Proton Synchrotron, and in 1970 obtained his PhD with distinction from TUW.

In 1970 Meinhard became staff member in CERN’s data-handling division. He joined the Split Field Magnet experiment at the Intersecting Storage Rings and, together with HEPHY, contributed specially designed multi-wire proportional chambers. Early on, he realised the importance of rigorous statistical methods for track and vertex reconstruction in complex detectors, resulting in several seminal papers.

In 1975 Meinhard returned to Vienna as leader of HEPHY’s experimental division. From 1993 until his retirement at the end of 2006 he was deputy director and responsible for the detector development and software analysis groups. As a faculty member of TUW he created a series of specialised lectures and practical courses, which shaped a generation of particle physicists. In 1978 Meinhard and Georges Charpak founded the Wire Chamber Conference, now known as the Vienna Conference on Instrumentation (VCI).

Meinhard continued his participation in experiments at CERN, including WA6, UA1 and the European Hybrid Spectrometer. After joining the DELPHI experiment at LEP, he realised the emerging potential of semiconductor tracking devices and established this technology at HEPHY. First applied at DELPHI’s Very Forward Tracker, this expertise was successfully continued with important contributions to the CMS tracker at LHC, the Belle vertex detector at KEKB and several others.

Meinhard is author and co-author of several hundred scientific papers. His and his group’s contributions to track and vertex reconstruction are summarised in the standard textbook Data Analysis Techniques for High-Energy Physics, published by Cambridge University Press and translated into Russian and Chinese.

All that would suffice for a lifetime achievement, but not so for Meinhard. Inspired by the fall of the Iron Curtain, he envisaged the creation of an international centre of excellence in the Vienna region. Initially planned as a spallation neutron source, the project eventually transmuted into a facility for cancer therapy by proton and carbon-ion beams, called MedAustron. Financed by the province of Lower Austria and the hosting city of Wiener Neustadt, and with crucial scientific and engineering support from CERN and Austrian institutes, clinical treatment started in 2016.

Meinhard received several prizes and was rewarded with the highest scientific decoration of Austria

Meinhard was invited as a lecturer to many international conferences and post-graduate schools worldwide. He chaired the VCI series, organised several accelerator schools and conferences in Austria, and served on the boards of the European Physical Society’s international group on accelerators. For his tireless scientific efforts and in particular the realisation of MedAustron, Meinhard received several prizes and was rewarded with the highest scientific decoration of Austria – the Honorary Cross for Science and Arts of First Class.

He was also a co-founder and long-term president of a non-profit organisation in support of mentally handicapped people. His character was incorruptible, strictly committed to truth and honesty, and responsive to loyalty, independent thinking and constructive criticism.

In Meinhard Regler we have lost an enthusiastic scientist, visionary innovator, talented organiser, gifted teacher, great humanist and good friend. His legacy will forever stay with us.

News Meinhard Regler, an expert in detector development and software analysis, passed away on 22 September 2024 at the age of 83. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_Obits_Regler_feature.jpg
Iosif Khriplovich 1937–2024 https://cerncourier.com/a/iosif-khriplovich-1937-2024/ Wed, 26 Mar 2025 13:19:30 +0000 https://cerncourier.com/?p=112842 Renowned theorist Iosif Khriplovich passed away on 26 September 2024, aged 87.

The post Iosif Khriplovich 1937–2024 appeared first on CERN Courier.

Renowned Soviet/Russian theorist Iosif Khriplovich passed away on 26 September 2024, aged 87. Born in 1937 in Ukraine to a Jewish family, he graduated from Kiev University and moved to the newly built Academgorodok in Siberia. From 1959 to 2014 he was a prominent member of the theory department at the Budker Institute of Nuclear Physics. He combined his research with teaching at Novosibirsk University, where he also held a professorship in 1983–2009. In 2014 he moved to St. Petersburg to take up a professorial position at Petersburg University and was a corresponding member of the Russian Academy of Sciences from 2000.

In a paper published in 1969, Khriplovich was the first to discover the phenomenon of anti-screening in the SU(2) Yang–Mills theory by calculating the first loop correction to the charge renormalisation. This immediately translates into the crucial first coefficient (–22/3) of the Gell-Mann–Low function and asymptotic freedom of the theory.

Regretfully, Khriplovich did not follow this interpretation of his result even after the key SLAC experiment on deep inelastic scattering and its subsequent partonic interpretation by Feynman. The honour of the discovery of asymptotic freedom in QCD went to three authors of papers published in 1973, who seemingly did not know of Khriplovich’s calculations.

In the early 1970s, Khriplovich’s interests turned to fundamental questions on the way towards the Standard Model. One was whether the electroweak theory is described by the Weinberg–Salam model, with neutral currents interacting via Z bosons, or the Georgi–Glashow model without them. While neutrino scattering on nucleons was soon confirmed, the electron interaction with nucleons was still unchecked. One practical way to find out was to use atomic spectroscopy to look for any mixing between states of opposite parity. Actively entering this area, Khriplovich and his students worked out quantitative predictions for the rotation of laser polarisation due to the weak interaction between electrons and nucleons. Their predictions were triumphantly confirmed in experiments, firstly by Barkov and Zolotorev at the Budker Institute. The same parity violating interaction was later observed at SLAC in 1978, proving the Z-exchange and the Weinberg–Salam model beyond any doubt. In 1973, together with Arkady Vainshtein, Khriplovich also derived the first solid limit on the mass of the charm quark that was unexpectedly discovered the following year.

He became engaged in Yang–Mills theories at a time when very few people were interested in them

The work of Khriplovich and his group significantly advanced the theory of many-electron atoms and contributed to the subsequent studies of the violation of fundamental symmetries in processes involving elementary particles, atoms, molecules and atomic nuclei. His students and later close collaborators, such as Victor Flambaum, Oleg Sushkov and Maxim Pospelov, grew into strong physicists who made important contributions to various subfields of theoretical physics. He was awarded the Silver Dirac Medal by the University of New South Wales (Sydney) and the Pomeranchuk Prize by the Institute of Theoretical and Experimental Physics (Moscow).

Yulik, as he was affectionately known, had his own style in physics. He was feisty and focused on issues where he could become a trailblazer, unafraid to cut relations with scientists of any rank if he felt their behaviour did not match his high ethical standards. This is why he became engaged in Yang–Mills theories at a time when very few people were interested in them. Yet, Yulik was always graceful and respectful in his interactions with others, and smiling, as we would like to remember him.

The post Iosif Khriplovich 1937–2024 appeared first on CERN Courier.

]]>
News Renowned theorist Iosif Khriplovich passed away on 26 September 2024, aged 87. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_Obits_Khriplovich.jpg
Strategy symposium shapes up https://cerncourier.com/a/strategy-symposium-shapes-up/ Wed, 26 Mar 2025 13:17:46 +0000 https://cerncourier.com/?p=112593 The Open Symposium of the 2026 update to the European Strategy for Particle Physics will see scientists from around the world debate the future of the field.

The post Strategy symposium shapes up appeared first on CERN Courier.

]]>
Registration is now open for the Open Symposium of the 2026 update to the European Strategy for Particle Physics (ESPP). It will take place from 23 to 27 June at Lido di Venezia in Italy, and see scientists from around the world debate the inputs to the ESPP (see “A call to engage”).

The symposium will begin by surveying the implementation of the last strategy process, whose recommendations were approved by the CERN Council in June 2020. In-depth working-group discussions on all areas of physics and technology will follow.

The rest of the week will see plenary sessions on the different physics and technology areas, starting with various proposals for possible large accelerator projects at CERN, and the status and plans in other regions of the world. Open questions, as well as how they can be addressed by the proposed projects, will be presented in rapporteur talks. This will be followed by longer discussion blocks where the full community can engage. On the final day, members of the European Strategy Group will summarise the national inputs to the ESPP and other overarching topics.

The post Strategy symposium shapes up appeared first on CERN Courier.

]]>
News The Open Symposium of the 2026 update to the European Strategy for Particle Physics will see scientists from around the world debate the future of the field. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_NA_symposium.jpg
Karel Šafařík 1953–2024 https://cerncourier.com/a/karel-safarik-1953-2024/ Wed, 26 Mar 2025 13:16:21 +0000 https://cerncourier.com/?p=112849 Karel Šafařík, one of the founding members of the ALICE collaboration, passed away on 7 October 2024.

The post Karel Šafařík 1953–2024 appeared first on CERN Courier.

]]>
Karel Šafařík, one of the founding members of the ALICE collaboration, passed away on 7 October 2024.

Karel graduated in theoretical physics in Bratislava, Slovakia (then Czechoslovakia) in 1976 and worked at JINR Dubna for over 10 years, participating in experiments in Serpukhov and doing theoretical studies on the phenomenology of particle production at high energies. In 1990 he joined Collège de France and the heavy-ion programme at CERN, soon becoming one of the most influential scientists in the Omega series of heavy-ion experiments (WA85, WA94, WA97, NA57) at the CERN Super Proton Synchrotron (SPS). In 2002 Karel was awarded the Slovak Academy of Sciences Prize for his contributions to the observation of the enhancement of the production of multi-strange particles in heavy-ion collisions at the SPS. In 2013 he was awarded the medal of the Czech Physical Society.

As early as 1991, Karel was part of the small group who designed the first heavy-ion detector for the LHC, which later became ALICE. He played a central role in shaping the ALICE experiment, from the definition of physics topics and the detector layout to the design of the data format, tracking, data storage and data analysis. He was pivotal in convincing the collaboration to introduce two layers of pixel detectors to reconstruct decays of charm hadrons only a few tens of microns from the primary vertex in central lead–lead collisions at the LHC – an idea considered by many to be impossible in heavy-ion collisions, but that is now one of the pillars of the ALICE physics programme. He was the ALICE physics coordinator for many years leading up to and including first data taking. Over the years, he also made multiple contributions to ALICE upgrade studies and became known as the “wise man” to be consulted on the trickiest questions.

Karel was a top-class physicist, with a sharp analytical mind, a legendary memory, a seemingly unlimited set of competences ranging from higher mathematics to formal theory, and from detector physics to high-performance computing. At the same time he was a generous, caring and kind colleague who supported, helped, mentored and guided a large number of ALICE collaborators. We miss him dearly.

The post Karel Šafařík 1953–2024 appeared first on CERN Courier.

]]>
News Karel Šafařík, one of the founding members of the ALICE collaboration, passed away on 7 October 2024. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_Obits_Safarik.jpg
Günter Wolf 1937–2024 https://cerncourier.com/a/gunter-wolf-1937-2024/ Wed, 26 Mar 2025 13:15:16 +0000 https://cerncourier.com/?p=112852 Günter Wolf, who played a leading role in the planning, construction and data analysis of experiments that were instrumental in establishing the Standard Model, passed away on 29 October 2024 at the age of 86.

The post Günter Wolf 1937–2024 appeared first on CERN Courier.

]]>
Günter Wolf

Günter Wolf, who played a leading role in the planning, construction and data analysis of experiments that were instrumental in establishing the Standard Model, passed away on 29 October 2024 at the age of 86. He significantly shaped and contributed to the research programme of DESY, and knew better than almost anyone how to form international collaborations and lead them to the highest achievements.

Born in Ulm, Germany in 1937, Wolf studied physics in Tübingen. At the urging of his supervisor Helmut Faissner, he went to Hamburg in 1961 where the DESY synchrotron was being built under DESY founder Willibald Jentschke. Together with Erich Lohrmann and Martin Teucher, he was involved in the preparation of the bubble-chamber experiments there and at the same time took part in experiments at CERN.

The first phase of experiments with high-energy photons at the DESY synchrotron, in which he was involved, had produced widely recognised results on the electromagnetic interactions of elementary particles. In 1967 Wolf seized the opportunity to continue this research at the higher energies of the recently completed linear accelerator at Stanford University (SLAC). He became the spokesperson for an experiment with a polarised gamma beam, which provided new insights into the nature of vector mesons.

In 1971, Jentschke succeeded in bringing Wolf back to Hamburg as senior scientist. He remained associated with DESY for the rest of his life and became a leader in the planning, construction and analysis of key DESY experiments.

Together with Bjørn Wiik, as part of an international collaboration, Wolf designed and realised the DASP detector for DORIS, the first electron–positron storage ring at DESY. This led to the discovery of the excited states of charmonium in 1975 and thus to the ultimate confirmation that quarks are particles. For the next, larger electron–positron storage ring, PETRA, he designed the TASSO detector, again together with Wiik. In 1979, the TASSO collaboration was able to announce the discovery of the gluon through its spokesperson Wolf, for which he, together with colleagues from TASSO, was awarded the High Energy Particle Physics Prize of the European Physical Society.

Wolf’s negotiating skills and deep understanding of physics and technology served particle physics worldwide

In 1982 Wolf became the chair of the experiment selection committee for the planned LEP collider at CERN. His deep understanding of physics and technology, and his negotiating skills, were an essential foundation for the successful LEP programme, just one example of how Wolf has served particle physics worldwide as a member of international scientific committees.

At the same time, Wolf was involved in the planning of the physics programme for the electron–proton collider HERA. The ZEUS general-purpose detector for experiments at HERA was the work of an international collaboration of more than 400 scientists, which Wolf brought together and led as its spokesperson for many years. The experiments at HERA ran from 1992 to 2007, producing outstanding results that include the direct demonstration of the unification of the weak and electromagnetic force at high momentum transfers, the precise measurement of the structure of the proton, which is determined by quarks and gluons, and the surprising finding that there are collisions in which the proton remains intact even at the highest momentum transfers. In 2011 Wolf was awarded the Stern–Gerlach Medal of the German Physical Society, its highest award for achievements in experimental physics.

When dealing with colleagues and staff, Günter Wolf was always friendly, helpful, encouraging and inspiring, but at the same time demanding and insistent on precision and scientific excellence. He took the opinions of others seriously, but only a thorough and competent analysis could convince him. As a result, he enjoyed the greatest respect from everyone and became a role model and friend to many. DESY owes its reputation in the international physics community not least to people like him.

The post Günter Wolf 1937–2024 appeared first on CERN Courier.

]]>
News Günter Wolf, who played a leading role in the planning, construction and data analysis of experiments that were instrumental in establishing the Standard Model, passed away on 29 October 2024 at the age of 86. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_Obits_Wolf_feature.jpg
A call to engage https://cerncourier.com/a/a-call-to-engage/ Mon, 24 Mar 2025 08:47:33 +0000 https://cerncourier.com/?p=112676 The secretary of the 2026 European strategy update, Karl Jakobs, talks about the strong community involvement needed to reach a consensus for the future of our field.

The post A call to engage appeared first on CERN Courier.

]]>
The European strategy for particle physics is the cornerstone of Europe’s decision-making process for the long-term future of the field. In March 2024 CERN Council launched the programme for the third update of the strategy. The European Strategy Group (ESG) and the strategy secretariat for this update were established by CERN Council in June 2024 to organise the full process. Over the past few months, important aspects of the process have been set up, and these are described in more detail on the strategy web pages at europeanstrategyupdate.web.cern.ch/welcome.

The Physics Preparatory Group (PPG) will play an important role in distilling the community’s scientific input and scientific discussions at the open symposium in Venice in June 2025 into a “physics briefing book”. At its meeting in September 2024, CERN Council appointed eight members of the PPG, four on the recommendation of the scientific policy committee and four on the recommendation of the European Committee for Future Accelerators (ECFA). In addition, the PPG has one representative from CERN and two representatives each from the Americas and Asia.

The strategy secretariat also proposed to form nine working groups to cover the full range of physics topics as well as the technology areas of accelerators, detectors and computing. The work of these groups will be co-organised by two conveners, with one of them being a member of the PPG. In addition, an early-career researcher has been appointed to each group to act as a scientific secretary. The appointments of both the co-conveners and the early-career researchers are important for increasing the broader community’s engagement in the current update. The full composition of the PPG, the co-conveners and the scientific secretaries of the working groups is available on the strategy web pages.

Karl Jakobs

The strategy secretariat has also devised guidelines for input by the community. Any submitted documents must be no more than 10 pages long and provide a comprehensive and self-contained summary of the input. Additional information and details can be submitted in a separate backup document that can be consulted by the PPG if clarification on any aspect is required. A backup document is not, however, mandatory.

A major component is the input from national high-energy physics communities, which is expected to be collected individually by each country, and in some cases by region. The information collected from different countries and regions will be most useful if it is as coherent and uniform as possible when addressing the key issues. To assist with this, ECFA has put together a set of guidelines.

It is anticipated that a number of proposals for large-scale research projects will be submitted as input to the strategy process, including, but not limited to, particle colliders and collider detectors. These proposals are likely to vary in scale, anticipated timeline and technical maturity. In addition to studying the scientific potential of these projects, the ESG wishes to evaluate the sequence of delivery steps and the challenges associated with delivery, and to understand how each project could fit into the wider roadmap for European particle physics. In order to allow a straightforward comparison of projects, we therefore request that all large-scale projects submit a standardised set of technical data in addition to their physics case and technical description.

It is anticipated that a number of proposals for large-scale research projects will be submitted as input to the strategy

To allow the community to take into account and to react to the submissions collected by March 2025 and to the content of the briefing book, national communities are offered further opportunities for input: first ahead of the open symposium (see p11), with a deadline of 26 May 2025; and then ahead of the drafting session, with a deadline of 14 November 2025.

In this strategy process the community must converge on a preferred option for the next collider at CERN and identify a prioritised list of alternative options. The outcome of the process will provide the basis for the decision by CERN Council in 2027 or 2028 on the construction of the next large collider at CERN, following the High-Luminosity LHC. Areas of priority for exploration complementary to colliders and for other experiments to be considered at CERN and other laboratories in Europe will also be identified, as well as priorities for participation in projects outside Europe.

Given the importance of this process and its outcomes, I encourage strong community involvement throughout to reach a consensus for the future of our field.

The post A call to engage appeared first on CERN Courier.

]]>
Opinion The secretary of the 2026 European strategy update, Karl Jakobs, talks about the strong community involvement needed to reach a consensus for the future of our field. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_VIEW-motti.jpg
Edoardo Amaldi and the birth of Big Science https://cerncourier.com/a/edoardo-amaldi-and-the-birth-of-big-science/ Mon, 24 Mar 2025 08:45:02 +0000 https://cerncourier.com/?p=112656 In an interview drawing on memories from childhood and throughout his own distinguished career at CERN, Ugo Amaldi offers deeply personal insights into his father Edoardo’s foundational contributions to international cooperation in science.

The post Edoardo Amaldi and the birth of Big Science appeared first on CERN Courier.

]]>
Ugo Amaldi beside a portrait of his father Edoardo

Should we start with your father’s involvement in the founding of CERN?

I began hearing my father talk about a new European laboratory while I was still in high school in Rome. Our lunch table was always alive with discussions about science, physics and the vision of this new laboratory. Later, I learned that between 1948 and 1949, my father was deeply engaged in these conversations with two of his friends: Gilberto Bernardini, a well-known cosmic-ray expert, and Bruno Ferretti, a professor of theoretical physics at Rome University. I was 15 years old and those table discussions remain vivid in my memory.

So, the idea of a European laboratory was already being discussed before the 1950 UNESCO meeting?

Yes, indeed. Several eminent European physicists, including my father, Pierre Auger, Lew Kowarski and Francis Perrin, recognised that Europe could only be competitive in nuclear physics through collaborative efforts. All the actors wanted to create a research centre that would stop the post-war exodus of physics talent to North America and help rebuild European science. I now know that my father’s involvement began in 1946 when he travelled to Cambridge, Massachusetts, for a conference. There, he met Nobel Prize winner John Cockcroft, and their conversations planted in his mind the first seeds for a European laboratory.

Parallel to scientific discussions, there was an important political initiative led by Swiss philosopher and writer Denis de Rougemont. After spending the war years at Princeton University, he returned to Europe with a vision of fostering unity and peace. He established the Institute of European Culture in Lausanne, Switzerland, where politicians from France, Britain and Germany would meet. In December 1949, during the European Cultural Conference in Lausanne, French Nobel Prize winner Louis de Broglie sent a letter advocating for a European laboratory where scientists from across the continent could work together peacefully.

The Amaldi family in 1948

My father strongly believed in the importance of accelerators to advance the new field that, at the time, was at the crossroads between nuclear physics and cosmic-ray physics. Before the war, in 1936, he had travelled to Berkeley to learn about cyclotrons from Ernest Lawrence. He even attempted to build a cyclotron in Italy in 1942, taking advantage of the World’s Fair that was due to be held in Rome. Moreover, he was deeply affected by the exodus of talented Italian physicists after the war, including Bruno Rossi, Gian Carlo Wick and Giuseppe Cocconi. He saw CERN as a way to bring these scientists back and rebuild European physics.

How did Isidor Rabi’s involvement come into play?

In 1950 my father was corresponding with Gilberto Bernardini, who was spending a year at Columbia University. There Bernardini mentioned the idea of a European laboratory to Isidor Rabi, who, at the same time, was in contact with other prominent figures in this decentralised and multi-centred initiative. Together with Norman Ramsey, Rabi had previously succeeded, in 1947, in persuading nine northeastern US universities to collaborate under the banner of Associated Universities, Inc., which led to the establishment of Brookhaven National Laboratory.

What is not generally known is that before Rabi gave his famous speech at the fifth assembly of UNESCO in Florence in June 1950, he came to Rome and met with my father. They discussed how to bring this idea to fruition. A few days later, Rabi’s resolution at the UNESCO meeting calling for regional research facilities was a crucial step in launching the project. Rabi considered CERN a peaceful compensation for the fact that physicists had built the nuclear bomb.

How did your father and his colleagues proceed after the UNESCO resolution?

Following the UNESCO meeting, Pierre Auger, at that time director of exact and natural sciences at UNESCO, and my father took on the task of advancing the project. In September 1950 Auger spoke of it at a nuclear physics conference in Oxford, and at a meeting of the International Union of Pure and Applied Physics (IUPAP), my father – one of the vice presidents – urged the executive committee to consider how best to implement the Florence resolution. In May 1951, Auger and my father organised a meeting of experts at UNESCO headquarters in Paris, where a compelling justification for the European project was drafted.

The cost of such an endeavour was beyond the means of any single nation. This led to an intergovernmental conference under the auspices of UNESCO in December 1951, where the foundations for CERN were laid. Funding, totalling $10,000 for the initial meetings of the board of experts, came from Italy, France and Belgium. This was thanks to the financial support of men like Gustavo Colonnetti, president of the Italian Research Council, who had already – a year before – donated the first funds to UNESCO.

Were there any significant challenges during this period?

Not everyone readily accepted the idea of a European laboratory. Eminent physicists like Niels Bohr, James Chadwick and Hendrik Kramers questioned the practicality of starting a new laboratory from scratch. They were concerned about the feasibility and allocation of resources, and preferred the coordination of many national laboratories and institutions. Through skilful negotiation and compromise, Auger and my father incorporated some of the concerns raised by the sceptics into a modified version of the project, ensuring broader support. In February 1952 the first agreement setting up a provisional council for CERN was written and signed, and my father was appointed secretary general of the provisional CERN.

Enrico and Giulio Fermi, Ginestra Amaldi, Laura Fermi, Edoardo and Ugo Amaldi

He worked tirelessly, travelling through Europe to unite the member states and start the laboratory’s construction. In particular, the UK was reluctant to participate fully. They had their own advanced facilities, like the 40 MeV cyclotron at the University of Liverpool. In December 1952 my father visited John Cockcroft, at the time director of the Harwell Atomic Energy Research Establishment, to discuss this. There’s an interesting episode where my father, with Cockcroft, met Frederick Lindemann, Baron Cherwell, a long-time scientific advisor to Winston Churchill. Cherwell dismissed CERN as another “European paper mill.” My father, usually composed, lost his temper and passionately defended the project. During the following visit to Harwell, Cockcroft reassured him that his reaction was appropriate. From that point on, the UK contributed to CERN, albeit initially as a series of donations rather than as the result of a formal commitment. It may be interesting to add that, during the same visit to London and Harwell, my father met the young John Adams and was so impressed that he immediately offered him a position at CERN.

What were the steps following the ratification of CERN’s convention?

Robert Valeur, chairman of the council during the interim period, and Ben Lockspeiser, chairman of the interim finance committee, used their authority to stir up early initiatives and create an atmosphere of confidence that attracted scientists from all over Europe. As Lew Kowarski noted, there was a sense of “moral commitment” to leave secure positions at home and embark on this new scientific endeavour.

During the interim period from May 1952 to September 1954, the council convened three sessions in Geneva whose primary focus was financial management. The organisation began with an initial endowment of approximately 1 million Swiss Francs, which – as I said – included a contribution from the UK known as the “observer’s gift”. At each subsequent session, the council increased its funding, reaching around 3.7 million Swiss Francs by the end of this period. When the permanent organisation was established, an initial sum of 4.1 million Swiss Francs was made available.

Giuseppe Fidecaro, Edoardo Amaldi and Werner Heisenberg at CERN in 1960

In 1954, my father was worried that if the parliaments didn’t approve the convention before winter, construction would be delayed by the cold season. So he took a bold step and, with the approval of the council president, authorised the start of construction on the main site before the convention was fully ratified.

This led to Lockspeiser jokingly remarking later that council “has now to keep Amaldi out of jail”. The provisional council, set up in 1952, was dissolved when the European Organization for Nuclear Research officially came into being in 1954, though the acronym CERN (Conseil Européen pour la Recherche Nucléaire) was retained. By the conclusion of the interim period, CERN had grown significantly. A critical moment occurred on 29 September 1954, when a specific point in the ratification procedure was reached, rendering all assets temporarily ownerless. During this eight-day period, my father, serving as secretary general, was the sole owner on behalf of the newly forming permanent organisation. The interim phase concluded with the first meeting of the permanent council, marking the end of CERN’s formative years.

Did your father ever consider becoming CERN’s Director-General?

People asked him to be Director-General, but he declined for two reasons. First, he wanted to return to his students and his cosmic-ray research in Rome. Second, he didn’t want people to think he had done all this to secure a prominent position. He believed in the project for its own sake.

When the convention was finally ratified in 1954, the council offered the position of Director-General to Felix Bloch, a Swiss–American physicist and Nobel Prize winner for his work on nuclear magnetic resonance. Bloch accepted but insisted that my father serve as his deputy. My father, dedicated to CERN’s success, agreed to this despite his desire to return to Rome full time.

How did that arrangement work out?

My father agreed, but Bloch was no longer rooted in Europe at that time. He insisted on bringing all his instruments from Stanford so he could continue his research on nuclear magnetic resonance at CERN. He found it difficult to adapt to the demands of leading CERN and soon resigned. The council then elected Cornelis Jan Bakker, a Dutch physicist who had led the synchrocyclotron group, as the new Director-General. From the beginning, he had been the person my father thought would be the ideal director for the initial phase of CERN. Tragically, though, Bakker died in a plane crash a year and a half later. I well remember how hard my father was hit by this loss.

How did the development of accelerators at CERN progress?

The decision to adopt the strong focusing principle for the Proton Synchrotron (PS) was a pivotal moment. In August 1952 Otto Dahl, leader of the Proton Synchrotron study group, Frank Goward and Rolf Widerøe visited Brookhaven just as Ernest Courant, Stanley Livingston and Hartland Snyder were developing this new principle. They were so excited by this development that they returned to CERN determined to incorporate it into the PS design. In 1953 Mervyn Hine, a long-time friend of John Adams with whom he had moved to CERN, studied potential issues with misalignment in strong focusing magnets, which led to further refinements in the design. Ultimately, the PS became operational before the comparable accelerator at Brookhaven, marking a significant achievement for European science.

Edoardo Amaldi and Victor Weisskopf in 1974

It’s important here to recognise the crucial contributions of the engineers, who often don’t receive the same level of recognition as physicists. They are the ones who make the work of experimental physicists and theorists possible. “Viki” Weisskopf, Director-General of CERN from 1961 to 1965, compared the situation to the discovery of America. The machine builders are the captains and shipbuilders. The experimentalists are those fellows on the ships who sailed to the other side of the world and wrote down what they saw. The theoretical physicists are those who stayed behind in Madrid and told Columbus that he was going to land in India.

Your father also had a profound impact on the development of other Big Science organisations in Europe

Yes, in 1958 my father was instrumental, together with Pierre Auger, in the founding of the European Space Agency. In a letter written in 1958 to his friend Luigi Crocco, who was professor of jet propulsion at Princeton, he wrote that “it is now very much evident that this problem is not at the level of the single states like Italy, but mainly at the continental level. Therefore, if such an endeavour is to be pursued, it must be done on a European scale, as already done for the building of the large accelerators for which CERN was created… I think it is absolutely imperative for the future organisation to be neither military nor linked to any military organisation. It must be a purely scientific organisation, open – like CERN – to all forms of cooperation and outside the participating countries.” This document reflects my father’s vision of peaceful and non-military European science.

How is it possible for one person to contribute so profoundly to science and global collaboration?

My father’s ability to accept defeats and keep pushing forward was key to his success. He was an exceptional person with a clear vision and unwavering dedication. I hope that by sharing these stories, others might be inspired to pursue their goals with the same persistence and passion.

Could we argue that he was not only a visionary but also a relentless advocate?

He travelled extensively, talked to countless people, and was always cheerful and energetic. He accepted setbacks but kept moving forwards. In this connection, I want to mention Eliane Bertrand, later de Modzelewska, his secretary in Rome who went on to become secretary of the CERN Council for about 20 years, serving under several Directors-General. She left a memoir about those early days, highlighting how my father was always travelling, talking and never stopping. It’s a valuable piece of history that, I think, should be published.

Eliane de Modzelewska

International collaboration has been a recurring theme in your own career. How do you view its importance today?

International collaboration is more critical than ever in today’s world. Science has always been a bridge between cultures and nations, and CERN’s history is a testament to what this brings to humanity. It transcends political differences and fosters mutual understanding. I hope CERN and the broader scientific community will find ways to maintain these vital connections with all countries. I’ve always believed that fostering a collaborative and inclusive environment is one of our main goals as scientists. It’s not just about achieving results but also about how we work together and support each other along the way.

Looking ahead, what are your thoughts on the future of CERN and particle physics?

I firmly believe that pursuing higher collision energies is essential. While the Large Hadron Collider has achieved remarkable successes, there’s still much we haven’t uncovered – especially regarding supersymmetry. Even though minimal supersymmetry does not apply, I remain convinced that supersymmetry might manifest in ways we haven’t yet understood. Exploring higher energies could reveal supersymmetric particles or other new phenomena.

Like most European physicists, I support the Future Circular Collider initiative and starting with an electron–positron collider phase so as to explore new frontiers at two very different energy levels. However, if geopolitical shifts delay or complicate these plans, we should consider pushing hard on alternative strategies like developing the technologies for muon colliders.

Ugo Amaldi first arrived at CERN as a fellow in September 1961. Then, for 10 years at the ISS in Rome, he opened two new lines of research: quasi-free electron scattering on nuclei and on atoms. Back at CERN, he developed the Roman pots experimental technique, was a co-discoverer of the rise of the proton–proton cross-section with energy, measured the polarisation of muons produced by neutrinos, proposed the concept of a superconducting electron–positron linear collider, and led LEP’s DELPHI Collaboration. Today, he advances the use of accelerators in cancer treatment as the founder of the TERA Foundation for hadron therapy and as president emeritus of the National Centre for Oncological Hadrontherapy (CNAO) in Pavia. He continues his mother and father’s legacy of authoring high-school physics textbooks used by millions of Italian pupils. His motto is: “Physics is beautiful and useful.”

This interview first appeared in the newsletter of CERN’s experimental physics department. It has been edited for concision.

The post Edoardo Amaldi and the birth of Big Science appeared first on CERN Courier.

]]>
Feature In an interview drawing on memories from childhood and throughout his own distinguished career at CERN, Ugo Amaldi offers deeply personal insights into his father Edoardo’s foundational contributions to international cooperation in science. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_AMALDI_Ugo_feature.jpg
Isospin symmetry broken more than expected https://cerncourier.com/a/isospin-symmetry-broken-more-than-expected/ Mon, 24 Mar 2025 08:42:50 +0000 https://cerncourier.com/?p=112578 The NA61/SHINE collaboration have observed a strikingly large imbalance between charged and neutral kaons in argon–scandium collisions.

The post Isospin symmetry broken more than expected appeared first on CERN Courier.

]]>
In the autumn of 2023, Wojciech Brylinski was analysing data from the NA61/SHINE collaboration at CERN for his thesis when he noticed an unexpected anomaly – a strikingly large imbalance between charged and neutral kaons in argon–scandium collisions. Instead of the roughly equal numbers expected, he found that charged kaons were produced 18.4% more often than neutral kaons. This suggested that the “isospin symmetry” between up (u) and down (d) quarks might be broken by more than the small amount expected from the differences in their electric charges and masses – a discrepancy that existing theoretical models would struggle to explain. Known sources of isospin asymmetry predict deviations of only a few percent.

“When Wojciech got started, we thought it would be a trivial verification of the symmetry,” says Marek Gaździcki of Jan Kochanowski University of Kielce, spokesperson of NA61/SHINE at the time of the discovery. “We expected it to be closely obeyed – though we had previously measured discrepancies at NA49, they had large uncertainties and were not significant.”

Isospin symmetry is one facet of flavour symmetry, whereby the strong interaction treats all quark flavours identically, except for kinematic differences arising from their different masses. Strong interactions should therefore generate nearly equal yields of charged K+ (us̄) and K– (ūs), and neutral K0 (ds̄) and K̄0 (d̄s), given the similar masses of the two lightest quarks. NA61/SHINE’s data contradict the hypothesis of equal yields with 4.7σ significance.

“I see two options to interpret the results,” says Francesco Giacosa, a theo­retical physicist at Jan Kochanowski University working with NA61/SHINE. “First, we substantially underestimate the role of electromagnetic interactions in creating quark–antiquark pairs. Second, strong interactions do not obey flavour symmetry – if so, this would falsify QCD.” Isospin is not a symmetry of the electromagnetic interaction as up and down quarks have different electric charges.

While the experiment routinely measures particle yields in nuclear collisions, finding a discrepancy in isospin symmetry was not something researchers were actively looking for. NA61/SHINE’s primary focus is studying the phase diagram of high-energy nuclear collisions using a range of ion beams. This includes looking at the onset of deconfinement, the formation of a quark–gluon plasma fireball, and the search for the hypothesised QCD critical point where the transition between hadronic matter and quark–gluon plasma changes from a smooth crossover to a first-order phase transition. Data is also shared with neutrino and cosmic-ray experiments to help refine their models.

The collaboration is now planning additional studies using different projectiles, targets and collision energies to determine whether this effect is unique to certain heavy-ion collisions or a more general feature of high-energy interactions. They have also put out a call to theorists to help explain what might have caused such an unexpectedly large asymmetry.

“The observation of the rather large isospin violation stands in sharp contrast to its validity in a wide range of physical systems,” says Rob Pisarski, a theoretical physicist from Brookhaven National Laboratory. “Any explanation must be special to heavy-ion systems at moderate energy. NA61/SHINE’s discrepancy is clearly significant, and shows that QCD still has the power to surprise our naive expectations.”

The post Isospin symmetry broken more than expected appeared first on CERN Courier.

]]>
News The NA61/SHINE collaboration have observed a strikingly large imbalance between charged and neutral kaons in argon–scandium collisions. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_NA_NA61.jpg
Cosmogenic candidate lights up KM3NeT https://cerncourier.com/a/cosmogenic-candidate-lights-up-km3net/ Mon, 24 Mar 2025 08:40:44 +0000 https://cerncourier.com/?p=112563 Strings of photodetectors anchored to the seabed off the coast of Sicily have detected the most energetic neutrino ever observed, smashing previous records.

The post Cosmogenic candidate lights up KM3NeT appeared first on CERN Courier.

]]>
Muon neutrino

On 13 February 2023, strings of photodetectors anchored to the seabed off the coast of Sicily detected the most energetic neutrino ever observed, smashing previous records. The observation, embargoed until the publication of a paper in Nature last month, may have originated in a novel cosmic accelerator, the KM3NeT collaboration believes, or may even be the first detection of a “cosmogenic” neutrino.

“This event certainly comes as a surprise,” says KM3NeT spokesperson Paul de Jong (Nikhef). “Our measurement, converted into a flux, exceeds the limits set by IceCube and the Pierre Auger Observatory. If it is a statistical fluctuation, it would correspond to an upward fluctuation at the 2.2σ level. That is unlikely, but not impossible.” With an estimated energy of a remarkable 220 PeV, the neutrino observed by KM3NeT surpasses IceCube’s record by almost a factor of 30.

The existence of ultra-high-energy cosmic neutrinos has been theorised since the 1960s, when astrophysicists began to conceive of ways that extreme astrophysical environments could generate particles with very high energies. At about the same time, Arno Penzias and Robert Wilson discovered “cosmic microwave background” (CMB) photons emitted in the era of recombination, when the primordial plasma cooled down and the universe became electrically neutral. Cosmogenic neutrinos were soon hypothesised to result from ultra-high-energy cosmic rays interacting with the CMB. They are expected to have energies above 100 PeV (10¹⁷ eV); however, their abundance is uncertain as it depends on cosmic rays, whose sources are still cloaked in intrigue (CERN Courier July/August 2024 p24).
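The energy scale of cosmogenic neutrinos follows from simple kinematics: a proton can produce pions on a CMB photon only above the photopion threshold. A back-of-envelope sketch (the mean CMB photon energy and the ~5% neutrino energy fraction are standard rough values, not figures from this article):

```python
# Rough estimate of the photopion-production ("GZK") threshold:
# p + gamma_CMB -> Delta+ -> pions, for a head-on collision.
m_p = 0.938e9    # proton mass, eV
m_pi = 0.140e9   # charged-pion mass, eV
e_cmb = 6.3e-4   # mean CMB photon energy at T = 2.725 K, eV

# Threshold condition s = (m_p + m_pi)^2 gives, for head-on kinematics:
# E_th = m_pi * (2*m_p + m_pi) / (4 * e_cmb)
E_th = m_pi * (2 * m_p + m_pi) / (4 * e_cmb)
print(f"proton threshold ~ {E_th:.1e} eV")   # ~1e20 eV

# Each neutrino from the decay chain carries very roughly 5% of the
# proton energy, so cosmogenic neutrinos populate the EeV range, with
# the spectrum extending down towards ~100 PeV.
print(f"typical neutrino energy ~ {0.05 * E_th:.1e} eV")
```

The threshold comes out at around 10²⁰ eV, which is why cosmogenic neutrinos are tied so directly to the highest-energy cosmic rays.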

A window to extreme events

But how might they be detected? In this regard, neutrinos present a dichotomy: though outnumbered in the cosmos only by photons, they are notoriously elusive. However, it is precisely their weakly interacting nature that makes them ideal for investigating the most extreme regions of the universe. Cosmic neutrinos travel vast cosmic distances without being scattered or absorbed, providing a direct window into their origins, and enabling scientists to study phenomena such as black-hole jets and neutron-star mergers. Such extreme astrophysical sources test the limits of the Standard Model at energy scales many times higher than is possible in terrestrial particle accelerators.

Because they are so weakly interacting, studying cosmic neutrinos requires giant detectors. Today, three large-scale neutrino telescopes are in operation: IceCube, in Antarctica; KM3NeT, under construction deep in the Mediterranean Sea; and Baikal–GVD, under construction in Lake Baikal in southern Siberia. So far, IceCube, whose construction was completed over 10 years ago, has enabled significant advancements in cosmic-neutrino physics, including the first observation of the Glashow resonance, wherein a 6 PeV electron antineutrino interacts with an electron in the ice sheet to form an on-shell W boson, and the discovery of neutrinos emitted by “active galaxies” powered by a supermassive black hole accreting matter. The previous record-holder for the highest recorded neutrino energy, IceCube has also searched for cosmogenic neutrinos but has not yet observed neutrino candidates above 10 PeV.

Its new northern-hemisphere colleague, KM3NeT, consists of two subdetectors: ORCA, designed to study neutrino properties, and ARCA, which made this detection, designed to detect high-energy cosmic neutrinos and find their astronomical counterparts. Its deep-sea arrays of optical sensors detect Cherenkov light emitted by charged particles created when a neutrino interacts with a quark or electron in the water. At the time of the 2023 event, ARCA comprised 21 vertical detection units, each around 700 m in length. Its location 3.5 km deep under the sea reduces background noise, and its sparse layout over one cubic kilometre optimises the detector for neutrinos of higher energies.

The event that KM3NeT observed in 2023 is thought to be a single muon created by the charged-current interaction of an ultra-high-energy muon neutrino. The muon then crossed horizontally through the entire ARCA detector, emitting Cherenkov light that was picked up by a third of its active sensors. “If it entered the sea as a muon, it would have travelled some 300 km water-equivalent in water or rock, which is impossible,” explains de Jong. “It is most likely the result of a muon neutrino interacting with sea water some distance from the detector.”

The network will improve the chances of detecting new neutrino sources

The best estimate for the neutrino energy of 220 PeV hides substantial uncertainties, given the unknown interaction point and the need to correct for an undetected hadronic shower. The collaboration expects the true value to lie between 110 and 790 PeV with 68% confidence. “The neutrino energy spectrum is steeply falling, so there is a tug-of-war between two effects,” explains de Jong. “Low-energy neutrinos must give a relatively large fraction of their energy to the muon and interact close to the detector, but they are numerous; high-energy neutrinos can interact further away, and give a smaller fraction of their energy to the muon, but they are rare.”

More data is needed to understand the sources of ultra-high-energy neutrinos such as that observed by KM3NeT, where construction has continued in the two years since this remarkable early detection. So far, 33 of 230 ARCA detection units and 24 of 115 ORCA detection units have been installed. Once construction is complete, likely by the end of the decade, KM3NeT will be similar in size to IceCube.

“Once KM3NeT and Baikal–GVD are fully constructed, we will have three large-scale neutrino telescopes of about the same size in operation around the world,” adds Mauricio Bustamante, theoretical astroparticle physicist at the Niels Bohr Institute of the University of Copenhagen. “This expanded network will monitor the full sky with nearly equal sensitivity in any direction, improving the chances of detecting new neutrino sources, including faint ones in new regions of the sky.”

The post Cosmogenic candidate lights up KM3NeT appeared first on CERN Courier.

]]>
News Strings of photodetectors anchored to the seabed off the coast of Sicily have detected the most energetic neutrino ever observed, smashing previous records. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_NA_KM3NeT_feature.jpg
CERN gears up for tighter focusing https://cerncourier.com/a/cern-gears-up-for-tighter-focusing/ Mon, 24 Mar 2025 08:38:19 +0000 https://cerncourier.com/?p=112574 New quadrupole magnets for the High-Luminosity LHC will use Nb3Sn conductors for the first time in an accelerator.

The post CERN gears up for tighter focusing appeared first on CERN Courier.

]]>
When it comes online in 2030, the High-Luminosity LHC (HL-LHC) will feel like a new collider. The hearts of the ATLAS and CMS detectors, and 1.2 km of the 27 km-long Large Hadron Collider (LHC) ring will have been transplanted with cutting-edge technologies that will push searches for new physics into uncharted territory.

On the accelerator side, one of the most impactful upgrades will be the brand-new final focusing systems just before the proton or ion beams arrive at the interaction points. In the new “inner triplets”, particles will slalom towards collisions inside the detectors in more tightly focused and compact bunches than ever before.

To achieve the required focusing strength, the new quadrupole magnets will use Nb3Sn conductors for the first time in an accelerator. Nb3Sn will allow fields as high as 11.5 T, compared to 8.5 T for the conventional NbTi bending magnets used elsewhere in the LHC. As they are a new technology, an integrated test stand of the full 60 m-long inner-triplet assembly is essential – and work is now in full swing.

Learning opportunity

“The main challenge at this stage is the interconnections between the magnets, particularly the interfaces between the magnets and the cryogenic line,” explains Marta Bajko, who leads work on the inner-triplet-string test facility. “During this process, we have encountered nonconformities, out-of-tolerance components, and other difficulties – expected challenges given that these connections are being made for the first time. This phase is a learning opportunity for everyone involved, allowing us to refine the installation process.”

The last magnet – one of two built in the US – is expected to be installed in May. Before then, the so-called N lines, which enable the electrical connections between the different magnets, will be pulled through the entire magnet chain to prepare for splicing the cables together. Individual system tests and short-circuit tests have already been successfully performed and a novel alignment system developed for the HL-LHC is being installed on each magnet. Mechanical transfer function measurements of some magnets are ongoing, while electrical integrity tests in a helium environment have been successfully completed, along with the pressure and leak test of the superconducting link.

“Training the teams is at the core of our focus, as this setup provides the most comprehensive and realistic mock-up before the installations are to be done in the tunnel,” says Bajko. “The surface installation, located in a closed and easily accessible building near the teams’ workshops and laboratories, offers an invaluable opportunity for them to learn how to perform their tasks effectively. This training often takes place alongside other teams, under real installation constraints, allowing them to gain hands-on experience in a controlled yet authentic environment.”

The inner triplet string is composed of a separation and recombination dipole, a corrector-package assembly and a quadrupole triplet. The dipole combines the two counter-rotating beams into a single channel; the corrector package fine-tunes beam parameters; and the quadrupole triplet focuses the beam onto the interaction point.

Quadrupole triplets have been a staple of accelerator physics since they were first implemented in the early 1950s at synchrotrons such as the Brookhaven Cosmotron and CERN’s Proton Synchrotron. Quadrupole magnets are like lenses that are convex (focusing) in one transverse plane and concave (defocusing) in the other, transporting charged particles like beams of light on an optician’s bench. In a quadrupole triplet, the focusing plane alternates with each quadrupole magnet. The effect is to precisely focus the particle beams onto tight spots within the LHC experiments, maximising the number of particles that interact, and increasing the statistical power available to experimental analyses.
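The lens analogy can be made quantitative with the standard 2×2 transfer matrices of beam optics. A minimal thin-lens sketch (the focal lengths and drift lengths below are illustrative, not HL-LHC parameters) shows how an alternating focusing–defocusing–focusing triplet ends up net-focusing in both transverse planes:

```python
import numpy as np

def drift(L):
    # Transfer matrix of a field-free drift of length L (metres)
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    # Thin-lens quadrupole: focusing for f > 0, defocusing for f < 0
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# A symmetric F-D-F triplet with toy values; in the other transverse
# plane every quadrupole has the opposite sign.
f, L = 10.0, 2.0
horizontal = thin_quad(+f) @ drift(L) @ thin_quad(-f / 2) @ drift(L) @ thin_quad(+f)
vertical = thin_quad(-f) @ drift(L) @ thin_quad(+f / 2) @ drift(L) @ thin_quad(-f)

# The lower-left matrix element is -1/f_eff: a negative value means
# the triplet focuses. It is negative in BOTH planes.
print("1/f_eff horizontal:", -horizontal[1, 0])   # 0.032
print("1/f_eff vertical:  ", -vertical[1, 0])     # 0.048
```

Even though each individual quadrupole defocuses in one plane, the alternating arrangement yields a net positive focusing strength in both, which is the principle the inner triplet exploits at much higher field and precision.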

Nb3Sn is strategically important because it lays the foundation for future high-energy colliders

Though quadrupole triplets are a time-honoured technique, Nb3Sn brings new challenges. The HL-LHC magnets are the first accelerator magnets to be built at lengths of up to 7 m, and the technical teams at CERN and in the US collaboration – each of which is responsible for half the total “cold mass” production – have decided to produce two variants, primarily driven by differences in available production and testing infrastructure.

Since 2011, engineers and accelerator physicists have been hard at work designing and testing the new magnets and their associated powering, vacuum, alignment, cryogenic, cooling and protection systems. Each component of the HL-LHC will be individually tested before installation in the LHC tunnel. However, this is only half the story, as all components must be integrated and operated within the machine, where they will share a common electrical and cooling circuit. Throughout the rest of 2025, the inner-triplet string will test the integration of all these components, evaluating their collective behaviour in preparation for hardware commissioning and nominal operation.

“We aim to replicate the operational processes of the inner-triplet string using the same tools planned for the HL-LHC machine,” says Bajko. “The control systems and software packages are in an advanced stage of development, prepared through extensive collaboration across CERN, involving three departments and nine equipment groups. The inner-triplet-string team is coordinating these efforts and testing them as if operating from the control room – launching tests in short-circuit mode and verifying system performance to provide feedback to the technical teams and software developers. The test programme has been integrated into a sequencer, and testing procedures are being approved by the relevant stakeholders.”

Return on investment

While Nb3Sn offers significant advantages over NbTi, manufacturing magnets with it presents several challenges. It requires high-temperature heat treatment after winding, and is brittle and fragile, making it more difficult to handle than the ductile NbTi. As the HL-LHC Nb3Sn magnets operate at higher current and energy densities, quench protection is more challenging, and the possibility of a sudden loss of superconductivity requires a faster and more robust protection system.

The R&D required to meet these challenges will provide returns long into the future, says Susana Izquierdo Bermudez, who is responsible at CERN for the new HL-LHC magnets.

“CERN’s investment in R&D for Nb3Sn is strategically important because it lays the foundation for future high-energy colliders. Its increased field strength is crucial for enabling more powerful focusing and bending magnets, allowing for higher beam energies and more compact accelerator designs. This R&D also strengthens CERN’s expertise in advanced superconducting materials and technology, benefitting applications in medical imaging, energy systems and industrial technologies.”

The inner-triplet string will remain an installation on the surface at CERN and is expected to operate until early 2027. Four identical assemblies will be installed underground in the LHC tunnel from 2028 to 2029, during Long Shutdown 3. They will be located 20 m away on either side of the ATLAS and CMS interaction points.

The post CERN gears up for tighter focusing appeared first on CERN Courier.

]]>
News New quadrupole magnets for the High-Luminosity LHC will use Nb3Sn conductors for the first time in an accelerator. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_NA_corrector.jpg
How to unfold with AI https://cerncourier.com/a/how-to-unfold-with-ai/ Mon, 27 Jan 2025 08:00:50 +0000 https://cerncourier.com/?p=112161 Inspired by high-dimensional data and the ideals of open science, high-energy physicists are using artificial intelligence to reimagine the statistical technique of ‘unfolding’.

The post How to unfold with AI appeared first on CERN Courier.

]]>
Open-science unfolding

All scientific measurements are affected by the limitations of measuring devices. To make a fair comparison between data and a scientific hypothesis, theoretical predictions must typically be smeared to approximate the known distortions of the detector. Data is then compared with theory at the level of the detector’s response. This works well for targeted measurements, but the detector simulation must be reapplied to the underlying physics model for every new hypothesis.

The alternative is to try to remove detector distortions from the data, and compare with theoretical predictions at the level of the theory. Once detector effects have been “unfolded” from the data, analysts can test any number of hypotheses without having to resimulate or re-estimate detector effects – a huge advantage for open science and data preservation that allows comparisons between datasets from different detectors. Physicists without access to the smearing functions can only use unfolded data.

No simple task

But unfolding detector distortions is no simple task. If the mathematical problem is solved through a straightforward inversion, using linear algebra, noisy fluctuations are amplified, resulting in large uncertainties. Some sort of “regularisation” must be imposed to smooth the fluctuations, but algorithms vary substantively and none is preeminent. Their scope has remained limited for decades. No traditional algorithm is capable of reliably unfolding detector distortions from data relative to more than a few observables at a time.

In the past few years, a new technique has emerged. Rather than unfolding detector effects from only one or two observables, it can unfold detector effects from multiple observables in a high-dimensional space; and rather than unfolding detector effects from binned histograms, it unfolds detector effects from an unbinned distribution of events. This technique is inspired by both artificial-intelligence techniques and the uniquely sparse and high-dimensional data sets of the LHC.

An ill-posed problem

Unfolding is used in many fields. Astronomers unfold point-spread functions to reveal true sky distributions. Medical physicists unfold detector distortions from CT and MRI scans. Geophysicists use unfolding to infer the Earth’s internal structure from seismic-wave data. Economists attempt to unfold the true distribution of opinions from incomplete survey samples. Engineers use deconvolution methods for noise reduction in signal processing. But in recent decades, no field has had a greater need to innovate unfolding techniques than high-energy physics, given its complex detectors, sparse datasets and stringent standards for statistical rigour.

In traditional unfolding algorithms, analysers first choose which quantity they are interested in measuring. An event generator then creates a histogram of the true values of this observable for a large sample of events in their detector. Next, a Monte Carlo simulation simulates the detector response, accounting for noise, background modelling, acceptance effects, reconstruction errors, misidentification errors and energy smearing. A matrix is constructed that transforms the histogram of the true values of the observable into the histogram of detector-level events. Finally, analysts “invert” the matrix and apply it to data, to unfold detector effects from the measurement.
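The pitfall of the final “inversion” step is easy to demonstrate with toy numbers (all values below are illustrative, not from any experiment): the response matrix is exactly invertible, yet applying the inverse to a single statistical fluctuation of the data rather than to its expectation produces amplified, bin-to-bin anticorrelated noise.

```python
import numpy as np
rng = np.random.default_rng(1)

# Toy "true" spectrum in 4 bins and a toy response matrix:
# R[i, j] = P(reconstructed in bin i | true in bin j).
truth = np.array([100., 80., 50., 20.])
R = np.array([[0.8, 0.15, 0.0, 0.0],
              [0.2, 0.7, 0.2, 0.0],
              [0.0, 0.15, 0.7, 0.25],
              [0.0, 0.0, 0.1, 0.75]])   # columns sum to 1 (no losses)

expected = R @ truth            # detector-level expectation
data = rng.poisson(expected)    # one Poisson-fluctuated "measurement"

# Naive unfolding: invert the response matrix and apply it to the data.
# On the noise-free expectation this recovers the truth exactly, but on
# fluctuated data the noise is amplified in the estimate.
naive = np.linalg.inv(R) @ data
print("truth:   ", truth)
print("unfolded:", naive.round(1))
```

With only four well-conditioned bins the effect is mild; with many bins and strong smearing the inverse becomes badly conditioned, which is exactly what motivates regularisation.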

How to unfold traditionally

Diverse algorithms have been invented to unfold distortions from data, with none yet achieving preeminence.

• Developed by Soviet mathematician Andrey Tikhonov in the late 1940s, Tikhonov regularisation (TR) frames unfolding as a minimisation problem with a penalty term added to suppress fluctuations in the solution.

• In the 1950s, the physicist Edwin Jaynes, a pioneer of statistical mechanics, took inspiration from information theory to seek solutions with maximum entropy, minimising bias beyond the data constraints.

• Between the 1960s and the 1990s, high-energy physicists increasingly drew on the linear algebra of 19th-century mathematicians Eugenio Beltrami and Camille Jordan to develop singular value decomposition as a pragmatic way to suppress noisy fluctuations.

• In the 1990s, Giulio D’Agostini and other high-energy physicists developed iterative Bayesian unfolding (IBU) – a similar technique to Lucy–Richardson deconvolution, which was developed independently in astronomy in the 1970s. An explicitly probabilistic approach well suited to complex detectors, IBU may be considered a forerunner of the neural-network-based technique described in this article.

IBU and TR are the most widely-used approaches in high-energy physics today, with the RooUnfold tool started by Tim Adye serving countless analysts.

At this point in the analysis, the ill-posed nature of the problem presents a major challenge. A simple matrix inversion seldom suffices as statistical noise produces large changes in the estimated input. Several algorithms have been proposed to regularise these fluctuations. Each comes with caveats and constraints, and there is no consensus on a single method that outperforms the rest (see “How to unfold traditionally” panel).
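As a concrete example of a regularised method, iterative Bayesian unfolding can be sketched in a few lines (toy response matrix and data, assuming unit efficiency, i.e. response-matrix columns summing to one; in practice the iteration count acts as the regularisation strength, with early stopping suppressing noise):

```python
import numpy as np

# Toy response matrix R[i, j] = P(reco bin i | true bin j) and a
# toy detector-level measurement; all values are illustrative.
R = np.array([[0.8, 0.2],
              [0.2, 0.8]])
data = np.array([120., 80.])

def ibu(R, data, n_iter=4):
    """Iterative Bayesian unfolding with a uniform prior.

    Assumes unit efficiency (each column of R sums to 1)."""
    t = np.full(R.shape[1], data.sum() / R.shape[1])  # flat prior truth
    for _ in range(n_iter):
        folded = R @ t                  # current detector-level prediction
        # Bayes' theorem: redistribute the data back to truth bins in
        # proportion to how much each truth bin feeds each reco bin.
        t = t * (R.T @ (data / folded))
    return t

print(ibu(R, data))   # approaches the exact solution of R t = data
```

Each iteration folds the current truth estimate through the detector, compares with data and updates; running it to convergence reproduces the unregularised inversion, so analysts stop after a few iterations to trade a small bias for much smaller variance.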

While these approaches have been successfully applied to thousands of measurements at the LHC and beyond, they have limitations. Histogramming is an efficient way to describe the distributions of one or two observables, but the number of bins grows exponentially with the number of parameters, restricting the number of observables that can be simultaneously unfolded. When unfolding only a few observables, model dependence can creep in, for example due to acceptance effects, and if another scientist wants to change the bin sizes or measure a different observable, they will have to redo the entire process.

New possibilities

AI opens up new possibilities for unfolding particle-physics data. Choosing good parameterisations in a high-dimensional space is difficult for humans, and binning is a way to limit the number of degrees of freedom in the problem, making it more tractable. Machine learning (ML) offers flexibility due to the large number of parameters in a deep neural network. Dozens of observables can be unfolded at once, and unfolded datasets can be published as an unbinned collection of individual events that have been corrected for detector distortions as an ensemble.

Unfolding performance

One way to represent the result is as a set of simulated events with weights that encode information from the data. For example, if there are 10 times as many simulated events as real events, the average weight would be about 0.1, with the distribution of weights correcting the simulation to match reality, and errors on the weights reflecting the uncertainties inherent in the unfolding process. This approach gives maximum flexibility to future analysts, who can recombine them into any binning or combination they desire. The weights can be used to build histograms or compute statistics. The full covariance matrix can also be extracted from the weights, which is important for downstream fits.
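A downstream analyst's workflow with such a weighted, unbinned result can be sketched as follows (the events and weight function here are synthetic stand-ins for a published unfolded sample, not real analysis outputs):

```python
import numpy as np
rng = np.random.default_rng(7)

# Stand-in for a published unbinned unfolding result: simulated events
# plus per-event weights learnt from data (toy values throughout).
n_sim = 100_000
events = rng.exponential(scale=50.0, size=n_sim)             # a pT-like observable
weights = 0.1 * (1.0 + 0.2 * np.tanh((events - 50) / 20))    # toy learnt weights

# Any future analyst can choose their own binning after the fact:
bins = np.linspace(0, 200, 21)
hist, _ = np.histogram(events, bins=bins, weights=weights)

# Per-bin statistical uncertainty from the weights (sqrt of sum of w^2);
# the full covariance would come from the ensemble of unfolding variations.
err = np.sqrt(np.histogram(events, bins=bins, weights=weights**2)[0])
print(hist[:5])
print(err[:5])
```

The same weighted events can be rebinned, combined into new observables or fed into fits without ever rerunning the unfolding.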

But how do we know the unfolded values are capturing the truth, and not just “hallucinations” from the AI model?

An important validation step for these analyses are tests performed on synthetic data with a known answer. Analysts take new simulation models, different from the one being used for the primary analysis, and treat them as if they were real data. By unfolding these alternative simulations, researchers are able to compare their results to a known answer. If the biases are large, analysts will need to refine their methods to reduce the model-dependency. If the biases are small compared to the other uncertainties then this remaining difference can be added into the total uncertainty estimate, which is calculated in the traditional way using hundreds of simulations. In unfolding problems, the choice of regularisation method and strength always involves some tradeoff between bias and variance.

Just as unfolding in two dimensions instead of one with traditional methods can reduce model dependence by incorporating more aspects of the detector response, ML methods use the same underlying principle to include as much of the detector response as possible. Learning differences between data and simulation in high-dimensional spaces is the kind of task that ML excels at, and the results are competitive with established methods (see “Better performance” figure).

Neural learning

In the past few years, AI techniques have proven to be useful in practice, yielding publications from the LHC experiments, the H1 experiment at HERA and the STAR experiment at RHIC. The key idea underpinning the strategies used in each of these results is to use neural networks to learn a function that can reweight simulated events to look like data. The neural network is given a list of relevant features about an event such as the masses, energies and momenta of reconstructed objects, and trained to output the probability that it is from a Monte Carlo simulation or the data itself. Neural connections that reweight and combine the inputs across multiple layers are iteratively adjusted depending on the network’s performance. The network thereby learns the relative densities of the simulation and data throughout phase space. The ratio of these densities is used to transform the simulated distribution into one that more closely resembles real events (see “OmniFold” figure).
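The reweighting step at the heart of this strategy can be illustrated in one dimension with a hand-rolled logistic-regression classifier (a drastic simplification of the deep networks used in practice; the Gaussian "data" and "simulation" samples are entirely synthetic). The classifier's output probability p gives the density ratio data/simulation via p/(1 − p), which becomes the per-event weight:

```python
import numpy as np
rng = np.random.default_rng(0)

# Toy 1D samples: "simulation" and "data" are Gaussians with shifted means.
sim = rng.normal(0.0, 1.0, 20_000)
data = rng.normal(0.3, 1.0, 20_000)

# Train a tiny logistic-regression classifier (features: 1, x, x^2)
# to separate data (label 1) from simulation (label 0).
X = np.concatenate([sim, data])
y = np.concatenate([np.zeros_like(sim), np.ones_like(data)])
F = np.column_stack([np.ones_like(X), X, X**2])

w = np.zeros(3)
for _ in range(2000):                       # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-F @ w))
    w -= 0.1 * F.T @ (p - y) / len(y)

# The learnt probability encodes the density ratio: weight = p / (1 - p).
F_sim = np.column_stack([np.ones_like(sim), sim, sim**2])
p_sim = 1.0 / (1.0 + np.exp(-F_sim @ w))
weights = p_sim / (1.0 - p_sim)

print("sim mean before reweighting:", round(sim.mean(), 3))
print("sim mean after reweighting: ", round(np.average(sim, weights=weights), 3))
```

After reweighting, the simulated sample's mean moves from near 0 to near the data's 0.3, showing how a classifier trained only to distinguish the two samples implicitly learns the correction that morphs one distribution into the other.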

Illustration of AI unfolding using the OmniFold algorithm

As this is a recently developed technique, there are plenty of opportunities for new developments and improvements. These strategies are in principle capable of handling significant levels of background subtraction as well as acceptance and efficiency effects, but existing LHC measurements using AI-based unfolding generally have small backgrounds. And as with traditional methods, there is a risk in trying to estimate too many parameters from not enough data. This is typically controlled by stopping the training of the neural network early, combining multiple trainings into a single result, and performing cross-validation on different subsets of the data.

Beyond the “OmniFold” methods we are developing, an active community is also working on alternative techniques, including ones based on generative AI. Researchers are also considering creative new ways to use these unfolded results that aren’t possible with traditional methods. One possibility in development is unfolding not just a selection of observables, but the full event. Another intriguing direction could be to generate new events with the corrections learnt by the network built-in. At present, the result of the unfolding is a reweighted set of simulated events, but once the neural network has been trained, its reweighting function could be used to simulate the unfolded sample from scratch, simplifying the output.

The post How to unfold with AI appeared first on CERN Courier.

]]>
Feature Inspired by high-dimensional data and the ideals of open science, high-energy physicists are using artificial intelligence to reimagine the statistical technique of ‘unfolding’. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_AI_feature.jpg
CERN and ESA: a decade of innovation https://cerncourier.com/a/cern-and-esa-a-decade-of-innovation/ Mon, 27 Jan 2025 07:59:01 +0000 https://cerncourier.com/?p=112108 Enrico Chesta, Véronique Ferlet-Cavrois and Markus Brugger highlight seven ways CERN and ESA are working together to further fundamental exploration and innovation in space technologies.

The post CERN and ESA: a decade of innovation appeared first on CERN Courier.

]]>
Sky maps

Particle accelerators and spacecraft both operate in harsh radiation environments, extreme temperatures and high vacuum. Each must process large amounts of data quickly and autonomously. Much can be gained from cooperation between scientists and engineers in each field.

Ten years ago, the European Space Agency (ESA) and CERN signed a bilateral cooperation agreement to share expertise and facilities. The goal was to expand the limits of human knowledge and keep Europe at the leading edge of progress, innovation and growth. A decade on, CERN and ESA have collaborated on projects ranging from cosmology and planetary exploration to Earth observation and human spaceflight, supporting new space-tech ventures and developing electronic systems, radiation-monitoring instruments and irradiation facilities.

1. Mapping the universe

The Euclid space telescope is exploring the dark universe by mapping the large-scale structure of billions of galaxies out to 10 billion light-years across more than a third of the sky. With tens of petabytes expected in its final data set – already a substantial reduction of the 850 billion bits of compressed images Euclid processes each day – it will generate more data than any other ESA mission by far.

With many CERN cosmologists involved in testing theories of beyond-the-Standard-Model physics, Euclid first became a CERN-recognised experiment in 2015. CERN also contributes to the development of Euclid’s “science ground segment” (SGS), which processes raw data received from the Euclid spacecraft into usable scientific products such as galaxy catalogues and dark-matter maps. CERN’s virtual-machine file system (CernVM-FS) has been integrated into the SGS to allow continuous software deployment across Euclid’s nine data centres and on developers’ laptops.

The telescope was launched in July 2023 and began observations in February 2024. The first piece of its great map of the universe was released in October 2024, showing millions of stars and galaxies across 132 square degrees of the southern sky (see “Sky map” figure). Based on just two weeks of observations, it accounts for just 1% of the project’s six-year survey, which will be the largest cosmic map ever made.

Future CERN–ESA collaborations on cosmology, astrophysics and multimessenger astronomy are likely to include the Laser Interferometer Space Antenna (LISA) and the NewAthena X-ray observatory. LISA will be the first space-based observatory to study gravitational waves. NewAthena will study the most energetic phenomena in the universe. Both projects are expected to be ready to launch about 10 years from now.

2. Planetary exploration

Though planetary exploration is conceptually far from fundamental physics, its technical demands require similar expertise. A good example is the Jupiter Icy Moons Explorer (JUICE) mission, which will make detailed observations of the gas giant and its three large ocean-bearing moons Ganymede, Callisto and Europa.

Jupiter’s magnetic field is a million times greater in volume than Earth’s magnetosphere, trapping large fluxes of highly energetic electrons and protons. Before JUICE, the direct and indirect impact of high-energy electrons on modern electronic devices – in particular their ability to cause “single-event effects” – had never been studied. Two test campaigns took place in the VESPER facility, which is part of the CERN Linear Electron Accelerator for Research (CLEAR) project. Components were tested with tuneable beam energies between 60 and 200 MeV, and average fluxes of roughly 10⁸ electrons per square centimetre per second, mirroring expected radiation levels in the Jovian system.

JUICE radiation-monitor measurements

JUICE was successfully launched in April 2023, starting an epic eight-year journey to Jupiter including several flyby manoeuvres that will be used to commission the onboard instruments (see “Flyby” figure). JUICE should reach Jupiter in July 2031. It remains to be seen whether test results obtained at CERN have successfully de-risked the mission.

Another interesting example of cooperation on planetary exploration is the Mars Sample Return mission, which must operate in low temperatures during eclipse phases. CERN supported the main industrial partner, Thales Alenia Space, in qualifying the orbiter’s thermal-protection systems in cryogenic conditions.

3. Earth observation

Earth observation from orbit has applications ranging from environmental monitoring to weather forecasting. CERN and ESA collaborate both on developing the advanced technologies these applications require and on ensuring they can operate in the harsh radiation environment of space.

In 2017 and 2018, ESA teams came to CERN’s North Area with several partner companies to test the performance of radiation monitors, field-programmable gate arrays (FPGAs) and electronics chips in ultra-high-energy ion beams at the Super Proton Synchrotron. The tests mimicked the ultra-high-energy part of the galactic cosmic-ray spectrum, whose effects had never previously been measured on the ground beyond 10 GeV/nucleon. In 2017, ESA’s standard radiation-environment monitor and several FPGAs and multiprocessor chips were tested with xenon ions. In 2018, the highlight of the campaign was the testing of Intel’s Myriad-2 artificial intelligence (AI) chip with lead ions (see “Space AI” figure). Following its radiation characterisation and qualification, in 2020 the chip embarked on the φ-sat-1 mission to autonomously detect clouds using images from a hyperspectral camera.

Myriad 2 chip testing

More recently, CERN joined Edge SpAIce – an EU project to monitor ecosystems and track plastic pollution in the oceans from onboard the Balkan-1 satellite. The project will use CERN’s high-level synthesis for machine learning (hls4ml) AI technology to run inference models on an FPGA that will be launched in 2025.

Looking further ahead, ESA’s φ-lab and CERN’s Quantum Technology Initiative are sponsoring two PhD programmes to study the potential of quantum machine learning, generative models and time-series processing to advance Earth observation. Applications may accelerate the task of extracting features from images to monitor natural disasters, deforestation and the impact of environmental effects on the lifecycle of crops.

4. Dosimetry for human spaceflight

In space, nothing is more important than astronauts’ safety and wellbeing. To this end, in August 2021 ESA astronaut Thomas Pesquet activated the LUMINA experiment inside the International Space Station (ISS), as part of the ALPHA mission (see “Space dosimetry” figure). Developed under the coordination of the French Space Agency and the Laboratoire Hubert Curien at the Université Jean-Monnet-Saint-Étienne and iXblue, LUMINA uses two several-kilometre-long phosphorus-doped optical fibres as active dosimeters to measure ionising radiation aboard the ISS.

ESA astronaut Thomas Pesquet

When exposed to radiation, optical fibres experience a partial loss of transmitted power. Using a reference control channel, the radiation-induced attenuation can be accurately measured and related to the total ionising dose, with the sensitivity of the device primarily governed by the length of the fibre. Having studied optical-fibre-based technologies for many years, CERN helped optimise the architecture of the dosimeters and performed irradiation tests to calibrate the instrument, which will operate on the ISS for a period of up to five years.
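
The measurement principle can be sketched in a few lines: compare the power transmitted through the sensing fibre with the reference channel, convert the ratio to attenuation in decibels, and map that attenuation to dose through a calibration coefficient. The numbers and the linear dose model below are illustrative assumptions, not LUMINA calibration values.

```python
import math

def radiation_induced_attenuation_db(p_meas_mw: float, p_ref_mw: float) -> float:
    """Attenuation of the sensing fibre relative to the control channel, in dB.

    Comparing against a reference channel cancels source drift and connector
    losses, isolating the radiation-induced loss."""
    return 10.0 * math.log10(p_ref_mw / p_meas_mw)

def dose_from_attenuation(ria_db: float, fibre_length_km: float,
                          sensitivity_db_per_km_per_gy: float) -> float:
    """Total ionising dose (Gy), assuming attenuation grows linearly with
    dose and fibre length -- a first-order model; real phosphosilicate
    fibres need a measured calibration curve."""
    return ria_db / (fibre_length_km * sensitivity_db_per_km_per_gy)

# A longer fibre accumulates more attenuation per unit dose, which is why
# sensitivity is governed primarily by fibre length.
ria = radiation_induced_attenuation_db(p_meas_mw=0.8, p_ref_mw=1.0)
dose = dose_from_attenuation(ria, fibre_length_km=2.0,
                             sensitivity_db_per_km_per_gy=4.0)
```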

LUMINA complements dosimetry measurements performed on the ISS using CERN’s Timepix technology – an offshoot of the hybrid-pixel-detector technology developed for the LHC experiments (CERN Courier September/October 2024 p37). Timepix dosimeters have been integrated in multiple NASA payloads since 2012.

5. Radiation-hardness assurance

It’s no mean feat to ensure that CERN’s accelerator infrastructure functions in increasingly challenging radiation environments. Similar challenges are found in space. Damage can be caused by accumulating ionising doses, single-event effects (SEEs) or so-called displacement damage dose, which dislodges atoms within a material’s crystal lattice rather than ionising them. Radiation-hardness assurance (RHA) reduces radiation-induced failures in space through environment simulations, part selection and testing, radiation-tolerant design, worst-case analysis and shielding definition.

Since its creation in 2008, CERN’s Radiation to Electronics project has amplified the work of many equipment and service groups in modelling, mitigating and testing the effect of radiation on electronics. A decade later, joint test campaigns with ESA demonstrated the value of CERN’s facilities and expertise to RHA for spaceflight. This led to the signing in 2019 of a joint protocol covering radiation environments, technologies and facilities, as well as radiation detectors, radiation-tolerant systems and components, and simulation tools.

CHARM facility

Among CERN’s facilities is CHARM: the CERN high-energy-accelerator mixed-field facility, which offers an innovative approach to low-cost RHA. CHARM’s radiation field is generated by the interaction between a 24 GeV/c beam from the Proton Synchrotron and a metallic target. CHARM offers a uniquely wide spectrum of radiation types and energies, the possibility to adjust the environment using mobile shielding, and enough space to test a medium-sized satellite in full operating conditions.

Radiation testing is particularly challenging for the new generation of rapidly developed and often privately funded “new space” projects, which frequently make use of commercial off-the-shelf (COTS) components. Here, RHA relies on testing and mitigation rather than radiation hardening by design. For “flip chip” configurations, which have their active circuitry facing inward toward the substrate, and for dense three-dimensional structures that cannot be directly exposed without compromising their performance, heavy-ion beams accelerated to between 10 and 100 MeV/nucleon are the only way to induce SEEs in the sensitive semiconductor volumes of the devices.

To enable testing of highly integrated electronic components, ESA supported studies to develop the CHARM heavy ions for micro-electronics reliability-assurance facility – CHIMERA for short (see “CHIMERA” figure). ESA has sponsored key feasibility activities such as: tuning the ion flux in a large dynamic range; tuning the beam size for board-level testing; and reducing beam energy to maximise the frequency of SEE while maintaining a penetration depth of a few millimetres in silicon.

6. In-orbit demonstrators

Weighing 1 kg and measuring just 10 cm on each side – a nanosatellite standard – the CELESTA satellite was designed to study the effects of cosmic radiation on electronics (see “CubeSat” figure). Initiated in partnership with the University of Montpellier and ESA, and launched in July 2022, CELESTA was CERN’s first in-orbit technology demonstrator.

Radiation-testing model of the CELESTA satellite

As well as providing the first opportunity for CHARM to test a full satellite, CELESTA offered the opportunity to flight-qualify SpaceRadMon, which counts single-event upsets (SEUs) and single-event latchups (SELs) in static random-access memory while using a field-effect transistor for dose monitoring. (SEUs are temporary errors caused by a high-energy particle flipping a bit and SELs are short circuits induced by high-energy particles.) More than 30 students contributed to the mission development, partially in the frame of ESA’s Fly Your Satellite Programme. Built from COTS components calibrated in CHARM, SpaceRadMon has since been adopted by other ESA missions such as Trisat and GENA-OT, and could be used in the future as a low-cost predictive maintenance tool to reduce space debris and improve space sustainability.
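
The read–write–compare principle behind SRAM-based monitors of this kind can be illustrated with a toy sketch (not SpaceRadMon’s actual firmware): write a known pattern into memory, read it back after exposure, and count the bits that differ.

```python
def count_seus(written: bytes, read_back: bytes) -> int:
    """Count single-event upsets: bits that differ between the test pattern
    written to the SRAM and the pattern read back after exposure."""
    return sum(bin(w ^ r).count("1") for w, r in zip(written, read_back))

pattern = bytes([0x55] * 4)              # alternating 01010101 test pattern
after = bytes([0x55, 0x54, 0x55, 0xD5])  # two bits flipped during exposure
n_upsets = count_seus(pattern, after)    # 2
```

Latch-ups, by contrast, are detected as supply-current spikes rather than bit flips, so they need current monitoring rather than a memory readback.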

The maiden flight of the Vega-C launcher placed CELESTA on an atypical quasi-circular medium-Earth orbit in the middle of the inner Van Allen proton belt at roughly 6000 km. Two months of flight data sufficed to validate the performance of the payload and the ground-testing procedure in CHARM, though CELESTA will fly for thousands of years in a region of space where debris is not a problem due to the harsh radiation environment.

The CELESTA approach has since been adopted by industrial partners to develop radiation-tolerant cameras, radios and on-board computers.

7. Stimulating the space economy

Space technology is a fast-growing industry replete with opportunities for public–private cooperation. The global space economy will be worth $1.8 trillion by 2035, according to the World Economic Forum – up from $630 billion in 2023 and growing at double the projected rate for global GDP.
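
The quoted growth rate can be checked with a one-line compound-annual-growth-rate calculation, using the World Economic Forum figures cited above:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# $630 billion in 2023 to $1.8 trillion in 2035: roughly 9% per year
space_growth = cagr(630e9, 1.8e12, 2035 - 2023)
```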

ESA and CERN look to support start-up companies and high-tech ventures – whether spun off from space exploration or particle physics – in bringing to market technologies with positive societal and economic impacts (see “Spin offs” figure). The use of CERN’s Timepix technology in space missions is a prime example. The private company Advacam collaborated with the Czech Technical University to provide a Timepix-based radiation-monitoring payload called SATRAM for ESA’s Proba-V mission, which maps land cover and vegetation growth across the entire planet every two days.

The Hannover Messe fair

Advacam is now testing a pixel-detector instrument on JoeySat – an ESA-sponsored technology demonstrator for OneWeb’s next-generation constellation of satellites designed to expand global connectivity. Advacam is also working with ESA on radiation monitors for Space Rider and NASA’s Lunar Gateway. Space Rider is a reusable spacecraft whose maiden voyage is scheduled for the coming years, and Lunar Gateway is a planned space station in lunar orbit that could act as a staging post for Mars exploration.

Another promising example is SigmaLabs – a Polish startup founded by CERN alumni specialising in radiation detectors and predictive-maintenance R&D for space applications. SigmaLabs was recently selected by ESA and the Polish Space Agency to provide one of the experiments expected to fly on Axiom Mission 4 – a private spaceflight to the ISS in 2025 that will include Polish astronaut and CERN engineer Sławosz Uznański (CERN Courier May/June 2024 p55). The experiment will assess the scalability and versatility of the SpaceRadMon radiation-monitoring technology initially developed at CERN for the LHC and flight tested on the CELESTA CubeSat.

In radiation-hardness assurance, the CHIMERA facility is associated with the High-Energy Accelerators for Radiation Testing and Shielding (HEARTS) programme sponsored by the European Commission. Its 2024 pilot user run is already stimulating private innovation, with high-energy heavy ions used to perform business-critical research on electronic components for a dozen aerospace companies.

The post CERN and ESA: a decade of innovation appeared first on CERN Courier.

]]>
Feature Enrico Chesta, Véronique Ferlet-Cavrois and Markus Brugger highlight seven ways CERN and ESA are working together to further fundamental exploration and innovation in space technologies. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_CERNandESA_pesquet.jpg
A word with CERN’s next Director-General https://cerncourier.com/a/a-word-with-cerns-next-director-general/ Mon, 27 Jan 2025 07:56:07 +0000 https://cerncourier.com/?p=112181 Mark Thomson, CERN's Director General designate for 2025, talks to the Courier about the future of particle physics.

The post A word with CERN’s next Director-General appeared first on CERN Courier.

]]>
Mark Thomson

What motivates you to be CERN’s next Director-General?

CERN is an incredibly important organisation. I believe my deep passion for particle physics, coupled with the experience I have accumulated in recent years – including leading the Deep Underground Neutrino Experiment, DUNE, through a formative phase, and running the Science and Technology Facilities Council in the UK – has equipped me with the right skill set to lead CERN through a particularly important period.

How would you describe your management style?

That’s a good question. My overarching approach is built around delegating and trusting my team. This has two advantages. First, it builds an empowering culture, which in my experience provides the right environment for people to thrive. Second, it frees me up to focus on strategic planning and engagement with numerous key stakeholders. I like to focus on transparency and openness, to build trust both internally and externally.

How will you spend your familiarisation year before you take over in 2026?

First, by getting a deep understanding of CERN “from within”, to plan how I want to approach my mandate. Second, by lending my voice to the scientific discussion that will underpin the third update to the European strategy for particle physics. The European strategy process is a key opportunity for the particle-physics community to provide genuine bottom-up input and shape the future. This is going to be a really varied and exciting year.

What open question in fundamental physics would you most like to see answered in your lifetime?

I am going to have to pick two. I would really like to understand the nature of dark matter. There are a wide range of possibilities, and we are addressing this question from multiple angles; the search for dark matter is an area where the collider and non-collider experiments can both contribute enormously. The second question is the nature of the Higgs field. The Higgs boson is just so different from anything else we’ve ever seen. It’s not just unique – it’s unique and very strange. There are just so many deep questions, such as whether it is fundamental or composite. I am confident that we will make progress in the coming years. I believe the High-Luminosity LHC will be able to make meaningful measurements of the self-coupling at the heart of the Higgs potential. If you’d asked me five years ago whether this was possible, I would have been doubtful. But today I am very optimistic because of the rapid progress with advanced analysis techniques being developed by the brilliant scientists on the LHC experiments.

What areas of R&D are most in need of innovation to meet our science goals?

Artificial intelligence is changing how we look at data in all areas of science. Particle physics is the ideal testing ground for artificial intelligence, because our data is complex and there are none of the issues around the sensitive nature of the data that exist in other fields. Complex multidimensional datasets are where you’ll benefit the most from artificial intelligence. I’m also excited by the emergence of new quantum technologies, which will open up fresh opportunities for our detector systems and also new ways of doing experiments in fundamental physics. We’ve only scratched the surface of what can be achieved with entangled quantum systems.

How about in accelerator R&D?

There are two areas that I would like to highlight: making our current technologies more sustainable, and the development of high-field magnets based on high-temperature superconductivity. This connects to the question of innovation more broadly. To quote one example among many, high-temperature superconducting magnets are likely to be an important component of fusion reactors just as much as particle accelerators, making this a very exciting area where CERN can deploy its engineering expertise and really push that programme forward. That’s not just a benefit for particle physics, but a benefit for wider society.

How has CERN changed since you were a fellow back in 1994?

The biggest change is that the collider experiments are larger and more complex, and the scientific and technical skills required have become more specialised. When I first came to CERN, I worked on the OPAL experiment at LEP – a collaboration of less than 400 people. Everybody knew everybody, and it was relatively easy to understand the science of the whole experiment.

But I don’t think the scientific culture of CERN and the particle-physics community has changed much. When I visit CERN and meet with the younger scientists, I see the same levels of excitement and enthusiasm. People are driven by the wonderful mission of discovery. When planning the future, we need to ensure that early-career researchers can see a clear way forward with opportunities in all periods of their career. This is essential for the long-term health of particle physics. Today we have an amazing machine that’s running beautifully: the LHC. I also don’t think it is possible to overstate the excitement of the High-Luminosity LHC. So there’s a clear and exciting future out to the early 2040s for today’s early-career researchers. The question is what happens beyond that? This is one reason to ensure that there is not a large gap between the end of the High-Luminosity LHC and the start of whatever comes next.

Should the world be aligning on a single project?

Given the increasing scale of investment, we do have to focus as a global community, but that doesn’t necessarily mean a single project. We saw something similar about 10 years ago when the global neutrino community decided to focus its efforts on two complementary long-baseline projects, DUNE and Hyper-Kamiokande. From the perspective of today’s European strategy, the Future Circular Collider (FCC) is an extremely appealing project that would map out an exciting future for CERN for many decades. I think we’ll see this come through strongly in an open and science-driven European strategy process.

How do you see the scientific case for the FCC?

For me, there are two key points. First, gaining a deep understanding of the Higgs boson is the natural next step in our field. We have discovered something truly unique, and we should now explore its properties to gain deeper insights into fundamental physics. Scientifically, the FCC provides everything you want from a Higgs factory, both in terms of luminosity and the opportunity to support multiple experiments.

Second, investment in the FCC tunnel will provide a route to hadron–hadron collisions at the 100 TeV scale. I find it difficult to foresee a future where we will not want this capability.

These two aspects make the FCC a very attractive proposition.

How successful do you believe particle physics is in communicating science and societal impacts to the public and to policymakers?

I think we communicate science well. After all, we’ve got a great story. People get the idea that we work to understand the universe at its most basic level. It’s a simple and profound message.

Going beyond the science, the way we communicate the wider industrial and societal impact is probably equally important. Here we also have a good story. In our experiments we are always pushing beyond the limits of current technology, doing things that have not been done before. The technologies we develop to do this almost always find their way back into something that will have wider applications. Of course, when we start, we don’t know what the impact will be. That’s the strength and beauty of pushing the boundaries of technology for science.

Would the FCC give a strong return on investment to the member states?

Absolutely. Part of the return is the science, part is the investment in technology, and we should not underestimate the importance of the training opportunities for young people across Europe. CERN provides such an amazing and inspiring environment for young people. The scale of the FCC will provide a huge number of opportunities for young scientists and engineers.

In terms of technology development, the detectors for the electron–positron collider will provide an opportunity for pushing forward and deploying new, advanced technologies to deliver the precision required for the science programme. In parallel, the development of the magnet technologies for the future hadron collider will be really exciting, particularly the potential use of high-temperature superconductors, as I said before.

It is always difficult to predict the specific “return on investment” of the technologies developed for big scientific research infrastructure. Part of the challenge is that some of the benefits might be 20, 30 or 40 years down the line. Nevertheless, every retrospective that has tried has demonstrated a huge downstream benefit.

Do we reward technical innovation well enough in high-energy physics?

There needs to be a bit of a culture shift within our community. Engineering and technology innovation are critical to the future of science and critical to the prosperity of Europe. We should be striving to reward individuals working in these areas.

Should the field make it more flexible for physicists and engineers to work in industry and return to the field having worked there?

This is an important question. I actually think things are changing. The fluidity between academia and industry is increasing in both directions. For example, an early-career researcher in particle physics with a background in deep artificial-intelligence techniques is valued incredibly highly by industry. It also works the other way around, and I experienced this myself in my career when one of my post-doctoral researchers joined from an industry background after a PhD in particle physics. The software skills they picked up from industry were incredibly impactful.

I don’t think there is much we need to do to directly increase flexibility – it’s more about culture change, to recognise that fluidity between industry and academia is important and beneficial. Career trajectories are evolving across many sectors. People move around much more than they did in the past.

Does CERN have a future as a global laboratory?

CERN already is a global laboratory. The amazing range of nationalities working here is both inspiring and a huge benefit to CERN.

How can we open up opportunities in low- and middle-income countries?

I am really passionate about the importance of diversity in all its forms and this includes national and regional inclusivity. It is an agenda that I pursued in my last two positions. At the Deep Underground Neutrino Experiment, I was really keen to engage the scientific community from Latin America, and I believe this has been mutually beneficial. At STFC, we used physics as a way to provide opportunities for people across Africa to gain high-tech skills. Going beyond the training, one of the challenges is to ensure that people use these skills in their home nations. Otherwise, you’re not really helping low- and middle-income countries to develop.

What message would you like to leave with readers?

That we have really only just started the LHC programme. With more than a factor of 10 increase in data to come, coupled with new data tools and upgraded detectors, the High-Luminosity LHC represents a major opportunity for a new discovery. Its nature could be a complete surprise. That’s the whole point of exploring the unknown: you don’t know what’s out there. This alone is incredibly exciting, and it is just a part of CERN’s amazing future.

The post A word with CERN’s next Director-General appeared first on CERN Courier.

]]>
Opinion Mark Thomson, CERN's Director General designate for 2025, talks to the Courier about the future of particle physics. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_INT_thompson_feature.jpg
The other 99% https://cerncourier.com/a/the-other-99/ Mon, 27 Jan 2025 07:44:48 +0000 https://cerncourier.com/?p=112146 Daniel Tapia Takaki describes how ultraperipheral collisions mediated by high-energy photons are shedding light on gluon saturation, gluonic hotspots and nuclear shadowing.

The post The other 99% appeared first on CERN Courier.

]]>
Quarks contribute less than 1% to the mass of protons and neutrons. This provokes an astonishing question: where does the other 99% of the mass of the visible universe come from? The answer lies in the gluon, and how it interacts with itself to bind quarks together inside hadrons.

Much remains to be understood about gluon dynamics. At present, the chief experimental challenge is to observe the onset of gluon saturation – a dynamic equilibrium between gluon splitting and recombination predicted by QCD. The experimental key looks likely to be a rare but intriguing type of LHC interaction known as an ultraperipheral collision (UPC), and the breakthrough may come as soon as the next experimental run.

Gluon saturation is expected to end the rapid growth in gluon density measured at the HERA electron–proton collider at DESY in the 1990s and 2000s. HERA observed this growth as the energy of interactions increased and as the fraction of the proton’s momentum borne by the gluons (Bjorken x) decreased.

So gluons become more numerous in hadrons as their energy decreases – but to what end?

Nonlinear effects are expected to arise due to processes like gluon recombination, wherein two gluons combine to become one. When gluon recombination becomes a significant factor in QCD dynamics, gluon saturation sets in – an emergent phenomenon whose energy scale is a critical parameter to determine experimentally. At this scale, gluons begin to act like classical fields and gluon density plateaus. A dilute partonic picture transitions to a dense, saturated state. For recombination to take precedence over splitting, gluon momenta must be very small, corresponding to low values of Bjorken x. The saturation scale should also be directly proportional to the colour-charge density, making heavy nuclei like lead ideal for studying nonlinear QCD phenomena.
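
The competition between splitting and recombination can be caricatured by a toy evolution equation, dN/dY = λN − cN² with Y = ln(1/x): the linear term drives the rapid growth seen at HERA, while the quadratic recombination term halts it at the saturation value N = λ/c. This is a qualitative illustration with made-up parameters, not a QCD calculation.

```python
def evolve_gluon_density(n0: float, y_max: float, lam: float = 0.3,
                         rec: float = 0.01, dy: float = 0.01) -> float:
    """Euler-integrate the toy equation dN/dY = lam*N - rec*N**2.

    Splitting (lam*N) wins while the system is dilute; recombination
    (rec*N**2) takes over as gluons overlap, and N plateaus at lam/rec."""
    n = n0
    for _ in range(int(y_max / dy)):
        n += (lam * n - rec * n * n) * dy
    return n

dilute = evolve_gluon_density(1.0, y_max=5.0)       # still growing rapidly
saturated = evolve_gluon_density(1.0, y_max=100.0)  # plateaus near lam/rec = 30
```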

But despite strong theoretical reasoning and tantalising experimental hints, direct evidence for gluon saturation remains elusive.

Since the conclusion of the HERA programme, the quest to explore gluon saturation has shifted focus to the LHC. But with no point-like electron to probe the hadronic target, LHC physicists had to find a new point-like probe: light itself. UPCs at the LHC exploit the flux of quasi-real high-energy photons generated by ultra-relativistic particles. For heavy ions like lead, this flux of photons is enhanced by the square of the nuclear charge, enabling studies of photon-proton (γp) and photon-nucleus interactions at centre-of-mass energies reaching the TeV scale.

Keeping it clean

What really sets UPCs apart is their clean environment. UPCs occur at large impact parameters well outside the range of the strong nuclear force, allowing the nuclei to remain intact. Unlike hadronic collisions, which can produce thousands of particles, UPCs often involve only a few final-state particles, for example a single J/ψ, providing an ideal laboratory for gluon saturation. J/ψ are produced when a cc̄ pair created by two or more gluons from one nucleus is brought on-shell by interacting with a quasi-real photon from the other nucleus (see “Sensitivity to saturation” figure).

Power-law observation

Gluon saturation models predict deviations in the γp → J/ψp cross section from the power-law behaviour observed at HERA. The LHC experiments are placing a significant focus on investigating the energy dependence of this process to identify potential signatures of saturation, with ALICE and LHCb extending studies to higher γp centre-of-mass energies (Wγp) and lower Bjorken x than HERA. The results so far reveal that the cross-section continues to increase with energy, consistent with the power-law trend (see “Approaching the plateau?” figure).
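
A power-law energy dependence σ(W) = A·W^δ is linear in log–log space, so a simple least-squares fit on the logs recovers the exponent; a genuine saturation signal would appear as high-energy points falling below such a fit. The data and exponent below are synthetic and purely illustrative.

```python
import math

def fit_power_law(w_values, sigma_values):
    """Least-squares fit of sigma = A * W**delta in log-log space.

    log(sigma) = log(A) + delta*log(W) is linear, so ordinary least
    squares on the logs recovers (A, delta)."""
    xs = [math.log(w) for w in w_values]
    ys = [math.log(s) for s in sigma_values]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    delta = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    a = math.exp(mean_y - delta * mean_x)
    return a, delta

# synthetic cross-sections following sigma = 2.0 * W**0.7 (illustrative)
w_gamma_p = [30.0, 100.0, 300.0, 700.0]
sigma = [2.0 * w ** 0.7 for w in w_gamma_p]
a_fit, delta_fit = fit_power_law(w_gamma_p, sigma)
```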

The symmetric nature of pp collisions introduces significant challenges. In pp collisions, either proton can act as the photon source, leading to an intrinsic ambiguity in identifying the photon emitter. In proton–lead (pPb) collisions, the lead nucleus overwhelmingly dominates photon emission, eliminating this ambiguity. This makes pPb collisions an ideal environment for precise studies of the photoproduction of J/ψ by protons.

During LHC Run 1, the ALICE experiment probed Wγp up to 706 GeV in pPb collisions, more than doubling HERA’s maximum reach of 300 GeV. This translates to probing Bjorken-x values as low as 10⁻⁵, significantly beyond the regime explored at HERA. LHCb took a different approach. The collaboration inferred the behaviour of pp collisions at high energies (“W+ solutions”) by assuming knowledge of their energy dependence at low energies (“W- solutions”), allowing LHCb to probe Bjorken-x values as small as 10⁻⁶ and Wγp up to 2 TeV.

There is not yet any theoretical consensus on whether LHC data align with gluon-saturation predictions, and the measurements remain statistically limited, leaving room for further exploration. Theoretical challenges include incomplete next-to-leading-order calculations and the reliance of some models on fits to HERA data. Progress will depend on robust and model-independent calculations and high-quality UPC data from pPb collisions in LHC Run 3 and Run 4.

Some models predict a slowing increase in the γp → J/ψp cross section with energy at small Bjorken x. If these models are correct, gluon saturation will likely be discovered in LHC Run 4, where we expect to see a clear observation of whether pPb data deviate from the power law observed so far.

Gluonic hotspots

If a UPC photon interacts with the collective colour field of a nucleus – coherent scattering – it probes its overall distribution of gluons. If a UPC photon interacts with individual nucleons or smaller sub-nucleonic structures – incoherent scattering – it can probe smaller-scale gluon fluctuations.

Simulations of the transverse density of gluons in protons

These fluctuations, known as gluonic hotspots, are theorised to become more numerous and overlap in the regime of gluon saturation (see “Onset of saturation” figure). Now being probed with unprecedented precision at the LHC, they are central to understanding the high-energy regime of QCD.

Gluonic hotspots are used to model the internal transverse structure of colliding protons or nuclei (see “Hotspot snapshots” figure). The saturation scale is inherently impact-parameter dependent: colour charge is most densely concentrated at the core of the proton or nucleus and diminishes toward the periphery, though subject to fluctuations. Researchers are increasingly interested in exploring how these fluctuations depend on the impact parameter of collisions to better characterise the spatial dynamics of colour charge. Future analyses will pinpoint contributions from localised hotspots where saturation effects are most likely to be observed.

The energy dependence of incoherent or dissociative photoproduction promises a clear signature for gluon saturation, independent of the coherent power-law method described above. As saturation sets in, all gluon configurations in the target converge to similar densities, causing the variance of the gluon field to decrease, and with it the dissociative cross section. Detecting a peak and a decline in the incoherent cross-section as a function of energy would represent a clear signature of gluon saturation.
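In the Good–Walker picture that underlies this argument, the dissociative cross section is proportional to the event-by-event variance of the scattering amplitude over target configurations. The toy Monte Carlo below (all parameters invented purely for illustration) shows the expected rise and fall: as the mean hotspot density grows, an eikonalised amplitude approaches the black-disc limit for every configuration, and the variance collapses.

```python
import math
import random

def toy_variance(mean_hotspots, n_events=20000, seed=1):
    """Variance of an eikonalised amplitude A = 1 - exp(-omega), where the
    opacity omega fluctuates with the number of hotspots hit per event.
    Illustrative only: Poisson hotspot number, fixed opacity 0.3 each."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplicative algorithm; fine for the moderate means here
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    amps = [1.0 - math.exp(-0.3 * poisson(mean_hotspots))
            for _ in range(n_events)]
    mean = sum(amps) / n_events
    return sum((a - mean) ** 2 for a in amps) / n_events

for density in (0.5, 2.0, 8.0, 30.0):
    print(f"mean hotspots = {density:4.1f} -> "
          f"amplitude variance = {toy_variance(density):.2e}")
```

The variance first grows with density, peaks, then falls by orders of magnitude once every configuration is "black" — the peak-and-decline signature described in the text.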

Simulations of the transverse density of gluons in lead nuclei

The ALICE collaboration has taken significant steps in exploring this quantum terrain, demonstrating the possibility of studying different geometrical configurations of quantum fluctuations in processes where protons or lead nucleons dissociate. The results highlight a striking correlation between momentum transfer, which is inversely proportional to the impact parameter, and the size of the target structure. The observation that sub-nucleonic structures impart the greatest momentum transfer is compelling evidence for gluonic quantum fluctuations at the sub-nucleon level.

Into the shadows

In 1982 the European Muon Collaboration observed an intriguing phenomenon: nuclei appeared to contain fewer gluons than expected based on the contributions from their individual protons and neutrons. This effect, known as nuclear shadowing, was observed in experiments conducted at CERN at moderate values of Bjorken x. It is now known to occur because the interaction of a probe with one gluon reduces the likelihood of the probe interacting with other gluons within the nucleus – the gluons hiding behind them, in their shadow, so to speak. At smaller values of Bjorken x, saturation further suppresses the number of gluons contributing to the interaction.

Nuclear suppression factor for lead relative to protons

The relationship between gluon saturation and nuclear shadowing is poorly understood, and separating their effects remains an open challenge. The situation is further complicated by an experimental reliance on lead–lead (PbPb) collisions, which, like pp collisions, suffer from ambiguity in identifying the interacting nucleus, unless the interaction is accompanied by an ejected neutron.

The ALICE, CMS and LHCb experiments have extensively studied nuclear shadowing via the exclusive production of vector mesons such as J/ψ in ultraperipheral PbPb collisions. Results span photon–nucleus collision energies from 10 to 1000 GeV. The onset of nuclear shadowing, or another nonlinear QCD phenomenon like saturation, is clearly visible as a function of energy and Bjorken x (see “Nuclear shadowing” figure).

Multidimensional maps

While both saturation-based and gluon shadowing models describe the data reasonably well at high energies, neither framework captures the observed trends across the entire kinematic range. Future efforts must go beyond energy dependence by being differential in momentum transfer and studying a range of vector mesons with complementary sensitivities to the saturation scale.

Soon to be constructed at Brookhaven National Laboratory, the Electron-Ion Collider (EIC) promises to transform our understanding of gluonic matter. Designed specifically for QCD research, the EIC will probe gluon saturation and shadowing in unprecedented detail, using a broad array of reactions, collision species and energy levels. By providing a multidimensional map of gluonic behaviour, the EIC will address fundamental questions such as the origin of mass and nuclear spin.

ALICE’s high-granularity forward calorimeter

Before then, a tenfold increase in PbPb statistics in LHC Runs 3 and 4 will allow a transformative leap in low Bjorken-x physics. Though the LHC was not originally designed for such studies, its unparalleled energy reach and diverse range of colliding systems offer unique opportunities to explore gluon dynamics at the highest energies.

Enhanced capabilities

Surpassing the gains from increased luminosity alone, ALICE’s new triggerless detector readout mode will offer a vast improvement over previous runs, which were constrained by dedicated triggers and bandwidth limitations. Subdetector upgrades will also play an important role. The muon forward tracker has already enhanced ALICE’s capabilities, and the high-granularity forward calorimeter set to be installed in time for Run 4 is specifically designed to improve sensitivity to small Bjorken-x physics (see “Saturation specific” figure).

Ultraperipheral-collision physics at the LHC is far more than a technical exploration of QCD. Gluons govern the structure of all visible matter. Saturation, hotspots and shadowing shed light on the origin of 99% of the mass of the visible universe. 

The post The other 99% appeared first on CERN Courier.

]]>
Feature Daniel Tapia Takaki describes how ultraperipheral collisions mediated by high-energy photons are shedding light on gluon saturation, gluonic hotspots and nuclear shadowing. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_GLUON_frontis.jpg
Charm and synthesis https://cerncourier.com/a/charm-and-synthesis/ Mon, 27 Jan 2025 07:43:29 +0000 https://cerncourier.com/?p=112128 Sheldon Glashow recalls the events surrounding a remarkable decade of model building and discovery between 1964 and 1974.

The post Charm and synthesis appeared first on CERN Courier.

]]>
In 1955, after a year of graduate study at Harvard, I joined a group of a dozen or so students committed to studying elementary particle theory. We approached Julian Schwinger, one of the founders of quantum electrodynamics, hoping to become his thesis students – and we all did.

Schwinger lined us up in his office, and spent several hours assigning thesis subjects. It was a remarkable performance. I was the last in line. Having run out of well-defined thesis problems, he explained to me that weak and electromagnetic interactions share two remarkable features: both are vectorial and both display aspects of universality. Schwinger suggested that I create a unified theory of the two interactions – an electroweak synthesis. How I was to do this he did not say, aside from slyly hinting at the Yang–Mills gauge theory.

By the summer of 1958, I had convinced myself that weak and electromagnetic interactions might be described by a badly broken gauge theory, and Schwinger that I deserved a PhD. I had hoped to spend part of a postdoctoral fellowship in Moscow at the invitation of the recent Russian Nobel laureate Igor Tamm, and sought to visit Niels Bohr’s institute in Copenhagen while awaiting my Soviet visa. With Bohr’s enthusiastic consent, I boarded the SS Île de France with my friend Jack Schnepps. Following a memorable and luxurious crossing – one of the great ship’s last – Jack drove south to work with Milla Baldo-Ceolin’s emulsion group in Padova, and I took the slow train north to Copenhagen. Thankfully, my Soviet visa never arrived. I found the SU(2) × U(1) structure of the electroweak model in the spring of 1960 at Bohr’s famous institute at Blegdamsvej 19, and wrote the paper that would earn my share of the 1979 Nobel Prize.

We called the new quark flavour charm, completing two weak doublets of quarks to match two weak doublets of leptons, and establishing lepton–quark symmetry, which holds to this day

A year earlier, in 1959, Augusto Gamba, Bob Marshak and Susumu Okubo had proposed lepton–hadron symmetry, which regarded protons, neutrons and lambda hyperons as the building blocks of all hadrons, to match the three known leptons at the time: neutrinos, electrons and muons. The idea was falsified by the discovery of a second neutrino in 1962, and superseded in 1964 by the invention of fractionally charged hadron constituents, first by George Zweig and André Petermann, and then decisively by Murray Gell-Mann with his three flavours of quarks. Later in 1964, while on sabbatical in Copenhagen, James Bjorken and I realised that lepton–hadron symmetry could be revived simply by adding a fourth quark flavour to Gell-Mann’s three. We called the new quark flavour “charm”, completing two weak doublets of quarks to match two weak doublets of leptons, and establishing lepton–quark symmetry, which holds to this day.

Annus mirabilis

1964 was a remarkable year. In addition to the invention of quarks, Nick Samios spotted the triply strange Ω baryon, and Oscar Greenberg devised what became the critical notion of colour. Arno Penzias and Robert Wilson stumbled on the cosmic microwave background radiation. James Cronin, Val Fitch and others discovered CP violation. Robert Brout, François Englert, Peter Higgs and others invented spontaneously broken non-Abelian gauge theories. And to top off the year, Abdus Salam rediscovered and published my SU(2) × U(1) model, after I had more-or-less abandoned electroweak thoughts due to four seemingly intractable problems.

Four intractable problems of early 1964

How could the W and Z bosons acquire masses while leaving the photon massless?

Steven Weinberg, my friend from both high-school and college, brilliantly solved this problem in 1967 by subjecting the electroweak gauge group to spontaneous symmetry breaking, initiating the half-century-long search for the Higgs boson. Salam published the same solution in 1968.

How could an electroweak model of leptons be extended to describe the weak interactions of hadrons?

John Iliopoulos, Luciano Maiani and I solved this problem in 1970 by introducing charm and quark-lepton symmetry to avoid unobserved strangeness-changing neutral currents.

Was the spontaneously broken electroweak gauge model mathematically consistent?

Gerard ’t Hooft announced in 1971 that he had proven Steven Weinberg’s electroweak model to be renormalisable. In 1972, Claude Bouchiat, John Iliopoulos and Philippe Meyer demonstrated the electroweak model to be free of Adler anomalies provided that lepton–quark symmetry is maintained.

Could the electroweak model describe CP violation without invoking additional spinless fields?

In 1973, Makoto Kobayashi and Toshihide Maskawa showed that the electroweak model could easily and naturally violate CP if there are more than four quark flavours.

Much to my surprise and delight, all of them would be solved within just a few years, with the last theoretical obstacle removed by Makoto Kobayashi and Toshihide Maskawa in 1973 (see “Four intractable problems” panel). A few months later, Paul Musset announced that CERN’s Gargamelle detector had won the race to detect weak neutral-current interactions, giving the electroweak model the status of a predictive theory. Remarkably, the year had begun with Gell-Mann, Harald Fritzsch and Heinrich Leutwyler proposing QCD, and David Gross, Frank Wilczek and David Politzer showing it to be asymptotically free. The Standard Model of particle physics was born.

Charmed findings

But where were the charmed quarks? Early on Monday morning on 11 November 1974, I was awakened by a phone call from Sam Ting, who asked me to come to his MIT office as soon as possible. He and Ulrich Becker were waiting for me impatiently. They showed me an amazingly sharp resonance. Could it be a vector meson like the ρ or ω and be so narrow, or was it something quite different? I hopped in my car and drove to Harvard, where my colleagues Alvaro de Rújula and Howard Georgi excitedly regaled me about the Californian side of the story. A few days later, experimenters in Frascati confirmed the BNL–SLAC discovery, and de Rújula and I submitted our paper “Is Bound Charm Found?” – one of two papers on the J/ψ discovery printed in Physical Review Letters in January 1975 that would prove to be correct. Among five false papers was one written by my beloved mentor, Julian Schwinger.

Sam Ting at CERN in 1976

The second correct paper was by Tom Appelquist and David Politzer. Well before that November, they had realised (without publishing) that bound states of a charmed quark and its antiquark lying below the charm threshold would be exceptionally narrow due to the asymptotic freedom of QCD. De Rújula suggested to them that such a system be called charmonium in an analogy with positronium. His term made it into the dictionary. Shortly afterward, the 1976 Nobel Prize in Physics was jointly awarded to Burton Richter and Sam Ting for “their pioneering work in the discovery of a heavy elementary particle of a new kind” – evidence that charm was not yet a universally accepted explanation. Over the next few years, experimenters worked hard to confirm the predictions of theorists at Harvard and Cornell by detecting and measuring the masses, spins and transitions among the eight sub-threshold charmonium states. Later on, they would do the same for 14 relatively narrow states of bottomonium.

Abdus Salam, Tom Ball and Paul Musset

Other experimenters were searching for particles containing just one charmed quark or antiquark. In our 1975 paper “Hadron Masses in a Gauge Theory”, de Rújula, Georgi and I included predictions of the masses of several not-yet-discovered charmed mesons and baryons. The first claim to have detected charmed particles was made in 1975 by Robert Palmer and Nick Samios at Brookhaven, again with a bubble-chamber event. It seemed to show a cascade decay process in which one charmed baryon decays into another charmed baryon, which itself decays. The measured masses of both of the charmed baryons were in excellent agreement with our predictions. Though the claim was not widely accepted, I believe to this day that Samios and Palmer were the first to detect charmed particles.

Sheldon Glashow and Steven Weinberg

The SLAC electron–positron collider, operating well above charm threshold, was certainly producing charmed particles copiously. Why were they not being detected? I recall attending a conference in Wisconsin that was largely dedicated to this question. On the flight home, I met my old friend Gerson Goldhaber, who had been struggling unsuccessfully to find them. I think I convinced him to try a bit harder. A couple of weeks later in 1976, Goldhaber and François Pierre succeeded. My role in charm physics had come to a happy ending. 

  • This article is adapted from a presentation given at the Institute of High-Energy Physics in Beijing on 20 October 2024 to celebrate the 50th anniversary of the discovery of the J/ψ.

The post Charm and synthesis appeared first on CERN Courier.

]]>
Feature Sheldon Glashow recalls the events surrounding a remarkable decade of model building and discovery between 1964 and 1974. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_GLASHOW_lectures.jpg
Muon cooling kickoff at Fermilab https://cerncourier.com/a/muon-cooling-kickoff-at-fermilab/ Mon, 27 Jan 2025 07:27:55 +0000 https://cerncourier.com/?p=112324 The first of a new series of workshops to discuss the future of beam-cooling technology for a muon collider.

The post Muon cooling kickoff at Fermilab appeared first on CERN Courier.

]]>
More than 100 accelerator scientists, engineers and particle physicists gathered in person and remotely at Fermilab from 30 October to 1 November for the first of a new series of workshops to discuss the future of beam-cooling technology for a muon collider. High-energy muon colliders offer a unique combination of discovery potential and precision. Unlike protons, muons are point-like particles that can achieve comparable physics outcomes at lower centre-of-mass energies. The large mass of the muon also suppresses synchrotron radiation, making muon colliders promising candidates for exploration at the energy frontier.

The International Muon Collider Collaboration (IMCC), supported by the EU MuCol study, is working to assess the potential of a muon collider as a future facility, along with the R&D needed to make it a reality. European engagement in this effort crystallised following the 2020 update to the European Strategy for Particle Physics (ESPPU), which identified the development of bright muon beams as a high-priority initiative. Worldwide interest in a muon collider is quickly growing: the 2023 Particle Physics Project Prioritization Panel (P5) recently identified it as an important future possibility for the US particle-physics community; Japanese colleagues have proposed a muon-collider concept, muTRISTAN (CERN Courier July/August 2024 p8); and Chinese colleagues have actively contributed to IMCC efforts as collaboration members.

Lighting the way

The workshop focused on reviewing the scope and design progress of a muon-cooling demonstrator facility, identifying potential host sites and timelines, and exploring science programmes that could be developed alongside it. Diktys Stratakis (Fermilab) began by reviewing the requirements and challenges of muon cooling. Delivering a high-brightness muon beam will be essential to achieving the luminosity needed for a muon collider. The technique proposed for this is ionisation cooling, wherein the phase-space volume of the muon beam decreases as it traverses a sequence of cells, each containing an energy-absorbing material and accelerating radiofrequency (RF) cavities.
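The competition at the heart of ionisation cooling — energy loss in the absorber damping the transverse emittance while multiple scattering heats it — is captured by the standard transverse cooling equation, dεn/ds = −(1/β²)(dE/ds)(εn/E) + β⊥Es²/(2β³E mμc² X₀), which relaxes the normalised emittance towards an equilibrium value. A numerical sketch with illustrative liquid-hydrogen parameters (all values assumed for illustration, not taken from the demonstrator design):

```python
import math

# Illustrative parameters (assumed, not from the demonstrator design)
E_S = 13.6        # multiple-scattering constant, MeV
M_MU = 105.7      # muon mass, MeV/c^2
X0 = 8.9          # radiation length of liquid hydrogen, m
DEDS = 28.0       # ionisation energy loss in liquid hydrogen, MeV/m
BETA_PERP = 0.4   # transverse beta function at the absorber, m
P = 200.0         # reference muon momentum, MeV/c

E = math.hypot(P, M_MU)   # total energy, MeV
beta = P / E              # muon velocity (v/c)

def d_emittance(eps):
    """Rate of change of normalised transverse emittance, m per m of absorber."""
    cooling = -(1.0 / beta**2) * DEDS * eps / E
    heating = BETA_PERP * E_S**2 / (2.0 * beta**3 * E * M_MU * X0)
    return cooling + heating

# Equilibrium emittance where cooling and heating balance
eps_eq = BETA_PERP * E_S**2 / (2.0 * beta * M_MU * X0 * DEDS)

# Euler integration through absorber (RF assumed to restore the lost energy)
eps, ds = 5e-3, 0.01
for _ in range(int(40.0 / ds)):   # 40 m of idealised absorber
    eps += d_emittance(eps) * ds

print(f"equilibrium ~ {eps_eq*1e3:.1f} mm rad; after 40 m: {eps*1e3:.2f} mm rad")
```

With these assumed numbers the equilibrium emittance comes out at the millimetre scale, the right ballpark for ionisation-cooling channels; beyond it, no further transverse cooling is possible without reducing β⊥ or choosing a better absorber.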

Roberto Losito (CERN) called for a careful balance between ambition and practicality – the programme must be executed in a timely way if a muon collider is to be a viable next-generation facility. The Muon Cooling Demonstrator programme was conceived to prove that this technology can be developed, built and reliably operated. This is a critical step for any muon-collider programme, as highlighted in the ESPPU–LDG Accelerator R&D Roadmap published in 2022. The plan is to pursue a staged approach, starting with the development of the magnet, RF and absorber technology, and demonstrating the robust operation of high-gradient RF cavities in high magnetic fields. The components will then be integrated into a prototype cooling cell. The programme will conclude with a demonstration of the operation of a multi-cell cooling system with a beam, building on the cooling proof of principle made by the Muon Ionisation Cooling Experiment.

Chris Rogers (STFC RAL) summarised an emerging consensus that it is critical to demonstrate the reliable operation of a cooling lattice formed of multiple cells. While the technological complexity of the cooling-cell prototype will undergo further review, the preliminary choice presents a moderately challenging performance that could be achieved within five to seven years with reasonable investment. The target cooling performance of a whole cooling lattice remains to be established and depends on future funding levels. However, delegates agreed that a timely demonstration is more important than an ambitious cooling target.

Worldwide interest in a muon collider is quickly growing

The workshop also provided an opportunity to assess progress in designing the cooling-cell prototype. Given that the muon beam originates from hadron decays and is initially the size of a watermelon, solenoid magnets were chosen as they can contain large beams in a compact lattice and provide focusing in both horizontal and vertical planes simultaneously. Marco Statera (INFN LASA) presented preliminary solutions for the solenoid coil configuration based on high-temperature superconductors operating at 20 K: the challenge is to deliver the target magnetic field profile given axial forces, coil stresses and compact integration.

In ionisation cooling, low-Z absorbers are used to reduce the transverse momenta of the muons while keeping the multiple scattering at manageable levels. Candidate materials are lithium hydride and liquid hydrogen. Chris Rogers discussed the need to test absorbers and containment windows at the highest intensities. The potential for performance tests using muons or intensity tests using another particle species such as protons was considered to verify understanding of the collective interaction between the beam and the absorber. RF cavities are required to replace longitudinal energy lost in the absorbers. Dario Giove (INFN LASA) introduced the prototype of an RF structure based on three coupled 704 MHz cavities and presented a proposal to use existing INFN capabilities to carry out a test programme for materials and cavities in magnetic fields. The use of cavity windows was also discussed, as it would enable greater accelerating gradients, though at the cost of beam degradation, increased thermal loads and possible cavity detuning. The first steps in integrating these latest hardware designs into a compact cooling cell were presented by Lucio Rossi (INFN LASA and UMIL). Future work needs to address the management of the axial forces and cryogenic heat loads, Rossi observed.

Many institutes presented a strong interest in contributing to the programme, both in the hardware R&D and hosting the eventual demonstrator. The final sessions of the workshop focused on potential host laboratories.

The event underscored the critical need for sustained innovation, timely implementation and global cooperation

At CERN, two potential sites were discussed, with ongoing studies focusing on the TT7 tunnel, where a moderate-power 10 kW proton beam from the Proton Synchrotron could be used for muon production. Preliminary beam physics studies of muon beam production and transport are already underway. Lukasz Krzempek (CERN) and Paul Jurj (Imperial College London) presented the first integration and beam-physics studies of the demonstrator facility in the TT7 tunnel, highlighting civil engineering and beamline design requirements, logistical challenges and safety considerations, finding no apparent showstoppers.

Jeff Eldred (Fermilab) gave an overview of Fermilab’s broad range of candidate sites and proton-beam energies. While further feasibility studies are required, Eldred highlighted that using 8 GeV protons from the Booster is an attractive option due to the favourable existing infrastructure and its alignment with Fermilab’s muon-collider scenario, which envisions a proton driver based on the same Booster proton energy.

The Fermilab workshop represented a significant milestone in advancing the Muon Cooling Demonstrator, highlighting enthusiasm from the US community to join forces with the IMCC and growing interest in Asia. As Mark Palmer (BNL) observed in his closing remarks, the event underscored the critical need for sustained innovation, timely implementation and global cooperation to make the muon collider a reality.

The post Muon cooling kickoff at Fermilab appeared first on CERN Courier.

]]>
Meeting report The first of a new series of workshops to discuss the future of beam-cooling technology for a muon collider. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_FN_cool.jpg
CLOUD explains Amazon aerosols https://cerncourier.com/a/cloud-explains-amazon-aerosols/ Mon, 27 Jan 2025 07:26:49 +0000 https://cerncourier.com/?p=112200 The CLOUD collaboration at CERN has revealed a new source of atmospheric aerosol particles that could help scientists to refine climate models.

The post CLOUD explains Amazon aerosols appeared first on CERN Courier.

]]>
In a paper published in the journal Nature, the CLOUD collaboration at CERN has revealed a new source of atmospheric aerosol particles that could help scientists to refine climate models.

Aerosols are microscopic particles suspended in the atmosphere that arise from both natural sources and human activities. They play an important role in Earth’s climate system because they seed clouds and influence their reflectivity and coverage. Most aerosols arise from the spontaneous condensation of molecules that are present in the atmosphere only in minute concentrations. However, the vapours responsible for their formation are not well understood, particularly in the remote upper troposphere.

The CLOUD (Cosmics Leaving Outdoor Droplets) experiment at CERN is designed to investigate the formation and growth of atmospheric aerosol particles in a controlled laboratory environment. CLOUD comprises a 26 m³ ultra-clean chamber and a suite of advanced instruments that continuously analyse its contents. The chamber contains a precisely selected mixture of gases under atmospheric conditions, into which beams of charged pions are fired from CERN’s Proton Synchrotron to mimic the influence of galactic cosmic rays.

“Large concentrations of aerosol particles have been observed high over the Amazon rainforest for the past 20 years, but their source has remained a puzzle until now,” says CLOUD spokesperson Jasper Kirkby. “Our latest study shows that the source is isoprene emitted by the rainforest and lofted in deep convective clouds to high altitudes, where it is oxidised to form highly condensable vapours. Isoprene represents a vast source of biogenic particles in both the present-day and pre-industrial atmospheres that is currently missing in atmospheric chemistry and climate models.”

Isoprene is a hydrocarbon containing five carbon atoms and eight hydrogen atoms. It is emitted by broad-leaved trees and other vegetation and is the most abundant non-methane hydrocarbon released into the atmosphere. Until now, isoprene’s ability to form new particles has been considered negligible.

Seeding clouds

The CLOUD results change this picture. By studying the reaction of hydroxyl radicals with isoprene at upper tropospheric temperatures of –30 °C and –50 °C, the collaboration discovered that isoprene oxidation products form copious particles at ambient isoprene concentrations. This new source of aerosol particles does not require any additional vapours. However, when minute concentrations of sulphuric acid or iodine oxoacids were introduced into the CLOUD chamber, a 100-fold increase in aerosol formation rate was observed. Although sulphuric acid derives mainly from anthropogenic sulphur dioxide emissions, the acid concentrations used in CLOUD can also arise from natural sources.

In addition, the team found that isoprene oxidation products drive rapid growth of particles to sizes at which they can seed clouds and influence the climate – a behaviour that persists in the presence of nitrogen oxides produced by lightning at upper-tropospheric concentrations. After continued growth and descent to lower altitudes, these particles may provide a globally important source for seeding shallow continental and marine clouds, which influence Earth’s radiative balance – the amount of incoming solar radiation compared to outgoing longwave radiation (see “Seeding clouds” figure).

“This new source of biogenic particles in the upper troposphere may impact estimates of Earth’s climate sensitivity, since it implies that more aerosol particles were produced in the pristine pre-industrial atmosphere than previously thought,” adds Kirkby. “However, until our findings have been evaluated in global climate models, it’s not possible to quantify the effect.”

The CLOUD findings are consistent with aircraft observations over the Amazon, as reported in an accompanying paper in the same issue of Nature. Together, the two papers provide a compelling picture of the importance of isoprene-driven aerosol formation and its relevance for the atmosphere.

Since it began operation in 2009, the CLOUD experiment has unearthed several mechanisms by which aerosol particles form and grow in different regions of Earth’s atmosphere. “In addition to helping climate researchers understand the critical role of aerosols in Earth’s climate, the new CLOUD result demonstrates the rich diversity of CERN’s scientific programme and the power of accelerator-based science to address societal challenges,” says CERN Director for Research and Computing, Joachim Mnich.

The post CLOUD explains Amazon aerosols appeared first on CERN Courier.

]]>
News The CLOUD collaboration at CERN has revealed a new source of atmospheric aerosol particles that could help scientists to refine climate models. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_NA_cloudfrontis.jpg
Painting Higgs’ portrait in Paris https://cerncourier.com/a/painting-higgs-portrait-in-paris/ Mon, 27 Jan 2025 07:25:46 +0000 https://cerncourier.com/?p=112363 The 14th Higgs Hunting workshop deciphered the latest results from the ATLAS and CMS experiments.

The post Painting Higgs’ portrait in Paris appeared first on CERN Courier.

]]>
The 14th Higgs Hunting workshop took place from 23 to 25 September 2024 at Orsay’s IJCLab and Paris’s Laboratoire Astroparticule et Cosmologie. More than 100 participants joined lively discussions to decipher the latest developments in theory and results from the ATLAS and CMS experiments.

The portrait of the Higgs boson painted by experimental data is becoming more and more precise. Many new Run 2 and first Run 3 results have developed the picture this year. Highlights included the latest di-Higgs combinations with cross-section upper limits reaching down to 2.5 times the Standard Model (SM) expectations. A few excesses seen in various analyses were also discussed. The CMS collaboration reported a brand-new excess of top–antitop events near the top–antitop production threshold, with a local significance of more than 5σ above the background described by perturbative quantum chromodynamics (QCD) only, that could be due to a pseudoscalar top–antitop bound state. A new W-boson mass measurement by the CMS collaboration – a subject deeply connected to electroweak symmetry breaking – was also presented, reporting a value consistent with the SM prediction with a precision of 9.9 MeV (CERN Courier November/December 2024 p7).

Parton shower event generators were in the spotlight. Historical talks by Torbjörn Sjöstrand (Lund University) and Bryan Webber (University of Cambridge) described the evolution of the PYTHIA and HERWIG generators, the crucial role they played in the discovery of the Higgs boson, and the role they now play in the LHC’s physics programme. Differences in the modelling of the parton–shower systematics by the ATLAS and CMS collaborations led to lively discussions!

The vision talk was given by Lance Dixon (SLAC) about the reconstruction of scattering amplitudes directly from analytic properties, as a complementary approach to Lagrangians and Feynman diagrams. Oliver Bruning (CERN) conveyed the message that the HL-LHC accelerator project is well on track, and Patricia McBride (Fermilab) reached a similar conclusion regarding ATLAS and CMS’s Phase-2 upgrades, enjoining new and young people to join the effort to ensure the upgrades are ready and commissioned for the start of Run 4.

The next Higgs Hunting workshop will be held in Orsay and Paris from 15 to 17 July 2025, following EPS-HEP in Marseille from 7 to 11 July.

]]>
Meeting report The 14th Higgs Hunting workshop deciphered the latest results from the ATLAS and CMS experiments. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_FN_higgs.jpg
Trial trap on a truck https://cerncourier.com/a/trial-trap-on-a-truck/ Mon, 27 Jan 2025 07:24:01 +0000 https://cerncourier.com/?p=112206 CERN'S BASE-STEP experiment has taken the first step in testing the world's most compact antimatter trap.

The post Trial trap on a truck appeared first on CERN Courier.

]]>
Thirty years ago, physicists from Harvard University set out to build a portable antiproton trap. They tested it on electrons, transporting them 5000 km from Nebraska to Massachusetts, but it was never used to transport antimatter. Now, a spin-off project of the Baryon Antibaryon Symmetry Experiment (BASE) at CERN has tested its own antiproton trap, this time using protons. The ultimate goal is to deliver antiprotons to labs beyond CERN’s reach.

“For studying the fundamental properties of protons and antiprotons, you need to take extremely precise measurements – as precise as you can possibly make it,” explains principal investigator Christian Smorra. “This level of precision is extremely difficult to achieve in the antimatter factory, and can only be reached when the accelerator is shut down. This is why we need to relocate the measurements – so we can get rid of these problems and measure anytime.”

The team has made considerable strides to miniaturise their apparatus. BASE-STEP is far and away the most compact design for an antiproton trap yet built, measuring just 2 metres in length, 1.58 metres in height and 0.87 metres across. Weighing in at 1 tonne, the trap nevertheless makes transportation a complex operation. On 24 October, 70 protons were introduced into the trap, which was then lifted onto a truck using two overhead cranes. The protons made a round trip through CERN’s main site before returning home to the antimatter factory. All 70 protons were safely transported and the experiment with these particles continued seamlessly, successfully demonstrating the trap’s performance.

Antimatter needs to be handled carefully, to avoid it annihilating with the walls of the trap. This is hard to achieve in the controlled environment of a laboratory, let alone on a moving truck. Just like in the BASE laboratory, BASE-STEP uses a Penning trap with two electrode stacks inside a single solenoid. The magnetic field confines charged particles radially, and the electric fields trap them axially. The first electrode stack collects antiprotons from CERN’s antimatter factory and serves as an “airlock” by protecting antiprotons from annihilation with the molecules of external gases. The second is used for long-term storage. While in transit, non-destructive image-current detection monitors the particles and makes sure they have not hit the walls of the trap.

“We originally wanted a system that you can put in the back of your car,” says Smorra. “Next, we want to try using permanent magnets instead of a superconducting solenoid. This would make the trap even smaller and save CHF 300,000. With this technology, there will be so much more potential for future experiments at CERN and beyond.”

With or without a superconducting magnet, continuous cooling is essential to prevent heat from degrading the trap’s ultra-high vacuum. Penning traps conventionally require two separate cooling systems – one for the trap and one for the superconducting magnet. BASE-STEP combines the cooling systems into one, as the Harvard team proposed in 1993. Ultimately, the transport system will have a cryocooler that is attached to a mobile power generator with a liquid-helium buffer tank present as a backup. Should the power generator be interrupted, the back-up cooling system provides a grace period of four hours to fix it and save the precious cargo of antiprotons. But such a scenario carries no safety risk given the minuscule amount of antimatter being transported. “The worst that can happen is the antiprotons annihilate, and you have to go back to the antimatter factory to refill the trap,” explains Smorra.

With the proton trial run a success, the team are confident they will be able to use this apparatus to successfully deliver antiprotons to precision laboratories in Europe. Next summer, BASE-STEP will load up the trap with 1000 antiprotons and hit the road. Their first stop is scheduled to be Heinrich Heine University in Germany.

“We can use the same apparatus for the antiproton transport,” says Smorra. “All we need to do is switch the polarity of the electrodes.”

]]>
News CERN'S BASE-STEP experiment has taken the first step in testing the world's most compact antimatter trap. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_NA_base.jpg
Emphasising the free circulation of scientists https://cerncourier.com/a/emphasising-the-free-circulation-of-scientists/ Mon, 27 Jan 2025 07:23:24 +0000 https://cerncourier.com/?p=112341 The 33rd assembly of the International Union of Pure and Applied Physics took place in Haikou, China.

The post Emphasising the free circulation of scientists appeared first on CERN Courier.

]]>
Physics is a universal language that unites scientists worldwide. No event illustrates this more vividly than the general assembly of the International Union of Pure and Applied Physics (IUPAP). The 33rd assembly convened 100 delegates representing territories around the world in Haikou, China, from 10 to 14 October 2024. Amid today’s polarised global landscape, one clear commitment emerged: to uphold the universality of science and ensure the free movement of scientists.

IUPAP was established in 1922 in the aftermath of World War I to coordinate international efforts in physics. Its logo is recognisable from conferences and proceedings, but its mission is less widely understood. IUPAP is the only worldwide organisation dedicated to the advancement of all fields of physics. Its goals include promoting global development and cooperation in physics by sponsoring international meetings; strengthening physics education, especially in developing countries; increasing diversity and inclusion in physics; advancing the participation and recognition of women and of people from under-represented groups; enhancing the visibility of early-career talents; and promoting international agreements on symbols, units, nomenclature and standards. At the 33rd assembly, 300 physicists were elected to the executive council and specialised commissions for a period of three years.

Global scientific initiatives were highlighted, including the International Year of Quantum Science and Technology (IYQ2025) and the International Decade on Science for Sustainable Development (IDSSD) from 2024 to 2033, which was adopted by the United Nations General Assembly in August 2023. A key session addressed the importance of industry partnerships, with delegates exploring strategies to engage companies in IYQ2025 and IDSSD to further IUPAP’s mission of using physics to drive societal progress. Nobel laureate Giorgio Parisi discussed the role of physics in promoting a sustainable future, and public lectures by fellow laureates Barry Barish, Takaaki Kajita and Samuel Ting filled the 1820-seat Oriental Universal Theater with enthusiastic students.

A key focus of the meeting was visa-related issues affecting international conferences. Delegates reaffirmed the union’s commitment to scientists’ freedom of movement. IUPAP stands against any discrimination in physics and will continue to sponsor events only in locations that uphold this value – a stance at odds with the policies of countries imposing sanctions on scientists affiliated with specific institutions.

A joint session with the fall meeting of the Chinese Physical Society celebrated the 25th anniversary of the IUPAP working group “Women in Physics” and emphasised diversity, equity and inclusion in the field. Since 2002, IUPAP has established precise guidelines for the sponsorship of conferences to ensure that women are fairly represented among participants, speakers and committee members, and has actively monitored the data ever since. This has contributed to a significant change in the participation of women in IUPAP-sponsored conferences. IUPAP is now building on this still-necessary work on gender by focusing on discrimination on the grounds of disability and ethnicity.

The closing ceremony brought together the themes of continuity and change. Incoming president Silvina Ponce Dawson (University of Buenos Aires) and president-designate Sunil Gupta (Tata Institute) outlined their joint commitment to maintaining an open dialogue among all physicists in an increasingly fragmented world, and to promoting physics as an essential tool for development and sustainability. Outgoing leaders Michel Spiro (CNRS) and Bruce McKellar (University of Melbourne) were honoured for their contributions, and the ceremonial handover symbolised a smooth transition of leadership.

As the general assembly concluded, there was a palpable sense of momentum. From strategic modernisation to deeper engagement with global issues, IUPAP is well-positioned to make physics more relevant and accessible. The resounding message was one of unity and purpose: the physics community is dedicated to leveraging science for a brighter, more sustainable future.

]]>
Meeting report The 33rd assembly of the International Union of Pure and Applied Physics took place in Haikou, China. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_FN_IUPAP.jpg
The new hackerpreneur https://cerncourier.com/a/the-new-hackerpreneur/ Mon, 27 Jan 2025 07:22:11 +0000 https://cerncourier.com/?p=112258 Hackathons can kick-start your career, says hacker and entrepreneur Jiannan Zhang.

The post The new hackerpreneur appeared first on CERN Courier.

]]>
The World Wide Web, AI and quantum computing – what do these technologies have in common? They all started out as “hacks”, says Jiannan Zhang, founder of the open-source community platform DoraHacks. “When the Web was invented at CERN, it demonstrated that in order to fundamentally change how people live and work, you have to think of new ways to use existing technology,” says Zhang. “Progress cannot be made if you always start from scratch. That’s what hackathons are for.”

Ten years ago, Zhang helped organise the first CERN Webfest, a hackathon that explores creative uses of technology for science and society. Webfest helped Zhang develop his coding skills and knowledge of physics by applying it to something beyond his own discipline. He also made long-lasting connections with teammates, who were from different academic backgrounds and all over the world. After participating in more hackathons, Zhang’s growing “hacker spirit” inspired him to start his own company. In 2024 Zhang returned to Webfest not as a participant, but as the CEO of DoraHacks.

Hackathons are social coding events often spanning multiple days. They are inclusive and open – no academic institution or corporate backing is required – making them accessible to a diverse range of talented individuals. Participants work in teams, pooling their skills to tackle technical problems through software, hardware or a business plan for a new product. Physicists, computer scientists, engineers and entrepreneurs all bring their strengths to the table. Young scientists can pursue work that may not fit within typical research structures, develop their skills, and build portfolios and professional networks.

“If you’re really passionate about some­thing, you should be able to jump on a project and work on it,” says Zhang. “You shouldn’t need to be associated with a university or have a PhD to pursue it.”

For early-career researchers, hackathons offer more than just technical challenges. They provide an alternative entry point into research and industry, bridging the gap between academia and real-world applications. University-run hackathons often attract corporate sponsors, giving them the budget to rent out stadiums with hundreds, sometimes thousands, of attendees.

“These large-scale hackathons really capture the attention of headhunters and mentors from industry,” explains Zhang. “They see the events as a recruitment pool. It can be a really effective way to advance careers and speak to representatives of big companies, as well as enhancing your coding skills.”

In the 2010s, weekend hackathons served as Zhang’s stepping stone into entrepreneurship. “I used to sit in the computer-science common room and work on my hacks. That’s how I met most of my friends,” recalls Zhang. “But later I realised that to build something great, I had to effectively organise people and capital. So I started to skip my computer-science classes and sneak into the business classrooms.” Zhang would hide in the back row of the business lectures, plotting his path towards entrepreneurship. He networked with peers to evaluate different business models each day. “It was fun to combine our knowledge of engineering and business theory,” he adds. “It made the journey a lot less stressful.”

But the transition from science to entrepreneurship was hard. “At the start you must learn and do everything yourself. The good thing is you’re exposed to lots of new skills and new people, but you also have to force yourself to do things you’re not usually good at.”

This is a dilemma many entrepreneurs face: whether to learn new skills from scratch, or to find business partners and delegate tasks. But finding trustworthy business partners is not always easy, and making the wrong decision can hinder the start-up’s progress. That’s why planning the company’s vision and mission from the start is so important.

“The solution is actually pretty straightforward,” says Zhang. “You need to spend more time completing the important milestones yourself, to ensure you have a feasible product. Once you make the business plan and vision clear, you get support from everywhere.”

Decentralised community governance

Rather than hackathon participants competing for a week before abandoning their code, Zhang started DoraHacks to give teams from all over the world a chance to turn their ideas into fully developed products. “I want hackathons to be more than a recruitment tool,” he explains. “They should foster open-source development and decentralised community governance. Today, a hacker from Tanzania can collaborate virtually with a team in the US, and teams gain support to develop real products. This helps make tech fields much more diverse and accessible.”

Zhang’s company enables this by reducing logistical costs for organisers and providing funding mechanisms for participants, making hackathons accessible to aspiring researchers beyond academic institutions. As the community expands, new doors open for young scientists at the start of their careers.

“The business model is changing,” says Zhang. Hackathons are becoming fundamental to emerging technologies, particularly in areas like quantum computing, blockchain and AI, which often start out open source. “There will be a major shift in the process of product creation. Instead of building products in isolation, new technologies rely on platforms and infrastructure where hackers can contribute.”

Today, hackathons aren’t just about coding or networking – they’re about pushing the boundaries of what’s possible, creating meaningful solutions and launching new career paths. They act as incubators for ideas with lasting impact. Zhang wants to help these ideas become reality. “The future of innovation is collaborative and open source,” he says. “The old world relies on corporations building moats around closed-source technology, which is inefficient and inaccessible. The new world is centred around open platform technology, where people can build on top of old projects. This collaborative spirit is what makes the hacker movement so important.”

]]>
Careers Hackathons can kick-start your career, says hacker and entrepreneur Jiannan Zhang. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_CAR_zhang.jpg
The value of being messy https://cerncourier.com/a/the-value-of-being-messy/ Mon, 27 Jan 2025 07:20:48 +0000 https://cerncourier.com/?p=112190 Claire Malone argues that science communicators should not stray too far into public-relations territory.

The post The value of being messy appeared first on CERN Courier.

]]>
The line between science communication and public relations has become increasingly blurred. On one side, scientific press officers highlight institutional success, secure funding and showcase breakthrough discoveries. On the other, science communicators and journalists present scientific findings in a way that educates and entertains readers – acknowledging both the triumphs and the inherent uncertainties of the scientific process.

The core difference between these approaches lies in how they handle the inevitable messiness of science. Science isn’t a smooth, linear path of consistent triumphs; it’s an uncertain, trial-and-error journey. This uncertainty, and our willingness to discuss it openly, is what distinguishes authentic science communication from a polished public relations (PR) pitch. By necessity, PR often strives to present a neat narrative, free of controversy or doubt, but this risks creating a distorted perception of what science actually is.

Finding your voice

Take, for example, the situation in particle physics. Experiments probing the fundamental laws of physics are often critiqued in the press for their hefty price tags – particularly when people are eager to see resources directed towards solving global crises like climate change or preventing future pandemics. When researchers and science communicators are finding their voice, a pressing question is how much messiness to communicate in uncertain times.

After completing my PhD as part of the ATLAS collaboration, I became a science journalist and communicator, connecting audiences across Europe and America with the joy of learning about fundamental physics. After a recent talk at the Royal Institution in London, in which I explained how ATLAS measures fundamental particles, I received an email from a colleague. The only question the talk prompted him to ask was about the safety of colliding protons in the hope of creating undiscovered particles. This reaction reflects how scientific misinformation – such as the idea that experiments at CERN could endanger the planet – can be persistent and difficult to eradicate.

In response to such criticisms and concerns, I have argued many times for the value of fundamental physics research, often highlighting the vast number of technological advancements it enables, from touch screens to healthcare advances. However, we must be wary not to only rely on this PR tactic of stressing the tangible benefits of research, as it can sometimes sidestep the uncertainties and iterative nature of scientific investigation, presenting an oversimplified version of scientific progress.

From Democritus to the Standard Model

This PR-driven approach risks undermining public understanding and trust in science in the long run. When science is framed solely as a series of grand successes without any setbacks, people may become confused or disillusioned when they inevitably encounter controversies or failures. Instead, this is where honest science communication shines – admitting that our understanding evolves, that we make mistakes and that uncertainties are an integral part of the process.

Our evolving understanding of particle physics is a perfect illustration of this. From Democritus’ concept of “indivisible atoms” to the development of the Standard Model, every new discovery has refined or even overhauled our previous understanding. This is the essence of science – always refining, never perfect – and it’s exactly what we should be communicating to the public.

Embracing this messiness doesn’t necessarily reduce public trust. When presenting scientific results to the public, it’s important to remember that uncertainty can take many forms, and how we communicate these forms can significantly affect credibility. Technical uncertainty – expressing complexity or incomplete information – often increases audience trust, as it communicates the real intricacies of scientific research. Conversely, consensus uncertainty – spotlighting disagreements or controversies among experts – can have a negative impact on credibility. When it comes to genuine disagreements among scientists, effectively communicating uncertainty to the public requires a thoughtful balance. Transparency is key: acknowledging the existence of different scientific perspectives helps the public understand that science is a dynamic process. Providing context about why disagreements exist, whether due to limited data or competing theoretical frameworks, also helps in making the uncertainty comprehensible.

Embrace errors

In other words, the next time you present your latest results on social media, don’t shy away from including the error bars. And if you must have a public argument with a colleague about what the results mean, context is essential!

Acknowledging the existence of different scientific perspectives helps the public understand that science is a dynamic process

No one knows where the next breakthrough will come from or how it might solve the challenges we face. In an information ecosystem increasingly filled with misinformation, scientists and science communicators must help people understand the iterative, uncertain and evolving nature of science. As science communicators, we should be cautious not to stray too far into PR territory. Authentic communication doesn’t mean glossing over uncertainties but rather embracing them as an essential part of the story. This way, the public can appreciate science not just as a collection of established facts, but as an ongoing, dynamic process – messy, yet ultimately satisfying.

]]>
Opinion Claire Malone argues that science communicators should not stray too far into public-relations territory. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_VIEW_malone.jpg
Cornering compressed SUSY https://cerncourier.com/a/cornering-compressed-susy/ Mon, 27 Jan 2025 07:18:49 +0000 https://cerncourier.com/?p=112235 A new CMS analysis explores an often overlooked, difficult corner of SUSY manifestations: compressed sparticle mass spectra.

The post Cornering compressed SUSY appeared first on CERN Courier.

]]>
CMS figure 1

Since the LHC began operations in 2008, the CMS experiment has been searching for signs of supersymmetry (SUSY) – the only remaining spacetime symmetry not yet observed to have consequences for physics. It has explored higher and higher masses of supersymmetric particles (sparticles) with increasing collision energies and growing datasets. No evidence has been observed so far. A new CMS analysis using data recorded between 2016 and 2018 continues this search in an often overlooked, difficult corner of SUSY manifestations: compressed sparticle mass spectra.

The masses of SUSY sparticles have very important implications for both the physics of our universe and how they could be potentially produced and observed at experiments like CMS. The heavier the sparticle, the rarer its appearance. On the other hand, when heavy sparticles decay, their mass is converted to the masses and momenta of SM particles, like leptons and jets. These particles are detected by CMS, with large masses leaving potentially spectacular (and conspicuous) signatures. Each heavy sparticle is expected to continue to decay to lighter ones, ending with the lightest SUSY particles (LSPs). LSPs, though massive, are stable and do not decay in the detector. Instead, they appear as missing momentum. In cases of compressed sparticle mass spectra, the mass difference between the initially produced sparticles and LSPs is small. This means the low rates of production of massive sparticles are not accompanied by high-momentum decay products in the detector. Most of their mass ends up escaping in the form of invisible particles, significantly complicating observation.

This new CMS result turns this difficulty on its head, using a kinematic observable RISR, which is directly sensitive to the mass of LSPs as opposed to the mass difference between parent sparticles and LSPs. The result is even better discrimination between SUSY and SM backgrounds when sparticle spectra are more compressed.

This approach focuses on events where putative SUSY candidates receive a significant “kick” from initial-state radiation (ISR) – additional jets recoiling opposite the system of sparticles. When the sparticle masses are highly compressed, the invisible, massive LSPs receive most of the ISR momentum-kick, with this fraction telling us about the LSP masses through the RISR observable.
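The underlying kinematics can be illustrated with a toy one-dimensional calculation (a hedged sketch: the function and numbers below are invented for illustration and are not the collaboration’s actual observable definition). Because a compressed LSP is nearly as heavy as its parent sparticle, it inherits almost all of the parent’s velocity, so the invisible system carries a fraction of the ISR recoil close to m_LSP/m_parent – the quantity that RISR estimates.

```python
import math

def lsp_momentum_fraction(M, m, q, beta=0.8):
    """Toy 1D model: a parent sparticle of mass M, recoiling against ISR
    with velocity beta, decays to an LSP of mass m plus visible particles
    carrying momentum q in the parent's rest frame. Returns the fraction
    of the parent's lab-frame momentum carried by the invisible LSP,
    averaged over the two decay directions along the recoil axis."""
    gamma = 1 / math.sqrt(1 - beta**2)
    e_lsp = math.sqrt(m**2 + q**2)          # LSP energy in the parent frame
    p_parent = gamma * beta * M             # parent momentum in the lab
    p_fwd = gamma * (q + beta * e_lsp)      # LSP boosted forwards
    p_bwd = gamma * (-q + beta * e_lsp)     # LSP boosted backwards
    return 0.5 * (p_fwd + p_bwd) / p_parent

# Compressed spectrum: 500 GeV parent, 480 GeV LSP, soft (5 GeV) decay
print(round(lsp_momentum_fraction(500.0, 480.0, 5.0), 3))  # 0.96 = m_LSP/M
```

In this toy limit the invisible fraction approaches one as the spectrum becomes more compressed, which is why compressed signals populate large values of RISR while SM backgrounds do not.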

Given the generic applicability of the approach, the analysis is able to systematically probe a large class of possible scenarios. This includes events with various numbers of leptons (0, 1, 2 or 3) and jets (including those from heavy-flavour quarks), with a focus on objects with low momentum. These multiplicities, along with RISR and other selected discriminating variables, are used to categorise recorded events and a comprehensive fit is performed to all these regions. Compressed SUSY signals would appear at larger values of RISR, while bins at lower values are used to model and constrain SM backgrounds. With more than 2000 different bins in RISR, over several hundred object-based categories, a significant fraction of the experimental phase space in which compressed SUSY could hide is scrutinised.

In the absence of significant observed deviations in data yields from SM expectations, a large collection of SUSY scenarios can be excluded at high confidence level (CL), including those with the production of stop quarks, EWKinos and sleptons. As can be seen in the results for stop quarks (figure 1), the analysis is able to achieve excellent sensitivity to compressed SUSY. Here, as for many of the SUSY scenarios considered, the analysis provides the world’s most stringent constraints on compressed SUSY, further narrowing the space it could be hiding.

]]>
News A new CMS analysis explores an often overlooked, difficult corner of SUSY manifestations: compressed sparticle mass spectra. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_EF_CMS_feature.jpg
Chinese space station gears up for astrophysics https://cerncourier.com/a/chinese-space-station-gears-up-for-astrophysics/ Mon, 27 Jan 2025 07:16:33 +0000 https://cerncourier.com/?p=112214 China’s Tiangong space station represents one of the biggest projects in space exploration in recent decades.

The post Chinese space station gears up for astrophysics appeared first on CERN Courier.

]]>
Completed in 2022, China’s Tiangong space station represents one of the biggest projects in space exploration in recent decades. Like the International Space Station, its ability to provide large amounts of power, support heavy payloads and access powerful communication and computing facilities gives it many advantages over typical satellite platforms. As such, both Chinese and international collaborations have been developing a number of science missions ranging from optical astronomy to the detection of cosmic rays with PeV energies.

For optical astronomy, the space station will be accompanied by the Xuntian telescope, which can be translated to “survey the heavens”. Xuntian is currently planned to be launched in mid-2025 to fly alongside Tiangong, thereby allowing for regular maintenance. Although its spatial resolution will be similar to that of the Hubble Space Telescope, Xuntian’s field of view will be about 300 times larger, allowing the observation of many objects at the same time. In addition to producing impressive images similar to those sent by Hubble, the instrument will be important for cosmological studies where large statistics for astronomical objects are typically required to study their evolution.

Another instrument that will observe large portions of the sky is LyRIC (Lyman UV Radiation from Interstellar medium and Circum-galactic medium). After being placed on the space station in the coming years, LyRIC will probe the poorly studied far-ultraviolet regime that contains emission lines from neutral hydrogen and other elements. While difficult to measure, this allows studies of baryonic matter in the universe, which can be used to answer important questions such as why only about half of the total baryons in the standard “ΛCDM” cosmological model can be accounted for.

At slightly higher energies, the Diffuse X-ray Explorer (DIXE) aims to use a novel type of X-ray detector to reach an energy resolution better than 1% in the 0.1 to 10 keV energy range. It achieves this using cryogenic transition-edge sensors (TESs), which exploit the rapid change in resistance that occurs during a superconducting phase transition. In this regime, the resistivity of the material is highly dependent on its temperature, allowing the detection of minuscule temperature increases resulting from X-rays being absorbed by the material. Positioned to scan the sky above the Tiangong space station, DIXE will be able, among other things, to measure the velocity of material that appears to have been emitted by the Milky Way during an active stage of its central black hole. Its high energy resolution will allow Doppler shifts of the order of several eV to be measured, requiring the TES detectors to operate at 50 mK. Achieving such temperatures demands a 640 W cooling system – a power level that is difficult to achieve on a satellite, but relatively easy to acquire on a space station. As such, DIXE will be one of the first detectors using this new technology when it launches in 2025, leading the way for missions such as the European ATHENA mission that plans to use it starting in 2037.
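The connection between eV-scale energy resolution and velocity measurements follows from the non-relativistic Doppler formula ΔE = (v/c)E. As a back-of-the-envelope sketch (the line energy and shift below are illustrative values, not mission specifications):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def doppler_velocity(delta_e_ev, line_e_ev):
    """Line-of-sight velocity implied by an energy shift delta_e_ev
    of a spectral line at energy line_e_ev (non-relativistic limit)."""
    return C_KM_S * delta_e_ev / line_e_ev

# A 2 eV shift of an oxygen line at ~654 eV, within DIXE's soft X-ray band
print(f"{doppler_velocity(2.0, 654.0):.0f} km/s")  # 917 km/s
```

Resolving shifts of a few eV on sub-keV lines thus corresponds to line-of-sight velocities of order 1000 km/s, the scale relevant for material driven out during an active phase of the Milky Way’s central black hole.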

Although not as large or mature as the International Space Station, Tiangong’s capacity to host cutting-edge astrophysics missions is catching up

POLAR-2 was accepted as an international payload on the China space station through the United Nations Office for Outer Space Affairs and has since become a CERN-recognised experiment. The mission started as a Swiss, German, Polish and Chinese collaboration building on the success of POLAR, which flew on the space station’s predecessor Tiangong-2. Like its earlier incarnation, POLAR-2 measures the polarisation of high-energy X-rays or gamma rays to provide insights into, for example, the magnetic fields that produced the emission. As one of the most sensitive gamma-ray detectors in the sky, POLAR-2 can also play an important role in alerting other instruments when a bright gamma-ray transient, such as a gamma-ray burst, appears. The importance of such alerts has resulted in the expansion of POLAR-2 to include an accompanying imaging spectrometer, which will provide detailed spectral and location information on any gamma-ray transient. Also now foreseen for this second payload is an additional wide-field-of-view X-ray polarimeter. The international team developing the three instruments, which are scheduled to be launched in 2027, is led by the Institute of High Energy Physics in Beijing.

For studying the universe using even higher energy emissions, the space station will host the High Energy cosmic-Radiation Detection Facility (HERD). HERD is designed to study both cosmic rays and gamma rays at energies beyond those accessible to instruments like AMS-02, CALET (CERN Courier July/August 2024 p24) and DAMPE. It aims to achieve this, in part, by simply being larger, resulting in a mass that is currently only possible to support on a space station. The HERD calorimeter will be 55 radiation lengths long and consist of several tonnes of scintillating cubic LYSO crystals. The instrument will also use high-precision silicon trackers, which, in combination with the deep calorimeter, will provide a better angular resolution and a geometrical acceptance 30 times larger than that of the present AMS-02 (which is due to be upgraded next year). This will allow HERD to probe the cosmic-ray spectrum up to PeV energies, filling in the energy gap between current space missions and ground-based detectors. HERD started out as an international mission with a large European contribution; however, delays on the European side regarding participation, in combination with a launch requirement of 2027, mean that it is currently foreseen to be a fully Chinese mission.

Although not as large or mature as the International Space Station, Tiangong’s capacity to host cutting-edge astrophysics missions is catching up. As well as providing researchers with a pristine view of the electromagnetic universe, instruments such as HERD will enable vital cross-checks of data from AMS-02 and other unique experiments in space.

The post Chinese space station gears up for astrophysics appeared first on CERN Courier.

Taking the lead in the monopole hunt https://cerncourier.com/a/taking-the-lead-in-the-monopole-hunt/ Mon, 27 Jan 2025 07:15:21 +0000 https://cerncourier.com/?p=112230 Magnetic monopoles are hypothetical particles that would carry magnetic charge, a concept first proposed by Paul Dirac in 1931.

The post Taking the lead in the monopole hunt appeared first on CERN Courier.

ATLAS figure 1

Magnetic monopoles are hypothetical particles that would carry magnetic charge, a concept first proposed by Paul Dirac in 1931. He pointed out that if monopoles exist, electric charge must be quantised, meaning that particle charges must be integer multiples of a fundamental charge. Electric charge quantisation is indeed observed in nature, with no other known explanation for this striking phenomenon. The ATLAS collaboration performed a search for these elusive particles using lead–lead (PbPb) collisions at 5.36 TeV from Run 3 of the Large Hadron Collider.

The search targeted the production of monopole–antimonopole pairs via photon–photon interactions, a process enhanced in heavy-ion collisions by the strong electromagnetic fields (scaling as Z²) generated by the Z = 82 lead nuclei. Ultraperipheral collisions are ideal for this search, as they feature electromagnetic interactions without direct nuclear contact, allowing rare processes like monopole production to dominate the visible signatures. The ATLAS study employed a novel detection technique exploiting the expected highly ionising nature of these particles, which would leave a characteristic signal in the innermost silicon detectors of the ATLAS experiment (figure 1).
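
As a rough illustration of why lead beats protons here: the flux of quasi-real photons from each nucleus scales with the square of its charge, so a two-photon process gains roughly a factor Z⁴ relative to the single-charge case. This is a textbook ultraperipheral-collision estimate, not a number taken from the ATLAS analysis:

```python
Z = 82  # atomic number of lead

per_photon = Z**2   # photon flux from one nucleus scales as Z^2
two_photon = Z**4   # naive enhancement for a photon-photon initiated process

print(per_photon)   # 6724
print(two_photon)   # 45212176, i.e. ~4.5e7
```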

The analysis employed a non-perturbative semiclassical model to estimate monopole production. Traditional perturbative models, which rely on Feynman diagrams, are inadequate due to the large coupling constant of magnetic monopoles. Instead, the study used a model based on the Schwinger mechanism, adapted for magnetic fields, to predict monopole production in the strong magnetic fields of ultraperipheral collisions. This approach offers a more robust theoretical framework for the search.

ATLAS figure 2

The experiment’s trigger system was critical to the search. Given the high ionisation signature of monopoles, traditional calorimeter-based triggers were unsuitable, as even high-momentum monopoles lose energy rapidly through ionisation and do not reach the calorimeter. Instead, the trigger, newly introduced for the 2023 PbPb data-taking campaign, focused on detecting the forward neutrons emitted during electromagnetic interactions. The level-1 trigger system identified neutrons using the Zero-Degree Calorimeter, while the high-level trigger required more than 100 clusters of pixel-detector hits in the inner detector – an approach sensitive to monopoles due to their high ionisation signatures.

Additionally, the analysis examined the topology of pixel clusters to further refine the search, as a more aligned azimuthal distribution in the data would indicate a signature consistent with monopoles (figure 1), while the uniform distribution typically associated with beam-induced backgrounds could be identified and suppressed.

No significant monopole signal is observed beyond the expected background, with the latter being estimated using a data-driven technique. Consequently, the analysis set new upper limits on the cross-section for magnetic monopole production (figure 2), significantly improving existing limits for low-mass monopoles in the 20–150 GeV range. Assuming a non-perturbative semiclassical model, the search excludes monopoles with a single Dirac magnetic charge and masses below 120 GeV. The techniques developed in this search will open new possibilities to study other highly ionising particles that may emerge from beyond-Standard Model physics.

Unprecedented progress in energy-efficient RF https://cerncourier.com/a/unprecedented-progress-in-energy-efficient-rf/ Mon, 27 Jan 2025 07:14:38 +0000 https://cerncourier.com/?p=112349 Forty-five experts from industry and academia met in the magnificent city of Toledo for the second workshop on efficient RF sources.

The post Unprecedented progress in energy-efficient RF appeared first on CERN Courier.

Forty-five experts from industry and academia met in the magnificent city of Toledo, Spain from 23 to 25 September 2024 for the second workshop on efficient RF sources. Part of the I.FAST initiative on sustainable concepts and technologies (CERN Courier July/August 2024 p20), the event focused on recent advances in energy-efficient technology for RF sources essential to accelerators. Progress in the last two years has been unprecedented, with new initiatives and accomplishments around the world fuelled by the ambitious goals of new, high-energy particle-physics projects.

Out of more than 30 presentations, a significant number featured pulsed, high-peak-power RF sources working at frequencies above 3 GHz in the S, C and X bands. These involve high-efficiency klystrons that are being designed, built and tested for the KEK e–/e+ Injector, the new EuPRAXIA@SPARC_LAB linac, the CLIC testing facilities, muon collider R&D, the CEPC injector linac and the C3 project. Reported increases in beam-to-RF power efficiency range from 15 percentage points for the retrofit prototype for CLIC to more than 25 points (expected) for a new greenfield klystron design that can be used across most new projects.

A very dynamic area for R&D is the search for efficient sources of the continuous-wave (CW) and long-pulse RF needed for circular accelerators. Typically working in the L-band, existing devices deliver less than 3 MW in peak power. Solid-state amplifiers, inductive output tubes, klystrons, magnetrons, triodes and exotic newly rediscovered vacuum tubes called “tristrons” compete in this arena. Successful prototypes have been built for the High-Luminosity LHC and CEPC with power-efficiency gains of 10 to 20 points. In the case of the LHC, this will allow 15% more power without an impact on the electricity bill; in the case of a circular Higgs factory, this will allow a 30% reduction. CERN and SLAC are also investigating very-high-efficiency vacuum tubes for the Future Circular Collider with a potential reduction of close to 50% on the final electricity bill. A collaboration between academia and industry would certainly be required to bring this exciting new technology to light.
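
The arithmetic connecting efficiency points to the electricity bill is simple: for a fixed RF output, the grid power scales as the inverse of the efficiency. A sketch with illustrative numbers (a 10-point gain on a 60%-efficient baseline, not the figures of any specific device):

```python
def grid_power(p_rf: float, efficiency: float) -> float:
    """Wall-plug power needed to deliver p_rf of RF output power."""
    return p_rf / efficiency

base, upgraded = 0.60, 0.70  # beam-to-RF efficiency before/after (illustrative)

# Same RF output: how much less electricity is drawn from the grid?
saving = 1 - grid_power(1.0, upgraded) / grid_power(1.0, base)
print(f"{saving:.1%} lower grid power")  # 14.3% lower grid power

# Same electricity budget: how much more RF output is available?
extra = upgraded / base - 1
print(f"{extra:.1%} more RF power")      # 16.7% more RF power
```

A roughly 10-point gain thus yields RF-power or electricity changes of the order of 15%, the scale quoted for the LHC upgrade.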

Besides the astounding advances in vacuum-tube technology, solid-state amplifiers based on cheap transistors are undergoing a major transformation thanks to the adoption of gallium-nitride technology. Commercial amplifiers are now capable of delivering kilowatts of power at low duty cycles with a power efficiency of 80%, while Uppsala University and the European Spallation Source have demonstrated the same efficiency for combined systems working in CW.

The search for energy efficiency does not stop at designing and building more efficient RF sources. All aspects of operation, power combination and using permanent magnets and efficient modulators need to be folded in, as described by many concrete examples during the workshop. The field is thriving.

ICFA talks strategy and sustainability in Prague https://cerncourier.com/a/icfa-talks-strategy-and-sustainability-in-prague-2/ Mon, 27 Jan 2025 07:13:18 +0000 https://preview-courier.web.cern.ch/?p=111309 The 96th ICFA meeting heard extensive reports from the leading HEP laboratories and various world regions on their recent activities and plans.

The post ICFA talks strategy and sustainability in Prague appeared first on CERN Courier.

ICFA, the International Committee for Future Accelerators, was formed in 1976 to promote international collaboration in all phases of the construction and exploitation of very-high-energy accelerators. Its 96th meeting took place on 20 and 21 July during the recent ICHEP conference in Prague. Almost all of the 16 members from across the world attended in person, making the assembly lively and constructive.

The committee heard extensive reports from the leading HEP laboratories and various world regions on their recent activities and plans, including a presentation by Paris Sphicas, the chair of the European Committee for Future Accelerators (ECFA), on the process for the update of the European strategy for particle physics (ESPP). Launched by CERN Council in March 2024, the ESPP update is charged with recommending the next collider project at CERN after HL-LHC operation.

A global task

The ESPP update is also of high interest to non-European institutions and projects. Consequently, in addition to the expected inputs to the strategy from European HEP communities, those from non-European HEP communities are also welcome. Moreover, the recent US P5 report and the Chinese plans for CEPC, with a potential positive decision in 2025/2026, and discussions about the ILC project in Japan, will be important elements of the work to be carried out in the context of the ESPP update. They also emphasise the global nature of high-energy physics.

An integral part of the work of ICFA is carried out within its panels, which have been very active. Presentations were given from the new panel on the Data Lifecycle (chair Kati Lassila-Perini, Helsinki), the Beam Dynamics panel (new chair Yuan He, IMPCAS) and the Advanced and Novel Accelerators panel (new chair Patric Muggli, Max Planck Munich, proxied at the meeting by Brigitte Cros, Paris-Saclay). The Instrumentation and Innovation Development panel (chair Ian Shipsey, Oxford) is setting an example with its numerous schools, the ICFA instrumentation awards and centrally sponsored instrumentation studentships for early-career researchers from underserved world regions. Finally, the chair of the ILC International Development Team panel (Tatsuya Nakada, EPFL) summarised the latest status of the ILC Technological Network, and the proposed ILC collider project in Japan.

A special session was devoted to the sustainability of HEP accelerator infrastructures, considering the need to invest efforts into guidelines that enable better comparison of the environmental reports of labs and infrastructures, in particular for future facilities. It was therefore natural for ICFA to also hear reports not only from the panel on Sustainable Accelerators and Colliders led by Thomas Roser (BNL), but also from the European Lab Directors Working Group on Sustainability. This group, chaired by Caterina Bloise (INFN) and Maxim Titov (CEA), is mandated to develop a set of key indicators and a methodology for the reporting on future HEP projects, to be delivered in time for the ESPP update.

Finally, ICFA noted some very interesting structural developments in the global organisation of HEP. In the Asia-Oceania region, ACFA-HEP was recently formed as a sub-panel under the Asian Committee for Future Accelerators (ACFA), aiming for a better coordination of HEP activities in this particular region of the world. Hopefully, this will encourage other world regions to organise themselves in a similar way in order to strengthen their voice in the global HEP community – for example in Latin America. Here, a meeting was organised in August by the Latin American Association for High Energy, Cosmology and Astroparticle Physics (LAA-HECAP) to bring together scientists, institutions and funding agencies from across Latin America to coordinate actions for jointly funding research projects across the continent.

The next in-person ICFA meeting will be held during the Lepton–Photon conference in Madison, Wisconsin (USA), in August 2025.

Isolating photons at low Bjorken x https://cerncourier.com/a/isolating-photons-at-low-bjorken-x/ Mon, 27 Jan 2025 07:11:37 +0000 https://cerncourier.com/?p=112249 A new measurement by ALICE will help to constrain the gluon PDF.

The post Isolating photons at low Bjorken x appeared first on CERN Courier.

ALICE figure 1

In high-energy collisions at the LHC, prompt photons are those that do not originate from particle decays and are instead directly produced by the hard scattering of quarks and gluons (partons). Due to their early production, they provide a clean method to probe the partons inside the colliding nucleons, and in particular the fraction of the momentum of the nucleon carried by each parton (Bjorken x). The distribution of each parton in Bjorken x is known as its parton distribution function (PDF).

Theoretical models of particle production rely on precise knowledge of PDFs, which are derived from vast amounts of experimental data. The high centre-of-mass energies (√s) at the LHC probe very small values of the momentum fraction, Bjorken x. At “midrapidity”, when a parton scatters at a large angle with respect to the beam axis and a prompt photon is produced in the final state, a useful approximation to Bjorken x is provided by the dimensionless variable xT = 2pT/√s, where pT is the transverse momentum of the prompt photon.

Prompt photons can also be produced by next-to-leading-order processes such as parton fragmentation or bremsstrahlung. A clean separation of the different prompt-photon sources is difficult experimentally, but fragmentation can be suppressed by selecting “isolated photons”. For a photon to be considered isolated, the sum of the transverse energies or transverse momenta of the particles produced in a cone around the photon must be smaller than some threshold – a selection that can be done both in the experimental measurement and in theoretical calculations. An isolation requirement also helps to reduce the background of decay photons, since hadrons that can decay to photons are often produced in jet fragmentation.
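
The isolation requirement amounts to a scalar pT sum inside an angular cone. A schematic sketch (the particle records and field names are hypothetical; real analyses run on reconstructed event data):

```python
import math

def is_isolated(photon, particles, cone_r=0.4, max_sum_pt=1.5):
    """True if the summed pT (GeV/c) of particles inside an eta-phi cone
    of radius cone_r around the photon is below max_sum_pt."""
    total = 0.0
    for p in particles:
        deta = p["eta"] - photon["eta"]
        # wrap the azimuthal difference into [-pi, pi]
        dphi = math.remainder(p["phi"] - photon["phi"], 2 * math.pi)
        if math.hypot(deta, dphi) < cone_r:
            total += p["pt"]
    return total < max_sum_pt
```

The same cut can be applied to the particles of a theoretical calculation, which is what makes isolated-photon cross-sections directly comparable between experiment and theory.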

The ALICE collaboration now reports the measurement of the differential cross-section for isolated photons in proton–proton collisions at √s = 13 TeV at midrapidity. The photon measurement is performed by the electromagnetic calorimeter, and the isolated photons are selected by combining it with data from the central inner tracking system and time-projection chamber, requiring that the summed pT of the charged particles in a cone of angular radius 0.4 radians centred on the photon candidate be smaller than 1.5 GeV/c. The isolated-photon cross-sections are obtained within the transverse-momentum range from 7 to 200 GeV/c, corresponding to 1.1 × 10⁻³ < xT < 30.8 × 10⁻³.
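
The quoted xT range can be cross-checked directly from the definition xT = 2pT/√s:

```python
SQRT_S = 13_000.0  # centre-of-mass energy in GeV

def x_t(pt: float) -> float:
    """Dimensionless xT = 2*pT/sqrt(s) for transverse momentum pT (GeV/c)."""
    return 2.0 * pt / SQRT_S

print(x_t(7.0))    # ~1.08e-3, quoted as 1.1 x 10^-3
print(x_t(200.0))  # ~3.08e-2, quoted as 30.8 x 10^-3
```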

Figure 1 shows the new ALICE results alongside those from ATLAS, CMS and prior measurements in proton–proton and proton–antiproton collisions at lower values of √s. The figure spans more than 15 orders of magnitude on the y-axis, representing the cross-section, over a wide range of xT. The present measurement probes the smallest Bjorken x with isolated photons at midrapidity to date. The experimental data points show agreement between all the measurements when scaled with the collision energy to the power n = 4.5. Such a scaling is designed to cancel the predicted 1/(pT)ⁿ dependence of partonic 2 → 2 scattering cross-sections in perturbative QCD and reveal insights into the gluon PDF (see “The other 99%”).
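
The scaling can be illustrated with a toy spectrum: if the invariant cross-section falls as a pure power 1/pTⁿ, multiplying by (√s)ⁿ and evaluating at fixed xT removes the √s dependence entirely. A sketch of the idea under that pure-power-law assumption, not the experimental procedure:

```python
def toy_cross_section(pt: float, n: float = 4.5) -> float:
    """Toy invariant cross-section falling as 1/pT^n (arbitrary normalisation)."""
    return pt ** -n

def scaled(xt: float, sqrt_s: float, n: float = 4.5) -> float:
    """(sqrt(s))^n * sigma, evaluated at pT = xT * sqrt(s) / 2."""
    pt = xt * sqrt_s / 2.0
    return sqrt_s ** n * toy_cross_section(pt, n)

# The same xT at two collision energies gives the same scaled value,
# since sqrt(s)^n * (xT*sqrt(s)/2)^-n = (2/xT)^n is independent of sqrt(s).
print(scaled(1e-2, 13_000.0))
print(scaled(1e-2, 8_000.0))  # agrees with the line above up to rounding
```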

This measurement will help to constrain the gluon PDF and will play a crucial role in exploring medium-induced modifications of hard probes in nucleus–nucleus collisions.

R(D) ratios in line at LHCb https://cerncourier.com/a/rd-ratios-in-line-at-lhcb/ Fri, 24 Jan 2025 16:00:50 +0000 https://cerncourier.com/?p=112240 The accidental symmetries observed between the three generations of leptons are poorly understood, with no compelling theoretical motivation.

The post R(D) ratios in line at LHCb appeared first on CERN Courier.

LHCb figure 1

The accidental symmetries observed between the three generations of leptons are poorly understood, with no compelling theoretical motivation in the framework of the Standard Model (SM). The b → cτντ transition has the potential to reveal new particles or forces that interact primarily with third-generation particles, which are subject to less stringent experimental constraints at present. As a tree-level SM process mediated by W-boson exchange, its amplitude is large, resulting in large branching fractions and significant data samples to analyse.

The observable under scrutiny is the ratio of decay rates between the signal mode involving τ and ντ leptons from the third generation of fermions and the normalisation mode containing μ and νμ leptons from the second generation. Within the SM, this lepton flavour universality (LFU) ratio deviates from unity only due to the different mass of the charged leptons – but new contributions could change the value of the ratios. A longstanding tension exists between the SM prediction and the experimental measurements, requiring further input to clarify the source of the discrepancy.

The LHCb collaboration analysed four decay modes: B0 → D(*)+ℓ–νℓ, with ℓ representing τ or μ. Each is selected using the same visible final state of one muon and light hadrons from the decay of the charm meson. In the normalisation mode, the muon originates directly from the B-hadron decay, while in the signal mode, it arises from the decay of the τ lepton. The four contributions are analysed simultaneously, yielding two LFU ratios between taus and muons – one using the ground state of the D+ meson and one the excited state D*+.

The control of the background contributions is particularly complicated in this analysis as the final state is not fully reconstructible, limiting the resolution on some of the discriminating variables. Instead, a three-dimensional template fit separates the signal and normalisation modes from the background in three variables: the momentum transferred to the lepton pair (q²); the energy of the muon in the rest frame of the B meson (Eμ*); and the invariant mass missing from the visible system. Each contribution is modelled using a template histogram derived either from simulation or from selected control samples in data.

To prevent the simulated data sample size from becoming a limiting factor in the precision of the measurement, a fast tracker-only simulation technique was exploited for the first time in LHCb. Another novel aspect of this work is the use of the HAMMER software tool during the minimisation procedure of the likelihood fit, which enables a fast, but exact, variation of a template as a function of the decay-model parameters. This variation is important to allow the form factors of both the signal and normalisation channels to vary as the constraints derived from the predictions that use precise lattice calculations can have larger uncertainties than those obtained from the fit.

The fit projection over one of the discriminating variables is shown in figure 1, illustrating the complexity of the analysed data sample but nonetheless showcasing LHCb’s ability to distinguish the signal modes (red and orange) from the normalisation modes (two shades of blue) and background contributions.

The measured LFU ratios are in good agreement with the current world average and the predictions of the SM: R(D+) = 0.249 ± 0.043 (stat.) ± 0.047 (syst.) and R(D*+) = 0.402 ± 0.081(stat.) ± 0.085 (syst.). Under isospin symmetry assumptions, this constitutes the world’s second most precise measurement of R(D), following a 2019 measurement by the Belle collaboration. This analysis complements other ongoing efforts at LHCb and other experiments to test LFU across different decay channels. The precision of the measurements reported here is primarily limited by the size of the signal and control samples, so more precise measurements are expected with future LHCb datasets.
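
For comparison with other experiments, the statistical and systematic components are conventionally combined in quadrature, under the assumption that they are independent:

```python
import math

def total_uncertainty(stat: float, syst: float) -> float:
    """Combine independent statistical and systematic errors in quadrature."""
    return math.hypot(stat, syst)

print(round(total_uncertainty(0.043, 0.047), 3))  # 0.064, total for R(D+)
print(round(total_uncertainty(0.081, 0.085), 3))  # 0.117, total for R(D*+)
```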

Rapid developments in precision predictions https://cerncourier.com/a/rapid-developments-in-precision-predictions/ Fri, 24 Jan 2025 15:57:39 +0000 https://cerncourier.com/?p=112358 Achieving a theoretical uncertainty of only a few per cent in the measurement of physical observables is a vastly challenging task in the complex environment of hadronic collisions.

The post Rapid developments in precision predictions appeared first on CERN Courier.

High Precision for Hard Processes in Turin

Achieving a theoretical uncertainty of only a few per cent in the measurement of physical observables is a vastly challenging task in the complex environment of hadronic collisions. To keep pace with experimental observations at the LHC and elsewhere, precision computing has had to develop rapidly in recent years – efforts that have been monitored and driven by the biennial High Precision for Hard Processes (HP2) conference for almost two decades now. The latest edition attracted 120 participants to the University of Torino from 10 to 13 September 2024.

All speakers addressed the same basic question: how can we achieve the most precise theoretical description for a wide variety of scattering processes at colliders?

The recipe for precise prediction involves many ingredients, so the talks in Torino probed several research directions. Advanced methods for the calculation of scattering amplitudes were discussed, among others, by Stephen Jones (IPPP Durham). These methods can be applied to detailed high-order phenomenological calculations for QCD, electroweak processes and BSM physics, as illustrated by Ramona Groeber (Padua) and Eleni Vryonidou (Manchester). Progress in parton showers – a crucial tool to bridge amplitude calculations and experimental results – was presented by Silvia Ferrario Ravasio (CERN). Dedicated methods to deal with the delicate issue of infrared divergences in high-order cross-section calculations were reviewed by Chiara Signorile-Signorile (Max Planck Institute, Munich).

The Torino conference was dedicated to the memory of Stefano Catani, a towering figure in the field of high-energy physics, who suddenly passed away at the beginning of this year. Starting from the early 1980s, and for the whole of his career, Catani made groundbreaking contributions in every facet of HP2. He was an inspiration to a whole generation of physicists working in high-energy phenomenology. We remember him as a generous and kind person, and a scientist of great rigour and vision. He will be sorely missed.

AI treatments for stroke survivors https://cerncourier.com/a/ai-treatments-for-stroke-survivors/ Fri, 24 Jan 2025 15:52:08 +0000 https://cerncourier.com/?p=112345 Data on strokes is plentiful but fragmented, making it difficult to exploit in data-driven treatment strategies.

The post AI treatments for stroke survivors appeared first on CERN Courier.

Data on strokes is plentiful but fragmented, making it difficult to exploit in data-driven treatment strategies. The toolbox of the high-energy physicist is well adapted to the task. To amplify CERN’s societal contributions through technological innovation, the Unleashing a Comprehensive, Holistic and Patient-Centric Stroke Management for a Better, Rapid, Advanced and Personalised Stroke Diagnosis, Treatment and Outcome Prediction (UMBRELLA) project – co-led by Vall d’Hebron Research Institute and Siemens Healthineers – was officially launched on 1 October 2024. The kickoff meeting in Barcelona, Spain, convened more than 20 partners, including Philips, AstraZeneca, KU Leuven and EATRIS. Backed by nearly €27 million from the EU’s Innovative Health Initiative and industry collaborators, the project aims to transform stroke care across Europe.

The meeting highlighted the urgent need to address stroke as a pressing health challenge in Europe. Each year, more than one million acute stroke cases occur in Europe, with nearly 10 million survivors facing long-term consequences. In 2017, the economic burden of stroke treatments was estimated to be €60 billion – a figure that continues to grow. UMBRELLA’s partners outlined their collective ambition to translate a vast and fragmented stroke data set into actionable care innovations through standardisation and integration.

UMBRELLA will utilise advanced digital technologies to develop AI-powered predictive models for stroke management. By standardising real-world stroke data and leveraging tools like imaging technologies, wearable devices and virtual rehabilitation platforms, UMBRELLA aims to refine every stage of care – from diagnosis to recovery. Based on post-stroke data, AI-driven insights will empower clinicians to uncover root causes of strokes, improve treatment precision and predict patient outcomes, reshaping how stroke care is delivered.

Central to this effort is the integration of CERN’s federated-learning platform, CAFEIN. A decentralised approach to training machine-learning algorithms without exchanging data, it was initiated thanks to seed funding from CERN’s knowledge-transfer budget for the benefit of medical applications: now CAFEIN promises to enhance diagnosis, treatment and prevention strategies for stroke victims, ultimately saving countless lives. A main topic of the kickoff meeting was the development of the “U-platform” – a federated data ecosystem co-designed by Siemens Healthineers and CERN. Based on CAFEIN, the infrastructure will enable the secure and privacy-preserving training of advanced AI algorithms for personalised stroke diagnostics, risk prediction and treatment decisions without sharing sensitive patient data between institutions. Building on CERN’s expertise, including its success in federated AI modelling for brain pathologies under the EU TRUSTroke project, the CAFEIN team is poised to handle the increasing complexity and scale of the data sets required by UMBRELLA.
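
CAFEIN’s internal protocol is not described here, but the core idea of federated learning can be sketched in a few lines: each site trains on its own data and ships only model parameters to a central server, which averages them weighted by local sample counts (the classic FedAvg rule). Everything below is a generic illustration, not CAFEIN’s actual implementation:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg: weighted mean of per-site parameter vectors.
    Only these vectors -- never the patient data -- leave each site."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two hospitals contributing 300 and 100 local samples:
global_model = federated_average([[1.0, 2.0], [3.0, 6.0]], [300, 100])
print(global_model)  # [1.5, 3.0]
```

The privacy benefit comes from what is *not* transmitted: raw records stay behind each institution’s firewall, and only aggregated parameters cross it.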

Beyond technological advancements, the UMBRELLA consortium discussed a plan to establish standardised protocols for acute stroke management, with an emphasis on integrating these protocols into European healthcare guidelines. By improving data collection and facilitating outcome predictions, these standards will particularly benefit patients in remote and underserved regions. The project also aims to advance research into the causes of strokes, a quarter of which remain undetermined – a statistic UMBRELLA seeks to change.

This ambitious initiative not only showcases CERN’s role in pioneering federated-learning technologies but also underscores the broader societal benefits brought by basic science. By pushing technologies beyond the state-of-the-art, CERN and other particle-physics laboratories have fuelled innovations that have an impact on our everyday lives. As UMBRELLA begins its journey, its success holds the potential to redefine stroke care, delivering life-saving advancements to millions and paving the way for a healthier, more equitable future.

Dark matter: evidence, theory and constraints https://cerncourier.com/a/dark-matter-evidence-theory-and-constraints/ Fri, 24 Jan 2025 15:49:47 +0000 https://cerncourier.com/?p=112278 Dark Matter: Evidence, Theory and Constraints will be useful to those who wish to broaden or extend their research interests, for instance to a different dark-matter candidate.

The post Dark matter: evidence, theory and constraints appeared first on CERN Courier.

Dark Matter: Evidence, Theory and Constraints

Cold non-baryonic dark matter appears to make up 85% of the matter and 25% of the energy in our universe. However, we don’t yet know what it is. As the opening lines of many research proposals state, “The nature of dark matter is one of the major open questions in physics.”

The evidence for dark matter comes from astronomical and cosmological observations. Theoretical particle physics provides us with various well-motivated candidates, such as weakly interacting massive particles (WIMPs), axions and primordial black holes. Each has different experimental and observational signatures, and a wide range of searches are taking place. Dark-matter research spans a very broad range of topics and methods. This makes it a challenging research field to enter and master. Dark Matter: Evidence, Theory and Constraints by David Marsh, David Ellis and Viraf Mehta, the latest addition to the Princeton Series in Astrophysics, clearly presents the relevant essentials of all of these areas.

The book starts with a brief history of dark matter and some warm-up calculations involving units. Part one outlines the evidence for dark matter, on scales ranging from individual galaxies to the entire universe. It compactly summarises the essential background material, including cosmological perturbation theory.

Part two focuses on theories of dark matter. After an overview of the Standard Model of particle physics, it covers three candidates with very different motivations, properties and phenomenology: WIMPs, axions and primordial black holes. Part three then covers both direct and indirect searches for these candidates. I particularly like the schematic illustrations of experiments; they should be helpful for theorists who want to (and should!) understand the essentials of experimental searches.

The main content finishes with a brief overview of other dark-matter candidates. Some of these arguably merit more extensive coverage, in particular sterile neutrinos. The book ends with extensive recommendations for further reading, including textbooks, review papers and key research papers.

Dark-matter research spans a broad range of topics and methods, making it a challenging field to master

The one thing I would argue with is the claim in the introduction that dark matter has already been discovered. I agree with the authors that the evidence for dark matter is strong and currently cannot all be explained by modified gravity theories. However, given that all of the evidence for dark matter comes from its gravitational effects, I’m open to the possibility that our understanding of gravity is incorrect or incomplete. The authors are also more positive than I am about the prospects for dark-matter detection in the near future, claiming that we will soon know which dark-matter candidates exist “in the real pantheon of nature”. Optimism is a good thing, but this is a promise that dark-matter researchers (myself included…) have now been making for several decades.

The conversational writing style is engaging and easy to read. The annotation of equations with explanatory text is novel and helpful, and the inclusion of numerous diagrams – simple and illustrative where possible and complex when called for – aids understanding. The attention to detail is impressive. I reviewed a draft copy for the publishers, and all of my comments and suggestions have been addressed in detail.

This book will be extremely useful to newcomers to the field, and I recommend it strongly to PhD students and undergraduate research students. It is particularly well suited as a companion to a lecture course, with numerous quizzes, problems and online materials, including numerical calculations and plots using Jupyter notebooks. It will also be useful to those who wish to broaden or extend their research interests, for instance to a different dark-matter candidate.

The post Dark matter: evidence, theory and constraints appeared first on CERN Courier.

]]>
Review Dark Matter: Evidence, Theory and Constraints will be useful to those who wish to broaden or extend their research interests, for instance to a different dark-matter candidate. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_REV-dark_feature.jpg
The B’s Ke+e–s https://cerncourier.com/a/the-bs-kee-s/ Fri, 24 Jan 2025 15:45:52 +0000 https://cerncourier.com/?p=112331 The Implications of LHCb measurements and future prospects workshop drew together more than 200 theorists and experimentalists from across the world.

The post The B’s Ke<sup>+</sup>e<sup>–</sup>s appeared first on CERN Courier.

]]>
The Implications of LHCb measurements and future prospects workshop drew together more than 200 theorists and experimentalists from across the world to CERN from 23 to 25 October 2024. Patrick Koppenburg (Nikhef) began the meeting by looking back 10 years, when three- and four-sigma anomalies abounded: the inclusive/exclusive puzzles; the illuminatingly named P5′ observable; and the lepton-universality ratios for rare B decays. While LHCb measurements have mostly eliminated the anomalies seen in the lepton-universality ratios, many of the other anomalies persist – most notably, the corresponding branching fractions for rare B-meson decays still appear to be suppressed significantly below Standard Model (SM) theory predictions. Sara Celani (Heidelberg) reinforced this picture with new results for Bs → φμ+μ– and Bs → φe+e–, showing the continued importance of new-physics searches in these modes.

Changing flavour

The discussion on rare B decays continued in the session on flavour-changing neutral currents. With new lattice-QCD results pinning down short-distance local hadronic contributions, the discussion focused on understanding the long-distance contributions arising from hadronic resonances and charm rescattering. Arianna Tinari (Zurich) and Martin Hoferichter (Bern) judged the latter not to be dramatic in magnitude. Lakshan Madhan (Cambridge) presented a new amplitude analysis in which the long- and short-distance contributions are separated via the kinematic dependence of the decay amplitudes. New theoretical analyses of the nonlocal form factors for B → K(*)μ+μ– and B → K(*)e+e– were representative of the workshop as a whole: truly the bee’s knees.

Another challenge to accurate theory predictions for rare decays, the widths of vector final states, snuck its way into the flavour-changing charged-currents session, where Luka Leskovec (Ljubljana) presented a comprehensive overview of lattice methods for decays to resonances. Leskovec’s optimistic outlook for semileptonic decays with two mesons in the final state stood in contrast to prospects for applying lattice methods to D-D mixing: such studies are currently limited to the SU(3)-flavour symmetric point of equal light-quark masses, explained Felix Erben (CERN), though he offered a glimmer of hope in the form of spectral reconstruction methods currently under development.

LHCb’s beauty and charm physics programme reported substantial progress. Novel techniques have been implemented in the most recent CP-violation studies, potentially leading to an impressive uncertainty of just 1° in future measurements of the CKM angle γ. LHCb has recently placed a special emphasis on beauty and charm baryons, where the experiment offers unique capabilities to perform many interesting measurements ranging from CP violation to searches for very rare decays and their form factors. Going from three quarks to four and five, the spectroscopy session illustrated the rich and complex debate around tetraquark and pentaquark states, with a big open discussion on the underlying structure of the 20 or so states discovered at LHCb: which are bound states of quarks and which are simply meson molecules? (CERN Courier November/December 2024 p26 and p33.)

LHCb’s ability to do unique physics was further highlighted in the QCD, electroweak (EW) and exotica session, where the collaboration has shown the most recent publicly available measurement of the weak-mixing angle in conjunction with W/Z-boson production cross-sections and other EW observables. LHCb have put an emphasis on combined QCD + QED and effective-field-theory calculations, and the interplay between EW precision observables and new-physics effects in couplings to the third generation. By studying phase space inaccessible to any other experiment, a study of hypothetical dark photons decaying to electrons showed the LHCb experiment to be a unique environment for direct searches for long-lived and low-mass particles.

Attendees left the workshop with a fresh perspective

Parallel to Implications 2024, the inaugural LHCb Open Data and Ntuple Wizard Workshop took place on 22 October as a satellite event, providing theorists and phenomenologists with a first look at a novel software application for on-demand access to custom ntuples from the experiment’s open data. The LHCb Ntupling Service will offer a step-by-step wizard for requesting custom ntuples and a dashboard to monitor the status of requests, communicate with the LHCb open data team and retrieve data. The beta version was released at the workshop in advance of the anticipated public release of the application in 2025, which promises open access to LHCb’s Run 2 dataset for the first time.

A recurring satellite event features lectures by theorists on topics following LHCb’s scientific output. This year, Simon Kuberski (CERN) and Saša Prelovšek (Ljubljana) took the audience on a guided tour through lattice QCD and spectroscopy.

With LHCb’s integrated luminosity in 2024 exceeding all previous years combined, excitement was heightened. Attendees left the workshop with a fresh perspective on how to approach the challenges faced by our community.

The post The B’s Ke<sup>+</sup>e<sup>–</sup>s appeared first on CERN Courier.

]]>
Meeting report The Implications of LHCb measurements and future prospects workshop drew together more than 200 theorists and experimentalists from across the world. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_FN_bees.jpg
From spinors to supersymmetry https://cerncourier.com/a/from-spinors-to-supersymmetry/ Fri, 24 Jan 2025 15:34:01 +0000 https://cerncourier.com/?p=112273 In their new book, From Spinors to Supersymmetry, Herbi Dreiner, Howard Haber and Stephen Martin describe the two-component formalism of spinors and its applications to particle physics, quantum field theory and supersymmetry.

The post From spinors to supersymmetry appeared first on CERN Courier.

]]>
From Spinors to Supersymmetry

This text is a hefty volume of around 1000 pages describing the two-component formalism of spinors and its applications to particle physics, quantum field theory and supersymmetry. The authors, Herbi Dreiner, Howard Haber and Stephen Martin, are household names in particle-physics phenomenology, with many original contributions to the topics covered in the book. Haber is also well known at CERN as a co-author of the legendary Higgs Hunter’s Guide (Perseus Books, 1990), a book that most collider physicists of the pre- and early-LHC eras are very familiar with.

The book starts with a 250-page introduction (chapters one to five) to the Standard Model (SM), covering more or less the theory material that one finds in standard advanced textbooks. The emphasis is on the theoretical side, with no discussion on experimental results, providing a succinct discussion of topics ranging from how to obtain Feynman rules to anomaly-cancellation calculations. In chapter six, extensions of the SM are discussed, starting with the seesaw-extended SM, moving on to a very detailed exposition of the two-Higgs-doublet model and finishing with grand unification theories (GUTs).

The second part of the book (from chapter seven onwards) is about supersymmetry in general. It begins with an accessible introduction that is also applicable to other beyond-SM-physics scenarios. This gentle and very pedagogical pattern continues to chapter eight, before proceeding to a more demanding discussion of the supersymmetry algebra in chapter nine. Superfields, supersymmetric radiative corrections and supersymmetry breaking, which are discussed in the subsequent chapters, are more advanced topics that will be of interest to specialists in these areas.

The third part (chapter 13 onwards) discusses realistic supersymmetric models starting from the minimal supersymmetric SM (MSSM). After some preliminaries, chapter 15 provides a general presentation of MSSM phenomenology, discussing signatures relevant for proton–proton and electron–positron collisions, as well as direct dark-matter searches. A short discussion on beyond-MSSM scenarios is given in chapter 16, including NMSSM, seesaw, GUTs and R-parity violating theories. Phenomenological implications, for example their impact on proton decay, are also discussed.

Part four includes basic Feynman diagram calculations in the SM and MSSM using two-component spinor formalism. Starting from very simple tree-level SM processes, like Bhabha scattering and Z-boson decays, it proceeds with tree-level supersymmetric processes, standard one-loop calculations and their supersymmetric counterparts, and Higgs-boson mass corrections. The presentation of this is very practical and useful for those who want to see how to perform easy calculations in SM or MSSM using two-component spinor formalism. The material is accessible and detailed enough to be used for teaching master’s or graduate-level students.

A valuable resource for all those who are interested in the extensions of the SM, especially if they include supersymmetry

The book finishes with almost 200 pages of appendices covering all sorts of useful topics, from notation to commonly used identity lists and group theory.

The book requires some familiarity with master’s-level particle-physics concepts, for example via Halzen and Martin’s Quarks and Leptons or Paganini’s Fundamentals of Particle Physics. Some familiarity with quantum field theory is helpful but not needed for large parts of the book. No effort is made to be brief: two-component spinor formalism is discussed in all its detail in a very pedagogic and clear way. Parts two and three are a significant enhancement to the well-known A Supersymmetry Primer (arXiv:hep-ph/9709356), written by Stephen Martin, one of the authors of this volume, and very popular among beginners to supersymmetry. A rich collection of exercises is included in every chapter, and the appendix chapters are no exception.

Do not let the word supersymmetry in the title fool you: even if you are not interested in supersymmetric extensions you can find a detailed exposition on two-component formalism for spinors, SM calculations with this formalism and a detailed discussion on how to design extensions of the scalar sector of the SM. Chapter three is particularly useful, describing in 54 pages how to get from the two-component to the four-component spinor formalism that is more familiar to many of us.

This is a book for advanced graduate students and researchers in particle-physics phenomenology, which nevertheless contains much that will be of interest to advanced physics students and particle-physics researchers in both theory and experiment. This is because the size of the volume allows the authors to start from the basics and dwell on topics that most other books of this type cover in less detail, making them less accessible. I expect that Dreiner, Haber and Martin will become a valuable resource for all those who are interested in the extensions of the SM, especially if they include supersymmetry.

The post From spinors to supersymmetry appeared first on CERN Courier.

]]>
Review In their new book, From Spinors to Supersymmetry, Herbi Dreiner, Howard Haber and Stephen Martin describe the two-component formalism of spinors and its applications to particle physics, quantum field theory and supersymmetry. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_REV-spinors_feature.jpg
Intensely focused on physics https://cerncourier.com/a/intensely-focused-on-physics/ Fri, 24 Jan 2025 15:32:54 +0000 https://cerncourier.com/?p=112269 The High Luminosity Large Hadron Collider, edited by Oliver Brüning and Lucio Rossi, provides a comprehensive review of an upgrade project designed to boost the total event statistics of the LHC by nearly an order of magnitude.

The post Intensely focused on physics appeared first on CERN Courier.

]]>
The High Luminosity Large Hadron Collider, edited by Oliver Brüning and Lucio Rossi, is a comprehensive review of an upgrade project, the HL-LHC, designed to boost the total event statistics of CERN’s Large Hadron Collider (LHC) by nearly an order of magnitude. The LHC is the world’s largest and, in many respects, most performant particle accelerator. It may well represent the most complex infrastructure ever built for scientific research. The increase in event rate is achieved by higher beam intensities and smaller beam sizes at the collision points.

Brüning and Rossi’s book offers a comprehensive overview of this work across 31 chapters authored by more than 150 contributors. Given the complexity of the HL-LHC, it is advisable to read the excellent introductory chapter first to obtain an overview of the various physics aspects, components and project structure. After coverage of the physics case and the upgrades to the LHC experiments, the operational experiences with the LHC and its performance development are described.

The LHC’s upgrade is a significant project, as evidenced by the involvement of nine collaborating countries including China and the US, a materials budget that exceeds one billion Swiss francs, more than 2200 years of integrated work, and the complexity of the physics and engineering. The safe operation of the enormous beam intensity represented a major challenge for the original LHC, and will be even more challenging with the upgraded beam parameters. For example, the instantaneous power carried by the circulating beam will be 7.6 TW, while the total beam energy is then 680 MJ – enough energy to boil two tonnes of water. Such numbers should be compared with the extremely low power density of 30 mW/cm3, which is sufficient to quench a superconducting magnet coil and interrupt the operation of the entire facility.
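These figures are easy to sanity-check. The short sketch below reproduces both numbers, assuming a specific heat for water of 4186 J/(kg·K) and an LHC revolution frequency of roughly 11.245 kHz; neither value appears in the text, and "boil" is read as bringing room-temperature water to 100 °C, not vaporising it.

```python
# Back-of-the-envelope check of the beam-energy figures quoted above.
# Assumed inputs (not from the article): specific heat of water,
# LHC revolution frequency, starting water temperature of ~20 degC.

SPECIFIC_HEAT_WATER = 4186        # J/(kg K)
mass = 2000                       # kg, two tonnes of water
delta_t = 80                      # K, from ~20 degC to boiling at 100 degC

energy_to_boil = mass * SPECIFIC_HEAT_WATER * delta_t
print(f"Energy to bring 2 t of water to the boil: {energy_to_boil/1e6:.0f} MJ")
# comes out near the quoted 680 MJ stored beam energy

beam_energy = 680e6               # J, total stored beam energy
rev_freq = 11245                  # Hz, assumed LHC revolution frequency
power = beam_energy * rev_freq    # energy passing any point per second
print(f"Instantaneous beam power: {power/1e12:.1f} TW")
# matches the 7.6 TW figure in the text
```

The "instantaneous power" is simply the stored energy multiplied by how many times per second the beam sweeps past a given point of the ring.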

The book continues with descriptions of the two subsystems of greatest importance for the luminosity increase: the superconducting magnets and the RF systems including the crab cavities.

The High Luminosity Large Hadron Collider

Besides the increase in intensity, the primary factor for instantaneous luminosity gain is obtained by a reduction in beam size at the interaction points (IPs), partly through a smaller emittance but mainly through improved beam optics. This change results in a larger beam in the superconducting quadrupoles beside the IP. To accommodate the upgraded beam and to shield the magnet coils from radiation, the aperture of these magnets is increased by more than a factor of two to 150 mm. New quadrupoles have been developed, utilising the superconductor material Nb3Sn, allowing higher fields at the location of the coils. Further measures include the cancellation of the beam crossing angle during collision by dynamic tilting of the bunch orientation using the superconducting crab cavities that were designed for this special application in the LHC. The authors make fascinating observations, for example regarding the enhanced sensitivity to errors due to the extreme beam demagnification at the IPs: a typical relative error of 10⁻⁴ in the strength of the IP quadrupoles results in a significant distortion in beam optics, a so-called beta-beat of 7%.

Chapter eight describes the upgrade to the beam-collimation system, which is of particular importance for the safe operation of high-intensity beams. For ion collimation, halo particles are extracted most efficiently using collimators made from bent crystals.

The book continues with a description of the magnet-powering circuits. For the new superconducting magnets CERN is using “superconducting links” for the first time: cable sets made of a high-temperature superconductor that can carry enormous currents on many circuits in parallel in a small cross section; it suffices to cool them to temperatures of around 20 to 30 K with gaseous helium by evaporating some of the liquid helium that is used for cooling the superconducting magnets in the accelerator.

Magnetic efforts

The next chapters cover machine protection, the interface with the detectors and the cryogenic system. Chapter 15 is dedicated to the effects of beam-induced stray radiation, in particular on electronics – an effect that has become quite important at high intensities in recent years. Another chapter covers the development of an 11 T dipole magnet that was intended to replace a regular superconducting magnet, thereby gaining space for additional collimators in the arc of the ring. Despite considerable effort, this programme was eventually dropped from the project because the new magnet technology could not be mastered with the required reliability for routine operation and, most importantly, alternative collimation solutions were identified.

Other chapters describe virtually all the remaining technical subsystems and beam-dynamics aspects of the collider, as well as the extensive test infrastructure required before installation in the LHC. A whole chapter is dedicated to high-field-magnet R&D – a field of utmost importance to the development of a next-generation hadron collider beyond the LHC.

Brüning and Rossi’s book will interest accelerator physicists in that it describes many outstanding beam-physics aspects of the HL-LHC. Engineers and readers with an interest in technology will also find many technical details on its subsystems.

The post Intensely focused on physics appeared first on CERN Courier.

]]>
Review The High Luminosity Large Hadron Collider, edited by Oliver Brüning and Lucio Rossi, provides a comprehensive review of an upgrade project designed to boost the total event statistics of the LHC by nearly an order of magnitude. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_REV-testing.jpg
Open-science cloud takes shape in Berlin https://cerncourier.com/a/open-science-cloud-takes-shape-in-berlin/ Fri, 24 Jan 2025 15:16:54 +0000 https://cerncourier.com/?p=112354 Findable, Accessible, Interoperable and Reusable: the sixth symposium of the European Open Science Cloud (EOSC) attracted over 1,000 participants.

The post Open-science cloud takes shape in Berlin appeared first on CERN Courier.

]]>
Findable. Accessible. Interoperable. Reusable. That’s the dream scenario for scientific data and tools. The European Open Science Cloud (EOSC) is a pan-European initiative to develop a web of “FAIR” data services across all scientific fields. EOSC’s vision is to put in place a system for researchers in Europe to store, share, process, analyse and reuse research outputs such as data, publications and software across disciplines and borders.

EOSC’s sixth symposium attracted 450 delegates to Berlin from 21 to 23 October 2024, with a further 900 participating online. Since its launch in 2017, EOSC activities have focused on conceptualisation, prototyping and planning. In order to develop a trusted federation of research data and services for research and innovation, EOSC is being deployed as a network of nodes. With the launch during the symposium of the EOSC EU node, this year marked a transition from design to deployment.

While EOSC is a flagship science initiative of the European Commission, FAIR concerns researchers and stakeholders globally. Via the multiple projects under the wings of EOSC that collaborate with software and data institutes around the world, a pan-European effort can be made to ensure a research landscape that encourages knowledge sharing while recognising work and training the next generation in best practices in research. The EU node – funded by the European Commission, and the first to be implemented – will serve as a reference for roughly 10 additional nodes to be deployed in a first wave, with more to follow. They are accessible using any institutional credentials based on GÉANT’s MyAccess or with an EU login. A first operational implementation of the EOSC Federation is expected by the end of 2025.

A thematic focus of this year’s symposium was the need for clear guidelines on the adaptation of FAIR governance for artificial intelligence (AI), which relies on the accessibility of large and high-quality datasets. It is often the case that AI models are trained with synthetic data, large-scale simulations and first-principles mathematical models, although these may only provide an incomplete description of complex and highly nonlinear real-world phenomena. Once AI models are calibrated against experimental data, their predictions become increasingly accurate. Adopting FAIR principles for the production, collection and curation of scientific datasets will streamline the design, training, validation and testing of AI models (see, for example, Y Chen et al. 2021 arXiv:2108.02214).

EOSC includes five science clusters, from natural sciences to social sciences, with a dedicated cluster for particle physics and astronomy called ESCAPE: the European Science Cluster of Astronomy and Particle Physics. The future deployment of the ESCAPE Virtual Research Environment across multiple nodes will provide users with tools to bring together diverse experimental results, for example, in the search for evidence of dark matter, and to perform new analyses incorporating data from complementary searches.

The post Open-science cloud takes shape in Berlin appeared first on CERN Courier.

]]>
Meeting report Findable, Accessible, Interoperable and Reusable: the sixth symposium of the European Open Science Cloud (EOSC) attracted over 1,000 participants. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_FN_cloud.jpg
First signs of antihyperhelium-4 https://cerncourier.com/a/first-signs-of-antihyperhelium-4/ Fri, 24 Jan 2025 14:58:40 +0000 https://cerncourier.com/?p=112209 Hypernuclei remain a source of fascination due to their rarity in nature and the challenge of creating and studying them in the lab.

The post First signs of antihyperhelium-4 appeared first on CERN Courier.

]]>
Heavy-ion collisions at the LHC create suitable conditions for the production of atomic nuclei and exotic hypernuclei, as well as their antimatter counterparts, antinuclei and antihypernuclei. Measurements of these forms of matter are important for understanding the formation of hadrons from the quark–gluon plasma and studying the matter–antimatter asymmetry seen in the present-day universe.

Hypernuclei are exotic nuclei formed by a mix of protons, neutrons and hyperons, the latter being unstable particles containing one or more strange quarks. More than 70 years since their discovery in cosmic rays, hypernuclei remain a source of fascination for physicists due to their rarity in nature and the challenge of creating and studying them in the laboratory.

In heavy-ion collisions, hypernuclei are created in significant quantities, but only the lightest hypernucleus, hypertriton, and its antimatter partner, antihypertriton, have been observed. Hypertriton is composed of a proton, a neutron and a lambda hyperon containing one strange quark. Antihypertriton is made up of an antiproton, an antineutron and an antilambda.

Following hot on the heels of the observation of antihyperhydrogen-4 (a bound state of an antiproton, two antineutrons and an antilambda) earlier this year by the STAR collaboration at the Relativistic Heavy Ion Collider (RHIC), the ALICE collaboration at the LHC has now seen the first ever evidence for antihyperhelium-4, which is composed of two antiprotons, an antineutron and an antilambda. The result has a significance of 3.5 standard deviations. If confirmed, antihyperhelium-4 would be the heaviest antimatter hypernucleus yet seen at the LHC.

Hypernuclei remain a source of fascination due to their rarity in nature and the challenge of creating and studying them in the lab

The ALICE measurement is based on lead–lead collision data taken in 2018 at a centre-of-mass energy of 5.02 TeV for each colliding pair of nucleons, be they protons or neutrons. Using a machine-learning technique that outperforms conventional hypernuclei search techniques, the ALICE researchers looked at the data for signals of hyperhydrogen-4, hyperhelium-4 and their antimatter partners. Candidates for (anti)hyperhydrogen-4 were identified by looking for the (anti)helium-4 nucleus and the charged pion into which it decays, whereas candidates for (anti)hyperhelium-4 were identified via its decay into an (anti)helium-3 nucleus, an (anti)proton and a charged pion.

In addition to finding evidence of antihyperhelium-4 with a significance of 3.5 standard deviations, and evidence of antihyperhydrogen-4 with a significance of 4.5 standard deviations, the ALICE team measured the production yields and masses of both hypernuclei.

For both hypernuclei, the measured masses are compatible with the current world-average values. The measured production yields were compared with predictions from the statistical hadronisation model, which provides a good description of the formation of hadrons and nuclei in heavy-ion collisions. This comparison shows that the model’s predictions agree closely with the data if both excited hypernuclear states and ground states are included in the predictions. The results confirm that the statistical hadronisation model can also provide a good description of the production of hypernuclei modelled to be compact objects with sizes of around 2 femtometres.

The researchers also determined the antiparticle-to-particle yield ratios for both hypernuclei and found that they agree with unity within the experimental uncertainties. This agreement is consistent with ALICE’s observation of the equal production of matter and antimatter at LHC energies and adds to the ongoing research into the matter–antimatter imbalance in the universe.

The post First signs of antihyperhelium-4 appeared first on CERN Courier.

]]>
News Hypernuclei remain a source of fascination due to their rarity in nature and the challenge of creating and studying them in the lab. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_NA_antihelium.jpg
Tsung-Dao Lee 1926–2024 https://cerncourier.com/a/tsung-dao-lee-1926-2024/ Fri, 24 Jan 2025 14:49:16 +0000 https://cerncourier.com/?p=112292 On 4 August 2024, Tsung-Dao Lee passed away at his home in San Francisco, aged 97.

The post Tsung-Dao Lee 1926–2024 appeared first on CERN Courier.

]]>
On 4 August 2024, the great physicist Tsung-Dao Lee (also known as T D Lee) passed away at his home in San Francisco, aged 97.

Born in 1926 to an intellectual family in Shanghai, Lee’s education was disrupted several times by the war against Japan. He neither completed high school nor graduated from university. In 1943, however, he took the national entrance exam and, with outstanding scores, was admitted to the chemical engineering department of Zhejiang University. He then transferred to the physics department of Southwest Associated University, a temporary wartime merger of Peking, Tsinghua and Nankai universities. In the autumn of 1946, under the recommendation of Ta-You Wu, Lee went to study at the University of Chicago under the supervision of Enrico Fermi, earning his PhD in June 1950.

From 1950 to 1953 Lee conducted research at the University of Chicago, the University of California, Berkeley and the Institute for Advanced Study in Princeton. During this period, he made significant contributions to particle physics, statistical mechanics, field theory, astrophysics, condensed-matter physics and turbulence theory, demonstrating a wide range of interests and deep insights in several frontiers of physics. In a 1952 paper on turbulence, for example, Lee pointed out a significant difference between fluid dynamics in two-dimensional and three-dimensional spaces, namely, that there is no turbulence in two dimensions. This finding provided essential conditions for John von Neumann’s model, which used supercomputers to simulate weather.

Profound impact

During this period, Lee and Chen-Ning Yang collaborated on two foundational works in statistical physics concerning phase transitions, discovering the famous “unit circle theorem” on lattice gases, which had a profound impact on statistical mechanics and phase-transition theory.

Between 1952 and 1953, during a visit to the University of Illinois at Urbana-Champaign, Lee was inspired by discussions with John Bardeen (winner, with Leon Neil Cooper and John Robert Schrieffer, of the 1972 Nobel Prize in Physics for developing the first successful microscopic theory of superconductivity). Lee applied field-theory methods to study the motion of slow electrons in polar crystals, pioneering the use of field theory to investigate condensed matter systems. According to Schrieffer, Lee’s work directly influenced the development of their “BCS” theory of superconductivity.

In 1953, after taking an assistant professor position at Columbia University, Lee proposed a renormalisable field-theory model, widely known as the “Lee Model,” which had a substantial impact on the study of renormalisation in quantum field theory.

On 1 October 1956, Lee and Yang’s theory of parity non-conservation in weak interactions was published in Physical Review. It was quickly confirmed by the experiments of Chien-Shiung Wu and others, earning Lee and Yang the 1957 Nobel Prize in Physics – one of the fastest recognitions in the history of the Nobel Prize. The discovery of parity violation significantly challenged the established understanding of fundamental physical laws and directly led to the establishment of the universal V–A theory of weak interactions in 1958. It also laid the groundwork for the unified theory of weak and electromagnetic interactions developed a decade later.

In 1957, Lee, Oehme and Yang extended symmetry studies to combined charge–parity (CP) transformations. The CP non-conservation discovered in neutral K-meson decays in 1964 validated the importance of Lee and his colleagues’ theoretical work, as well as the later establishment of CP violation theories. The same year, Lee was appointed the Fermi Professor of Physics at Columbia.

In the 1970s, Lee published papers exploring the origins of CP violation, suggesting that it might stem from spontaneous symmetry breaking in the vacuum and predicting several significant phenomenological consequences. In 1974, Lee and G C Wick investigated whether spontaneously broken symmetries in the vacuum could be partially restored under certain conditions. They found that heavy-ion collisions could achieve this restoration and produce observable effects. This work pioneered the study of the quantum chromodynamics (QCD) vacuum, phase transitions and quark–gluon plasma. It also laid the theoretical and experimental foundation for relativistic heavy-ion collision physics.

From 1982, Lee devoted significant efforts to solving non-perturbative QCD using lattice-QCD methods. Together with Norman Christ and Richard Friedberg, he developed stochastic lattice field theory and promoted first-principle lattice simulations on supercomputers, greatly advancing lattice-QCD research.

Immense respect

In 2011 Lee retired as a professor emeritus from Columbia at the age of 85. In China, he enjoyed immense respect, not only for being the first Chinese scientist (with Chen-Ning Yang) to win a Nobel Prize, but also for enhancing the level of science and education in China and promoting Sino-American collaboration in high-energy physics. This led to the establishment and successful construction of China’s first major high-energy physics facility, the Beijing Electron–Positron Collider (BEPC). At the beginning of this century, Lee supported and personally helped the upgrade of BEPC, the Daya Bay reactor neutrino experiment and other projects. In addition, he initiated and promoted the China–US Physics Examination and Application (CUSPEA) programme, and helped establish the National Natural Science Foundation of China and China’s postdoctoral system.

Tsung-Dao Lee’s contributions to an extraordinarily wide range of fields profoundly shaped humanity’s understanding of the basic laws of the universe.

The post Tsung-Dao Lee 1926–2024 appeared first on CERN Courier.

]]>
News On 4 August 2024, Tsung-Dao Lee passed away at his home in San Francisco, aged 97. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_OBITS-Lee.jpg
Robert Aymar 1936–2024 https://cerncourier.com/a/robert-aymar-1936-2024-3/ Fri, 24 Jan 2025 14:48:14 +0000 https://cerncourier.com/?p=112305 Robert Aymar, CERN Director-General from January 2004 to December 2008, passed away on 23 September at the age of 88.

The post Robert Aymar 1936–2024 appeared first on CERN Courier.

]]>
Robert Aymar, CERN Director-General from January 2004 to December 2008, passed away on 23 September at the age of 88. An inspirational leader in big-science projects for several decades, including the International Thermonuclear Experimental Reactor (ITER), Aymar saw his term of office at CERN marked by the completion of construction and the first commissioning of the Large Hadron Collider (LHC). His experience of complex industrial projects proved to be crucial, as the CERN teams had to overcome numerous challenges linked to the LHC’s innovative technologies and their industrial production.

Robert Aymar was educated at École Polytechnique in Paris. He started his career in plasma physics at the Commissariat à l’Énergie Atomique (CEA), since renamed the Commissariat à l’Énergie Atomique et aux Énergies Alternatives, at the time when thermonuclear fusion was declassified and research started on its application to energy production. After being involved in several studies at CEA, Aymar contributed to the design of the Joint European Torus, the European tokamak project based on conventional magnet technology, built in Culham, UK in the late 1970s. In the same period, CEA was considering a compact tokamak project based on superconducting magnet technology, for which Aymar decided to use pressurised superfluid helium cooling – a technology then recently developed by Gérard Claudet and his team at CEA Grenoble. Aymar was naturally appointed head of the Tore Supra tokamak project, built at CEA Cadarache from 1977 to 1988. The successful project served inter alia as an industrial-sized demonstrator of superfluid helium cryogenics, which became a key technology of the LHC.

As head of the Département des Sciences de la Matière at CEA from 1990 to 1994, Aymar set out to bring together the physics of the infinitely large and the infinitely small, as well as the associated instrumentation, in a department that has now become the Institut de Recherche sur les Lois Fondamentales de l’Univers. In that position, he actively supported CEA–CERN collaboration agreements on R&D for the LHC and served on many national and international committees. In 1993 he chaired the LHC external review committee, whose recommendation proved decisive in the project’s approval. From 1994 to 2003 he led the ITER engineering design activities under the auspices of the International Atomic Energy Agency, establishing the basic design and validity of the project that would be approved for construction in 2006. In 2001, the CERN Council called on his expertise once again by entrusting him to chair the external review committee for CERN’s activities.

When Robert Aymar took over as Director-General of CERN in 2004, the construction of the LHC was well under way. But there were many industrial and financial challenges, and a few production crises still to overcome. During his tenure, which saw the ramp-up, series production and installation of major components, the machine was completed and the first beams circulated. That first start-up in 2008 was followed by a major technical problem that led to a shutdown lasting several months. But the LHC had demonstrated that it could run, and in 2009 the machine was successfully restarted. Aymar’s term of office also saw a simplification of CERN’s structure and procedures, aimed at making the laboratory more efficient. He also set about reducing costs and secured additional funding to complete the construction and optimise the operation of the LHC. After retirement, he remained active as a scientific advisor to the head of the CEA, occasionally visiting CERN and the ITER construction site in Cadarache.

Robert Aymar was a dedicated and demanding leader, with a strong drive and search for pragmatic solutions in the activities he undertook or supervised. CERN and the LHC project owe much to his efforts. He was also a man of culture with a marked interest in history. It was a privilege to serve under his direction.

The post Robert Aymar 1936–2024 appeared first on CERN Courier.

]]>
News Robert Aymar, CERN Director-General from January 2004 to December 2008, passed away on 23 September at the age of 88. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_OBITS-Aymar.jpg
James D Bjorken 1934–2024 https://cerncourier.com/a/james-d-bjorken-1934-2024/ Fri, 24 Jan 2025 14:45:41 +0000 https://cerncourier.com/?p=112296 Theoretical physicist James D Bjorken, whose work played a key role in revealing the existence of quarks, passed away on 6 August aged 90.

The post James D Bjorken 1934–2024 appeared first on CERN Courier.

]]>
James Bjorken

Theoretical physicist James D “BJ” Bjorken, whose work played a key role in revealing the existence of quarks, passed away on 6 August aged 90. Part of a wave of young physicists who came to Stanford in the mid-1950s, Bjorken also made important contributions to the design of experiments and the efficient operation of accelerators.

Born in Chicago on 22 June 1934, James Daniel Bjorken grew up in Park Ridge, Illinois, where he was drawn to mathematics and chemistry. His father, who had immigrated from Sweden in 1923, was an electrical engineer who repaired industrial motors and generators. After earning a bachelor’s degree at MIT, he went to Stanford University as a graduate student in 1956. He was one of half a dozen MIT physicists, including his adviser Sidney Drell and future director of the SLAC National Accelerator Laboratory Burton Richter, who were drawn by new facilities on the Stanford campus. These included an early linear accelerator that scattered electrons off targets to explore the nature of the neutron and proton.

Ten years later those experiments moved to SLAC, where the newly constructed two-mile linear accelerator would boost electrons to much higher energies. By that time, theorists had proposed that protons and neutrons contained fundamental particles. But no one knew much about their properties or how to go about proving they were there. Bjorken, who joined the Stanford faculty in 1961, wrote an influential 1969 paper in which he suggested that electrons were bouncing off point-like particles within the proton, a process known as deep inelastic scattering. He started lobbying experimentalists to test it with the SLAC accelerator.

Carrying out the experiments would require a new mathematical language and Bjorken contributed to its development, with simplifications and improvements from two of his students (John Kogut and Davison Soper) and Caltech physicist Richard Feynman. In the late 1960s and early 1970s, those experiments confirmed that the proton does indeed consist of fundamental particles – a discovery honoured with the 1990 Nobel Prize in Physics for SLAC’s Richard Taylor and MIT’s Henry Kendall and Jerome Friedman. Bjorken’s role was later recognised by the prestigious Wolf Prize in Physics and the 2015 High Energy and Particle Physics Prize of the European Physical Society.

While the invention of “Bjorken scaling” was his most famous scientific achievement, Bjorken was also known for identifying a wide variety of interesting problems and tackling them in novel ways. He was somewhat iconoclastic. He also had colourful and often distinctly visual ways of thinking about physics – for instance, describing physics concepts in terms of plumbing or a baked Alaska. He never sought recognition for himself and was very generous in recognising the contributions of others.

In 1979 Bjorken headed east to become associate director for physics at Fermilab. He returned to SLAC in 1989, where he continued to innovate. Over the course of his career, among other things, he invented ideas related to the existence of the charm quark and the circulation of protons in a storage ring. He helped popularise the unitarity triangle and, along with Drell, co-wrote the widely used graduate-level textbooks Relativistic Quantum Mechanics and Relativistic Quantum Fields. In 2009 Bjorken contributed to an influential paper by three younger theorists suggesting approaches for searching for “dark” photons, hypothetical carriers of a new fundamental force.

He was also awarded the American Physical Society’s Dannie Heineman Prize, the Department of Energy’s Ernest Orlando Lawrence Award, and the Dirac Medal from the International Center for Theoretical Physics. In 2017 he shared the Robert R Wilson Prize for Achievement in the Physics of Particle Accelerators for groundbreaking theoretical work he did at Fermilab that helped to sharpen the focus of particle beams in many types of accelerators.

Known for his warmth, generosity and collaborative spirit, Bjorken passionately pursued many interests outside physics, from mountain climbing, skiing, cycling and windsurfing to listening to classical music. He divided his time between homes in Woodside, California and Driggs, Idaho, and thought nothing of driving long distances to see an opera in Chicago or dropping in unannounced at the office of some fellow physicist for deep conversations about general relativity, dark matter or dark energy – once remarking: “I’ve found the most efficient way to test ideas and get hard criticism is one-on-one conversation with people who know more than I do.”

The post James D Bjorken 1934–2024 appeared first on CERN Courier.

]]>
News Theoretical physicist James D Bjorken, whose work played a key role in revealing the existence of quarks, passed away on 6 August aged 90. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_OBITS-Bjorken_feature.jpg
Max Klein 1951–2024 https://cerncourier.com/a/max-klein-1951-2024/ Fri, 24 Jan 2025 14:44:15 +0000 https://cerncourier.com/?p=112301 Experimental particle physicist Max Klein, whose exceptional career spanned theory, detectors, accelerators and data analysis, passed away on 23 August 2024.

The post Max Klein 1951–2024 appeared first on CERN Courier.

]]>
Experimental particle physicist Max Klein, whose exceptional career spanned theory, detectors, accelerators and data analysis, passed away on 23 August 2024.

Born in Berlin in 1951, Max earned his diploma in physics in 1973 from the Humboldt University of Berlin (HUB) in East Germany (GDR), with a thesis on low-energy heavy-ion physics. He received his PhD in 1977 from the Institute for High Energy Physics (IHEP) of the Academy of Sciences of the GDR in Zeuthen (now part of DESY) on the subject of multiparticle production, and his habilitation degree in 1984 from HUB. From 1973 to 1991 he conducted research at IHEP Zeuthen, spending several years from 1977 at the Joint Institute for Nuclear Research in Dubna, and from the 1980s at DESY and CERN. For his role in determining the asymmetry of the interaction of polarised positive and negative muons with the NA4 muon spectrometer at CERN’s SPS M2 muon beam, he was awarded the Max von Laue Medal by the Academy of Sciences of the GDR in 1985.

Max worked as a scientist at DESY from 1992 to 2006. As a member of the H1 experiment at the lepton–proton collider HERA since 1985, his research focused on investigating the internal structure of protons using deep inelastic scattering. He served as spokesperson of the H1 collaboration from 2002 to 2006 for two mandates.

Max became a professor at the University of Liverpool in 2006, and the following year he joined the ATLAS collaboration. He served as chair of the ATLAS publication committee and as editorial-board chair of the ATLAS detector paper and other important works. Max made key contributions to data analysis, notably on the high-precision 7 TeV inclusive W and Z boson production cross sections and associated properties, and was a convener of the PDF forum in 2015–2016. From 2017 to 2019, Max was chair of the ATLAS collaboration board, during which he made invaluable contributions to the experiment and collaboration life. He led the Liverpool ATLAS team from 2009 to 2017. Under his guidance, the 30-strong group contributed to the maintenance of the SCT detector, as well as to ATLAS data preparation and physics analyses. The group also developed hybrids, mechanics and software for the new ITk pixel and strip detectors.

In recent years, Max’s scientific contributions extended well beyond ATLAS. He was a strong advocate for the development of an electron-beam upgrade of the LHC, the LHeC, and collaborated closely with the CERN accelerator group and international teams on the development of energy-recovery linacs. Here, he was influential in the development of the PERLE demonstrator accelerator at IJCLab, for which he acted as spokesperson until 2023.

A strong advocate for the responsibility of scientists toward their societies

In 2013 Max was awarded the Max Born Prize by the Deutsche Physikalische Gesellschaft and the UK Institute of Physics for his fundamental experimental contributions to the elucidation of the proton structure using deep-inelastic scattering. The prize citation stands as a testament to his scientific stature: “In the last 40 years, Max Klein has dedicated himself to the study of the innermost structure of the proton. In the 1990s he was a leading figure in the discovery that gluons form a surprisingly large component of proton structure. These gluons play an important role in the production of Higgs bosons in proton–proton collisions for which experiments at CERN have recently found promising candidates.”

Besides being a distinguished scientist, Max was a man of unwavering principles, grounded in his selfless interactions with others and his deep sense of humanity. Drawing from his experience as a bridge between East and West, he was a strong advocate for international scientific collaboration and the responsibility of scientists toward their societies. He had a strong desire and ability to mentor and support students, postdocs and early-career researchers, and an admirably wise and calm approach to problem solving.

Max Klein had a profound knowledge of physics and a tireless dedication to ATLAS and to experimental particle physics in general. His passing is a profound loss for the entire community, but his legacy will endure.

The post Max Klein 1951–2024 appeared first on CERN Courier.

]]>
News Experimental particle physicist Max Klein, whose exceptional career spanned theory, detectors, accelerators and data analysis, passed away on 23 August 2024. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_OBITS-Klein.jpg
Ian Shipsey 1959–2024 https://cerncourier.com/a/ian-shipsey-1959-2024/ Fri, 24 Jan 2025 14:43:21 +0000 https://cerncourier.com/?p=112309 Experimental particle physicist Ian Shipsey, a remarkable leader and individual, passed away suddenly and unexpectedly in Oxford on 7 October.

The post Ian Shipsey 1959–2024 appeared first on CERN Courier.

]]>
Ian Shipsey

Experimental particle physicist Ian Shipsey, a remarkable leader and individual, passed away suddenly and unexpectedly in Oxford on 7 October.

Ian was educated at Queen Mary University of London and the University of Edinburgh, where he earned his PhD in 1986 for his work on the NA31 experiment at CERN. Moving to the US, he joined Syracuse as a post-doc and then became a faculty member at Purdue, where, in 2007, he was elected Julian Schwinger Distinguished Professor of Physics. In 2013 he was appointed the Henry Moseley Centenary Professor of Experimental Physics at the University of Oxford.

Ian was a central figure behind the success of the CLEO experiment at Cornell, which was for many years the world’s pre-eminent detector in flavour physics. He led many analyses, most notably in semi-leptonic decays, from which he measured four different CKM matrix elements, and oversaw the construction of the silicon vertex detector for the CLEO III phase of the experiment. He served as co-spokesperson between 2001 and 2004, and was one of the intellectual leaders that saw the opportunity to re-configure the detector and the CESR accelerator as a facility for making precise exploration of physics at the charm threshold. The resulting CLEO-c programme yielded many important measurements in the charm system and enabled critical experimental validations of lattice–QCD predictions.

Influential voice

At CMS, Ian played a leading role in the construction of the forward-pixel detector, exploiting the silicon laboratory he had established at Purdue. His contributions to CMS physics analyses were no less significant. These included the observation of upsilon suppression in heavy-ion collisions (a smoking gun for the production of quark–gluon plasma) and the discovery, reported in a joint Nature paper with the LHCb collaboration, of the ultra-rare decay Bs → μ+μ–. He was also an influential voice as CMS collaboration board chair (2013–2014).

After moving to the University of Oxford and, in 2015, joining the ATLAS collaboration, Ian became Oxford’s ATLAS team leader and established state-of-the-art cleanrooms, which are used for the construction of the future inner tracker (ITk) pixel end-cap modules. Together with his students, he contributed to measurements of the Higgs boson mass and width, and to the search for its rare di-muon decay. Ian also led the UK’s involvement in LSST (now the Vera Rubin Observatory), where Oxford is providing deep expertise for the CCD cameras.

Following his tenure as the dynamic head of the particle physics sub-department, Ian was elected head of Oxford physics in 2018 and re-elected in 2023. Among his many successful initiatives, he played a leading role in establishing the £40 million UKRI “Quantum Technologies for Fundamental Physics” programme, which is advancing quantum-based applications across various areas of physics. With the support of this programme, he led the development of novel atom interferometers for light dark matter searches and gravitational-wave detection.

Ian took a central role in establishing roadmaps for detector R&D both in the US and (via ECFA) in Europe. He was one of the coordinators and driving force of the ECFA R&D roadmap panel, and co-chair of the US effort to define the basic research needs in this area. As chair of the ICFA instrumentation, innovation and development panel, he promoted R&D in instrumentation for particle physics and the recognition of excellence in this field.

Among his many prestigious honours, Ian was elected a Fellow of the Royal Society in 2022 and received the James Chadwick Medal and Prize from the Institute of Physics in 2019. He served on numerous collaboration boards, panels, and advisory and decision-making committees shaping national and international science strategies.

The success of Ian’s career is even more remarkable given that he lost his hearing in 1989. He received a cochlear implant, which restored limited auditory ability, and gave unforgettable talks on this subject, explaining the technology and its impact on his life.

Ian was an outstanding physicist and also a remarkable individual. His legacy is not only an extensive body of transformative scientific results, but also the impact that he had on all who met him. He was equally charming, whether speaking to graduate students or lab directors. Everyone felt better after talking to Ian. His success derived from a remarkable combination of optimism and limitless energy. Once he had identified the correct course of action, he would not allow himself to be dissuaded by cautious pessimists who worried about the challenges ahead. His colleagues and many graduate students will continue to benefit for many years from the projects he initiated. The example he set as a physicist, and the memories he leaves as friend, will endure still longer.

The post Ian Shipsey 1959–2024 appeared first on CERN Courier.

]]>
News Experimental particle physicist Ian Shipsey, a remarkable leader and individual, passed away suddenly and unexpectedly in Oxford on 7 October. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_OBITS-Shipsey_feature.jpg
W mass snaps back https://cerncourier.com/a/w-mass-snaps-back/ Wed, 20 Nov 2024 13:58:46 +0000 https://cern-courier.web.cern.ch/?p=111397 A new measurement from the CMS experiment at the LHC contradicts the anomaly reported by CDF.

The post W mass snaps back appeared first on CERN Courier.

]]>
Based on the latest data inputs, the Standard Model (SM) constrains the mass of the W boson (mW) to be 80,353 ± 6 MeV. At tree level, mW depends only on the mass of the Z boson and the weak and electromagnetic couplings. The boson’s tendency to briefly transform into a top quark and a bottom quark causes the largest quantum correction. Any departure from the SM prediction could signal the presence of additional loops containing unknown heavy particles.
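The tree-level dependence mentioned above can be made explicit. In textbook notation – with α the fine-structure constant, GF the Fermi constant, mZ the Z-boson mass and sin2θW = 1 – mW2/mZ2 – the relation reads:

```latex
m_W^2 \left( 1 - \frac{m_W^2}{m_Z^2} \right) = \frac{\pi \alpha}{\sqrt{2}\, G_F}
```

Loop corrections, dominated by the top–bottom quark loop, modify this relation (conventionally through a factor 1/(1 – Δr)), which is how precise measurements of mW become sensitive to heavy virtual particles.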

The CDF experiment at the Tevatron observed just such a departure in 2022, plunging the boson into a midlife crisis 39 years after it was discovered at CERN’s SppS collider (CERN Courier September/October 2023 p27). A new measurement from the CMS experiment at the LHC now contradicts the anomaly reported by CDF. While the CDF result stands seven standard deviations above the SM, CMS’s measurement aligns with the SM prediction and previous results at the LHC. The CMS and CDF results claim joint first place in precision, provoking a dilemma for phenomenologists.

New-physics puzzle

“The result by CDF remains puzzling, as it is extremely difficult to explain the discrepancy with the three LHC measurements by the presence of new physics, in particular as there is also a discrepancy with D0 at the same facility,” says Jens Erler of Johannes Gutenberg-Universität Mainz. “Together with measurements of the weak mixing angle, the CMS result confirms the validity of the SM up to new physics scales well into the TeV region.”

“I would not call this ‘case closed’,” agrees Sven Heinemeyer of the Universidad Autónoma de Madrid. “There must be a reason why CDF got such an anomalously high value, and understanding what is going on may be very beneficial for future investigations. We know that the SM is not the last word, and there are clear cases that require physics beyond the SM (BSM). The question is at which scale BSM physics appears, or how strongly it is coupled to the SM particles.”

The result confirms the validity of the SM up to new physics scales well into the TeV region

To obtain their result, CDF analysed four million W-boson decays originating from 1.96 TeV proton–antiproton collisions at Fermilab’s Tevatron collider between 1984 and 2011. In stark disagreement with the SM, the analysis yielded a mass of 80,433.5 ± 9.4 MeV. This result induced the ATLAS collaboration to revisit its 2017 analysis of W → μν and W → eν decays in 7 TeV proton–proton collisions using the latest global data on parton distribution functions, which describe the probable momenta of quarks and gluons inside the proton. A newly developed fit was also implemented. The central value remained consistent with the SM, with a reduced uncertainty of 16 MeV increasing its tension with the new CDF result. A less precise measurement by the LHCb collaboration also favoured the SM (CERN Courier May/June 2023 p10).

CMS now reports mW to be 80,360.2 ± 9.9 MeV, concluding a study of W → μν decays begun eight years ago.

“One of the main strategic choices of this analysis is to use a large dataset of Run 2 data,” says CMS spokesperson Gautier Hamel de Monchenault. “We are using 16.8 fb–1 of 13 TeV data at a relatively high pileup of on average 25 interactions per bunch crossing, leading to very large samples of about 7.5 million Z bosons and 90 million W bosons.”

With high pileup and high energies come additional challenges. The measurement uses an innovative analysis technique that benchmarks W → μν decay systematics using Z → μμ decays as independent validation, wherein one muon is treated as a neutrino. The ultimate precision of the measurement relies on reconstructing the muon’s momentum in the detector’s silicon tracker to better than one part in 10,000 – a groundbreaking level of accuracy built on minutely modelling energy loss, multiple scattering, magnetic-field inhomogeneities and misalignments. “What is remarkable is that this incredible level of precision on the muon momentum measurement is obtained without using Z → μμ as a calibration candle, but only using a huge sample of J/ψ → μμ events,” says Hamel de Monchenault. “In this way, the Z → μμ sample can be used for an independent closure test, which also provides a competitive measurement of the Z mass.”

Measurement matters

Measuring mW using W → μν decays is challenging because the neutrino escapes undetected. mW must be inferred from either the distribution of the transverse mass visible in the events (mT) or the distribution of the transverse momentum of the muons (pT). The mT approach used by CDF is the most precise option at the Tevatron, but typically less precise at the LHC, where hadronic recoil is difficult to distinguish from pileup. The LHC experiments also face a greater challenge when reconstructing mW from distributions of pT. In proton–antiproton collisions at the Tevatron, W bosons could be created via the annihilation of pairs of valence quarks. In proton–proton collisions at the LHC, the antiquark in the annihilating pair must come from the less well understood sea; and at LHC energies, the partons have lower fractions of the proton’s momentum – a less well constrained domain of parton distribution functions.
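For reference, the transverse mass used in the mT approach combines the muon transverse momentum with the missing transverse momentum attributed to the neutrino; its standard definition is:

```latex
m_T = \sqrt{ 2\, p_T^{\mu}\, p_T^{\mathrm{miss}} \left( 1 - \cos \Delta\phi \right) }
```

where Δφ is the azimuthal angle between the muon and the missing momentum. The mT distribution has a kinematic endpoint near mW, which is what makes it sensitive to the boson’s mass.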

“Instead of exploiting the Z → μμ sample to tune the parameters of W-boson production, CMS is using the W data themselves to constrain the theory parameters of the prediction for the pT spectrum, and using the independent Z → μμ sample to validate this procedure,” explains Hamel de Monchenault. “This validation gives us great confidence in our theory modelling.”

“The CDF collaboration doesn’t have an explanation for the incompatibility of the results,” says spokesperson David Toback of Texas A&M University. “Our focus is on the checks of our own analysis and understanding of the ATLAS and CMS methods so we can provide useful critiques that might be helpful in future dialogues. On the one hand, the consistency of the ATLAS and CMS results must be taken seriously. On the other, given the number of iterations and improvements needed over decades for our own analysis – CDF has published five times over 30 years – we still consider both LHC results ‘early days’ and look forward to more details, improved methodology and additional measurements.”

The LHC experiments each plan improvements using new data. The results will build on a legacy of electroweak precision at the LHC that was not anticipated to be possible at a hadron collider (CERN Courier September/October 2024 p29).

“The ATLAS collaboration is extremely impressed with the new measurement by CMS and the extraordinary precision achieved using high-pileup data,” says spokesperson Andreas Hoecker. “It is a tour de force, accomplished by means of a highly complex fit, for which we applaud the CMS collaboration.” ATLAS’s next measurement of mW will focus on low-pileup data, to improve sensitivity to mT relative to their previous result.

The ATLAS collaboration is extremely impressed with the new measurement by CMS

The LHCb collaboration is working on an update of their measurement using its full Run 2 data set. LHCb’s forward acceptance may prove to be powerful in a global fit. “LHCb probes parton density functions in different phase space regions, and that makes the measurements from LHCb anticorrelated with those of ATLAS and CMS, promising a significant impact on the average, even if the overall uncertainty is larger,” says spokesperson Vincenzo Vagnoni. The goal is to progress LHC measurements towards a combined precision of 5 MeV. CMS plans several improvements to their own analysis.

“There is still a significant factor to be gained on the momentum scale, with which we could reach the same precision on the Z-boson mass as LEP,” says Hamel de Monchenault. “We are confident that we can also use a future, large low-pileup run to exploit the W recoil and mT to complement the muon pT spectrum. Electrons can also be used, although in this case the Z sample could not be kept independent in the energy calibration.”

The post W mass snaps back appeared first on CERN Courier.

Shifting sands for muon g–2 https://cerncourier.com/a/shifting-sands-for-muon-g-2/ Wed, 20 Nov 2024 13:56:37 +0000 https://cern-courier.web.cern.ch/?p=111400 Two recent results may ease the tension between theory and experiment.

Lattice–QCD calculation

The Dirac equation predicts the magnetic moment of the muon (g) to be precisely two in units of the Bohr magneton. Virtual lines and loops add roughly 0.1% to this value, giving rise to a so-called anomalous contribution often quantified by aμ = (g–2)/2. Countless electromagnetic loops dominate the calculation, spontaneous symmetry breaking is evident in the effect of weak interactions, and contributions from the strong force are non-perturbative. Despite this formidable complexity, theoretical calculations of aμ have been experimentally verified to nine significant figures.
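The bookkeeping behind aμ = (g–2)/2 can be made concrete with a short numerical sketch (the value of aμ used below is an illustrative rounded figure, not the official world average or prediction):

```python
# Illustrative sketch of the relation between g and the anomaly a_mu.
# The Dirac equation alone gives g = 2, i.e. a_mu = 0; quantum loops
# shift g upward by roughly 0.1%.
a_mu = 0.00116592  # rounded illustrative value of (g-2)/2

g = 2 * (1 + a_mu)       # reconstruct g from the anomaly
anomaly = (g - 2) / 2    # and recover a_mu back from g

print(g)  # ~2.00233, i.e. ~0.1% above the Dirac value of 2
```

The tension discussed in the article lives in the tenth significant digit of aμ, far beyond the rounding used here.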

The devil is in the 10th digit. The experimental world average for aμ currently stands more than 5σ above the Standard Model (SM) prediction published by the Muon g-2 Theory Initiative in a 2020 white paper. But two recent results may ease this tension in advance of a new showdown with experiment next year.

The first new input is data from the CMD-3 experiment at the Budker Institute of Nuclear Physics, which yields a value of aμ consistent with experimental data. Comparable electron–positron (e+e–) collider data from the KLOE experiment at the National Laboratory of Frascati, the BaBar experiment at SLAC, the BESIII experiment at IHEP Beijing and CMD-3’s predecessor CMD-2 were the backbone of the 2020 theory white paper. With KLOE and CMD-3 now incompatible at the level of 5σ, theorists are exploring alternative bases for the theoretical prediction, such as an ab-initio approach based on lattice QCD and a data-driven approach using tau–lepton decays.

The second new result is an updated theory calculation of aμ by the Budapest–Marseille–Wuppertal (BMW) collaboration. BMW’s ab-initio lattice–QCD calculation of 2020 was the first to challenge the data-driven consensus expressed in the 2020 white paper. The recent update now claims a superior precision, driven in part by the pragmatic implementation of a data-driven approach in the low-mass region, where experiments are in good agreement. Though only accounting for 5% of the hadronic contribution to aμ, this “long distance” region is often the largest source of error in lattice–QCD calculations, and relatively insensitive to the use of finer lattices.

The new BMW result is fully compatible with the experimental world average, and incompatible with the 2020 white paper at the level of 4σ.

“It seems to me that the 0.9σ agreement between the direct experimental measurement of the magnetic moment of the muon and the ab-initio calculation of BMW has most probably postponed the possible discovery of new physics in this process,” says BMW spokesperson Zoltán Fodor (Wuppertal). “It is important to mention that other groups have partial results, too, so-called window results, and they all agree with us and in several cases disagree with the result of the data-driven method.”

These two analyses were among the many discussed at the seventh plenary workshop of the Muon g-2 Theory Initiative held in Tsukuba, Japan from 9 to 13 September. The theory initiative is planning to release an updated prediction in a white paper due to be published in early 2025. With multiple mature e+e– and lattice–QCD analyses underway for several years, attention now turns to tau decays – the subject of a soon-to-be-announced mini-workshop to ensure their full availability for consideration as a possible basis for the 2025 white paper. Input data would likely originate from tau decays recorded by the Belle experiment at KEK and the ALEPH experiment at CERN, both now decommissioned.


“From a theoretical point of view, the challenge for including the tau data is the isospin rotation that is needed to convert the weak hadronic tau decay to the desired input for hadronic vacuum polarisation,” explains theory-initiative chair Aida X El-Khadra (University of Illinois). Hadronic vacuum polarisation (HVP) is the most challenging part of the calculation of aμ, accounting for the effect of a muon emitting a virtual photon that briefly transforms into a flurry of quarks and gluons just before it absorbs the photon representing the magnetic field (CERN Courier May/June 2021 p25).

Lattice QCD offers the possibility of a purely theoretical calculation of HVP. While BMW remains the only group to have published a full lattice-QCD calculation, multiple groups are zeroing in on its most sensitive aspects (CERN Courier September/October 2024 p21).

“The main challenge in lattice-QCD calculations of HVP is improving the precision to the desired sub-percent level, especially at long distances,” continues El-Khadra. “With the new results for the long-distance contribution by the RBC/UKQCD and Mainz collaborations that were already reported this year, and the results that are still expected to be released this fall, I am hopeful that we will be able to establish consolidation between independent lattice calculations at the sub-percent level. In this case we will provide a lattice-only determination of HVP in the second white paper.”

A bestiary of exotic hadrons https://cerncourier.com/a/a-bestiary-of-exotic-hadrons/ Wed, 20 Nov 2024 13:54:33 +0000 https://cern-courier.web.cern.ch/?p=111387 Patrick Koppenburg and Marco Pappagallo survey the 23 exotic hadrons discovered at the LHC so far.

Twenty-three exotic states discovered at the LHC

Seventy-six new particles have been discovered at the Large Hadron Collider (LHC) so far: the Higgs boson, 52 conventional hadrons and a bestiary of 23 exotic hadrons whose structure cannot reliably be explained and whose existence could not confidently be predicted.

The exotic states are varied and complex, displaying little discernible pattern at first glance. They represent a fascinating detective story: an experimentally driven quest to understand the exotic offspring of the strong interaction, motivating rival schools of thought among theorists.

This surge in new hadrons has been one of the least expected outcomes of the LHC (see “Unexpected” figure). With a tenfold increase in data at the High-Luminosity LHC (HL-LHC) on the horizon, and further new states also likely to emerge at the Belle II experiment in Japan, the BESIII experiment in China, and perhaps at a super charm–tau factory in the same country, their story is in its infancy, with twists and turns still to come.

Building blocks

Just as electric charges arrange themselves in neutral atoms, the colour charges that carry the strong interaction arrange themselves into colourless composite states. As fundamental particles with colour charge, quarks (q) and gluons (g) therefore cannot exist independently, but only in colour-neutral composite states called hadrons. Since the discovery of the pion in 1947, a rich phenomenology of mesons (qq) and baryons (qqq) inspired the quark model and eventually the theory of quantum chromodynamics (QCD), which serves as an impeccable description of the strong interaction to this day.

But why should nature not also contain exotic colour-neutral combinations such as tetraquarks (qqqq), pentaquarks (qqqqq), hexaquarks (six quarks, qqqqqq, or three quark–antiquark pairs), hybrid hadrons (qqg or qqgg) and glueballs (gg or ggg)?

Twenty-three exotic hadrons have been discovered so far at the LHC

The existence of exotic hadrons was debated without consensus for decades, with interest growing in the early 2000s, when new states with unexpected features were observed. In 2003, the BaBar experiment at SLAC discovered the D*s0(2317)+ meson, with a mass close to the sum of the masses of a D meson and a kaon. A few months later that year, Belle discovered the χc1(3872) meson, then called X(3872) (see “What’s in a name?” panel), with a mass close to the sum of the masses of a D0 meson and a D*0 meson. As well as their striking closeness to meson–meson thresholds, the “width” of their signals was much narrower than expected. (Measured in units of energy, such widths are reciprocal to particle lifetimes.)

Soon afterwards, a number of other charmonium-like and bottomonium-like states were observed. Belle’s observation in 2007 of the electrically charged charmonium-like state Z(4430)+ (now called Tcc1(4430)+) was a pathfinder in theorising the existence of QCD exotics. Though these states exhibited the telltale signs of being excitations of a charm–anticharm (cc) system (see “The new particles”), their net electric charge indicated a system that could not be composed of only a quark–antiquark pair, as particles and antiparticles have opposite electric charges. Two additional quarks had to be present.

Exotic states at the LHC

The start-up of the LHC opened up the trail, with 23 new exotic hadrons observed there so far (see “The 23 exotic hadrons discovered at the LHC” table). The harvest of new states began in autumn 2013 with the CMS experiment at the LHC reporting the observation of the χc1(4140) state in the J/ψφ mass spectrum in B+→ J/ψφK+ decays, confirming a hint from the CDF experiment at Fermilab. Its minimal quark content is likely ccss. CMS also reported evidence for a state at a higher mass, observed by the LHCb experiment at the LHC in 2016 as the χc1(4274), alongside two more states at masses of 4500 and 4700 MeV.

What’s in a name?

Reflecting their mystery, the first exotic states were named X, Y and Z. Later on, the proliferation of exotic states required an extension of the particle naming scheme. Manifestly exotic tetraquarks and pentaquarks are now denoted T and P, respectively, with a subscript listing the bottom (b), charm (c) and strange (s) quark content. Exotic quarkonium-like states follow the naming scheme of the conventional mesons, where the name is related to the quark content and spin-parity combination. For example, ψ denotes a state with at least a cc quark pair and JPC = 1––, and χc1 denotes a state with at least a cc quark pair and JPC = 1++. Numbers in parentheses refer to approximate measured masses in MeV. Exotic hadrons are classified as mesons or baryons depending on whether they have baryon number zero or not.

In a 2021 analysis of the same B+→ J/ψφK+ decay mode including LHC Run 2 data, LHCb reported two more neutral states, χc1(4685) and X(4630), that do not correspond to cc states expected from the quark model. The analysis also reported two more resonances seen in the J/ψK+ mass spectrum, Tccs1(4000)+ and Tccs1(4220)+. Carrying charge and strangeness, these charmonia-like states are manifestly exotic, with a minimal quark content ccus.

For Tccs1(4000)+, LHCb had sufficient data to produce an Argand diagram with the distinct signature of a resonance (see “Round resonances” panel). A possible isospin partner, Tccs1(4000)0, was later found in B0→ J/ψφK0s decays, lending further evidence that it is a resonance and not a kinematical feature. (According to an approximate symmetry of QCD, the strong interaction should treat a ccus state almost exactly like a ccds state, as up and down quarks have the same colour charges and similar masses.) Other charmonium-like tetraquarks were later seen by LHCb in the decays χc0(3960) → D+sDs and χc1(4010) → D*+D.

Table of the 23 exotic hadrons discovered at the LHC

The world’s first pentaquarks were discovered by LHCb in 2015. Two pentaquarks appeared in the J/ψp spectrum by studying Λ0b→ J/ψpK decays: Pcc(4380)+, a rather broad resonance with a width of 200 MeV; and Pcc(4450)+, which is narrower at 40 MeV. The observed decay mode implied a minimal quark content ccuud, excluding any conventional interpretation.

These states were hiding in plain sight: they were spotted independently by several LHCb physicists, including a CERN summer student. In a 2019 analysis using more data, the heavier state was identified as the sum of two overlapping pentaquarks now called Pcc(4440)+ and Pcc(4457)+. Another narrow state was also seen at a mass of 4312 MeV. LHCb observed the first strange pentaquark in B→ J/ψΛp decays in 2022, with a quark content ccuds.

Other manifestly exotic hadrons followed, with two exotic hadrons Tcccc(6600) and Tcccc(6900) observed by LHCb, CMS and ATLAS in the J/ψJ/ψ spectrum. They can be interpreted as a tetraquark made of two charm and two anti-charm quarks – a fully charmed tetraquark. When both J/ψ mesons decay to a muon pair, the final state consists of four muons, allowing the LHCb, ATLAS and CMS experiments to study the final spectrum in multiple acceptance regions and transverse momentum ranges. These states do not contain any light quarks, which eases their theoretical study and also implies a state with four bottom quarks that could be long-lived.

Doubly charming

The world’s first double-open-charm meson was discovered by LHCb in 2021: the Tcc(3875)+. With a charm of two, it cannot be accommodated in the conventional qq scheme. There is an intriguing similarity between the exotic Tcc(3875)+ (ccud) and the charmonium-like (cc-like) χc1(3872) meson discovered by Belle in 2003, whose nature is still controversial. Both have similar masses and remarkably narrow widths. The jury is still out on their interpretation (see “Inside pentaquarks and tetraquarks“).

The discovery of a Tcc(3875)+ (ccud) meson also implies the existence of a Tbb state, with a bbud quark content, that should be stable except with regard to weak decays. The observation of the first long-lived exotic state, with a sizable flight distance, is an intriguing goal for future experiments. At the HL-LHC, the search for B+c mesons displaced from the interaction point could return the first evidence for a Tbb tetraquark, given that the decays of weakly decaying double-beauty hadrons such as Ξbbq and Tbb are their only known sources.

Round resonances


Particles are most likely to be created in collisions when the centre-of-mass energy matches their mass. The longer the mean lifetime of the new particle, the greater the uncertainty on its decay time and, via Heisenberg’s uncertainty principle, the smaller the uncertainty on its energy. Such particles have narrow peaks in their energy spectra. Fast-decaying particles have broad peaks. Searching for such “resonances” can reveal new particles – but bumps can be deceiving. A more revealing analysis fits differential decay rates to measure the complex quantum amplitude A(s) describing the production of the particle. As the energy (√s) increases, the amplitude traces a circle counterclockwise in the complex plane, with the magnitude of the amplitude tracing the classic resonant peak observed in energy spectra (see figure above left).

Demonstrating this behaviour, as LHCb did in 2021 for the Tccs1(4000)+ meson (above, centre), is a significant experimental achievement, which the collaboration also performed in 2018 for the pathfinding Z(4430)+ (Tcc1(4430)+) meson discovered by Belle in 2007 (black points, above right). The LHCb measurement confirmed its resonant character and resolved any controversy over whether it was a true exotic state. The simulated blue measurement illustrates the improvement such measurements stand to accrue with upgraded detectors and increased statistics at the HL-LHC.
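The circular Argand trajectory described in this panel can be reproduced with a generic relativistic Breit–Wigner amplitude (the mass and width below are arbitrary illustrative numbers, not LHCb fit results):

```python
import numpy as np

# A(s) = 1/(m^2 - s - i*m*Gamma): a simple relativistic Breit-Wigner,
# with an illustrative mass and width in GeV (not any measured resonance).
m, gamma = 4.0, 0.2

def amplitude(sqrt_s):
    s = sqrt_s ** 2
    return 1.0 / (m ** 2 - s - 1j * m * gamma)

e = np.linspace(3.5, 4.5, 1001)  # scan of centre-of-mass energies
a = amplitude(e)

# |A| peaks where sqrt(s) equals the resonance mass ...
peak_energy = e[np.argmax(np.abs(a))]

# ... the phase rises monotonically through 90 degrees at the peak,
# i.e. the amplitude moves counterclockwise in the complex plane ...
phase = np.unwrap(np.angle(a))

# ... and every point lies on a circle of radius 1/(2*m*gamma)
# centred at i/(2*m*gamma): the Argand circle.
centre, radius = 1j / (2 * m * gamma), 1 / (2 * m * gamma)
```

A genuine resonance traces this circle; a kinematical artefact such as a threshold cusp generally does not, which is why the Argand diagram is such a decisive test.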

There are also other exotic states predicted by QCD that are still missing in the particle zoo, such as meson–gluon hybrids and glueballs. Hybrid mesons could be identified by exotic spin-parity (JP) quantum numbers not allowed in the qq scheme. Glueballs could be observed in gluon-enriched heavy-ion collisions. A potential candidate has recently been observed by the BESIII collaboration, which is another major player in exotic spectroscopy.

Exotic hadrons might even have been observed in the light quark sector without having been searched for. The scalar mesons are too numerous to fit in the conventional quark model, and some of them, for instance the f0(980) and a0(980) mesons, might be tetraquarks. Exotic light pentaquarks may also exist. Twenty years ago, the Θ+ baryon caused quite some excitement, being apparently openly exotic, with a positive strangeness and a minimal quark content uudds. No fewer than 10 different experiments presented evidence for it, including several quoting 5σ significance, before it disappeared in blind analyses of larger data samples with better background subtraction (CERN Courier April 2004 p29). Its story is now material for historians of science, but its interpretation triggered many theory papers that are still useful today.

The challenge of understanding how quarks are bound inside exotic hadrons is the greatest outstanding question in hadron spectroscopy. Models include a cloud of light quarks and gluons bound to a heavy qq core by van-der-Waals-like forces (hadro-quarkonium); colour-singlet hadrons bound by residual nuclear forces (hadronic molecules); and compact tetraquarks [qq] [qq] and pentaquarks [qq][qq]q composed of diquarks [qq] and antidiquarks [qq], which masquerade as antiquarks and quarks, respectively.

The LHCb experiment at CERN

Some exotic hadrons may also have been misinterpreted as resonant states when they are actually “threshold cusps” – enhancements caused by rescattering. For instance, the Pcc(4457)+ pentaquark seen in Λ0b→ J/ψpK decays could in fact be rescattering between the D0 and Λc(2595)+ decay products in Λ0b→ Λc(2595)+D0K to exchange a charm quark and form a J/ψp system. This hypothesis can be tested by searching for additional decay modes and isospin partners, or via detailed amplitude analyses – a process already completed for many of the aforementioned states, but not yet all.

Establishing the nature of the exotic hadrons will be challenging, and a comprehensive organisation of exotic hadrons in flavour multiplets is still missing. Establishing whether exotic hadrons obey the same flavour symmetries as conventional hadrons will be an important step forward in understanding their composition.

Effective predictions

The dynamics of quarks and gluons can be described perturbatively in hard processes thanks to the smallness of the strong coupling constant at short distances, but the spectrum of stable hadrons is affected by non-perturbative effects and cannot be computed from the fundamental theory. Though lattice QCD attempts this by discretising space–time in a cubic lattice, the calculations are time consuming and limited in precision by computational power. Predictions rely on approximate analytical methods such as effective field theories.


Hadron physics is therefore driven by empirical data, and hadron spectroscopy plays a pivotal role in testing the predictions of lattice QCD, which is itself an increasingly important tool in precision electroweak physics and searches for physics beyond the Standard Model.

Like Mendeleev and Gell-Mann, we are at the beginning of a new field, in the taxonomy stage, discovering, studying and classifying exotic hadrons. The deeper challenge is to explain and anticipate them. Though the underlying principles are fully known, we are still far from being able to do the chemistry of quantum chromodynamics.

Inside pentaquarks and tetraquarks https://cerncourier.com/a/inside-pentaquarks-and-tetraquarks/ Wed, 20 Nov 2024 13:52:05 +0000 https://cern-courier.web.cern.ch/?p=111383 Marek Karliner and Jonathan Rosner ask what makes tetraquarks and pentaquarks tick, revealing them to be at times exotic compact states, at times hadronic molecules and at times both – with much still to be discovered.

Strange pentaquarks

Breakthroughs are like London buses. You wait a long time, and three turn up at once. In 1963 and 1964, Murray Gell-Mann, André Peterman and George Zweig independently developed the concept of quarks (q) and antiquarks (q) as the fundamental constituents of the observed bestiary of mesons (qq) and baryons (qqq).

But other states were allowed too. Additional qq pairs could be added at will, to create tetraquarks (qqqq), pentaquarks (qqqqq) and other states besides. In the 1970s, Robert L Jaffe carried out the first explicit calculations of multiquark states, based on the framework of the MIT bag model. Under the auspices of the new theory of quantum chromodynamics (QCD), this computationally simplified model ignored gluon interactions and considered quarks to be free, though confined in a bag with a steep potential at its boundary. These and other early theoretical efforts triggered many experimental searches, but no clear-cut results.

New regimes

Evidence for such states took nearly two decades to emerge. The essential precursors were the discovery of the charm quark (c) at SLAC and BNL in the November Revolution of 1974, some 50 years ago (p41), and the discovery of the bottom quark (b) at Fermilab three years later. The masses and lifetimes of these heavy quarks allowed experiments to probe new regimes in parameter space where otherwise inexplicable bumps in energy spectra could be resolved (see “Heavy breakthroughs” panel).

Heavy breakthroughs

Double hidden charm

With the benefit of hindsight, it is clear why early experimental efforts did not find irrefutable evidence for multiquark states. For a multiquark state to be clearly identifiable, it is not enough to form a multiquark colour-singlet (a mixture of colourless red–green–blue, red–antired, green–antigreen and blue–antiblue components). Such a state also needs to be narrow and long-lived enough to stand out on top of the experimental background, and has to have distinct decay modes that cannot be explained by the decay of a conventional hadron. Multiquark states containing only light quarks (up, down and strange) typically have many open decay channels, with a large phase space, so they tend to be wide and short-lived. Moreover, they share these decay channels with excited states of conventional hadrons and mix with them, so they are extremely difficult to pin down.

Multiquark states with at least one heavy quark are very different. Once hadrons are “dressed” by gluons, they acquire effective masses of the order of several hundred MeV, with all quarks coupling in the same way to gluons. For light quarks, the bare quark masses are negligible compared to the effective mass, and can be neglected to zeroth order. But for heavy quarks (c or b), the ratio of the bare quark masses to the effective mass of the hadron dramatically affects the dynamics and the experimental situation, creating narrow multiquark states that stand out. These states were not seen in the early searches simply because the relevant production cross sections are very small and particle identification requires very high spatial resolution. These features became accessible only with the advent of the huge luminosity and the superb spatial resolution provided by vertex detectors in bottom and charm factories such as BaBar, Belle, BESIII and LHCb.

The attraction between two heavy quarks scales like αs²mq, where αs is the strong coupling constant and mq is the mass of the quarks. This is because the Coulomb-like part of the QCD potential dominates, scaling as –αs/r as a function of distance r, and yielding an analogue of the Bohr radius ~1/(αsmq). Thus, the interaction grows approximately linearly with the heavy quark mass. In at least one case (discussed below), the highly anticipated but as yet undiscovered bbud tetraquark Tbb is expected to result in a state with a mass that is below the two-meson threshold, and therefore stable under strong interactions.
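The mass dependence of this Coulomb-like binding can be put in rough numbers. This is a back-of-envelope sketch: the constituent masses and the fixed value of αs are illustrative assumptions (in reality αs runs with the Bohr-radius scale):

```python
# Back-of-envelope sketch of the alpha_s^2 * m_q scaling of the
# Coulomb-like quark-quark binding. Illustrative constituent masses
# and a fixed alpha_s are assumed; the real coupling runs with scale.
alpha_s = 0.4
m_c, m_b = 1.5, 4.8  # GeV, assumed constituent charm and bottom masses

def binding_scale(mq):
    # E ~ alpha_s^2 * m_q: the QCD analogue of the hydrogen Rydberg
    return alpha_s ** 2 * mq

def bohr_radius_scale(mq):
    # r ~ 1/(alpha_s * m_q) in natural units (1/GeV): heavier quarks
    # sit closer together, deepening the effective binding
    return 1.0 / (alpha_s * mq)

ratio = binding_scale(m_b) / binding_scale(m_c)  # b pairs bind ~3x deeper
```

Even at this crude level, the roughly threefold deeper bb binding suggests why Tbb can sit below the two-meson threshold while its charm analogue Tcc barely does.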

Exclusively heavy states are also possible. In 2020 and in 2024, respectively, LHCb and CMS discovered exotic states Tcccc(6900) and Tcccc(6600), which both decay into two J/ψ particles, implying a quark content (cccc). J/ψ does not couple to light quarks, so these states are unlikely to be hadronic molecules bound by light meson exchange. Though they are too heavy to be the ground state of a (cccc) compact tetraquark, they might perhaps be its excitations. Measuring their spin and parity would be very helpful in distinguishing between the various alternatives that have been proposed.

The first unambiguously exotic hadron, the X(3872) (dubbed χc1(3872) in the LHCb collaboration’s new taxonomy; see “What’s in a name?” panel), was discovered at the Belle experiment at KEK in Japan in 2003. Subsequently confirmed by many other experiments, its nature is still controversial. (More of that later.) Since then, there has been a rapidly growing body of experimental evidence for the existence of exotic multiquark hadrons. New states have been discovered at Belle, at the BaBar experiment at SLAC in the US, at the BESIII experiment at IHEP in China, and at the CMS and LHCb experiments at CERN (see “A bestiary of exotic hadrons“). In all cases with robust evidence, the exotic new states contain at least one heavy charm or bottom quark. The majority include two.

The key theoretical question is how the quarks are organised inside these multiquark states. Are they hadronic molecules, with two heavy hadrons bound by the exchange of light mesons? Or are they compact objects with all quarks located within a single confinement volume?

Compact candidate

The compact and molecular interpretations each provide a natural explanation for part of the data, but neither explains all. Both kinds of structures appear in nature, and certain states may be superpositions of compact and molecular states.

In the molecular case the deuteron is a good mental image. (As a bound state of a proton and a neutron, it is technically a molecular hexaquark.) In the compact interpretation, the diquark – an entangled pair of quarks with well-defined spin, colour and flavour quantum numbers – may play a crucial role. Diquarks have curious properties, whereby, for example, a strongly correlated red–green pair of quarks can behave like a blue antiquark, opening up intriguing possibilities for the interpretation of qqqq and qqqqq states.

Compact states

A clearcut example of a compact structure is the Tbb tetraquark with quark content bbud. Tbb has not yet been observed experimentally, but its existence is supported by robust theoretical evidence from several complementary approaches. As for any ground-state hadron, its mass is given to a good approximation by the sum of its constituent quark masses and their (negative) binding energy. The constituent masses implied here are effective masses that also include the quarks’ kinetic energies. The binding energy is negative as it was released when the compact state formed.

In the case of Tbb, the binding energy is expected to be so large that its mass is below all two-meson decay channels: it can only decay weakly, and must be stable with respect to the strong interaction. No such exotic hadron has yet been discovered, making Tbb a highly prized target for experimentalists. Such a large binding energy cannot be generated by meson exchange and must be due to colour forces between the very heavy b quarks. Tbb is an isoscalar with JP = 1+. Its charmed analogue, Tcc = (ccud), also known as Tcc(3875)+, was observed by LHCb in 2021 to be a whisker away from stability, with a very small binding energy and width less than 1 MeV (CERN Courier September/October 2021 p7). The big difference between the binding energies of Tbb and Tcc, which make the former stable and the latter unstable, is due to the substantially greater mass of the b quark than the c quark, as discussed in the panel above. An intermediate case, Tbc = (bcud), is very likely also below threshold for strong decay and therefore stable. It is also easier to produce and detect than Tbb and therefore extremely tempting experimentally.

Molecular pentaquarks

At the other extreme, we have states that are most probably pure hadronic molecules. The most conspicuous examples are the Pc(4312), Pc(4440) and Pc(4457) pentaquarks discovered by LHCb in 2019, and labelled according to the convention adopted by the Particle Data Group as Pcc(4312)+, Pcc(4440)+ and Pcc(4457)+. All three have quark content (ccuud) and decay into J/ψp, with an energy release of order 300 MeV. Yet, despite having such a large phase space, all three have anomalously narrow widths less than about 10 MeV. Put more simply, the pentaquarks decay remarkably slowly, given how much energy stands to be released.

But why should long life count against the pentaquarks being tightly bound and compact? In a compact (ccuud) state there is nothing to prevent the charm quark from binding with the anticharm quark, hadronising as J/ψ and leaving behind a (uud) proton. It would decay immediately with a large width.

Anomalously narrow

On the other hand, hadronic molecules such as ΣcD and ΣcD* automatically provide a decay-suppression mechanism. Hadronic molecules are typically large, so the c quark inside the Σc baryon is typically far from the c quark inside the D or D* meson. Because of this, the formation of J/ψ = (c c) has a low probability, resulting in a long lifetime and a narrow width. (Unstable particles decay randomly within fixed half-lives. According to Heisenberg’s uncertainty principle, this uncertainty on their lifetime yields a reciprocal uncertainty on their energy, which may be directly observed as the width of the peak in the spectrum of their measured masses when they are created in particle collisions. Long-lived particles exhibit sharply spiked peaks, and short-lived particles exhibit broad peaks. Though the lifetimes of strongly interacting particles are usually not measurable directly, they may be inferred from these “widths”, which are measured in units of energy.)
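The width–lifetime reciprocity invoked here is simply τ = ħ/Γ. A short numerical sketch with round illustrative widths (matching the orders of magnitude quoted in the article, not precise measurements):

```python
# Sketch of the width-lifetime reciprocity tau = hbar / Gamma.
# The widths below are round illustrative numbers, not fit results.
HBAR_MEV_S = 6.582119569e-22  # reduced Planck constant in MeV*s

def lifetime_s(width_mev):
    # convert a resonance width (MeV) into a mean lifetime (seconds)
    return HBAR_MEV_S / width_mev

tau_narrow = lifetime_s(10.0)   # Gamma ~ 10 MeV, like the narrow pentaquarks
tau_broad = lifetime_s(200.0)   # Gamma ~ 200 MeV, like a broad resonance
```

A ~10 MeV width corresponds to a lifetime of order 10⁻²³ s: narrower peaks mean longer-lived states, which is why anomalously narrow widths are such a strong hint of a decay-suppression mechanism.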

Additional evidence in favour of their molecular nature comes from the mass of Pc(4312) being just below the ΣcD production threshold, and the masses of Pc(4440) and Pc(4457) being just below the ΣcD* production threshold. This is perfectly natural. Hadronic molecules are weakly bound, so they typically only form an S-wave bound state, with no orbital angular momentum. So ΣcD, which combines a spin-1/2 baryon and a spin-0 negative-parity meson, can only form a single state with JP = 1/2–. By contrast, ΣcD*, which combines a spin-1/2 baryon and spin-1 negative-parity meson, can form two closely-spaced states with JP = 1/2– and 3/2–, with a small splitting coming from a spin–spin interaction.


The robust prediction of the JP quantum numbers makes it very straightforward in principle to kill this physical picture, if one were to measure JP values different from these. Conversely, measuring the predicted values of JP would provide a strong confirmation (see “The 23 exotic hadrons discovered at the LHC” table).

These predictions have already received substantial indirect support from the strange-pentaquark sector. The spin-parity of the Pccs(4338), which also has a narrow width below 10 MeV, has been determined by LHCb to be 1/2–, exactly as expected for a ΞcD̄ molecule (see “Strange pentaquark” figure).

The mysterious X(3872)

An example of a possible mixture of a compact state and a hadronic molecule is provided by the already mentioned X(3872) meson. Its mass is so close to the sum of the masses of a D0 meson and a D̄*0 meson that no difference has yet been established with statistical significance, but it is known to be less than about 1 MeV. It can decay to J/ψ π+π– with a branching ratio of (3.5 ± 0.9)%, releasing almost 500 MeV of energy. Yet its width is only of order 1 MeV. This is an even more striking case of relative stability in the face of naively expected instability than for the pentaquarks. At first sight, then, it is tempting to identify X(3872) as a clearcut D0D̄*0 hadronic molecule.

Particle precision

The situation is not that simple, however. If X(3872) is just a weakly-bound hadronic molecule, it is expected to be very large, of the scale of a few fermi (10–15 m). So it should be very difficult to produce it in hard reactions, requiring a large momentum transfer. Yet this is not the case. A possible resolution might come from X(3872) being a mixture of a D0D̄*0 molecular state and χc1(2P), a conventional radial excitation of P-wave charmonium, which is much more compact and is expected to have a similar mass and the same JPC = 1++ quantum numbers. Additional evidence in favour of such a mixing comes from comparing the rates of the radiative decays X(3872) → J/ψγ and X(3872) → ψ(2S)γ.

The question associated with exotic mesons and baryons can be posed crisply: is an observed state a molecule, a compact multiquark system or something in between? We have given examples of each. Definitive compact-multiquark behaviour can be confirmed if a state’s flavour-SU(3) partners are identified. This is because compact states are bound by colour forces, which are only weakly sensitive to flavour-SU(3) rotations. (Such rotations exchange up, down and strange quarks, and to a good approximation the strong force treats these light flavours equally at the energies of charmed and beautiful exotic hadrons.) For example, if X(3872) should in fact prove to be a compact tetraquark, it should have charged isospin partners that have not yet been observed.

On the experimental front, the sensitivities of LHCb, Belle II, BESIII, CMS and ATLAS continue to reap great benefits for hadron spectroscopy. Together with the proposed super τ-charm factory in China, they are virtually guaranteed to discover additional exotic hadrons, expanding our understanding of QCD in its strongly interacting regime.

The post Inside pentaquarks and tetraquarks appeared first on CERN Courier.

]]>
Feature Marek Karliner and Jonathan Rosner ask what makes tetraquarks and pentaquarks tick, revealing them to be at times exotic compact states, at times hadronic molecules and at times both – with much still to be discovered. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24_EXOTIC_feature-1-1.jpg
Data analysis in the age of AI https://cerncourier.com/a/data-analysis-in-the-age-of-ai/ Wed, 20 Nov 2024 13:50:36 +0000 https://cern-courier.web.cern.ch/?p=111424 Experts in data analysis, statistics and machine learning for physics came together from 9 to 12 September for PHYSTAT’s Statistics meets Machine Learning workshop.

The post Data analysis in the age of AI appeared first on CERN Courier.

]]>
Experts in data analysis, statistics and machine learning for physics came together from 9 to 12 September at Imperial College London for PHYSTAT’s Statistics meets Machine Learning workshop. The goal of the meeting, which is part of the PHYSTAT series, was to discuss recent developments in machine learning (ML) and their impact on the statistical data-analysis techniques used in particle physics and astronomy.

Particle-physics experiments typically produce large amounts of highly complex data. Extracting information about the properties of fundamental physics interactions from these data is a non-trivial task. The general availability of simulation frameworks makes it relatively straightforward to model the forward process of data analysis: to go from an analytically formulated theory of nature to a sample of simulated events that describe the observation of that theory for a given particle collider and detector in minute detail. The inverse process – to infer from a set of observed data what is learned about a theory – is much harder as the predictions at the detector level are only available as “point clouds” of simulated events, rather than as the analytically formulated distributions that are needed by most statistical-inference methods.

Traditionally, statistical techniques have found a variety of ways to deal with this problem, mostly centered on simplifying the data via summary statistics that can be modelled empirically in an analytical form. A wide range of ML algorithms, ranging from neural networks to boosted decision trees trained to classify events as signal- or background-like, have been used in the past 25 years to construct such summary statistics.

The broader field of ML has experienced a very rapid development in recent years, moving from relatively straightforward models capable of describing a handful of observable quantities, to neural models with advanced architectures such as normalising flows, diffusion models and transformers. These boast millions to billions of parameters that are potentially capable of describing hundreds to thousands of observables – and can now extract features from the data with an order-of-magnitude better performance than traditional approaches. 

New generation

These advances are driven by newly available computation strategies that not only calculate the learned functions, but also their analytical derivatives with respect to all model parameters, greatly speeding up training times, in particular in combination with modern computing hardware with graphics processing units (GPUs) that facilitate massively parallel calculations. This new generation of ML models offers great potential for novel uses in physics data analyses, but have not yet found their way to the mainstream of published physics results on a large scale. Nevertheless, significant progress has been made in the particle-physics community in learning the technology needed, and many new developments using this technology were shown at the workshop.

This new generation of machine-learning models offers great potential for novel uses in physics data analyses

Many of these ML developments showcase the ability of modern ML architectures to learn multidimensional distributions from point-cloud training samples to a very good approximation, even when the number of dimensions is large, for example between 20 and 100. 

A prime use-case of such ML models is an emerging statistical analysis strategy known as simulation-based inference (SBI), where learned approximations of the probability density of signal and background over the full high-dimensional observables space are used, dispensing with the notion of summary statistics to simplify the data. Many examples were shown at the workshop, with applications ranging from particle physics to astronomy, pointing to significant improvements in sensitivity. Work is ongoing on procedures to model systematic uncertainties, and no published results in particle physics exist to date. Examples from astronomy showed that SBI can give results of comparable precision to the default Markov chain Monte Carlo approach for Bayesian computations, but with orders of magnitude faster computation times.
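The core trick behind many SBI methods can be sketched in a few lines. In this toy example (all numbers and distributions invented for illustration, and a linear model standing in for the deep networks used in practice), a classifier trained to distinguish two simulated "point clouds" recovers their likelihood ratio directly from samples, with no hand-built summary statistic:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Simulated" events under two hypotheses: H1 (signal) and H0 (background).
# Each is a 1D Gaussian here; in real analyses events are high-dimensional.
x1 = rng.normal(1.0, 1.0, 5000)   # signal sample
x0 = rng.normal(0.0, 1.0, 5000)   # background sample

X = np.concatenate([x1, x0])
y = np.concatenate([np.ones(5000), np.zeros(5000)])

# Logistic regression s(x) = sigmoid(a*x + b), fitted by gradient descent
a, b = 0.0, 0.0
for _ in range(2000):
    s = 1.0 / (1.0 + np.exp(-(a * X + b)))
    a -= 0.1 * np.mean((s - y) * X)
    b -= 0.1 * np.mean(s - y)

def learned_log_ratio(x):
    # Density-ratio trick: for balanced samples, p1(x)/p0(x) = s(x)/(1 - s(x))
    s = 1.0 / (1.0 + np.exp(-(a * x + b)))
    return np.log(s / (1.0 - s))

# For these two Gaussians the analytic log-ratio is x - 0.5
print(learned_log_ratio(2.0))
```

The optimal classifier output satisfies s/(1−s) = p₁(x)/p₀(x), which is why the learned log-ratio tracks the analytic one; SBI analyses replace the linear model with normalising flows or deep networks over many observables.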

Beyond binning

A commonly used alternative approach to the full-fledged theory parameter inference from observed data is known as deconvolution or unfolding. Here the goal is publishing intermediate results in a form where the detector response has been taken out, but stopping short of interpreting this result in a particular theory framework. The classical approach to unfolding requires estimating a response matrix that captures the smearing effect of the detector on a particular observable, and applying the inverse of that to obtain an estimate of a theory-level distribution – however, this approach is challenging and limited in scope, as the inversion is numerically unstable, and requires a low dimensionality binning of the data. Results on several ML-based approaches were presented, which either learn the response matrix from modelling distributions outright (the generative approach) or learn classifiers that reweight simulated samples (the discriminative approach). Both approaches show very promising results that do not have the limitations on the binning and dimensionality of the distribution of the classical response-inversion approach.
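Why naive response-matrix inversion fails can be seen numerically in a toy sketch (an invented spectrum and Gaussian smearing, not any experiment's response): the smearing matrix is nearly singular, so inverting it amplifies statistical noise, while even a crude Tikhonov regularisation tames the result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy truth spectrum in 20 bins and a Gaussian detector-smearing (response) matrix
n = 20
x = np.arange(n)
truth = 1000.0 * np.exp(-x / 6.0)                      # falling theory-level spectrum
R = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 1.5) ** 2)
R /= R.sum(axis=0, keepdims=True)                      # columns sum to 1 (event conservation)

measured = R @ truth
noisy = measured + rng.normal(0.0, np.sqrt(measured))  # Poisson-like fluctuations

# Naive unfolding: apply the inverse response -> numerically unstable
naive = np.linalg.solve(R, noisy)

# Tikhonov-regularised unfolding: damp the tiny singular values of R
tau = 0.1
reg = np.linalg.solve(R.T @ R + tau * np.eye(n), R.T @ noisy)

err_naive = np.linalg.norm(naive - truth)
err_reg = np.linalg.norm(reg - truth)
print(np.linalg.cond(R), err_naive, err_reg)  # large condition number; err_naive >> err_reg
```

ML-based unfolding sidesteps this by learning either the forward map (the generative approach) or sample reweighting (the discriminative approach), avoiding both the coarse binning and the unstable inversion.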

A third domain where ML is facilitating great progress is that of anomaly searches, where an anomaly can either be a single observation that doesn’t fit the distribution (mostly in astronomy), or a collection of events that together don’t fit the distribution (mostly in particle physics). Several analyses highlighted both the power of ML models in such searches and the bounds from statistical theory: it is impossible to optimise sensitivity for single-event anomalies without knowing the outlier distribution, and unsupervised anomaly detectors require a semi-supervised statistical model to interpret ensembles of outliers.

A final application of machine-learned distributions that was much discussed is data augmentation – sampling a new, larger data sample from a learned distribution. If the synthetic data is significantly larger than the training sample, its statistical power will be greater, but it will derive this statistical power from the smooth interpolation of the model, potentially generating so-called inductive bias. The validity of the assumed smoothness depends on its realism in a particular setting, for which there is no generic validation strategy. The use of a generative model amounts to a tradeoff between bias and variance.

Interpretable and explainable

Beyond the various novel applications of ML, there were lively discussions on the more fundamental aspects of artificial intelligence (AI), notably on the notion of and need for AI to be interpretable or explainable. Explainable AI aims to elucidate what input information was used, and its relative importance, but this goal has no unambiguous definition. The discussion on the need for explainability centres to a large extent on trust: would you trust a discovery if it is unclear what information the model used and how it was used? Can you convince peers of the validity of your result? The notion of interpretable AI goes beyond that. It is a quality often desired by scientists, as human knowledge resulting from AI-based science is generally desired to be interpretable, for example in the form of theories based on symmetries, or structures that are simple, or “low-rank”. However, interpretability has no formal criteria, which makes it an impractical requirement. Beyond practicality, there is also a fundamental point: why should nature be simple? Why should models that describe it be restricted to being interpretable? The almost philosophical nature of this question made the discussion on interpretability one of the liveliest of the workshop, but for now without conclusion.

Human knowledge resulting from AI-based science is generally desired to be interpretable

For the longer-term future there are several interesting developments in the pipeline. In the design and training of new neural models, two techniques were shown to have great promise. The first is the concept of foundation models: very large models that are pre-trained on very large datasets to learn generic features of the data. When these pre-trained generic models are retrained to perform a specific task, they are shown to outperform purpose-trained models for that same task. The second is encoding domain knowledge in the network. Networks that have known symmetry principles encoded in the model can significantly outperform models that are generically trained on the same data.

The evaluation of systematic effects is still mostly taken care of in the statistical post-processing step. Future ML techniques may more fully integrate systematic uncertainties, for example by reducing the sensitivity to these uncertainties through adversarial training or pivoting methods. Beyond that, future methods may also integrate the currently separate step of propagating systematic uncertainties (“learning the profiling”) into the training of the procedure. A truly global end-to-end optimisation of the full analysis chain may ultimately become feasible and computationally tractable for models that provide analytical derivatives.

The post Data analysis in the age of AI appeared first on CERN Courier.

]]>
Meeting report Experts in data analysis, statistics and machine learning for physics came together from 9 to 12 September for PHYSTAT’s Statistics meets Machine Learning workshop. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24FN_Phystat-1-1.jpg
Inside pyramids, underneath glaciers https://cerncourier.com/a/inside-pyramids-underneath-glaciers/ Wed, 20 Nov 2024 13:48:19 +0000 https://cern-courier.web.cern.ch/?p=111476 Coordinated by editors Paola Scampoli and Akitaka Ariga, Cosmic Ray Muography provides an invaluable snapshot of a booming research area.

The post Inside pyramids, underneath glaciers appeared first on CERN Courier.

]]>
Muon radiography – muography for short – uses cosmic-ray muons to probe and image large, dense objects. Coordinated by editors Paola Scampoli and Akitaka Ariga of the University of Bern, the authors of this book provide an invaluable snapshot of this booming research area. From muon detectors, which differ significantly from those used in fundamental physics research, to applications of muography in scientific, cultural, industrial and societal scenarios, a broad cross section of experts describe the physical principles that underpin modern muography.

Hiroyuki Tanaka of the University of Tokyo begins the book with historical developments and perspectives. He guides readers from the first documented use of cosmic-ray muons in 1955 for rock overburden estimation, to current studies of the sea-level dynamics in Tokyo Bay using muon detectors laid on the seafloor and visionary ideas to bring muography to other planets using teleguided rovers.

Scattering methods

Tanaka limits his discussion to the muon-absorption approach to muography, which images an object by comparing the muon flux before and after – or with and without – an object. The muon-scattering approach, which was invented two decades ago, instead exploits the deflection of muons passing through matter that is due to electromagnetic interactions with nuclei. The interested reader will find several examples of the application of muon scattering in other chapters, particularly that on civil and industrial applications by Davide Pagano (Pavia) and Altea Lorenzon (Padova). Scattering methods have an edge in these fields thanks to their sensitivity to the atomic number of the materials under investigation.

Cosmic Ray Muography

Peter Grieder (Bern), who sadly passed away shortly before the publication of the book, gives an excellent and concise introduction to the physics of cosmic rays, which Paolo Checchia (Padova) expands on, delving into the physics of interactions between muons and matter. Akira Nishio (Nagoya University) describes the history and physical principles of nuclear emulsions. These detectors played an important role in the history of particle physics, but are not very popular now as they cannot provide real-time information. Though modern detectors are a more common choice today, nuclear emulsions still find a niche in muography thanks to their portability. The large accumulation of data from muography experiments requires automatic analysis, for which dedicated scanning systems have been developed. Nishio includes a long and insightful discussion on how the nuclear-emulsions community reacted to supply-chain evolution. The transition from analogue to digital cameras meant that most film-producing firms changed their core business or simply disappeared, and researchers had to take a large part of the production process into their own hands.

Fabio Ambrosino and Giulio Saracino of INFN Napoli next take on the task of providing an overview of the much broader and more popular category of real-time detectors, such as those commonly used in experiments at particle colliders. Elaborating on the requirements set by the cosmic rate and environmental factors, their chapter explains why scintillator and gas-based tracking devices are the most popular options in muography. They also touch on more exotic detector options, including Cherenkov telescopes and cylindrical tracking detectors that fit in boreholes.

In spite of their superficial similarity, methods that are common in X-ray imaging need quite a lot of ingenuity to be adapted to the context of muography. For example, the source cannot be controlled in muography, and is not monochromatic. Both energy and direction are random and have a very broad distribution, and one cannot afford to take data from more than a few viewpoints. Shogo Nagahara and Seigo Miyamoto of the University of Tokyo provide a specialised but intriguing insight into 3D image reconstruction using filtered back-projection.

A broad cross section of experts describe the physical principles that underpin modern muography

Geoscience is among the most mature applications of muography. While Jacques Marteau (Claude Bernard University Lyon 1) provides a broad overview of decades of activities spanning from volcano studies to the exploration of natural caves, Ryuichi Nishiyama (Tokyo) explores recent studies where muography provided unique data on the shape of the bedrock underneath two major glaciers in the Swiss Alps.

One of the greatest successes of muography is the study of pyramids, which is given ample space in the chapter on archaeology by Kunihiro Morishima (Nagoya). In 1971, Nobel-laureate Luis Alvarez’s team pioneered the use of muography in archaeology during an investigation at the pyramid of Khafre in Giza, Egypt, motivated by his hunch that an unknown large chamber could be hiding in the pyramid. Their data convincingly excluded that possibility, but the attempt can be regarded as launching modern muography (CERN Courier May/June 2023 p32). Half a century later, muography was reintroduced to the exploration of Egyptian pyramids thanks to ScanPyramids – an international project led by particle-physics teams in France and Japan under the supervision of the Heritage Innovation and Preservation Institute. ScanPyramids aims at systematically surveying all of the main pyramids in the Giza complex, and recently made headlines by finding a previously unknown corridor-shaped cavity in Khufu’s Great Pyramid, which is the second largest pyramid in the world. To support the claim, which was initially based on muography alone, the finding was cross-checked with the more traditional surveying method based on ground penetrating radar, and finally confirmed via visual inspection through an endoscope.

Pedagogical focus

This book is a precious resource for anyone approaching muography, from students to senior scientists, and potential practitioners from both academic and industrial communities. There are some other excellent books that have already been published on the same topic, and that have showcased original research, but Cosmic Ray Muography’s pedagogical focus, which prioritises the explanation of timeless first principles, will not become outdated any time soon. Given each chapter was written independently, there is a certain degree of overlap and some incoherence in terminology, but this gives the reader valuable exposure to different perspectives about what matters most in this type of research.

The post Inside pyramids, underneath glaciers appeared first on CERN Courier.

]]>
Review Coordinated by editors Paola Scampoli and Akitaka Ariga, Cosmic Ray Muography provides an invaluable snapshot of a booming research area. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24_REV_muons-1.jpg
A rich harvest of results in Prague https://cerncourier.com/a/a-rich-harvest-of-results-in-prague/ Wed, 20 Nov 2024 13:34:58 +0000 https://cern-courier.web.cern.ch/?p=111420 The 42nd international conference on high-energy physics reported progress across all areas of high-energy physics.

The post A rich harvest of results in Prague appeared first on CERN Courier.

]]>
The 42nd international conference on high-energy physics (ICHEP) attracted almost 1400 participants to Prague in July. Expectations were high, with the field on the threshold of a defining moment, and ICHEP did not disappoint. A wealth of new results showed significant progress across all areas of high-energy physics.

With the long shutdown on the horizon, the third run of the LHC is progressing in earnest. Its high-availability operation and mastery of operational risks were highly praised. Run 3 data is of immense importance as it will be the dataset that experiments will work with for the next decade. With the newly collected data at 13.6 TeV, the LHC experiments showed new measurements of Higgs and di-electroweak-boson production, though of course most of the LHC results were based on the Run 2 (2015 to 2018) dataset, which is by now impeccably well calibrated and understood. This also allowed ATLAS and CMS to bring in-depth improvements to reconstruction algorithms.

AI algorithms

A highlight of the conference was the improvements brought by state-of-the-art artificial-intelligence algorithms such as graph neural networks, both at the trigger and reconstruction level. A striking example of this is the ATLAS and CMS flavour-tagging algorithms, which have improved their rejection of light jets by a factor of up to four. This has important consequences. Two outstanding examples are: di-Higgs-boson production, which is fundamental for the measurement of the Higgs boson self-coupling (CERN Courier July/August 2024 p7); and the Higgs boson’s Yukawa coupling to charm quarks. Di-Higgs-boson production should be independently observable by both general-purpose experiments at the HL-LHC, and an observation of the Higgs boson’s coupling to charm quarks is getting closer to being within reach.

The LHC experiments continue to push the limits of precision at hadron colliders. CMS and LHCb presented new measurements of the weak mixing angle. The per-mille precision reached is close to that of LEP and SLD measurements (CERN Courier September/October 2024 p29). ATLAS presented the most precise measurement to date (0.8%) of the strong coupling constant extracted from the measurement of the transverse momentum differential cross section of Drell–Yan Z-boson production. LHCb provided a comprehensive analysis of the B0 → K*0μ+μ– angular distributions, which had previously presented discrepancies at the level of 3σ. Taking into account long-distance contributions significantly weakens the tension down to 2.1σ.

Pioneering the highest luminosities ever reached at colliders (setting a record at 4.7 × 10³⁴ cm⁻² s⁻¹), SuperKEKB has been facing challenging conditions with repeated sudden beam losses. This is currently an obstacle to further progress to higher luminosities. Possible causes have been identified and are currently under investigation. Meanwhile, with the already substantial data set collected so far, the Belle II experiment has produced a host of new results. In addition to improved CKM angle measurements (alongside LHCb), in particular of the γ angle, Belle II (alongside BaBar) presented interesting new insights into the long-standing |Vcb| and |Vub| inclusive versus exclusive measurements puzzle (CERN Courier July/August 2024 p30), with new |Vcb| exclusive measurements that significantly reduce the previous 3σ tension.

Maurizio Pierini

ATLAS and CMS furthered their systematic journey in the search for new phenomena to leave no stone unturned at the energy frontier, with 20 new results presented at the conference. This landmark outcome of the LHC puts further pressure on the naturalness paradigm.

A highlight of the conference was the overall progress in neutrino physics. Accelerator-based experiments NOvA and T2K presented a first combined measurement of the mass difference, neutrino mixing and CP parameters. Neutrino telescopes IceCube with DeepCore and KM3NeT with ORCA (Oscillation Research with Cosmics in the Abyss) also presented results with impressive precision. Neutrino physics is now at the dawn of a bright new era of precision with the next-generation accelerator-based long baseline experiments DUNE and Hyper Kamiokande, the upgrade of DeepCore, the completion of ORCA and the medium baseline JUNO experiment. These experiments will bring definitive conclusions on the measurement of the CP phase in the neutrino sector and the neutrino mass hierarchy – two of the outstanding goals in the field.

The KATRIN experiment presented a new upper limit on the effective electron–anti-neutrino mass of 0.45 eV, well en route towards their ultimate sensitivity of 0.2 eV. Neutrinoless double-beta-decay search experiments KamLAND-Zen and LEGEND-200 presented limits on the effective neutrino mass of approximately 100 meV; the sensitivity of the next-generation experiments LEGEND-1T, KamLAND-Zen-1T and nEXO should reach 20 meV and either fully exclude the inverted ordering hypothesis or discover this long-sought process. Progress on the reactor neutrino anomaly was reported, with recent fission data suggesting that the fluxes are overestimated, thus weakening the significance of the anti-neutrino deficits.

Neutrinos were also a highlight for direct-dark-matter experiments as XENON announced the observation of nuclear recoil events from ⁸B solar neutrino coherent elastic scattering on nuclei, thus signalling that experiments are now reaching the neutrino fog. The conference also highlighted the considerable progress across the board on the roadmap laid out by Kathryn Zurek at the conference to search for dark matter in an extraordinarily large range of possibilities, spanning 89 orders of magnitude in mass from 10⁻²³ eV to 10⁵⁷ GeV. The roadmap includes cosmological and astrophysical observations, broad searches at the energy and intensity frontier, direct searches at low masses to cover relic abundance motivated scenarios, building a suite of axion searches, and pursuing indirect-detection experiments.

Lia Merminga and Fabiola Gianotti

Neutrinos also made the headlines in multi-messenger astrophysics experiments with the announcement by the KM3NeT ARCA (Astroparticle Research with Cosmics in the Abyss) collaboration of a muon-neutrino event that could be the most energetic ever found. The muon produced in the neutrino interaction is compatible with an energy of approximately 100 PeV, thus opening a fascinating window on astrophysical processes at energies well beyond the reach of colliders. The conference showed that we are now well within the era of multi-messenger astrophysics, via beautiful neutrino, gamma-ray and gravitational-wave results.

The conference saw new bridges being built across fields. The birth of collider-neutrino physics with the beautiful results from FASERν and SND fills the gap in neutrino–nucleon cross sections between accelerator neutrinos and neutrino astronomy. ALICE and LHCb presented new results on ³He production that complement the AMS results. Astrophysical ³He could signal the annihilation of dark matter. ALICE also presented a broad, comprehensive review of the progress in understanding strongly interacting matter at extreme energy densities.

The highlight in the field of observational cosmology was the recent data from DESI, the Dark Energy Spectroscopic Instrument in operation since 2021, which bring splendid new data on baryon acoustic oscillation measurements. These precious new data agree with previous indirect measurements of the Hubble constant, keeping the tension with direct measurements in excess of 2.5σ. In combination with CMB measurements, the DESI measurements also set an upper limit on the sum of neutrino masses at 0.072 eV, in tension with the inverted ordering of neutrino masses hypothesis. This limit is dependent on the cosmological model.

In everyone’s mind at the conference, and indeed across the domain of high-energy physics, it is clear that the field is at a defining moment in its history: we will soon have to decide what new flagship project to build. To this end, the conference organised a thrilling panel discussion featuring the directors of all the major laboratories in the world. “We need to continue to be bold and ambitious and dream big,” said Fermilab’s Lia Merminga, summarising the spirit of the discussion.

“As we have seen at this conference, the field is extremely vibrant and exciting,” said CERN’s Fabiola Gianotti at the conclusion of the panel. In these defining times for the future of our field, ICHEP 2024 was an important success. The progress in all areas is remarkable and manifest through the outstanding number of beautiful new results shown at the conference.

The post A rich harvest of results in Prague appeared first on CERN Courier.

]]>
Meeting report The 42nd international conference on high-energy physics reported progress across all areas of high-energy physics. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24FN_ICHEP1-2.jpg
An obligation to engage https://cerncourier.com/a/an-obligation-to-engage/ Wed, 20 Nov 2024 13:29:45 +0000 https://cern-courier.web.cern.ch/?p=111403 As the CERN & Society Foundation turns 10, founding Director-General Rolf-Dieter Heuer argues that physicists have a duty to promote curiosity and evidence-based critical thinking.

The post An obligation to engage appeared first on CERN Courier.

]]>
Science is for everyone, and everyone depends on science, so why not bring more of it to society? That was the idea behind the CERN & Society Foundation, established 10 years ago.

The longer I work in science, and the more people I talk to about science, the more I become convinced that everyone is interested in science whether they realise it or not. Many have emerged from their school education with a belief that science is hard and not for them, but they nevertheless ask the very same questions that those at the cutting edge of fundamental physics research ask, and that people have been asking since time immemorial: what is the universe made of, where did we come from and where are we going? Such curiosity is part of what it is to be human. On a more prosaic level, science and technology play an ever-growing role in modern society, and it is incumbent on all of us to understand their consequences and engage in the debate about their uses.

The power to inspire

When I tell people about CERN, more often than not their eyes light up with excitement and they want to know more. Experiences like this show that the scientific community needs to do all it can to engage with society at large in a fast-changing world. We need to bring people closer to an understanding of science, of how science works and why critical evidence-based thinking is vital in every walk of life, not only in science.

Laboratories like CERN are extraordinary places where people from all over the world come together to explore nature’s mysteries. I believe that when we come together like this, we have the power to inspire and an obligation to use this power to address the critical challenge of public engagement in science and technology. CERN has always taken this responsibility seriously. Ten years ago, it added a new string to its bow in the form of the CERN & Society Foundation. Through philanthropy, the foundation spreads CERN’s spirit of scientific curiosity.

Rolf-Dieter Heuer

The CERN & Society Foundation helps the laboratory to deepen its impact beyond the core mission of fundamental physics research. Projects supported by the foundation encourage talented young people from around the globe to follow STEM careers, catalyse innovation for the benefit of all, and inspire wide and diverse audiences. From training high-school teachers to producing medical isotopes, donors’ generosity brings research excellence to all corners of society.

The foundation’s work rests on three pillars: education and outreach, innovation and knowledge exchange, and culture and creativity. Allow me to highlight one example from each pillar that I particularly like.

One of the flagships of the education and outreach pillar is the Beamline for Schools (BL4S) competition. Launched in 2014, BL4S invites groups of high-school students from around the world to submit a proposal for an experiment at CERN. The winning teams are invited to come to CERN to carry out their experiment under expert supervision from CERN scientists. More recently, the DESY laboratory has joined the programme and also welcomes high-school groups to work on a beamline there. Project proposals have ranged from fundamental physics to applied ideas such as cosmic-ray tomography of the pyramids by measuring muon transmission through limestone (see “Inside pyramids, underneath glaciers“). To date, some 20,000 students have taken part in the competition, with 25 winning teams coming to CERN or DESY to carry out their experiments (see “From blackboard to beamline“).

Zenodo is a great example of the innovation and knowledge-exchange pillar. It provides a repository for free and easy access to research results, data and analysis code, thereby promoting the ideal of open science, which is at the very heart of scientific progress. Zenodo taps into CERN’s long-standing tradition and know-how in sharing and preserving scientific knowledge for the benefit of all. The scientific community can now store data in a non-commercial environment, freely available for society at large. Zenodo goes far beyond high-energy physics and played an important role during the COVID-19 pandemic.

Mutual inspiration

Our flagship culture-and-creativity initiative is the world-leading Arts at CERN programme, which recognises the creativity inherent in both the arts and the sciences, and harnesses them to generate benefits for both. Participating artists and scientists find mutual inspiration, going on to inspire audiences around the world.

“In an era where society needs science more than ever, inspiring new generations to believe in their dreams and giving them the tools and space to change the world is essential,” said one donor recently. It is encouraging to hear such sentiments, and there’s no doubt that the CERN & Society Foundation should feel satisfied with its first decade. Through the examples I have cited above, and many more that I have not mentioned, the foundation has made a tangible difference. It is, however, but one voice. Scientists and scientific organisations in prominent positions should take inspiration from the foundation: the world needs more ambassadors for science. On that note, all that remains is for me to say happy birthday, CERN & Society Foundation.

The post An obligation to engage appeared first on CERN Courier.

]]>
Opinion As the CERN & Society Foundation turns 10, founding Director-General Rolf-Dieter Heuer argues that physicists have a duty to promote curiosity and evidence-based critical thinking. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24_VIEW_summer-1-1.jpg
Combining clues from the Higgs boson https://cerncourier.com/a/combining-clues-from-the-higgs-boson/ Wed, 20 Nov 2024 13:24:29 +0000 https://cern-courier.web.cern.ch/?p=111441 Following the discovery of the Higgs boson in 2012, the CMS collaboration has been exploring its properties with ever-increasing precision.

The post Combining clues from the Higgs boson appeared first on CERN Courier.

]]>
CMS figure 1

Following the discovery of the Higgs boson in 2012, the CMS collaboration has been exploring its properties with ever-increasing precision. Data recorded during LHC Run 2 have been used to measure differential production cross-sections of the Higgs boson in different decay channels – a pair of photons, two Z bosons, two W bosons and two tau leptons – and as functions of different observables. These results have now been combined to provide measurements of spectra at the ultimate achievable precision.

Differential cross-section measurements provide the most model-independent way to study Higgs-boson production at the LHC, for which theoretical predictions exist up to next-to-next-to-next-to-leading order in perturbative QCD. One of the most important observables is the transverse momentum (figure 1). This distribution is particularly sensitive both to modelling issues in Standard Model (SM) predictions and to possible contributions from physics beyond the SM (BSM).

In the new CMS result, two frameworks are used to test for hints of BSM: the κ-formalism and effective field theories.

The κ-formalism assumes that new physics effects would only affect the couplings between the Higgs boson and other particles. These new physics effects are then parameterised in terms of coefficients, κ. Using this approach, two-dimensional constraints are set on κc (the coupling coefficient of the Higgs boson to the charm quark), κb (Higgs to bottom) and κt (Higgs to top). None show significant deviations from the SM at present.
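As a sketch of what this parameterisation looks like (the article does not write it out), each κ is conventionally defined as the ratio of a Higgs coupling to its Standard Model value:

```latex
% Conventional kappa-framework definition (not given explicitly in the article):
% g_i is the coupling of the Higgs boson to particle i, g_i^SM its SM prediction.
\kappa_i \;=\; \frac{g_i}{g_i^{\mathrm{SM}}}, \qquad i = c,\, b,\, t,\, \dots
```

In this convention κ_i = 1 for every coupling recovers the Standard Model, so a fit that significantly prefers κ_i ≠ 1 would signal new physics in that coupling.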

CMS figure 2

Effective field theories parametrise deviations from the SM by supplementing the Lagrangian with higher-dimensional operators and their associated Wilson coefficients (WCs). The effect of the operators is suppressed by powers of the putative new-physics energy scale, Λ. Measurements of WCs that differ from zero may hint at BSM physics.
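Schematically, in the standard SMEFT notation (an addition for orientation, not spelled out in the article), the supplemented Lagrangian at the leading, dimension-six order reads:

```latex
% Standard SMEFT expansion; c_i are the Wilson coefficients, O_i^(6) the
% dimension-six operators, and Lambda the assumed new-physics scale.
\mathcal{L}_{\mathrm{eff}}
  \;=\; \mathcal{L}_{\mathrm{SM}}
  \;+\; \sum_i \frac{c_i}{\Lambda^{2}}\, \mathcal{O}_i^{(6)}
  \;+\; \mathcal{O}\!\left(\Lambda^{-4}\right)
```

The 1/Λ² prefactor is the suppression by powers of the new-physics scale mentioned above: the heavier the new physics, the smaller its imprint on the measured distributions.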

The CMS differential cross-section measurements are parametrised, and constraints are derived on the WCs from a simultaneous fit. In the most challenging case, a set of 31 WCs is used as input to a principal-component analysis procedure in which the most sensitive directions in the data are identified. These directions (expressed as linear combinations of the WCs) are then constrained in a simultaneous fit (figure 2). In the upper panel, the limits on the WCs are converted to lower limits on the new physics scale. The results agree with SM predictions, with a moderate 2σ tension present in one of the directions (EV5). Here the major contribution is provided by the cHq3 coefficient, which mostly affects vector-boson fusion, VH production at high Higgs-boson transverse momenta (V = W, Z) and W-boson decays.
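The principal-component step described above can be illustrated with a toy calculation. The sketch below is not the CMS analysis: the Fisher-like matrix is fabricated, the dimension is reduced from 31 to 5 to keep the demo small, and all variable names are hypothetical. It only shows how an eigen-decomposition picks out the best-constrained linear combinations of Wilson coefficients.

```python
import numpy as np

# Toy stand-in for a Fisher information matrix over Wilson coefficients.
# In the real analysis this would come from the likelihood of the
# differential cross-section measurements; here we fabricate a small
# symmetric positive-definite example purely for illustration.
rng = np.random.default_rng(0)
n_wc = 5  # the CMS fit uses 31 Wilson coefficients; 5 keeps the demo small
a = rng.normal(size=(n_wc, n_wc))
fisher = a @ a.T + n_wc * np.eye(n_wc)  # symmetric positive definite

# Principal-component analysis: eigenvectors of the Fisher matrix give
# linear combinations of the coefficients; large eigenvalues correspond
# to tightly constrained ("most sensitive") directions in the data.
eigvals, eigvecs = np.linalg.eigh(fisher)
order = np.argsort(eigvals)[::-1]              # most constrained first
directions = eigvecs[:, order].T               # each row: one combination of WCs
uncertainties = 1.0 / np.sqrt(eigvals[order])  # 1-sigma spread along each direction

for i, (d, u) in enumerate(zip(directions, uncertainties)):
    print(f"EV{i}: sigma = {u:.3f}, combination = {np.round(d, 2)}")
```

Only the leading, well-constrained directions are then quoted as limits; poorly constrained combinations (small eigenvalues, large σ) carry little information from the data.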

The combined results not only provide highly precise measurements of Higgs-boson production, but also place stringent constraints on possible deviations from the SM, deepening our understanding while leaving open the possibility of new physics at higher precision or energy scales.

The post Combining clues from the Higgs boson appeared first on CERN Courier.

]]>
News Following the discovery of the Higgs boson in 2012, the CMS collaboration has been exploring its properties with ever-increasing precision. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24_EF_CMS_feature-1-1.jpg
Dignitaries mark CERN’s 70th anniversary https://cerncourier.com/a/dignitaries-mark-cerns-70th-anniversary/ Wed, 20 Nov 2024 13:22:30 +0000 https://cern-courier.web.cern.ch/?p=111412 On 1 October a high-level ceremony at CERN marked 70 years of science, innovation and collaboration.

The post Dignitaries mark CERN’s 70th anniversary appeared first on CERN Courier.

]]>
On 1 October a high-level ceremony at CERN marked 70 years of science, innovation and collaboration. In attendance were 38 national delegations, including eight heads of state or government and 13 ministers, along with many scientific, political and economic leaders who demonstrated strong support for CERN’s mission and future ambition. “CERN has become a global hub because it rallied Europe, and this is even more crucial today,” said president of the European Commission Ursula von der Leyen. “China is planning a 100 km collider to challenge CERN’s global leadership. Therefore, I am proud that we have financed the feasibility study for CERN’s Future Circular Collider. As the global science race is on, I want Europe to switch gear.” CERN’s year-long 70th anniversary programme has seen more than 100 events organised in 63 cities in 28 countries, bringing together thousands of people to discuss the wonders and applications of particle physics. “I am very honoured to welcome representatives from our Member and Associate Member States, our Observers and our partners from all over the world on this very special day,” said CERN Director-General Fabiola Gianotti. “CERN is a great success for Europe and its global partners, and our founders would be very proud to see what CERN has accomplished over the seven decades of its life.”

The post Dignitaries mark CERN’s 70th anniversary appeared first on CERN Courier.

]]>
News On 1 October a high-level ceremony at CERN marked 70 years of science, innovation and collaboration. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24_NA_70th-1-1.jpg
NA62 observes its golden decay https://cerncourier.com/a/na62-observes-its-golden-decay/ Wed, 20 Nov 2024 13:21:18 +0000 https://cern-courier.web.cern.ch/?p=111416 The measurement is the most precise to date and about 50% higher than the SM prediction.

The post NA62 observes its golden decay appeared first on CERN Courier.

]]>
In a game of snakes and ladders, players move methodically up the board, occasionally encountering opportunities to climb a ladder. The NA62 experiment at CERN is one such opportunity. Searching for ultra-rare decays at colliders and fixed-target experiments like NA62 can offer a glimpse at energy scales an order of magnitude higher than is directly accessible when creating particles in a frontier machine.

The trick is to study hadron decays that are highly suppressed by the GIM mechanism (see “Charming clues for existence“). Should massive particles beyond the Standard Model (SM) exist at the right energy scale, they could disrupt the delicate cancellations expected in the SM by making brief virtual appearances according to the limits imposed by Heisenberg’s uncertainty principle. In a recent featured article, Andrzej Buras (Technical University of Munich) identified the six most promising rare decays where new physics might be discovered before the end of the decade (CERN Courier July/August 2024 p30). Among them is K+→ π+νν, the ultra-rare decay sought by NA62. In the SM, fewer than one K+ in 10 billion decays this way, requiring the team to exercise meticulous attention to detail in excluding backgrounds. The collaboration has now announced that it has observed the process with 5σ significance.

“This observation is the culmination of a project that started more than a decade ago,” says spokesperson Giuseppe Ruggiero of INFN and the University of Florence. “Looking for effects in nature that have probabilities of happening of the order of 10⁻¹¹ is both fascinating and challenging. After rigorous and painstaking work, we have finally seen the process NA62 was designed and built to observe.”

In the NA62 experiment, kaons are produced by colliding a high-intensity proton beam from CERN’s Super Proton Synchrotron with a stationary beryllium target. Almost a billion secondary particles are produced each second. Of these, about 6% are positively charged kaons that are tagged and matched with positively charged pions from the decay K+→ π+νν, with the neutrinos escaping undetected. Upgrades to NA62 during Long Shutdown 2 increased the experiment’s signal efficiency while maintaining its sample purity, allowing the collaboration to double the expected signal of their previous measurement using new data collected between 2021 and 2022. A total of 51 events pass the stringent selection criteria, over an expected background of 18 +3/–2 events, definitively establishing the existence of this decay for the first time.

NA62 measures the branching ratio for K+→ π+νν to be (13.0 +3.3/–2.9) × 10⁻¹¹ – the most precise measurement to date and about 50% higher than the SM prediction, though compatible with it within 1.7σ at the current level of precision. NA62’s full data set will be required to test the validity of the SM in this decay. Data taking is ongoing.

The post NA62 observes its golden decay appeared first on CERN Courier.

]]>
News The measurement is the most precise to date and about 50% higher than the SM prediction. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24_NA_NA62-1-1.jpg
Using U-spin to squeeze CP violation https://cerncourier.com/a/using-u-spin-to-squeeze-cp-violation/ Wed, 20 Nov 2024 13:19:45 +0000 https://cern-courier.web.cern.ch/?p=111437 The LHCb collaboration has undertaken a new study of B → DD decays using data from LHC Run 2.

The post Using U-spin to squeeze CP violation appeared first on CERN Courier.

]]>
LHCb figure 1

The LHCb collaboration has undertaken a new study of B → DD decays using data from LHC Run 2. In the case of B0→ D+D– decays, the analysis excludes CP symmetry at a confidence level greater than six standard deviations – a first in the analysis of a single decay mode.

The study of differences between matter and antimatter (CP violation) is a core aspect of the physics programme at LHCb. Measurements of CP violation in decays of neutral B0 mesons play a crucial role in the search for physics beyond the Standard Model thanks to the ability of the B0 meson to oscillate into its antiparticle, the B̄0 meson. As experimental precision increases, improved control over the magnitude of hadronic effects becomes important; this is a major challenge in most decay modes. In this measurement, a neutral B meson decays to two charm D mesons – an interesting topology that offers a method to control these high-order hadronic contributions from the Standard Model via the concept of U-spin symmetry.

In the new analysis, B0→ D+D– and Bs0→ Ds+Ds– are studied simultaneously. U-spin symmetry exchanges the spectator down quarks in the first decay with strange quarks to form the second decay. A joint analysis therefore strongly constrains uncertainties related to hadronic matrix elements by relating CP-violation and branching-fraction measurements in the two decay channels.

In both decays, the same final state is accessible to both matter and antimatter states of the B0 or Bs0 meson, enabling interference between two decay paths: the direct decay of the meson to the final state; and a decay after the meson has oscillated into its antiparticle counterpart. The time-dependent decay rate of each flavour (matter or antimatter) of the meson depends on CP-violating effects and is parameterised in terms of the fundamental properties of the B mesons and the CP-violating weak phases β and βs, for B0 and Bs0 decays, respectively. The tree-level and exchange Feynman diagrams contributing to this decay process, which in turn depend on specific values of the terms in the Cabibbo–Kobayashi–Maskawa quark-mixing matrix, determine the expected value of the β(s) phases. This matrix encodes our best understanding of CP-violating effects within the Standard Model, and testing its expected properties is a crucial closure test of this theoretical framework.

The study of differences between matter and antimatter is a core aspect of the physics programme at LHCb

The analysis uses flavour tagging to identify the matter or antimatter flavour of the neutral B meson at its production and thus allows the determination of the decay path – a key task in time-dependent measurements of CP violation. The flavour-tagging algorithms exploit the fact that b and b̄ quarks are almost exclusively produced in pairs in pp collisions. When the b quark forms a B meson (and similarly for its antimatter equivalent), additional particles are produced in the fragmentation process of the pp collision. From the charges and species of these particles, the flavour of the signal B meson at production can be inferred. This information is combined with the reconstructed position of the decay vertex of the meson, allowing the flavour-tagged decay-time distribution of each analysed flavour to be measured.

Figure 1 shows the asymmetry between the decay-time distributions of the B0 and the B̄0 mesons for the B0→ D+D– decay mode. Alongside the Bs0→ Ds+Ds– data, these results represent the most precise single measurements of the CP-violation parameters in their respective channels. Results from the two decay modes are used in combination with other B → DD measurements to precisely determine Standard Model parameters.
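The asymmetry plotted in such figures is conventionally parameterised as follows (a textbook formula added here for orientation, not quoted from the article; S and C are the CP-violation parameters and Δm the B0 oscillation frequency):

```latex
% Standard time-dependent CP asymmetry for a final state f accessible
% to both flavours; Gamma denotes the flavour-tagged decay rate.
A_{C\!P}(t)
  \;=\; \frac{\Gamma_{\bar{B}^0 \to f}(t) \;-\; \Gamma_{B^0 \to f}(t)}
             {\Gamma_{\bar{B}^0 \to f}(t) \;+\; \Gamma_{B^0 \to f}(t)}
  \;=\; S \sin(\Delta m\, t) \;-\; C \cos(\Delta m\, t)
```

The oscillatory S term arises from the interference between the direct and the mixed decay paths described above, while a non-zero C would indicate direct CP violation.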

The post Using U-spin to squeeze CP violation appeared first on CERN Courier.

]]>
News The LHCb collaboration has undertaken a new study of B → DD decays using data from LHC Run 2. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24_EF_LHCb_feature-1-1.jpg
From blackboard to beamline https://cerncourier.com/a/from-blackboard-to-beamline/ Wed, 20 Nov 2024 13:18:55 +0000 https://cern-courier.web.cern.ch/?p=111448 To celebrate the 10th anniversary of Beamline for Schools, the Courier caught up with past winners whose lives were impacted by the competition.

The post From blackboard to beamline appeared first on CERN Courier.

]]>
BL4S alumni

High-school physics curricula don’t include much particle physics. The Beamline for Schools (BL4S) competition seeks to remedy this by offering high-school students the chance to turn CERN or DESY into their own laboratory. Since 2014, more than 20,000 students from 2750 teams in 108 countries have competed in BL4S, with 25 winning teams coming to the labs to perform experiments they planned from blackboard to beamline. Though, at 10 years old, the competition is still young, multiple career trajectories have already been influenced, with the impact radiating out into participants’ communities of origin.

For Hiroki Kozuki, a member of a winning team from Switzerland in 2020, learning the fundamentals of particle physics while constructing his team’s project proposal was what first sparked his interest in the subject.

“Our mentor gave us after-school classes on particle physics, fundamentals, quantum mechanics and special relativity,” says Kozuki. “I really felt as though there was so much more depth to physics. I still remember this one lecture where he taught us about the fundamental forces and quarks… It’s like he just pulled the tablecloth out from under my feet. I thought: nature is so much more beautiful when I see all these mechanisms underneath it that I didn’t know existed. That’s the moment where I got hooked on particle physics.” Kozuki will soon graduate from Imperial College London, and hopes to pursue a career in research.

Sabrina Giorgetti, from an Italian team, tells a similar story. “I can say confidently that the reason I chose physics for my bachelor’s, master’s and PhD was because of this experience.” One of the competition’s earliest winners from back in 2015, Giorgetti is now working on the CMS experiment for her PhD. One of her most memorable experiences from BL4S was getting to know the other winning team, who were from South Africa. This solidified her decision to pursue a career in academia.

“You really feel like you can reach out and collaborate with people all over the world, which is something I find truly amazing,” she says. “Now it’s even more international than it was nine years ago. I learnt at BL4S that if you’re interested in research at a place like CERN, it’s not only about physics. It may look like that from the outside, but it’s also engineering, IT and science communication – it’s a very broad world.”

The power of collaboration

As well as getting hands-on with the equipment, one of the primary aims of BL4S is to encourage students to collaborate in a way they wouldn’t in a typical high-school context. While physics experiments in school are usually conducted in pairs, BL4S allows students to work in larger teams, as is common in professional and research environments. The competition provides the chance to explore uncharted territory, rather than repeating timeworn experiments in school.

2023 winner Isabella Vesely from the US is now majoring in physics, electrical engineering and computer science at MIT. Alongside trying to fix their experiment prior to running it on the beamline, her most impactful memories involve collaborating with the other winning team from Pakistan. “We overcame so many challenges with collaboration,” explains Vesely. “They were from a completely different background to us, and it was very cool to talk to them about the experiment, our shared interest in physics and get to know each other personally. I’m still in touch with them now.”

One fellow 2023 winner is just down the road at Harvard. Zohaib Abbas, a member of the winning Pakistan team that year, is now majoring in physics. “In Pakistan, there weren’t any physical laboratories, so nothing was hands-on and all the physics was theoretical,” he says, recalling his shock at the US team’s technical skills, which included 3D printing and coding. After his education, Abbas wants to bring some of this knowledge back to Pakistan in the hopes of growing the physics community in his hometown. “After I got into BL4S, there have been hundreds of people in Pakistan who have been reaching out to me because they didn’t know about this opportunity. I think that BL4S is doing a really great job at exposing people to particle physics.”

All of the students recalled the significant challenge of ensuring that their instruments functioned on one of CERN’s or DESY’s beamlines. While the project seemed a daunting task at first, the participants enjoyed following the process from start to finish, from the initial idea through to the data collection and analysis.

“It was really exciting to see the whole process in such a short timescale,” said Vesely. “It’s pretty complicated seeing all the work that’s already been done at these experiments, so it’s really cool to contribute a small piece of data and integrate that with everything else.”

Kozuki concurs. Though only he went on to study physics, with teammates branching off into subjects ranging from mathematics to law and medicine, they still plan to get together and take another crack at the data they compiled in 2020. “We want to take another look and see if we find anything we didn’t see before. These projects go on far beyond those two weeks, and the team that you worked with are forever connected.”

For Kozuki, it’s all about collaboration. “I want to be in a field where everyone shares this fundamental desire to crack open some mysteries about the universe. I think that this incremental contribution to science is a very noble motivation. It’s one I really felt when working at CERN. Everyone is genuinely so excited to do their work, and it’s such an encouraging environment. I learnt so much about particle physics, the accelerators and the detectors, but I think those are somewhat secondary compared to the interpersonal connections I developed at BL4S. These are the sorts of international collaborations that accelerate science, and it’s something I want to be a part of.”

The post From blackboard to beamline appeared first on CERN Courier.

]]>
Careers To celebrate the 10th anniversary of Beamline for Schools, the Courier caught up with past winners whose lives were impacted by the competition. https://cerncourier.com/wp-content/uploads/2024/11/CCNovDec24_CAR_BL4S_feature-1.jpg
FCC builds momentum in San Francisco https://cerncourier.com/a/fcc-builds-momentum-in-san-francisco/ Wed, 20 Nov 2024 11:06:24 +0000 https://cern-courier.web.cern.ch/?p=111427 FCC Week 2024 convened more than 450 scientists, researchers and industry leaders in San Francisco with the aim of engaging the wider scientific community, in particular in North America.

The post FCC builds momentum in San Francisco appeared first on CERN Courier.

]]>
The Future Circular Collider (FCC) is envisaged to be a multi-stage facility for exploring the energy and intensity frontiers of particle physics. An initial electron–positron collider phase (FCC-ee) would focus on ultra-precise measurements at the centre-of-mass energies required to create Z bosons, W-boson pairs, Higgs bosons and top-quark pairs, followed by proton and heavy-ion collisions in a hadron-collider phase (FCC-hh), which would probe the energy frontier directly. As recommended by the 2020 update of the European strategy for particle physics, a feasibility study for the FCC is in full swing. Following the submission to the CERN Council of the study’s midterm report earlier this year (CERN Courier March/April 2024 pp25–38), and the signing of a joint statement of intent on planning for large research infrastructures by CERN and the US government (CERN Courier July/August 2024 p10), FCC Week 2024 convened more than 450 scientists, researchers and industry leaders in San Francisco from 10 to 14 June, with the aim of engaging the wider scientific community, in particular in North America. Since then, more than 20 groups have joined the FCC collaboration.

SLAC and LBNL directors John Sarrao and Mike Witherell opened the meeting by emphasising the vital roles of international collaboration between national laboratories in advancing scientific discovery. Sarrao highlighted SLAC’s historical contributions to high-energy physics and expressed enthusiasm for the FCC’s scientific potential. Witherell reflected on the legacy of particle accelerators in fundamental science and the importance of continued innovation.

CERN Director-General Fabiola Gianotti identified three pillars of her vision for the laboratory: flagship projects like the LHC; a diverse complementary scientific programme; and preparations for future projects. She identified the FCC as the best future match for this vision, asserting that it has unparalleled potential for discovering new physics and can accommodate a large and diverse scientific community. “It is crucial to design a facility that offers a broad scientific programme, many experiments and exciting physics to attract young talents,” she said.

International collaboration, especially with the US, is important in ensuring the project’s success

FCC-ee would operate at several centre-of-mass energies corresponding to the Z-boson pole, W-boson pair production, Higgs-boson production or top-quark pair production. The beam current at each of these points would be determined by the design value of 50 MW synchrotron-radiation power per beam. At lower energies, the machine could accommodate more bunches, achieving 1.3 amperes and a luminosity in excess of 10³⁶ cm⁻² s⁻¹ at the Z pole. Measurements of electroweak observables and Higgs-boson couplings would be improved by a factor of between 10 and 50. Remarkably, FCC-ee would also provide 10 times the ambitious design statistics of SuperKEKB/Belle II for bottom and charm quarks, making it the world-leading machine at the intensity frontier. Along with other measurements of electroweak observables, FCC-ee will indirectly probe energies up to 70 TeV for weakly interacting particles. Unlike at proposed linear colliders, four interaction points would increase scientific robustness, reduce systematic uncertainties and allow for specialised experiments, maximising the collider’s physics output.

For FCC-hh, two approaches are being pursued for the necessary high-field superconducting magnets. The first involves advancing niobium–tin technology, which is currently mastered at 11–12 T for the High-Luminosity LHC, with the goal of reaching operational fields of 14 T. The second focuses on high-temperature superconductors (HTS) such as REBCO and iron-based superconductors (IBS). REBCO comes mainly in tape form (CERN Courier May/June 2023 p37), whereas IBS comes in both tape and wire form. With niobium–tin, 14 T would allow proton–proton collision energies of 80 TeV in a 90 km ring. HTS-based magnets could potentially reach fields up to 20 T, and centre-of-mass energies proportionally higher, in the vicinity of 120 TeV. If HTS magnets prove technically feasible, they could greatly decrease the cryogenic power. The development of such technologies also holds great promise beyond fundamental research, for example in transportation and electricity transmission.

FCC study leader Michael Benedikt (CERN) outlined the status of the ongoing feasibility study, which is set to be completed by March 2025. No technical showstoppers have yet been found, paving the way for the next phase of detailed technical and environmental impact studies and critical site investigations. Benedikt stressed the importance of international collaboration, especially with the US, in ensuring the project’s success.

The next step for the FCC project is to provide information to the CERN Council, via the upcoming update of the European strategy for particle physics, to facilitate a decision on whether to pursue the FCC by the end of 2027 or in early 2028. This includes further developing the civil engineering and technical design of major systems and components to present a more detailed cost estimate, continuing technical R&D activities, and working with CERN’s host states on regional implementation development and authorisation processes along with the launch of an environmental impact study. FCC would intersect 31 municipalities in France and 10 in Switzerland. Detailed work is ongoing to identify and reserve plots of land for surface sites, address site-specific design aspects, and explore socio-economic and ecological opportunities such as waste-heat utilisation.

The post FCC builds momentum in San Francisco appeared first on CERN Courier.

]]>
Meeting report FCC Week 2024 convened more than 450 scientists, researchers and industry leaders in San Francisco with the aim of engaging the wider scientific community, in particular in North America. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24_FN_FCC-1-1.jpg
Hypertriton and ‘little bang’ nucleosynthesis https://cerncourier.com/a/hypertriton-and-little-bang-nucleosynthesis/ Wed, 20 Nov 2024 10:55:45 +0000 https://cern-courier.web.cern.ch/?p=111434 The ALICE collaboration investigated the nucleosynthesis mechanism by measuring hypertriton production in heavy-ion collisions.

The post Hypertriton and ‘little bang’ nucleosynthesis appeared first on CERN Courier.

]]>
ALICE figure 1

According to the cosmological standard model, the first generation of nuclei was produced during the cooling of the hot mixture of quarks and gluons that was created shortly after the Big Bang. Relativistic heavy-ion collisions create a quark–gluon plasma (QGP) on a small scale, producing a “little bang”. In such collisions, the nucleosynthesis mechanism at play is different from that of the Big Bang due to the rapid cooling of the fireball. Recently, the nucleosynthesis mechanism in heavy-ion collisions has been investigated via the measurement of hypertriton production by the ALICE collaboration.

The hypertriton, which consists of a proton, a neutron and a Λ hyperon, can be considered to be a loosely bound deuteron-Λ molecule (see “Inside pentaquarks and tetraquarks“). In this picture, the energy required to separate the Λ from the deuteron (BΛ) is about 100 keV, significantly lower than the binding energy of ordinary nuclei. This makes hypertriton production a sensitive probe of the properties of the fireball.

In heavy-ion collisions, the formation of nuclei can be explained by two main classes of models. The statistical hadronisation model (SHM) assumes that particles are produced from a system in thermal equilibrium. In this model, the production rate of nuclei depends only on their mass, quantum numbers and the temperature and volume of the system. On the other hand, in coalescence models, nuclei are formed from nucleons that are close together in phase space. In these models, the production rate of nuclei is also sensitive to their nuclear structure and size.
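For orientation (a textbook expression, not quoted from the article), the SHM thermal yield of a nucleus of mass m in the Boltzmann approximation is:

```latex
% Statistical-hadronisation (thermal) yield in the Boltzmann limit:
% g = spin degeneracy, V = fireball volume, T = chemical freeze-out temperature.
\frac{\mathrm{d}N}{\mathrm{d}y} \;\propto\; g\, V \left(\frac{m T}{2\pi}\right)^{3/2} e^{-m/T}
```

Because only the mass and degeneracy enter, the SHM predicts the same hypertriton yield whether or not it is a loosely bound molecule; a coalescence yield, by contrast, is suppressed for an object whose wavefunction extends far beyond the fireball's nucleon phase-space cells. This difference is what the yield ratio described below is designed to test.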

For an ordinary nucleus like the deuteron, coalescence and SHM predict similar production rates in all colliding systems, but for a loosely bound molecule such as the hypertriton, the predictions of the two models differ significantly. In order to identify the mechanism of nuclear production, the ALICE collaboration used the ratio between the production rates of hypertriton and helium-3 – also known as a yield ratio – as an observable.

ALICE measured hypertriton production as a function of charged-particle multiplicity density using Pb–Pb collisions collected at a centre-of-mass energy of 5.02 TeV per nucleon pair during LHC Run 2. Figure 1 shows the yield ratio of hypertriton to ³He across different multiplicity intervals. The data points (red) exhibit a clear deviation from the SHM (dashed orange line), but are well described by the coalescence model (blue band), supporting the conclusion that hypertriton formation at the LHC is driven by the coalescence mechanism.

The ongoing LHC Run 3 is expected to improve the precision of these measurements across all collision systems, allowing us to probe the internal structure of hypertriton and even heavier hypernuclei, whose properties remain largely unknown. This will provide insights into the interactions between ordinary nucleons and hyperons, which are essential for understanding the internal composition of neutron stars.

Charming clues for existence https://cerncourier.com/a/charming-clues-for-existence/ Fri, 15 Nov 2024 13:54:18 +0000 https://cern-courier.web.cern.ch/?p=111390 Alexander Lenz argues that the charm quark is an experimental and theoretical enigma that has the potential to shed light on the matter–antimatter asymmetry in the universe.

The post Charming clues for existence appeared first on CERN Courier.

In November 1974, the research groups of Samuel Ting at Brookhaven National Laboratory and Burton Richter at SLAC independently discovered a resonance at 3.1 GeV that was less than 1 MeV wide. Posterity soon named it J/ψ, juxtaposing the names chosen by each group in a unique compromise. Its discovery would complete the second generation of fermions with the charm quark, giving experimental impetus to the new theories of electroweak unification (1967) and quantum chromodynamics (1973). But with the theories fresh and experimenters experiencing an annus mirabilis following the indirect discovery of the Z boson in neutral currents the year before, the nature of the J/ψ was not immediately clear.

“Why the excitement over the new discoveries?” asked the Courier in December 1974 (see “The new particles”). “A brief answer is that the particles have been found in a mass region where they were completely unexpected, with stability properties which, at this stage of the game, are completely inexplicable.”

The J/ψ is now known to be made up of a charm quark and a charm antiquark. Unable to decay via the strong interaction, its width is just 92.6 keV, corresponding to an unexpectedly long lifetime of 7.1 × 10⁻²¹ s. Charm quarks do not form ordinary matter like protons and neutrons, but J/ψ resonances and D mesons, which contain a charm quark and a less-massive up, down or strange antiquark.
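The quoted lifetime follows directly from the width via the uncertainty relation τ = ħ/Γ. A one-line check, not from the article, using the standard value ħ ≈ 6.582 × 10⁻²⁵ GeV·s:

```python
hbar = 6.582119569e-25  # reduced Planck constant in GeV.s
gamma = 92.6e-6         # J/psi total width: 92.6 keV expressed in GeV

tau = hbar / gamma      # mean lifetime from the uncertainty relation
print(f"{tau:.1e} s")   # 7.1e-21 s, matching the value quoted above
```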

A 1971 cosmic-ray interaction in an emulsion chamber aboard a Japanese cargo aeroplane

Fifty years on from the November Revolution, charm physics is experiencing a renaissance. The LHCb, BESIII and Belle II experiments are producing a huge number of interesting and precise measurements in the charm system, with two crucial groundbreaking results on D0 mesons by LHCb holding particular significance: the observation that they violate CP symmetry when they decay; and the observation that they oscillate into their antiparticles. The rate of CP violation is particularly interesting – about 10 times larger than the most sophisticated Standard Model (SM) predictions, preliminary and uncertain though they are. Are these predictions naive, or is this the first glimpse of why there is more matter than antimatter in the universe?

Suppressed

Despite the initial confusion, the charm quark had already been indirectly discovered in 1970 by Sheldon Glashow, John Iliopoulos and Luciano Maiani (GIM), who introduced it to explain why K⁰ → μ⁺μ⁻ decays are suppressed. Their paper gained widespread recognition during the November Revolution, and the GIM mechanism they discovered impacts cutting-edge calculations in charm physics to this day.

Previously, only the three light quarks (up, down and strange) were known. Alongside electrons and electron neutrinos, up and down quarks make up the first generation of fermions. The detection of muons in cosmic rays in 1936 was the first evidence for a second generation, triggering Isidor Rabi’s famous exclamation “Who ordered that?” Strange particles were found in 1947, providing evidence for a second generation of quarks, though it took until 1964 for Murray Gell-Mann and George Zweig to discover this ordering principle of the subatomic world.

A J/ψ event in the BESIII detector

In a model of three quarks, the decay of a K⁰ meson (a down–antistrange system) into two muons can only proceed by briefly transforming the meson into a W⁺W⁻ pair – an infamous flavour-changing neutral current – linked in a loop by a virtual up quark and virtual muon neutrino. While the amplitude for this process is problematically large given observed rates, the GIM mechanism cancels it almost exactly by introducing destructive quantum interference with a process that replaces the up quark with a new charm quark. The remaining finite value of the amplitude stems from the difference in the masses of the virtual quarks compared to the W boson, mu²/MW² and mc²/MW². Since both mass ratios are close to zero, K⁰ → μ⁺μ⁻ is highly suppressed.
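Just how close to zero those ratios are is easy to see numerically. This is an illustrative sketch, not from the article; quark masses are scheme-dependent, and the values below are only representative:

```python
# Illustrative masses in GeV (quark masses are scheme-dependent; rough values)
m_u, m_c, m_W = 0.002, 1.27, 80.4

for name, m in [("up", m_u), ("charm", m_c)]:
    ratio = (m / m_W) ** 2
    print(f"m_{name}^2 / M_W^2 = {ratio:.1e}")
# Both ratios are tiny (~6e-10 and ~2.5e-4), so the GIM cancellation is
# almost exact and the flavour-changing neutral current is strongly suppressed.
```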

The interference is destructive because the Cabibbo matrix describing the coupling strength of the charged weak interaction is a rotation of the two generations of quarks. All four couplings in the matrix – up–down (cos θC), charm–strange (cos θC), charm–down (sin θC) and up–strange (–sin θC) – arise in the decay of a K⁰ meson, with the minus sign causing the cancellation.

Maybe the charm quark will in the end provide the ultimate clue to explain our existence

The direct experimental detection of the first particle containing charm is typically attributed to Ting and Richter in 1974; however, there was already some direct evidence for charmed mesons in Japan in 1971, though unfortunately in only one cosmic-ray event, and with no estimation of background (see “Cosmic charm” figure). Unnoticed by Western scientists, the measurements indicated a charm-quark mass of the order of 1.5 GeV, which is close to current estimates. In 1973, the quark-mixing formalism was extended by Makoto Kobayashi and Toshihide Maskawa to three generations of quarks, incorporating CP violation in the SM by allowing the couplings to be complex numbers with an imaginary part. The amount of CP violation contained in the resulting Cabibbo–Kobayashi–Maskawa (CKM) matrix does not appear to be sufficient to explain the observed matter–antimatter asymmetry in the universe.

The third generation of quarks began to be experimentally established in 1977 with the discovery of ϒ resonances (bottom–antibottom systems). In 1986, GIM cancellations in the matter–antimatter oscillations of neutral B mesons (B0–B̄0 mixing) indicated a large value of the top-quark mass, with mt²/MW² not negligible, in contrast to mu²/MW² and mc²/MW². The top quark was directly discovered at the Tevatron in 1995. With the discovery of the Higgs boson in 2012 at the LHC, the full particle spectrum of the SM has now been experimentally confirmed.

Charm renaissance

More recently, two crucial effects in the charm system have been experimentally confirmed. Both measurements present intriguing discrepancies by comparison with naive theoretical expectations.

Matter–antimatter mixing

First, in 2019, the LHCb collaboration at CERN observed the first definitive evidence for CP violation in charm. A difference in the behaviour of matter and antimatter particles, CP violation can be expressed directly in charm decays, indirectly in the matter–antimatter oscillations of charmed particles, or in a quantum admixture of both effects. To isolate direct CP violation, LHCb proved that the difference in matter–antimatter asymmetries seen in D0 → K⁺K⁻ and D0 → π⁺π⁻ decays (ΔACP) is nonzero. Though the observed CP violation is tiny, it is nevertheless approximately a factor 10 larger than the best available SM predictions. Currently the big question is whether these naive SM expectations can be enhanced by a factor of 10 due to non-perturbative effects, or whether the measurement of ΔACP is a first glimpse of physics beyond the SM, perhaps also answering the question of why there is more matter than antimatter in the universe.

Two years later, LHCb definitively demonstrated the transformation of neutral D0 mesons into their antiparticles (D0–D̄0 mixing). These transitions only involve virtual down-type quarks (down, strange and bottom), causing extreme GIM cancellations as md²/MW², ms²/MW² and mb²/MW² are all negligible (see “Matter–antimatter mixing” figure). Theory calculations are preliminary here too, but naive SM predictions of the mass splitting between the mass eigenstates of the neutral D-meson system are at present several orders of magnitude below the experimental value.

Theoretical attempts to reproduce experimental measurements

The charm system has often proved to be more experimentally challenging than the bottom system, with matter–antimatter oscillations and direct and indirect CP violation all discovered first for the bottom quark, and indirect CP violation still awaiting confirmation in charm. The theoretical description of the charm system also presents several interesting features by comparison to the bottom system. They may be regarded as challenges, peculiarities, or even opportunities.

A challenge is the use of perturbation theory. The strong coupling at the scale of the charm-quark mass is quite large – αs(mc) ≈ 0.35 – and perturbative expansions in the strong coupling only converge as (1, 0.35, 0.12, …). The charm quark is also not particularly heavy, and perturbative expansions in Λ/mc only converge as roughly (1, 0.33, 0.11, …), assuming Λ is an energy scale of the order of the hadronic scale of the strong interaction. If the coefficients multiplying each term are of similar sizes, these series converge, albeit slowly.
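The two quoted sequences are just successive powers of the expansion parameters, as a quick check confirms (illustrative only, taking αs(mc) = 0.35 and Λ/mc = 0.33 at face value):

```python
alpha_s = 0.35      # strong coupling at the charm-quark mass scale
lam_over_mc = 0.33  # hadronic scale over charm-quark mass, roughly

# Successive terms of each expansion, relative to the leading one
print([round(alpha_s ** n, 2) for n in range(3)])      # [1.0, 0.35, 0.12]
print([round(lam_over_mc ** n, 2) for n in range(3)])  # [1.0, 0.33, 0.11]
```

Each extra order suppresses a term by only a factor of about three, in contrast to bottom physics, where both parameters are considerably smaller.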

Numerical cancellations are a peculiarity, and often classified as strong or even crazy in cases such as D0–D0 mixing, where contributions cancel to one part in 105.

The fact that CKM couplings involving the charm quark (Vcd, Vcs and Vcb) have almost vanishing imaginary parts is an opportunity. With CP-violating effects in charm systems expected to be tiny, any measurement of sizable CP-violating effects would indicate the presence of physics beyond the SM (BSM).

A final peculiarity is that loop-induced charm decays and D-mixing both proceed exclusively via virtual down-type quarks, presenting opportunities to extend sensitivity to BSM physics via joint analyses with complementary bottom and strange decays.

At first sight, these effects complicate the theoretical treatment of the charm system. Many approaches are therefore based on approximations such as SU(3)F flavour symmetry or U-spin symmetry (see “Using U-spin to squeeze CP violation”). On the other hand, these properties can also be a virtue, making some observables very sensitive to higher orders in our expansions and providing an ideal testing ground for QCD tools.

Branching fractions of non-leptonic two-body D0 decays

Thanks to many theoretical improvements, we are now in a position to start answering the question of whether perturbative expansions in the strong coupling and the inverse of the quark mass are applicable in the charm system. Recently, progress has been made with observables that are free from severe cancellations: a double expansion in Λ/mc and αs (the heavy-quark expansion) seems to be able to reproduce the D0 lifetime (see “Charmed life” figure); and theoretical calculations of branching fractions for non-leptonic two-body D0 decays seem to be in good agreement with experimental values (see “Two body” figure).

All these theory predictions still suffer from large uncertainties, but they can be systematically improved. Demonstrating the validity of these theory tools with higher precision could imply that the measured value of CP violation in the charm system (ΔACP) has a BSM origin.

The future

Charm physics therefore has a bright future. Many of the current theory approaches can be systematically improved with currently available technologies by adding higher-order perturbative corrections. A full lattice-QCD description of D-mixing and non-leptonic D-meson decays requires new ideas, but first steps have already been taken. These theory developments should give us deeper insights into the question of whether ΔACP and D0–D̄0 mixing can be described within the SM.

More precise experimental data can also help in answering these questions. The BESIII experiment at IHEP in China and the Belle II experiment at KEK in Japan can investigate inclusive semileptonic charm decays and measure parameters that are needed for the heavy-quark expansion. LHCb and Belle II can investigate CP-violating effects in D0–D̄0 mixing and in channels other than D0 → K⁺K⁻ and π⁺π⁻. The super tau–charm factory proposed by China could contribute further precise data and a future e⁺e⁻ collider running as an ultimate Z factory could provide an independent experimental cross-check for ΔACP.

Another exciting field is that of rare charm decays such as D⁺ → π⁺μ⁺μ⁻ and D⁺ → π⁺νν̄, which proceed via loop diagrams similar to those in K⁰ → μ⁺μ⁻ decays and D0–D̄0 oscillations. Here, null tests can be constructed using observables that vanish precisely in the SM, allowing future experimental data to unambiguously probe BSM effects.

Maybe the charm quark will in the end provide the ultimate clue to explain our existence. Wouldn’t that be charming?

The new particles https://cerncourier.com/a/the-new-particles/ Fri, 15 Nov 2024 13:43:09 +0000 https://cern-courier.web.cern.ch/?p=111393 Fifty years ago, the discovery of the J/ψ and its excitations sparked the November Revolution in particle physics, giving fresh experimental impetus to the theoretical ideas that would become the Standard Model.

The post The new particles appeared first on CERN Courier.

Sam Ting in November 1974

Anyone in touch with the world of high-energy physics will be well aware of the ferment created by the news from Brookhaven and Stanford, followed by Frascati and DESY, of the existence of new particles. But new particles have been unearthed in profusion by high-energy accelerators during the past 20 years. Why the excitement over the new discoveries?

A brief answer is that the particles have been found in a mass region where they were completely unexpected, with stability properties which, at this stage of the game, are completely inexplicable. In this article we will first describe the discoveries and then discuss some of the speculations as to what the discoveries might mean.

We begin at the Brookhaven National Laboratory where, since the Spring of this year, an MIT/Brookhaven team have been looking at collisions between two protons which yielded (amongst other things) an electron and a positron. A series of experiments on the production of electron–positron pairs in particle collisions has been going on for about eight years in groups led by Sam Ting, mainly at the DESY synchrotron in Hamburg. The aim is to study some of the electromagnetic features of particles where energy is manifest in the form of a photon which materialises in an electron–positron pair. The experiments are not easy to do because the probability that the collisions will yield such a pair is very low. The detection system has to be capable of picking out an event from a million or more other types of event.

Beryllium bombardment

It was with long experience of such problems behind them that the MIT/Brookhaven team led by Ting, J J Aubert, U J Becker and P J Biggs brought into action a detection system with a double arm spectrometer in a slow ejected proton beam at the Brookhaven 33 GeV synchrotron. They used beams of 28.5 GeV bombarding a beryllium target. The two spectrometer arms span out at 15° either side of the incident beam direction and have magnets, Cherenkov counters, multiwire proportional chambers, scintillation counters and lead glass counters. With this array, it is possible to identify electrons and positrons coming from the same source and to measure their energy.

From about August, the realisation that they were on to something important began slowly to grow. The spectrometer was totting up an unusually large number of events where the combined energies of the electron and positron were equal to 3.1 GeV.

The detection system of the experiment at Brookhaven that spotted the new particle

This is the classic way of spotting a resonance. An unstable particle, which breaks up too quickly to be seen itself, is identified by adding up the energies of more stable particles which emerge from its decay. Looking at many interactions, if energies repeatedly add up to the same figure (as opposed to the other possible figures all around it), they indicate that the measured particles are coming from the break up of an unseen particle whose mass is equal to the measured sum.
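In modern terms this procedure is an invariant-mass search: for each event, the measured four-momenta of the electron and positron are combined and the resulting mass is histogrammed, with a peak marking the unseen parent. The sketch below is a minimal illustration, not the experiment's actual analysis; for back-to-back pairs of equal energy, the invariant mass reduces to the simple energy sum described above.

```python
import math

def invariant_mass(p1, p2):
    """Invariant mass of a two-particle system; each p = (E, px, py, pz) in GeV."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(E ** 2 - (px ** 2 + py ** 2 + pz ** 2), 0.0))

# Back-to-back 1.55 GeV electron and positron (their own masses are negligible):
electron = (1.55, 0.0, 0.0, 1.55)
positron = (1.55, 0.0, 0.0, -1.55)
print(f"{invariant_mass(electron, positron):.2f} GeV")  # 3.10 GeV
```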

The team went through extraordinary contortions to check their apparatus to be sure that nothing was biasing their results. The particle decaying into the electron and positron they were measuring was a difficult one to swallow. The energy region had been scoured before, even if not so thoroughly, without anything being seen. Also the resonance was looking “narrow” – this means that the energy sums were coming out at 3.1 GeV with great precision rather than, for example, spanning from 2.9 to 3.3 GeV. The width is a measure of the stability of the particle (from Heisenberg’s Uncertainty Principle, which requires only that the product of the average lifetime and the width be a constant). A narrow width means that the particle lives a long time. No other particle of such a heavy mass (over three times the mass of the proton) has anything like that stability.

By the end of October, the team had about 500 events from a 3.1 GeV particle. They were keen to extend their search to the maximum mass their detection system could pin down (about 5.5 GeV) but were prodded into print mid-November by dramatic news from the other coast of America. They baptised the particle J, which is a letter close to the Chinese symbol for “ting”. From then on, the experiment has had top priority. Sam Ting said that the Director of the Laboratory, George Vineyard, asked him how much time on the machine he would need – which is not the way such conversations usually go.

The apparition of the particle at the Stanford Linear Accelerator Center on 10 November was nothing short of shattering. Burt Richter described it as “the most exciting and frantic week-end in particle physics I have ever been through”. It followed an upgrading of the electron–positron storage ring SPEAR during the late Summer.

Until June, SPEAR was operating with beams of energy up to 2.5 GeV so that the total energy in the collision was up to a peak of 5 GeV. The ring was shut down during the late summer to install a new RF system and new power supplies so as to reach about 4.5 GeV per beam. It was switched on again in September and within two days beams were orbiting the storage ring again. Only three of the four new RF cavities were in action so the beams could only be taken to 3.8 GeV. Within two weeks the luminosity had climbed to 5 × 10³⁰ cm⁻² s⁻¹ (the luminosity dictates the number of interactions the physicists can see) and time began to be allocated to experimental teams to bring their detection systems into trim.

It was the Berkeley/Stanford team led by Richter, M Perl, W Chinowsky, G Goldhaber and G H Trilling who went into action during the week-end 9–10 November to check back on some “funny” readings they had seen in June. They were using a detection system consisting of a large solenoid magnet, wire chambers, scintillation counters and shower counters, almost completely surrounding one of the two intersection regions where the electrons and positrons are brought into head-on collision.

Put through its paces

During the first series of measurements with SPEAR, when it went through its energy paces, the cross-section (or probability of an interaction between an electron and positron occurring) was a little high at 1.6 GeV beam energy (3.2 GeV collision energy) compared with the neighbouring beam energies. The June exercise, which gave the funny readings, was a look over this energy region again. Cross-sections were measured with electrons and positrons at 1.5, 1.55, 1.6 and 1.65 GeV. Again 1.6 GeV was a little high but 1.55 GeV was even more peculiar. In eight runs, six measurements agreed with the 1.5 GeV data while two were higher (one of them five-times higher). So, obviously, a gremlin had crept into the apparatus. During the transformation from SPEAR I to SPEAR II the gremlin was looked for but not found. It was then that the suspicion grew that between 3.1 and 3.2 GeV collision energies could lie a resonance.

During the night of 9–10 November the hunt began, changing the beam energies in 0.5 MeV steps. By 11.00 a.m. Sunday morning the new particle had been unequivocally found. A set of cross-section measurements around 3.1 GeV showed that the probability of interaction jumped by a factor of 10 from 20 to 200 nanobarns. In a state of euphoria, the champagne was cracked open and the team began celebrating an important discovery. Gerson Goldhaber retired in search of peace and quiet to write the findings for immediate publication.

The detection system at the SPEAR storage ring at Stanford

While he was away, it was decided to polish up the data by going slowly over the resonance again. The beams were nudged from 1.55 to 1.57 and everything went crazy. The interaction probability soared higher; from around 20 nanobarns the cross-section jumped to 2000 nanobarns and the detector was flooded with events producing hadrons. Pief Panofsky, the Director of SLAC, arrived and paced around invoking the Deity in utter amazement at what was being seen. Gerson Goldhaber then emerged with his paper proudly announcing the 200 nanobarn resonance and had to start again, writing 10 times more proudly.

Within hours of the SPEAR measurements, the telephone wires across the Atlantic were humming as information enquiries and rumours were exchanged. As soon as it became clear what had happened, the European Laboratories looked to see how they could contribute to the excitement. The obvious candidates, to be in on the act quickly, were the electron–positron storage rings at Frascati and DESY.

From 13 November, the experimental teams on the ADONE storage ring (from Frascati and the INFN sections of the universities of Naples, Padua, Pisa and Rome) began to search in the same energy region. They have detection systems for three experiments known as gamma–gamma (wide solid angle detector with high efficiency for detecting neutral particles), MEA (solenoidal magnetic spectrometer with wide gap spark chambers and shower detectors) and baryon–antibaryon (coaxial hodoscopes of scintillators covering a wide solid angle). The ADONE operators were able to jack the beam energy up a little above its normal peak of 1.5 GeV and on 15 November the new particle was seen in all three detection systems. The data confirmed the mass and the high stability. The experiments are continuing using the complementary abilities of the detectors to gather as much information as possible on the nature of the particle.

At DESY, the DORIS storage ring was brought into action with the PLUTO and DASP detection systems described later in this issue on page 427. During the week-end of 23–24 November, a clear signal at about 3.1 GeV total energy was seen in both detectors, with PLUTO measuring events with many emerging hadrons and DASP measuring two emerging particles. The angular distribution of elastic electron–positron scattering was measured at 3.1 GeV, and around it, and a distinct change was seen. The detectors are now concentrating on measuring branching ratios – the relative rate at which the particle decays in different ways.

Excitation times

In the meantime, SPEAR II had struck again. On 21 November, another particle was seen at 3.7 GeV. Like the first it is a very narrow resonance indicating the same high stability. The Berkeley/Stanford team have called the particles psi (3105) and psi (3695).

No-one had written the recipe for these particles and that is part of what all the excitement is about. At this stage, we can only speculate about what they might mean.  First of all, for the past year, something has been expected in the hadron–lepton relationship. The leptons are particles, like the electron, which we believe do not feel the strong force. Their interactions, such as are initiated in an electron–positron storage ring, can produce hadrons (or strong force particles) via their common electromagnetic features. On the basis of the theory that hadrons are built up of quarks (a theory that has a growing weight of experimental support – see CERN Courier October 1974 pp331–333), it is possible to calculate relative rates at which the electron–positron interaction will yield hadrons and the rate should decrease as the energy goes higher. The results from the Cambridge bypass and SPEAR about a year ago showed hadrons being produced much more profusely than these predictions.

What seems to be the inverse of this observation is seen at the CERN Intersecting Storage Rings and the 400 GeV synchrotron at the FermiLab. In interactions between hadrons, such as proton–proton collisions, leptons are seen coming off at much higher relative rates than could be predicted. Are the new particles behind this hadron–lepton mystery? And if so, how?

Signs of a revolution

Other speculations are that the particles have new properties to add to the familiar ones like charge, spin, parity… As the complexity of particle behaviour has been uncovered, names have had to be selected to describe different aspects. These names are linked, in the mathematical description of what is going on, to quantum numbers. When particles interact, the quantum numbers are generally conserved – the properties of the particles going into the interaction are carried away, in some perhaps very different combination, by the particles which emerge. If there are new properties, they also will influence what interactions can take place.

To explain what might be happening, we can consider the property called “strangeness”. This was assigned to particles like the neutral kaon and lambda to explain why they were always produced in pairs – the strangeness quantum number is then conserved, the kaon carrying +1, the lambda carrying –1. It is because the kaon has strangeness that it is a very stable particle. It will not readily break up into other particles which do not have this property.

They baptised the particle J, which is a letter close to the Chinese symbol for “ting”

Two new properties have recently been invoked by the theorists – colour and charm. Colour is a suggested property of quarks which makes sense of the statistics used to calculate the consequences of their existence. This gives us nine basic quarks – three coloured varieties of each of the three familiar ones. Charm is a suggested property which makes sense of some observations concerning neutral current interactions (discussed below).

It is the remarkable stability of the new particles which makes it so attractive to invoke colour or charm. From the measured width of the resonances they seem to live for about 10⁻²⁰ seconds and do not decay rapidly like all the other resonances in their mass range. Perhaps they carry a new quantum number?

Unfortunately, even if the new particles are coloured, since they are formed electromagnetically they should be able to decay the same way and the sums do not give their high stability. In addition, the sums say that there is not enough energy around for them to be built up of charmed constituents. The answer may lie in new properties but not in a way that we can easily calculate.

Yet another possibility is that we are, at last, seeing the intermediate boson. This particle was proposed many years ago as an intermediary of the weak force. Just as the strong force is communicated between hadrons by passing mesons around and the electromagnetic force is communicated between charged particles by passing photons around, it is thought that the weak force could also act via the exchange of a particle rather than “at a point”.

Perhaps the new particles carry a new quantum number?

When it was believed that the weak interactions always involved a change of electric charge between the lepton going into the interaction and the lepton going out, the intermediate boson (often referred to as the W particle) was always envisaged as a charged particle. The CERN discovery of neutral currents in 1973 revealed that a charge change between the leptons need not take place; there could also be a neutral version of the intermediate boson (often referred to as the Z particle). The Z particle can also be treated in the theory which has had encouraging success in uniting the interpretations of the weak and electromagnetic forces.

This work has taken the Z mass into the 70 GeV region and its appearance around 3 GeV would damage some of the beautiful features of the reunification theories. A strong clue could come from looking for asymmetries in the decays of the new particles because, if they are of the Z variety, parity violation should occur.

1974 has been one of the most fascinating years ever experienced in high-energy physics. Still reeling from the neutral current discovery, the year began with the SPEAR hadron production mystery, continued with new high-energy information from the FermiLab and the CERN ISR, including the high lepton production rate, and finished with the discovery of the new particles. And all this against a background of feverish theoretical activity trying to keep pace with what the new accelerators and storage rings have been uncovering.

Exploding misconceptions https://cerncourier.com/a/exploding-misconceptions/ Wed, 13 Nov 2024 09:47:45 +0000 https://cern-courier.web.cern.ch/?p=111451 Cosmologist Katie Mack talks to the Courier about how high-energy physics can succeed in #scicomm by throwing open the doors to academia.

The post Exploding misconceptions appeared first on CERN Courier.

]]>
Katie Mack

What role does science communication play in your academic career?

When I was a postdoc I started to realise that the science communication side of my life was really important to me. It felt like I was having a big impact – and in research, you don’t always feel like you’re having that big impact. When you’re a grad student or postdoc, you spend a lot of time dealing with rejection, feeling like you’re not making progress or you’re not good enough. I realised that with science communication, I was able to really feel like I did know something, and I was able to share that with people.

When I began to apply for faculty jobs, I realised I didn’t want to just do science writing as a nights-and-weekends job; I wanted it to be integrated into my career. Partially because I didn’t want to give up the opportunity to have that kind of impact, but also because I really enjoyed it. It was energising for me and helped me contextualise the work I was doing as a scientist.

How did you begin your career in science communication?

I’ve always enjoyed writing stories and poetry. At some point I figured out that I could write about science. When I went to grad school I took a class on science journalism and the professor helped me pitch some stories to magazines, and I started to do freelance science writing. Then I discovered Twitter. That was even better because I could share every little idea I had with a big audience. Between Twitter and freelance science writing, I garnered quite a large profile in science communication and that led to opportunities to speak and do more writing. At some point I was approached by agents and publishers about writing books.

Who is your audience?

When I’m not talking to other scientists, my main community is generally those who have a high-school education, but not necessarily a university education. I don’t tailor things to people who aren’t interested in science, or try to change people’s minds on whether science is a good idea. I try to help people who don’t have a science background feel empowered to learn about science. I think there are a lot of people who don’t see themselves as “science people”. I think that’s a silly concept but a lot of people conceptualise it that way. They feel like science is closed to them.

The more that science communicators can give people a moment of understanding, an insight into science, I think they can really help people get more involved in science. The best feedback I’ve ever gotten is when students have come up to me and said “I started studying physics because I followed you on Twitter and I saw that I could do this,” or they read my book and that inspired them. That’s absolutely the best thing that comes out of this. It is possible to have a big impact on individuals by doing social media and science communication – and hopefully change the situation in science itself over time.

What were your own preconceptions of academia?

I have been excited about science since I was a little kid. I saw that Stephen Hawking was called a cosmologist, so I decided I wanted to be a cosmologist too. I had this vision in my head that I would be a theoretical physicist. I thought that involved a lot of standing alone in a small room with a blackboard, writing equations and having eureka moments. That’s what was always depicted on TV: you just sit by yourself and think real hard. When I actually got into academia, I was surprised by how collaborative and social it is. That was probably the biggest difference between expectation and reality.

How do you communicate the challenges of academia, alongside the awe-inspiring discoveries and eureka moments?

I think it’s important to talk about what it’s really like to be an academic, in both good ways and bad. Most people outside of academia have no idea what we do, so it’s really valuable to share our experiences, both because it challenges stereotypes in terms of what we’re really motivated by and how we spend our time, but also because there are a lot of people who have the same impression I did: where you just sit alone in a room with a chalkboard. I believe it’s important to be clear about what you actually do in academia, so more people can see themselves happy in the job.

At the same time, there are challenges. Academia is hard and can be very isolating. My advice for early-career researchers is to have things other than science in your life. As a student you’re working on something that potentially no one else cares very much about, except maybe your supervisor. You’re going to be the world-expert on it for a while. It can be hard to go through that and not have anybody to talk to about your work. I think it’s important to acknowledge what people go through and encourage them to get support.

Theoretical physicist Katie Mack

There are of course other parts of academia that can be really challenging, like moving all the time. I went from West coast to East coast between undergrad and grad school, and then from the US to the UK, from the UK to Australia, back to the US and then to Canada. That’s a lot. It’s hard. They’re all big moves so you lose whatever local support system you had and you have to start over in a new place, make new friends and get used to a whole new government bureaucracy.

So there are a whole lot of things that are difficult about academia, and you do need to acknowledge those because a lot of them affect equity. Some of these make it more challenging to have diversity in the field, and they disproportionately affect some groups more than others. It is important to talk about these issues instead of just sweeping them under the rug.

Do you think that social media can help to diversify science and research?

Yes! I think that a large reason why people from underrepresented groups leave science is because they lack the feeling of belonging. If you get into a field and don’t feel like you belong, it’s hard to power through that. It makes it very unpleasant to be there. So I think that one of the ways social media can really help is by letting people see scientists who are not the stereotypical old white men. Talking about what being a scientist is really like, what the lifestyle is like, is really helpful for dismantling those stereotypes.

Your first book, The End of Everything, explored astrophysics but your next will popularise particle physics. Have you had to change your strategy when communicating different subjects?

This book is definitely a lot harder to write. The first one was very big and dramatic: the universe is ending! In this one, I’m really trying to get deeper into how fundamental physics works, which is a more challenging story to tell. The way I’m framing it is through “how to build a universe”. It’s about how fundamental physics connects with the structure of reality, both in terms of what we experience in our daily lives, but also the structure of the universe, and how physicists are working to understand that. I also want to highlight some of the scientists who are doing that work.

So yes, it’s much harder to find a catchy hook, but I think the subject matter and topics are things that people are curious about and have a hunger to understand. There really is a desire amongst the public to understand what the point of studying particle physics is.

Is high-energy physics succeeding when it comes to communicating with the public?

I think that there are some aspects where high-energy physics does a fantastic job. When the Higgs boson was discovered in 2012, it was all over the news and everybody was talking about it. Even though it’s a really tough concept to explain, a lot of people got some inkling of its importance.

A lot of science communication in high-energy physics relies on big discoveries, but recently there have not been many discoveries at the level of international news. There have been plenty of interesting anomalies in recent years, but in terms of discoveries we had the Higgs in 2012 and the neutrino mass in 1998, and I’m not sure that there are many others that would really grab your attention if you’re not already invested in physics.

Part of the challenge is just the phase of discovery that particle physics is in right now. We have a model, and we’re trying to find the edges of validity of that model. We see some anomalies and then we fix them, and some might stick around. We have some ideas and theories but they might not pan out. That’s kind of the story we’re working with right now, whereas if you’re looking at astronomy, we had gravitational waves and dark energy. We get new telescopes with beautiful pictures all the time, so it’s easier to communicate and get people excited than it is in particle physics, where we’re constantly refining the model and learning new things. It’s a fantastically exciting time, but there have been no big paradigm shifts recently.

How can you keep people engaged in a subject where big discoveries aren’t constantly being made?

I think it’s hard. There are a few ways to go about it. You can talk about the really massive journey we’re on: this hugely consequential and difficult challenge we’re facing in high-energy physics. It’s a huge task requiring a massive global effort, so you can help people feel involved in the quest to go beyond the Standard Model of particle physics.

You need to acknowledge it’s going to be a long journey before we make any big discoveries. There’s much work to be done, and we’re learning lots of amazing things along the way. We’re getting much higher precision. The process of discovery is also hugely consequential outside of high-energy physics: there are so many technological spin-offs that tie into other fields, like cosmology. Discoveries are being made between particle and cosmological physics that are really exciting.

Every little milestone is an achievement to be celebrated

We don’t know what the end of the story looks like. There aren’t a lot of big signposts along the way where we can say “we’ve made so much progress, we’re halfway there!” Highlighting the purpose of discovery, the little exciting things that we accomplish along the way such as new experimental achievements, and the people who are involved and what they’re excited about – this is how we can get around this communication challenge.

Every little milestone is an achievement to be celebrated. CERN is the biggest laboratory in the world. It’s one of humanity’s crowning achievements in terms of technology and international collaboration – I don’t think that’s an exaggeration. CERN and the International Space Station. Those two labs are examples of where a bunch of different countries, which may or may not get along, collaborate to achieve something that they can’t do alone. Seeing how everyone works together on these projects is really inspiring. If more people were able to get a glimpse of the excitement and enthusiasm around these experiments, it would make a big difference.

The post Exploding misconceptions appeared first on CERN Courier.

]]>
Opinion Cosmologist Katie Mack talks to the Courier about how high-energy physics can succeed in #scicomm by throwing open the doors to academia. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24_INT_writing-1.jpg
Revised schedule for the High-Luminosity LHC https://cerncourier.com/a/revised-schedule-for-the-high-luminosity-lhc/ Wed, 13 Nov 2024 09:43:20 +0000 https://cern-courier.web.cern.ch/?p=111408 LS3 is scheduled to begin at the start of July 2026.

The post Revised schedule for the High-Luminosity LHC appeared first on CERN Courier.

]]>
During its September session, the CERN Council was presented with a revised schedule for Long Shutdown 3 (LS3) of the LHC and its injector complex. For the LHC, LS3 is now scheduled to begin at the start of July 2026, seven and a half months later than planned. The overall length of the shutdown will increase by around four months. Combined, these measures will shift the start of the High-Luminosity LHC (HL-LHC) by approximately one year, to June 2030. The extensive programme of work for the injectors will begin in September 2026, with a gradual restart of operations scheduled to take place in 2028.

“The decision to shift the start of the HL-LHC by approximately one year and increase the length of the shutdown reflects a consensus supported by our scientific committees,” explains Mike Lamont, CERN director for accelerators and technology. “The delayed start of LS3 is primarily due to significant challenges encountered during the Phase II upgrades of the ATLAS and CMS experiments, which have led to the erosion of contingency time and introduced considerable schedule risks. The challenges faced by the experiment teams included COVID-19 and the impact of the Russian invasion of Ukraine.”

LS3 represents a pivotal phase in enhancing CERN’s capabilities. During the shutdown, ATLAS and CMS will replace many of their detectors and a large part of their electronics. Schedule contingencies have been insufficient for the new inner tracker for ATLAS, and for the HGCAL and new tracker for CMS. The delayed start of LS3 will allow the collaborations more time to develop and build these highly sophisticated detectors and systems.

On the machine side, a key activity during LS3 is the drilling of 28 vertical cores to link the new HL-LHC technical galleries to the LHC tunnel. Initially expected to take six months, this timeframe was reduced to two months in 2021 to optimise the schedule. However, challenges encountered during the tendering process and in subsequent consultations with specialists necessitated a return to the original six-month timeline for core excavation.

In addition to high-luminosity enhancements, LS3 will involve a major programme of work across the accelerator complex. This includes the North Area consolidation project and the transformation of the ECN3 cavern into a high-intensity fixed-target facility; the dismantling of the CNGS target to make way for the next phase of wakefield-acceleration research at AWAKE; improvements to ISOLDE to boost the facility’s nuclear-studies potential; and extensive maintenance and consolidation across all machines and facilities to ensure operational safety, longevity and availability.

“All these activities are essential to ensuring the medium-term future of the laboratory and allowing full exploitation of its remarkable potential in the coming decades,” says Lamont.

The post Revised schedule for the High-Luminosity LHC appeared first on CERN Courier.

]]>
News LS3 is scheduled to begin at the start of July 2026. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24_NA_cool-1-1.jpg
Cornering the Higgs couplings to quarks https://cerncourier.com/a/cornering-the-higgs-couplings-to-quarks/ Wed, 13 Nov 2024 09:40:25 +0000 https://cern-courier.web.cern.ch/?p=111445 The ATLAS collaboration recently released improved results on the Higgs boson’s interaction with second- and third-generation quarks.

The post Cornering the Higgs couplings to quarks appeared first on CERN Courier.

]]>
One of nature’s greatest mysteries lies in the masses of the elementary fermions. Each successive generation of quarks and charged leptons is heavier than the previous one, yet only the first generation forms ordinary matter, and the overall pattern and vast mass differences remain empirical and unexplained. In the Standard Model (SM), charged fermions acquire mass through interactions with the Higgs field. Consequently, their interaction strength with the Higgs boson, a ripple of the Higgs field, is proportional to the fermion’s mass. Precise measurements of these interaction strengths could offer insights into the mass-generation mechanism and potentially uncover new physics to explain this mystery.
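The proportionality between coupling and mass can be made concrete. In the SM the Yukawa coupling of a fermion f is y_f = √2 m_f/v, where v ≈ 246 GeV is the Higgs vacuum expectation value. A minimal sketch, using approximate quark masses chosen purely for illustration:

```python
from math import sqrt

VEV_GEV = 246.0  # Higgs vacuum expectation value v, in GeV (approximate)

# Illustrative quark masses in GeV -- approximate values, for this sketch only
masses = {"charm": 1.27, "bottom": 4.18, "top": 172.5}

# SM Yukawa coupling: y_f = sqrt(2) * m_f / v, i.e. proportional to mass
yukawa = {q: sqrt(2) * m / VEV_GEV for q, m in masses.items()}
for q, y in yukawa.items():
    print(f"{q}: y = {y:.4f}")  # the top coupling comes out close to 1

# Decay rates scale as y^2, so H -> cc is suppressed relative to H -> bb
# by roughly (m_c/m_b)^2: about 11 with these input masses. Evaluating the
# running masses at the Higgs mass pushes this towards the factor of ~20
# quoted in the text.
suppression = (masses["bottom"] / masses["charm"]) ** 2
print(f"bb/cc rate ratio ~ {suppression:.1f}")
```

This tree-level estimate is only indicative; the precise suppression factor depends on the renormalisation scale at which the quark masses are evaluated.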

The ATLAS collaboration recently released improved results on the Higgs boson’s interaction with second- and third-generation quarks (charm, bottom and top), based on the analysis of data collected during LHC Run 2 (2015–2018). The analyses refine two studies: Higgs-boson decays to charm- and bottom-quark pairs (H → cc and H → bb) in events where the Higgs boson is produced together with a weak boson V (W or Z); and, since the Higgs boson is too light to decay into a top-quark pair, the interaction with top quarks is probed in Higgs production in association with a top-quark pair (ttH) in events with H → bb decays. Sensitivity to H → cc and H → bb in VH production is increased by a factor of three and by 15%, respectively. Sensitivity to ttH, H → bb production is doubled.

Innovative analysis techniques were crucial to these improvements, several involving machine learning, such as the state-of-the-art transformers used in the extremely challenging ttH(bb) analysis. Both analyses utilised an upgraded algorithm for identifying particle jets from bottom and charm quarks. A bespoke implementation allowed, for the first time, VH events to be analysed coherently for both H → cc and H → bb decays. The enhanced classification of the signal against various background processes allowed a tripling of the number of selected ttH, H → bb events, and was the single largest improvement to the sensitivity to VH, H → cc. Both analyses improved their methods for estimating background processes, including new theoretical predictions and a refined assessment of related uncertainties – a key component in boosting the ttH, H → bb sensitivity.

ATLAS figure 2

Due to these improvements, ATLAS measured the ttH, H → bb cross-section with a precision of 24%, better than any single measurement before. The signal strength relative to the SM prediction is found to be 0.81 ± 0.21, consistent with the SM expectation of unity. It does not confirm previous results from ATLAS and CMS that left room for a lower-than-expected ttH cross-section, dispelling speculation about new physics in this process. The compatibility between the new and previous ATLAS results is estimated to be 21%.
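As a back-of-the-envelope illustration (not the collaboration’s full statistical treatment), the quoted signal strength can be converted into a pull against the SM expectation, assuming a simple Gaussian uncertainty:

```python
from math import erf, sqrt

mu, sigma = 0.81, 0.21  # measured ttH, H -> bb signal strength and uncertainty
mu_sm = 1.0             # Standard Model expectation

# Pull in standard deviations, and the corresponding two-sided Gaussian p-value
z = abs(mu - mu_sm) / sigma
p_two_sided = 1.0 - erf(z / sqrt(2))

print(f"pull = {z:.2f} sigma, p = {p_two_sided:.2f}")
```

The pull comes out at about 0.9σ, i.e. well within one standard deviation of the SM prediction, which is what “consistent with the SM expectation of unity” means in practice.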

In the new analysis VH, H → bb production was measured with a record precision of 18%; WH, H → bb production was observed for the first time with a significance of 5.3σ. Because H → cc decays are suppressed by a factor of 20 relative to H → bb decays, given the difference in quark masses, and are more difficult to identify, no significant sign of this process was found in the data. However, an upper limit on potential enhancements of the VH, H → cc rate of 11.3 times the SM prediction was placed at the 95% confidence level, allowing ATLAS to constrain the Higgs-charm coupling to less than 4.2 times the SM value, the strongest direct constraint to date.

The ttH and VH cross-sections were measured (double-)differentially with increased reach, granularity, and precision (figures 1 and 2). Notably, in the high transverse-momentum regime, where potential new physics effects are not yet excluded, the measurements were extended and the precision nearly doubled. However, neither analysis shows significant deviations from Standard Model predictions.

The significant new dataset from the ongoing Run 3 of the LHC, coupled with further advanced techniques like transformer-based jet identification, promises even more rigorous tests soon, and amplifies the excitement for the High-Luminosity LHC, where further precision will push the boundaries of our understanding of the Higgs boson – and perhaps yield clues to the mystery of the fermion masses.

The post Cornering the Higgs couplings to quarks appeared first on CERN Courier.

]]>
News The ATLAS collaboration recently released improved results on the Higgs boson’s interaction with second- and third-generation quarks. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24_EF_ATLAS1-1-1.jpg
ICFA talks strategy and sustainability in Prague https://cerncourier.com/a/icfa-talks-strategy-and-sustainability-in-prague/ Wed, 13 Nov 2024 09:33:12 +0000 https://cern-courier.web.cern.ch/?p=111309 The 96th ICFA meeting heard extensive reports from the leading HEP laboratories and various world regions on their recent activities and plans.

The post ICFA talks strategy and sustainability in Prague appeared first on CERN Courier.

]]>
ICFA, the International Committee for Future Accelerators, was formed in 1976 to promote international collaboration in all phases of the construction and exploitation of very-high-energy accelerators. Its 96th meeting took place on 20 and 21 July during the recent ICHEP conference in Prague. Almost all of the 16 members from across the world attended in person, making the assembly lively and constructive.

The committee heard extensive reports from the leading HEP laboratories and various world regions on their recent activities and plans, including a presentation by Paris Sphicas, the chair of the European Committee for Future Accelerators (ECFA), on the process for the update of the European strategy for particle physics (ESPP). Launched by CERN Council in March 2024, the ESPP update is charged with recommending the next collider project at CERN after HL-LHC operation.

A global task

The ESPP update is also of high interest to non-European institutions and projects. Consequently, in addition to the expected inputs to the strategy from European HEP communities, those from non-European HEP communities are also welcome. Moreover, the recent US P5 report, the Chinese plans for CEPC (with a potential positive decision in 2025/2026) and the discussions about the ILC project in Japan will be important elements of the work to be carried out in the context of the ESPP update. They also emphasise the global nature of high-energy physics.

An integral part of the work of ICFA is carried out within its panels, which have been very active. Presentations were given from the new panel on the Data Lifecycle (chair Kati Lassila-Perini, Helsinki), the Beam Dynamics panel (new chair Yuan He, IMPCAS) and the Advanced and Novel Accelerators panel (new chair Patric Muggli, Max Planck Munich, proxied at the meeting by Brigitte Cros, Paris-Saclay). The Instrumentation and Innovation Development panel (chair Ian Shipsey, Oxford) is setting an example with its numerous schools, the ICFA instrumentation awards and centrally sponsored instrumentation studentships for early-career researchers from underserved world regions. Finally, the chair of the ILC International Development Team panel (Tatsuya Nakada, EPFL) summarised the latest status of the ILC Technological Network, and the proposed ILC collider project in Japan.

ICFA noted interesting structural developments in the global organisation of HEP

A special session was devoted to the sustainability of HEP accelerator infrastructures, considering the need to invest efforts into guidelines that enable better comparison of the environmental reports of labs and infrastructures, in particular for future facilities. It was therefore natural for ICFA to also hear reports not only from the panel on Sustainable Accelerators and Colliders led by Thomas Roser (BNL), but also from the European Lab Directors Working Group on Sustainability. This group, chaired by Caterina Bloise (INFN) and Maxim Titov (CEA), is mandated to develop a set of key indicators and a methodology for the reporting on future HEP projects, to be delivered in time for the ESPP update.

Finally, ICFA noted some very interesting structural developments in the global organisation of HEP. In the Asia-Oceania region, ACFA-HEP was recently formed as a sub-panel under the Asian Committee for Future Accelerators (ACFA), aiming for a better coordination of HEP activities in this particular region of the world. Hopefully, this will encourage other world regions to organise themselves in a similar way in order to strengthen their voice in the global HEP community – for example in Latin America. Here, a meeting was organised in August by the Latin American Association for High Energy, Cosmology and Astroparticle Physics (LAA-HECAP) to bring together scientists, institutions and funding agencies from across Latin America to coordinate actions for jointly funding research projects across the continent.

The next in-person ICFA meeting will be held during the Lepton–Photon conference in Madison, Wisconsin (USA), in August 2025.

The post ICFA talks strategy and sustainability in Prague appeared first on CERN Courier.

]]>
Meeting report The 96th ICFA meeting heard extensive reports from the leading HEP laboratories and various world regions on their recent activities and plans. https://cerncourier.com/wp-content/uploads/2024/09/CCNovDec24_FN_ICFA.jpg
The Balkans, in theory https://cerncourier.com/a/the-balkans-in-theory/ Wed, 13 Nov 2024 09:32:13 +0000 https://cern-courier.web.cern.ch/?p=111431 The Southeastern European Network in Mathematical and Theoretical Physics has organised scientific training and research activities since its foundation in Vrnjačka Banja in 2003.

The post The Balkans, in theory appeared first on CERN Courier.

]]>
The Southeastern European Network in Mathematical and Theoretical Physics (SEENET-MTP) has organised scientific training and research activities since its foundation in Vrnjačka Banja in 2003. Its PhD programme started in 2014, with substantial support from CERN.

The Thessaloniki School on Field Theory and Applications in HEP was the first school in the third cycle of the programme. Fifty-four students from 16 countries were joined by a number of online participants in a programme of lectures and tutorials.

We are now approaching 110 years since the founding of the general theory of relativity and the theoretical prediction of the existence of black holes, followed by at least half a century of developments related to their quantum aspects. At the Thessaloniki School, Tarek Anous (Queen Mary) delivered a pivotal series of lectures on the thermal properties of black holes, entanglement and the information paradox, which remains unresolved.

Nikolay Bobev (KU Leuven) summarised the ideas behind holography; Daniel Grumiller (TU Vienna) addressed the application of the holographic principle in flat spacetimes, including Carrollian/celestial holography; Slava Rychkov (Paris-Saclay) gave an introduction to conformal field theory in various dimensions; while Vassilis Spanos (NKU Athens) provided an introduction to modern cosmology. The programme was completed by Kostas Skenderis (Southampton), who addressed renormalisation in conformal field theory, anti-de Sitter and de Sitter spacetimes.

The post The Balkans, in theory appeared first on CERN Courier.

]]>
Meeting report The Southeastern European Network in Mathematical and Theoretical Physics has organised scientific training and research activities since its foundation in Vrnjačka Banja in 2003. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24FN_SEENET-1-1.jpg
Accelerating climate mitigation https://cerncourier.com/a/accelerating-climate-mitigation/ Wed, 13 Nov 2024 09:31:02 +0000 https://cern-courier.web.cern.ch/?p=111306 Sustainable HEP 2024, the third online-only workshop on sustainable high-energy physics, convened more than 200 participants from 10 to 12 June.

The post Accelerating climate mitigation appeared first on CERN Courier.

]]>
Sustainable HEP 2024, the third online-only workshop on sustainable high-energy physics, convened more than 200 participants from 10 to 12 June. Emissions in HEP are principally linked to building and operating large accelerators, using gaseous detectors and using extensive computing resources. Over three half days, delegates from across the field discussed how best to participate in global efforts at climate-crisis mitigation.

HEP solutions

There is a scientific consensus that the Earth has been warming consistently since the industrial revolution, with the Earth’s surface temperature now about 1.2 °C warmer than in the late 1800s. The Paris Agreement of 2015 aims to limit this increase to 1.5 °C, requiring a 50% cut in emissions by 2030. However, the current rise in greenhouse-gas emissions far exceeds this target. The relevance of a 1.5 °C limit is underscored by the fact that the difference between now and the last ice age (12,000 years ago) is only about 5 °C, explained Veronique Boisvert (Royal Holloway) in her riveting talk on the intersection of HEP and climate solutions. If temperatures rise by 4 °C in the next 50 years, as predicted by the Intergovernmental Panel on Climate Change’s high-emissions scenario, it could cause disruptions beyond what our civilisation can handle. Intensifying heat waves and extreme weather events are already causing significant casualties and socio-economic disruptions, with 2023 the warmest year on record since 1850.

Masakazu Yoshioka (KEK) and Ben Shepherd (Daresbury) delved deeply into sustainable accelerator practices. Cement production for facility construction releases significant CO2, prompting research in material sciences to reduce these emissions. Accelerator systems consume significant energy, and if powered by electricity grids that rely on fossil fuels, their operation increases the carbon footprint. Energy-saving measures include reducing power consumption and recovering and reusing thermal energy, as demonstrated by CERN’s initiative to use LHC cooling water to heat homes in Ferney-Voltaire. Efforts should also focus on increasing CO2 absorption and fixation in accelerator regions. Such measures can be effective – Yoshioka estimated that Japan’s Ichinoseki forest can absorb more CO2 annually than the construction emissions of the proposed ILC accelerator over a decade.

Suzanne Evans (ARUP) explained how to perform lifecycle assessments of carbon emissions to evaluate environmental impacts. Sustainability efforts at C3, CEPC, CERN, DESY and ISIS-II were all presented. Thomas Roser (BNL) presented the ICFA strategy for sustainable accelerators, and Jorgen D’Hondt (Vrije Universiteit Brussel) outlined the Horizon Europe project Innovate for Sustainable Accelerating Systems (CERN Courier July/August 2024 p20).

Gaseous detectors contribute significantly to emissions through particle detection, cooling and insulation. Ongoing research efforts to develop eco-friendly gas mixtures for Cherenkov detectors, resistive plate chambers and other detectors were discussed at length – alongside an emphasis from delegates on the need for more efficient and leak-free recirculating systems. On the subject of greener computing solutions, Loïc Lannelongue (Cambridge) emphasised the high energy consumption of servers, storage and cooling. Collaborative efforts from grassroots movements, funding bodies and industry will be essential for progress.

Stopping global warming is an urgent task for humanity

Thijs Bouman (Groningen) delivered an engaging talk on the psychological aspects of sustainable energy transitions, emphasising the importance of understanding societal perceptions and behaviours. Ayan Paul (DESY) advocated for optimising scientific endeavours to reduce environmental impact, urging a balance between scientific advancement and ecological preservation. The workshop concluded with an interactive session on the “Know Your Footprint” tool by the Young High Energy Physicists (yHEP) Association, facilitated by Naman Bhalla (Freiburg), to calculate individual carbon impacts (CERN Courier May/June 2024 p66). The workshop also sparked dynamic discussions on reducing flight emissions, addressing travel culture and the high cost of public transport. Key questions included the effectiveness of lobbying and the need for more virtual meetings.

Jyoti Parikh, a recipient of the Nobel Peace Prize awarded to Intergovernmental Panel on Climate Change authors in 2007 and member of India’s former Prime Minister’s Council on Climate Change, presented the keynote lecture on global energy system and technology choices. While many countries aim to decarbonise their electricity grids, challenges remain. Green sources like solar and wind have low operating costs but unpredictable availability, necessitating better storage and digital technologies. Parikh emphasised that economic development with lower emissions is possible, but posed the critical question: “Can we do it in time?”

Stopping global warming is an urgent task for humanity. We must aim to reduce greenhouse-gas emissions to nearly zero by 2050. Collaboration within local communities and industries is imperative, and while individual efforts may seem small, every action is a step toward our collective benefit. Sustainable HEP 2024 showcased innovative ideas, practical solutions and collaborative efforts to reduce the environmental impact of HEP. The event highlighted the community’s commitment to sustainability while advancing scientific knowledge.

The post Accelerating climate mitigation appeared first on CERN Courier.

]]>
Meeting report Sustainable HEP 2024, the third online-only workshop on sustainable high-energy physics, convened more than 200 participants from 10 to 12 June. https://cerncourier.com/wp-content/uploads/2024/09/CCNovDec24_FN_Ichinoseki.jpg
Cristiana Peroni 1949–2024 https://cerncourier.com/a/cristiana-peroni-1949-2024/ Wed, 13 Nov 2024 09:27:10 +0000 https://cern-courier.web.cern.ch/?p=111470 Cristiana Peroni was team leader of the Torino group of the CMS collaboration.

The post Cristiana Peroni 1949–2024 appeared first on CERN Courier.

]]>
Cristiana Peroni

Cristiana Peroni, former team leader of the Torino group of the CMS collaboration, passed away on 19 June 2024.

Peroni obtained her degree in physics in 1974 at the University of Torino. She worked at an experiment on low-energy proton–antiproton collisions at the CERN Proton Synchrotron, before joining the European Muon Collaboration and, later, the New Muon Collaboration. After this, she moved to ZEUS at DESY and then CMS at the LHC, and was appointed full professor at the University of Torino in 2001.

Thanks to Cristiana’s initiative, in collaboration with Fabrizio Gasparini (project manager of the drift-tube project of CMS’s muon system), the Torino group joined the CMS collaboration in the late 1990s. The group took responsibility for the construction of the MB4 muon chambers, together with groups at Padua, Madrid and Aachen, which were responsible for the construction of the MB3, MB2 and MB1 layers of CMS’s drift-tube system, respectively.

At the same time, Cristiana started a collaboration with the JINR–Dubna group led by Igor Golutvin to realise a critical part of the system: the deposition of the field electrodes on the aluminium planes that form the structural element of the chambers. This collaboration was very successful: despite crucial issues related to complex logistics, it worked extremely well, guaranteeing the construction of the system within the required timeframe. Alongside hardware commitments, the team coordinated by Cristiana took on important roles of responsibility in the physics groups of the collaboration (in particular in the Higgs sector), and soon saw its expansion with the addition and merger of other groups in Torino, which added activities related to the tracker, electromagnetic calorimeter and precision proton spectrometer.

“Cris” was a determined and capable leader, highly appreciated for the attention she always paid to the professional growth of her collaborators, the career development of early-stage researchers, as well as the team building and mutual support that made her group united and coherent.

In the last part of her professional life, Cris turned her attention to research in medical physics, leaving the management of the CMS group to her collaborators, and carrying out research on hadron therapy. In this field, not only did she establish a new course on medical physics at Torino, but she was also instrumental in establishing the CNAO hadron-therapy facility in Pavia, which has been treating cancer patients for more than a decade.

The post Cristiana Peroni 1949–2024 appeared first on CERN Courier.

]]>
News Cristiana Peroni was team leader of the Torino group of the CMS collaboration. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24OBITS_Cristiana_feature-1.jpg
Sachio Komamiya 1952–2024 https://cerncourier.com/a/sachio-komamiya-1952-2024/ Wed, 13 Nov 2024 09:25:52 +0000 https://cern-courier.web.cern.ch/?p=111462 Sachio Komamiya was a prominent figure in the Japanese and International Linear Collider communities.

The post Sachio Komamiya 1952–2024 appeared first on CERN Courier.

]]>
Sachio Komamiya

Sachio Komamiya, a prominent figure in the Japanese and International Linear Collider communities, passed away on 5 June 2024 at the age of 71.

Born in Yokohama, Japan in 1952, Komamiya graduated from the University of Tokyo in 1976. He remained there as a graduate student, under the mentorship of Masatoshi Koshiba. Komamiya began his diverse international career by proposing an experiment using the PETRA electron–positron collider at DESY in collaboration with Heidelberg University and the University of Manchester. This collaboration led to the JADE experiment. Koshiba’s laboratory took charge of developing the lead–glass electromagnetic shower detector, which operated reliably and contributed to the discovery of gluons.

After obtaining his PhD for his work at DESY, Komamiya took up a postdoc position at the University of Heidelberg, joining the group of Joachim Heintze. He quickly integrated himself into the group and into the JADE collaboration in general, and was one of the first to perform searches for supersymmetric particles – his enthusiasm for this type of analysis earning him the nickname “SachiNo”.

In 1986 Komamiya’s interest in the highest-energy experiments led him to SLAC as a staff physicist. The construction of the SLAC Linear Collider (SLC) – the first linear collider – was underway. The SLC was a single-pass collider that used a linac to accelerate both electrons and positrons, a highly complex design. Komamiya worked on developing the arcs that bent the beams at the end of the linac, which were among the most complicated parts of the machine. Physics measurements at the SLC started in 1988 with the Mark II detector, and in 1990 Komamiya moved to Europe to join the OPAL experiment at the Large Electron Positron Collider.

Komamiya returned to Japan in 1999 and became a director of the International Center for Elementary Particle Physics at the University of Tokyo in 2000. While leading research and experiments there, he led Japan’s high-energy physics community, serving four terms as the chairman of the Japan Association of High Energy Physics and as a Japanese representative for the International Committee for Future Accelerators from 2000. His leadership and extensive international experience have been precious in advancing the International Linear Collider (ILC) project. In December 2012, a technical design report for the ILC was completed. Shortly afterwards, the ILC project was reorganised under the umbrellas of the Linear Collider Collaboration (LCC), led by Lyn Evans for project development, and the Linear Collider Board, which oversaw the LCC’s activity and was chaired by Komamiya.

Komamiya was eager to see the ILC become Japan’s first globally hosted project. He served as a diplomat to advance this vision, and was calm and patient when explaining to others the often-complex relations involved. Sachio thus fulfilled a critical and essential role bridging science and politics – a talent that, alongside his physics expertise, will be sorely missed.

The post Sachio Komamiya 1952–2024 appeared first on CERN Courier.

]]>
News Sachio Komamiya was a prominent figure in the Japanese and International Linear Collider communities. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24OBITS_Komamiya_feature-1.jpg
Hans Joachim Specht 1936–2024 https://cerncourier.com/a/hans-joachim-specht-1936-2024/ Wed, 13 Nov 2024 09:05:57 +0000 https://cern-courier.web.cern.ch/?p=111458 Hans Joachim Specht was one of the founders of ultra-relativistic heavy-ion physics and a pioneering figure in hadron cancer therapy.

The post Hans Joachim Specht 1936–2024 appeared first on CERN Courier.

]]>
Hans Joachim Specht, one of the founders of ultra-relativistic heavy-ion physics and a pioneering figure in hadron cancer therapy, passed away on 20 May 2024 at the age of 87. A graduate of the University of Munich and ETH Zurich, and full professor at the University of Heidelberg for more than 30 years, his career was distinguished by important contributions across a spectrum of scientific domains.

Hans started his academic career in atomic and nuclear physics in Munich, under the guidance of Heinz Maier-Leibnitz. A highlight was the discovery and precise measurement of shape isomerism in heavy nuclei. His observation of distinct rotational bands in plutonium-240 showed, for the first time, that nuclei can be in a strongly deformed cigar-shaped state shortly before fission, confirming the concept of a “double-humped” fission barrier. In Munich, and later in Heidelberg, he developed several innovative large-scale detectors for fission fragments and reaction products of heavy-ion collisions, becoming one of the leading experimentalists in the new field of heavy-ion physics, with experiments at the MPI for Nuclear Physics in Heidelberg and at the newly founded GSI in Darmstadt.

In the early 1980s, Hans reoriented his research towards the higher energies available at CERN. His contributions and advocacy, alongside a handful of other enthusiastic proponents, were instrumental in establishing CERN’s ultra-relativistic heavy-ion programme at the SPS, which was approved in 1984. He became the spokesperson of a first-generation heavy-ion experiment (Helios/NA34-2), initiator and spokesperson of a second-generation experiment (CERES/NA45), and a crucial supporter of the third-generation ALICE experiment at the LHC.

Hans was a brilliant experimentalist with a keen eye for cutting-edge detector concepts and how to apply them in a minimalistic approach. This was apparent in his masterpiece, the dilepton experiment CERES, which used a “hadron blind” double Cherenkov detector and a specially crafted magnetic field configuration to pick out and measure the rare electrons from the haystack of hadrons.

Initially with CERES, and later as a leading force within NA60, Hans succeeded in detecting, for the first time, thermally produced lepton pairs in heavy-ion collisions; the original discovery with NA45 remains one of the most cited papers from the SPS heavy-ion programme. The high-precision measurements at NA60 of what is arguably one of the most challenging signals (the Planck-like spectrum of thermal radiation at higher masses), and the precise characterisation of the in-medium modification of the ρ meson at lower masses, proved to be crucial in establishing the existence and properties of quark–gluon plasma. The enduring quality and relevance of these measurements remain unsurpassed almost two decades later.

Throughout his career, Hans held numerous positions in the realm of science policy at a variety of German and international research institutes. At CERN, he served as chair of the PSCC committee and as a member of the SPC. He was also a founding member of the first board of directors of the theory institute ECT* in Trento, a place that held special significance for him.

Hans was a brilliant experimentalist with a keen eye for cutting-edge detector concepts

As scientific director of GSI from 1992 to 1999, Hans set the course for the development and application of a groundbreaking innovation in radiation medicine: ion-beam cancer therapy. A pilot project at GSI for the irradiation of tumours with carbon-12 ions successfully treated 450 patients and led to the establishment of the Heidelberg Ion-Beam Therapy Center, the first European ion-beam therapy facility. Reflecting on his achievements, he was most proud of his contributions to ion-beam therapy. Additionally, Hans initiated discussions on the long-term future of GSI, which eventually led to the proposal for the international FAIR facility.

Hans also had a profound interest in the intersection of physics, music and neuroscience, collaborating with Hans-Günter Dosch on understanding perception of music and its physiological bases. This transdisciplinary approach produced highly cited publications on the differences in the auditory cortex between musicians and non-musicians, expanding the boundaries of how we understand the brain and its response to music.

Hans was an outstanding teacher, a prolific mentor, a successful science manager, but foremost, he was someone who profoundly loved physics, with a relentless drive to follow wherever his interests and research would lead him. His frequent and spirited commutes between Heidel­berg and CERN in his iconic green Lotus Elan will be fondly remembered. His critical guidance and profound questions will be deeply missed by all who had the privilege of knowing him.

The post Hans Joachim Specht 1936–2024 appeared first on CERN Courier.

]]>
News Hans Joachim Specht was one of the founders of ultra-relativistic heavy-ion physics and a pioneering figure in hadron cancer therapy. https://cerncourier.com/wp-content/uploads/2024/11/CCNovDec24OBITS_Specht-1.jpg
Werner Beusch 1930–2024 https://cerncourier.com/a/werner-beusch-1930-2024/ Wed, 13 Nov 2024 09:04:54 +0000 https://cern-courier.web.cern.ch/?p=111455 Werner Beusch, who played a pioneering role in the OMEGA spectrometer at CERN, passed away after a short illness on 4 May 2024.

The post Werner Beusch 1930–2024 appeared first on CERN Courier.

]]>
Werner Beusch, who played a pioneering role in the OMEGA spectrometer at CERN, passed away after a short illness on 4 May 2024.

A student of Paul Scherrer at ETH Zurich, Werner obtained his PhD in 1960 with a thesis on two-photon transitions in barium-137 and moved to CERN, joining the “Groupe Chambre Wilson” (a collaboration of teams from CERN, ETH Zurich and Imperial College London). Around that time, cloud chambers were being replaced with spark chambers. Werner, already very experienced in electronics despite his young age, designed and built the entire trigger system for spark chambers from scratch using discrete components (NIM modules were not yet available at the time!).

In the late 1960s Werner started working on the OMEGA project – a high-aperture electronic spectrometer to be installed on a PS beam line in the West Area. The spectrometer was envisioned to operate as a facility, with a standard suite of detectors that could be complemented by experiment-specific apparatus provided by the individual collaborations. This was achieved by a large (3 m diameter) superconducting magnet equipped with spark chambers, a triggering system and data acquisition. The original programme included missing-mass experiments, the study of baryon-exchange processes and leptonic hyperon decays, and experiments with hyperon beams and with polarised targets. After a few years, interest moved to new topics, such as photoproduction, charm production and QCD studies.

In 1976 the OMEGA spectrometer was moved to its final position in the West Area on a beam line from the newly built SPS. In 1979, under Werner’s supervision, the spectrometer – until then equipped with spark chambers and plumbicon cameras – was instrumented with the new, much faster and higher resolution multi-wire proportional chambers. The refurbished OMEGA quickly became the go-to facility for a wide range of experiments. Over the years, under Werner’s stewardship, the facility was continuously upgraded with new equipment such as drift chambers, ring-imaging Cherenkov detectors, silicon microstrips and silicon pixel detectors (which were deployed at OMEGA for the first time). Triggering and data acquisition were also continuously updated such that, throughout its 25-year lifetime, OMEGA remained at the forefront of technology. It hosted some 50 experiments, with achievements ranging from its essential role in the establishment of non-qq̄ mesons, to the detection of a (so-far unexplained) excess in the production of soft photons, to the observation of clear violations of factorisation in charm hadroproduction. The OMEGA scientific programme culminated in a key contribution to the discovery of quark–gluon plasma (QGP), with the detection of the signature enhancement pattern of strange and multi-strange hadrons in lead–lead collisions.

Werner retired from CERN in 1995, one year before OMEGA was closed, not because it had reached its time (QGP studies, then in full blossom, had to be hastily moved to the North Area), but to make room for an assembly and test facility for the LHC magnets. Throughout its lifetime, Werner truly was the “soul” of the OMEGA experiment, always present and ready to help. Swapping from one layout to the next (and from one experimental group to the next) was the standard way of operating, and Werner and his team had the heavy responsibility of keeping the spectrometer in good shape and guaranteeing a prompt and efficient restart of the experiments. Werner’s kind and thoughtful attitude was key to this and the many other OMEGA successes. His impassioned, matter-of-fact and selfless way of doing science influenced generations of physicists whose careers were forged at OMEGA. Werner coming into the control room and offering a basket of fruits from his garden remains vivid in the memory. We miss him dearly.

The post Werner Beusch 1930–2024 appeared first on CERN Courier.

]]>
News Werner Beusch, who played a pioneering role in the OMEGA spectrometer at CERN, passed away after a short illness on 4 May 2024. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24OBITS_Beusch-1.jpg
Olav Ullaland 1944–2024 https://cerncourier.com/a/olav-ullaland-1944-2024/ Mon, 11 Nov 2024 09:04:53 +0000 https://cern-courier.web.cern.ch/?p=111465 Olav Ullaland, a brilliant detector physicist who spent his career at CERN, passed away on 16 June 2024.

The post Olav Ullaland 1944–2024 appeared first on CERN Courier.

]]>
Olav Ullaland

Olav Ullaland, a brilliant detector physicist who spent his career at CERN, passed away on 16 June 2024.

Olav obtained his degree in particle physics at the University of Bergen in 1971. After a short period at Rutherford Appleton Laboratory in the UK, he went to CERN as a fellow in 1973, following which he was awarded a staff contract. He worked as a detector physicist at CERN until he retired in 2009, remaining active for several years as an emeritus. One of his last scientific articles dates from 2020.

Alongside detector R&D, Olav participated in several key CERN experiments. For the Split Field Magnet Detector, located at CERN’s Intersecting Storage Rings, he was in charge of the multi-wire proportional chambers and worked on the prototype of a novel electromagnetic calorimeter that was later adopted by the DELPHI experiment.

After contributing to the UA1 upgrade, he was asked to take a leading role in the complex barrel ring-imaging Cherenkov (RICH) project of DELPHI, which was the first attempt to integrate an imaging Cherenkov detector into a cylindrical collider experiment. The challenges were immense, as it was necessary to operate a gas and liquid radiator, together with a photosensitive gas, at different temperatures in a confined space. Thanks to Olav’s perseverance and the loyalty he inspired in his team, he was able to bring the apparatus to a level where it could be used in physics analysis, for example in the tagging of strange jets from Z and W decays. This was a critical milestone in the history of RICH detectors.

Around 1997 Olav joined LHCb and became a leader in the international effort to make its two RICH detectors a reality. Thanks to his deep knowledge of the many facets of detector physics and techniques, and his ability to remain calm, he and his team managed to find solutions to potential showstoppers. It is testament to Olav’s efforts that the particle identification system of LHCb works so impressively in the study of CP violation and heavy-flavour rare decays. In addition, Olav was the LHCb resource coordinator for several years, taking impeccable control of delicate LHCb financial matters at the beginning of the experiment operations. His expertise in leading many project reviews and trouble-shooting several wide-ranging detector subsystems was also in high demand both within and outside LHCb.

Olav was a wonderful collaborator. He was passionate in his support of students and fellows, and encouraged young people to give presentations and international talks, always graciously stepping away from the limelight himself. His dedication to student training was highlighted by his running of the CERN summer student programme, with both lectures and laboratory courses.

For Olav, work did not finish at CERN, but would be continued in any possible meeting place. These unconventional settings provided a conducive atmosphere to explore, discuss and challenge new projects and ideas, with the goal of promoting cohesion in a critical, constructive and friendly fashion.

Olav Ullaland was not only an outstanding researcher, but also a unique human being who left a deep impression on all those with whom he came into contact. We will never forget him.

The post Olav Ullaland 1944–2024 appeared first on CERN Courier.

]]>
News Olav Ullaland, a brilliant detector physicist who spent his career at CERN, passed away on 16 June 2024. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24OBITS_Ullaland_feature-1.jpg
Arnau Brossa Gonzalo 1993–2024 https://cerncourier.com/a/arnau-brossa-gonzalo-1993-2024/ Mon, 11 Nov 2024 08:37:28 +0000 https://cern-courier.web.cern.ch/?p=111473 Arnau’s warmth, kindness, dedication, intelligence and competence will be deeply missed by his many friends at the institute in Santiago and in the LHCb collaboration.

The post Arnau Brossa Gonzalo 1993–2024 appeared first on CERN Courier.

]]>
Arnau Brossa Gonzalo

Arnau Brossa Gonzalo, a postdoctoral researcher at the Galician Institute of High Energy Physics (IGFAE) working on the LHCb experiment, died in Santiago on 21 July 2024 following complications from a climbing accident.

Arnau obtained his degree in physics at the University of Barcelona in 2016, specialising in theoretical physics. He continued there for his master’s in astrophysics, particle physics and cosmology, with a thesis on the LHCb experiment.

In 2017 he embarked on his PhD studies in particle physics at the University of Warwick. His thesis, entitled “First observation of B0 → D*(2007)0K+π and B0s → D*(2007)0Kπ+ decays in LHCb”, won the Springer Thesis Prize for outstanding PhD research. This was the first LHCb measurement of B decays involving fully reconstructed neutral D* mesons, which are particularly challenging due to the soft neutral particles emitted in the D* → Dπ0 and D* → Dγ decays. These modes are nonetheless extremely important to understand as they are backgrounds to a wide range of other studies, including those used for precision measurements of the CKM angle γ.

Following the completion of his PhD, Arnau joined the LHCb group at IGFAE in 2022 to work further on the LHCb experiment, first as a postdoctoral researcher and later as a Juan de la Cierva researcher. He then joined the lepton-flavour-universality group at IGFAE, taking on a leading role in the measurement of the ratios of semileptonic-decay branching fractions to final states with tau leptons relative to muons, denoted R(D) and R(D*). Arnau had rapidly established himself as an expert in this area, and in early 2024 he had taken on convenership of the LHCb subgroup that was dedicated to this and to similar charged-current lepton-flavour-universality tests.

Arnau’s warmth, kindness, dedication, intelligence and competence will be deeply missed by his many friends at the institute in Santiago and in the LHCb collaboration.

The post Arnau Brossa Gonzalo 1993–2024 appeared first on CERN Courier.

]]>
News Arnau’s warmth, kindness, dedication, intelligence and competence will be deeply missed by his many friends at the institute in Santiago and in the LHCb collaboration. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24OBITS_Arnau_feature-1.jpg
Robert Aymar 1936-2024 https://cerncourier.com/a/robert-aymar-1936-2024/ Fri, 04 Oct 2024 12:13:26 +0000 https://preview-courier.web.cern.ch/?p=111365 Robert Aymar was the CERN Director General from January 2004 to December 2008.

The post Robert Aymar 1936-2024 appeared first on CERN Courier.

]]>
Robert Aymar, CERN Director General from January 2004 to December 2008, passed away on 23 September at the age of 88. An inspirational leader in big-science projects for several decades, including the International Thermonuclear Experimental Reactor (ITER), his term of office at CERN was marked by the completion of construction and the first commissioning of the Large Hadron Collider (LHC). His experience of complex industrial projects proved to be crucial, as the CERN teams had to overcome numerous challenges linked to the LHC’s innovative technologies and their industrial production.

Robert Aymar was educated at Ecole Polytechnique in Paris. He started his career in plasma physics at Commissariat à l’Energie Atomique (CEA), since renamed Commissariat à l’Energie Atomique et aux Energies Alternatives, at the time when thermonuclear fusion was declassified and research started on its application to energy production. After being involved in several studies at CEA, Aymar contributed to the design of the Joint European Torus, the European tokamak project based on conventional magnet technology, built in Culham, UK in the late 1970s. In the same period, CEA was considering a compact tokamak project based on superconducting magnet technology, for which Aymar decided to use pressurised superfluid helium cooling — a technology then recently developed by Gérard Claudet and his team at CEA Grenoble. Aymar was naturally appointed head of the TORE SUPRA tokamak project, built at CEA Cadarache from 1977 to 1988. The successful project served inter alia as an industrial-size demonstrator of superfluid helium cryogenics, which became a key technology of the LHC.

Robert Aymar set out to bring together the physics of the infinitely large and the infinitely small

As head of the Département des Sciences de la Matière at CEA from 1990 to 1994, Robert Aymar set out to bring together the physics of the infinitely large and the infinitely small, as well as the associated instrumentation, in a department that has now become the Institut de Recherche sur les Lois Fondamentales de l’Univers. In that position, he actively supported CEA-CERN collaboration agreements on R&D for the LHC and served on many national and international committees. In 1993 he chaired the LHC external review committee, whose recommendation proved decisive in the project’s approval. From 1994 to 2003, he led the ITER engineering design activities under the auspices of the International Atomic Energy Agency, establishing the basic design and validity of the project that would be approved for construction in 2006. In 2001, the CERN Council called on his expertise once again by entrusting him to chair the external review committee for CERN’s activities.

When Robert Aymar took over as Director General of CERN in 2004, the construction of the LHC was well under way. But there were many industrial and financial challenges, and a few production crises still to overcome. During his tenure, which saw the ramp-up, series production and installation of major components, the machine was completed and the first beams circulated. That first start-up in 2008 was followed by a major technical problem that led to a shutdown lasting several months. But the LHC had demonstrated that it could run, and in 2009 the machine was successfully restarted. Robert Aymar’s term of office also saw a simplification of CERN’s structure and procedures, aimed at making the laboratory more efficient. He also set about reducing costs and secured additional funding to complete the construction and optimise the operation of the LHC. After retirement, he remained active as scientific advisor to the head of the CEA, occasionally visiting CERN and the ITER construction site in Cadarache.

Robert Aymar was a dedicated and demanding leader, with a strong drive and search for pragmatic solutions in the activities he undertook or supervised. CERN and the LHC project owe much to his efforts. He was also a man of culture with a marked interest in history. It was a privilege to serve under his direction.

The post Robert Aymar 1936-2024 appeared first on CERN Courier.

]]>
News Robert Aymar was the CERN Director General from January 2004 to December 2008. https://cerncourier.com/wp-content/uploads/2024/10/robert-aymar.jpg
BWT water dispensers at CERN: a sustainable hydration solution https://cerncourier.com/a/bwt-water-dispensers-at-cern-a-sustainable-hydration-solution/ Fri, 27 Sep 2024 12:16:49 +0000 https://preview-courier.web.cern.ch/?p=111354 Since 2011, BWT (Best Water Technology) has been a proud partner of CERN, supplying state-of-the-art water dispensers to various sites and buildings within the vast CERN complex, which spans both Switzerland and France. Our partnership with CERN is rooted in our shared commitment to sustainability, innovation, and exceptional service. Today, we are pleased to report […]

The post BWT water dispensers at CERN: a sustainable hydration solution appeared first on CERN Courier.

]]>
Since 2011, BWT (Best Water Technology) has been a proud partner of CERN, supplying state-of-the-art water dispensers to various sites and buildings within the vast CERN complex, which spans both Switzerland and France. Our partnership with CERN is rooted in our shared commitment to sustainability, innovation, and exceptional service. Today, we are pleased to report that there are over 150 BWT water dispensers installed and actively serving the CERN community.

Enhancing Hydration at CERN

Our water dispensers provide a range of hydration options to meet the diverse needs of CERN’s staff, visitors, and contractors. Each dispenser offers cold water, ambient water, sparkling water, and even hot water. These dispensers are strategically placed throughout CERN’s numerous facilities, ensuring that hydration is always within easy reach.

CERN’s community is a dynamic and international one, with over 17,500 people from around the world working together to push the boundaries of scientific knowledge. This includes approximately 2,500 permanent staff members, as well as countless visitors and collaborators. Ensuring access to high-quality, sustainable hydration solutions is crucial in such an environment, where long hours and intense focus are the norms.

Sustainability at the Core

BWT’s partnership with CERN goes beyond providing high-quality water; it’s about embedding sustainability into everyday practices. Our water dispensers are designed to encourage the use of reusable bottles and cups. By offering easily accessible water stations, we help reduce the reliance on single-use plastic bottles, significantly cutting down on plastic waste.

A BWT water dispenser

The dispensers’ user-friendly design, with spouts specifically engineered to accommodate reusable bottles, further promotes this eco-friendly practice. This is particularly important at CERN, where sustainability is a core value. By choosing BWT, CERN demonstrates its commitment to environmental stewardship and the promotion of sustainable practices within the scientific community.

Technical Excellence and Reliable Service

CERN has trusted BWT not only for the quality of our products but also for our outstanding technical service. Gaining access to CERN’s premises requires authorization, reflecting the high-security environment of the world’s leading particle physics laboratory. Despite these stringent access controls, BWT’s technical team has consistently provided timely and efficient service, ensuring that all water dispensers operate at peak performance.

Our service includes regular maintenance and swift responses to any technical issues, ensuring minimal disruption to CERN’s daily operations. This reliable support has been a key factor in the long-standing relationship between BWT and CERN.

Meeting the Needs of a Diverse Community

The versatility of BWT water dispensers caters to the diverse hydration preferences of CERN’s international community. Whether someone prefers chilled water to stay refreshed during hot summer days, sparkling water for a fizzy treat, or hot water for a quick cup of tea, our dispensers deliver. This flexibility is highly appreciated in a setting as dynamic and varied as CERN.

Moreover, the availability of multiple water options in a single dispenser minimizes the need for separate machines, saving space and reducing energy consumption. This aligns perfectly with CERN’s efforts to optimize resource use and minimize its environmental footprint.

BWT’s water dispensers are more than just hydration stations; they are a testament to our commitment to sustainability, innovation, and excellent service. Our partnership with CERN highlights the importance of providing sustainable, high-quality water solutions in environments where excellence and precision are paramount.

As CERN continues to explore the frontiers of science, BWT is proud to support its mission by ensuring that the people driving these groundbreaking discoveries stay hydrated. Together, we are making strides towards a more sustainable future, one refillable bottle at a time.

The post BWT water dispensers at CERN: a sustainable hydration solution appeared first on CERN Courier.

]]>
Advertising feature https://cerncourier.com/wp-content/uploads/2024/09/CCSepOct24_Advertorial_BWT2.jpg
GTT supports groundbreaking neutrino research https://cerncourier.com/a/gtt-supports-groundbreaking-neutrino-research/ Fri, 27 Sep 2024 09:24:05 +0000 https://preview-courier.web.cern.ch/?p=111350 Over the past 60 years, GTT has established itself as the technology expert in membrane containment systems for the transport and storage of liquefied gases. In 2023, 511 of the world’s 629 liquefied natural gas carriers with a capacity over 100,000 m³ were equipped with GTT technology. Innovation is at the heart of GTT’s strategy, […]

The post GTT supports groundbreaking neutrino research appeared first on CERN Courier.

]]>
Over the past 60 years, GTT has established itself as the technology expert in membrane containment systems for the transport and storage of liquefied gases. In 2023, 511 of the world’s 629 liquefied natural gas carriers with a capacity over 100,000 m³ were equipped with GTT technology. Innovation is at the heart of GTT’s strategy, as demonstrated by its 3295 registered patents and its position as the leading medium-sized company for patent filings in 2023. Today, GTT is applying its expertise to the Deep Underground Neutrino Experiment (DUNE), adapting its advanced solutions to support this groundbreaking scientific research.

The project

DUNE is an international research initiative aimed at enhancing the understanding of neutrinos. It is a dual-site experiment for both neutrino science and proton decay studies. The project utilises neutrinos generated by Fermilab’s Long-Baseline Neutrino Facility (LBNF). Once completed, the LBNF will feature the world’s highest intensity neutrino beam. The infrastructure necessary to support the massive cryogenic far detectors will be installed at the Sanford Underground Research Facility (SURF) 1300 km downstream, in Lead, South Dakota, US. These detectors are housed in large instrumented cryostats filled with liquid argon.

The challenge

The experimental facilities will include several individual cryogenic detectors, each housed inside a large, instrumented cryostat filled with 17,500 tonnes of liquid argon. In this context, the liquid argon must be maintained at a stable temperature of –186°C, requiring perfect tightness, material purity and high thermal insulation. To ensure the proper functioning of the time projection chamber and allow electrons to drift over long distances, the impurity level of the liquid argon must not exceed 0.1 parts per billion.

The solution

GTT provided a solution based on its technology, which is typically used in cargo ships transporting liquefied natural gas stored at –163°C. GTT’s patented membrane containment system uses two cryogenic envelopes to contain and isolate the liquefied gas. This modular system can be assembled to accommodate large volumes. GTT has offered its services to CERN to provide a solution to the LBNF/DUNE challenge. Each DUNE cryostat is a membrane cryostat constructed with an adapted Mark III membrane containment system developed by GTT.

The Mark III membrane system is a containment and insulation system directly supported by the ship’s hull structure. The containment system consists of a corrugated stainless-steel primary membrane, in contact with the fluid, placed on a prefabricated insulating panel made of reinforced polyurethane foam, incorporating a composite secondary membrane made of Triplex (aluminium foil between two glass cloths). This modular system integrates standard prefabricated components designed to be produced on a large scale and easily assembled – and it can be adapted to any tank shape and capacity.

GTT’s technologies are constantly optimised to meet the expectations of ship-owners and shipyards, its usual market, while complying with changes in maritime regulations. Since 2008, GTT has been working on developments of the Mark III concept, dedicated to improving the thermal and structural efficiency of the technology. In 2011, GTT launched the Mark III Flex technology, an improved version of Mark III, which offers a guaranteed boil-off rate of 0.07% volume/day, thanks to an insulation thickness increased to 480 mm.

Why not extend this technology to another field? GTT and CERN have collaborated since 2013 to tailor GTT’s technology to CERN’s requirements, focusing on thermal performance and the containment of ultra-pure liquid argon for the time projection chambers required for DUNE. Leveraging the adaptability of the Mark III system, GTT has designed six tanks with CERN, resulting in a fully tested technology that meets CERN’s requirements. The collaboration began with a 17 m³ initial prototype commissioned in 2017, followed by two 600 m³ tanks, ProtoDUNE, commissioned in 2018 and 2019. The design showed areas for further improvement and required specific upgrades.

Following this initial set of prototypes, CERN and GTT worked together to propose an improved design. This design, optimised for cryogenic conditions, offers excellent containment tightness and thermal insulation, which helps maintain argon purity. The adapted technology includes:

– approximately 800 mm of insulation thickness;

– specific panel arrangements;

– double containment;

– tightness ensured by a combination of stainless steel (1.2 mm) for the primary barrier, a composite material (0.7 mm) for the secondary barrier and a reinforced polyurethane foam for insulation.

This optimised design has been tested and commissioned for two tanks so far. The first, a 200 m³ short-baseline near detector tank sitting in the Booster Neutrino Beam at Fermilab, was commissioned in January 2023, and the second, a 600 m³ tank for the DarkSide experiment at the Gran Sasso National Laboratory in Assergi, Italy, was commissioned in June 2024.

The future

In the coming years, CERN and GTT will continue their collaboration with future targets already identified. The construction of two tanks, each with a capacity of 12,500 m³, for the DUNE far detector cryostats, to be installed at SURF in Lead, 1300 km downstream, will be the pinnacle of this collaboration. The design of the containment system has been completed by GTT, and the start of construction is planned for 2025.

The post GTT supports groundbreaking neutrino research appeared first on CERN Courier.

]]>
Advertising feature https://cerncourier.com/wp-content/uploads/2024/09/CCSepOct24_Advertorial_GTT.jpg
Slovak LV cabinets contribute to investigating unresolved questions about the formation of the universe https://cerncourier.com/a/slovak-lv-cabinets-contribute-to-investigating-unresolved-questions-about-the-formation-of-the-universe/ Fri, 27 Sep 2024 09:17:54 +0000 https://preview-courier.web.cern.ch/?p=111346 Slovakia has established itself as a significant player in the nuclear energy sector, primarily due to its nuclear capacities and a strategy focused on sustainability and energy security. Moreover, Slovakia’s commitment to nuclear energy is also evident in its strategic partnerships and collaborations with international organisations, including CERN. PPA ENERGO, the largest member of the […]

The post Slovak LV cabinets contribute to investigating unresolved questions about the formation of the universe appeared first on CERN Courier.

]]>
Michal Cunik

Slovakia has established itself as a significant player in the nuclear energy sector, primarily due to its nuclear capacities and a strategy focused on sustainability and energy security. Moreover, Slovakia’s commitment to nuclear energy is also evident in its strategic partnerships and collaborations with international organisations, including CERN.

PPA ENERGO, the largest member of the PPA CONTROLL group, specialises in delivering comprehensive solutions in automated control systems, field instrumentation and electrical systems. Our services encompass every stage of the project, ensuring seamless integration and performance across the entire lifecycle. This includes engineering, procurement, installation, testing and commissioning, service and maintenance, and, of course, the manufacturing of LV panels. Our extensive experience in manufacturing low-voltage panels, including their qualification for seismic resistance, EMC, vibration, aging, magnetic field resistance and more, combined with our deep expertise in the nuclear industry, has paved the way for prestigious opportunities, such as collaboration with CERN.

PPA ENERGO has demonstrated its ability to apply extensive expertise and experience in the execution of complex infrastructure projects to support CERN’s initiatives. With a wealth of experience in significant nuclear power plant construction projects, such as Mochovce Units 3 and 4 (Slovakia), Hinkley Point C (UK) and others, we have refined our ability to deliver top-tier solutions in challenging environments. Proven capabilities in managing large-scale, critical projects are expected to bring substantial value to CERN.

From technical design to Switzerland

CERN’s requirement was to design, manufacture and test the control and power distribution cabinets for the ATLAS and CMS 2PACL CO2 detector cooling systems. The cooling modules will circulate liquid CO2 through evaporators specifically designed for the detectors in a “two-phase pumped loop scheme”. Each cooling module will be equipped with a dedicated diaphragm pump for liquid CO2. Our control and power distribution cabinets will be part of this cooling system. After the successful qualification of our distribution panels approved by CERN, the first series was successfully delivered to Switzerland. Based on the positive feedback and personal visits of CERN’s technical team to our production hall, we were then commissioned to manufacture the second batch of distribution panels.

As Michal Cunik (pictured with the distribution panels prepared for transport to CERN), the designer responsible for the production of the cabinets, stated: “The major challenge was that we also had to prepare detailed 3D models, a digital twin of the panel to ensure precise replication and future facility maintenance and upgrades.” At this stage, intensive production of the distribution panels is underway, with planned completion in December 2025.

The successful delivery and ongoing production of the distribution panels has elevated our collaboration with CERN to the highest level.

The post Slovak LV cabinets contribute to investigating unresolved questions about the formation of the universe appeared first on CERN Courier.

]]>
Advertising feature https://cerncourier.com/wp-content/uploads/2024/09/CCSepOct24_Advertorial_PPT_feature.jpg
CERN to insource beam-pipe production https://cerncourier.com/a/cern-to-insource-beam-pipe-production/ Wed, 25 Sep 2024 13:30:35 +0000 https://preview-courier.web.cern.ch/?p=111324 The laboratory will acquire unique expertise useful to the HL-LHC experiments, future projects and other accelerators around the world.

The post CERN to insource beam-pipe production appeared first on CERN Courier.

]]>
In the Large Hadron Collider (LHC), counter-rotating beams of protons travel in separate chambers under high vacuum to avoid scattering with gas molecules. At four places around the 27-km ring, the beams enter a single chamber, where they collide. To ensure that particles emerging from the high-energy collisions pass into the ALICE, ATLAS, CMS and LHCb detectors with minimal disturbance, the experiments’ vacuum chambers must be as transparent as possible to radiation, placing high demands on materials and production.

The sole material suitable for the beam pipes at the heart of the LHC experiments is beryllium — a substance used in only a few other domains, such as the aerospace industry. Its low atomic number (Z = 4) leads to minimal interaction with high-energy particles, reducing scattering and energy loss. The only solid element with a lower atomic number is lithium (Z = 3), but it cannot be used as it oxidises rapidly and reacts violently with moisture, producing flammable hydrogen gas. Despite being less dense than aluminium, beryllium is six times stronger than steel, and can withstand the mechanical stresses and thermal loads encountered during collider operations. Beryllium also has good thermal conductivity, which helps dissipate the heat generated during beam collisions, preventing the beam pipe from overheating.
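The transparency argument can be made quantitative using radiation lengths. The sketch below uses published Particle Data Group radiation-length values; the 0.8 mm wall thickness is a round illustrative number, not the LHC specification:

```python
# Illustrative comparison of beam-pipe "transparency" via radiation length.
# X0 values (in cm) are standard PDG figures; the wall thickness is an
# assumption chosen only to make the comparison concrete.

X0_CM = {
    "beryllium": 35.28,
    "aluminium": 8.897,
    "iron (steel)": 1.757,
}

WALL_CM = 0.08  # 0.8 mm wall, illustrative only

def fraction_of_x0(material: str, thickness_cm: float = WALL_CM) -> float:
    """Fraction of a radiation length traversed at normal incidence:
    smaller means the wall is more transparent to emerging particles."""
    return thickness_cm / X0_CM[material]

for m in X0_CM:
    print(f"{m:>13}: {100 * fraction_of_x0(m):.2f}% of X0")
```

For the same wall, beryllium presents roughly a quarter of the material budget of aluminium and about a twentieth that of steel, which is why it is favoured where transparency to radiation matters most.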

But beryllium also has drawbacks. It is expensive to procure as it comes in the form of a powder that must be compressed at very high pressure to obtain metal rods, and as beryllium is toxic, all manufacturing steps require strict safety procedures.

By bringing beam-pipe production in-house, CERN will acquire unique expertise

The last supplier worldwide able to machine and weld beryllium beam pipes within the strict tolerances required by the LHC experiments decided to discontinue their production in 2023. Given the need for multiple new beam pipes as part of the forthcoming high-luminosity upgrade to the LHC (HL-LHC), CERN has decided to build a new facility to manufacture vacuum pipes on site, including parts made of beryllium. A 650 m2 workshop is scheduled to begin operations on CERN’s Prévessin site next year.

By insourcing beryllium beam-pipe production, CERN will gain direct control of the manufacturing process, allowing stricter quality assurance and greater flexibility to meet changing experimental requirements. The new facility will include several spaces to perform metallurgical analysis, machining of components, surface treatments, final assembly by electron-beam welding, and quality control steps such as metrology and non-destructive tests. As soon as beryllium beam pipes are fabricated, they will follow the usual steps for ultra-high vacuum conditioning that are already available in CERN’s facilities. These include helium leak tests, non-evaporable-getter thin-film coatings, the installation of bakeout equipment, and final vacuum assessments.

Once the new workshop is operational, the validation of the different manufacturing processes will continue until mid-2026. Production will then begin for new beam pipes for the ALICE, ATLAS and CMS experiments in time for the HL-LHC, as each experiment will replace their pixel tracker – the sub-detector closest to the beam – and therefore require a new vacuum chamber. With stricter manufacturing requirements never accomplished before, and a conical section designed to maximise transparency in the forward regions where particles pass through at smaller angles, ALICE’s vacuum chamber will pose a particular challenge. Together totalling 21 m in length, the first three beam pipes to be constructed at CERN will be installed in the detectors during the LHC’s Long Shutdown 3 from 2027 to 2028.

By bringing beam-pipe production in-house, CERN will acquire unique expertise that will be useful not only for the HL-LHC experiments, but also for future projects and other accelerators around the world, and preserve a fundamental technology for experimental beam pipes.

The post CERN to insource beam-pipe production appeared first on CERN Courier.

]]>
News The laboratory will acquire unique expertise useful to the HL-LHC experiments, future projects and other accelerators around the world. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24_NA_pipe.jpg
An intricate web of interconnected strings https://cerncourier.com/a/an-intricate-web-of-interconnected-strings/ Tue, 24 Sep 2024 10:23:20 +0000 https://preview-courier.web.cern.ch/?p=111302 The Strings 2024 conference looked at the latest developments in the interconnected fields of quantum gravity and quantum field theory, all under the overarching framework of string theory.

The post An intricate web of interconnected strings appeared first on CERN Courier.

]]>
Strings 2024 participants

Since its inception in the mid-1980s, the Strings conference has sought to summarise the latest developments in the interconnected fields of quantum gravity and quantum field theory, all under the overarching framework of string theory. As one of the most anticipated gatherings in theoretical physics, the conference serves as a platform for exchanging knowledge, fostering new collaborations and pushing the boundaries of our understanding of the fundamental aspects of the physical laws of nature. The most recent edition, Strings 2024, attracted about 400 in-person participants to CERN in June, with several hundred more scientists following on-line.

One way to view string theory is as a model of fundamental interactions that provides a unification of particle physics with gravity. While generic features of the Standard Model and gravity arise naturally in string theory, it has lacked concrete experimental predictions so far. In recent years, the strategy has shifted from concrete model building to more systematically understanding the universal features that models of particle physics must satisfy when coupled to quantum gravity.

Into the swamp

Remarkably, there are very subtle consistency conditions that are invisible in ordinary particle physics, as they involve indirect arguments such as whether black holes can evaporate in a consistent manner. This has led to the notion of the “Swampland”, which encompasses the set of otherwise well-behaved quantum field theories that fail these subtle quantum-gravity consistency conditions. This may lead to concrete implications for particle physics and cosmology.

An important question addressed during the conference was whether these low-energy consistency conditions always point back to string theory as the only consistent “UV completion” (fundamental realisation at distance scales shorter than can be probed at low energies) of quantum gravity, as suggested by numerous investigations. Whether there is any other possible UV completion involving a version of quantum gravity unrelated to string theory remains an important open question, so it is no surprise that significant research efforts are focused in this direction.

Attempts at explicit model construction were also discussed, together with a joint discussion on cosmology, particle physics and their connections to string theory. Among other topics, recent progress on realising accelerating cosmologies in string theory was reported, as well as a stringy model for dark energy.

A different viewpoint, shared by many researchers, is to employ string theory rather as a framework or tool to study quantum gravity, without any special emphasis on its unification with particle physics. It has long been known that there is a fundamental tension when trying to combine gravity with quantum mechanics, which many regard as one of the most important, open conceptual problems in theoretical physics. This becomes most evident when one zooms in on quantum black holes. It was in this context that the holographic nature of quantum gravity was discovered – the idea that all the information contained within a volume of space can be described by data on its boundary, suggesting that the universe’s fundamental degrees of freedom can be thought of as living on a holographic screen. This may not only hold the key for understanding the decay of black holes via Hawking radiation, but can also teach us important lessons about quantum cosmology.

Strings serves as a platform for pushing the boundaries of our understanding of the fundamental aspects of the physical laws of nature

Thousands of papers have been written on this subject over the past few decades, and indeed holographic quantum gravity continues to be one of string theory’s most active subfields. Recent breakthroughs include the exact or approximate solution of quantum gravity in low-dimensional toy models in anti-de Sitter space, the extension to de Sitter space, an improved understanding of the nature of microstates of black holes and the precise way they decay, the discovery of connections between emergent geometry and quantum information theory, and the development of powerful tools for investigating these phenomena, such as bootstrap methods.

Other developments that were reviewed include the use of novel kinds of generalised symmetries and string field theory. Strings 2024 also gave a voice to more tangentially related areas such as scattering amplitudes, non-perturbative quantum field theory, particle phenomenology and cosmology. Many of these topics are interconnected to the core areas mentioned in this article and with each other, both technically and/or conceptually. It is this intricate web of highly non-trivial consistent interconnections between subfields that generates meaning beyond the sum of its parts, and forms the unifying umbrella called string theory.

The conference concluded with a novel “future vision” session, which considered 100 crowd-sourced open questions in string theory that might plausibly be answered in the next 10 years. These 100 questions provide a glimpse of where string theory may head in the near future.

The post An intricate web of interconnected strings appeared first on CERN Courier.

]]>
Meeting report The Strings 2024 conference looked at the latest developments in the interconnected fields of quantum gravity and quantum field theory, all under the overarching framework of string theory. https://cerncourier.com/wp-content/uploads/2024/09/CCNovDec24_FN_Strings_feature.jpg
A decider for CERN’s next collider https://cerncourier.com/a/a-decider-for-cerns-next-collider/ Thu, 19 Sep 2024 10:23:16 +0000 https://preview-courier.web.cern.ch/?p=111096 The third update of the European strategy for particle physics is underway.

The post A decider for CERN’s next collider appeared first on CERN Courier.

]]>
The third update of the European strategy for particle physics, launched by the CERN Council on 21 March, is getting into its stride. At its June session, the Council elected former ATLAS spokesperson Karl Jakobs (University of Freiburg) as strategy secretary and established a European Strategy Group (ESG), which is responsible for submitting final recommendations to Council for approval in early 2026. The aim of the strategy update, states the ESG remit, is to develop “a visionary and concrete plan that greatly advances human knowledge in fundamental physics through the realisation of the next flagship project at CERN”.

“Given the long timescales involved in building large colliders, it is vital that the community reaches a consensus to enable Council to take a decision on the next collider at CERN in 2027/2028,” Jakobs told the Courier. To reach that consensus it is important that the whole community is involved, he says, emphasising that, compared to previous strategy updates, there will be more opportunities to provide input at different stages. “There is excellent progress with the LHC and no new indication that would change our physics priorities: understanding the Higgs boson much better and exploring further the energy frontier are key to the next project.”

The European strategy for particle physics is the cornerstone of Europe’s decision-making process for the long-term future of the field. It was initiated by the CERN Council in 2005, when completing the LHC was listed as the top scientific priority, and has been updated twice. The first strategy update, adopted in 2013, continued to prioritise the LHC and its high-luminosity upgrade, and stated that Europe needed to be in a position to propose an ambitious post-LHC accelerator project at CERN by the time of the next strategy update. The second strategy update, completed in 2020, recommended an electron–positron Higgs factory as the highest priority, and that a technical and financial feasibility study for a next-generation hadron collider should be pursued in parallel.

Significant progress has been made since then. A feasibility study for the proposed Future Circular Collider (FCC) at CERN presented a mid-term report in March 2024, with a final report expected in spring 2025 (CERN Courier March/April 2024 pp25–38). There is also a clearer view of the international landscape. In December 2023 the US “P5” prioritisation process stated that the US would support a Higgs factory in the form of an FCC-ee at CERN or an International Linear Collider (ILC) in Japan, while also exploring the feasibility of a high-energy muon collider at Fermilab (CERN Courier January/February 2024 p7). Shortly afterwards, a technical design report for the proposed Circular Electron Positron Collider (CEPC) in China was released (CERN Courier March/April 2024 p39). The ILC project has meanwhile established an international technology network in a bid to increase global support.

Alternative scenarios

In addition to identifying the preferred option for the next collider at CERN, the strategy update is expected to prioritise alternative options to be pursued if the chosen preferred plan turns out not to be feasible or competitive. “That we should discuss alternatives to the chosen baseline is important to this strategy update,” says Jakobs. “If the FCC were chosen, for example, a lower-energy hadron collider, a linear collider and a muon collider are among the options that would likely be considered. However, in addition to differences in the physics potential we have to understand the technical feasibility and the timelines. Some of these alternatives may also require an extension of the physics exploitation at the HL-LHC.”

Given the long timescales involved in building large colliders, it is vital that the community reaches a consensus

The third strategy update will also indicate physics areas of priority for exploration complementary to colliders and add other relevant items, including accelerator, detector and computing R&D, theory developments, actions to minimise environmental impact and improve the sustainability of accelerator-based particle physics, initiatives to attract, train and retain early-career researchers, and public engagement.

The particle-physics community is invited to submit written inputs by 31 March 2025 via an online portal that will appear on the strategy secretariat’s web page. This will be followed by a scientific open symposium from 23 to 27 June 2025, where researchers will be invited to debate the future orientation of European particle physics. A “briefing book” based on the input and discussions will then be prepared by the physics preparatory group, the makeup of which was to be established by the Council in September before the Courier went to press. The briefing book will be submitted to the ESG by the end of September 2025 for consideration during a five-day-long drafting session, which is scheduled to take place from 1 to 5 December 2025. To allow the national communities to react to the submissions collected by March 2025 and to the content of the briefing book, they are offered further opportunities for input both ahead of the open symposium (with a deadline of 26 May 2025) and ahead of the drafting session (with a deadline of 14 November 2025). The ESG is expected to submit the proposed strategy update to the CERN Council by the end of January 2026.

“The timing is well chosen because at the end of 2025 we will have a lot of the relevant information, namely the final outcome of the FCC feasibility study plus, on the international scale, an update about what is going to happen in China,” says Jakobs. “The national inputs, whereby national communities are also invited to discuss their priorities, are considered very important and ECFA has produced guidelines to make the input more coherent. Early-career researchers are encouraged to contribute to all submissions, and we have restructured the physics preparatory group such that each working group has a scientific secretary who is an early-career researcher. We look forward to a very fruitful process over the forthcoming one and a half years.”

The post A decider for CERN’s next collider appeared first on CERN Courier.

]]>
News The third update of the European strategy for particle physics is underway. https://cerncourier.com/wp-content/uploads/2024/09/CCSepOct24_NA_ESU.jpg
Excellence in precision: advanced RF measurement technology for particle accelerators https://cerncourier.com/a/excellence-in-precision-advanced-rf-measurement-technology-for-particle-accelerators/ Thu, 19 Sep 2024 09:34:42 +0000 https://preview-courier.web.cern.ch/?p=111299 Radio frequency (RF) systems are central to particle accelerators, and they require a wide variety of test and measurement equipment in both their developmental and operational stages. Precise, dependable instrumentation is essential for monitoring and controlling different aspects of RF systems. RF systems generate, control and manage the electric fields used for particle acceleration. Central […]

The post Excellence in precision: advanced RF measurement technology for particle accelerators appeared first on CERN Courier.

]]>
MXO 5 series oscilloscope

Radio frequency (RF) systems are central to particle accelerators, and they require a wide variety of test and measurement equipment in both their developmental and operational stages. Precise, dependable instrumentation is essential for monitoring and controlling different aspects of RF systems.

RF systems generate, control and manage the electric fields used for particle acceleration. Central to these systems are RF cavities, which are evacuated metallic structures that support an electric field at a specific (radio) frequency. RF pulses are used to generate electric fields within these cavities, and the cavities have specific resonant frequencies that match the frequency of the pulses. Charged particles gain energy from these fields as they pass through the cavities at precise moments.
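The link between a cavity's geometry and its resonant frequency can be sketched with the textbook "pillbox" (cylindrical cavity) formula for the fundamental TM010 mode. Real accelerator cavities are shaped and tuned, so this is only a first estimate; the 400 MHz figure below is the approximate LHC RF frequency, used here purely as a worked example:

```python
import math

C = 299_792_458.0        # speed of light in vacuum, m/s
X01 = 2.404825557695773  # first zero of the Bessel function J0

def tm010_frequency_hz(radius_m: float) -> float:
    """Resonant frequency of the TM010 mode of an ideal cylindrical
    ("pillbox") cavity: f = x01 * c / (2 * pi * R). Note it depends only
    on the radius, not on the cavity length."""
    return X01 * C / (2 * math.pi * radius_m)

# Example: an ideal pillbox resonating near the LHC's ~400 MHz RF
# frequency would need a radius of roughly 0.29 m.
radius = X01 * C / (2 * math.pi * 400e6)
print(f"radius for 400 MHz: {radius:.3f} m")
```

The inverse scaling between radius and frequency is why higher-frequency accelerating structures are physically smaller.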

Monitoring RF signals in the time domain

Monitoring RF signals in the time domain is crucial for detecting and analysing transients, phase shifts and other dynamic behaviours that can affect system performance. For such time domain analyses, oscilloscopes are essential.

The MXO 5 oscilloscope from Rohde & Schwarz is a true pioneer in test and measurement technology. As the world’s first eight-channel oscilloscope that offers 4.5 million acquisitions/s, the MXO 5 sets a new standard in real-time signal capture. The fast Fourier transform (FFT) technology of the MXO 5 is unique: the oscilloscope can show four FFTs in parallel with a maximum update rate of 45,000 FFT/s per channel.

For the same capabilities in a compact form factor, check out the MXO 5C: a screenless oscilloscope that occupies significantly less vertical space than the MXO 5. This saves space on the rack and makes it easy to connect a further MXO 5C oscilloscope to increase the number of available channels (figure 1).

Master oscillator in storage ring 

The master oscillator is at the heart of the storage ring and serves as the primary source of timing and synchronisation for the entire accelerator system. It generates a stable and precise reference frequency, which is used to ensure that RF cavities operate at a frequency that matches the revolution frequency of the particles.

R&S SMA100B RF and microwave signal generator

The R&S®SMA100B RF and microwave signal generator is ideal for this purpose (figure 2). As the world’s leading signal generator, it can handle the most demanding test and measurement tasks on both module and system levels. With the R&S®SMA100B, it is no longer necessary to choose between signal purity and high output power: it is the only signal generator on the market that can supply signals with ultra-high output power in combination with extremely low harmonic signal components. It is also capable of generating microwave signals with extremely low close-in SSB phase noise, which improves operational efficiency by helping to prevent large energy spreads within particle beams.

Amplifying RF pulses 

Broadband amplifiers are used to amplify RF pulses to the required power levels. In a typical setup, an amplifier might be connected to an RF source generating the base signal. The amplifier boosts this base signal to a specified power level before it is fed into the RF cavities of the accelerator.

The Rohde & Schwarz high-power transmitter and broadband amplifiers address customer demands for the highest amplitude and phase stability, lowest phase noise, top energy efficiency, small footprint and modular design. The R&S®BBA150 and R&S®BBA300 are robust solid-state power amplifiers covering ultra-broad frequency ranges. They offer high availability, and their modular designs provide experimental flexibility, enabling quick reconfiguration to support different setups and eliminating the need for multiple dedicated amplifiers.

Minimising phase noise

The phase of the RF cavity electric field must be extremely stable; phase noise can cause particles to experience different levels of acceleration, leading to energy spread within the particle beam.

An important aspect of minimising phase noise is introducing advanced feedback systems. Accelerators should be equipped with real-time monitoring and feedback systems that continuously adjust the phase of the RF pulses to counteract any phase noise that does arise. The R&S®FSWP phase-noise analyser and voltage-controlled oscillator (VCO) tester is the optimum solution for precise phase-noise measurement. It is ideal for pulsed signals and has an internal source for measuring additive phase noise.

Rohde & Schwarz – partner to the global research community

Rohde & Schwarz has 90 years of experience in high-energy RF signal generation, signal amplification and state-of-the-art test and measurement solutions. We have built up long-lasting relationships within the global research community, offering our expertise and market-leading solutions to labs and institutions worldwide. From beam testing to safe particle storage, we have the background to help you address the highly sophisticated requirements of accelerator testing.

Discover more particle-acceleration solutions from Rohde & Schwarz or get in touch with us.

The post Excellence in precision: advanced RF measurement technology for particle accelerators appeared first on CERN Courier.

]]>
Advertising feature https://cerncourier.com/wp-content/uploads/2024/09/CCSepOct24_Advertorial_Rhode1.jpg
24 years of CERN and WinCC OA: the success story of a groundbreaking technological partnership https://cerncourier.com/a/24-years-of-cern-and-wincc-oa-the-success-story-of-a-groundbreaking-technological-partnership/ Wed, 18 Sep 2024 16:07:12 +0000 https://preview-courier.web.cern.ch/?p=111291 This relationship, initiated in 2000, has not only endured but also set a benchmark for managing and evolving complex control systems. Rigorous selection process In the late 1990s, CERN undertook an extensive evaluation to choose a SCADA (supervisory control and data acquisition) system for its Large Hadron Collider (LHC) detectors. The process spanned two years […]

The post 24 years of CERN and WinCC OA: the success story of a groundbreaking technological partnership appeared first on CERN Courier.

]]>
This relationship, initiated in 2000, has not only endured but also set a benchmark for managing and evolving complex control systems.

Rigorous selection process

In the late 1990s, CERN undertook an extensive evaluation to choose a SCADA (supervisory control and data acquisition) system for its Large Hadron Collider (LHC) detectors. The process spanned two years and involved 10 person-years of testing and evaluation. Six products were rigorously assessed for functionality, performance, scalability and openness. WinCC OA emerged as the top choice, primarily due to its robust architecture and potential for future development, even though it did not fully meet CERN’s requirements at the time.

Strategic partnership formation 

Recognising the need for significant enhancements to WinCC OA, CERN sought more than just a transactional relationship. A symbiotic partnership was formed, focused on mutual growth and adaptation. This collaboration was crucial in ensuring the timely deployment of the LHC detectors in 2009. From the outset, both parties worked closely to evolve WinCC OA to meet the unique demands of the LHC.

Collaboration examples 

The first contract for WinCC OA (then known as PVSS2) was signed in 1999, initiating work on scaling the product to meet CERN’s unprecedented requirements. One key area of collaboration was the development of a new UI manager based on Qt, funded by CERN, ensuring compatibility across Linux and Windows while enhancing customisation options. This partnership was vital for the product’s evolution.

Another significant collaboration focused on the archiving system of WinCC OA. CERN required a system capable of storing data from large distributed systems in a central, high-performance database. Over the years, this system evolved through numerous workshops and large-scale tests, ultimately resulting in a substantial performance boost in the Oracle RDB archiver system, delivered on time for the LHC’s launch.

Sponsorship of the CERN openlab project in 2009 by ETM (ETM professional control, a Siemens company) furthered this collaboration, leading to the development of the Next Generation Archiver. This new feature, co-designed with CERN, became a cornerstone of WinCC OA, offering modularity, extendability and support for multiple database technologies. This flexibility allowed CERN to integrate the system into the “O2” physics data flow for the ALICE experiment, providing crucial data for analyses. Ongoing collaboration focuses on advancing the NextGen Archiver’s performance, with promising developments like the TimeScaleDB backend.

CERN’s input has also led to numerous enhancements in WinCC OA, such as improvements to the alarm-summarising engine and the modernisation of the CTRL scripting language. Additionally, the TSPP extension of the S7+ driver was implemented, maximising throughput and enabling precise time-stamped events.

CERN’s innovations, like the WebView widget, have influenced the product’s development, allowing the integration of web technologies within WinCC OA panels. The ongoing collaboration between CERN and ETM is set to continue, with plans to explore web-based interfaces, alternative scripting languages and container orchestration.

Widespread adoption and homogeneity 

The success of WinCC OA in managing LHC detectors resulted in its adoption across other CERN systems, including cryogenics, electricity distribution and ventilation. Over time, WinCC OA became the standard SCADA solution at CERN, supporting more than 850 mission-critical applications across its experiments and infrastructure. These applications range from small systems to vast control systems managing millions of hardware IO channels across multiple computers, demonstrating WinCC OA’s scalability and adaptability.

CERN’s development of frameworks like JCOP and UNICOS, based on WinCC OA, has enabled the integration of diverse systems into a vast, homogeneous control environment. These frameworks, centrally maintained by CERN, provide guidelines, conventions and tools for engineering complex control systems, reducing redundancy and maximising the reuse of commonly maintained technologies. This approach has proven efficient, minimising development and maintenance costs while ensuring the integrity of a critical software project despite personnel turnover. The open sourcing of the JCOP and UNICOS frameworks has further strengthened this model, offering a blueprint for other large, complex projects.

A blueprint for future collaborations 

WinCC OA’s adoption is growing beyond CERN’s LHC, with other laboratories and experiments, such as GSI and the Neutrino Platform, choosing it as their SCADA solution. Looking ahead, CERN may use WinCC OA for the Future Circular Collider (FCC) project, with feasibility studies already underway. The ongoing CERN–ETM partnership demonstrates the power of collaboration in driving technological innovation. By working together, CERN and ETM have not only met the extraordinary demands of the LHC but also continuously evolved WinCC OA to support CERN’s mission-critical applications.

This partnership serves as a model for organisations aiming to implement large-scale, complex systems, underscoring the importance of selecting the right technology and the right partners committed to a shared vision of success.

“We congratulate CERN on 70 years of excellence in particle-physics research and are proud to partner with such an extraordinary organisation. This collaboration continually inspires us to maximise our capabilities and redefine technological boundaries,” Bernhard Reichl, CEO ETM professional control, a Siemens Company.

The post 24 years of CERN and WinCC OA: the success story of a groundbreaking technological partnership appeared first on CERN Courier.

]]>
Advertising feature https://cerncourier.com/wp-content/uploads/2024/09/CCSepOct24_Advertorial_EMT.jpg
High-voltage pulse stability measurement of klystron modulators https://cerncourier.com/a/high-voltage-pulse-stability-measurement-of-klystron-modulators/ Wed, 18 Sep 2024 09:30:56 +0000 https://preview-courier.web.cern.ch/?p=111276 Klystron modulators are key elements in free electron lasers. They provide high-voltage pulses to bias klystron tubes with energies of several hundred joules. Amplitude variations directly affect the gain and phase of amplified RF pulses and therefore the accelerating fields created by RF cavities. A huge effort is put into minimising these variations with both […]

The post High-voltage pulse stability measurement of klystron modulators appeared first on CERN Courier.

]]>
Klystron modulators are key elements in free electron lasers. They provide high-voltage pulses to bias klystron tubes with energies of several hundred joules. Amplitude variations directly affect the gain and phase of amplified RF pulses and therefore the accelerating fields created by RF cavities. A huge effort is put into minimising these variations with both klystron modulators and RF pulse regulation.

For machines such as the SwissFEL (Swiss Free Electron Laser), the required HV pulse stability is 15 ppm (parts per million). Stability is calculated from measurements of 100 consecutive pulses taken at a repetition rate of 100 Hz, as the relative standard deviation of the gated averages with respect to the mean pulse amplitude. The measurement gate is located around the maximum plateau of the pulse, the so-called flat-top region, during which the RF pulse is fired.
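As a sketch of this statistic, the relative standard deviation of the gated flat-top averages can be computed as below. The sample data and gate indices are purely illustrative, not taken from the PSI set-up.

```python
import numpy as np

def pulse_stability_ppm(pulses, gate_start, gate_stop):
    """Relative standard deviation (in ppm) of gated flat-top averages.

    pulses: 2D array, one row per recorded pulse (e.g. 100 pulses at 100 Hz).
    gate_start/gate_stop: sample indices bounding the flat-top gate.
    """
    gated = pulses[:, gate_start:gate_stop].mean(axis=1)  # one average per pulse
    return 1e6 * gated.std(ddof=1) / gated.mean()

# Illustrative example: 100 simulated rectangular pulses with ~10 ppm amplitude jitter
rng = np.random.default_rng(0)
amplitudes = 1.0 + 10e-6 * rng.standard_normal(100)
pulses = amplitudes[:, None] * np.ones((100, 500))
print(pulse_stability_ppm(pulses, 200, 300))  # of order 10 ppm
```

In this form the gate width directly trades averaging (noise suppression) against sensitivity to the flat-top region alone.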

Waveforms after automatic CVD offset adjustment

A common technique for measuring such small variations involves pulse offsetting and magnification of the flat-top region in order to achieve a sufficient quantisation resolution. However, signal conditioning requires low-noise analogue electronics in the form of summing amplifiers and clippers with sufficient bandwidth and settling time. Such a set-up has so far involved the use of an external differential amplifier for signal conditioning and a high-end scope with statistical analysis functionality. The resolution of this set-up makes it possible to measure stability down to around 7 ppm, and it is mounted on a trolley so that it can be shared between RF stations.

What started as an apprentice project aimed to consolidate such a bulky and extensive set-up into an embedded unit that could be integrated into any pulse modulator cabinet, allowing permanent live monitoring of pulse stability. As a versatile data-acquisition system with open-source firmware and software and a small size, the Red Pitaya device is a perfect fit for this application. Figure 1 shows the block diagram of how a Red Pitaya STEMlab 125-14 4-input board, connected to a signal conditioning board developed at PSI, is used to measure the pulse stability of klystron modulators.

Pulse Measurement Unit

Pulse current and voltage are measured simultaneously, while only the voltage signal is used for the stability statistics. The required pulse offset voltage is automatically set by a precision 16-bit DAC before the statistics are calculated. There is a gain factor of 20 (26 dB) between the full-range pulse voltage on channel 3 and the flat-top voltage on channel 4, giving a theoretical increase in resolution of 4.3 bits. In principle, this gain could be increased further to give an even higher resolution, but in practice the pulse is not purely rectangular: droop and non-flatness give the flat-top a residual dynamic range of its own, which limits how far it can be magnified. Figure 2 shows how real waveforms might look in operation. The yellow trace shows the pulse current, while the red and blue traces show the full-range and magnified flat-top pulse voltages, respectively.
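The quoted figures follow directly from the analogue gain: each doubling of the signal ahead of the ADC is worth one extra bit of quantisation resolution. A quick standalone check of the arithmetic (not part of the PSI software):

```python
import math

gain = 20                        # analogue gain between channel 3 and channel 4
gain_db = 20 * math.log10(gain)  # voltage gain expressed in dB, approx. 26 dB
extra_bits = math.log2(gain)     # added quantisation resolution, approx. 4.3 bits
print(f"{gain_db:.1f} dB, {extra_bits:.2f} bits")
```

The same relation shows why pushing the gain much higher pays off slowly: quadrupling it to 80 would add only two more bits while quadrupling the demands on the signal-conditioning stage.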

The set-up presented here was able to measure pulse stability of 7–8 ppm in operation, with a resolution limit of 5–6 ppm at a 1 µs gate length and 67% of ADC full scale.

The software running on the Red Pitaya is built around the standard C API and includes the OPC-UA stack from open62541.org to allow communication and data transfer via a server–client approach. Integration into our control-system environment (EPICS) is ongoing.

The complete assembly is called a Pulse Measurement Unit (PMU), and it offers many additional features such as the regulation of a high-voltage charging power supply, interfacing with opto-isolated IOs and a low-jitter PLL in order to lock external synchronisation frequencies to generate a synchronised ADC clock. With an overall size of 160 × 100 mm, the unit fits easily in a Eurocard rack or can be mounted on a DIN rail, as shown in Figure 3.

Velika pot 21, 5250
Solkan, Slovenia
Tel +386 30 322 719
E-mail nicu.irimia@redpitaya.com
www.redpitaya.com

The post High-voltage pulse stability measurement of klystron modulators appeared first on CERN Courier.

]]>
Advertising feature https://cerncourier.com/wp-content/uploads/2024/09/CCSepOct24_Advertorial_redpitya1.jpg
Voices from a new generation https://cerncourier.com/a/voices-from-a-new-generation/ Mon, 16 Sep 2024 14:22:18 +0000 https://preview-courier.web.cern.ch/?p=111056 Early-career researchers tell the Courier what they think is the key strategic issue for the future of high-energy physics.

The post Voices from a new generation appeared first on CERN Courier.

]]>
Seventy years of CERN

In January 1962, CERN was for the first time moving from machine construction to scientific research with the machines. Director-General Victor Weisskopf took up the pen in the first CERN Courier after a brief hiatus. “This institution is remarkable in two ways,” he wrote. “It is a place where the most fantastic experiments are carried out. It is a place where international co-operation actually exists.”

A new generation of early-career researchers (ECRs) shares his convictions. Now, as then, they do much of the heavy lifting that builds the future of the field. Now, as then, they need resilience and vision. As Weisskopf wrote in these pages, the everyday work of high-energy physics (HEP) can hide its real importance – its romantic glory, as the renowned theorist put it. “All our work is for an idealistic aim, for pure science without commercial or any other interests. Our effort is a symbol of what science really means.”

As CERN turns 70, the Courier now hands the pen to the field’s next generation of leaders. All are new post-docs. Each has already made a tangible contribution and earned recognition from their colleagues. All, in short, are among the most recent winners of the four big LHC collaborations’ thesis prizes. Each was offered carte blanche to write about a subject of their choosing, which they believe will be strategically crucial to the future of the field. Almost all responded. These are their viewpoints.

Invest in accelerator innovation

Nicole Hartman

I come from Dallas, Texas, so the Superconducting Super Collider should have been in my backyard as I was growing up. By the late 1990s, its 87 km ring could have delivered 20 TeV per proton beam. The Future Circular Collider could deliver 50 TeV per proton beam in a 91 km ring by the 2070s. I’d be retired before first collisions. Clearly, we need an intermediate-term project to keep expertise in our community. Among the options proposed so far, I’m most excited by linear electron–positron colliders, as they would offer sufficient energy to study the Higgs self-coupling via di-Higgs production. This could be decisive in understanding electroweak symmetry breaking and unveiling possible Higgs portals.

A paradigm shift for accelerators might achieve our physics goals without a collider’s cost scaling with its energy. A strong investment in collider R&D could therefore offer hope for my generation of scientists to push back the energy frontier. Muon colliders avoid synchrotron radiation. Plasma wakefields offer a 100-fold increase in electric field gradient. Though both represent enormous challenges, psychologists have noted an “end of history” phenomenon, whereby as humans we appreciate how much we have changed in the past, but underestimate how much we will change in the future. Reflecting on past physics breakthroughs galvanises me to optimism: unlocking the next chapter of physics has always been within the reach of technological innovation. CERN has been a mecca for accelerator applications in the last 70 years. I’d argue that a strong increase in support for novel collider R&D is the best way to carry this legacy forwards.

Nicole Hartman is a post-doc at the Technical University of Munich and Origins Data Science Lab. She was awarded a PhD by Stanford University for her thesis “A search for non-resonant HH → 4b at √s = 13 TeV with the ATLAS detector – or – 2b, and then another 2b… now that’s the thesis question”.

Reward technical work with career opportunities

Alessandro Scarabotto

This job is a passion and a privilege, and ECRs devote nights and weekends to our research. But this energy should be handled in a more productive way. In particular, technical work on hardware and software is not valued and rewarded as it should be. ECRs who focus on technical aspects are often forced to divide their focus with theoretical work and data analysis, or suffer reduced opportunities to pursue an academic career. Is this correct? Why shouldn’t technical and scientific work be valued in the same way?

I am very hopeful for the future. In recent years, I have seen improvements in this direction, with many supervisors increasingly pushing their students towards technical work. I expect senior leadership to make organisational adjustments to reward and value these two aspects of research in exactly the same way. This cultural shift would greatly benefit our physics community by more efficiently transforming the enthusiasm and hard work of ECRs into skilled contributions to the field that are sustained over the decades.

Alessandro Scarabotto is a postdoctoral researcher at Technische Universität Dortmund. He was awarded a PhD by Sorbonne Université, Paris, for his thesis “Search for rare four-body charm decays with electrons in the final state and long track reconstruction for the LHCb trigger”.

A revolving door to industry

Christopher Brown

Big companies’ energy usage is currently skyrocketing to fuel their artificial intelligence (AI) systems. There is a clear business adaptation of my research on fast, energy-saving AI triggers, but I feel completely unable to make this happen. Why, as a field, are we unable to transfer our research to industry in an effective way?

While there are obvious milestones for taking data to publication, there is no equivalent for starting a business or getting our research into major industry players. Our collaborations are incubators for ideas and people. They should implement dedicated strategies to help ECRs obtain the funding, professional connections and business skills they need to get their ideas into the wider world. We should be presenting at industry conferences – both to offer solutions to industry and to obtain them for our own research – and industry sessions within our own conferences could bring links to every part of our field.

Most importantly, the field should encourage a revolving door between academia and industry to optimise the transfer of knowledge and skills. Unfortunately, when physicists leave for industry, slow, single-track physics career progressions and our focus on publication count rather than skills make a return unrealistic. There also needs to be a way of attracting talent from industry into physics without the requirement of a PhD so that experienced people can start or return to research in high-profile positions suitable for their level of work and life experience.

Christopher Brown is a CERN fellow working on next-generation triggers. He was awarded a PhD by Imperial College London for his thesis “Fast machine learning in the CMS Level-1 trigger for the High-Luminosity LHC”.

Collaboration, retention and support

Prajita Bhattarai

I feel a strong sense of agency regarding the future of our field. The upcoming High-Luminosity LHC (HL-LHC) will provide a wealth of data beyond what the LHC has offered, and we should be extremely excited about the increased discovery potential. Looking further ahead, I share the vision of a future Higgs factory as the next logical step for the field. The proposed Future Circular Collider is currently the most feasible option. However, the high cost and evolving geopolitical landscape are causes for concern. One of the greatest challenges we face is retaining talent and expertise. In the US, it has become increasingly difficult for researchers to find permanent positions after completing postdocs, leading to a loss of valuable technical and operational expertise. On a positive note, our field has made significant strides in providing opportunities for students from underrepresented nationalities and socioeconomic backgrounds – I am a beneficiary of these efforts. Still, I believe we should intensify our focus on supporting individuals as they transition through different career stages to ensure a vibrant and diverse future workforce.

Prajita Bhattarai is a research associate at SLAC National Accelerator Laboratory in the US. She was awarded her PhD by Brandeis University in the US for her thesis “Standard Model electroweak precision measurements with two Z bosons and two jets in ATLAS”.

Redesign collaborations for equitable opportunity

Spandan Mondal

Particle physics and cosmology capture the attention of nearly every inquisitive child. Though large collaborations and expensive machines have produced some of humankind’s most spectacular achievements, they have also made the field inaccessible to many young students. Making a meaningful contribution is contingent upon being associated with an institution or university that is a member of an experimental collaboration. One typically also has to study in a country that has a cooperation agreement with an international organisation like CERN.

If future experiments want to attract diverse talent, they should consider new collaborative models that allow participation irrespective of a person’s institution or country of origin. Scientific and financial responsibilities could be defined based on expertise and the research grants of individual research groups. Remote operations centres across the globe, such as those trialled by CERN experiments, could enable participants to fulfil their responsibilities without being constrained by international borders and travel budgets; the worldwide revolution in connectivity infrastructure could provide an opportunity to make this the norm rather than the exception. These measures could provide equitable opportunities to everyone while simultaneously maximising the scientific output of our field.

Spandan Mondal is a postdoctoral fellow at Brown University in the US. He was awarded a PhD by RWTH Aachen in Germany for his thesis on the CMS experiment “Charming decays of the Higgs, Z, and W bosons: development and deployment of a new calibration method for charm jet identification”.

Reward risk taking

Francesca Ercolessi

Young scientists often navigate complex career paths, where the pressure to produce consistent publishable results can stifle creativity and discourage risk taking. Traditionally, young researchers are evaluated almost solely on achieved results, often leading to a culture of risk aversion. To foster a culture of innovation we must shift our approach to research and evaluation. To encourage bold and innovative thinking among ECRs, the fuel of scientific progress, we need to broaden our definition of success. European funding and grants have made strides in recognising innovative ideas, but more is needed. Mentorship and peer-review systems must also evolve, creating an environment open to innovative thinking, with a calculated approach to risk, guided by experienced scientists. Concrete actions include establishing mentorship programmes during scientific events, such as workshops and conferences. To maximise the impact, these programmes should prioritise diversity among mentors and mentees, ensuring that a wide range of perspectives and experiences are shared. Equally important is recognising and rewarding innovation. This can be achieved by dedicated awards that value originality and potential impact over guaranteed success. Celebrating attempts, even failed ones, can shift the focus from the outcome to the process of discovery, inspiring a new generation of scientists to push the boundaries of knowledge.

Francesca Ercolessi is a post-doc at the University of Bologna. She was awarded a PhD by the University of Bologna for her thesis “The interplay of multiplicity and effective energy for (multi) strange hadron production in pp collisions at the LHC”.

Our employment model stifles creativity

Florian Jonas

ECR colleagues are deeply passionate about the science they do and wish to pursue a career in our field – “if possible”. Is there anything one can do to better support this new generation of physicists? In my opinion, we have to address the scarcity of permanent positions in our field. Short-term contracts lead to risk aversion, and short-term projects with a high chance of publication increase your employment prospects. This is in direct contrast to what is needed to successfully complete ambitious future projects this century – projects that require innovation and out-of-the-box thinking by bright young minds.

In addition, employment in fundamental science is more than ever in direct competition with permanent jobs in industry. For example, machine learning and computing experts innovate our field with novel analysis techniques, but end up ultimately leaving our field to apply their skills in permanent employment elsewhere. If we want to keep talent in our field we must create a funding structure that allows realistic prospects for long-term employment and commitment to future projects.

Florian Jonas is a postdoctoral scholar at UC Berkeley and LBNL. He was awarded a PhD by the University of Münster for his thesis on the ALICE experiment “Probing the initial state of heavy-ion collisions with isolated prompt photons”.

Embrace private expertise and investment

Jona Motta

The two great challenges of our time are data taking and data analysis. Rare processes like the production of Higgs-boson pairs have cross sections 10 orders of magnitude smaller than their backgrounds – and during HL-LHC operation the CMS trigger will have to analyse about 50 TB/s and take decisions with a latency of 12.5 μs. In recent years, we have made big steps forward with machine learning, but our techniques are not always up to speed with the current state-of-the-art in the private sector.

To sustain and accelerate our progress, the HEP community must be more open to new sources of funding, particularly from private investments. Collaborations with tech companies and private investors can provide not only financial support but also access to advanced technologies and expertise. Encouraging CERN–private partnerships can lead to the development of innovative tools and infrastructure, driving the field forward.

The recent establishment of the Next Generation Trigger Project, funded by the Eric and Wendy Schmidt Fund for Strategic Innovation, represents the first step toward this kind of collaboration. Thanks to overlapping R&D interests, this could be scaled up to direct partnerships with companies to introduce large and sustained streams of funds. This would not only push the boundaries of our knowledge but also inspire and support the next generation of physicists, opening new tenured positions thanks to private funding.

Jona Motta is a post-doc at Universität Zürich. He was awarded a PhD by Institut Polytechnique de Paris for his thesis “Development of machine learning based τ trigger algorithms and search for Higgs boson pair production in the bbττ decay channel with the CMS detector at the LHC”.

Stability would stop the brain drain

Hassnae El Jarrari

The proposed Future Circular Collider presents a formidable challenge. Every aspect of its design, construction, commissioning and operations would require extensive R&D to achieve the needed performance and stability, and fully exploit the machine’s potential. The vast experience acquired at the LHC will play a significant role. Knowledge must be preserved and transmitted between generations. But the loss of expertise is already a significant problem at the LHC.

The main reason for young scientists to leave the field is the lack of institutional support: it’s hard to count on a stable working environment, regardless of our expertise and performance. The difficulty in finding permanent academic or research positions and the lack of recognition and advancement are all viewed as serious obstacles to pursuing a career in HEP. In these conditions, a young physicist might find competitive sectors such as industry or finance more appealing given the highly stable future they offer.

It is crucial to address this problem now for the HL-LHC. Large HEP collaborations should be more supportive to ensure better recognition and career advancement towards permanent positions. This kind of policy could help to retain young physicists and ensure they continue to be involved in the current HEP projects that would then define the success of the FCC.

Hassnae El Jarrari is a CERN research fellow in experimental physics. She was awarded a PhD by Université Mohammed-V De Rabat for her thesis “Dark photon searches from Higgs boson and heavy boson decays using pp collisions recorded at √s = 13 TeV with the ATLAS detector at the LHC and performance evaluation of the low gain avalanche detectors for the HL-LHC ATLAS high-granularity timing detector”.

Reduce environmental impacts

Luca Quaglia

The main challenge for the future of large-scale HEP experiments is reducing our environmental impact, and raising awareness is key to this. For example, before running a job, the ALICE computing grid provides an estimate of its CO2-equivalent carbon footprint, to encourage code optimisation and save power.

I believe that if we want to thrive in the future, we should adopt a new way of doing physics where we think critically about the environment. We should participate in more collaboration meetings and conferences remotely, and promote local conferences that are reachable by train.

I’m not saying that we should ban air travel tout court. It’s especially important for early-career scientists to get their name out there and to establish connections. But by attending just one major international conference in person every two years, and publicising alternative means of communication, we can save resources and travel time, which can be invested in our home institutions. This would also enable scientists from smaller groups with reduced travel budgets to attend more conferences and disseminate their findings.

Luca Quaglia is a postdoctoral fellow at the Istituto Nazionale di Fisica Nucleare, Sezione di Torino. He was awarded his PhD by the University of Torino for his thesis “Development of eco-friendly gas mixtures for resistive plate chambers”.

Invest in software and computing talent

Joshua Beirer

With both computing and human resources in short supply, funds must be invested wisely. While scaling up infrastructure is critical and often seems like the simplest remedy, the human factor is often overlooked. Innovative ideas and efficient software solutions require investment in training and the recruitment of skilled researchers.

This investment must start with a stronger integration of software education into physics degrees. As the boundaries between physics and computer science blur, universities must provide a solid foundation, raise awareness of the importance of software in HEP and physics in general, and promote best practices to equip the next generation for the challenges of the future. Continuous learning must be actively supported, and young researchers must be provided with sufficient resources and appropriate mentoring from experienced colleagues.

Software skills remain in high demand in industry, where financial incentives and better prospects often attract skilled people from academia. It is in the interest of the community to retain top talent by creating more attractive and secure career paths. After all, a continuous drain of talent and knowledge is detrimental to the field, hinders the development of efficient software and computing solutions, and is likely to prove more costly in the long run.

Joshua Beirer is a CERN research fellow in the offline software group of the ATLAS experiment and part of the lab’s strategic R&D programme on technologies for future experiments. He was awarded his PhD by the University of Göttingen for his thesis “Novel approaches to the fast simulation of the ATLAS calorimeter and performance studies of track-assisted reclustered jets for searches for resonant X → SH → bbWW* production with the ATLAS detector”.

Strengthen international science

Ezra D. Lesser

HEP is at an exciting yet critical inflection point. The coming years hold both unparalleled opportunities and growing challenges, including an expanding arena of international competition and the persistent issue of funding and resource allocation. In a swiftly evolving digital age, scientists must rededicate themselves to public service, engagement and education, informing diverse communities about the possible technological advancements of HEP research, and sharing with the world the excitement of discovering fundamental knowledge of the universe. Collaborations must be strengthened across international borders and political lines, pooling resources from multiple countries to traverse cultural gaps and open the doors of scientific diplomacy. With ever-increasing expenses and an uncertain political future, scientists must insist upon the importance of public research irrespective of any national agenda, and reinforce scientific veracity in a rapidly evolving world that is challenged by growing misinformation. Most importantly, the community must establish global priorities in a maturing age of precision, elevating not only new discoveries but the necessary scientific repetition to better understand what we discover.

The most difficult issues facing HEP research today are addressable and furthermore offer excellent opportunities to develop the scientific approach for the next several decades. By tackling these issues now, scientists can continue to focus on the mysteries of the universe, driving scientific and technological advancements for the betterment of all.

Ezra D. Lesser is a CERN research fellow working with the LHCb collaboration. He was awarded his PhD in physics by the University of California, Berkeley for his thesis “Measurements of jet substructure in pp and Pb–Pb collisions at √sNN = 5.02 TeV with ALICE”.

Recognise R&D

Savannah Clawson

ECRs must drive the field’s direction by engaging in prospect studies for future experiments, but dedicating time to this essential work comes at the expense of analysing existing data – a trade-off that can jeopardise our careers. With most ECRs employed on precarious two-to-four year contracts, time spent on these studies can result in fewer high-profile publications, making it harder to secure our next academic position. Another important factor is the unprecedented timescales associated with many proposed future facilities. Those working on R&D today may never see the fruits of their labour.

Anxieties surrounding these issues are often misinterpreted as disengagement, but nothing could be further from the truth. In my experience, ECRs are passionate about research, bringing fresh perspectives and ideas that are crucial for advancing the field. However, we often struggle with institutional structures that fail to recognise the breadth of our contributions. By addressing longstanding issues surrounding attitudes toward work–life balance and long-term job stability – through measures such as establishing enforced minimum contract durations, as well as providing more transparent and diverse sets of criteria for transitioning to permanent positions – we can create a more supportive environment where HEP thrives, driven by the creativity and innovation of its next generation of leaders.

Savannah Clawson is a postdoctoral fellow at DESY Hamburg. She was awarded her PhD by the University of Manchester for her thesis “The light at the end of the tunnel gets weaker: observation and measurement of photon-induced W+W− production at the ATLAS experiment”.

The post Voices from a new generation appeared first on CERN Courier.

]]>
Feature Early-career researchers tell the Courier what they think is the key strategic issue for the future of high-energy physics. https://cerncourier.com/wp-content/uploads/2024/09/CCSepOct24_ECR_frontis.jpg
Steering the ship of member states https://cerncourier.com/a/steering-the-ship-of-member-states/ Mon, 16 Sep 2024 14:20:34 +0000 https://preview-courier.web.cern.ch/?p=111171 An interview with Eliezer Rabinovici, the president of the CERN Council.

The post Steering the ship of member states appeared first on CERN Courier.

]]>
CERN turns 70 at the end of September. How would you sum up the contribution the laboratory has made to human culture over the past seven decades?

CERN’s experimental and theoretical research laid many of the building blocks of one of the most successful and impactful scientific theories in human history: the Standard Model of particle physics. Its contributions go beyond the best-known discoveries, such as those of neutral currents and the seemingly fundamental W, Z and Higgs bosons, which have such far-reaching significance for our universe. I also wish to draw attention to the many dozens of new composite particles discovered at the LHC and the incredibly high-precision agreement between theoretical calculations performed in quantum chromodynamics and the experimental results obtained at the LHC. These amazing discoveries were made possible thanks to the many technological innovations made at CERN.

But knowledge creation and accumulation are only half the story. CERN’s human ecosystem is an oasis in which the words “collaboration among peoples for the good of humanity” can be uttered without grandstanding or hypocrisy.

What role does the CERN Council play?

CERN’s member states are each represented by two delegates to the CERN Council. Decisions are made democratically, with equal voting power for each national delegation. According to the convention approved in 1954, and last revised in 1971, Council determines scientific, technical and administrative policy, approves CERN’s programmes of activities, reviews its expenditures and approves the laboratory’s budget. The Director-General and her management team work closely with Council to develop the Organization’s policies, scientific activities and budget; Fabiola Gianotti and her team are now collaborating with Council to forge CERN’s future scientific vision.

What’s your vision for CERN’s future?

As CERN Council president, I have a responsibility to be neutral and reflect the collective will of the member states. In early 2022, when I took up the presidency, Council delegates unanimously endorsed my evaluation of their vision: that CERN should continue to offer the world’s best experimental high-energy physics programme using the best technology possible. CERN now needs to successfully complete the High-Luminosity LHC (HL-LHC) project and agree on a future flagship project.

I strongly believe the format of the future flagship project needs to crystallise as soon as possible. As put to me recently in a letter from the ECFA early-career researchers panel: “While the HL-LHC constitutes a much-anticipated and necessary advance in the LHC programme, a clear path beyond it for our future in the field must be cemented with as little delay as possible.” It can be daunting for young people to speak out on strategy and the future of the field, given the career insecurities they face. I am very encouraged by their willingness to put out a statement calling for immediate action.

At its March 2024 session, Council agreed to ignite the process of selecting the next flagship project by going ahead with the fourth European Strategy for Particle Physics update. The strategy group are charged, among other things, with recommending what this flagship project should be to Council. As I laid down the gavel concluding the meeting I looked around and sensed genuine excitement in the Chambers – that of a passenger ship leaving port. Each passenger has their own vision for the future. Each is looking forward to seeing what the final destination will look like. Several big pieces had started falling into place, allowing us to turn on the engine.

What are these big pieces?

Acting upon the recommendation of the 2020 update of the European Strategy for Particle Physics, CERN in 2021 launched a technical and financial feasibility study for a Future Circular Collider (FCC) operating first as a Higgs, electroweak and top factory, with an eye to succeeding it with a high-energy proton–proton collider. The report will include the physics motivation, technological and geological feasibility, territorial implementation, financial aspects, and the environmental and sustainability challenges that are deeply important to CERN’s member states and the diverse communes of our host countries.

Fabiola Gianotti and Eliezer Rabinovici at CERN Council

It is also important to add that CERN has also invested, and continues to invest, in R&D for alternatives to FCC such as CLIC and the muon collider. CLIC is a mature design, developed over decades, which has already precipitated numerous impactful societal applications in industry and medicine; and to the best of my knowledge, at present no laboratory has invested as much as CERN in muon-collider R&D.

A mid-term report of FCC’s feasibility study was submitted to bodies subordinate to the CERN management in mid-2023, and their resulting reports were presented to CERN’s finance and scientific-policy committees. Council received the outcomes with great appreciation for the work involved during an extraordinary session on 2 February, and looks forward to the completion of the feasibility study in March 2025. Timing the European strategy update to follow hot on its heels and use it as an input was the natural next step.

At the June Council session, we started dealing with the nitty gritty of the process. A secretariat for the European Strategy Group was established under the chairmanship of Karl Jakobs, and committees are being appointed. By January 2026 the Council could have at its disposal a large part of the knowledge needed to chart the future of the CERN vision.

How would you encourage early-career researchers (ECRs) to engage with the strategy process?

ECRs have a central role to play. One of the biggest challenges when attempting to build a major novel research infrastructure such as the proposed FCC – which I sometimes think of as a frontier circular collider – is to maintain high-quality expertise, enthusiasm and optimism for long periods in the face of what seem like insurmountable hurdles. Historically, the physicists who brought a new machine to fruition knew that they would get a chance to work on the data it produced or at least have a claim for credit for their efforts. This is not the case now. Success rests on the enthusiasm of those who are at the beginning of their careers today just as much as senior researchers. I hope ECRs will rise to the challenge and find ways to participate in the coming European Strategy Group-sponsored deliberations and become future leaders of the field. One way to engage is to participate in ECR-only strategy sessions like those held at the yearly FCC weeks. I’d also encourage other countries to join the UK in organising nationwide ECR-only forums for debating the future of the field, such as I initiated in Birmingham in 2022.

What’s the outlook for collaboration and competition between CERN and other regions on the future collider programme?

Over decades, CERN has managed to place itself as the leading example of true international scientific collaboration. For example, by far the largest national contingent of CERN users hails from the US. Estonia has completed the process of joining CERN as a new member state and Brazil has just become the first associate member state in the Americas. There is a global agreement among scientists in China, Europe, Japan and the US that the next collider should be an electron–positron Higgs factory, able to study the properties of the Higgs boson with high precision. I hope that – patiently, and step by step – ever more global integration will form.

Do member states receive a strong return on their investment in CERN?

Research suggests that fundamental exploration actively stimulates the economy, and more than pays for itself. Member states and associate member states have steadfastly supported CERN to the tune of CHF 53 billion (unadjusted for inflation) since 1954. They do this because their citizens take pride that their nation stands with fellow member states at the forefront of scientific excellence in the fundamental exploration of our universe. They also do this because they know that scientific excellence stimulates their economies through industrial innovation and the waves of highly skilled engineers, entrepreneurs and scientists who return home trained, inspired and better connected after interacting with CERN.

A bipartisan US report from 2005 called “Rising above the gathering storm” offered particular clarity, in my opinion. It asserted that investments in science and technology benefit the world’s economy, and it noted both the abruptness with which a lead in science and technology can be lost and the difficulty of recovering such a lead. One should not be shy to say that when CERN was established in 1954, it was part of a rather crowded third place in the field of experimental particle physics, with the Soviet Union and the United States at the fore. In 2024, CERN is the leader of the field – and with leadership comes a heavy responsibility to chart a path beneficial to a large community across the whole planet. As CERN Council president, I thank member states for their steadfast support and I applaud them for their economic and scientific foresight over the past seven decades. I hope it will persist long into the 21st century.

Is there a role for private funding for fundamental research?

In Europe, substantial private-sector support for knowledge creation and creativity dates back at least to the Medici. Though it is arguably less emphasised in our times, it plays an important role today in the US, the UK and Israel. Academic freedom is a sine qua non for worthwhile research. Within this limit, I don’t believe there is any serious controversy in Council on this matter. My sense is that Council fully supports the clear division between recognising generosity and keeping full academic and governance freedom.

What challenges has Council faced during your tenure as president?

In February 2022, the Russian Federation, an observer state, invaded Ukraine, which has been an associate member state since 2016. This was a situation with no precedent for Council. The shape of our decisions evolved for well over a year. Council members decided to cover from their own budgets the share of Ukraine’s contribution to CERN. Council also tried to address as much as possible the human issues resulting from the situation. It decided to suspend the observer status in the Council of the Russian Federation and the Joint Institute for Nuclear Research. Council also decided to not extend its International Collaboration Agreements with the Republic of Belarus and the Russian Federation. CERN departments also undertook initiatives to support the Ukrainian scientific community at CERN and in Ukraine.

A second major challenge was to mitigate the financial pressures being experienced around the world, such as inflation and rising costs for energy and materials. A package deal was agreed upon in Council that included significant contributions from the member states, a contribution from the CERN staff, and substantial savings from across CERN’s activities. So far, these measures seem to have addressed the issue.

I thank member states for their steadfast support and I applaud them for their economic and scientific foresight over the past seven decades

While these key challenges were tackled, management worked relentlessly on preparing an exhaustive FCC feasibility study, to ensure that CERN stays on course in developing its scientific and technological vision for the field of experimental high-energy physics.

The supportive reaction of Council to these challenges demonstrated its ability to stay on course during rough seas and strong side winds. This cohesion is very encouraging for me. Time and again, Council faced difficult decisions in recent years. Though convergence seemed difficult at first, thanks to a united will and the help of all Council members, a way forward emerged and decisions were taken. It’s important to bear in mind that no matter which flagship project CERN embarks on, it will be a project of another order of magnitude. Some of the methods that made the LHC such a success can continue to accompany us, some will need to evolve significantly, and some new ones will need to be created.

Has the ideal of Science for Peace been damaged?

Over the years CERN has developed the skills needed to construct bridges. CERN does not have much experience in dismantling bridges. This issue was very much on the mind of Council as it took its decisions.

Do you wish to make some unofficial personal remarks?

Thanks. Yes. I would like to mention several things I feel grateful for.

Nobody owes humanity a concise description of the laws of physics and the basic constituents of matter. I am grateful for being in an era where it seems possible, thanks to a large extent to the experiments performed at CERN. Scientists from innumerable countries, who can’t even form a consensus on the best 1970s rock band, have succeeded time and again to assemble the most sophisticated pieces of equipment, with each part built in a different country. And it works. I stand in awe in front of that.

The ecosystem of CERN, the experimental groups working at CERN and the CERN Council are how I dreamt as a child that the United Nations would work. The challenges facing humanity in the coming centuries are formidable. They require international collaboration among the best minds from all over the planet. CERN shows that this is possible. But it requires hard work to maintain this environment. Over the years serious challenges have presented themselves, and one should not take this situation for granted. We need to be vigilant to keep this precious space – the precious gift of CERN.

The post Steering the ship of member states appeared first on CERN Courier.

]]>
Opinion An interview with Eliezer Rabinovici, the president of the CERN Council. https://cerncourier.com/wp-content/uploads/2024/09/CCSepOct24_INT_Rabinovici.jpg
Look to the Higgs self-coupling https://cerncourier.com/a/look-to-the-higgs-self-coupling/ Mon, 16 Sep 2024 14:18:52 +0000 https://preview-courier.web.cern.ch/?p=111164 Matthew McCullough argues that beyond-the-Standard Model physics may be most strongly expressed in the Higgs self-coupling.

The post Look to the Higgs self-coupling appeared first on CERN Courier.

]]>
What are the microscopic origins of the Higgs boson? As long as we lack the short-wavelength probes needed to study its structure directly, our best tool to confront this question is to measure its interactions.

Let’s consider two with starkly contrasting experimental prospects. The coupling of the Higgs boson to two Z bosons (HZZ) has been measured with a precision of around 5%, improving to around 1.3% by the end of High-Luminosity LHC (HL-LHC) operations. The Higgs boson’s self-coupling (HHH) has so far only been measured with a precision of the order of several hundred percent, improving to around the 50% level by the end of HL-LHC operations – though it’s now rumoured that this latter estimate may be too pessimistic.

Good motives

As HZZ can be measured much more precisely than HHH, is it the more promising window beyond the Standard Model (SM)? An agnostic might say that both measurements are equally valuable, while a “top down” theorist might seek to judge which theories are well motivated, and ask how they modify the two couplings. In supersymmetry and minimal composite Higgs models, for example, modifications to HZZ and HHH are typically of a similar magnitude. But “well motivated” is a slippery notion and I don’t entirely trust it.

Fortunately there is a happy compromise between these perspectives, using the tool of choice of the informed agnostic: effective field theory. It’s really the same physical principle as trying to look within an object when your microscope operates on wavelengths greater than its physical extent. Just as the microscopic structure of an atom is imprinted, at low energies, in its multipolar (dipole, quadrupole and so forth) interactions with photons, so too would the microscopic structure of the Higgs boson leave its trace in modifications to its SM interactions.

All possible coupling modifications from microscopic new physics can be captured by effective field theory and organised into classes of “UV-completion”. UV-completions are the concrete microscopic scenarios that could exist. (Here, ultraviolet light is a metaphor for the short-wavelength probes needed to study the Higgs boson’s microscopic origins in detail.) Scenarios with similar patterns are said to live in the same universality class. Families of universality classes can be identified from the bottom up. A powerful tool for this is naïve dimensional analysis (NDA).

Matthew McCullough

One particularly sharp arrow in the NDA quiver is ℏ counting, which establishes how many couplings and/or ℏs must be present in the EFT modification of an interaction. Couplings tell you the number of fundamental interactions involved. ℏs establish the need for quantum effects. For instance, NDA tells us that the coefficient of the Fermi interaction must have two couplings, which the electroweak theory duly supplies – a W boson transforms a neutron into a proton, and then decays into an electron and a neutrino.
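For readers who want the counting made explicit, the tree-level matching of the Fermi constant onto the electroweak theory (a textbook result, not spelled out in the article) indeed carries two powers of the weak coupling g, one at each vertex of the W-boson exchange:

```latex
\frac{G_F}{\sqrt{2}} \;=\; \frac{g^{2}}{8\,M_W^{2}}
```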

For our purposes, NDA tells us that modifications to HZZ must necessarily involve one more ℏ or two fewer couplings than any underlying EFT interaction that modifies HHH. In the case of one more ℏ, modifications to HZZ could potentially be an entire quantum loop factor smaller than modifications to HHH. In the case of two fewer couplings, modifications to HHH could be as large as a factor g² greater than for HZZ, where g is a generic coupling. Either way, it is theoretically possible that the BSM modifications could be up to a couple of orders of magnitude greater for HHH than for HZZ. (Naively, a loop factor counts as around 1/16π² or about 0.01, and in the most strongly interacting scenarios, g² can rise to about 16π².)
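Putting rough numbers to these estimates (standard NDA magnitudes, stated here as an illustration):

```latex
16\pi^{2} \approx 158, \qquad \frac{1}{16\pi^{2}} \approx 6.3\times10^{-3},
```

so a ratio of coupling modifications as large as 16π², roughly two orders of magnitude, is the extreme allowed by this counting.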

Why does this contrast so strongly with supersymmetry and the minimal composite Higgs? They are simply in universality classes where modifications to HZZ and HHH are comparable in magnitude. But there are more universality classes in heaven and Earth than are dreamt of in our well-motivated scenarios.

Faced with the theoretical possibility of a large hierarchy in coupling modifications, it behoves the effective theorist to provide an existence proof of a concrete UV-completion where this happens, or we may have revealed a universality class of measure zero. But such an example exists: the custodial quadruplet model. I often say it’s a model that only a mother could love, but it could exist in nature, and gives rise to coupling modifications a full loop factor of about 200 greater for HHH than HZZ.

When confronted with theories beyond the SM, all Higgs couplings are not born equal: UV-completions matter. Though HZZ measurements are arguably the most powerful general probe, future measurements of HHH will explore new territory that is inaccessible to other coupling measurements. This territory is largely uncharted, exotic and beyond the best guesses of theorists. Not bad circumstances for the start of any adventure.

The post Look to the Higgs self-coupling appeared first on CERN Courier.

]]>
Opinion Matthew McCullough argues that beyond-the-Standard Model physics may be most strongly expressed in the Higgs self-coupling. https://cerncourier.com/wp-content/uploads/2024/09/CCSepOct24_VIEW_informed.jpg
Electroweak SUSY after LHC Run 2 https://cerncourier.com/a/electroweak-susy-after-lhc-run-2/ Mon, 16 Sep 2024 14:13:36 +0000 https://preview-courier.web.cern.ch/?p=110449 Supersymmetry (SUSY) provides elegant solutions to many of the problems of the Standard Model (SM) by introducing new boson/fermion partners for each SM fermion/boson, and by extending the Higgs sector. If SUSY is realised in nature at the TeV scale, it would accommodate a light Higgs boson without excessive fine-tuning. It could furthermore provide a […]

The post Electroweak SUSY after LHC Run 2 appeared first on CERN Courier.

]]>
ATLAS figure 1

Supersymmetry (SUSY) provides elegant solutions to many of the problems of the Standard Model (SM) by introducing new boson/fermion partners for each SM fermion/boson, and by extending the Higgs sector. If SUSY is realised in nature at the TeV scale, it would accommodate a light Higgs boson without excessive fine-tuning. It could furthermore provide a viable dark-matter candidate, and be a key ingredient to the unification of the electroweak and strong forces at high energy. The SUSY partners of the SM bosons can mix to form what are called charginos and neutralinos, collectively referred to as electroweakinos.

Electroweakinos would be produced only through the electroweak interaction, meaning their production cross sections in proton–proton collisions are orders of magnitude smaller than those of strongly produced squarks and gluinos (the supersymmetric partners of quarks and gluons). Therefore, while extensive searches using the Run 1 (7–8 TeV) and Run 2 (13 TeV) LHC datasets have turned up null results, the corresponding chargino/neutralino exclusion limits remain substantially weaker than those for strongly interacting SUSY particles.

The ATLAS collaboration has recently released a comprehensive analysis of the electroweak SUSY landscape based on its Run 2 searches. Each individual search targeted specific chargino/neutralino production mechanisms and subsequent decay modes. The analyses were originally interpreted in so-called “simplified models”, where only one production mechanism is considered, and only one possible decay. However, if SUSY is realised in nature, its particles will have many possible production and decay modes, with rates depending on the SUSY parameters. The new ATLAS analysis brings these pieces together by reinterpreting 10 searches in the phenomenological Minimal Supersymmetric Standard Model (pMSSM), which includes a range of SUSY particles, production mechanisms and decay modes governed by 19 SUSY parameters. The results provide a global picture of ATLAS’s sensitivity to electroweak SUSY and, importantly, reveal the gaps that remain to be explored.

ATLAS figure 2

The 19-dimensional pMSSM parameter space was randomly sampled to produce a set of 20,000 SUSY model points. The 10 selected ATLAS searches were then performed on each model point to determine whether it is excluded with at least 95% confidence level. This involved simulating datasets for each SUSY model, and re-running the corresponding analyses and statistical fits. An extensive suite of reinterpretation tools was employed to achieve this, including preserved likelihoods and RECAST – a framework for preserving analysis workflows and re-applying them to new signal models.
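The scan logic described above can be caricatured in a few lines of Python. This is a toy sketch only: the parameter bounds, the CLs threshold and the `run_searches` stand-in are assumptions made for illustration, whereas the real ATLAS workflow involves full event simulation, preserved likelihoods and RECAST.

```python
import random

N_PARAMS = 19   # number of pMSSM parameters
ALPHA = 0.05    # exclusion at (at least) 95% confidence level

def sample_point(rng, bounds=(-4000.0, 4000.0)):
    """Draw one random pMSSM-like point (flat priors, hypothetical bounds)."""
    return [rng.uniform(*bounds) for _ in range(N_PARAMS)]

def excluded(cls_per_search, alpha=ALPHA):
    """A model point is excluded if any individual search reports CLs < alpha."""
    return any(cls < alpha for cls in cls_per_search)

def scan(n_points, run_searches, seed=0):
    """Sample model points and return the fraction excluded by the searches.

    `run_searches` maps a parameter point to a list of CLs values, one per
    search; here it stands in for the expensive simulate-and-fit step.
    """
    rng = random.Random(seed)
    points = [sample_point(rng) for _ in range(n_points)]
    n_excluded = sum(excluded(run_searches(p)) for p in points)
    return n_excluded / n_points
```

The point of the sketch is the combination rule: each model point is kept or excluded according to the per-search CLs values, and only the surviving points define the remaining viable parameter space.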

The results show that, while electroweakino masses have been excluded up to 1 TeV in simplified models, the coverage with regard to the pMSSM is not exhaustive. Numerous scenarios remain viable, including mass regions nominally covered by previous searches (inside the dashed line in figure 1). The pMSSM models may evade detection due to smaller production cross-sections and decay probabilities compared to simplified models. Scenarios with small mass-splittings between the lightest and next-to-lightest neutralino can reproduce the dark-matter relic density, but are particularly elusive at the LHC. The decays in these models produce challenging event features with low-momentum particles that are difficult to reconstruct and separate from SM events.

Beyond ATLAS, experiments such as LZ aim to detect relic dark-matter particles through their scattering off target nuclei. This provides a complementary probe to ATLAS searches for dark matter produced in the LHC collisions. Figure 2 shows the LZ sensitivity to the pMSSM models considered by ATLAS, compared to the sensitivity of its SUSY searches. ATLAS is particularly sensitive to the region where the dark-matter candidate is around half the Z/Higgs-boson mass, causing enhanced dark-matter annihilation that could have reduced the otherwise overabundant dark-matter relic density to the observed value.

The new ATLAS results demonstrate the breadth and depth of its search programme for supersymmetry, while uncovering its gaps. Supersymmetry may still be hiding in the data, and several scenarios have been identified that will be targeted, benefiting from the incoming Run 3 data.

The post Electroweak SUSY after LHC Run 2 appeared first on CERN Courier.

]]>
Back to the future https://cerncourier.com/a/back-to-the-future/ Mon, 16 Sep 2024 14:11:58 +0000 https://preview-courier.web.cern.ch/?p=111034 A photographic journey linking CERN’s early years to ongoing research.

The post Back to the future appeared first on CERN Courier.

]]>
The past seven decades have seen remarkable cultural and technological changes. And CERN has been no passive observer. From modelling European cooperation in the aftermath of World War II to democratising information via the web and discovering a field that pervades the universe, CERN has nudged the zeitgeist more than once since its foundation in 1954.

It’s undeniable, though, that much has stayed the same. A high-energy physics lab still needs to be fast, cool, collaborative, precise, practically useful, deep, diplomatic, creative and crystal clear. Plus ça change, plus c’est la même chose.

This selection of (lightly colourised) snapshots from CERN’s first 25 years, accompanied by expert reflections from across the lab, shows how things have changed in the intervening years – and what has stayed the same.

1960

A 5 m diameter magnetic storage ring in 1960

The discovery that electrons and muons possess spin that precesses in a magnetic field has inspired generations of experimentalists and theorists to push the boundaries of precision. The key insight is that quantum effects modify the magnetic moment associated with the particles’ spins, making their gyromagnetic ratios (g) slightly larger than two, the value predicted by Dirac’s equation. For electrons, these quantum effects are primarily due to the electromagnetic force. For muons, the weak and strong forces also contribute measurably – as well, perhaps, as unknown forces. These measurements stand with the most beautiful and precise of all time, and their history is deeply intertwined with that of the Standard Model.

CERN physicists Francis Farley and Emilio Picasso were pioneers and driving forces behind the muon g–2 experimental programme. The second CERN experiment introduced the use of a 5 m diameter magnetic storage ring. Positive muons with 1.3 GeV momentum travelled around the ring until they decayed into positrons whose directions were correlated with the spin of the parent muons. The experiment tested the muon’s anomalous magnetic moment (g-2) with a precision of 270 parts per million. A brilliant concept, the “magic gamma”, was then introduced in the third CERN experiment in the late 1970s: by using muons at a momentum of 3.1 GeV, the effect of electric fields on the precession frequency cancelled out, eliminating a major source of systematic error. All subsequent experiments have relied on this principle, with the exception of an experiment using ultra-cold muons that is currently under construction in Japan. A friendly rivalry for precision between experimentalists and theorists continues today (Lattice calculations start to clarify muon g-2), with the latest measurement at Fermilab achieving a precision of 190 parts per billion.
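The “magic gamma” cancellation can be made explicit. In the standard textbook expression for the anomalous spin-precession frequency of a muon in combined magnetic and electric fields (this formula is not quoted in the article, but is the basis of the trick described):

```latex
\vec{\omega}_a = -\frac{q}{m}\left[\, a_\mu \vec{B}
  - \left( a_\mu - \frac{1}{\gamma^{2}-1} \right) \frac{\vec{\beta}\times\vec{E}}{c} \,\right],
\qquad a_\mu \equiv \frac{g-2}{2},
```

the electric-field term vanishes when $\gamma = \sqrt{1 + 1/a_\mu} \approx 29.3$ (using $a_\mu \approx 1.17\times10^{-3}$), corresponding to a muon momentum of about 3.09 GeV/c – the “3.1 GeV” quoted above.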

Andreas Hoecker is spokesperson for the ATLAS collaboration.

1961

Inspecting bubble-chamber images by hand

The excitement of discovering new fundamental particles and forces made the 1950s and 1960s a golden era for particle physicists. A lot of creative energy was channelled into making new particle detectors, such as the liquid-hydrogen (or heavier-liquid) bubble chambers that paved the way to discoveries such as neutral currents, and seminal studies of neutrinos and strange and charmed baryons. As particles pass through, they make the liquid boil, producing bubbles that are captured to form images. In 1961, each image had to be painstakingly inspected by hand, as depicted here, to determine the properties of each particle. Fortunately, in the decades since, physicists have found ways to preserve the level of detail they offer and build on this inspiration to prepare new technologies. Liquid-argon time-projection chambers such as CERN’s DUNE prototypes, which are currently the largest of their kind in the world, effectively give us access to bubble-chamber images in full colour, with the colour representing energy deposition (CERN Courier July/August 2024 p41). Millions of these images are now analysed algorithmically – essential, as DUNE is expected to generate one of the highest data rates in the world.

Laura Munteanu is a CERN staff scientist working on the T2K and DUNE experiments.

1965

The first experiment at CERN to use a superconducting magnet, in 1965

This photograph shows the first experiment at CERN to use a superconducting magnet. The pictured physicist is adjusting a cryostat containing a stack of nuclear emulsions surrounded by a liquid-helium-cooled superconducting niobium–zirconium electromagnet. A pion beam from CERN’s synchrocyclotron passes through the quadrupole magnet at the right, collimated by the pile of lead bricks and detected by a small plastic scintillation counter before entering the cryostat. In this study of double charge exchange from π+ to π− in nuclear emulsions, the experiment consumed between one and two litres of liquid helium per hour from the container in the left foreground, with the vapour being collected for reuse (CERN Courier August 1965 p116).

Today, the LHC is the world’s largest scientific instrument, with more than 24 km of the machine operating at 1.9 K – and yet only one project among many at CERN requiring advanced cryogenics. As presented at the latest international cryogenic engineering conference organised here in July, there have never been so many cryogenics projects either implemented or foreseen. They include accelerators for basic research, light sources, medical accelerators, detectors, energy production and transmission, trains, planes, rockets and ships. The need for energy efficiency and long-term sustainability will necessitate cryogenic technology with an enlarged temperature range for decades to come. CERN’s experience provides a solid foundation for a new generation of engineers to contribute to society.

Serge Claudet is a former deputy group leader of CERN’s cryogenics group.

1966

Mirrors at CERN used to reflect Cherenkov light

Polishing a mirror at CERN in 1966. Are physicists that narcissistic? Perhaps some are, but not in this case. Ultra-polished mirrors are still a crucial part of a class of particle detectors based on the Cherenkov effect. Just as a shock wave of sound is created when an object flies through the sky at a speed greater than the speed of sound in air, so charged particles create a shock wave of light when they pass through a medium at a speed greater than the speed of light in that medium. This effect is extremely useful because the angle at which the light is emitted relative to the particle’s trajectory is related to the particle’s velocity. By measuring the emission angle of Cherenkov light for an ultra-relativistic charged particle travelling through a transparent medium, such as a gas, the velocity of the particle can therefore be determined. Together with a measurement of the particle’s momentum, it is then possible to obtain its identity card, i.e. its mass. Mirrors are used to reflect the Cherenkov light onto the photosensors. The LHCb experiment at CERN has the most advanced Cherenkov detector ever built. Years go by and technology evolves, but fundamental physics is about reality, and that’s unchangeable!
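The chain of measurements sketched above can be written compactly (standard Cherenkov relations in natural units, c = 1; they are not spelled out in the article):

```latex
\cos\theta_c = \frac{1}{n\beta}, \qquad
m = \frac{p}{\beta\gamma} = p\,\sqrt{\frac{1}{\beta^{2}} - 1}
  = p\,\sqrt{n^{2}\cos^{2}\theta_c - 1},
```

so a measurement of the Cherenkov angle $\theta_c$ in a medium of known refractive index $n$, combined with the momentum $p$, yields the mass.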

Vincenzo Vagnoni is spokesperson of the LHCb collaboration.

1970

The Intersecting Storage Rings

In 1911, Heike Kamerlingh Onnes made a groundbreaking discovery by measuring zero resistance in a mercury wire at 4.2 K, revealing the phenomenon of superconductivity. This earned him the 1913 Nobel Prize, decades in advance of Bardeen, Cooper and Schrieffer’s full theoretical explanation of 1957. It wasn’t until the 1960s that the first superconducting magnets exceeding 1 T were built. This delay stemmed from the difficulty in enabling bulk superconductors to carry large currents in strong magnetic fields – a challenge requiring significant research.

The world’s first proton–proton collider, CERN’s pioneering Intersecting Storage Rings (ISR, pictured below left), began operation in 1971, a year after this photograph was taken. One of its characteristic “X”-shaped vacuum chambers is visible, flanked by combined-function bending magnets on either side. In 1980, to boost its luminosity, eight superconducting quadrupole magnets based on niobium-titanium alloy were installed, each with a 173 mm bore and a peak field of 5.8 T, making the ISR the first collider to use superconducting magnets. Today, we continue to advance superconductivity. For the LHC’s high-luminosity upgrade, we are preparing to install the first magnets based on niobium-tin technology: 24 quadrupoles with a 150 mm aperture and a peak field of 11.3 T.

Susana Izquierdo Bermudez leads CERN’s Large Magnet Facility.

1972

Mary Gaillard and Murray Gell-Mann

The Theoretical Physics Department, or Theory Division as it used to be known, dates back to the foundation of CERN, when it was first established in Copenhagen under the direction of Niels Bohr, before moving to Geneva in 1957. Theory flourished at CERN in the 1960s, hosting many scientists from CERN’s member states and beyond, working side-by-side with experimentalists with a particular focus on strong interactions.

In 1972, when Murray Gell-Mann visited CERN and had this discussion with Mary Gaillard, the world of particle physics was at a turning point. The quark model had been proposed by Gell-Mann in 1964 (similar ideas had been proposed by George Zweig and André Petermann) and the first experimental evidence for the reality of quarks had been discovered in deep-inelastic electron scattering at SLAC in 1968. However, the dynamics of quarks was a puzzle. The weak interactions being discussed by Gaillard and Gell-Mann in this picture were also puzzling, though Gerard ’t Hooft and Martinus Veltman had just shown that the unified theory of weak and electromagnetic interactions proposed earlier by Shelly Glashow, Abdus Salam and Steven Weinberg was a calculable theory.

The first evidence for this theory came in 1973 with the discovery of neutral currents by the Gargamelle neutrino experiment at CERN, and 1974 brought the discovery of the charm quark, a key ingredient in what came to be known as the Standard Model. This quark had been postulated to explain properties of K mesons, whose decays are being discussed by Gaillard and Gell-Mann in this picture, and Gaillard, together with Benjamin Lee, went on to play a key role in predicting its properties. The discoveries of neutral currents and charm ushered in the Standard Model, and CERN theorists were active in exploring its implications – notably in sketching out the phenomenology of the Brout–Englert–Higgs mechanism. We worked with experimentalists particularly closely during the 1990s, making precise calculations and interpreting the results emerging from LEP that established the Standard Model.

CERN Theory in the 21st century has largely been focused on the LHC experimental programme and pursuing new ideas for physics beyond the Standard Model, often in relation to cosmology and astrophysics. These are likely to be the principal themes of theoretical research at CERN during its eighth decade.

John Ellis served as head of CERN’s Theoretical Physics Department from 1988 to 1994.

1974

Adjusting the electronics of the ion source

From 1959 to 1992, Linac1 accelerated protons to 50 MeV, for injection into the Proton Synchrotron, and from 1972 into the Proton Synchrotron Booster. In 1974, their journey started in this ion source. High voltage was used to achieve the first acceleration to a few percent of the speed of light. It wasn’t only the source itself that had to be at high voltage, but also the power supplies that feed magnets, the controllers for gas injection, the diagnostics and the controls. This platform was the laboratory for the ion source. When operational, the cubicle and everything in it was at 520 kV, meaning all external surfaces had to be smooth to avoid sparks. As pictured, hydraulic jacks could lift the lid to allow access for maintenance and testing, at which point a drawbridge would be lowered from the adjacent wall to allow the engineers and technicians to take a seat in front of the instruments.

Thanks to the invention of radio-frequency quadrupoles by Kapchinsky and Teplyakov, radio-frequency acceleration can now start from lower proton energies. Today, ion sources use much lower voltages, in the range of tens of kilovolts, allowing the source installations to shrink dramatically in size compared to the 1970s.

Richard Scrivens is CERN’s deputy head of accelerator and beam physics.

1974

The Super Proton Synchrotron tunnel

CERN’s labyrinth of tunnels has been almost continuously expanding since the lab was founded 70 years ago. When CERN was first conceived, who would have thought that the 7 km-long Super Proton Synchrotron tunnel shown in this photograph would have been constructed, let alone the 27 km LEP/LHC tunnel? Questions similar to those being posed today about the proposed Future Circular Collider (FCC) tunnel were once raised about the feasibility of the LEP tunnel. But if you take a step back and look at the history of CERN’s expanding tunnel network, it seems like the next logical step for the organisation.

This vintage SPS photograph from the 1970s shows the tunnel’s secondary lining being constructed. The concrete was transported from the surface down the 50 m-deep shafts and then pumped behind the metal formwork to create the tunnel walls. This technology is still used today, most recently for the HL-LHC tunnels. However, for a mega-project like the FCC, a much quicker and more sophisticated methodology is envisaged. The tunnels would be excavated using tunnel boring machines, which will install a pre-cast concrete segmental lining using robotics immediately after the excavation of the rock, allowing 20 m of tunnel to be excavated and lined with concrete per day.

John Osborne is a senior civil engineer at CERN.

1977

Alan Jeavons and David Townsend

Detector development for fundamental physics always advances in symbiosis with detector development for societal applications. Here, Alan Jeavons (left) and David Townsend prepare the first positron-emission tomography (PET) scan of a mouse to be performed at CERN. A pair of high-density avalanche chambers (HIDACs) can be seen above and below Jeavons’ left hand. As in PET scans in hospitals today, a radioactive isotope introduced into the biological tissue of the mouse decays by emitting a positron that travels a few millimetres before annihilating with an electron. The resulting pair of coincident and back-to-back 511 keV photons was then converted into electron avalanches which were reconstructed in multiwire proportional chambers – a technology invented by CERN physicist Georges Charpak less than a decade earlier to improve upon bubble chambers and cloud chambers in high-energy physics experiments. The HIDAC detector later contributed to the development of three-dimensional PET image reconstruction. Such testing now takes place at dedicated pre-clinical facilities.

Today, PET detectors are based on inorganic scintillating crystals coupled to photodetectors – a technology that is also used in the CMS and ALICE experiments at the LHC. CERN’s Crystal Clear collaboration has been continuously developing this technology since 1991, yielding benefits for both fundamental physics and medicine.

One of the current challenges in PET is to improve time resolution in time-of-flight PET (TOF-PET) below 100 ps, and towards 10 ps. This will eventually enable positron annihilations to be pinpointed at the millimetre level, improving image quality, speeding up scans and reducing the dose injected into patients. Improvements in time resolution are also important for detectors in future high-energy experiments, and the future barrel timing layer of the CMS detector upgrade for the High-Luminosity LHC was inspired by TOF-PET R&D.
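The link between timing resolution and localisation along the line joining the two photons follows from the standard relation Δx = cΔt/2. A minimal sketch (the function name and example values are illustrative, not from the article):

```python
# Localisation of the positron annihilation point along the line of response
# in TOF-PET, from the coincidence timing resolution: delta_x = c * delta_t / 2.
C = 299_792_458  # speed of light, m/s

def tof_position_uncertainty_mm(delta_t_ps: float) -> float:
    """Position uncertainty (mm) for a given timing resolution (ps)."""
    delta_t_s = delta_t_ps * 1e-12
    return C * delta_t_s / 2 * 1e3  # metres -> millimetres

print(round(tof_position_uncertainty_mm(100), 2))  # ≈ 15 mm at 100 ps
print(round(tof_position_uncertainty_mm(10), 2))   # ≈ 1.5 mm at 10 ps
```

This is why pushing the time resolution towards 10 ps would pinpoint annihilations at the millimetre level, as described above.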

Etiennette Auffray Hillemanns is spokesperson for the Crystal Clear collaboration and technical coordinator for the CMS electromagnetic calorimeter.

1979

Rafel Carreras

In this photo, we see Rafel Carreras, a remarkable science educator and communicator, sharing his passion for science with an eager audience of young learners. Known for his creativity and enthusiasm, Carreras makes the complex world of particle physics accessible and fun. His particle-physics textbook When Energy Becomes Matter includes memorable visualisations that we still use in our education activities today. One such visualisation is the “fruity strawberry collision”, wherein two strawberries collide and transform into a multitude of new fruits, illustrating how particle collisions produce a shower of new particles that didn’t exist before.

Today, we find fewer chalk boards at CERN and more casual clothing, but one thing remains the same: CERN’s dedication to education and communication. Over the years, CERN has trained more than 10,000 science teachers, significantly impacting science education globally. CERN Science Gateway, our new education and outreach centre, allows us to welcome about 400,000 visitors annually. It offers a wide range of activities, such as interactive exhibitions, science shows, guided tours and hands-on lab experiences, making science exciting and accessible for everyone. Thanks to hundreds of passionate and motivated guides, visitors leave inspired and curious to find out more about the fascinating scientific endeavours and extraordinary technologies at CERN.

Julia Woithe coordinates educational activities at CERN’s new Science Gateway.

  • These photographs are part of a collection curated by Renilde Vanden Broeck, which will be exhibited at CERN in September.


]]>
Watch out for hybrid pixels https://cerncourier.com/a/watch-out-for-hybrid-pixels/ Mon, 16 Sep 2024 14:11:02 +0000 https://preview-courier.web.cern.ch/?p=111070 Hybrid pixel detectors are changing the face of societal applications such as X-ray imaging.

The post Watch out for hybrid pixels appeared first on CERN Courier.

]]>
In 1895, in a darkened lab in Würzburg, Bavaria, Wilhelm Röntgen noticed that a screen coated with barium platinocyanide fluoresced, despite being shielded from the electron beam of his cathode-ray tube. Hitherto undiscovered “X”-rays were being emitted as the electrons braked sharply in the tube’s anode and glass casing. A week later, Röntgen imaged his wife’s hand using a photographic plate, and medicine was changed forever. X-rays would be used for non-invasive diagnosis and treatment, and would inspire countless innovations in medical imaging. Röntgen declined to patent the discovery of X-ray imaging, believing that scientific advancements should benefit all of humanity, and donated the proceeds of the first Nobel Prize for Physics to his university.

One hundred years later, medical imaging would once again be disrupted – not in a darkened lab in Bavaria, but in the heart of the Large Hadron Collider (LHC) at CERN. The innovation in question is the hybrid pixel detector, which allows remarkably clean track reconstruction. When the technology is adapted for use in a medical context, by modifying the electronics at the pixel level, X-rays can be individually detected and their energy measured, leading to spectroscopic X-ray images that distinguish between different materials in the body. In this way, black and white medical imaging is being reinvented in full colour, allowing more precise diagnoses with lower radiation doses.

The next step is to exploit precise timing in each pixel. The benefits will be broadly felt. Electron microscopy of biological samples can be clearer and more detailed. Biomolecules can be more precisely identified and quantified by imaging time-of-flight mass spectrometry. Radiation doses can be better controlled in hadron therapy, reducing damage to healthy tissue. Ultra-fast changes can be captured in detail at synchrotron light sources. Hybrid pixel detectors with fast time readout are even being used to monitor quantum-mechanical processes.

Digital-camera drawbacks

X-ray imaging has come a long way since the photographic plate. Most often, the electronics work in the same way as a cell-phone camera. A scintillating material converts X-rays into visible photons that are detected by light-sensitive diodes connected to charge-integrating electronics. The charge from high-energy and low-energy photons is simply added up within the pixel in the same way a photographic film is darkened by X-rays.

A hybrid pixel detector and Medipix3 chip

Charge integration is the technique of choice in the flat-panel detectors used in radiology as large surfaces can be covered relatively cheaply, but there are several drawbacks. It’s difficult to collect the scintillation light from an X-ray on a single pixel, as it spreads out. And information about the energy of the X-rays is lost.

By the 1990s, however, LHC detector R&D was driving the development of the hybrid pixel detector, which could solve both problems by detecting individual photons. It soon became clear that “photon counting” could be as useful in a hospital ward as it would prove to be in a high-energy-physics particle detector. In 1997 the Medipix collaboration first paired semiconductor sensors with readout chips capable of counting individual X-rays.

Nearly three decades later, hybrid pixel detectors are making their mark in hospital wards. Parallel to the meticulous process of preparing a technology for medical applications in partnership with industry, researchers have continued to push the limits of the technology, in pursuit of new innovations and applications.

Photon counting

In a hybrid pixel detector, semiconductor sensor pixels are individually fixed to readout chips by an array of bump bonds – tiny balls of solder that permit the charge signal in each sensor pixel to be passed to each readout pixel (see “Hybrid pixels” figure). In these detectors, low-noise pulse-processing electronics take advantage of the intrinsic properties of semiconductors to provide clean track reconstruction even at high rates (see “Semiconductor subtlety” panel).

Since silicon detectors are relatively transparent to the X-ray energies used in medical imaging (approximately 20 to 140 keV), denser sensor materials with higher stopping power are required to capture every photon passing through the patient. This is where hybrid pixel detectors really come into their own. For X-ray photons with an energy above about 20 keV, a highly absorbing material such as cadmium telluride can be used in place of the silicon used in the LHC experiments. Provided precautions are taken to deal with charge sharing between pixels, the number of X-rays in every energy bin can be recorded, allowing each pixel to measure the spectrum of the interacting X-rays.

Semiconductor subtlety

In insulators, the conduction band is far above the energy of electrons in the valence band, making it difficult for current to flow. In conductors, the two bands overlap and current flows with little resistance. In semiconductors, the gap is just a couple of electron-volts. Passing charged particles, such as those created in the LHC experiments, promote thousands of valence electrons into the conduction band, creating positively charged “holes” in the valence band and allowing current to flow.

Hybrid pixel detector

Silicon has four valence electrons and therefore forms four covalent bonds with neighbouring atoms to fill up its outermost shell in silicon crystals. These crystals can be doped with impurities that either add additional electrons to the conduction band (n-type doping) or additional holes to the valence band (p-type doping). The silicon pixel sensors used at the LHC are made up of rectangular pixels doped with additional holes on one side coupled to a single large electrode doped with additional electrons on the rear (see “Pixel picture” figure).

In p-n junctions such as these, “depletion zones” develop at the pixel boundaries, where neighbouring electrons and holes recombine, generating a natural electric field. The depletion zones can be extended throughout the whole sensor by applying a strong “reverse-bias” field in the opposite direction. When a charged particle passes, electrons and holes are created as before, but thanks to the field a directed pulse of charge now flows across the bump bond into the readout chip. Charge collection is prompt, permitting the pixel to be ready for the next particle.

In each readout pixel the detected charge pulse is compared with an externally adjustable threshold. If the pulse exceeds the threshold, its amplitude and timing can be measured. The threshold level is typically set to be many times higher than the electronic noise of the detection circuit, permitting noise-free images. Because of the intimate contact between the sensor and the readout circuit, the noise is typically less than a root-mean-square value of 100 electrons, and any signal higher than a threshold of about 500 electrons can be unambiguously detected. Pixels that are not hit remain silent.
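The noise-free counting described in this panel can be illustrated with a toy model. The 100-electron rms noise and 500-electron threshold are the figures quoted above; everything else here is an assumption for illustration, not the real pixel circuit:

```python
import random

NOISE_RMS_E = 100   # electronic noise, electrons rms (figure quoted in the panel)
THRESHOLD_E = 500   # discriminator threshold, electrons (figure quoted in the panel)

def count_hits(signals_e, threshold=THRESHOLD_E, noise_rms=NOISE_RMS_E, seed=42):
    """Toy per-pixel counter: a hit is registered when signal + noise exceeds
    the threshold, mimicking the pulse-height comparison in each readout pixel."""
    rng = random.Random(seed)
    return sum(1 for s in signals_e if s + rng.gauss(0, noise_rms) > threshold)

# An empty pixel (signal = 0) essentially never fires: the 500 e threshold
# sits five standard deviations above the 100 e rms noise.
empty = count_hits([0] * 10_000)
# A particle liberating thousands of electrons is always counted.
photons = count_hits([4000] * 100)
print(empty, photons)
```

The five-sigma margin between noise and threshold is what makes the images effectively noise-free: pixels that are not hit remain silent.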

In the LHC, each passing particle liberates thousands of electrons, allowing clean images of the collisions to be taken even at very high rates. Hybrid pixels have therefore become the detector of choice in many large experiments where fast and clean images are needed, and are the heart of the ATLAS, CMS and LHCb experiments. In cases where the event rates are lower, such as the ALICE experiment at the LHC and the Belle II experiment at SuperKEKB at KEK in Japan, it has now become possible to use “monolithic” active pixel detectors, where the sensor and readout electronics are implemented in the same substrate. In the future, as the semiconductor industry shifts to three-dimensional chip and wafer stacking, the distinction between hybrid and monolithic pixel detectors will be blurred.

Protocols regarding the treatment of patients are strictly regulated in the interest of safety, making it challenging to introduce new technologies. Therefore, in parallel with the development of successive generations of Medipix readout chips, a workshop series on the medical applications of spectroscopic X-ray detectors has been hosted at CERN. Now in its seventh edition (see “Threshold moment for medical photon counting”), the workshop gathers a cross-disciplinary group of specialists, ranging from the designers of readout chips to experts at the large equipment suppliers, and from medical physicists all the way up to opinion-leading radiologists. The role of the workshop is the formation and development of a community of practitioners from diverse fields willing to share knowledge – and, of course, reasonable doubts – in order to encourage the transition of spectroscopic photon counting from the lab to the clinic. CERN and the Medipix collaborations have played a pathfinding role in this community, exploring avenues well in advance of their introduction to medical practice.

The Medipix2 (1999–present), Medipix3 (2005–present) and Medipix4 (2016–present) collaborations are composed only of publicly funded research institutes and universities, which helps keep the development programmes driven by science. There have been hundreds of peer-reviewed publications and dozens of PhD theses written by the designers and users of the various chips. With the help of CERN’s Knowledge Transfer Office, several start-up companies have been created and commercial licences signed. This has led to many unforeseen applications and helped enormously with the dissemination of the technology. The publications of the clients of the industrial partners now represent a large share of the scientific outcome from these efforts, totalling hundreds of papers.

Spectroscopic X-ray imaging is now arriving in clinical practice. Siemens Healthineers were first to market in 2022 with the Naeotom Alpha photon counting CT scanner, and many of the first users have been making ground-breaking studies exploiting the newly available spectroscopic information in the clinical domain. CERN’s Medipix3 chip is at the heart of the MARS Bioimaging scanner, which brings unprecedented imaging performance to the point of patient care, opening up new patient pathways and saving time and money.

ASIC (application-specific integrated circuit) development is still moving forwards rapidly in the Medipix collaborations. For example, in the Medipix3 and Medipix4 chips, on-pixel circuitry mitigates the impact of X-ray fluorescence and charge diffusion in the semiconductor by summing up the charge in a localised region and allocating the hit to one pixel. The fine segmentation of the detector not only leads to unprecedented spatial resolution but also mitigates “hole trapping” – a common bugbear of the high-density sensor materials used in medical imaging, whereby photons of the same energy induce different charges according to their interaction depth in the sensor. Where the pixel size is significantly smaller than the perpendicular sensor thickness – as in the Medipix case – only one of the charge species (usually electrons) contributes to the measured charge, and no matter where the X-ray is deposited in the sensor thickness, the total charge detected is the same.
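The charge-summing idea described above can be caricatured offline: sum the charge in each 2×2 neighbourhood and allocate the hit to the window with the largest total. This is only a toy reconstruction of the principle (the grid values are arbitrary), not the on-pixel circuit itself:

```python
def charge_summing_hit(grid):
    """Return ((row, col), summed charge) for the 2x2 neighbourhood with the
    largest total charge - a toy winner-takes-all allocation of one hit."""
    best = None
    for r in range(len(grid) - 1):
        for c in range(len(grid[0]) - 1):
            s = grid[r][c] + grid[r][c + 1] + grid[r + 1][c] + grid[r + 1][c + 1]
            if best is None or s > best[1]:
                best = ((r, c), s)
    return best

# One photon's charge diffused over four pixels (illustrative values, in electrons):
grid = [[0,   0,  0, 0],
        [0, 120, 60, 0],
        [0,  80, 40, 0],
        [0,   0,  0, 0]]
print(charge_summing_hit(grid))  # ((1, 1), 300)
```

Summing before thresholding means the full deposited charge, rather than a shared fraction of it, is compared against the energy bins.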

But photon counting is only half the story. Another parameter that has not yet been exploited in high-spatial-resolution medical imaging systems can also be measured at the pixel level.

A new dimension

In 2005, Dutch physicists working with gas detectors requested a modification that would permit each pixel to measure arrival times instead of counting photons. The Medipix2 collaboration agreed and designed a chip with three acquisition modes: photon counting, arrival time and time over threshold, which provides a measure of energy. The Timepix family of pixel-detector readout chips was born.

Xènia Turró using a Timepix-based thumb-drive detector

The most recent generations of Timepix chips, such as Timepix3 (released in 2016) and Timepix4 (released in 2022), stream hit information off chip as soon as it is generated – a significant departure from Medipix chips, which process hits locally, assuming them to be photons, sending only a spectroscopic image off chip. With Timepix, each time a charge exceeds the threshold, a packet of information is sent off chip that contains the coordinates of the hit pixel, the particle’s arrival time and the time over threshold (66 bits in total per hit). This allows offline reconstruction of individual clusters of hits, opening up a myriad of potential new applications.
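A 66-bit hit packet of this kind can be sketched with simple bit-packing. The field widths chosen below (16-bit pixel address, 40-bit arrival time, 10-bit time over threshold) are assumptions that merely sum to 66 bits; the article does not specify the chip’s real packet layout:

```python
# Illustrative packing of a 66-bit Timepix-style hit word: pixel coordinates,
# arrival time and time over threshold. Field widths are assumed, NOT the
# real chip format.
ADDR_BITS, TOA_BITS, TOT_BITS = 16, 40, 10  # assumed split: 16 + 40 + 10 = 66

def pack_hit(addr: int, toa: int, tot: int) -> int:
    """Pack the three fields into one integer word, most significant field first."""
    assert addr < 2**ADDR_BITS and toa < 2**TOA_BITS and tot < 2**TOT_BITS
    return (addr << (TOA_BITS + TOT_BITS)) | (toa << TOT_BITS) | tot

def unpack_hit(word: int) -> tuple:
    """Recover (addr, toa, tot) from a packed hit word."""
    tot = word & (2**TOT_BITS - 1)
    toa = (word >> TOT_BITS) & (2**TOA_BITS - 1)
    addr = word >> (TOA_BITS + TOT_BITS)
    return addr, toa, tot

word = pack_hit(addr=1234, toa=987654321, tot=77)
print(unpack_hit(word))  # (1234, 987654321, 77)
```

Streaming fixed-size packets like this, rather than whole frames, is what allows offline software to rebuild individual clusters of hits.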

One advantage of Timepix is that particle event reconstruction is not limited to photons. Cosmic muons leave a straight track. Low-energy X-rays interact in a point-like fashion, lighting up only a small number of pixels. Electrons interact with atomic electrons in the sensor material, leaving a curly track. Alpha particles deposit a large quantity of charge in a characteristic blob. To spark the imagination of young people, Timepix chips have been incorporated on a USB thumb drive that can be read out on a laptop computer (see “Thumb-drive detector” figure). The CERN & Society Foundation is raising funds to make these devices widely available in schools.

Timepix chips have also been adapted to dose monitoring for astronauts. Following a calibration effort by the University of Houston, NASA and the Institute for Experimental and Applied Physics in Prague, a USB device identical to that used in classrooms precisely measures the doses experienced by flight crews in space. Timepix is now deployed on the International Space Station (see “Radiation monitoring” figure), the Artemis programme and several European space-weather studies, and will be deployed on the Lunar Gateway programme.

Stimulating innovation

Applications in science, industry and medicine are too numerous to mention in detail. In time-of-flight mass spectrometry, the vast number of channels allowed by Timepix promises new insights into biomolecules. Large-area time-resolved X-ray cameras are valuable at synchrotrons, where they have applications in structural biology, materials science, chemistry and environmental science. In the aerospace, manufacturing and construction industries, non-destructive X-ray testing using backscattering can probe the integrity of materials and structures while requiring access from one side only. Timepix chips also play a crucial role in X-ray diffraction for materials analysis and medical applications such as single-photon-emission computed tomography (SPECT), and beam tracking and dose-deposition monitoring in hadron therapy (see “Carbon therapy” figure). The introduction of noise-free hit streaming with timestamp precision down to 200 picoseconds has also opened up entirely new possibilities in quantum science, and early applications of Timepix3 in experiments exploring the quantum behaviour of particles are already being reported. We are just beginning to uncover the potential of these innovations.

Chris Cassidy working near the Timepix USB

It’s also important to note that applications of the Timepix chips are not limited to the readout of semiconductor pixels made of silicon or cadmium telluride. A defining feature of hybrid pixel detectors is that the same readout chip can be used with a variety of sensor materials and structures. In cases where visible photons are to be detected, an electron can be generated in a photocathode and then amplified using a micro-channel plate. The charge cloud from the micro-channel plate is then detected on a bare readout chip in much the same way as the charge cloud in a semiconductor sensor. Some gas-filled detectors are constructed using gas electron multipliers and micromegas foils, which amplify charge passing through holes in the foils. Timepix chips can be used for readout in place of the conventional pad arrays, providing much higher spatial and time resolution than would otherwise be available.

Successive generations of Timepix and Medipix chips have followed Moore’s law, permitting more and more circuitry to be fitted into a single pixel as the minimum feature size of transistors has shrunk. In the Timepix3 and Timepix4 chips, data-driven architecture and on-pixel time stamping are the unique features. The digital circuitry of the pixel has become so complex that an entirely new approach to chip design – “digital-on-top” – was employed. These techniques were subsequently deployed in ASIC developments for the LHC upgrades.

Just as hybrid-pixel R&D at the LHC has benefitted societal applications, R&D for these applications now benefits fundamental research. Making highly optimised chips available to industry “off the shelf” can also save substantial time and effort in many applications in fundamental research, and the highly integrated R&D model whereby detector designers keep one foot in both camps generates creativity and the reciprocal sparking of ideas and sharing of expertise. Timepix3 is used as readout of the beam–gas-interaction monitors at CERN’s Proton Synchrotron and Super Proton Synchrotron, providing non-destructive images of the beams in real time for the first time. The chips are also deployed in the ATLAS and MoEDAL experiments at the LHC, and in numerous small-scale experiments, and Timepix3 know-how helped develop the VeloPix chip used in the upgraded tracking system for the LHCb experiment. Timepix4 R&D is now being applied to the development of a new generation of readout chips for future use at CERN, in applications where a time bin of 50 ps or less is desired.

Maria Martišíková and Laurent Kelleter

All these developments have relied on collaborating research organisations being willing to pool the resources needed to take strides into unexplored territory. The effort has been based on the solid technical and administrative infrastructure provided by CERN’s experimental physics department and its knowledge transfer, finance and procurement groups, and many applications have been made possible by hardware provided by the innovative companies that license the Medipix and Timepix chips.

With each new generation of chips, we have pushed the boundaries of what is possible by taking calculated risks ahead of industry. But the high-energy-physics community is under intense pressure, with overstretched resources. Can blue-sky R&D such as this be justified? We believe, in the spirit of Röntgen before us, that we have a duty to make our advancements available to a larger community than our own. Experience shows that when we collaborate across scientific disciplines and with the best in industry, the fruits lead directly back into advancements in our own community.

The post Watch out for hybrid pixels appeared first on CERN Courier.

Feature Hybrid pixel detectors are changing the face of societal applications such as X-ray imaging. https://cerncourier.com/wp-content/uploads/2024/09/CCSepOct24_DETECTOR_xray.jpg
Antiprotons cooled in record time https://cerncourier.com/a/antiprotons-cooled-in-record-time/ Mon, 16 Sep 2024 14:09:45 +0000 https://preview-courier.web.cern.ch/?p=111101 The BASE experiment has reduced the time to cool antiprotons from 15 hours to eight minutes.

The post Antiprotons cooled in record time appeared first on CERN Courier.

Testing the most fundamental symmetry of the Standard Model – CPT symmetry, which implies exact equality between the fundamental properties of particles and their antimatter conjugates – requires cooling antimatter particles to the lowest possible temperatures. The BASE experiment, located at CERN, has passed a major milestone in this regard. Using a sophisticated system of Penning traps, the collaboration has reduced the time required to cool an antiproton by a factor of more than 100. This considerable improvement makes it possible to measure the antiproton’s properties with unparalleled precision, perhaps shedding light on the mystery of why matter outnumbers antimatter in the universe.

Magnetic moments

BASE (Baryon Antibaryon Symmetry Experiment) specialises in the study of antiprotons by measuring properties such as the magnetic moment and charge-to-mass ratio. The latter quantity has been shown to agree with that of the proton to within an experimental uncertainty of 16 parts per trillion. Though far more complex to measure, and therefore not nearly as precise, the antiproton’s magnetic moment provides an equally important probe of CPT symmetry.

To determine the antiproton’s magnetic moment, BASE measures the frequency of spin flips of single antiprotons – a remarkable feat that requires the particle to be cooled to less than 200 mK. BASE’s previous setup could achieve this, but only after 15 hours of cooling, explains lead author Barbara Latacz (RIKEN/CERN): “As we need to perform 1000 measurement cycles, it would have taken us three years of non-stop measurements, which would have been unrealistic. By reducing the cooling time to eight minutes, BASE can now obtain all of the 1000 measurements it needs – and thereby improve its precision – in less than a month.” By cooling antiprotons to such low energies, the collaboration has been able to detect antiproton spin transitions with an error rate (< 0.000023) more than three orders of magnitude better than in previous experiments.
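The arithmetic behind these figures is easy to check: 1000 cycles at 15 hours of cooling each amounts to well over a year and a half of cooling time alone (the three-year estimate presumably includes the measurement overheads of each cycle), while 1000 cycles at eight minutes fits comfortably inside a month.

```python
CYCLES = 1000  # spin-flip measurement cycles needed for the campaign

old_cooling_years = 15 * CYCLES / 24 / 365   # 15 h of cooling per cycle
new_cooling_days = 8 * CYCLES / 60 / 24      # 8 min of cooling per cycle

print(f"old: {old_cooling_years:.1f} years of cooling alone")  # ~1.7 years
print(f"new: {new_cooling_days:.1f} days of cooling")          # ~5.6 days
```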

Underpinning the BASE breakthrough is an improved cooling trap. BASE takes antiprotons that have been decelerated by the Antiproton Decelerator and the Extra Low Energy Antiproton ring (ELENA) and stores them in batches of around 100 in a Penning trap, which holds them in place using electric and magnetic fields. A single antiproton is then extracted into a system made up of two Penning traps: the first trap measures its temperature and, if it is too high, transfers the antiproton to a second trap to be cooled further. The particle goes back and forth between the two traps until the desired temperature is reached.
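This back-and-forth between the two traps is, in essence, an iterate-until-cold loop. The sketch below is a minimal toy model of that logic: only the ~200 mK target comes from the article, while the cooling factor, re-heating kick and function name are invented placeholders, not measured BASE parameters.

```python
import random

def cool_antiproton(t_initial_mk, t_target_mk=200.0, cooling_factor=0.5,
                    max_cycles=1000):
    """Toy model of the BASE two-trap cooling scheme.

    Measure the particle's temperature; if it is too hot, pass it
    through the cooling trap (which removes a fraction of its energy,
    with a random kick on re-measurement) and repeat until the
    spin-flip measurement threshold is reached.
    """
    t = t_initial_mk
    for cycle in range(1, max_cycles + 1):
        if t < t_target_mk:          # cold enough for spin-flip
            return cycle, t          # measurements to begin
        # one pass through the cooling trap (illustrative dynamics)
        t = t * cooling_factor + random.uniform(0.0, 50.0)
    raise RuntimeError("particle never reached target temperature")
```

The practical gain reported by BASE comes from making each pass through this loop dramatically faster, not from changing its logic.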

The new cooling trap has a diameter of just 3.8 mm, less than half the size of that used in previous experiments, and is equipped with innovative segmented electrodes to reduce the amplitude of one of the antiproton oscillations – the cyclotron mode – more effectively. The readout electronics have also been optimised to reduce background noise. The new system reduces the time spent by the antiproton in the cooling trap during each cycle from 10 minutes to 5 seconds, while improvements to the measurement trap have also made it possible to reduce the measurement time fourfold.

“Up to now, we have been able to compare the magnetic moments of the antiproton and the proton with a precision of one part per billion,” says BASE spokesperson Stefan Ulmer (Max Planck–RIKEN–PTB). “Our new device will allow us to reach a precision of a tenth of a part per billion and, in the very long term, will even allow us to perform experiments with 10 parts-per-trillion resolution. The slightest discrepancy could help solve the mystery of the imbalance between matter and antimatter in the universe.”


News The BASE experiment has reduced the time to cool antiprotons from 15 hours to eight minutes. https://cerncourier.com/wp-content/uploads/2024/09/CCSepOct24_NA_base.jpg