Ten windows on the future of particle physics

Paris Sphicas highlights key takeaways from the briefing book of the 2026 update of the European Strategy for Particle Physics.

A major step toward shaping the future of European particle physics was reached on 2 October, with the release of the Physics Briefing Book of the 2026 update of the European Strategy for Particle Physics. Despite its 250 pages, it is a concise summary of the vast amount of work contained in the 266 written submissions to the strategy process and the deliberations of the Open Symposium in Venice in June (CERN Courier September/October 2025 p24).

The briefing book compiled by the Physics Preparatory Group is an impressive distillation of our current knowledge of particle physics, and a preview of the exciting prospects offered by future programmes. It provides the scientific basis for defining Europe’s long-term particle-physics priorities and determining the flagship collider that will best advance the field. To this end, it presents comparisons of the physics reach of the different candidate machines, which often have different strengths in probing new physics beyond the Standard Model (SM).

Condensing all this into a few sentences is difficult, though two messages are clear: if the next collider at CERN is an electron–positron collider, the exploration of new physics will proceed mainly through high-precision measurements; and the deepest indirect reach into the structure of physics beyond the SM will be provided by the combined exploration of the Higgs, electroweak and flavour domains.

Following a visionary outlook for the field from theory, the briefing book divides its exploration of the future of particle physics into seven sectors of fundamental physics and three technology pillars that underpin them.

1. Higgs and electroweak physics

In the new era that has dawned with the discovery of the Higgs boson, numerous fundamental questions remain, including whether the Higgs boson is an elementary scalar, part of an extended scalar sector, or even a portal to entirely new phenomena. The briefing book highlights how precision studies of the Higgs boson, the W and Z bosons, and the top quark will probe the SM to unprecedented accuracy, looking for indirect signs of new physics.

Higgs self-coupling

Addressing these questions requires highly precise measurements of the Higgs boson’s couplings, self-interaction and quantum corrections. While the High-Luminosity LHC (HL-LHC) will continue to improve several Higgs and electroweak measurements, the next qualitative leap in precision will be provided by future electron–positron colliders, such as the FCC-ee, the Linear Collider Facility (LCF), CLIC or LEP3. And while these would provide very important information, it would fall upon the shoulders of an energy-frontier machine like the FCC-hh or a muon collider to access potential heavy states. Using the absolute HZZ coupling from the FCC-ee, such machines would measure the single-Higgs-boson couplings with a precision better than 1%, and the Higgs self-coupling at the level of a few per cent (see “Higgs self-coupling” figure).

This anticipated leap in experimental precision will necessitate major advances in theory, simulation and detector technology. In the coming decades, electroweak physics and the Higgs boson in particular will remain a cornerstone of particle physics, linking the precision and energy frontiers in the search for deeper laws of nature.

2. Strong interaction physics

Precise knowledge of the strong interaction will be essential for understanding visible matter, exploring the SM with precision, and interpreting future discoveries at the energy frontier. Building upon advanced studies of QCD at the HL-LHC, future high-luminosity electron–positron colliders such as FCC-ee and LEP3 would, like LHeC, enable per-mille precision on the strong coupling constant, and a greatly improved understanding of the transition between the perturbative and non-perturbative regimes of QCD. The LHeC would bring increased precision on parton-distribution functions that would be very useful for many physics measurements at the FCC-hh. FCC-hh would itself open up a major new frontier for strong-interaction studies.

A deep understanding of the strong interaction also necessitates the study of strongly interacting matter under extreme conditions with heavy-ion collisions. ALICE and the other experiments at the LHC will continue to illuminate this physics, revealing insights into the early universe and the interiors of neutron stars.

3. Flavour physics

With high-precision measurements of quark and lepton processes, flavour studies test the SM at energy scales far above those directly accessible to colliders, thanks to their sensitivity to the effects of virtual particles in quantum loops. Small deviations from theoretical predictions could signal new interactions or particles influencing rare processes or CP-violating effects, making flavour physics one of the most sensitive paths toward discovering physics beyond the SM.

Global efforts are today led by the LHCb, ATLAS and CMS experiments at the LHC and by the Belle II experiment at SuperKEKB. These experiments have complementary strengths: huge data samples from proton–proton collisions at CERN and a clean environment in electron–positron collisions at KEK. Combining the two will provide powerful tests of lepton-flavour universality, searches for exotic decays and refinements in the understanding of hadronic effects.

The next major step in precision flavour physics would require “tera-Z” samples of a trillion Z bosons from a high-luminosity electron–positron collider such as the FCC-ee, alongside a spectrum of focused experimental initiatives at a more modest scale.

4. Neutrino physics

Neutrino physics addresses open fundamental questions related to neutrino masses and their deep connections to the matter–antimatter asymmetry in the universe and its cosmic evolution. Upcoming experiments including long-baseline accelerator-neutrino experiments (DUNE and Hyper-Kamiokande), reactor experiments such as JUNO (see “JUNO takes aim at neutrino-mass hierarchy”) and astroparticle observatories (KM3NeT and IceCube, see also CERN Courier May/June 2025 p23) will likely unravel the neutrino mass hierarchy and discover leptonic CP violation.

In parallel, the hunt for neutrinoless-double-beta decay continues. A signal would indicate that neutrinos are Majorana fermions, which would be indisputable evidence for new physics! Such efforts extend the reach of particle physics beyond accelerators and deepen connections between disciplines. Efforts to determine the absolute mass of neutrinos are also very important.

The chapter highlights the growing synergy between neutrino experiments and collider, astrophysical and cosmological studies, as well as the pivotal role of theory developments. Precision measurements of neutrino interactions provide crucial support for oscillation measurements, and for nuclear and astroparticle physics. New facilities at accelerators explore neutrino scattering at higher energies, while advances in detector technologies have enabled the measurement of coherent neutrino scattering, opening new opportunities for new physics searches. Neutrino physics is a truly global enterprise, with strong European participation and a pivotal role for the CERN Neutrino Platform.

5. Cosmic messengers

Astroparticle physics and cosmology increasingly provide new and complementary information to laboratory particle-physics experiments in addressing fundamental questions about the universe. A rich set of recent achievements in these fields includes high-precision measurements of cosmological perturbations in the cosmic microwave background (CMB) and in galaxy surveys, a first measurement of an extragalactic neutrino flux, accurate antimatter fluxes and the discovery of gravitational waves (GWs).

Leveraging information from these experiments has given rise to the field of multi-messenger astronomy. The next generation of instruments, from neutrino telescopes to ground- and space-based CMB and GW observatories, promises exciting results with important clues for particle physics.

6. Beyond the Standard Model

The landscape for physics beyond the SM is vast, calling for an extended exploration effort with exciting prospects for discovery. It encompasses new scalar or gauge sectors, supersymmetry, compositeness, extra dimensions and dark-sector extensions that connect visible and invisible matter.

Many of these models predict new particles or deviations from SM couplings that would be accessible to next-generation accelerators. The briefing book shows that future electron–positron colliders such as FCC-ee, CLIC, LCF and LEP3 have sensitivity to the indirect effects of new physics through precision Higgs, electroweak and flavour measurements. With their per-mille precision measurements, electron–positron colliders will be essential tools for revealing the virtual effects of heavy new physics beyond the direct reach of colliders. In direct searches, CLIC would extend the energy frontier to 1.5 TeV, whereas FCC-hh would extend it to tens of TeV, potentially enabling the direct observation of new physics such as new gauge bosons, supersymmetric particles and heavy scalar partners. A muon collider would combine precision and energy reach, offering a compact high-energy platform for direct and indirect discovery.

This chapter of the briefing book underscores the complementarity between collider and non-collider experiments. Low-energy precision experiments, searches for electric dipole moments, rare decays and axion or dark-photon experiments probe new interactions at extremely small couplings, while astrophysical and cosmological observations constrain new physics over sprawling mass scales.

7. Dark matter and the dark sector

The nature of dark matter, and the dark sector more generally, remains one of the deepest mysteries in modern physics. A broad range of masses and interaction strengths must be explored, encompassing numerous potential dark-matter phenomenologies, from ultralight axions and hidden photons to weakly interacting massive particles, sterile neutrinos and heavy composite states. The theory space of the dark sector is just as crowded, with models involving new forces or “portals” that link visible and invisible matter.

As no single experimental technique can cover all possibilities, progress will rely on exploiting the complementarity between collider experiments, direct and indirect searches for dark matter, and cosmological observations. Diversity is the key aspect of this developing experimental programme!

8. Accelerator science and technology

The briefing book considers the potential paths to higher energies and luminosities offered by each proposal for CERN’s next flagship project: the two circular colliders FCC-ee and FCC-hh, the two linear colliders LCF and CLIC, and a muon collider; LEP3 and LHeC are also considered as colliders that could potentially offer a physics programme to bridge the time between the HL-LHC and the next high-energy flagship collider. The technical readiness, cost and timeline of each collider are summarised, alongside their environmental impact and energy efficiency (see “Energy efficiency” figure).

Energy efficiency

The two main development fronts in this technology pillar are high-field magnets and efficient radio-frequency (RF) cavities. High-field superconducting magnets are essential for the FCC-hh, while high-temperature superconducting magnet technology, which presents unique opportunities and challenges, might be relevant to the FCC-hh as a second-stage machine after the FCC-ee. Efficient RF systems are required by all accelerators (CERN Courier May/June 2025 p30). Research and development (R&D) on advanced acceleration concepts, such as plasma-wakefield acceleration and muon colliders, also shows much promise, but significant work is needed before these concepts can offer a viable solution for a future collider.

Preserving Europe’s leadership in accelerator science and technology requires a broad and extensive programme of work with continuous support for accelerator laboratories and test facilities. Such investments will continue to be very important for applications in medicine, materials science and industry.

9. Detector instrumentation

A wealth of lessons learned from the LHC and HL-LHC experiments is guiding the development of the next generation of detectors, which must have higher granularity and – for a hadron collider – higher radiation tolerance, alongside improved timing resolution and data throughput.

As the eyes through which we observe collisions at accelerators, detectors require a coherent and long-term R&D programme. Central to these developments will be the detector R&D collaborations, which have provided a structured framework for organising and steering the work since the previous update to the European Strategy for Particle Physics. These span the full spectrum of detector systems, with high-rate gaseous detectors, liquid detectors and high-performance silicon sensors for precision timing, precision particle identification, low-mass tracking and advanced calorimetry.

All these detectors will also require advances in readout electronics, trigger systems and real-time data processing. A major new element is the growing role of AI and quantum sensing, both of which already offer innovative methods for analysis, optimisation and detector design (CERN Courier July/August 2025 p31). As in computing, there are high hopes and well-founded expectations that these technologies will transform detector design and operation.

To maintain Europe’s leadership in instrumentation, sustained investment in test-beam infrastructures and engineering is essential. This supports a mutually beneficial symbiosis with industry. Detector R&D is a portal to sectors as diverse as medical diagnostics and space exploration, providing essential tools such as imaging technologies, fast electronics and radiation-hard sensors for a wide range of applications.

10. Computing

Data challenge

If detectors are the eyes that explore nature, computing is the brain that deciphers the signals they receive. The briefing book pays much attention to the major leaps in computation and storage that are required by future experiments, with simulation, data management and processing at the top of the list (see “Data challenge” figure). Less demanding in resources, but equally demanding of further development, is data analysis. Planning for these new systems is guided by sustainable computing practices, including energy-efficient software and data centres. The next frontier is the HL-LHC, which will be the testing ground and the basis for future development, and serves as an example for the preservation of the current wealth of experimental data and software (CERN Courier September/October 2025 p41).

Several paradigm shifts hold great promise for the future of computing in high-energy physics. Heterogeneous computing integrates CPUs, GPUs and accelerators, providing hugely increased capabilities and better scaling than traditional CPU usage. Machine learning is already being deployed in event simulation, reconstruction and even triggering, and the first signs from quantum computing are very positive. The combination of AI with quantum technology promises a revolution in all aspects of software and of the development, deployment and usage of computing systems.

Some closing remarks

Beyond detailed physics summaries, two overarching issues appear throughout the briefing book.

First, progress will depend on a sustained interplay between experiment, theory and advances in accelerators, instrumentation and computing. The need for continued theoretical development is as pertinent as ever, as improved calculations will be critical for extracting the full physics potential of future experiments.

Second, all this work relies on people – the true driving force behind scientific programmes. There is an urgent need for academia and research institutions to attract and support experts in accelerator technologies, instrumentation and computing by offering long-term career paths. A lasting commitment to training the new generation of physicists who will carry out these exciting research programmes is equally important.

Revisiting the briefing book to craft the current summary brought home very clearly just how far the field of particle physics has come – and, more importantly, how much more there is to explore in nature. The best is yet to come!

Biology at the Bragg peak

Angelica Facoetti explains five facts accelerator physicists need to know about radiobiology to work at the cutting edge of particle therapy.

Mere months after Wilhelm Röntgen discovered X-rays in 1895, doctors were already exploring their ability to treat superficial tumours. Today, the X-rays are generated by electron linacs rather than vacuum tubes, but the principle is the same, and radiotherapy is part of most cancer treatment programmes.

Charged hadrons offer distinct advantages. Though they are more challenging to manipulate in a clinical environment, protons and heavy ions deposit most of their energy just before they stop, at the so-called Bragg peak, allowing medical physicists to spare healthy tissue and target cancer cells precisely. Particle therapy has been an effective component of the most advanced cancer therapies for nearly 80 years, since it was proposed by Robert R Wilson in 1946.

With the incidence of cancer rising across the world, research into particle therapy is more valuable than ever to human wellbeing – and the science isn’t slowing down. Today, progress requires adapting accelerator physics to the demands of the burgeoning field of radiobiology. This is the scientific basis for developing and validating a whole new generation of treatment modalities, from FLASH therapy to combining particle therapy with immunotherapy.

Here are the top five facts accelerator physicists need to know about biology at the Bragg peak.

1. 100 keV/μm optimises damage to DNA

Repair shop

Almost every cell’s control centre is contained within its nucleus, which houses DNA – your body’s genetic instruction manual. If a cell’s DNA becomes compromised, the cell can mutate and lose control of its basic functions, causing it either to die or to multiply uncontrollably. The latter results in cancer.

For more than a century, radiation doses have been effective in halting the uncontrollable growth of cancerous cells. Today, the key insight from radiobiology is that for the same radiation dose, biological effects such as cell death, genetic instability and tissue toxicity differ significantly based on both beam parameters and the tissue being targeted.

Biologists have discovered that a “linear energy transfer” of roughly 100 keV/μm produces the most significant biological effect. At this density of ionisation, the distance between energy deposition events is roughly equal to the diameter of the DNA double helix, creating complex, repair-resistant DNA lesions that strongly reduce cell survival. Beyond 100 keV/μm, energy is wasted.

DNA is the main target of radiotherapy because it holds the genetic information essential for the cell’s survival and proliferation. Made up of a double helix that looks like a twisted ladder, DNA consists of two strands of nucleotides held together by hydrogen bonds. The sequence of these nucleotides forms the cell’s unique genetic code. A poorly repaired lesion on this ladder leaves a permanent mark on the genome.

When radiation induces a double-strand break, repair is primarily attempted through two pathways: either by rejoining the broken ends of the DNA, or by replacing the break with an identical copy of healthy DNA (see “Repair shop” image). The efficiency of these repairs decreases dramatically when the breaks occur in close spatial proximity or if they are chemically complex. Such scenarios frequently result in lethal mis-repair events or severe alterations in the genetic code, ultimately compromising cell survival.

This fundamental aspect of radiobiology strongly motivates the use of particle therapy over conventional radiotherapy. Whereas X-rays deliver less than 10 keV/μm, creating sparse ionisation events, protons deposit tens of keV/μm near the Bragg peak, and heavy ions 100 keV/μm or more.
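
To get a feel for why roughly 100 keV/μm is the sweet spot, a back-of-the-envelope estimate helps. The short Python sketch below is illustrative only and not part of the article: the ~2 nm helix diameter and ~30 eV per ionisation are rough textbook round numbers, and the three LET values plugged in are simply representative of the ranges quoted above.

```python
# Back-of-the-envelope sketch (illustrative assumptions, not from the article):
# energy deposited by a single particle track while crossing the ~2 nm width
# of the DNA double helix, for representative LET values.

DNA_DIAMETER_NM = 2.0      # approximate diameter of the DNA double helix (assumed)
EV_PER_IONISATION = 30.0   # rough energy needed to produce one ionisation (assumed)

examples = [
    ("X-rays (sparse ionisation)", 2.0),      # representative LET in keV/um
    ("protons near the Bragg peak", 30.0),
    ("heavy ions near the Bragg peak", 100.0),
]

for label, let_kev_per_um in examples:
    # 1 keV/um is numerically equal to 1 eV/nm, so the LET doubles as an eV/nm figure
    energy_across_helix_ev = let_kev_per_um * DNA_DIAMETER_NM
    ionisations = energy_across_helix_ev / EV_PER_IONISATION
    print(f"{label:31s}: ~{energy_across_helix_ev:4.0f} eV across 2 nm "
          f"(~{ionisations:4.1f} ionisations per traversal)")
```

On these rough numbers, a sparsely ionising X-ray secondary rarely deposits enough energy within the width of the helix to break both strands, whereas a 100 keV/μm track deposits around 200 eV in the same span – enough for several ionisations in one spot, i.e. the clustered, repair-resistant damage described above.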

2. Mitochondria and membranes matter too

For decades, radiobiology revolved around studying damage to DNA in cell nuclei. However, mounting evidence reveals that an important aspect of cellular dysfunction can be inflicted by damage to other components of cells, such as the cell membrane and the collection of “organelles” inside it. And the nucleus is not the only organelle containing DNA.

Self-destruct

Mitochondria generate energy and serve as the body’s cellular executioners. If a mitochondrion recognises that its cell’s DNA has been damaged, it may order the cell membrane to become permeable. Without the structure of the cell membrane, the cell breaks apart, its fragments carried away by immune cells. This is one mechanism behind “programmed cell death” – a controlled form of death, where the cell essentially presses its own self-destruct button (see “Self-destruct” image).

Irradiated mitochondrial DNA can suffer from strand breaks, base–pair mismatches and deletions in the code. In space-radiation studies, damage to mitochondrial DNA is a serious health concern as it can lead to mutations, premature ageing and even the creation of tumours. But programmed cell death can prevent a cancer cell from multiplying into a tumour. By disrupting the mitochondria of tumour cells, particle irradiation can compromise their energy metabolism and amplify cell death, increasing the permeability of the cell membrane and encouraging the tumour cell to self-destruct. Though a less common occurrence, membrane damage by irradiation can also directly lead to cell death.

3. Bystander cells exhibit their own radiation response

Communication

For many years, radiobiology was driven by a simple assumption: only cells directly hit by radiation would be damaged. This view started to change in the 1990s, when researchers noticed something unexpected: even cells that had not been irradiated showed signs of stress or injury when they were near the irradiated cells. This phenomenon, known as the bystander effect, revealed that irradiated cells can send biochemical signals to their neighbours, which may in turn respond as if they themselves had been exposed, potentially triggering an immune response (see “Communication” image).

“Non-targeted” effects propagate not only in space, but also in time, through the phenomenon of radiation-induced genomic instability. This temporal dimension is characterised by the delayed appearance of genomic alterations across multiple cell generations. Radiation damage propagates across cells and tissues, and over time, adding complexity beyond the simple dose–response paradigm.

Although the underlying mechanisms remain unclear, the clustered ionisation events produced by carbon ions generate complex DNA damage and cell death, while largely preserving nearby, unirradiated cells.

4. Radiation damage activates the immune system

Cancer cells multiply because the immune system fails to recognise them as a threat (see “Immune response” image). The modern pharmaceutical-based technique of immunotherapy seeks to alert the immune system to the threat posed by cancer cells it has ignored by chemically tagging them. Radiotherapy seeks to activate the immune system by inflicting recognisable cellular damage, but long courses of photon radiation can also weaken overall immunity.

Immune response

This negative effect is often caused by the exposure of circulating blood and active blood-producing organs to radiation doses. Fortunately, particle therapy’s ability to tightly conform the dose to the target and subject surrounding tissues to a minimal dose can significantly mitigate the reduction of immune blood cells, better preserving systemic immunity. By inflicting complex, clustered DNA lesions, heavy ions have the strongest potential to directly trigger programmed cell death, even in the most difficult-to-treat cancer cells, bypassing some of the molecular tricks that tumours use to survive, and amplifying the immune response beyond conventional radiotherapy with X-rays. Such clustered lesions, characteristic of high linear-energy-transfer (LET) radiation, trigger the DNA damage–repair signals strongly associated with immune activation.

These biological differences provide a strong rationale for the rapidly emerging research frontier of combining particle therapy with immunotherapy. Particle therapy’s key advantage is its ability to amplify immunogenic cell death, where the cell’s surface changes, creating “danger tags” that recruit immune cells to come and kill it, recognise others like it, and kill those too. This ability of particle therapy to mitigate systemic immunosuppression makes it a theoretically superior partner for immunotherapy compared to conventional X-rays.

5. Ultra-high dose rates protect healthy tissues

In recent years, the attention of clinicians and researchers has focused on the “FLASH” effect – a groundbreaking concept in cancer treatment where radiation is delivered at an ultra-high dose rate in excess of 40 J/kg/s. FLASH radiotherapy appears to minimise damage to healthy tissues while maintaining at least the same level of tumour control as conventional methods. Inflammation in healthy tissues is reduced, and the number of immune cells entering the tumour is increased, helping the body fight cancer more effectively. This can significantly widen the therapeutic window – the optimal range of radiation doses that can successfully treat a tumour while minimising toxicity to healthy tissues.
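
To put that dose rate in perspective, recall that 1 Gy is 1 J/kg, so 40 J/kg/s corresponds to 40 Gy/s. The short Python sketch below uses assumed round numbers – a representative 10 Gy dose and a conventional clinical dose rate of roughly 0.1 Gy/s (a few Gy per minute), neither figure taken from the article – to compare delivery times in the two regimes.

```python
# Illustrative arithmetic with assumed round numbers (not from the article):
# time needed to deliver a given dose at a conventional clinical dose rate
# versus a FLASH-level dose rate. Note that 1 Gy = 1 J/kg.

DOSE_GY = 10.0                # representative single-fraction dose (assumed)
CONVENTIONAL_GY_PER_S = 0.1   # roughly a few Gy per minute (assumed)
FLASH_GY_PER_S = 40.0         # lower bound quoted for the FLASH regime

t_conventional = DOSE_GY / CONVENTIONAL_GY_PER_S   # ~100 s
t_flash = DOSE_GY / FLASH_GY_PER_S                 # ~0.25 s

print(f"Conventional delivery: ~{t_conventional:.0f} s")
print(f"FLASH delivery:        ~{t_flash:.2f} s")
print(f"Compression factor:    ~{t_conventional / t_flash:.0f}x")
```

On these assumptions, the entire dose arrives in a quarter of a second rather than over a couple of minutes – the extreme compression in time that the oxygen-depletion hypothesis discussed below seeks to explain.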

Oxygen depletion

Though the radiobiological mechanisms behind this protective effect remain unclear, several hypotheses have been proposed. A leading theory focuses on oxygen depletion or “hypoxia”.

As tumours grow, they outpace the surrounding blood vessels’ ability to provide oxygen (see “Oxygen depletion” image). By condensing the dose into a very short time, FLASH therapy is thought to induce transient hypoxia within normal tissues too, reducing oxygen-dependent DNA damage there while killing tumour cells at the same rate. Through a similar mechanism, FLASH therapy may also preserve mitochondrial integrity and energy production in normal tissues.

It is still under investigation whether a FLASH effect occurs with carbon ions, but combining the biological benefits of high linear-energy-transfer radiation with those of FLASH could be very promising.

The future of particle therapy

PTCOG president Marco Durante describes an exciting future for the technology and shares his vision for closer international cooperation between medicine, academia and industry.

What excites you most about your research in 2025?

2025 has been a very exciting year. We just published a paper in Nature Physics about radioactive ion beams.

I also received an ERC Advanced Grant to study the FLASH effect with neon ions. We plan to go back to the 1970s, when Cornelius Tobias in Berkeley thought of using very heavy ions against radio-resistant tumours, but now using FLASH’s ultrahigh dose rates to reduce its toxicity to healthy tissues. Our group is also working on the simultaneous acceleration of different ions: carbon ions will stop in the tumour, but helium ions will cross the patient, providing an online monitor of the beam’s position during irradiation. The other big news in radiotherapy is vertical irradiation, where we don’t rotate the beam around the patient, but rotate the patient around the beam. This is particularly interesting for heavy-ion therapy, where building a rotating gantry that can irradiate the patient from multiple angles is almost as expensive as the whole accelerator. We are leading the Marie Curie UPLIFT training network on this topic.

Why are heavy ions so compelling?

Close to the Bragg peak, where very heavy ions are very densely ionising, the damage they cause is difficult to repair. You can kill the tumours much better than with protons. But carbon, oxygen and neon run the risk of inducing toxicity in healthy tissues. In Berkeley, more than 400 patients were treated with heavy ions. The results were not very good, and it was realised that these ions can be very toxic for normal tissue. The programme was stopped in 1992, and since then there has been no more heavy-ion therapy in the US, though carbon-ion therapy was established in Japan not long after. Today, most of the 130 particle-therapy centres worldwide use protons, but 17 centres across Asia and Europe offer carbon-ion therapy, with one now under construction at the Mayo Clinic in the US. Carbon is very convenient, because the plateau of the Bragg curve is similar to X-rays, while the peak is much more effective than protons. But still, there is evidence that it’s not heavy enough, that the charge is not high enough to get rid of very radio-resistant hypoxic tumours – tumours where you don’t have enough oxygenation. So that’s why we want to go heavier: neon. If we show that you can manage the toxicity using FLASH, then this is something that can be translated into the clinics.

There seems to be a lot of research into condensing the dose either in space, with microbeams, or in time, with the FLASH effect…

Absolutely.

Why does that spare healthy tissue at the expense of cancer cells?

That is a question I cannot answer. To be honest, nobody knows. We know that it works, but I want to make it very clear that we need more research to translate it completely to the clinic. It is true that if you either fractionate in space or compress in time, normal tissue is much more resistant, while the effect on the tumour is approximately the same, allowing you to increase the dose without harming the patient. The problem is that the data are still controversial.

So you would say that it is not yet scientifically established that the FLASH effect is real?

There is an overwhelming amount of evidence for the strong sparing of normal tissue at specific sites, especially for the skin and for the brain. But, for example, for gastrointestinal tumours the data is very controversial. Some data show no effect, some data show a protective effect, and some data show an increased effectiveness of FLASH. We cannot generalise.

Is it surprising that the effect depends on the tissue?

In medicine this is not so strange. The brain and the gut are completely different. In the gut, you have a lot of cells that are quickly duplicating, while in the brain, you almost have the same number of neurons that you had when you were a teenager – unfortunately, there is not much exchange in the brain.

So, your frontier at GSI is FLASH with neon ions. Would you argue that microbeams are equally promising?

Absolutely, yes, though millibeams more so than microbeams, because microbeams are extremely difficult to take into clinical translation. In the micron region, any kind of movement will jeopardise your spatial fractionation. But if you have millimetre spacing, then this becomes credible and feasible. You can create millibeams using a grid. Instead of having one solid beam, you have several stripes. If you use heavier ions, they don’t scatter very much and remain spatially fractionated. There is mounting evidence that fractionated irradiation of the tumour can elicit an immune response and that these immune cells eventually destroy the tumour. Research is still ongoing to understand whether it’s better to irradiate with a spatial fractionation of 1 millimetre or to irradiate only the centre of the tumour, allowing the immune cells to migrate and destroy the tumour.

Radioactive-ion therapy

What’s the biology of the body’s immune response to a tumour?

To become a tumour, a cell has to fool the immune system, otherwise our immune system will destroy it. So, we are desperately trying to find a way to teach the immune system to say: “look, this is not a friend – you have to kill it, you have to destroy it.” This is immunotherapy, the subject of the Nobel Prize in medicine in 2018 and also related to the 2025 Nobel Prize in medicine on regulation of the immune system. But these drugs don’t work for every tumour. Radiotherapy is very useful in this sense, because you kill a lot of cells, and when the immune system sees a lot of dead cells, it activates. A combination of immunotherapy and radiotherapy is now being used more and more in clinical trials.

You also mentioned radioactive ion beams and the simultaneous acceleration of carbon and helium ions. Why are these approaches advantageous?

The two big problems with particle therapy are cost and range uncertainty. Having energy deposition concentrated at the Bragg peak is very nice, but if it’s not in the right position, it can do a lot of damage. Precision is therefore much more important in particle therapy than in conventional radiotherapy, as X-rays don’t have a Bragg peak – even if the patient moves a little bit, or if there is an anatomical change, it doesn’t matter. That’s why many centres prefer X-rays. To change that, we are trying to create ways to see the beam while we irradiate. Radioactive ions decay while they deposit energy in the tumour, allowing you to see the beam using PET. With carbon and helium, you don’t see the carbon beam, but you see the helium beam. These are both ways to visualise the beam during irradiation.

How significantly does radiation therapy improve human well-being in the world today?

When I started to work in radiation therapy at Berkeley, many people were telling me: “Why do you waste your time in radiation therapy? In 10 years everything will be solved.” At that time, the trend was gene therapy. Other trends have come and gone, and after 35 years in this field, radiation therapy is still a very important tool in a multidisciplinary strategy for killing tumours. More than 50% of cancer patients need radiotherapy, but, even in Europe, it is not available to all patients who need it.

What are the most promising initiatives to increase access to radiotherapy in low- and middle-income countries?

Simply making the accelerators cheaper. The GDP of most countries in Africa, South America and Asia is also steadily increasing, so you can expect that – let’s say – in 20 or 30 years from now, there will be a big demand for advanced medical technologies in these countries, because they will have the money to afford it.

Is there a global shortage of radiation physicists?

Yes, absolutely. This is true not only for particle therapy, which requires a high number of specialists to maintain the machine, but also for conventional X-ray radiotherapy with electron linacs. It’s also true for diagnostics because you need a lot of medical physicists for CT, PET and MRI.

What is your advice to high-energy physicists who have just completed a PhD or a postdoc, and want to enter medical physics?

The next step is a specialisation course. In about four years, you will become a specialised medical physicist and can start to work in the clinics. Many who take that path continue to do research alongside their clinical work, so you don’t have to give up your research career, just reorient it toward medical applications.

How does PTCOG exert leadership over global research and development?

The Particle Therapy Co-Operative Group (PTCOG) is a very interesting association. Every particle-therapy centre is represented in its steering committee. We have two big roles. One is research, so we really promote international research in particle therapy, even with grants. The second is education. For example, Spain currently has 11 proton therapy centres under construction. Each will need maybe 10 physicists. PTCOG is promoting education in particle therapy to train the next generation of radiation-therapy technicians and medical oncologists. It’s a global organisation, representing science worldwide, across national and continental branches.

Do you have a message for our community of accelerator physicists and detector physicists? How can they make their research more interdisciplinary and improve the applications?

Accelerator physicists especially, but also detector physicists, have to learn to speak the language of the non-specialist. Sometimes they are lost in translation. Also, they have to be careful not to oversell what they are doing, because you can create expectations that are not matched by reality. Tabletop laser-driven accelerators are a very interesting research topic, but don’t oversell them as something that can go into the clinics tomorrow, because then you create frustration and disappointment. There is a similar situation with linear accelerators for particle therapy. Since I started to work in this field, people have been saying “Why do we use circular accelerators? We should use linear accelerators.” After 35 years, not a single linear accelerator has been used in the clinics. There must also be a good connection with industry, because eventually clinics buy from industry, not academia.

Are there missed opportunities in the way that fundamental physicists attempt to apply their research and make it practically useful with industry and medicine?

In my opinion, it should work the other way around. Don’t say “this is what I am good at”; ask the clinical environment, “what do you need?” In particle therapy, we want accelerators that are cheaper and with a smaller footprint. So in whatever research you do, you have to prove to me that the footprint is smaller, and the cost lower.

Cave M

Do forums exist where medical doctors can tell researchers what they need?

PTCOG is definitely the right place for that. We keep medicine, physics and biology together, and it’s one of the meetings with the highest industry participation. All the industries in particle therapy come to PTCOG. So that’s exactly the right forum where people should talk. We expect 1500 people at the next meeting, which will take place in Deauville, France, from 8 to 13 June 2026, shortly after IPAC.

Are accelerator physicists welcome to engage in PTCOG even if they’ve not previously worked on medical applications?

Absolutely. This is something that we are missing. Accelerator physicists mostly go to IPAC but not to PTCOG. They should also come to PTCOG to speak more with medical physicists. I would say that PTCOG is 50% medical physics, 30% medicine and 20% biology. So, there are a lot of medical physicists, but we don’t have enough accelerator physicists and detector physicists. We need more particle and nuclear physicists to come to PTCOG to see what the clinical and biology community want, and whether they can provide something.

Do you have a message for policymakers and funding agencies about how they can help push forward research in radiotherapy?

Unfortunately, radiation therapy and even surgery are wrongly perceived as old technologies. There is not much investment in them, and that is a big problem for us. What we miss is good investment at the level of cooperative programmes that develop particle therapy in a collaborative fashion. At the moment, it’s becoming increasingly difficult. All the money goes into prevention and pharmaceuticals for immunotherapy and targeted therapy, and this is something that we are trying to reverse.

Are large accelerator laboratories well placed to host cooperative research projects?

Both GSI and CERN face the same challenge: their primary mission is nuclear and particle physics. Technological transfer is fine, but they may jeopardise their funding if they stray too far from their primary goal. I believe they should invest more in technological transfer, lobbying their funding agencies to demonstrate that there is a translation of their basic science into something that is useful for public health.

How does your research in particle therapy transfer to astronaut safety?

Particle therapy and space-radiation research have a lot in common. They use the same tools and there are also a lot of overlapping topics, for example radiosensitivity. One patient is more sensitive, one patient is more resistant, and we want to understand what the difference is. The same is true of astronauts – and radiation is probably the main health risk for long-term missions. Space is also a hostile environment in terms of microgravity and isolation, but here we understand the risks, and we have countermeasures. For space radiation, the problem is that we don’t understand the risk very well, because the type of radiation is so exotic. We don’t have that type of radiation on Earth, so we don’t know exactly how big the risk is. Plus, we don’t have effective countermeasures, because the radiation is so energetic that shielding will not be enough to protect the crews effectively. We need more research to reduce the uncertainty on the risk, and most of this research is done in ground-based accelerators, not in space.

I understand that you’re even looking into cryogenics…

Hibernation is considered science fiction, but it’s not science fiction at all – it’s something we can recreate in the lab. We call it synthetic torpor. This can be induced in animals that are non-hibernating. Bears and squirrels hibernate; humans and rats don’t, but we can induce it. And when you go into hibernation, you become more radioresistant, providing a possible countermeasure to radiation exposure, especially for long missions. You don’t need much food, you don’t age very much, metabolic processes are slowed down, and you are protected from radiation. That’s for space. This could also be applied to therapy. Imagine you have a patient with multiple metastasis and no hope for treatment. If you can induce synthetic torpor, all the tumours will stop, because when you go into a low temperature and hibernation, the tumours don’t grow. This is not the solution, because when you wake the patient up, the tumours will grow again, but what you can do is treat the tumours while you are in hibernation, while healthy tissue is more radiation resistant. The number of research groups working on this is low, so we’re quite far from considering synthetic torpor for spaceflight or clinical trials for cancer treatment. First of all, we have to see how long we can keep an animal in synthetic torpor. Second, we should translate into bigger animals like pigs or even non-human primates.

In the best-case scenario, what can particle therapy look like in 10 years’ time?

Ideally, we should probably at least double the number of particle-therapy centres that are now available, and expand into new regions. We finally have a particle-therapy centre in Argentina, which is the first one in South America. I would like to see many more in South America and in Africa. I would also like to see more centres that try to tackle tumours where there is no treatment option, like glioblastoma or pancreatic cancer, where the mortality is the same as the incidence. If we can find ways to treat such cancers with heavy ions and give hope to these patients, this would be really useful.

Is there a final thought that you’d like to leave with readers?

Radiation therapy is probably the best interdisciplinary field that you can work in. It’s useful for society and it’s intellectually stimulating. I really hope that big centres like CERN and GSI commit more and more to the societal benefits of basic research. We need it now more than ever. We are living in a difficult global situation, and we have to prove that when we invest money in basic research, this is very well invested money. I’m very happy to be a scientist, because in science, there are no barriers, there is no border. Science is really, truly international. I’m an advocate of saying scientific collaboration should never stop. It didn’t even stop during the Cold War. At that time, the cooperation between East and West at the scientist level helped to reduce the risk of nuclear weapons. We should continue this. We don’t have to think that what is happening in the world should stop international cooperation in science: it eventually brings peace.

Polymath, humanitarian, gentleman

Herwig Schopper, Director-General of CERN from 1981 to 1988, passed away on 19 August at the age of 101.

Towards LEP and the LHC

Herwig Schopper was born on 28 February 1924 in the German-speaking town of Landskron (today, Lanškroun) in the then young country of Czechoslovakia. He enjoyed an idyllic childhood, holidaying at his grandparents’ hotel in Abbazia (today, Opatija) on what is now the Croatian Adriatic coast. It was there that his interest in science was awakened through listening in on conversations between physicists from Budapest and Belgrade. In Landskron, he developed an interest in music and sport, learning to play both piano and double bass, and skiing in the nearby mountains. He also learned to speak English, not merely to read Shakespeare as was the norm at the time, but to be able to converse, thanks to a Jewish teacher who had previously spent time in England. This skill was to prove transformational later in life.

The idyll began to crack in 1938 when the Sudetenland was annexed by Germany. War broke out the following year, but the immediate impact on Herwig was limited. He remained in Landskron until the end of his high-school education, graduating as a German citizen – and with no choice but to enlist. Joining the Luftwaffe signals corps, because he thought that would help him develop his knowledge of physics, he served for most of the war on the Eastern Front ensuring that communication lines remained open between military headquarters and the troops on the front lines. As the war drew to a close in March 1945, he was transferred west, just in time to see the Western Allies cross the Rhine at Remagen. Recalled to Berlin and given orders to head further west, Herwig instructed his driver to first make a short detour via Potsdam. It was a sign of the kind of person Herwig was that, amidst the chaos of the fall of Berlin, he wanted to see Schloss Sanssouci, Frederick the Great’s temple to the Enlightenment, while he had the chance.

Academic overture

By the time Herwig arrived in Schleswig–Holstein, the war was over, and he found himself a prisoner of the British. He later recalled, with palpable relief, that he had managed to negotiate the war without having to shoot at anyone. On discovering that Herwig spoke English, the British military administration engaged him as a translator. This came as a great consolation to Herwig since many of his compatriots were dispatched to the mines to extract the coal that would be used to reconstruct a shattered Germany. Herwig rapidly struck up a friendship with the English captain he was assigned to. This in turn eased his passage to the University of Hamburg, where he began his research career studying optics, and later enabled him to take the first of his scientific sabbaticals when travel restrictions on German academics were still in place (see “Academic overture” image).

In 1951, Herwig left for a year in Stockholm, where he worked with Lise Meitner on beta decay. He described this time as his first step up in energy from the eV-energies of visible light to the keV-energies of beta-decay electrons. A later sabbatical, starting in 1956, would see him in Cambridge, where he worked under Meitner’s nephew, Otto Frisch, in the Cavendish Laboratory. As Austrian Jews, both Meitner and Frisch had sought exile before the war. By this time, Frisch had become director of the Cavendish’s nuclear physics department and a fellow of the Royal Society.

Initial interactions

While at Cambridge, Herwig took his first steps in the emerging field of particle physics, and became one of the first to publish an experimental verification of Lee and Yang’s proposal that parity would be violated in weak interactions. His single-author paper was published soon after that of Chien-Shiung Wu and her team, leading to a lifelong friendship between the two (see “Virtuosi” image).

Following Wu’s experimental verification of parity violation, cited by Herwig in his paper, Lee and Yang received the Nobel Prize. Wu was denied the honour, ostensibly on the basis that she was one of a team and the prize can only be shared three ways. It remains in the realm of speculation whether Herwig would have shared the prize had his paper been the first to appear.

Virtuosi

A third sabbatical, arranged by Willibald Jentschke, who wanted Herwig to develop a user group for the newly established DESY laboratory, saw the Schopper family move to Ithaca, New York in 1960. At Cornell, Herwig learned the ropes of electron synchrotrons from Bob Wilson. He also learned a valuable lesson in the hands-on approach to leadership. Arriving in Ithaca on a Saturday, Herwig decided to look around the deserted lab. He found one person there, tidying up. It turned out not to be the janitor, but the lab’s founder and director, Wilson himself. For Herwig, Cornell represented another big jump in energy, cementing Schopper as an experimental particle physicist.

Herwig’s three sabbaticals gave him the skills he would later rely on in hardware development and physics analysis, but it was back in Germany that he honed his management skills and established himself as a skilled science administrator.

At the beginning of his career in Hamburg, Herwig worked under Rudolf Fleischmann, and when Fleischmann was offered a chair at Erlangen, Herwig followed. Among the research he carried out at Erlangen was an experiment to measure the helicity of gamma rays, a technique that he’d later deploy in Cambridge to measure parity violation.

Prélude

It was not long before Herwig was offered a chair himself, and in 1958, at the tender age of 34, he parted from his mentor to move to Mainz. In his brief tenure there, he set wheels in motion that would lead to the later establishment of the Mainz Microtron laboratory, today known as MAMI. By this time, however, Herwig was much in demand, and he soon moved to Karlsruhe, taking up a joint position between the university and the Kernforschungszentrum, KfK. His plan was to merge the two under a single management structure as the Karlsruhe Institute for Experimental Nuclear Physics. In doing so, he sowed the seeds for today’s Karlsruhe Institute of Technology, KIT.

Pioneering research

At Karlsruhe, Herwig established a user group for DESY, as Jentschke had hoped, and another at CERN. He also initiated a pioneering research programme into superconducting RF and had his first personal contacts with CERN, spending a year there in 1964. In typical Herwig fashion, he pursued his own agenda, developing a device he called a sampling total absorption counter, STAC, to measure neutron energies. At the time, few saw the need for such a device, but this form of calorimetry is now an indispensable part of any experimental particle physicist’s armoury.

In 1970, Herwig again took leave of absence from Karlsruhe to go to CERN. He’d been offered the position of head of the laboratory’s Nuclear Physics Division, but his stay was to be short-lived (see “Prélude” image). The following year, Jentschke took up the position of Director-General of CERN alongside John Adams. Jentschke was to run the original CERN laboratory, Lab I, while Adams ran the new CERN Lab II, tasked with building the SPS. This left a vacancy at Germany’s national laboratory, and the job was offered to Herwig. It was too good an offer to refuse.

As chair of the DESY directorate, Herwig witnessed from afar the discovery of both charm and bottom quarks in the US. Although DESY missed out on the discoveries, its machines were perfect laboratories to study the spectroscopy of these new quark families, and went on to provide definitive measurements. Herwig also oversaw DESY’s development in synchrotron light science, repurposing the DORIS accelerator as a light source when its physics career was complete and it was succeeded by PETRA.

Architects of LEP

The ambition of the PETRA project put DESY firmly on course to becoming an international laboratory, setting the scene for the later HERA model. PETRA experiments went on to discover the gluon in 1979.

The following year, Herwig was named as CERN’s next Director-General, taking up office on 1 January 1981. By this time, the CERN Council had decided to call time on its experiment with two parallel laboratories, leaving Herwig with the task of uniting Lab I and Lab II. The Council was also considering plans to build the world’s most powerful accelerator, the Large Electron–Positron collider, LEP.

It fell to Herwig both to implement a new management structure for CERN and to see the LEP proposal through to approval (see “Architects of LEP” image). Unpopular decisions were inevitable, making the early years of Herwig’s mandate somewhat difficult. In order to get LEP approved, he had to make sacrifices. As a result, the Intersecting Storage Rings (ISR), the world’s first hadron collider, collided its final beams in 1984 and cuts had to be made across the research programme. Herwig was also confronted with a period of austerity in science funding, and found himself obliged to commit CERN to constant funding in real terms throughout the construction of LEP, and as it turns out, in perpetuity.

Herwig’s battles were not only with the lab’s governing body; he also went against the opinions of some of his scientific colleagues concerning the size of the new accelerator. True to form, Herwig stuck with his instinct, insisting that the LEP tunnel should be 27 km around, rather than the more modest 22 km that would have satisfied the immediate research goals while avoiding the difficult geology beneath the Jura mountains. Herwig, however, was looking further ahead – to the hadron collider that would follow LEP. His obstinacy was fully vindicated with the discovery of the Higgs boson in 2012, confirming the Brout–Englert–Higgs mechanism, which had been proposed almost 50 years earlier. This discovery earned the Nobel Prize for Peter Higgs and François Englert in 2013 (see “Towards LEP and the LHC” image).

The CERN blueprint

Difficult though some of his decisions may have been, there is no doubt that Herwig’s 1981 to 1988 mandate established the blueprint for CERN to this day. The end of operations of the ISR may have been unpopular, and we’ll never know what it may have gone on to achieve, but the world’s second hadron collider at the SPS delivered CERN’s first Nobel prize during Herwig’s mandate, awarded to Carlo Rubbia and Simon van der Meer in 1984 for the discovery of W and Z bosons.

Herwig turned 65 two months after stepping down as CERN Director-General, but retirement was never on his mind. In the years that followed, he carried out numerous roles for UNESCO, applying his diplomacy and foresight to new areas of science. UNESCO was in many ways a natural step for Herwig, whose diplomatic skills had been honed by the steady stream of high-profile visitors to CERN during his mandate as Director-General. At one point, he engineered a meeting at UNESCO between Jim Cronin, who was lobbying for the establishment of a cosmic-ray observatory in Argentina, and the country’s president, Carlos Menem. The following day, Menem announced the start of construction of the Pierre Auger Observatory. On another occasion, Herwig was tasked with developing the Soviet gift to Cuba of a small particle accelerator into a working laboratory. That initiative would ultimately come to nothing, but it helped Herwig prepare the groundwork for perhaps his greatest post-retirement achievement: SESAME, a light-source laboratory in Jordan that operates as an intergovernmental organisation following the CERN model (see “Science diplomacy” image). Mastering the political challenge of establishing an organisation that brings together countries from across the Middle East – including long-standing rivals – required a skill set that few possess.

Science diplomacy

Although the roots of SESAME can be traced to a much earlier date, by the end of the 20th century, when the idea was sufficiently mature for an interim organisation to be established, Herwig was the natural candidate to lead the new organisation through its formative years. His experience of running international science coupled with his post-retirement roles at UNESCO made him the obvious choice to steer SESAME from idea to reality. It was Herwig who modelled SESAME’s governing document on the CERN convention, and it was Herwig who secured the site in Jordan for the laboratory. Today, SESAME is producing world-class research – a shining example of what can be achieved when people set aside their differences and focus on what they have in common.

Establishing an organisation that brings together countries from across the Middle East required a skill set few possess

Herwig never stopped working for what he believed in. When CERN’s current Director-General convened a meeting with past Directors-General in 2024, along with the president of the CERN Council, Herwig was present. When initiatives were launched to establish an international research centre in the Balkans, Herwig stepped up to the task. He never lost his sense of what is right, and he never lost his mischievous sense of humour. Following an interview at his house in 2024 for the film The Peace Particle, the interviewer asked whether he still played the piano. Herwig stood up, walked to the piano and started to play a very simple arrangement of Christian Sinding’s “Rustle of Spring”. Just as curious glances started to be exchanged, he transitioned, with a twinkle in his eye, to a beautifully nuanced rendition of Liszt’s “Liebestraum No. 3”.

Herwig Schopper was a rare combination of genius, polymath, humanitarian and gentleman. Always humble, he could make decisions with nerves of steel when required. His legacy spans decades and disciplines, and has shaped the field of particle physics in many ways. With his passing, the world has lost a truly remarkable individual. He will be sorely missed.

Alchemy by pure light https://cerncourier.com/a/alchemy-by-pure-light/ Fri, 07 Nov 2025 12:36:30 +0000 https://cerncourier.com/?p=114836 In lead collisions at the LHC, some of the strongest electromagnetic fields in the universe bombard the inside of the beam pipe with radioactive gold.

New results in fundamental physics can be a long time coming. Experimental discoveries of elementary particles have often occurred only decades after their prediction by theory.

Still, the discovery of the fundamental particles of the Standard Model has been speedy in comparison to another longstanding quest in natural philosophy: chrysopoeia, the medieval alchemists’ dream of transforming the “base metal” lead into the precious metal gold. This may have been motivated by the observation that the dull grey, relatively abundant metal lead is of similar density to gold, which has been coveted for its beautiful colour and rarity for millennia.

The quest goes back at least to the mythical, or mystical, notion of the philosopher’s stone and Zosimos of Panopolis around 300 CE. Its evolution, in various cultures, through medieval times and up to the 19th century, is a fascinating thread in the emergence of modern empirical science from earlier ways of thinking. Some of the leaders of this transition, such as Isaac Newton, also practised alchemy. While the alchemists pioneered many of the techniques of modern chemistry, it was only much later that it became clear that lead and gold are distinct chemical elements and that chemical methods are powerless to transmute one into the other.

With the dawn of nuclear physics in the 20th century, it was discovered that elements could transform into others through nuclear reactions, either naturally by radioactive decay or in the laboratory. In 1940, gold was produced at the Harvard Cyclotron by bombarding a mercury target with fast neutrons. Some 40 years ago, tiny amounts of gold were produced in nuclear reactions of carbon and neon beams with a bismuth target at the Bevalac in Berkeley. Very recently, gold isotopes were produced at the ISOLDE facility at CERN by bombarding a uranium target with proton beams (see “Historic gold” images).

Historic gold

Now, tucked away discreetly in the conclusions of a paper recently published by the ALICE collaboration, one can find the observation, originating from Igor Pshenichnov, Uliana Dmitrieva and Chiara Oppedisano, that “the transmutation of lead into gold is the dream of medieval alchemists which comes true at the LHC.”

ALICE has finally measured the transmutation of lead into gold, not via the crucibles and alembics of the alchemists, nor even by the established techniques of nuclear bombardment used in the experiments mentioned above, but in a novel and interesting way that has become possible in “near-miss” interactions of lead nuclei at the LHC.

At the LHC, lead has been transformed into gold by light.   

Since the first announcement, this story has attracted considerable attention in the media. Here I would like to put this assertion in scientific context and indicate its relevance in testing our understanding of processes that can limit the performance of the LHC and future colliders such as the FCC.

Electromagnetic pancakes

Any charged particle at rest is surrounded by lines of electric fields radiating outwards in all directions. These fields are particularly strong close to a lead nucleus because it contains 82 protons, each with one elementary charge. In the LHC, the lead nuclei travel at 99.999994% of the speed of light, squeezing the field lines into a thin pancake transverse to the direction of motion in the laboratory frame of reference. This compression is so strong that, in the vicinity of the nucleus, we find the strongest magnetic and electric fields known in the universe: trillions of times stronger than even the prodigiously powerful superconducting magnets of the LHC, orders of magnitude beyond the Schwinger limit at which the vacuum itself polarises, and orders of magnitude greater than the magnetic fields of the rare, rapidly spinning neutron stars called magnetars. Of course, these fields persist only for the very short time during which one nucleus passes the other. Quantum mechanics, via a famous insight of Fermi, Weizsäcker and Williams, tells us that this electromagnetic flash is equivalent to a pulse of quasi-real photons whose intensity and energy are greatly boosted by the large charge and the relativistic compression.
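As a rough numerical cross-check of the boost described above, the quoted beam speed translates into a Lorentz factor of about 2900 – the factor by which the Coulomb field of the nucleus is flattened into its transverse pancake. A minimal back-of-envelope sketch (illustrative only, not part of the ALICE analysis):

```python
# Back-of-envelope estimate of the Lorentz factor of LHC lead beams,
# using only the beam speed quoted in the text.
import math

beta = 0.99999994                      # fraction of the speed of light
gamma = 1.0 / math.sqrt(1.0 - beta**2)

print(f"Lorentz factor: {gamma:.0f}")  # ~2900
# The peak transverse electric field of the moving nucleus is enhanced by
# roughly this same factor relative to the field of a nucleus at rest.
```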

When two beams of nuclei are brought into collision in the LHC, some hadronic interactions occur. In the unimaginable temperatures and densities of this ultimate crucible we create droplets of the quark–gluon plasma, the main subject of study of the heavy-ion programme. However, when nuclei “just miss” each other, the interactions of these electromagnetic fields amount to photon–photon and photon–nucleus collisions. Some of the processes occurring in these so-called ultra-peripheral collisions (UPCs) are so strong that they would limit the performance of the collider, were it not for special measures implemented in the last 10 years.

Spotting spectators

The ALICE paper is one among many exploring the rich field of fundamental physics studies opened up by UPCs at the LHC (CERN Courier January/February 2025 p31). Among them are electromagnetic dissociation processes where a photon interacting with a nucleus can excite oscillations of its internal structure and result in the ejection of small numbers of neutrons and protons that are detected by ALICE’s zero degree calorimeters (ZDCs). The ALICE experiment is unique in having calorimeters to detect spectator protons as well as neutrons (see “Spotting spectators” figure). The residual nuclei are not detected although they contribute to the signals measured by the beam-loss monitor system of the LHC.

Each 208Pb nucleus in the LHC beams contains 82 protons and 208–82 = 126 neutrons. To create gold, whose nuclei have a charge of 79, three protons must be removed, together with a variable number of neutrons.

Alchemy in ALICE

While gold is produced less frequently than the elements thallium (single-proton emission) or mercury (two-proton emission), the results of the ALICE paper show that each of the two colliding lead-ion beams contributes a cross section of 6.8 ± 2.2 barns to gold production, implying that the LHC now produces gold at a maximum rate of about 89 kHz from lead–lead collisions at the ALICE collision point, or 280 kHz from all the LHC experiments combined. During Run 2 of the LHC (2015–2018), about 86 billion gold nuclei were created at all four LHC experiments, but in terms of mass this was only a tiny 2.9 × 10⁻¹¹ g of gold. Almost twice as much has already been produced in Run 3 (since 2023).
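The headline numbers above follow from simple arithmetic. In the sketch below, the Pb–Pb luminosity is an assumed, illustrative value (chosen to be consistent with the rates quoted in the text) and the average gold mass number is approximated as 200; neither figure is taken from the ALICE paper.

```python
# Illustrative arithmetic behind the quoted gold-production rate and mass.
barn = 1e-24                        # cm^2
sigma_gold = 2 * 6.8 * barn         # both beams combined at one collision point
lumi = 6.5e27                       # cm^-2 s^-1, assumed peak Pb-Pb luminosity

rate = sigma_gold * lumi
print(f"Gold production rate at ALICE: ~{rate/1e3:.0f} kHz")   # ~88 kHz

# Mass of the ~86 billion gold nuclei quoted for Run 2, assuming an
# average mass number of ~200 for the (mostly unstable) isotopes produced.
amu_g = 1.66e-24                    # grams per atomic mass unit
mass_g = 86e9 * 200 * amu_g
print(f"Total Run 2 gold mass: ~{mass_g:.1e} g")               # ~2.9e-11 g
```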

The transmutation of lead into gold is the dream of medieval alchemists which comes true at the LHC

Strikingly, this gold-production rate is somewhat larger than the rate of hadronic nuclear collisions, which occur at about 50 kHz for a total cross section of 7.67 ± 0.25 barns.

Different isotopes of gold are created according to the number of neutrons that are emitted at the same time as the three protons. To create 197Au, the only stable isotope and hence the sole constituent of natural gold, a further eight neutrons must be removed – a very unlikely process. Most of the gold produced is in the form of unstable isotopes with lifetimes of the order of a minute.

Although the ZDC signals confirm the proton and neutron emission, the transformed nuclei are not themselves detected by ALICE and their fate is not discussed in the paper. These interaction products nevertheless propagate hundreds of metres through the beampipe in several secondary beams whose trajectories can be calculated, as seen in the “Ultraperipheral products” figure.

Ultraperipheral products

The ordinate shows horizontal displacement from the central path of the outgoing beam. This coordinate system is commonly used in accelerator physics as it suppresses the bending of the central trajectory – downwards in the figure – and its separation into the beam pipes of the LHC arcs.   

The “5σ” envelope of the intense main beam of 208Pb nuclei that did not collide is shown in blue. Neutrons from electromagnetic dissociation and other processes are plotted in magenta. They begin with a certain divergence and then travel down the LHC beam pipe in straight lines, forming a cone, until they are detected by the ALICE ZDC, some 114 m away from the collision, after the place where the beam pipe splits in two. Because of the coordinate system, the neutron cone appears to bend sharply at the first separation dipole magnet.

Protons are shown in green. As they only have 40% of the magnetic rigidity of the main beam, they bend quickly away from the central trajectory in the first separation magnet, before being detected by a different part of the ZDC on the other side of the beam pipe.
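The quoted 40% follows from simple bookkeeping: a spectator proton carries roughly 1/208 of the ion's momentum but only 1/82 of its charge, so the ratio of magnetic rigidities (Bρ = p/q) is approximately

```latex
\[
\frac{(B\rho)_{p}}{(B\rho)_{\mathrm{Pb}}}
  \simeq \frac{p_{\mathrm{Pb}}/208}{e}\,\bigg/\,\frac{p_{\mathrm{Pb}}}{82\,e}
  = \frac{82}{208} \approx 0.39 .
\]
```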

Photon–photon interactions in UPCs copiously produce electron–positron pairs. In a small fraction of them, corresponding nevertheless to a large cross-section of about 280 barns, the electron is created in a bound state of one of the 208Pb nuclei, generating a secondary beam of 208Pb81+ single-electron ions. The beam from this so-called bound-free pair production (BFPP), shown in red, carries a power of about 150 W – enough to quench the superconducting coils of the LHC magnets, causing them to transition from the superconducting to the normal resistive state. Such quenches can seriously disrupt accelerator operation, as the stored magnetic energy is rapidly released as heat within the affected magnet.
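The ~150 W figure can be recovered with a short order-of-magnitude estimate. The luminosity assumed below is illustrative (roughly six times the original design value mentioned later in the text), and the beam energy corresponds to a 6.8 TeV proton-equivalent machine; neither number is quoted in this article.

```python
# Order-of-magnitude estimate of the power carried by the BFPP secondary beam.
barn = 1e-24                          # cm^2
sigma_bfpp = 280 * barn               # cross-section quoted in the text
lumi = 6e27                           # cm^-2 s^-1, assumed Pb-Pb luminosity
rate = sigma_bfpp * lumi              # 208Pb81+ ions produced per second

e_per_nucleon_TeV = 6.8 * 82 / 208    # assumed beam energy per nucleon
ion_energy_J = e_per_nucleon_TeV * 208 * 1e12 * 1.602e-19

print(f"BFPP rate:  ~{rate:.1e} ions/s")            # ~1.7e6 per second
print(f"Beam power: ~{rate * ion_energy_J:.0f} W")  # ~150 W
```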

To prevent this, new “TCLD” collimators were installed on either side of ALICE during the second long shutdown of the LHC. Together with a variable-amplitude bump in the beam orbit, which pulls the BFPP beam away from the first impact point so that it can be safely absorbed on the TCLD, this allowed the luminosity to be increased to more than six times the original LHC design, just in time to exploit the full capacity of the upgraded ALICE detector in Run 3.

Light-ion collider

A first at the LHC

Besides lead, the LHC has recently collided beams of 16O and 20Ne (see “First oxygen and neon collisions at the LHC”), and nuclear transmutation has manifested itself in another way. In hadronic or electromagnetic events where equal numbers of protons and neutrons are emitted, the outgoing nucleus has almost the same charge-to-mass ratio, since differences in nuclear binding energy are tiny compared with the mass of the nucleus. It may then continue to circulate with the original beam, resulting in a small contamination that increases during the several hours of an LHC fill. Hybrid collisions can then occur, for example including a 14N nucleus formed by the ejection of a proton and a neutron from 16O. Fortunately, the momentum spread introduced by the interactions puts many of these nuclei outside the acceptance of the radio-frequency cavities that keep the beams bunched as they circulate around the ring, so the effect is smaller than had first been expected.
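To see why such fragments stay with the beam, compare the charge-to-mass ratios in the 16O example above:

```latex
\[
\frac{Z}{A}\!\left({}^{16}\mathrm{O}^{8+}\right) = \frac{8}{16} = 0.500,
\qquad
\frac{Z}{A}\!\left({}^{14}\mathrm{N}^{7+}\right) = \frac{7}{14} = 0.500,
\]
```

with differences in nuclear binding energy shifting these values only at the sub-per-mille level.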

The most powerful beam from an electromagnetic-dissociation process is 207Pb from single neutron emission, plotted in green. It has comparable intensity to 208Pb81+ but propagates through the LHC arc to the collimation system at Point 3.

Similar electromagnetic-dissociation processes occur elsewhere, notably in beam interactions with the LHC collimation system. The recent ALICE paper, together with earlier ones on neutron emissions in UPCs, helps to test our understanding of the nuclear interactions that are an essential ingredient of complex beam-physics simulations. These are used to understand and control beam losses that might otherwise provoke frequent magnet quenches or beam dumps. At the LHC, a deep symbiosis has emerged between the fundamental nuclear physics studied by the experiments and the accelerator physics limiting its performance as a heavy-ion collider – or even as a light-ion collider (see “Light-ion collider” panel).

The figure also shows, in gold, the beams of the three heaviest gold isotopes. 204Au has an impact point in a dipole magnet but is far too weak to quench it. 203Au follows almost the same trajectory as the BFPP beam. 202Au propagates through the arc to Point 3. The extremely weak flux of 197Au, the only stable isotope of gold, is also shown.

Worth its weight in gold

Prospecting for gold at the LHC looks even more futile when we consider that the gold nuclei emerge from the collision point with very high energies. They hit the LHC beam pipe or collimators at various points downstream where they immediately fragment in hadronic showers of single protons, neutrons and other particles. The gold exists for tens of milliseconds at most.

And finally, the isotopically pure lead used in CERN’s ion source costs more by weight than gold, so realising the alchemists’ dream at the LHC was a poor business plan from the outset.

The moral of this story, perhaps, is that among modern-day natural philosophers, LHC physicists take issue with the designation of lead as a “base” metal. We find, on the contrary, that 208Pb, the heaviest stable isotope among all the elements, is worth far more than its weight in gold for the riches of the physics discoveries that it has led us to.

The physicist who fought war and cancer https://cerncourier.com/a/the-physicist-who-fought-war-and-cancer/ Fri, 07 Nov 2025 12:34:03 +0000 https://cerncourier.com/?p=114858 Subatomic physics has shaped both the conduct of war and the treatment of cancer. Joseph Rotblat, who left the Manhattan Project on moral grounds and later advanced radiotherapy, embodies this dual legacy.

The courage of his convictions

Joseph Rotblat’s childhood was blighted by the destruction visited on Warsaw, first by the Tsarist Army, followed by the Central Powers and completed by the Red Army from 1918 to 1920. His father’s successful paper-importing business went bankrupt in 1914, and the family became destitute. After a short course in electrical engineering, Joseph and a teenaged friend became jobbing electricians. A committed autodidact, Rotblat found his way into the Free University, where he studied physics under Ludwik Wertenstein. Wertenstein had worked with Marie Skłodowska-Curie in Paris and was the chief of the Radiological Institute in Warsaw as well as teaching at the Free University. He was the first to recognise Rotblat’s brilliance and retained him as a researcher at the Institute. Rotblat’s main research was neutron-induced artificial radioactivity: he was among the first to produce cobalt-60, which became a standard source in radiotherapy machines before reliable linear accelerators were available.

Chadwick described Rotblat as “very intelligent and very quick”

By the late 1930s, Rotblat had published more than a dozen papers, some in English journals after translation by Wertenstein; the name Rotblat was becoming known in neutron physics. The professor regarded him as the likely next head of the Radiological Institute and thought he should prepare by working outside Poland. Rotblat wanted to gain experience of the cyclotron and, although he could have joined the Joliot–Curie group in Paris, elected to go to Liverpool where James Chadwick was overseeing a machine expected to produce a proton beam within months. He arrived in Liverpool in April 1939 and was shocked by the city’s filth. He also found the scouse dialect of its citizens incomprehensible. Despite the trying circumstances, Rotblat soon impressed Chadwick with his experimental skill and was rewarded with a prestigious fellowship. Chadwick wrote to Wertenstein in June describing Rotblat as “very intelligent and very quick”.

Brimming with enthusiasm

Chadwick had formed a long-distance friendship with Ernest Lawrence, the cyclotron’s inventor, who kept him apprised of developments in Berkeley. At the time of Rotblat’s arrival, Lawrence was brimming with enthusiasm about the potential of neutrons and radioactive isotopes from cyclotrons for medical research, especially in cancer treatment. Chadwick hired Bernard Kinsey, a Cambridge graduate who spent three years with Lawrence, to take charge of the Liverpool cyclotron, and he befriended Rotblat. Liverpool had limited funding: Chadwick complained to Lawrence that the money “this laboratory has been running on in the past few years – is less than some men spend on tobacco.” Chadwick served on a Cancer Commission in Liverpool under the leadership of Lord Derby, which planned to bring cancer research to the Liverpool Radium Institute using products from the cyclotron.

James Chadwick

The small stipend from the Oliver Lodge fellowship encouraged Rotblat to return to Warsaw in August 1939 to collect his wife, Tola, and bring her to England. She was recovering from acute appendicitis; her doctors persuaded Joseph that she was not fit to travel. So he returned alone on the last train allowed to pass through Berlin before the Germans attacked Poland once more. Tola wrote her last letter to Joseph in December 1939. While he was in Warsaw, Rotblat confided in Wertenstein about his belief that a uranium fission bomb was feasible using fast neutrons, and he repeated this argument to Chadwick when he returned to Liverpool. Chadwick eventually became the leader of the British contingent on the Manhattan Project and arranged for Rotblat to come to Los Alamos in 1944 while remaining a Polish citizen. Rotblat worked in Robert Wilson’s cyclotron group and survived a significant radiation accident, receiving an estimated dose of 1.5 J/kg to his upper torso and head. The circumstances of his leaving the project in December 1944 were far more complicated than the moralistic account he wrote in The Bulletin of the Atomic Scientists 40 years later, but no less noble.

Tragedy and triumph

As Chadwick wrote to Rotblat in London, he saw “very obvious advantages” for the future of nuclear physics in Britain from Rotblat’s return to Liverpool. For one thing, “Rotblat has a wider experience on the cyclotron than anyone now in England,” and he also possessed “a mass of information on the equipment used in Project Y [Los Alamos] and Chicago.” Chadwick had two major roles in mind for Rotblat. One was to revitalise the depleted Liverpool department and to stimulate cyclotron research in England; and the second to collate the detailed data on nuclear physics brought by British scientists returning from the Manhattan Project. In 1945, Rotblat discovered that six members of his family had miraculously survived the war in Poland, but tragically not Tola. His state of despair deepened after the news of the atomic bombs being used against Japan: he knew about the possibility of a hydrogen bomb, and remembered conversations with Niels Bohr in Los Alamos about the risks of a nuclear arms race. He made two resolutions: to campaign against nuclear weapons and to leave academic nuclear physics and become a medical physicist to use his scientific knowledge for the direct benefit of people.

Joseph Rotblat
Robert Wilson

When Chadwick returned to Liverpool from the US, he found the department in a much better state than he expected. The credit for this belonged largely to Rotblat’s leadership; Chadwick wrote to Lawrence praising his outstanding ability, combined with a truly remarkable concern for the staff and students. Chadwick and Rotblat then agreed to build a synchrocyclotron in Liverpool. Rotblat selected the abandoned crypt of an unbuilt Catholic cathedral as the best site, since the local topography would provide some radiation protection. The post-war shortages, especially of steel, made this an extremely ambitious project. Rotblat presented a successful application to the Department of Scientific and Industrial Research for the largest university grant, and despite design and construction problems resulting in spiralling costs, the machine was in active research use from 1954 to 1968.

With the encouragement of physicians at Liverpool Royal Infirmary, Rotblat started to dabble in nuclear medicine to image thyroid glands and treat haematological disorders. In 1949 he saw an advert for the chair in physics at the Medical College of St. Bartholomew’s Hospital (Bart’s) in London and applied. While Rotblat was easily the most accomplished candidate, there was a long delay in his appointment, on spurious grounds – that he was over-qualified to teach physics to medical students and likely to be a heavy consumer of research funds – as well as plain xenophobia. Bart’s was a closed, reactionary institution. There was a clear division between the Medical College, with its links to London University, and the hospital, where the post-war teaching was suboptimal as it struggled to recover from the war and adjusted reluctantly to the new National Health Service (NHS). The Medical College, in Charterhouse Square, was severely bombed in the Blitz and the physics department completely destroyed. Rotblat attempted to thwart his main opponent, the dean (described as “secretive and manipulative” in one history), by visiting the hospital and meeting senior clinicians and governors. There was also a determined effort, orchestrated by Chadwick, to retain him in the ranks of nuclear physicists.

When I interviewed Rotblat in 1994, he told me that Chadwick’s final tactic was to tell him that he was close to being elected as a fellow of the Royal Society, but if he took the position at Bart’s, it would never happen. Rotblat poignantly observed: “He was right.” I mentioned this to Lorna Arnold, the nuclear historian, who thought it was a shame. She said she would take it up with her friend Rudolf Peierls. Despite being in poor health, Peierls vowed to correct this omission, and the next year the Royal Society elected Rotblat a fellow at the age of 86.

Full-time medical physicist

Rotblat’s first task at Bart’s, when he finally arrived in 1950, was to prepare a five-year departmental plan: a task he was well-qualified for after his experience with the synchrocyclotron in Liverpool. With wealthy, centuries-old hospitals such as Bart’s allowed to keep their endowments after the advent of the NHS, he also became an active committee member for the new Research Endowment Fund that provided internal grants and hired research assistants. The physics department soon collaborated with the biochemistry, pharmacology and physiology departments that required radioisotopes for research. He persuaded the Medical College to buy a 15 MV linear accelerator from Mullard, an English electronics company, which never worked for long without problems.

Rotblat resolved to campaign against nuclear weapons and use his scientific knowledge for the direct benefit of people

During his first two years, in addition to the radioisotope work, he studied the passage of electrons through biological tissue and the energy dissipation of neutrons in tissue – the 1950s were a golden age for radiobiology in England, and Rotblat forged close relationships with Hal Gray and his group at the Hammersmith Hospital. In the mid-1950s, he was approached by Patricia Lindop, a newly qualified Bart’s physician who had also obtained a first-class degree in physiology. Lindop had a five-year grant from the Nuffield Foundation to study ageing and, after discussions with Rotblat, it was soon arranged that she would study the acute and long-term effects of radiation in mice at different ages. This was a massive, prospective study that would eventually involve six research assistants and a colony of 30,000 mice. Rotblat acted as the supervisor for her PhD, and they published multiple papers together. In terms of acute death (within 30 days of a high, whole-body dose), she found that mice that were one-day old at exposure could tolerate the highest doses, whereas four-week-old mice were the most vulnerable. The interpretation of long-term effects was much less clearcut and provoked major disagreements within the radiobiology community. In a 1994 letter, Rotblat mused on the number of Manhattan Project scientists still alive: “According to my own studies on the effects of radiation on lifespan, I should have been dead a long time, having received a sub-lethal dose in Los Alamos. But here I am, advocating the closure of Los Alamos, Livermore and Sandia, instead of promoting them as health resorts!”

Patricia Lindop

In 1954, the US Bravo test obliterated Bikini Atoll and showered a Japanese fishing boat (Lucky Dragon No. 5), which was outside the exclusion zone in the South Pacific, with radioactive dust. American scientists realised that the weapon massively exceeded its designed yield, and there was an unconvincing attempt to allay public fear. Rotblat was invited onto BBC’s flagship current-affairs programme, Panorama, to explain to the public the difference between the original fission bombs and the H-bomb. His lucid delivery impressed Bertrand Russell, a mathematical philosopher and a leading pacifist in World War I, who also spoke on Panorama. The two became close friends. When Rotblat went to a radiobiology conference a few months later, he met a Japanese scientist who had analysed the dust recovered from Lucky Dragon No. 5. The dust comprised about 60% rare-earth isotopes, leading Rotblat to believe that most of the explosive energy was due to fission, not fusion. He wrote his own report, not based on any inside knowledge and despite official opposition, concluding this was a fission–fusion–fission bomb and that his TV presentation had underestimated its power by orders of magnitude. Rotblat’s report became public just as the British Cabinet decided in secret to develop thermonuclear weapons. The government was concerned that the Americans would view this as another breach of security by an ex-Manhattan Project physicist. Rotblat’s reputation as a man of the political left grew within the conservative institution of Bart’s.

Russell made a radio broadcast at the end of 1954 on the global existential threat posed by thermonuclear weapons, urging the public to “remember your humanity and forget the rest”. Six months later, Russell announced the Russell–Einstein Manifesto with Rotblat as one of the signatories, relied upon by Russell to answer questions from the press. The first Pugwash conference followed in 1957 with Rotblat as a prominent contributor. His active involvement, closely supported by Lindop, would last for the rest of his life, as he encouraged communication across the East–West divide and pushed for international arms control agreements. Much of this work took place in his office at Bart’s. Rotblat and the Pugwash conferences went on to share the 1995 Nobel Peace Prize.

JUNO takes aim at neutrino-mass hierarchy https://cerncourier.com/a/juno-takes-aim-at-neutrino-mass-hierarchy/ Fri, 07 Nov 2025 12:31:51 +0000 https://cerncourier.com/?p=114732 The Jiangmen Underground Neutrino Observatory in Guangdong Province, China, began data taking on 26 August.

Compared to the quark sector, the lepton sector is the Wild West of the weak interaction, with large mixing angles and large uncertainties. To tame this wildness, neutrino physicists are set to bring a new generation of detectors online in the next five years, each roughly an order of magnitude larger than its predecessor. The first of these to become operational is the Jiangmen Underground Neutrino Observatory (JUNO) in Guangdong Province, China, which began data taking on 26 August. The new 20 kton liquid-scintillator detector will seek to resolve one of the major open questions in particle physics: whether the third neutrino-mass eigenstate (ν3) is heavier or lighter than the second (ν2).

“Building JUNO has been a journey of extraordinary challenges,” says JUNO chief engineer Ma Xiaoyan. “It demanded not only new ideas and technologies, but also years of careful planning, testing and perseverance. Meeting the stringent requirements of purity, stability and safety called for the dedication of hundreds of engineers and technicians. Their teamwork and integrity turned a bold design into a functioning detector, ready now to open a new window on the world of neutrinos.”

Main goals

Neutrinos interact only via the parity-violating weak interaction, providing direct evidence only for left-handed neutrinos. As a result, right-handed neutrinos are not part of the Standard Model (SM) of particle physics. As the SM explains fermion masses by a coupling of the Higgs field to a left-handed fermion and its right-handed counterpart of the same flavour, neutrinos are predicted to be massless – a prediction still consistent with every effort to directly measure a neutrino mass yet attempted. Yet decades of observations of the flavour oscillations of solar, atmospheric, reactor, accelerator and astrophysical neutrinos have provided incontrovertible indirect evidence that neutrinos must have tiny masses below the sensitivity of instruments to detect. These oscillations arise from quantum interference between the mass eigenstates underlying the flavour eigenstates – the electron, muon and tau neutrinos – and indicate that there must be a small mass splitting between ν1 and the slightly more massive ν2, and a larger mass splitting to ν3. But it is not yet known whether the mass eigenvalues follow a so-called normal hierarchy, m1 < m2 < m3, or an inverted hierarchy, m3 < m1 < m2. Resolving this question is the main physics goal of the JUNO experiment.
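In the standard three-flavour framework, the probability that a reactor antineutrino of energy E is still an electron antineutrino after travelling a distance L is

```latex
\[
P(\bar{\nu}_e \to \bar{\nu}_e) = 1
  - \cos^4\theta_{13}\,\sin^2 2\theta_{12}\,\sin^2\Delta_{21}
  - \sin^2 2\theta_{13}\left(\cos^2\theta_{12}\,\sin^2\Delta_{31}
  + \sin^2\theta_{12}\,\sin^2\Delta_{32}\right),
\qquad \Delta_{ij} \equiv \frac{\Delta m^2_{ij}\,L}{4E}.
\]
```

The mass ordering enters through the small difference between Δ31 and Δ32, which subtly shifts the fast oscillation pattern imprinted on the reactor-antineutrino energy spectrum – the effect JUNO is built to resolve.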

JUNO’s determination of the mass ordering is largely free of parameter degeneracies

“Unlike other approaches, JUNO’s determination of the mass ordering does not rely on the scattering of neutrinos with atomic electrons in the Earth’s crust or the value of the leptonic CP phase, and hence is largely free of parameter degeneracies,” explains JUNO spokesperson Wang Yifang. “JUNO will also deliver order‑of‑magnitude improvements in the precision of several neutrino‑oscillation parameters and enable cutting‑edge studies of neutrinos from the Sun, supernovae, the atmosphere and the Earth. It will also open new windows to explore unknown physics, including searches for sterile neutrinos and proton decay.”

Additional eye

Located 700 m underground near Jiangmen city, JUNO detects antineutrinos produced 53 km away by the Taishan and Yangjiang nuclear power plants. At the heart of the experiment is a liquid-scintillator detector inside a 44 m-deep water pool. A stainless-steel truss supports an acrylic sphere housing the liquid scintillator, as well as 20,000 20-inch photomultiplier tubes (PMTs), 25,600 three-inch PMTs, front-end electronics, cabling and anti-magnetic compensation coils. All the PMTs operate simultaneously to capture scintillation light from neutrino interactions and convert it to electrical signals.

To resolve the subtle oscillation pattern that will allow JUNO to determine the neutrino-mass hierarchy, the experiment must achieve an extremely fine energy resolution of about 50 keV for a typical 3 MeV reactor antineutrino. To attain this, JUNO had to push performance margins in several areas relative to the KamLAND experiment in Japan, which was previously the world’s largest liquid-scintillator detector.
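In relative terms, this target corresponds to

```latex
\[
\frac{\sigma_E}{E} \approx \frac{50\ \mathrm{keV}}{3\ \mathrm{MeV}} \approx 1.7\%,
\]
```

consistent with the stochastic resolution term of roughly 3%/√(E/MeV) commonly quoted as JUNO’s design goal.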

“JUNO is a factor 20 larger than KamLAND, yet our required energy resolution is a factor two better,” explains Wang. “To achieve this, we have covered the full detector with PMTs with only 3 mm clearance and twice the photo-detection efficiency. By optimising the recipe of the liquid scintillator, we were able to improve its attenuation length by a factor of two to over 20 m, and increase its light yield by 50%.”

Go with the flow

Proposed in 2008 and approved in 2013, JUNO began underground construction in 2015. Detector installation started in December 2021 and was completed in December 2024, followed by a phased filling campaign. Within 45 days, the team filled the detector with 60 ktons of ultra-pure water, keeping the liquid-level difference between the inside and outside of the acrylic sphere within centimetres and maintaining a flow-rate uncertainty below 0.5% to safeguard structural integrity.

Over the next six months, 20 ktons of liquid scintillator progressively filled the 35.4 m diameter acrylic sphere while displacing the water. Stringent requirements on scintillator purity, optical transparency and extremely low radioactivity had to be maintained throughout. In parallel, the collaboration conducted detector debugging, commissioning and optimisation, enabling a seamless transition to full operations at the completion of filling.

JUNO is designed for a scientific lifetime of up to 30 years, with a possible upgrade path allowing a search for neutrinoless double‑beta decay, says the team. Such an upgrade would probe the absolute neutrino-mass scale and test whether neutrinos are truly Dirac fermions, as assumed by the SM, or Majorana fermions without distinct antiparticles, as favoured by several attempts to address fundamental questions spanning particle physics and cosmology.

First oxygen and neon collisions at the LHC https://cerncourier.com/a/first-oxygen-and-neon-collisions-at-the-lhc/ Fri, 07 Nov 2025 12:25:33 +0000 https://cerncourier.com/?p=114719 Between 29 June and 9 July 2025, LHC physicists pushed the study of the quark–gluon plasma into new territory.

In the first microseconds after the Big Bang, extreme temperatures prevented quarks and gluons from binding into hadrons, filling the universe with a deconfined quark–gluon plasma. Heavy-ion collisions between pairs of gold (¹⁹⁷₇₉Au⁷⁹⁺) or lead (²⁰⁸₈₂Pb⁸²⁺) nuclei have long been observed to produce fleeting droplets of this medium, but light-ion collisions remain relatively unexplored. Between 29 June and 9 July 2025, LHC physicists pushed the study of the quark–gluon plasma into new territory, with the first dedicated studies of collisions between pairs of oxygen (¹⁶₈O⁸⁺) and neon (²⁰₁₀Ne¹⁰⁺) nuclei, and between oxygen nuclei and protons.

“Early analyses have already helped characterise the geometry of oxygen and neon nuclei, including the latter’s predicted prolate ‘bowling-pin’ shape,” says Anthony Timmins of the University of Houston. “More importantly, they appear consistent with the onset of the quark-gluon plasma in light–ion collisions.”

As the quark–gluon plasma appears to behave like a near-perfect fluid with low viscosity, the key to modelling heavy-ion collisions is hydrodynamics – the physics of how fluids evolve under pressure gradients, viscous stresses and other forces. When two lead nuclei collide at the LHC, they create a tiny, extremely hot fireball where quarks and gluons interact so frequently they reach local thermal equilibrium within about 10⁻²³ s. Measurements of gold–gold collisions at Brookhaven’s RHIC and lead–lead collisions at the LHC suggest that the quark–gluon plasma flows with an extraordinarily low viscosity, close to the quantum limit, allowing momentum to move rapidly across the system. But it’s not clear whether the same rules apply to the smaller nuclear systems involved in light-ion collisions.

“For hydrodynamics to work, along with the appropriate quark-gluon plasma equation of state, you need a separation of scales between the mean free path of quarks and gluons, the pressure gradients and overall system size,” explains Timmins. “As you move to smaller systems, those scales start to overlap. Oxygen and neon are expected to sit near that threshold, close to the limits of plasma formation.”

Across the oxygen–oxygen and neon–neon datasets, the ALICE, ATLAS and CMS collaborations decomposed the azimuthal distribution of emitted particles, in the plane transverse to the beams, into Fourier modes – a way to search for collective, fluid-like behaviour. Measurements of the “elliptic” and “triangular” Fourier components as functions of event multiplicity support the emergence of a collective flow driven by the initial collision geometry. The collaborations observe signs of energetic-probe suppression in oxygen–oxygen collisions – a signature of the droplet “quenching” jets in a way not observed in proton–proton collisions. Similar features appeared in a one-day xenon–xenon run that took place in October 2017.
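Concretely, the azimuthal distribution of emitted particles is expanded as

```latex
\[
\frac{dN}{d\varphi} \propto 1 + 2\sum_{n\geq 1} v_n \cos\!\big[\,n\,(\varphi - \Psi_n)\,\big],
\]
```

where v2 and v3 are the elliptic and triangular flow coefficients and Ψn the corresponding symmetry-plane angles; sizeable vn values that track the initial collision geometry are the hallmark of collective, fluid-like behaviour.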

These initial results are just a smattering of those to come

CMS compared particle yields in light-ion collisions to a proton–proton reference. After scaling for the number of binary nucleon–nucleon interactions, the collaboration observed a maximum suppression of 0.69 ± 0.04 at a transverse momentum of about 6 GeV, more than five standard deviations from unity. While milder than that observed for lead–lead and xenon–xenon collisions, the data point to genuine medium-induced suppression in the smallest ion–ion system studied to date. Meanwhile, ATLAS reported the first dijet transverse-momentum imbalance in a light-ion system. The reduction in balanced jets is consistent with path-length-dependent energy-loss effects, though apparently weaker than in lead–lead collisions.
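The suppression reported by CMS is expressed through the standard nuclear modification factor,

```latex
\[
R_{AA}(p_T) = \frac{1}{\langle N_{\mathrm{coll}} \rangle}\,
  \frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}{\mathrm{d}N_{pp}/\mathrm{d}p_T},
\]
```

where ⟨Ncoll⟩ is the average number of binary nucleon–nucleon collisions; a value of unity would indicate the absence of medium effects, so 0.69 ± 0.04 signals genuine suppression.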

In “head-on” collisions, ALICE, ATLAS and CMS all observed a neon–oxygen–lead hierarchy in elliptic flow, suggesting that, if a quark–gluon plasma does form, it exhibits the most pronounced “almond shape” in neon collisions. This pattern reflects the expected nuclear geometries of each species. Lead-208 is a doubly magic nucleus, with complete proton and neutron shells that render it tightly bound and nearly spherical in its ground state. Conversely, neon is predicted to be prolate, with its inherent elongation producing a larger elliptic overlap. Oxygen falls in between, consistent with models describing it as roughly spherical or weakly clustered.

ALICE and ATLAS reported a hierarchy of flow coefficients in light-ion collisions, with elliptic, triangular and quadrangular flows progressively decreasing as their Fourier index rises, in line with hydrodynamic expectations. Like CMS’s charged hadron yields, ALICE’s preliminary neutral pion yields exhibit a suppression at large momenta.

In a previous fixed-target study, the LHCb collaboration also measured the elliptic and triangular components of the flow in lead–neon and lead–argon collisions, observing the distinctive shape of the neon nucleus. As for proton–oxygen collisions, LHCb’s forward-rapidity coverage can probe the partonic structure of nuclei at very small values of Bjorken-x – the fraction of the nucleon’s momentum carried by a quark or gluon. Such measurements help constrain nuclear parton distribution functions in the low-x region dominated by gluons and provide rare benchmarks for modelling ultra-high-energy cosmic rays colliding with atmospheric oxygen.

These initial results are just a smattering of those to come. In a whirlwind 11-day campaign, physicists made full use of the brief but precious opportunity to investigate the formation of quark–gluon plasma in the uncharted territory of light ions. Accelerator physicists and experimentalists came together to tackle peculiar problems, such as the appearance of polluting species in the beams due to nuclear transmutation (see “Alchemy by pure light“). Despite the tight schedule, luminosity targets for proton–oxygen, oxygen–oxygen and neon–neon collisions were exceeded by large factors, thanks to high accelerator availability and the high injector intensity delivered by the LHC team.

“These early oxygen and neon studies show that indications of collective flow and parton-energy-loss-like suppression persist even in much smaller systems, while providing new sensitivity to nuclear geometry and valuable prospects for forward-physics studies,” concludes Timmins. “The next step is to pin down oxygen’s nuclear parton distribution function. That will be crucial for understanding the hadron-suppression patterns we see, with proton–oxygen and ultra-peripheral collisions being great ways to get there.”

Prepped for re-entry https://cerncourier.com/a/prepped-for-re-entry/ Fri, 07 Nov 2025 12:23:23 +0000 https://cerncourier.com/?p=114918 When Francesca Luoni logs on each morning at NASA’s Langley Research Center in Virginia, she’s thinking about something few of us ever consider: how to keep astronauts safe from the invisible hazards of space radiation.

Francesca Luoni

When Francesca Luoni logs on each morning at NASA’s Langley Research Center in Virginia, she’s thinking about something few of us ever consider: how to keep astronauts safe from the invisible hazards of space radiation. As a research scientist in the Space Radiation Group, Luoni creates models to understand how high-energy particles from the Sun and distant supernovae interact with spacecraft structures and the human body – work that will help future astronauts safely travel deeper into space.

But Luoni is not a civil servant for NASA. She is contracted through the multinational engineering firm Analytical Mechanics Associates, continuing a professional slingshot from pure research to engineering and back again. Her career is an intriguing example of how to balance research with industrial engagement – a holy grail for early-career researchers in the late 2020s.

Leveraging expertise

Luoni’s primary aim is to optimise NASA’s Space Radiation Cancer Risk Model, which maps out the cancer incidence and mortality risk for astronauts during deep-space missions, such as NASA’s planned mission to Mars. To make this work, Luoni’s team leverages the expertise of all kinds of scientists, from engineers, statisticians and physicists, to biochemists, epidemiologists and anatomists.

“I’m applying my background in radiation physics to estimate the cancer risk for astronauts,” she explains. “We model how cosmic rays pass through the structure of a spacecraft, how they interact with shielding materials, and ultimately, what reaches the astronauts and their tissues.”

Before arriving in Virginia early this year, Luoni had already built a formidable career in space-radiation physics. After a physics PhD in Germany, she joined the GSI Helmholtz Centre for Heavy Ion Research, where she spent long nights at particle accelerators testing new shielding materials for spacecraft. “We would run experiments after the medical facility closed for the day,” she says. “It was precious work because there are so few facilities worldwide where you can acquire experimental data on how matter responds to space-like radiation.”

Her experiments combined experimental measurement data with Monte Carlo simulations to compare model predictions with reality – skills she honed during her time in nuclear physics that she still uses daily at NASA. “Modelling is something you learn gradually, through university, postgrads and research,” says Luoni. “It’s really about understanding physics, maths, and how things come together.”

In 2021 she accepted a fellowship in radiation protection at CERN. The work was different from the research she’d done before. It was more engineering-oriented, ensuring the safety of both scientists and surrounding communities from the intense particle beams of the LHC and SPS. “It may sound surprising, but at CERN the radiation is far more energetic than we see in space. We studied soil and water activation, and shielding geometries, to protect everyone on site. It was much more about applied safety than pure research.”

Luoni’s path through academia and research was not linear, to say the least. From being an experimental physicist collecting data at GSI, to working as an engineer and helping physicists conduct their own experiments at CERN, Luoni is excited to be diving back into pure research, even if it wasn’t her initially intended field.

Despite her industry–contractor title, Luoni’s day-to-day work at NASA is firmly research-driven. Most of her time is spent refining computational models of space-radiation-induced cancer risk. While the coding skills she honed at CERN apply to her role now, Luoni still experienced a steep learning curve when transitioning to NASA.

“I am learning biology and epidemiology, understanding how radiation damages human tissues, and also deepening my statistics knowledge,” she says. Her team codes primarily in Python and MATLAB, with legacy routines in Fortran. “You have to be patient with Fortran,” she remarks. “It’s like building with tiny bricks rather than big built-in functions.”

Luoni is quick to credit not just the technical skills but the personal resilience gained from moving between countries and disciplines. Born in Italy, she has worked in Germany, Switzerland and now the US. “Every move teaches you something unique,” she says. “But it’s emotionally demanding. You face bureaucracy, new languages, distance from family and friends. You need to be at peace with yourself, because there’s loneliness too.”

Bravery and curiosity

But in the end, she says, it’s worth the price. Above all, Luoni counsels bravery and curiosity. “Be willing to step out of your comfort zone,” she says. “It takes strength to move to a new country or field, but it’s worth it. I feel blessed to have experienced so many cultures and to work on something I love.”

While she encourages travel, especially at the PhD and postdoc stages in a researcher’s career, Luoni advises caution when presenting your experience on applications. Internships and shorter placements are welcome, but employers want to see that you have stayed somewhere long enough to really understand and harness that company’s training.

“Moving around builds a unique skill set,” she says. “Like it or not, big names on your CV matter – GSI, CERN, NASA – people notice. But stay in each place long enough to really learn from your mentors, a year is the minimum. Take it one step at a time and say yes to every opportunity that comes your way.”

Luoni had been looking for a way to enter space-research throughout her career, building up a diverse portfolio of skills throughout her various roles in academia and engineering. “Follow your heart and your passions,” she says. “Without that, even the smartest person can’t excel.”

The puzzle of an excess of bright early galaxies https://cerncourier.com/a/the-puzzle-of-an-excess-of-bright-early-galaxies/ Fri, 07 Nov 2025 12:21:19 +0000 https://cerncourier.com/?p=114748 Observations by the James Webb Space Telescope hint at an excess of "UV-bright" galaxies in the first 400 million years after the Big Bang.

Since the Big Bang, primordial density perturbations have continually merged and grown to form ever larger structures. This “hierarchical” model of galaxy formation has withstood observational scrutiny for more than four decades. However, understanding the emergence of the earliest galaxies in the first few hundred million years after the Big Bang has remained a key frontier in the field of astrophysics. This is also one of the key science aims of the James Webb Space Telescope (JWST), launched on Christmas Day in 2021.

Its large, cryogenically cooled mirror and infrared instruments let it capture the faint, redshifted ultraviolet light from the universe’s earliest stars and galaxies. Since its launch, the JWST has collected unprecedented samples of astrophysical sources within the first 500 million years after the Big Bang, utterly transforming our understanding of early galaxy formation.

Stellar observations

Tantalisingly, JWST’s observations hint at an excess of galaxies very bright in the ultraviolet (UV) within the first 400 million years, compared to expectations for early galaxy formation within the standard Lambda Cold Dark Matter model. Given that UV photons are a key indicator of young star formation, these observations seem to imply that early galaxies in any given volume of space were overly efficient at forming stars in the infancy of the universe.

However, extraordinary claims demand extraordinary evidence. These puzzling observations have come under immense scrutiny in confirming that the sources lie at the inferred redshifts, and do not just probe over-dense regions that might preferentially host galaxies with high star-formation rates. It could still be the case that the apparent excess of bright galaxies is cosmic variance – a statistical fluctuation caused by the relatively small regions of the sky probed by the JWST so far.

Such observational caveats notwithstanding, theorists have developed a number of distinct “families” of explanations.

UV photons are readily attenuated by dust at low redshifts. If, however, these early galaxies had ejected all of their dust, one might be able to observe almost all of the intrinsic UV light they produced, making them brighter than expected based on lower-redshift benchmarks.

Bias may also arise from detecting only those sources powered by rapid bursts of star formation that briefly elevate galaxies to extreme luminosities.

Extraordinary claims demand extraordinary evidence

Several explanations focus on modifying the physics of star formation itself, for example regarding “stellar feedback” – the energy and momentum that newly formed stars inject back into their surrounding gas, that can heat, ionise or expel gas, and slow or shut down further star formation. Early galaxies might have high star-formation rates because stellar feedback was largely inefficient, allowing them to retain most of their gas for further star formation, or perhaps because a larger fraction of gas was able to form stars in the first place.

While the relative number of low- and high-mass stars in a newly formed stellar population – the initial mass function (IMF) – has been mapped out in the local universe to some extent, its evolution with redshift remains an open question. Since the IMF crucially determines the total UV light produced per unit mass of star formed, a “top-heavy” IMF, with a larger fraction of massive stars compared to that in the local universe, could explain the observations.

Alternatively, the striking ultraviolet light may not arise solely from ordinary young stars – it could instead be powered by accretion onto black holes, which JWST is finding in unexpected numbers.

Alternative cosmologies

Finally, a number of works also appeal to alternative cosmologies to enhance structure formation at such early epochs, invoking an evolving dark-energy equation of state, primordial magnetic fields or even primordial black holes.

A key caveat involved in these observations is that redshifts are often inferred purely from broadband fluxes in different filters – a technique known as photometry. Spectroscopic data are urgently required, not only to verify their exact distances but also to distinguish between different physical scenarios such as bursty star formation, an evolving IMF or contamination by active galactic nuclei, where supermassive black holes accrete gas. Upcoming deep observations with facilities such as the Atacama Large Millimeter/submillimeter Array (ALMA) and the Northern Extended Millimeter Array (NOEMA) will be crucial for constraining the dust content of these systems and thereby clarifying their intrinsic star-formation rates. Extremely large surveys with facilities such as Euclid, the Nancy Grace Roman Space Telescope and the Extremely Large Telescope will also be crucial in surveying early galaxies over large volumes and sampling all possible density fields.

Combining these datasets will be critical in shedding light on this unexpected puzzle unearthed by the JWST.

A step towards the Higgs self-coupling https://cerncourier.com/a/a-step-towards-the-higgs-self-coupling/ Fri, 07 Nov 2025 12:18:26 +0000 https://cerncourier.com/?p=114762 The ATLAS collaboration used Run 2 and Run 3 data to probe Higgs-boson pair production, setting new bounds on the Higgs self-coupling.

ATLAS figure 1

A defining yet unobserved property of the Higgs boson is its ability to couple to itself. The ATLAS collaboration has now set new bounds on this interaction, by probing the rare production of Higgs-boson pairs. Since the self-coupling strength directly connects to the shape of the Higgs potential, any departure from the Standard Model (SM) prediction would have direct implications for electroweak symmetry breaking and the early history of the universe. This makes its measurement one of the most important objectives of modern particle physics.

Higgs-boson pair production is a thousand times less frequent than single-Higgs events, roughly corresponding to a single occurrence every three trillion proton–proton collisions at the LHC. Observing such a rare process demands both vast datasets and highly sophisticated analysis techniques, along with the careful choice of a sensitive probe. Among the most effective is the HH → bbγγ channel, where one Higgs boson decays into a bottom quark–antiquark pair and the other into two photons. This final state balances the statistical reach of the dominant Higgs decay to bottom quarks with the exceptionally clean signature offered by photon-pair measurements. Despite the small signal branching ratio of about 0.26%, the decay to two photons benefits from the excellent di-photon mass resolution and offers the highest efficiency among the leading HH channels. This provides the HH → bbγγ channel with an excellent sensitivity to variations in the trilinear self-coupling modifier κλ, defined as the ratio of the measured Higgs-boson self-coupling to the SM prediction.
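
As a rough back-of-the-envelope illustration – a sketch using approximate SM inputs, not the analysis’s own calculation – the expected number of HH → bbγγ signal events in a dataset of this size can be estimated from the production cross-section, the integrated luminosity and the branching fractions:

```python
# Rough estimate of the expected HH -> bbyy yield before any event selection.
# All inputs are approximate, illustrative values, not the analysis inputs.
sigma_hh_fb = 34.0      # assumed SM gluon-fusion HH cross-section (~13.6 TeV), in fb
br_h_bb     = 0.58      # approximate H -> bb branching fraction
br_h_yy     = 2.27e-3   # approximate H -> gamma gamma branching fraction
lumi_fb     = 308.0     # integrated luminosity analysed, in fb^-1

# Factor 2: either of the two Higgs bosons can be the one decaying to photons.
br_bbyy = 2 * br_h_bb * br_h_yy
n_expected = sigma_hh_fb * lumi_fb * br_bbyy

print(f"BR(HH -> bbyy) ~ {br_bbyy:.2%}")             # ~0.26%, as quoted above
print(f"expected signal events ~ {n_expected:.0f}")  # a few tens, before selection
```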

In its new study, the ATLAS collaboration relied on Run 3 data collected between 2022 and 2024, and on the full Run 2 dataset, reaching an integrated luminosity of 308 fb⁻¹. Events were selected with two high-quality photons and at least two b-tagged jets, identified using the latest and most performant ATLAS b-tagging algorithm. To further distinguish signal from background, dominated by non-resonant γγ+jets and single-Higgs production with H → γγ, a set of machine-learning classifiers called “multivariate analysis discriminants” were trained and used to filter genuine HH → bbγγ signals.

The collaboration reported an HH → bbγγ signal significance of 0.84σ under the background-only hypothesis, compared to an SM expectation of 1.01σ (see figure 1). At the 95% confidence level, the self-coupling modifier was constrained to –1.7 < κλ < 6.6. These results extend previous Run 2 analyses and deliver a substantially improved sensitivity, comparable to the observed (expected) significance of 0.4σ (1σ) in the combined Run 2 results across all channels. The improvement is primarily due to the adoption of advanced b-tagging algorithms, refined analysis techniques yielding better mass resolution and a larger dataset, more than double that of previous studies.

This result marks significant progress in the search for Higgs self-interactions at the LHC and highlights the potential of Run 3 data. With the full Run 3 dataset and the High-Luminosity LHC on the horizon, ATLAS is set to extend these measurements – improving our understanding of the Higgs boson and searching for possible signs of physics beyond the SM.

ALICE observes ρ–proton attraction https://cerncourier.com/a/alice-observes-%cf%81-proton-attraction/ Fri, 07 Nov 2025 12:18:04 +0000 https://cerncourier.com/?p=114766 The ALICE collaboration achieved the first direct measurement of the ρ⁰–proton interaction in high-multiplicity pp collisions.

ALICE figure 1

The ALICE collaboration recently obtained the first direct measurement of the attraction between a proton and a ρ0 meson – a particle of particular interest due to its fleeting lifetime and close link to chiral symmetry breaking. The result establishes a technique known as femtoscopy as a new method for studying interactions between vector mesons and baryons, and opens the door to a systematic exploration of how short-lived hadrons behave.

Traditionally, interactions between baryons and vector mesons have been studied indirectly at low-energy facilities, using decay patterns or photoproduction measurements. These were mostly interpreted through vector–meson–dominance models developed in the 1960s, in which photons fluctuate into vector mesons to interact with hadrons. While powerful, these methods provide only partial information and cannot capture the full dynamics of the interaction. Direct measurements have long been out of reach, mainly because the extremely short lifetime of vector mesons – of the order of 1–10 fm/c – renders conventional scattering experiments impossible.

At the hadronic level, the strong force can be described as arising from the exchange of massive mesons, with the lightest among them, the pion, setting the interaction range to about 1.4 fm. For such a short-range effect to influence the products of a pp collision, the particles must be created close together and with low relative momentum, ensuring sufficient interaction time and a significant wavefunction overlap.
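
The quoted range is essentially the reduced Compton wavelength of the lightest exchanged meson – a standard estimate, evaluated here with the charged-pion mass for illustration:

```latex
\lambda_\pi \;=\; \frac{\hbar c}{m_\pi c^2}
\;\approx\; \frac{197.3\ \mathrm{MeV\,fm}}{139.6\ \mathrm{MeV}}
\;\approx\; 1.4\ \mathrm{fm}.
```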

The ALICE collaboration has now studied this mechanism in high-multiplicity proton–proton (pp) collisions at a centre-of-mass energy of 13 TeV through femtoscopy, which examines correlations in the relative momentum (k*) of particle pairs in their rest frame. The resulting correlation functions carry information on the size and shape of the particle-emitting source at k* below about 200 MeV, with any deviation from unity indicating the presence of short-range forces.

To study the interaction between protons and ρ0 vector mesons, candidates were reconstructed via the hadronic decay channel ρ0 → π+π−, identified from π+π− pairs within the 0.70–0.85 GeV invariant mass window. Since the ρ0 decays almost instantly into pions, only about 3% of the candidates were genuine ρ0 mesons. Background corrections were therefore essential to extract the ρ0–proton correlation function, defined as the ratio of the relative-momentum distribution of same-event pairs to that of mixed-event pairs. The result is consistent with unity at large relative momenta (k* > 200 MeV), as expected in the absence of strong forces. At lower values, however, a suppression with significance of about four standard deviations clearly signals ρ0–proton final-state interactions (see figure 1).
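
The correlation function itself is conceptually simple – a minimal sketch of the observable (not the ALICE analysis code), with toy inputs standing in for measured pair momenta:

```python
import numpy as np

def correlation_function(k_same, k_mixed, bins):
    """Femtoscopic correlation function C(k*): the ratio of the same-event to
    the mixed-event relative-momentum distributions, normalised to unity at
    large k*, where no final-state interaction is expected."""
    n_same, edges = np.histogram(k_same, bins=bins)
    n_mixed, _ = np.histogram(k_mixed, bins=bins)
    ratio = np.divide(n_same, n_mixed,
                      out=np.zeros_like(n_same, dtype=float),
                      where=n_mixed > 0)
    centres = 0.5 * (edges[1:] + edges[:-1])
    # normalise in a high-k* region (above 0.4 GeV here, an arbitrary choice)
    norm_region = centres > 0.4
    norm = ratio[norm_region].mean() if norm_region.any() else 1.0
    return centres, ratio / norm

# toy usage: random numbers stand in for measured k* values (in GeV)
rng = np.random.default_rng(0)
k_same = rng.exponential(0.3, 100_000)
k_mixed = rng.exponential(0.3, 500_000)
kstar, C = correlation_function(k_same, k_mixed, bins=np.linspace(0.0, 1.0, 51))
print(C[:5])
```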

To interpret these results, ALICE used an effective-field-theory model based on chiral perturbation theory, which predicts two resonances consistent with excited nucleon states. Because some pairs linger in these quasi-bound states instead of flying out freely, fewer emerge with nearly the same momentum. This results in a correlation suppression at low k* consistent with observations. Unlike photoproduction experiments and QCD sum rules, femtoscopy gives access to the full phase information of the ρ0–proton scattering amplitude. By analysing both ρ–proton and φ–proton pairs, ALICE extracted precise scattering parameters that can now be incorporated into theoretical models.

This measurement sets a benchmark for vector-meson–dominance models and establishes femtoscopy as a tool to probe interactions involving the shortest-lived hadrons, while providing essential input for understanding ρ–nucleon interactions in vacuum and describing the meson’s properties in heavy-ion collisions. Pinning down how the ρ meson behaves is crucial for interpreting dilepton spectra and the restoration of chiral symmetry, as differences between light quark masses become negligible at high energies. For example, the mass gap between the ρ and its axial counterpart, a1, comes from spontaneous chiral-symmetry breaking.

The measurement problem, measured https://cerncourier.com/a/the-measurement-problem-measured/ Fri, 07 Nov 2025 12:17:51 +0000 https://cerncourier.com/?p=114742 Nature surveyed over 1000 researchers about their views on the interpretation of quantum mechanics.

A century on, physicists still disagree on what quantum mechanics actually means. Nature recently surveyed more than a thousand researchers, asking about their views on the interpretation of quantum mechanics. When broken down by career stage, the results show that a diversity of views spans all generations.

Getting eccentric with age

The Copenhagen interpretation remains the most widely held view, placing the act of measurement at the core of quantum theory well into the 2020s. Epistemic or QBist approaches, where the quantum state expresses an observer’s knowledge or belief, form the next most common group, followed by Everett’s many-worlds framework, in which all quantum outcomes continue to coexist without collapse (CERN Courier July/August 2025 p26). Other views maintain small but steady followings, including pilot-wave theory, spontaneous-collapse models and relational quantum mechanics (CERN Courier July/August 2025 p21).

Fewer than 10% of the physicists surveyed declined to express a view. Though this group would be expected to include proponents of the “shut up and calculate” school of thought, the apparently dwindling cohort of disinterested working physicists may simply be undersampled.

Crucially, confidence is modest. Most respondents view their preferred interpretation as an adequate placeholder or a useful conceptual tool. Only 24% are willing to describe their preferred interpretation as correct, leaving ample room for manoeuvre in the very foundations of fundamental physics.

Neural networks boost B-tagging https://cerncourier.com/a/neural-networks-boost-b-tagging/ Fri, 07 Nov 2025 12:16:22 +0000 https://cerncourier.com/?p=114771 The LHCb collaboration developed an inclusive deep-learning flavour tagger for neutral B-mesons, improving tagging power by up to 35%.

LHCb figure 1

The LHCb collaboration has developed a new inclusive flavour-tagging algorithm for neutral B-mesons. Compared to standard approaches, it can correctly identify 35% more B0 and 20% more B0s decays, expanding the dataset available for analysis. This increase in tagging power will allow for more accurate studies of charge–parity (CP) violation and B-meson oscillations.

In the Standard Model (SM), neutral B-mesons oscillate between particle and antiparticle states via second-order weak interactions involving a pair of W-bosons. Flavour-tagging techniques determine whether a neutral B-meson was initially produced as a B0 or its antiparticle B̄0, thereby enabling the measurement of time-dependent CP asymmetries. As the initial flavour can only be inferred indirectly from noisy, multi-particle correlations in the busy hadronic environment of the LHC, mistag rates have traditionally been high.

Until now, the LHCb collaboration has relied on two complementary flavour-tagging strategies. One infers the signal meson’s flavour by analysing the decay of the other b-hadron in the event, whose existence follows from bb̄ pair production in the original proton-proton collision. Since the two hadrons originate from oppositely-charged, early-produced bottom quarks, the method is known as “opposite-side” (OS) tagging. The other strategy, or “same-side” (SS) tagging, uses tracks from the fragmentation process that produced the signal meson. Each provides only part of the picture, and their combination defined the state of the art in previous analyses.

The new algorithm adopts a more comprehensive approach. Using a deep neural network based on the “DeepSets” architecture, it incorporates information from all reconstructed tracks associated with the hadronisation process, rather than preselecting a subset of candidates. By considering the global structure of the event, the algorithm builds a more detailed inference of the meson’s initial flavour. This inclusive treatment of the available information increases both the sensitivity and the statistical reach of the tagging procedure.
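
The DeepSets idea can be sketched in a few lines: a shared network embeds each track, the embeddings are summed so that the result does not depend on the number or ordering of tracks, and a second network maps the pooled summary to a tag probability. The layer sizes and input features below are illustrative placeholders, not those of the LHCb model:

```python
import torch
import torch.nn as nn

class DeepSetsTagger(nn.Module):
    """Minimal DeepSets-style flavour tagger (illustrative, not the LHCb model):
    a shared network phi embeds each track, a permutation-invariant sum pools
    the event, and rho maps the pooled vector to a tag probability."""
    def __init__(self, n_track_features: int = 8, hidden: int = 64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(n_track_features, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, tracks: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # tracks: (batch, n_tracks, n_track_features); mask flags real tracks
        per_track = self.phi(tracks) * mask.unsqueeze(-1)   # zero out padding
        pooled = per_track.sum(dim=1)                       # order-independent
        return torch.sigmoid(self.rho(pooled)).squeeze(-1)  # tag probability

# toy usage: a batch of 4 events, up to 30 reconstructed tracks each
tracks = torch.randn(4, 30, 8)
mask = torch.ones(4, 30)
print(DeepSetsTagger()(tracks, mask))
```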

The model was trained and calibrated using well-established B0 and B0s meson decay channels. When compared with the combination of opposite-side and same-side taggers, the inclusive algorithm displayed a 35% increase in tagging power for B0 mesons and 20% for B0s mesons (see figure 1). The improvement stems from gains in both the fraction of events that receive a flavour tag and how often the tag is correct. Tagging power is a critical figure of merit, as it determines the effective amount of usable data. Therefore, even modest gains can dramatically reduce statistical uncertainties in CP-violation and B-oscillation measurements, enhancing the experiment’s precision and discovery potential.
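
A commonly used definition of the tagging power (assumed here to be the figure of merit referred to) combines the tagging efficiency ε_tag and the mistag probability ω, and directly sets the effective size of the tagged sample:

```latex
\varepsilon_{\mathrm{eff}} \;=\; \varepsilon_{\mathrm{tag}}\,(1-2\omega)^{2},
\qquad
N_{\mathrm{eff}} \;=\; \varepsilon_{\mathrm{eff}}\,N,
```

so a 35% gain in tagging power acts, statistically, like collecting 35% more tagged data.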

This development illustrates how algorithmic innovation can be as important as detector upgrades in pushing the boundaries of precision. The improved tagging power effectively expands the usable data sample without requiring additional collisions, enhancing the experiment’s capacity to test the SM and seek signs of new physics within the flavour sector. The timing is particularly significant as LHCb enters Run 3 of the LHC programme, with higher data rates and improved detector components. The new algorithm is designed to integrate smoothly with existing reconstruction and analysis frameworks, ensuring immediate benefits while providing scalability for the much larger datasets expected in future runs.

As the collaboration accumulates more data, the inclusive flavour-tagging algorithm is likely to become a central tool in data analysis. Its improved performance is expected to reduce uncertainties in some of the most sensitive measurements carried out at the LHC, strengthening the search for deviations from the SM.

Machine learning and the search for the unknown https://cerncourier.com/a/machine-learning-and-the-search-for-the-unknown/ Fri, 07 Nov 2025 12:15:48 +0000 https://cerncourier.com/?p=114777 The CMS collaboration is employing neural networks to conduct model-independent searches for short-lived particles that could escape conventional analyses.

CMS figure 1

In particle physics, searches for new phenomena have traditionally been guided by theory, focusing on specific signatures predicted by models beyond the Standard Model. Machine learning offers a different way forward. Instead of targeting known possibilities, it can scan the data broadly for unexpected patterns, without assumptions about what new physics might look like. CMS analysts are now using these techniques to conduct model-independent searches for short-lived particles that could escape conventional analyses.

Dynamic graph neural networks operate on graph-structured data, processing both the attributes of individual nodes and the relationships between them. One such model is ParticleNet, which represents the constituents of large-radius jets as a graph in order to identify N-prong hadronic decays of highly boosted particles and predict the parent particle’s mass. The tool recently aided a CMS search for the single production of a heavy vector-like quark (VLQ) decaying into a top quark and a scalar boson, either the Higgs or a new scalar particle. Alongside ParticleNet, a custom deep neural network was trained to identify leptonic top-quark decays by distinguishing them from background processes over a wide range of momenta. With this approach, the analysis achieved sensitivity to VLQ production cross-sections as small as 0.15 fb. Emerging methods such as transformer networks can provide even more sensitivity in future searches (see figure 1).

CMS figure 2

Another novel approach combined two distinct machine-learning tools in the search for a massive scalar X decaying into a Higgs boson and a second scalar Y. While ParticleNet identified Higgs-boson decays to two bottom quarks, potential Y signals were assigned an “anomaly score” by an autoencoder – a neural network trained to reproduce its input and highlight atypical features in the data. This technique provided sensitivity to a wide range of unexpected decays without relying on specific theoretical models. By combining targeted identification with model-independent anomaly detection, the analysis achieved both enhanced performance and broad applicability.
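
The anomaly-score idea can be illustrated with a toy autoencoder – a sketch under generic assumptions, not the CMS architecture. The network is trained to reproduce background-like inputs, so events it reconstructs poorly receive a large score:

```python
import torch
import torch.nn as nn

class JetAutoencoder(nn.Module):
    """Toy autoencoder for anomaly detection (illustrative, not the CMS model):
    the anomaly score is the reconstruction error of an input feature vector."""
    def __init__(self, n_features: int = 16, latent: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

    def anomaly_score(self, x: torch.Tensor) -> torch.Tensor:
        # mean squared reconstruction error per event
        return ((self(x) - x) ** 2).mean(dim=-1)

# train on "background-like" toy data, then score a batch of unseen events
model = JetAutoencoder()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
background = torch.randn(10_000, 16)
for _ in range(200):
    optimiser.zero_grad()
    loss = ((model(background) - background) ** 2).mean()
    loss.backward()
    optimiser.step()
print(model.anomaly_score(torch.randn(5, 16)))
```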

Searches at the TeV scale sit at a frontier where algorithmic innovation, as much as ever-larger datasets, drives experimental discovery. Tools such as targeted deep neural networks, parametric neural networks (PNNs) – which efficiently scan multi-dimensional mass landscapes (see figure 2) – and model-independent anomaly detection are opening new ways to search for deviations from the Standard Model. Analyses of the full LHC Run 2 dataset have already revealed intriguing hints, with several machine-learning studies reporting local excesses – including a 3.6σ excess in a search for V′ → VV or VH → jets, and deviations of up to 3.3σ in various X → HY searches. While no definitive signal has yet emerged, the steady evolution of neural-network techniques is already changing how new phenomena are sought, and anticipation is high for what they may reveal in the larger Run 3 dataset.
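
A parametric neural network can be sketched as an ordinary classifier that takes the hypothesised signal mass as an additional input, so a single model interpolates across the whole mass landscape rather than requiring one network per mass point (an illustrative sketch, not the CMS implementation):

```python
import torch
import torch.nn as nn

class ParametricClassifier(nn.Module):
    """Toy parametric neural network (PNN): event features are concatenated
    with the hypothesised resonance mass, so one classifier can be evaluated
    at any point of the mass landscape. Illustrative only."""
    def __init__(self, n_features: int = 12, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x: torch.Tensor, mass_hypothesis: torch.Tensor) -> torch.Tensor:
        # mass_hypothesis: shape (batch, 1), e.g. in TeV
        return torch.sigmoid(self.net(torch.cat([x, mass_hypothesis], dim=-1)))

# scan the same events under several (arbitrary) mass hypotheses
events = torch.randn(1000, 12)
model = ParametricClassifier()
for m in (0.5, 1.0, 2.0):  # TeV, illustrative scan points
    mass = torch.full((events.shape[0], 1), m)
    print(m, model(events, mass).mean().item())
```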

Standardising sustainability: step one https://cerncourier.com/a/standardising-sustainability-step-one/ Fri, 07 Nov 2025 12:14:00 +0000 https://cerncourier.com/?p=114736 The Laboratory Directors Group has published guidance on evaluating the carbon impact of accelerator projects.

For a global challenge like environmental sustainability, the only panacea is international cooperation. In September, the Sustainability Working Group, part of the Laboratory Directors Group (LDG), took a step forward by publishing a report on standardising the evaluation of the carbon impact of accelerator projects. The report challenges the community to align on a common methodology for assessing sustainability and to define a small number of figures of merit that future accelerator facilities must report.

“There’s never been this type of report before,” says Maxim Titov (CEA Saclay), who co-chairs the LDG Sustainability Working Group. “The LDG Working Group consisted of representatives with technical expertise in sustainability evaluation from large institutions including CERN, DESY, IRFU, INFN, NIKHEF and STFC, as well as experts from future collider projects who signed off on the numbers.”

The report argues that carbon assessment cannot be left to the end of a project. Instead, facilities must evaluate their lifecycle footprint starting from the early design phase, all the way through construction, operation and decommissioning. Studies already conducted on civil-engineering footprints of large accelerator projects outline a reduction potential of up to 50%, says Titov.

In terms of accelerator technology, the report highlights cooling, ventilation, cryogenics, the RF cavities that accelerate charged particles and the klystrons that power them, as the largest sources of inefficiency. The report places particular emphasis on klystrons, and identifies three high-efficiency designs currently under development that could boost the energy efficiency of RF cavities from 60 to 90% (CERN Courier May/June 2025 p30).

The report also addresses the growing footprint of computing and AI. Training algorithms on more efficient hardware and adapting trigger systems to reduce unnecessary computation are identified as ways to cut energy use without compromising scientific output.

“You need to perform a life-cycle assessment at every stage of the project in order to understand your footprint, not just to produce numbers, but to optimise design and improve it in discussions with policymakers,” emphasises Titov. “Conducting sustainability assessments is a complex process, as the criteria have to be tailored to the maturity of each project and separately developed for scientists, policymakers, and society applications.”

Established by the CERN Council, the LDG is an international coordination body that brings together directors and senior representatives of the world’s major accelerator laboratories. Since 2021, the LDG has operated five expert panels: high-field magnets, RF structures, plasma and laser acceleration, muon colliders and energy-recovery linacs. The Sustainability Working Group was added in January 2024.

NuFact prepares for a precision era https://cerncourier.com/a/nufact-prepares-for-a-precision-era/ Fri, 07 Nov 2025 12:10:42 +0000 https://cerncourier.com/?p=114876 More than 200 physicists gathered in Liverpool from 1 to 6 September 2025 for the 26th International Workshop on Neutrinos from Accelerators.

The 26th edition of the International Workshop on Neutrinos from Accelerators (NuFact) attracted more than 200 physicists to Liverpool from 1 to 6 September. There was no shortage of topics to discuss. Delegates debated oscillations, scattering, accelerators, muon physics, beyond-PMNS physics, detectors, and inclusion, diversity, equity, education and outreach (IDEEO).

Neutrino physics has come a long way since the discovery of neutrino oscillations in 1998. Experiments now measure oscillation parameters with a precision of a few per cent. At NuFact 2025, the IceCube collaboration reported new oscillation measurements using atmospheric neutrinos from 11 years of observations at the South Pole. The measurements achieve world-leading sensitivity on neutrino mixing angles, alongside new constraints on the unitarity of the neutrino mixing matrix. Meanwhile, the JUNO experiment in China celebrated the start of data-taking with its liquid-scintillator detector (see “JUNO takes aim at neutrino-mass hierarchy”). JUNO will determine the neutrino mass ordering by observing the fine oscillation patterns of antineutrinos produced in nuclear reactors.

Neutrino scattering

Beyond oscillations, a major theme of the conference was neutrino scattering. Although neutrinos are the most abundant massive particles in the universe, their interactions with matter remain poorly understood. Measuring and modelling these processes is essential: they probe nuclear structure and hadronic physics in a novel way, while also providing the foundation for oscillation analyses in current and next-generation experiments. Exciting advances were reported across the field. The SBND experiment at Fermilab announced the collection of around three million neutrino interactions using the Booster Neutrino Beam. ICARUS presented its first neutrino–argon cross-section measurement. MicroBooNE, MINERvA and T2K showcased new results on neutrino–nucleus interaction and compared them with theoretical models. The e4ν collaboration highlighted electron beams as potential sources of data to refine neutrino-scattering models, supporting efforts to achieve the detailed interaction picture needed for the coming precision era of oscillation physics. At higher energies, FASER and SND@LHC showcased their LHC neutrino observations with both emulsion and electronic detectors.

CERN’s role in neutrino physics was on display throughout the conference. Beyond the results from ICARUS, FASER and SND@LHC, other contributions included the first observation of neutrinos in the ProtoDUNE detectors, the status of the MUonE experiment – aimed at measuring the hadronic contribution to the muon anomalous magnetic moment – and the latest results from NA61. The role of CERN’s Neutrino Platform was also highlighted in contributions about the T2K ND280 near-detector upgrade and the WAGASCI–BabyMIND detector, both of which were largely assembled and tested at CERN. Discussions featured the results of the Water Cherenkov Test Experiment, which operated in the T9 beamline to prototype technology for Hyper-Kamiokande, and other novel CERN-based ideas, such as nuSCOPE – a proposal for a short-baseline experiment that would “tag” individual neutrinos at production, formed from the merging of ENUBET and NuTag. Building on a proof-of-principle result from NA62, which identified a neutrino candidate via its parent kaon decay, this technique could represent a paradigm shift in neutrino beam characterisation.

NuFact 2025 reinforced the importance of diversity and inclusion in science. The IDEEO working group led discussions on how varied perspectives and equitable participation strengthen collaboration, improve problem solving and attract the next generation of researchers. Dedicated sessions on education and outreach also highlighted innovative efforts to engage wider communities and ensure that the future of neutrino physics is both scientifically robust and socially inclusive. From precision oscillation measurements to ambitious new proposals, NuFact 2025 demonstrated that neutrino physics is one of the most vibrant and global areas of particle physics today.

Mainz muses on future of kaon physics https://cerncourier.com/a/mainz-muses-on-future-of-kaon-physics/ Fri, 07 Nov 2025 12:09:51 +0000 https://cerncourier.com/?p=114881 KAONS 2025 brought nearly 100 physicists to Mainz from 8 to 12 September 2025, to discuss the latest results in kaon physics.

The 13th KAONS conference convened almost 100 physicists in Mainz from 8 to 12 September. Since the first edition took place in Vancouver in 1988, the conference series has returned roughly every three years to bring together the global kaon-physics community. This edition was particularly significant, being the first since the decision not to continue CERN’s kaon programme with the proposed HIKE experiment (CERN Courier May/June 2024 p7).

CERN’s current NA62 effort was nevertheless present in force. Eight presentations spanned its wide-ranging programme, from precision studies of rare kaon decays to searches for lepton-flavour and lepton-number violation, and explorations beyond the Standard Model (SM). Complementary perspectives came from Japan’s KOTO experiment at J-PARC, from multipurpose facilities such as KLOE-2, Belle II and CERN’s LHCb experiment, as well as from a large and engaged theoretical community. Together, these contributions underscored the vitality of kaon physics: a field that continues to test the SM at the highest levels of precision, with a strong potential to uncover new physics.

NA62 reported major progress on the so-called “golden mode” ultra-rare decay K+ → π+νν̄, a process that is highly sensitive to new physics (CERN Courier July/August 2024 p30). By analysing data up to 2022, the collaboration more than doubled its sample from 20 to 51 candidate events, achieving the first 5σ observation of the decay (CERN Courier November/December 2024 p11). This is the smallest branching fraction ever measured and, intriguingly, it shows a mild 1.7σ tension with the Standard Model prediction, which itself carries a 2% theoretical uncertainty. With the experiment continuing to collect data until CERN’s next long shutdown (LS3), NA62’s final dataset is expected to triple the current statistics, sharpening what is already one of the most stringent tests of the SM.

Another major theme was the study of rare B-meson decays where kaons often appear in the final state, for example B → K*(→ Kπ) ℓ+ℓ−. Such processes are central to the long-debated “B anomalies,” in which certain branching fractions of rare semileptonic B decays show persistent tensions between experimental results and SM predictions (CERN Courier January/February 2025 p14). On the experimental front, CERN’s LHCb experiment continues to lead the field, delivering branching-fraction measurements with unprecedented precision. Progress is also being made on the theoretical side, though significant challenges remain in matching this precision. The conference highlighted new approaches reducing uncertainties and biases, based both on phenomenological techniques and lattice QCD.

Kaon physics is in a particularly dynamic phase. Theoretical predictions are reaching unprecedented precision, and two dedicated experiments are pushing the frontiers of rare kaon decays. At CERN, NA62 continues to deliver impactful results, even though plans for a next-stage European successor did not advance this year. Momentum is building in Japan, where the proposed KOTO-II upgrade, if approved, would secure the long-term future of the programme. Just after the conference, the KOTO-II collaboration held its first in-person meeting, bringing together members from both KOTO and NA62 – a promising sign for continued cross-fertilisation. Looking ahead, sustaining two complementary experimental efforts remains highly desirable: independent cross-checks and diversified systematics will be essential to fully exploit the discovery potential of rare kaon decays.

ICFA meets in Madison https://cerncourier.com/a/icfa-meets-in-madison/ Fri, 07 Nov 2025 12:09:16 +0000 https://cerncourier.com/?p=114890 The 99th meeting of the International Committee for Future Accelerators took place on 24 August 2025, in Madison.

Once a year, the International Committee for Future Accelerators (ICFA) assembles for an in-person meeting, typically attached to a major summer conference. The 99th edition took place on 24 August at the Wisconsin IceCube Particle Astrophysics Center in downtown Madison, one day before Lepton–Photon 2025.

While the ICFA is neither a decision-making body nor a representation of funding agencies, its mandate assigns to the committee the important task of promoting international collaboration and coordination in all phases of the construction and exploitation of very-high-energy accelerators. This role is especially relevant in today’s context of strategic planning and upcoming decisions – with the ongoing European Strategy update, the Chinese decision process on CEPC in full swing, and the new perspectives emerging on the US–American side with the recent National Academy of Sciences report (CERN Courier September/October 2025 p10).

Consequently, the ICFA heard presentations on these important topics and discussed priorities and timelines. In addition, the theme of “physics beyond colliders” – and with it, the question of maintaining scientific diversity in an era of potentially vast and costly flagship projects – featured prominently. In this context, the importance of national laboratories capable of carrying out mid-sized particle-physics experiments was underlined. This also featured in the usual ICFA regional reports.

An important part of the work of the committee is carried out by the ICFA panels – groups of experts in specific fields of high relevance. The ICFA heard reports from the various panel chairs at the Wisconsin meeting, with a focus on the Instrumentation, Innovation and Development panel, where Stefan Söldner-Rembold (Imperial College London) recently took over as chair, succeeding the late Ian Shipsey. Among other things, the panel organises several schools and training events, such as the EDIT schools, as well as prizes that increase recognition for senior and early-career researchers working in the field of instrumentation.

Another focus was the recent work of the Data Lifecycle panel chaired by Kati Lassila-Perini (University of Helsinki). This panel, together with numerous expert stakeholders in the field, recently published recommendations for best practices for data preservation and open science in HEP, advocating the application of the FAIR principles of findability, accessibility, interoperability and reusability at all levels of particle-physics research. The document provides guidance for researchers, experimental collaborations and organisations on implementing best-practice routines. It will now be distributed as broadly as possible and will hopefully contribute to the establishment of open and FAIR science practices.

Formally, the ICFA is a working group of the International Union for Pure and Applied Physics (IUPAP) and is linked to Commission C11, Particles and Fields. IUPAP has recently begun a “rejuvenation” effort that also involves rethinking the role of its working groups. Reflecting the continuity and importance of the ICFA’s work, Marcelo Gameiro Munhoz, chair of C11, presented a proposal to transform the ICFA into a standing committee under C11 – a new type of entity within IUPAP. This would allow ICFA to overcome its transient nature as a working group.

Finally, there were discussions on plans for a new set of ICFA seminars – triennial events in different world regions that assemble up to 250 leaders in the field. Following the 13th ICFA Seminar on Future Perspectives in High-Energy Physics, hosted by DESY in Hamburg in late 2023, the baton has now passed to Japan, which is finalising the location and date for the next edition, scheduled for late 2026.

Invisibles, in sight https://cerncourier.com/a/invisibles-in-sight/ Fri, 07 Nov 2025 12:08:45 +0000 https://cerncourier.com/?p=114896 Around 150 researchers gathered at CERN from 1 to 5 September 2025, for the annual meeting of the Invisibles network.

Around 150 researchers gathered at CERN from 1 to 5 September to discuss the origin of the observed matter–antimatter asymmetry in the universe, the source of its accelerated expansion, the nature of dark matter and the mechanism behind neutrino masses. The vibrant atmosphere of the annual meeting of the Invisibles research network encouraged lively discussions, particularly among early-career researchers.

Marzia Bordone (University of Zurich) highlighted central questions in flavour physics, such as the tensions in the determinations of quark flavour-mixing parameters and the anomalies in leptonic and semileptonic B-meson decays (CERN Courier January/February 2025 p14). She showed that new bosons beyond the Standard Model that primarily interact with the heaviest quarks are theoretically well motivated and could be responsible for these flavour anomalies. Bordone emphasised that collaboration between experiment and theory, as well as data from future colliders like FCC-ee, will be essential to understand whether these effects are genuine signs of new physics.

Lina Necib (MIT) shared impressive new results on the distribution of galactic dark matter. Though invisible, dark matter interacts gravitationally and is present in all galaxies across the universe. Her team used exquisite data from the ESA Gaia satellite to track stellar trajectories in the Milky Way and determine the local dark-matter distribution to within 20–30% precision – which means about 300,000 dark-matter particles per cubic metre assuming they have mass similar to that of the proton. This is a huge improvement over what could be done just one decade ago, and will aid experiments in their direct search for dark matter in laboratories worldwide.
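
The quoted number density follows from simple arithmetic – a sketch assuming a typical local dark-matter density of about 0.3 GeV per cubic centimetre, rather than the exact value extracted from the Gaia-based analysis:

```python
# Rough conversion from a local dark-matter mass density to a number density,
# assuming the particles have roughly the proton mass. The density value is an
# assumed, typical figure for illustration, not the result described above.
rho_local_gev_per_cm3 = 0.3   # assumed local dark-matter density
m_particle_gev = 0.938        # proton mass in GeV

n_per_cm3 = rho_local_gev_per_cm3 / m_particle_gev
n_per_m3 = n_per_cm3 * 1e6    # 10^6 cubic centimetres per cubic metre

print(f"~{n_per_m3:,.0f} particles per cubic metre")  # of order 300,000
```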

The most quoted dark-matter candidates at Invisibles25 were probably axions: particles once postulated to explain why the strong interactions that bind protons and neutrons behave in the same way for particles and antiparticles. Nicole Righi (King’s College London) discussed how these particles are ubiquitous in string theory. According to Righi, their detection may imply a hot Big Bang, with a rather late thermal stage, or hint at some special feature of the geometry of ultracompact dimensions related to quantum gravity.

The most intriguing talk was perhaps the CERN colloquium given by the 2011 Nobel laureate Adam Riess (Johns Hopkins University). By setting up an impressive system of distance measurements to extragalactic systems, Riess and his team have measured the expansion rate of the universe – the Hubble constant – with per cent accuracy. Their results indicate a value about 10% higher than that inferred from the cosmic microwave background within the standard ΛCDM model, a discrepancy known as the “Hubble tension”. After more than a decade of scrutiny, no single systematic error appears sufficient to account for it, and theoretical explanations remain tightly constrained (CERN Courier March/April 2025 p28). In this regard, Julien Lesgourgues (RWTH Aachen University) pointed out that, despite the thousands of papers written on the Hubble tension, there is no compelling extension of ΛCDM that could truly accommodate it.

While 95% of the universe’s energy density is invisible, the community studying it is very real. Invisibles now has a long history and is based on three innovative training networks funded by the European Union, as well as two Marie Curie exchange networks. The network includes more than 100 researchers and 50 PhD students spread across key beneficiaries in Europe, as well as America, Asia and Africa – CERN being one of their long-term partners. The energy and enthusiasm of the participants at this conference were palpable, as nature continues to offer deep mysteries that the Invisibles community strives to unravel.

Higgs hunters revel in Run 3 data https://cerncourier.com/a/higgs-hunters-revel-in-run-3-data/ Fri, 07 Nov 2025 12:06:59 +0000 https://cerncourier.com/?p=114905 About 100 researchers gathered in Orsay and Paris from 15 to 17 July 2025, for the 15th Higgs Hunting workshop.

The 15th Higgs Hunting workshop took place from 15 to 17 July at IJCLab in Orsay and LPNHE in Paris. It offered an opportunity to about 100 participants to step back and review the most recent LHC Run 2 and 3 Higgs-boson results, together with some of the latest theoretical developments.

One of the highlights concerned the Higgs boson’s coupling to the charm quark, with the CMS collaboration presenting a new search using Higgs production in association with a top–antitop pair. The analysis, targeting Higgs decays into charm–quark pairs, reached a sensitivity comparable to the best existing direct constraints on this elusive interaction. New ATLAS analyses showcased the impact of the large Run 3 dataset, hinting at great potential for Higgs physics in the years to come – for example, Run 3 data have reduced the uncertainties on the couplings of the Higgs boson to muons and to Zγ by 30% and 38%, respectively. On the di-Higgs front, the expected upper limit on the signal-strength modifier, measured in the bbγγ final state alone, now surpasses in sensitivity the combination of all Run 2 HH channels (see “A step towards the Higgs self-coupling”). The sensitivity to di-Higgs production is expected to improve significantly during Run 3, raising hopes of seeing a signal before the next long shutdown, from mid-2026 to the end of 2029.

Juan Rojo (Vrije Universiteit Amsterdam) discussed parton distribution functions for Higgs processes at the LHC, while Thomas Gehrmann (University of Zurich) reviewed recent developments in general Higgs theory. Mathieu Pellen (University of Freiburg) provided a review of vector-boson fusion, Jose Santiago Perez (University of Granada) summarised the effective field theory framework and Oleksii Matsedonskyi (University of Cambridge) reviewed progress on electroweak phase transitions. In his “vision” talk, Alfredo Urbano (INFN Rome) discussed the interplay between Higgs physics and early-universe cosmology. Finally, Benjamin Fuks (LPTHE, Sorbonne University) presented a toponium model, bringing the elusive romance of top–quark pairs back into the spotlight (CERN Courier September/October 2025 p9).

After a cruise on the Seine in the light of the Olympic Cauldron, participants were propelled toward the future during the European Strategy for Particle Physics session. The ESPPU secretary Karl Jakobs (University of Freiburg) and various session speakers set the stage for spirited and vigorous discussions of the options before the community – in particular, the scenarios to pursue should the FCC programme, the clear plan A, not be realised. The next Higgs Hunting workshop will be held in Orsay and Paris from 16 to 18 September 2026.

All aboard the scalar adventure https://cerncourier.com/a/all-aboard-the-scalar-adventure/ Fri, 07 Nov 2025 12:06:22 +0000 https://cerncourier.com/?p=114911 The first Workshop on the Impact of Higgs Studies on New Theories of Fundamental Interactions took place on the Island of Capri, Italy, from 6 to 10 October 2025.

Since the discovery of the Higgs boson in 2012, the ATLAS and CMS collaborations have made significant progress in scrutinising its properties and interactions. So far, measurements are compatible with an elementary Higgs boson, originating from the minimal scalar sector required by the Standard Model. However, current experimental precision leaves ample room for this picture to change. In particular, the full potential of the LHC and its high-luminosity upgrade to search for a richer scalar sector beyond the Standard Model (BSM) is only beginning to be tapped.

The first Workshop on the Impact of Higgs Studies on New Theories of Fundamental Interactions, which took place on the Island of Capri, Italy, from 6 to 10 October 2025, gathered around 40 experimentalists and theorists to explore the pivotal role of the Higgs boson in the search for BSM physics. Participants discussed the implications of extended scalar sectors and the latest ATLAS and CMS searches, including potential anomalies in current LHC data.

“The Higgs boson has moved from the realm of being just a new particle to becoming a tool for searches for BSM particles,” said Greg Landsberg (Brown University) in an opening talk.

An extended scalar sector can address several mysteries in the SM. For example, it could serve as a mediator to a hidden sector that includes dark-matter particles, or play a role in generating the observed matter–antimatter asymmetry during an electroweak phase transition. Modified or extended Higgs sectors also arise in supersymmetric and other BSM models that address why the 125 GeV Higgs boson is so light compared to the Planck mass – despite quantum corrections that should drive it to much higher scales – and might shed light on the perplexing pattern of fermion masses and flavours.

One way to look for new physics in the scalar sector is modifications in the decay rates, coupling strengths and CP-properties of the Higgs boson. Another is to look for signs of additional neutral or charged scalar bosons, such as those predicted in longstanding two-Higgs-doublet or Higgs-triplet models. The workshop saw ATLAS and CMS researchers present their latest limits on extended Higgs sectors, which are based on an increasing number of model-independent or signature-based searches. While the data so far are consistent with the SM, a few mild excesses have attracted the attention of some theorists.

In diphoton final states, a slight excess of events persists in CMS data at a mass of 95 GeV. Hints of a small excess at a mass of 152 GeV are also present in ATLAS data, while a previously reported excess at 650 GeV has faded after full examination of Run 2 data. Workshop participants also heard suggestions that the Brout–Englert–Higgs potential could allow for a second resonance at 690 GeV.

“We haven’t seen concrete evidence for extended Higgs sectors, but intriguing features appear in various mass scales,” said CMS collaborator Sezen Sekmen (Kyungpook National University). “Run 3 ATLAS and CMS searches are in full swing, with improved triggering, object reconstruction and analysis techniques.”

Di-Higgs production, the rate of which depends on the strength of the Higgs boson’s self-coupling, offers a direct probe of the shape of the Brout–Englert–Higgs potential and is a key target of the LHC Higgs programme. Multiple SM extensions predict measurable effects on the di-Higgs production rate. In addition to non-resonant searches in di-Higgs production, ATLAS and CMS are pursuing a number of searches for BSM resonances decaying into a pair of Higgs bosons, which were shown during the workshop.

Rich exchanges between experimentalists and theorists in an informal setting gave rise to several new lines of attack for physicists to explore further. Moreover, the critical role of the High-Luminosity LHC to probe the scalar sector of the SM at the TeV scale was made clear.

“Much discussed during this workshop was the concern that people in the field are becoming demotivated by the lack of discoveries at the LHC since the Higgs, and that we have to wait for a future collider to make the next advance,” says organiser Andreas Crivellin (University of Zurich). “Nothing could be further from the truth: the scalar sector is not only the least explored of the SM and the one with the greatest potential to conceal new phenomena, but one that the High-Luminosity LHC will enable us to explore in detail.”

Subtleties of quantum fields https://cerncourier.com/a/subtleties-of-quantum-fields/ Fri, 07 Nov 2025 10:10:15 +0000 https://cerncourier.com/?p=114958 Uncovering Quantum Field Theory and the Standard Model: From Fundamental Concepts to Dynamical Mechanisms, by Wolfgang Bietenholz and Uwe-Jens Wiese, Cambridge University Press.

Quantum field theory unites quantum physics with special relativity. It is the framework of the Standard Model (SM), which describes the electromagnetic, weak and strong interactions as gauge forces, mediated by photons, gluons and W and Z bosons, plus additional interactions mediated by the Higgs field. The success of the SM has exceeded all expectations, and its mathematical structure has led to a number of impressive predictions. These include the existence of the charm quark, discovered in 1974, and the existence of the Higgs boson, discovered in 2012.

Uncovering Quantum Field Theory and the Standard Model, by Wolfgang Bietenholz of the National Autonomous University of Mexico and Uwe-Jens Wiese of the University of Bern, explains the foundations of quantum field theory in great depth, from classical field theory and canonical quantisation to regularisation and renormalisation, via path integrals and the renormalisation group. What really makes the book special are the connections it repeatedly draws to statistical mechanics and condensed-matter physics.

Riding a wave

The section on particles and “wavicles” is highly original. In quantum field theory, quantised excitations of fields cannot be interpreted as point-like particles. Unlike massive particles in non-relativistic quantum mechanics, these excitations have non-trivial localisation properties, which apply to photons and electrons alike. To emphasise the difference between non-relativistic particles and wave excitations in a relativistic theory, one may refer to them as “wavicles”, following Frank Wilczek. As discussed in chapter 3, an intuitive understanding of wavicles can be gained by the analogy to phonons in a crystal. Another remarkable feature of charged fields is the infinite extension of their excitations due to their Coulomb field. This means that any charged state necessarily includes an infrared cloud of soft gauge bosons. As a result, they cannot be described by ordinary one-particle states and are referred to as “infraparticles”. Their properties, along with the related “superselection sectors,” are explained in the section on scalar quantum electrodynamics.

Uncovering Quantum Field Theory and the Standard Model

The SM can be characterised as a non-abelian chiral gauge theory. Bietenholz and Wiese explain the various aspects of chirality in great detail. Anomalies in global and local symmetries are carefully discussed in the continuum as well as on a space–time lattice, based on the Ginsparg–Wilson relation and Lüscher’s lattice chiral symmetry. Confinement of quarks and gluons, the hadron spectrum, the parton model and hard processes, chiral perturbation theory and deconfinement at high temperatures uncover perturbative and non-perturbative aspects of quantum chromodynamics (QCD), the theory of strong interactions. Numerical simulations of strongly coupled lattice Yang–Mills theories are very demanding. During the past four decades, much progress has been made in turning lattice QCD into a quantitatively reliable tool by controlling statistical and systematic uncertainties, which is clearly explained to the critical reader. The treatment of QCD is supplemented by an introduction to the electroweak theory covering the Higgs mechanism, electroweak symmetry breaking and flavour physics of quarks and leptons.

The number of quark colours, which is three in nature, plays a prominent role in this book. At the quantum level, gauge symmetries can fail due to anomalies, rendering a theory inconsistent. The SM is free of anomalies, but this only works because of a delicate interplay between quark and lepton charges and the number of colours. An important example of this interplay is the decay of the neutral pion into two photons. The subtleties of this process are explained in chapter 24.

Most remarkably, the SM predicts baryon-number-violating processes. This arises from the vacuum structure of the weak SU(2) gauge fields, which involves topologically distinct field configurations. Quantum tunnelling between them, together with the anomaly in the baryon–number current, leads to baryon–number violating transitions, as discussed in chapter 26. Similarly, in QCD a non-trivial topology of the gluon field leads to an explicit breaking of the flavour-singlet axial symmetry and, subsequently, to the mass of the η′ meson. Moreover, the gauge field topology gives rise to an additional parameter in QCD, the vacuum-angle θ. Since this parameter induces an electric dipole moment of the neutron that satisfies a strong upper bound, this confronts us with the strong-CP problem: what constrains θ to be so tiny that the experimental upper bound on the neutron dipole moment is satisfied? A solution may be provided by the Peccei–Quinn symmetry and axions, as discussed in a dedicated chapter.

By analogy with the QCD vacuum angle, one can introduce a CP-violating electromagnetic parameter θ into the SM – even though it has no physical effect in pure QED. This brings us to a gem of the book: its discussion of the Witten effect. In the presence of such a θ, the electric charge of a magnetic monopole becomes θ/2π plus an integer. This leads to the remarkable conclusion that for non-zero θ, all monopoles become dyons, carrying both electric and magnetic charge.
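
In the normalisation implied by the text (stated here as an assumption, since sign and unit conventions vary), the allowed electric charges of a monopole carrying one unit of magnetic charge in a θ-vacuum are

```latex
q \;=\; e\left(n + \frac{\theta}{2\pi}\right), \qquad n \in \mathbb{Z},
```

so that any non-zero θ gives every monopole an electric charge and turns it into a dyon.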

The SM is an effective low-energy theory and we do not know at what energy scale elements of a more fundamental theory will become visible. Its gauge structure and quark and lepton content hint at a possible unification of the interactions into a larger gauge group, which is discussed in the final chapter. Once gravity is included, one is confronted with a hierarchy problem: the question of why the electroweak scale is so small compared to the Planck mass, at which the Compton wavelength of a particle and its Schwarzschild radius coincide. Hence, at Planck energies quantum gravitational effects cannot be ignored. Perhaps, solving the electroweak hierarchy puzzle requires working with supersymmetric theories. For all students and scientists struggling with the SM and exploring possible extensions, the nine appendices will be a very valuable source of information for their research.

Einstein’s entanglement https://cerncourier.com/a/einsteins-entanglement/ Fri, 07 Nov 2025 10:09:15 +0000 https://cerncourier.com/?p=114965 Einstein’s Entanglement: Bell Inequalities, Relativity, and the Qubit, by William Stuckey, Michael Silberstein and Timothy McDevitt, Oxford University Press.

Quantum entanglement is the quantum phenomenon par excellence. Our world is a quantum world: the matter that we see and touch is the most obvious consequence of quantum physics, and it would not exist the way it does in a purely classical world. However, in modern parlance, when we talk about quantum sensors or quantum computing, what makes these things "quantum" is the use of entanglement. Entanglement was first discussed by Einstein and Schrödinger, and became famous with the celebrated EPR (Einstein–Podolsky–Rosen) paper of 1935.

The magic of entanglement

In an entangled particle system, some properties have to be assigned to the system itself and not to individual particles. When a neutral pion decays into two photons, for example, conservation of angular momentum requires their total spin to be zero. Since the photons travel in opposite directions in the pion’s rest frame, in order for their spins to cancel they must share the same “helicity”. Helicity is the spin projection along the direction of motion, and only two states are possible: left- or right-handed. If one photon is measured to be left-handed, the other must be left-handed as well. The entangled photons must be thought of as a single quantum object: neither do the individual particles have predefined spins nor does the measurement performed on one cause the other to pick a spin orientation. Experiments in more complicated systems have ruled these possibilities out, at least in their simplest incarnations, and this is exactly where the magic of entanglement begins.
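
Schematically (up to phase conventions, and not in the notation of the book under review), the two-photon system from π0 → γγ can be written as a single entangled helicity state,

\[
|\psi\rangle = \frac{1}{\sqrt{2}}\left(|R\rangle_1 |R\rangle_2 - |L\rangle_1 |L\rangle_2\right),
\]

in which neither photon carries a definite helicity on its own, yet a measurement on one photon fixes the outcome for the other.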

Quantum entanglement is the main topic of Einstein’s Entanglement by William Stuckey, Michael Silberstein and Timothy McDevitt, all currently teaching at Elizabethtown College, Pennsylvania. The trio have complementary expertise in physics, philosophy and maths, and this is not their first book on the foundations of physics. They aim to explain why entanglement is so puzzling to physicists, and to survey the various ways that have been employed over the years to explain (or even explain away) the phenomenon. They also want to introduce readers to their own idea of how to solve the riddle and to argue for its merits.

Why is entanglement so puzzling to physicists, and what has been employed to explain the phenomenon?

General readers may struggle in places. The book does have accessible chapters, for example one at the start with a quantum-gloves experiment – a nice way to introduce the reader to the problem – as well as a chapter on special relativity. Much of the discussion of quantum mechanics, however, uses advanced concepts, such as Hilbert space and the Bloch sphere, that belong in an undergraduate course on quantum mechanics. Philosophical terminology, such as "wave-function realism", is also used liberally. The explanations and discussion are of good quality, and a reader interested in the interpretations of quantum mechanics, with some background in physics, has a lot to gain. The authors quote copiously from a superb list of references and include many interesting historical facts that make reading the book very entertaining.

In general, the book criticises constructive approaches to interpreting quantum mechanics, which explicitly postulate physical mechanisms. In the example of neutral-pion decay given above, the idea that the measurement of one photon causes the other to pick a spin would require a constructive explanation. Constructive approaches can be contrasted with principle explanations, which may involve, for example, invoking an overarching symmetry. To take an example used many times in the book, the relativity principle can explain Lorentz length contraction without the need for a physical mechanism that contracts the bodies, which is what a constructive explanation would demand.

The authors claim that the conceptual issues with entanglement can be solved by sticking to principle explanations – in particular, by demanding that Planck’s constant be measured to be the same in all inertial reference frames. Whether this simple suggestion is adequate to explain the mysteries of quantum mechanics, I will leave to the reader. Seneca wrote in his Natural Questions that "our descendants will be astonished at our ignorance of what to them is obvious". If the authors are correct, entanglement may prove to be a case in point.

John Peoples 1933–2025 https://cerncourier.com/a/john-peoples-1933-2025/ Fri, 07 Nov 2025 10:04:42 +0000 https://cerncourier.com/?p=114941 John Peoples, the third director of Fermilab, who guided the lab through one of the most critical periods in its history, passed away on 25 June 2025.

John Peoples

John Peoples, the third director of Fermilab, who guided the lab through one of the most critical periods in its history, passed away on 25 June 2025. Born in New York City on 22 January 1933, John received his bachelor’s degree in electrical engineering from the Carnegie Institute of Technology (now Carnegie Mellon University) in 1955. After several years at the Glenn L. Martin Company, John entered Columbia University, where he received his PhD in physics in 1966 for the measurement of the Michel parameter in muon decay under the direction of Allan Sachs. This was followed by a teaching and research position at Cornell University and relocation to Fermilab, initially on sabbatical, in 1971.

John officially joined the Fermilab staff in 1975 as head of the Research Division. His tenure included the discovery of the upsilon particle (b-quark bound state) by Leon Lederman’s team in 1977. He also held responsibilities for the upgrading of the experimental areas to accept beams of up to 1 TeV in anticipation of the completion of the Fermilab Tevatron.

In 1981, following Lederman’s decision to utilise the Tevatron as a proton–antiproton collider, John was appointed head of the TeV-I Project, with responsibility for the construction of the Antiproton Source and the collision hall for the CDF detector. Under John’s leadership, a novel design was developed, building on the earlier pioneering work done at CERN for antiproton accumulation based on stochastic cooling, and proton–antiproton collisions were achieved in the Tevatron four years later, in 1985.

Tireless commitment

John succeeded Lederman to become Fermilab’s third director in July 1989, shortly after the decision to locate the Superconducting Super Collider (SSC) in Waxahachie, Texas, which created immense challenges for Fermilab’s future. John guided the US community to a plan for a new accelerator, the Main Injector (and ultimately the Recycler), that could support a high-luminosity collider programme for the decade of SSC construction while simultaneously providing high-intensity extracted beams for a future neutrino programme that could sustain Fermilab well beyond the SSC’s startup. The cancellation of the SSC in 1993 was a seismic event for US and global high-energy physics, and ensured the Tevatron’s role as the highest-energy collider in the world for almost two more decades. John was asked to lead the termination phase of the SSC lab. In 1994/1995, as director of both Fermilab and the SSC, he worked on this painful task with a special emphasis on helping the many suddenly unemployed people find new career paths.

During John’s tenure as director, Fermilab produced many important physics results. In 1995, the Tevatron Collider experiments, CDF and DØ, announced the discovery of the top quark, the final quark predicted in the Standard Model of particle physics, with a mass more than 175 times that of the proton. To ensure that the experiments could analyse their data quickly and efficiently, John supported replacing costly mainframe computers with "clusters" of inexpensive microprocessors developed in industry for personal computers and, later, laptops and phones. The final fixed-target run, with 800 GeV extracted beam in 1997 and 1998, helped resolve an important and long-standing problem in CP violation in kaon decays and led to the discovery of the tau neutrino.

His leadership both enhanced international collaboration and retained a prominent role for Fermilab in collider physics

From 1993 to 1997, John served as chair of the International Committee for Future Accelerators (ICFA). He stepped down as Fermilab director in 1999, after two terms. In 2010, he received the Robert R. Wilson Prize for Achievement in the Physics of Particle Acceleration from the American Physical Society.

Under John’s influence, there were frequent personnel exchanges between Fermilab and CERN throughout the 1980s, as Fermilab staff benefited from CERN’s experience with antiproton production and CERN benefited from Fermilab’s experience with the operations of a superconducting accelerator. These exchanges extended into the 1990s, and following the termination of the SSC, John was instrumental in securing support for US participation in the LHC accelerator and detector projects. His leadership both enhanced international collaboration and retained a prominent role for Fermilab in collider physics after the Tevatron completed operations in 2011.

During the 1980s, astrophysics became an important contributor to our knowledge of particle physics and required more ambitious experiments with strong synergies with the latest round of HEP experiments. In 1991, John formed the Experimental Astrophysics Group at Fermilab. This led to its strong participation in the Sloan Digital Sky Survey (SDSS), the Pierre Auger Cosmic Ray Observatory, the Cryogenic Dark Matter Search (CDMS) and the Dark Energy Survey (DES), of which John became director in 2003. John’s vision of a vibrant community of particle physicists, astrophysicists and cosmologists exploring the inner space-outer space connection is now reality.

Those of us who had the privilege of knowing and working with John were challenged by his intense work ethic and by the equally intense flood of new ideas for running and improving our programmes. He was a gifted and dedicated experimental physicist, skilled in accelerator science, an expert in superconducting magnet design and technology, a superb manager, and a great recruiter and mentor of young engineers and scientists, including the authors of this article. We will miss him!

Ole Hansen 1934–2025 https://cerncourier.com/a/ole-hansen-1934-2025/ Fri, 07 Nov 2025 10:04:03 +0000 https://cerncourier.com/?p=114946 Ole Hansen, a leading Danish nuclear-reaction physicist, passed away on 11 May 2025.

Ole Hansen, a leading Danish nuclear-reaction physicist, passed away on 11 May 2025, three days short of his 91st birthday. His studies of nucleon transfer between a projectile nucleus and a target nucleus made it possible to determine the bound states of either or both nuclei and to confront them with the unified theory of nuclear structure developed by the Danish Nobel Prize winners Aage Bohr and Ben Mottelson. He conducted experiments at Los Alamos in the US and Aldermaston in the UK, among other laboratories, and developed a deep intuitive relationship with Clebsch–Gordan coefficients.

Together with Ove Nathan, Ole oversaw a proposal to build a large tandem accelerator at the Niels Bohr Institute department located at Risø, near Roskilde. The government and research authorities had supported the costly project, but it was scrapped on an afternoon in August 1978 as a last-minute saving to help establish a coalition between the two parties across the centre of Danish politics. Ole’s disappointment was enormous: he decided to take up an offer at Brookhaven National Laboratory (BNL) to continue his nuclear work there, while Nathan threw himself into university politics and later became rector of the University of Copenhagen.

Deep exploration

Ole sent his resignation as a professor at the University of Copenhagen to the Queen – a civil servant had to do so at the time – but was almost immediately confronted with demands for cutbacks at BNL, which would stop the research programme with the tandem accelerator there. Ole did not withdraw his resignation, but together with US colleagues proposed a research programme at very high energies by injecting ions from the tandem into the existing Alternating Gradient Synchrotron (AGS), thereby achieving energies in the nucleon–nucleon centre-of-mass system of up to 5 GeV. This was the start of the exploration of the deeper structure of nuclear matter, which is revealed as a system of quarks and gluons at temperatures of billions of degrees. This work later led to the construction of the first dedicated heavy-ion collider, the Relativistic Heavy Ion Collider (RHIC), in the US. Ole himself participated in the E802 and E866 experiments at BNL/AGS, and in the BRAHMS experiment at RHIC.

Ole will be remembered as the first director of the unified Niels Bohr Institute and for establishing the Danish National Research Foundation

Ole will also be remembered as the first director, called back from the US, of the unified Niels Bohr Institute, which was established in 1993 as a fusion of the physics, astronomy and geophysics departments surrounding the Fælledparken commons in Copenhagen, after an international panel chaired by him had recommended a merger. Ole realised that merging the departments was necessary to create the financial room for manoeuvre needed to hire new and younger researchers again. He left his mark on the new institution, which initially had to reconcile the very different cultures of the Blegdamsvej, Ørsted and Geophysics institutes. He approached the task efficiently, but with a good understanding of, and respect for, the scientific perspectives and the individual researchers.

Back in Denmark, Ole played a significant role in establishing the competitive research system we know today, including the Danish National Research Foundation (DNRF), of which he was vice-chair in its first years, and in streamlining the institute’s research and establishing several new areas.

Strong interests

Despite the scale of all his administrative tasks, Ole maintained a lively interest in research and actively supported the establishment of the Centre for CERN Research (now the NICE National Instrument Center) together with the author of this obituary. He was also a member of the CERN Council during the exciting period when the LHC took shape.

Ole will be remembered as an open-minded, energetic and visionary man with an irreverent sense of humour that some feared but others greatly appreciated. Despite his modest manner, he influenced his colleagues with his strong interest in new physics and his sharp scepticism. If consulted, he would probably turn his nose up at the word “loyal”, but he was ever a good and loyal friend. He is survived by his wife, Ruth, and four children.

Michele Arneodo 1959–2025 https://cerncourier.com/a/michele-arneodo-1959-2025/ Fri, 07 Nov 2025 10:03:18 +0000 https://cerncourier.com/?p=114949 Michele Arneodo, professor of physics at the University of Piemonte Orientale and chairperson elect of the CMS Collaboration Board, passed away on 12 August 2025.

Michele Arneodo, professor of physics at the University of Piemonte Orientale and chairperson elect of the CMS Collaboration Board, passed away on 12 August 2025. He was 65.

Born in Turin in 1959, Michele graduated in physics from the University of Torino in 1982. He was awarded a Fulbright Fellowship to pursue graduate studies at Princeton University, where he received his MA in 1985 and his PhD in 1992. He began his career as a staff researcher at INFN Torino, before moving to academia as an associate professor at the University of Calabria and then, from 1995, at the University of Piemonte Orientale in Novara, where he became full professor in 2002.

Michele’s research career began with the European Muon Collaboration (NA2 and NA9) and the New Muon Collaboration (NA37) at CERN, investigating the structure of nucleons through the deep inelastic scattering of muons. He went on to play a leading role in the ZEUS experiment at DESY’s HERA collider, focusing on the diffractive physics programme, coordinating groups in Torino and Novara, and overseeing the operation of the Leading Proton Spectrometer. Awarded an Alexander von Humboldt fellowship, he worked at DESY between 1996 and 1999.

With the start of the LHC era, Michele devoted his efforts to CMS, becoming a central figure in diffractive physics and the relentless force behind the construction of the CMS Precision Proton Spectrometer (PPS) and the subsequent merging of the TOTEM and CMS collaborations. He was convener of the diffractive physics group, served on the CMS Publication and Style committees, and from 2014 chaired the Institution Board of the CMS PPS, where he was also resource manager and INFN national coordinator. He had been appointed as chairperson of the CMS Collaboration Board, a role that he was due to begin this year.

A central figure in diffractive physics and the relentless force behind the construction of the Precision Proton Spectrometer

Teaching was central to Michele’s vocation. At the University of Piemonte Orientale, he developed courses on radiation physics for medical students and radiology specialists, building bridges between particle physics and medical applications. He was also widely recognised as a dedicated mentor, always attentive to the careers of younger collaborators.

We will remember Michele as a very talented physicist and a genuinely kind person, who had the style and generosity of a bygone era. Always approachable, he could be found with a smile, a sincere interest in others’ well-being, and a delicate sense of humour that brought lightness to professional exchanges. His students and collaborators valued his constant encouragement and his passion for transmitting enthusiasm for physics and science.

While leaving a lasting mark on physics and on the institutions he served, Michele also cultivated enduring friendships and dedicated himself fully to his family, to whom the thoughts of the CMS and wider CERN communities go at this difficult time.

Michele, “Rest forever here in our hearts”.

Miro Preger 1946–2025 https://cerncourier.com/a/miro-preger-1946-2025/ Fri, 07 Nov 2025 10:02:08 +0000 https://cerncourier.com/?p=114952 Miro Andrea Preger, a distinguished accelerator physicist in the Accelerator Division of the Frascati National Laboratories, passed away on 1 September 2025. 

Miro Andrea Preger, a distinguished accelerator physicist in the Accelerator Division of the Frascati National Laboratories (LNF), passed away on 1 September 2025. 

Originally an employee of the Italian National Committee for Nuclear Energy (CNEN), Miro had a long career as a key figure in the INFN institutions.

He made his mark at the pioneering ADONE collider in the 1970s, optimising its performance, developing an innovative luminosity monitor, and improving the machine optics and injection system. Later he served as the director of ADONE, participating in all second-generation experiments, colliding beams for particle physics and producing synchrotron radiation and gamma rays for nuclear physics.

Beyond LNF, Miro played an important role in the design of the Italian synchrotron radiation source ELETTRA in Trieste, and the ESRF in Grenoble; he also collaborated on many other accelerator projects, including CTF3 and CLIC at CERN.

Miro made outstanding contributions to the DAΦNE collider project, leading the realisation of the electron–positron injection system

Miro held many institutional roles, and as head of the Accelerator Physics Service, he taught the art and science of accelerators to many young scientists, with clarity, patience and dedication. As a mentor, he leaves a legacy of accelerator experts who have ensured the success of many LNF initiatives.

Miro made outstanding contributions to the DAΦNE collider project from the beginning, leading the design and realisation of the entire electron–positron injection system. He was deeply involved in the very challenging commissioning and in achieving the high luminosity required by the experiments.

Besides his characteristic dynamism, one of Miro’s distinctive traits was his ability to foster harmonious collaboration among technicians, technologists and researchers.

Away from physics, Miro was an excellent tennis player and skier, along with being a skilled sailor, activities that he often shared with colleagues.

CEPC matures, but approval is on hold https://cerncourier.com/a/cepc-matures-but-approval-is-on-hold/ Sun, 26 Oct 2025 10:15:35 +0000 https://cerncourier.com/?p=114683 The Circular Electron–Positron Collider (CEPC), a 100-km electron–positron “Higgs factory” proposed in China, has reached the technical-design stage but will not be included for approval or construction in the country’s 15th Five-Year Plan (2026–2030).

CEPC reference detector

In October, the Circular Electron–Positron Collider (CEPC) study group completed its full suite of technical design reports, marking a key step for China’s Higgs-factory proposal. However, CEPC will not be considered for inclusion in China’s next five-year plan (2026–2030).

“Although our proposal that CEPC be included in the next five-year plan was not successful, IHEP will continue this effort, which an international collaboration has developed for the past 10 years,” says study leader Wang Yifang, of the Institute of High Energy Physics (IHEP) in Beijing. “We plan to submit CEPC for consideration again in 2030, unless FCC is officially approved before then, in which case we will seek to join FCC, and give up CEPC.”

Electroweak precision

CEPC has been under development at IHEP since shortly after the discovery of the Higgs boson at CERN in 2012. To enable precision studies of the new particle, Chinese physicists formally proposed a dedicated electron–positron collider in September 2012. Sharing a concept similar to the Future Circular Collider (FCC) proposed in parallel at CERN, CEPC’s high-luminosity collisions would greatly improve precision in measuring Higgs and electroweak processes.

“CEPC is designed as a multi-purpose particle factory,” explains Wang. “It would not only serve as an efficient Higgs factory but would also precisely study other fundamental particles, and its tunnel can be re-used for a future upgrade to a more powerful super proton–proton collider.”

Following completion of the Conceptual Design Report in 2018, which defined the physics case and baseline layout, the CEPC collaboration entered a detailed technical phase to validate key technologies and complete subsystem designs. The accelerator Technical Design Report (TDR) was released in 2023, followed in October 2025 by the reference detector TDR, providing a mature blueprint for both components.

Although our proposal that CEPC be included in the next five-year plan was not successful, IHEP will continue this effort

Wang Yifang

Compared to the 2018 detector concept, the new technical report proposes several innovations. An electromagnetic calorimeter based on orthogonally oriented crystal bars and a hadronic calorimeter based on high-granularity scintillating glass have been optimised for advanced particle-flow algorithms, improving their energy resolution by a factor of 10 and a factor of two, respectively. A tracking detector employing AC-coupled low-gain avalanche-diode technology will enable simultaneous 10 µm position and 50 ps time measurements, enhancing vertex reconstruction and flavour tagging. Meanwhile, a readout chip developed in 55 nm technology will achieve state-of-the-art performance at 65% power consumption, enabling better resolution, large-scale integration and reduced cooling-pipe material. Among other advances, a new type of high-density, high-yield scintillating glass opens the possibility of a fully absorbing hadronic calorimeter.

To ensure the scientific soundness and feasibility of the design, the CEPC Study Group established an International Detector Review Committee in 2024, chaired by Daniela Bortoletto of the University of Oxford.

Design consolidation

“After three rounds of in-depth review, the committee concluded in September 2025 that the Reference Detector TDR defines a coherent detector concept with a clearly articulated physics reach,” says Bortoletto. “The collaboration’s ambitious R&D programme and sustained technical excellence have been key to consolidating the major design choices and positioning the project to advance from conceptual design into integrated prototyping and system validation.”

CEPC’s technical advance comes amid intense international interest in participating in a Higgs factory. Alongside the circular FCC concept at CERN, Higgs factories with linear concepts have been proposed in Europe and Japan, and both Europe and the US have named constructing or participating in a Higgs factory as a strategic priority. Following China’s decision to defer CEPC, attention now turns to Europe, where the ongoing update of the European Strategy for Particle Physics will prioritise recommendations for the laboratory’s flagship collider beyond the HL-LHC. Domestically, China will consider other large science projects for the 2026 to 2030 period, including a proposed Super Tau–Charm Facility to succeed the Beijing Electron–Positron Collider II.

With completion of its core technical designs, CEPC now turns to engineering design.

“The newly released detector report is the first dedicated to a circular electron–positron Higgs factory,” says Wang. “It showcases the R&D capabilities of Chinese scientists and lays the foundation for turning this concept into reality.”

Europe’s collider strategy takes shape https://cerncourier.com/a/europes-collider-strategy-takes-shape/ Tue, 09 Sep 2025 08:22:08 +0000 https://cerncourier.com/?p=113895 How are community inputs and debates shaping the ongoing update to the European strategy for particle physics? The Courier consults two scientists tasked with representing CERN Member States and the high-energy-physics community.

Costas Fountas

A community-driven process is building consensus

CERN Council president Costas Fountas sums up the vision of CERN’s Member States.

In March 2024, the CERN Council called on the particle-physics community to develop a visionary and concrete plan that greatly advances human knowledge in fundamental physics through the realisation of the next flagship project at CERN. This community-driven strategy will be submitted to the CERN Council in March 2026, leading to discussions among CERN Member States. The CERN Council will update the European strategy for particle physics (ESPP) based on these deliberations, with a view to approving CERN’s next flagship collider in 2028.

This third update to the ESPP builds on a process initiated by the CERN Council in 2006 and updated in 2013 and 2020. It is designed to convey to the CERN Council the views of the community on strategic questions that are key to the future of high-energy physics (HEP). The process involves all CERN Member States and Associate Member States, with the goal of developing a roadmap for the field for many years to come. The CERN Council asked that the newly updated ESPP should take into account the status of implementation of the 2020 ESPP, recent accomplishments at the LHC and elsewhere, progress in the construction of the High-Luminosity LHC (HL-LHC), the outcome of the Future Circular Collider (FCC) Feasibility Study, recent technological developments in accelerator, detector and computing technology, and the international landscape of the field. Scientific inputs were requested from across the community.

On behalf of the CERN Council, I would like to thank the high-energy community for understanding that this is a critical time for our field and participating very actively. Throughout this time, the various national groups have held a large number of meetings to debate which would be the best accelerator to be hosted at CERN after the HL-LHC. They also discussed and proposed alternative options as requested by the CERN Council, which followed the process closely.

By June 2025 we were delighted to hear from the ESPP secretariat that the participation of the community had been overwhelming and that a very large number of proposals had been submitted (CERN Courier May/June 2025 p8). These submissions show a broad consensus that CERN should be maintained as the global centre for collider physics through the realisation of a new flagship project. Europe’s strategy should be ambitious, innovative and forward looking. An overwhelming majority of the communities from CERN Member States express their strong support for the FCC programme, starting with an electron–positron collider (FCC-ee) as a first stage. Their strong support is largely based on its superb physics potential and its long-term prospects, given the potential to explore the energy frontier with a hadron collider (FCC-hh) following a precision era at FCC-ee.

CERN’s future flagship collider – Member State preferences

Based on an unofficial analysis by CERN Courier of national submissions to the 2026 update to the European strategy for particle physics. Each national submission is accorded equal weight, with that weight divided equally when multiple options are specified. With the deadline for national submissions passing before Slovenia acceded as CERN’s 25th Member State, 24 national submissions are included. These data are not endorsed by the authors, the CERN Council, the strategy secretariat or CERN management.
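
The weighting rule described in the caption amounts to a simple tally. The sketch below only illustrates that arithmetic, using hypothetical placeholder entries rather than the actual national submissions:

from collections import defaultdict

# Hypothetical placeholder submissions - not the actual 24 national inputs.
# Each country lists one or more preferred collider options.
submissions = {
    "Country A": ["FCC"],
    "Country B": ["FCC", "LCF"],  # a unit weight split equally between two options
    "Country C": ["CLIC"],
}

def tally(subs):
    """Give each submission equal (unit) weight, divided equally among its listed options."""
    totals = defaultdict(float)
    for options in subs.values():
        share = 1.0 / len(options)
        for option in options:
            totals[option] += share
    return dict(totals)

print(tally(submissions))  # -> {'FCC': 1.5, 'LCF': 0.5, 'CLIC': 1.0}

The chart above applies the same rule to the 24 national submissions.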

This strategy coherently develops the vision of ESPP 2020, which recommended to the CERN Council that an electron–positron Higgs factory be the highest-priority next collider. The 2020 ESPP update further recommended that Europe, together with its international partners, should investigate the technical and financial feasibility of a future hadron collider at CERN with a centre-of-mass energy of at least 100 TeV and with an electron–positron Higgs and electroweak factory as a possible first stage. Such a feasibility study of the colliders and related infrastructure should be established as a global endeavour and be completed on the timescale of the next strategy update.

Based on ESPP 2020, the CERN Council mandated the CERN management to undertake a feasibility study for the FCC and approved an initial budget of CHF 100 million over a five-year period. Throughout the past five years, the FCC feasibility study was undertaken by CERN management under the oversight of the CERN Council. Council heard presentations on its progress at every session and carefully scrutinised a very successful mid-term review (CERN Courier March/April 2024 p25). The FCC collaboration completed the FCC feasibility study ahead of schedule and summarised the results of the study in a three-volume report that was released in March 2025 (CERN Courier May/June 2025 p8). The results are currently under review by panels which will scrutinise both the scientific aspects of the project as well as its budget estimates. The project will be presented to the Scientific Policy and Finance committees in September 2025 and to the CERN Council in November 2025.

It is rewarding to see that the scientific opinion of the community is in sync with ESPP 2020, the decision of the CERN Council to initiate the FCC feasibility study, and the efforts of CERN management to steer and complete it. This is a sign of the strength of the HEP community. While respecting a healthy diversity of opinion, a clear consensus has emerged across the community that the FCC is the highest priority project.

Crucially, however, the CERN Council requested that the community provide not only the scientifically most attractive option, but also hierarchically ordered alternative options. Specifically, the Council requested that the strategy update should include the preferred option for the next collider at CERN and prioritised alternative options to be pursued if the chosen preferred plan turns out not to be feasible or competitive. No consensus has yet been reached here; however, two projects have the required readiness to be candidates for alternative programmes: the Linear Collider Facility (LCF, 250 GeV) and the Compact Linear Collider (CLIC, 380 GeV), with additional R&D required in the latter case. A third proposal, LEP3, also requires further study, but could be a promising candidate for a Higgs factory in the existing LEP/LHC tunnel, albeit at a significantly reduced luminosity relative to FCC-ee.

On behalf of the CERN Council, I would like to thank the high-energy community for understanding that this is a critical time for our field and participating very actively

The R&D for several of these projects has been supported by CERN for a long time. Research on linear colliders has been an active programme for the past 30 years and has received significant support, not only ensuring their readiness for consideration as future HEP facilities, but also sparking an exceptional R&D programme in the applications of fundamental research, for example in accelerators for cancer treatment (CERN Courier July/August 2024 p46). Over the past five years, CERN has also invested in muon colliders and hosts the International Muon Collider Collaboration. CERN also leads research into the application of plasma-wakefield acceleration for fundamental physics, having supported the AWAKE experiment for 10 years now (CERN Courier May/June 2024 p25).

The next milestone for updating the ESPP is 14 November: the deadline for submission of the final national inputs. The final drafting session of the strategy update will then take place from 1 to 5 December 2025 at Monte Verità Ascona, where the community recommendations will be finalised. These will be presented to the CERN Council in March 2026 and discussed at a dedicated meeting of the CERN Council in May 2026 in Budapest.

Meanwhile, a key milestone for community deliberations recently passed. The full spectrum of community inputs was presented and debated at an Open Symposium held in Venice in June. As strategy secretary Karl Jakobs reports on the following pages, the symposium was a smashing success with lively discussions and broad participation from our community. On behalf of Council, I would like to convey my sincere thanks to the Italian delegation for the superb organisation of the symposium.

Costas Fountas has served as president of the CERN Council since his appointment in January this year, and as the Greek scientific delegate to the Council since 2016. A professor of physics at the University of Ioannina and longstanding member of the CMS collaboration, he previously served as vice-president of the Council from 2022 to 2024. (Image credit: M Brice, CERN)

 

Karl Jakobs

Venice symposium debates decades of collider strategy

Strategy secretary Karl Jakobs reports from a vibrant Open Symposium in Venice.

The Open Symposium of the European Strategy for Particle Physics (ESPP) brought together more than 600 physicists from almost 40 countries in Venice, Italy, from 23 to 27 June, to debate the future of European particle physics. The focus was the discussion of the next large-scale accelerator project at CERN to follow the HL-LHC, which is scheduled to operate until the end of 2041. The strategy update should – according to the remit defined by the CERN Council – define a preferred option for the next collider and prioritised alternative options to be pursued if the preferred plan turns out not to be feasible or competitive. In addition, the strategy update should indicate areas of priority for exploration complementary to colliders and other experiments to be considered at CERN and at other European laboratories, as well as for participation in projects outside Europe.

The Open Symposium is an important step in the strategy process. The aim is to involve the full community in discussions of the 266 scientific contributions that had been submitted by the community to the ESPP process before the symposium (CERN Courier May/June 2025 p8).

In the opening session of the symposium, CERN Director-General Fabiola Gianotti summarised the impressive achievements of the CERN community in implementing the recommendations of the 2020 update to the ESPP. Eric Laenen (Nikhef) stressed that the outstanding questions in particle physics require a broad and diverse experimental programme, including the HL-LHC, a new flagship collider and a wide variety of other experiments, among them those in neighbouring fields. A broad consensus emerged that a future collider programme should be realised that can fully leverage both precision and energy, covering the widest range of observables at different energy scales. To match experimental precision, significant progress on the theoretical side is also required, in particular regarding higher-order calculations.

An important part of the symposium was devoted to presentations of possible future large-scale accelerator projects. Detailed presentations were given on the FCC-ee and FCC-hh colliders, either in the integrated FCC programme or proceeding directly to FCC-hh as a standalone realisation at an earlier time. Linear colliders were presented as alternative options, with both a Linear Collider Facility (LCF) based on the design of the International Linear Collider (ILC) and CLIC considered. In addition, smaller collider options based on re-using the LHC/LEP tunnel were presented. A first proposal, LEP3, would collide electrons and positrons at centre-of-mass energies up to 230 GeV, while a second, LHeC, proposes electron–proton collisions at one interaction point of the LHC. LHeC would require the construction of an additional energy-recovery linac to accelerate the electrons.

Open symposium

Moving focus from the precision frontier to the energy frontier, several ways to reach the 10 TeV "parton scale" were presented. (Comparisons of the energy reach of hadron and lepton colliders must refer to parton–parton centre-of-mass energies, where partons are the pointlike constituents of hadrons, because only a fraction of the energy of collisions between composite particles can be used to probe the existence of new particles and fields.) If FCC-ee is realised, a natural path is to follow it with proton–proton collisions at centre-of-mass energies in the range of 85 to 120 TeV, depending on the available high-field magnet technology. As an alternative, a muon collider could provide a path towards high-energy lepton collisions; however, the significant technological challenges, such as six-dimensional cooling in transverse and longitudinal phase space, and other aspects of the various acceleration steps, still need to be demonstrated. Likewise, plasma-based acceleration techniques for electrons and positrons capable of exceeding the 1 TeV energy scale have yet to be demonstrated.
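
The parenthetical point about parton-level energies can be made explicit with a standard kinematic relation (quoted here for orientation, not tied to any one proposal): for partons carrying momentum fractions x1 and x2 of their parent protons, the parton–parton centre-of-mass energy is

\[
\sqrt{\hat{s}} = \sqrt{x_1 x_2\, s}\,,
\]

so that, with typical momentum fractions of order 0.1, a proton–proton collider with √s ≈ 100 TeV probes parton–parton collisions at around the 10 TeV scale.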

A broad consensus emerged that a future collider programme should be realised that can fully leverage both precision and energy

The symposium was organised to foster strong engagement by the community in discussion sessions. Six physics topics – electroweak physics, strong interactions, flavour physics, physics beyond the Standard Model, neutrino physics and cosmic messengers, and dark matter and the dark sector – as well as the three technology areas of accelerators, detectors and computing, were summarised in rapporteur talks, each followed by a 45-minute discussion in which the participants in Venice engaged strongly.

For precision Higgs measurements, the performance of all the considered electron–positron (e+e−) colliders is comparable. While sub-percent precision can be reached in several measurements of Higgs couplings to fermions and bosons, HL-LHC measurements would prevail for rare processes. On the determination of the important Higgs-boson (H) self-coupling, the precision obtained at the HL-LHC will prevail until either e+e− linear colliders can improve on it with direct HH production measurements at collision energies above 500 GeV, or precisions at the level of a few per cent can be reached at FCC-hh or a muon collider. It was further stressed that precision measurements in Higgs, electroweak (Z, W, top) and flavour physics constitute three facets of indirect discovery, and that their synergy is essential to maximise the discovery potential of future colliders. Thanks to its high luminosity at low energies and its four experiments, the FCC-ee shows superior physics performance in the electroweak programme.

In flavour physics, much progress will be achieved in the coming decade by the LHCb and Belle II experiments. While tera-Z production at a future FCC-ee would provide a major step forward, the giga-Z data samples available at linear colliders do not seem to be a good option for flavour physics. The FCC-ee and LHeC would also achieve high precision in QCD measurements, leading, for example, to a per-mille-level determination of the strong coupling constant αs. The important investigations of the quark–gluon plasma at the HL-LHC could be continued, in parallel with e+e− collider operation at CERN, by the SPS fixed-target programme, before FCC-hh eventually allows novel studies in the high-temperature QCD domain.

Keeping diversity in the particle-physics programme was also felt to be essential: the next collider project should not come at the expense of a diverse scientific programme in Europe. Given that we do not know where new physics will show up, ensuring a diverse and comprehensive physics programme is vital, including fixed-target, neutrino, flavour, astroparticle and nuclear-physics experiments. Experiments in these areas have the potential for groundbreaking discoveries.

The discussions in Venice revealed a community united in its desire for a future flagship collider at CERN

At the technology frontier, essential work on accelerator R&D, such as on high-field and high-temperature superconducting magnets and RF systems, remains a high priority, and appropriate investments must be made. R&D on advanced acceleration concepts should continue with adequate effort to prepare future projects. In the detector area, the establishment of the Detector Research & Development (DRD) collaborations, following the recommendations of the 2020 ESPP update, was considered to provide a solid basis for tackling the challenges of developing high-performing detectors for future colliders and beyond. It is also expected that the software and computing challenges of future colliders can be mastered, provided that adequate person power and funding are available and that adaptations to new technologies – in particular GPUs, AI and, on a longer timescale, quantum computing – can be made.

The discussions in Venice revealed a community united in its desire for a future flagship collider at CERN. Over the past years, very significant progress has been made in this direction, and the discussions on the prioritisation of collider options will continue over the next months. In addition to the FCC-ee, linear colliders (LCF, CLIC) present mature options for a Higgs factory at CERN. LEP3 and LHeC could alternatively be considered as intermediate collider projects, followed by a larger accelerator capable of exploring the 10 TeV parton scale.

The differences in the physics potential between the various collider options will be documented in the Physics Briefing Book that will be released by the Physics Preparatory Group by the end of September. In parallel, the technical readiness, risks, timescales and costs will be reviewed by the European Strategy Group (ESG). Alongside the final national inputs, these assessments will provide the foundation for the final recommendations to be drafted by the ESG in early December 2025.

Karl Jakobs is the secretary of the 2026 update to the European strategy for particle physics. A professor at the University of Freiburg, Jakobs served as spokesperson of the ATLAS collaboration from 2017 to 2021 and as chairman of the European Committee for Future Accelerators from 2021 to 2023. (Image credit: K Jakobs)

Memories of quarkonia https://cerncourier.com/a/memories-of-quarkonia/ Tue, 09 Sep 2025 08:21:53 +0000 https://cerncourier.com/?p=114223 As the story of quarkonia draws to a close, John Ellis shares personal recollections of five decades of discoveries and debates about the simplest composite objects in QCD.

The world of particle physics was revolutionised in November 1974 by the discovery of the J/ψ particle. At the time, most of the elements of the Standard Model of particle physics had already been formulated, but only a limited set of fundamental fermions were confidently believed to exist: the electron and muon, their associated neutrinos, and the up, down and strange quarks that were thought to make up the strongly interacting particles known at that time. The J/ψ proved to be a charm–anticharm bound state, vindicating the existence of a quark flavour first hypothesised by Sheldon Glashow and James Bjorken in 1964 (CERN Courier January/February 2025 p35). Its discovery eliminated any lingering doubts regarding the quark model of 1964 (see “Nineteen sixty-four“) and sparked the development of the Standard Model into its modern form.

This new “charmonium” state was the first example of quarkonium: a heavy quark bound to an antiquark of the same flavour. It was named by analogy to positronium, a bound state of an electron and a positron, which decays by mutual annihilation into two or three photons. Composed of unstable quarks, bound by gluons rather than photons, and decaying mainly via the annihilation of their constituent quarks, quarkonia have fascinated particle physicists ever since.

The charmonium interpretation of the J/ψ was cemented by the subsequent discovery of a spectrum of related cc states, and ultimately by the observation of charmed particles in 1976. The discovery of charmonium was followed in 1977 by the identification of bottomonium mesons and particles containing bottom quarks. While toponium – a bound state of a top quark and antiquark – was predicted in principle, most physicists thought that its observation would have to wait for the innate precision of a next-generation e+e− collider following the LHC, in view of the top quark’s large mass and exceptionally rapid decay, more than 10¹² times quicker than the bottom quark. The complex environment at a hadron collider, where the composite nature of protons precludes knowledge of the initial collision energy of pairs of colliding partons within them, would make toponium particularly difficult to identify at the LHC.
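
To put a number on that comparison (using well-known values, not figures taken from the text above): the top quark’s decay width of roughly 1.4 GeV corresponds to a lifetime of

\[
\tau_t \simeq \frac{\hbar}{\Gamma_t} \approx \frac{6.6 \times 10^{-25}\ \mathrm{GeV\,s}}{1.4\ \mathrm{GeV}} \approx 5 \times 10^{-25}\ \mathrm{s},
\]

compared with typical b-hadron lifetimes of about 1.5 × 10⁻¹² s – a ratio of order 3 × 10¹², consistent with the factor quoted above.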

However, in the second half of 2024, the CMS collaboration reported an enhancement near the threshold for tt production at the LHC, which is now most plausibly interpreted as the lowest-lying toponium state. The existence of this enhancement has recently been corroborated by the ATLAS collaboration (see "ATLAS confirms top–antitop excess").

Here are the personal memories of an eyewitness who followed these 50 years of quarkonium discoveries firsthand.

Strangeonium?

In hindsight, the quarkonium story can be thought to have begun in 1963 with the discovery of the φ meson. The φ was an unexpectedly stable and narrow resonance, decaying mainly into kaons rather than the relatively light pions, despite lying only just above the KK threshold. Heavier quarkonia cannot decay into a pair of mesons containing single heavy quarks, as their masses lie below the energy threshold for such “open flavour” decays.
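
The threshold argument can be made concrete with standard meson masses (well-known values, quoted here only for orientation):

\[
m_\phi \approx 1019\ \mathrm{MeV} > 2m_K \approx 990\ \mathrm{MeV}, \qquad m_{J/\psi} \approx 3097\ \mathrm{MeV} < 2m_D \approx 3730\ \mathrm{MeV},
\]

so the φ only just has enough energy to decay into a pair of kaons, while the J/ψ cannot decay into a pair of charmed mesons at all.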

The preference of the φ to decay into kaons was soon interpreted by Susumu Okubo as a consequence of approximate SU(3) flavour symmetry, developing mathematical ideas based on unitary 3 × 3 matrices with determinant one. At the beginning of 1964, quarks were proposed and George Zweig suggested that the φ was a bound state of a strange quark and a strange antiquark (or aces, as he termed them). After 1974, the portmanteau word "strangeonium" was retrospectively applied to the φ and similar heavier ss bound states, but the name has never really caught on.

Why is R rising?

In the year or so prior to the discovery of the J/ψ in November 1974, there was much speculation about data from the Cambridge Electron Accelerator (CEA) at Harvard and the Stanford Positron–Electron Asymmetric Ring (SPEAR) at SLAC. Data from these e+e− colliders indicated a rise in the ratio, R, of cross-sections for hadron and μ+μ− production (see "Why is R rising?" figure). Was this a failure of the parton model that had only recently found acceptance as a model for the apparently scale-invariant internal structure of hadrons observed in deep-inelastic scattering experiments? Did partons indeed have internal structure? Or were there "new" partons that had not been seen previously, such as charm or coloured quarks? I was asked on several occasions to review the dozens of theoretical suggestions on the market, including at the ICHEP conference in the summer of 1974. In preparation, I toted a large Migros shopping bag filled with dozens of theoretical papers around Europe. Playing the part of an objective reviewer, I did not come out strongly in favour of any specific interpretation; however, during talks that autumn in Copenhagen and Dublin, I finally spoke out in favour of charm as the best-motivated explanation of the increase in R.
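
For orientation, the quantity at stake is the textbook leading-order parton-model prediction (a standard formula, not a reconstruction of the talks mentioned above):

\[
R \equiv \frac{\sigma(e^+e^- \to \mathrm{hadrons})}{\sigma(e^+e^- \to \mu^+\mu^-)} = N_c \sum_q Q_q^2,
\]

which, for three colours, gives R = 2 from the u, d and s quarks alone, rising to 10/3 once the charm quark contributes – roughly the step that the early e+e− data were hinting at.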

November revolution

Then, on 11 November 1974, the news broke that two experimental groups, one working at BNL under the leadership of Sam Ting and the other at SLAC led by Burt Richter, had discovered, in parallel, the narrow vector boson that bears the composite name J/ψ (see “Charmonium” figure). The worldwide particle-physics community went into convulsions (CERN Courier November/December 2024 p41) – and the CERN Theory Division was no exception. We held informal midnight discussion sessions around an open-mic phone with Fred Gilman in the SLAC theory group, who generously shared with us the latest J/ψ news. Away from the phone, like many groups around the world, we debated the merits and demerits of many different theoretical ideas. Rather than write a plethora of rival papers about these ideas, we decided to bundle our thoughts into a collective preprint. Instead of taking individual responsibility for our trivial thoughts, the preprint was anonymous, the place of the authors’ names being taken by a mysterious “CERN Theory Boson Workshop”. Eagle eyes will spot that the equations were handwritten by Mary K Gaillard (CERN Courier July/August 2025 p47). Informally, we called ourselves Co-Co, for communication collective. With “no pretentions to originality or priority,” we explored five hypotheses: a hidden charm vector meson, a coloured vector meson, an intermediate vector boson, a Higgs meson and narrow resonances in strong interactions.

Charmonium

My immediate instinct was to advocate the charmonium interpretation of the J/ψ, and this was the first interpretation to be described in our paper. This was on the basis of the Glashow–Iliopoulos–Maiani (GIM) mechanism, which accounted for the observed suppression of flavour-changing neutral currents by postulating the existence of a charm quark with a mass around 2 GeV (see CERN Courier July/August 2024 p30), and the Zweig rule, which suggested phenomenologically that quarkonia do not easily decay by quark–antiquark annihilation via gluons into other flavours of quarks. So I was somewhat surprised when one of the authors of the GIM paper wrote a paper proposing that it might be an intermediate electroweak vector boson. A few days after the J/ψ discovery came the news of the (almost equally narrow) ψ′ discovery, which I heard as I was walking along the theory corridor to my office one morning. My informant was a senior theorist who was convinced that this discovery would kill the charmonium interpretation of the J/ψ. However, before I reached my office I realised that an extension of the Zweig rule would also suppress ψ′ → J/ψ + light-meson decays, so the ψ′ could also be narrow.

Keen competition

The charmonium interpretation of the J/ψ and ψ′ states predicted that there should be intermediate P-wave states (with one unit of orbital angular momentum) that could be detected in radiative decays of the ψ′. In the first half of 1975 there was keen competition between teams at SLAC and DESY to discover these states. That summer I was visiting SLAC, where I discovered one day under the cover of a copying machine, before their discovery was announced, a sheet of paper with plots showing clear evidence for the P-wave states. I made a copy, went to Burt Richter’s office and handed him the sheet of paper. I also asked whether he wanted my copy. He graciously allowed me to keep it, as long as I kept quiet about it, which I did until the discovery was officially announced a few weeks later.

The story of quarkonium can be thought to have begun in 1963 with the discovery of the φ meson

Discussion about the interpretation of the new particles, in particular between advocates of charm and Han–Nambu coloured quarks – a different way to explain the new particles’ astounding stability by giving them a new quantum number – rumbled on for a couple of years until the discovery of charmed particles in 1976. During this period we conducted some debates in the main CERN auditorium moderated by John Bell. I remember one such debate in particular, during which a distinguished senior British theorist spoke for coloured quarks and I spoke for charm. I was somewhat taken aback when he described me as representing the “establishment”, as I was under 30 at the time.

Over the following year, my attention wandered to grand unified theories, and my first paper on the subject, written with Michael Chanowitz and Mary K Gaillard, was completed in May 1977. We realised while writing this paper that simple grand unified theories – which unify the electroweak and strong interactions – would relate the mass of the τ heavy lepton that had been discovered in 1975 to the mass of the bottom quark, which was confidently expected but whose mass was unknown. Our prediction was mb/mτ = 2 to 5, but we did not include it in the abstract. Shortly afterwards, while our paper was in proof, the discovery of the ϒ state (or states) by a group at Fermilab led by Leon Lederman (see “Bottomonium” figure) became known, implying that mb ~ 4.5 GeV. I added our successful mass prediction by hand in the margin of the corrected proof. Unfortunately, the journal misunderstood my handwriting and printed our prediction as mb/mτ = 2605, a spectacularly inaccurate postdiction! It remains to be seen whether the idea of a grand unified theory is correct: it also successfully predicted the electroweak mixing angle θW and suggested that neutrinos might have mass, but direct evidence, such as the decay of the proton, has yet to be found.

Peak performance

Meanwhile, buoyed by the success of our prediction for mb, Mary K Gaillard, Dimitri Nanopoulos, Serge Rudaz and I set to work on a paper about the phenomenology of the top and bottom quarks. One of our predictions was that the first two excited states of the ϒ, the ϒ′ and ϒ′′, should be detectable by the Lederman experiment because the Zweig rule would suppress their cascade decays to lighter bottomonia via light-meson emission. Indeed, the Lederman experiment found that the ϒ bump was broader than the experimental resolution, and the bump was eventually resolved into three bottomonium peaks.

Bottomonium

It was in the same paper that we introduced the terminology of “penguin diagrams”, wherein a quark bound in a hadron changes flavour not at tree level via W-boson exchange but via a loop containing heavy particles (like W bosons or top quarks), emitting a gluon, photon or Z boson. Similar diagrams had been discussed by the ITEP theoretical school in Moscow, in connection with K decays, and we realised that they would be important in B-hadron decays. I took an evening off to go to a bar in the Old Town of Geneva, where I got involved in a game of darts with the experimental physicist Melissa Franklin. She bet me that if I lost the game I had to include the word “penguin” in my next paper. Melissa abandoned the darts game before the end, and was replaced by Serge Rudaz, who beat me. I still felt obligated to carry out the conditions of the bet, but for some time it was not clear to me how to get the word into the b-quark paper that we were writing at the time. Then, another evening, after working at CERN, I stopped to visit some friends on my way back to my apartment, where I inhaled some (at that time) illegal substance. Later, when I got home and continued working on our paper, I had a sudden inspiration that the famous Russian diagrams look like penguins. So we put the word into our paper, and it has now appeared in almost 10,000 papers.

What of toponium, the last remaining frontier in the world of quarkonia? In the early 1980s there were no experimental indications as to how heavy the top quark might be, and there were hopes that it might be within the range of existing or planned e+e– colliders such as PETRA, TRISTAN and LEP. When the LEP experimental programme was being devised, I was involved in setting “examination questions” for candidate experimental designs that included asking how well they could measure the properties of toponium. In parallel, the first theoretical papers on the formalism for toponium production in e+e– and hadron–hadron collisions appeared.

Toponium will be a very interesting target for future e+e– colliders

But the top quark did not appear until the mid-1990s at the Tevatron proton–antiproton collider at Fermilab, with a mass around 175 GeV, implying that toponium measurements would require an e+e– collider with an energy much greater than LEP, around 350 GeV. Many theoretical studies were made of the cross section in the neighbourhood of the e+e– → tt̄ threshold, and of how precisely the top-quark mass and its electroweak and Higgs couplings could be measured.

Meanwhile, a smaller number of theorists were calculating the possible toponium signal at the LHC, and the LHC experiments ATLAS and CMS started measuring tt̄ production with high statistics. CMS and ATLAS embarked on programmes to search for quantum-mechanical correlations in the final-state decay products of the top quarks and antiquarks, as should occur if the tt̄ state were to be produced in a specific spin-parity state. They both found decay correlations characteristic of tt̄ production in a pseudoscalar state: it was the first time such a quantum correlation had been observed at such high energies.

The CMS collaboration used these studies to improve the sensitivities of dedicated searches they were making for possible heavy Higgs bosons decaying into tt̄ final states, as would be expected in many extensions of the Standard Model. Intriguingly, hints of a possible excess of events around the tt̄ threshold with the type of correlation expected from a pseudoscalar tt̄ state began to emerge in the CMS data, but initially not with high significance.

Pseudoscalar states

I first heard about this excess at an Asia–CERN physics school in Thailand, and started wondering whether it could be due to the lowest-lying toponium state, which would decay predominantly via the weak decays of its unstable top quark and antiquark rather than via their annihilation, or to a heavy pseudoscalar Higgs boson, and how one might distinguish between these hypotheses. A few years previously, Abdelhak Djouadi, Andrei Popov, Jérémie Quevillon and I had studied in detail the possible signatures of heavy Higgs bosons in tt̄ final states at the LHC, and had shown that they would have significant interference effects that would generate dips in the cross-section as well as bumps.

Toponium?

The significance of the CMS signal subsequently increased to over 5σ, showing up in a tailored search for new pseudoscalar states decaying into tt̄ pairs with specific spin correlations, and recently this CMS discovery has been confirmed by the ATLAS Collaboration, with a significance over 7σ. Unfortunately, the experimental resolution in the tt̄ invariant mass is not precise enough to see any dip due to pseudoscalar Higgs production, and Djouadi, Quevillon and I have concluded that it is not yet possible to discriminate between the toponium and Higgs hypotheses on purely experimental grounds.

However, despite being a fan of extra Higgs bosons, I have to concede that toponium is the more plausible interpretation of the CMS threshold excess. The mass is consistent with that expected for toponium, the signal strength is consistent with theoretical calculations in QCD, and the tt̄ spin correlations are just what one expects for the lowest-lying pseudoscalar toponium state that would be produced in gluon–gluon collisions.

Caution is still in order. The pseudoscalar Higgs hypothesis cannot (yet) be excluded. Nevertheless, it would be a wonderful golden anniversary present for quarkonium if, some 50 years after the discovery of the J/ψ, the appearance of its last, most massive sibling were to be confirmed.

Toponium will be a very interesting target for future e+e– colliders, which will be able to determine its properties with much greater accuracy than a hadron collider could achieve, making precise measurements of the mass of the top quark and its electroweak couplings possible. The quarkonium saga is far from over.

Hidden treasures https://cerncourier.com/a/hidden-treasures/ Tue, 09 Sep 2025 08:21:50 +0000 https://cerncourier.com/?p=114208 As the LHC surpasses one exabyte of stored data, Cristinel Diaconu and Ulrich Schwickerath call for new collaborations to join a global effort in data preservation.

The post Hidden treasures appeared first on CERN Courier.

Data resurrection

In 2009, the JADE experiment had been out of operation for 23 years. The PETRA electron–positron collider that served it had already completed a second life as a pre-accelerator for the HERA electron–proton collider and was preparing for a third life as an X-ray source. JADE and the other PETRA experiments were a piece of physics history, well known for seminal measurements of three-jet quark–antiquark–gluon events, and early studies of quark fragmentation and jet hadronisation. But two decades after the experiment was decommissioned, the JADE collaboration had yet to publish one of its signature measurements.

At high energies and short distances, the strong force becomes weaker and quarks behave almost like free particles. This “asymptotic freedom” is a unique hallmark of QCD. In 2009, as now, JADE’s electron–positron data was unique in the low-energy range, with other data sets lost to history. When the data were reprocessed with modern next-to-next-to-leading-order QCD and improved simulation tools, the DESY experiment was able to rival experiments at CERN’s higher-energy Large Electron–Positron (LEP) collider for precision on the strong coupling constant, contributing to a striking demonstration of asymptotic freedom, QCD’s most fundamental behaviour. The key was a farsighted and original initiative by Siggi Bethke to preserve JADE’s data and analysis software.
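
As a rough numerical illustration of asymptotic freedom – a one-loop sketch, not the NNLO machinery used in the JADE reanalysis – the snippet below evolves the strong coupling from an assumed, illustrative reference value of αs(91.2 GeV) ≈ 0.118 and shows it shrinking as the energy scale rises:

```python
import math

def alpha_s_one_loop(Q, alpha_ref=0.118, Q_ref=91.2, n_f=5):
    """One-loop running of the strong coupling constant.

    Evolves alpha_s from a reference scale Q_ref (GeV) to Q (GeV) using the
    leading-order QCD beta function with n_f quark flavours. Illustrative
    only: real extractions such as the JADE reanalysis use higher-order
    (NNLO) evolution and flavour thresholds.
    """
    b0 = (33.0 - 2.0 * n_f) / (12.0 * math.pi)
    return alpha_ref / (1.0 + b0 * alpha_ref * math.log(Q**2 / Q_ref**2))

if __name__ == "__main__":
    for Q in (5.0, 14.0, 35.0, 91.2, 200.0):  # GeV; PETRA ran at roughly 12-46 GeV
        print(f"Q = {Q:6.1f} GeV  ->  alpha_s ~ {alpha_s_one_loop(Q):.3f}")
```

The steady decrease of αs with energy scale is the behaviour that combining low-energy JADE measurements with higher-energy LEP results tests so powerfully.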

New perspectives

This data resurrection from JADE demonstrated how data can be reinterpreted to give new perspectives decades after an experiment ends. It was a timely demonstration. In 2009, HERA and SLAC’s PEP-II electron–positron collider had been recently decommissioned, and Fermilab’s Tevatron proton–antiproton collider was approaching the end of its operations. Each facility nevertheless had a strong analysis programme ahead, and CERN’s Large Hadron Collider (LHC) was preparing for its first collisions. How could all this data be preserved?

The uniqueness of these programmes, for which no upgrade or follow-up was planned for the coming decades, invited the consideration of data usability at horizons well beyond a few years. A few host labs risked a small investment, with dedicated data-preservation projects beginning, for example, at SLAC, DESY, Fermilab, IHEP and CERN (see “Data preservation” dashboard). To exchange data-preservation concepts, methodologies and policies, and to ensure the long-term preservation of HEP data, the Data Preservation in High Energy Physics (DPHEP) group was created in 2014. DPHEP is a global initiative under the supervision of the International Committee for Future Accelerators (ICFA), with strong support from CERN from the beginning. It actively welcomes new collaborators and new partner experiments, to ensure a vibrant and long-term future for the precious data sets being collected at present and future colliders.

Data preservation

At the beginning of our efforts, DPHEP designed a four-level classification of data abstraction. Level 1 corresponds to the information typically found in a scientific publication or its associated HEPData entry (a public repository for high-energy physics data tables). Level 4 includes all inputs necessary to fully reprocess the original data and simulate the experiment from scratch.

The concept of data preservation had to be extended too. Simply storing data and freezing software is bound to fail as operating systems evolve and analysis knowledge disappears. A sensible preservation process must begin early on, while the experiments are still active, and take into account the research goals and available resources. Long-term collaboration organisation plays a crucial role, as data cannot be preserved without stable resources. Software must adapt to rapidly changing computing infrastructure to ensure that the data remains accessible in the long term.

Return on investment

But how much research gain could be expected for a reasonable investment in data preservation? We conservatively estimate that for dedicated investments below 1% of a facility’s construction cost, the scientific output increases by 10% or more. Publication records confirm that scientific outputs at major experimental facilities continue long after the end of operations (see “Publications per year, during and after data taking” panel). Publication rates remain substantial well beyond the “canonical” five years after the end of data taking, particularly for experiments that pursued dedicated data-preservation programmes. For some experiments, the lifetime of the preservation system is by now comparable with the data-taking period, illustrating the need to carefully define collaborations for the long term.

Publication records confirm that scientific outputs at major experimental facilities continue long after the end of operations

The most striking example is BaBar, an electron–positron-collider experiment at SLAC that was designed to investigate the violation of charge-parity symmetry in the decays of B mesons, and which continues to publish using a preservation system now hosted outside the original experiment site. Aging infrastructure is now presenting challenges, raising questions about the very-long-term hosting of historical experiments – “preservation 2.0” – or the definitive end of the programme. The other historical b-factory, Belle, benefits from a follow-up experiment on site.

Publications per year, during and after data taking

The publication record at experiments associated with the DPHEP initiative. Data-taking periods of the relevant facilities are shaded, and the fraction of peer-reviewed articles published afterwards is indicated as a percentage for facilities that are not still operational. The data, which exclude conference proceedings, were extracted from Inspire-HEP on 31 July 2025.

HERA, an electron– and positron–proton collider that was designed to study deep inelastic scattering (DIS) and the structure of the proton, continues to publish and even to attract new collaborators as the community prepares for the Electron Ion Collider (EIC) at BNL, nicely demonstrating the relevance of data preservation for future programmes. The EIC will continue studies of DIS in the regime of gluon saturation (CERN Courier January/February 2025 p31), with polarised beams exploring nucleon spin and a range of nuclear targets. The use of new machine-learning algorithms on the preserved HERA data has even allowed aspects of the EIC physics case to be explored: an example of those “treasures” not foreseen at the end of collisions.

IHEP in China conducts a vigorous data-preservation programme around BESIII data from electron–positron collisions in the BEPCII charm factory. The collaboration is considering using artificial intelligence to rank data priorities and to provide user support for data reuse.

Remarkably, LEP experiments are still publishing physics analyses with archived ALEPH data almost 25 years after the completion of the LEP programme on 4 November 2000. The revival of the CERNLIB collection of FORTRAN data-analysis software libraries has also enabled the resurrection of the legacy software stacks of both DELPHI and OPAL, including the spectacular revival of their event displays (see “Data resurrection” figure). The DELPHI collaboration revised their fairly restrictive data-access policy in early 2024, opening and publishing their data via CERN’s Open Data Portal.

Some LEP data is currently being migrated into the standardised EDM4hep (event data model) format that has been developed for future colliders. As well as testing the format with real data, this will ensure data preservation and allow software development, analysis training and detector-design studies for the electron–positron phase of the proposed Future Circular Collider to be carried out with real events.

The future is open

In the past 10 years, data preservation has grown in prominence in parallel with open science, which promotes free public access to publications, data and software in community-driven repositories, and according to the FAIR principles of findability, accessibility, interoperability and reusability. Together, data preservation and open science help maximise the benefits of fundamental research. Collaborations can fully exploit their data and share its unique benefits with the international community.

The two concepts are distinct but tightly linked. Data preservation focuses on maintaining data integrity and usability over time, whereas open data emphasises accessibility and sharing. They have in common the need for careful and resource-loaded planning, with a crucial role played by the host laboratory.

Treasure chest

Data preservation and open science both require clear policies and a proactive approach. Beginning at the very start of an experiment is essential. Clear guidelines on copyright, resource allocation for long-term storage, access strategies and maintenance must be established to address the challenges of data longevity. Last but not least, it is crucially important to design collaborations to ensure smooth international cooperation long after data taking has finished. By addressing these aspects, collaborations can create robust frameworks for preserving, managing and sharing scientific data effectively over the long term.

Today, most collaborations target the highest standards of data preservation (level 4). Open-source software should be prioritised, because the uncontrolled obsolescence of commercial software endangers the entire data-preservation model. It is crucial to maintain all of the data and the software stack, which requires continuous effort to adapt older versions to evolving computing environments. This applies to both software and hardware infrastructures. Synergies between old and new experiments can provide valuable solutions, as demonstrated by HERA and EIC, Belle and Belle II, and the Antares and KM3NeT neutrino telescopes.

From afterthought to forethought

In the past decade, data preservation has evolved from simply an afterthought as experiments wrapped up operations into a necessary specification for HEP experiments. Data preservation is now recognised as a source of cost-effective research. Progress has been rapid, but its implementation remains fragile and needs to be protected and planned.

In the past 10 years, data preservation has grown in prominence in parallel with open science

The benefits will be significant. Signals not imagined during the experiments’ lifetime can be searched for. Data can be reanalysed in light of advances in theory and observations from other realms of fundamental science. Education, training and outreach can be brought to life by demonstrating classic measurements with real data. And scientific integrity is fully realised when results are fully reproducible.

The LHC, having surpassed an exabyte of data, now holds the largest scientific data set ever accumulated. The High-Luminosity LHC will increase this by an order of magnitude. When the programme comes to an end, it will likely be the last data at the energy frontier for decades. History suggests that 10% of the LHC’s scientific programme will not yet have been published when collisions end, and a further 10% not even imagined. While the community discusses its strategy for future colliders, it must therefore also bear in mind data preservation. It is the key to unearthing hidden treasures in the data of the past, present and future.

Nineteen sixty-four https://cerncourier.com/a/nineteen-sixty-four/ Tue, 09 Sep 2025 08:21:47 +0000 https://cerncourier.com/?p=114248 Michael Riordan chronicles 1964, the year that saw the birth of the quark model, the Higgs mechanism, and the discovery of CP violation and the cosmic microwave background.

The post Nineteen sixty-four appeared first on CERN Courier.

Timeline of key 1964 milestones: Murray Gell-Mann; George Zweig; “Evidence for SU(3) symmetry”; the cosmic microwave background radiation; James Bjorken and Sheldon Glashow; “Broken symmetry and the mass of gauge vector mesons”; “Evidence for the 2π decay”; Peter Higgs; “Global conservation laws and massless particles”; and “Spin and unitary-spin independence in a paraquark model of baryons and mesons”.

In the history of elementary particle physics, 1964 was truly an annus mirabilis. Not only did the quark hypothesis emerge – independently from two theorists half a world apart – but a multiplicity of theorists came up with the idea of spontaneous symmetry breaking as an attractive method to generate elementary particle masses. And two pivotal experiments that year began to alter the way astronomers, cosmologists and physicists think about the universe.

Shown above is a timeline of the key 1964 milestones: discoveries that laid the groundwork for the Standard Model of particle physics and continue to be actively studied and refined today (images: N Eskandari, A Epshtein).

Some of the insights published in 1964 were first conceived in 1963. Caltech theorist Murray Gell-Mann had been ruminating about quarks ever since a March 1963 luncheon discussion with Robert Serber at Columbia University. Serber was exploring the possibility of a triplet of fundamental particles that in various combinations could account for mesons and baryons in Gell-Mann’s SU(3) symmetry scheme, dubbed “the Eightfold Way”. But Gell-Mann summarily dismissed his suggestion, showing him on a napkin how any such fundaments would have to have fractional charges of –2/3 or 1/3 the charge on an electron, which seemed absurd.

From the ridiculous to the sublime

Still, he realised, such ridiculous entities might be allowable if they somehow never materialised outside of the hadrons. For much of the year, Gell-Mann toyed with the idea in his musings, calling such hypothetical entities by the nonsense word “quorks”, until he encountered the famous line in Finnegans Wake by James Joyce, “Three quarks for Muster Mark.” He even discussed it with his old MIT thesis adviser, then CERN Director-General Victor Weisskopf, who chided him not to waste their time talking about such nonsense on an international phone call.

In late 1963, Gell-Mann finally wrote the quark idea up for publication and sent his paper to the newer European journal Physics Letters rather than the (then) more prestigious Physical Review Letters, in part because he thought it would be rejected there. “A schematic model of baryons and mesons”, published on 1 February 1964, is brief and to the point. After a few preliminary remarks, he noted that “a simpler, more elegant scheme can be constructed if we allow non-integral values for the charges … We then refer to the members u(2/3), d(–1/3) and s(–1/3) of the triplet as ‘quarks’.” But toward the end, he hedged his bets, warning readers not to take the existence of these quarks too seriously: “A search for stable quarks of charge +2/3 or –1/3 … at the highest-energy accelerators would help to reassure us of the non-existence of real quarks.”

As often happens in the history of science, the idea of quarks had another, independent genesis – at CERN in 1964. George Zweig, a CERN postdoc who had recently been a Caltech graduate student with Richard Feynman and Gell-Mann, was wondering why the φ meson lived so long before decaying into a pair of K mesons. A subtle conservation law must be at work, he figured, which led him to consider a constituent model of the hadrons. If the φ were somehow composed of two more fundamental entities, one with strangeness +1 and the other with –1, then its great preference for kaon decays over other, energetically more favourable possibilities, could be explained. These two strange constituents would find it difficult to “eat one another,” as he later put it, so two individual, strange kaons would be required to carry each of them away.

Late in the fall of 1963, Zweig discovered that he could reproduce the meson and baryon octets of the Eightfold Way from such constituents if they carried fractional charges of 2/3 and –1/3. Although he at first thought this possibility artificial, it solved a lot of other problems, and he began working feverishly on the idea, day and night. He wrote up his theory for publication, calling his fractionally charged particles “aces” – in part because he figured there would be four of them. Mesons, built from pairs of these aces, formed the “deuces” and baryons the “treys” in his deck of cards. His theory first appeared as a long CERN report in mid-January 1964, just as Gell-Mann’s quark paper was awaiting publication at Physics Letters.

As chance would have it, there was intense activity going on in parallel that January – an experimental search for the Ω baryon that Gell-Mann had predicted just six months earlier at a Geneva particle-physics conference. With negative charge and a mass almost twice that of the proton, it had to have strangeness –3 and would sit atop the decuplet of heavy baryons predicted in his Eightfold Way. Brookhaven experimenter Nick Samios was eagerly seeking evidence of this very strange particle in the initial run of the 80-inch bubble chamber that he and colleagues had spent years planning and building. On 31 January 1964, he finally found a bubble-chamber photograph with just the right signatures. It might be the “gold-plated event” that could prove the existence of the Ω baryon.

After more detailed tests to make sure of this conclusion, the Brookhaven team delivered a paper with the unassuming title “Observation of a hyperon with strangeness minus three” to Physical Review Letters. With 33 authors, it reported only one event. But with that singular event, any remaining doubt about SU(3) symmetry and Gell-Mann’s Eightfold Way evaporated.

A fourth quark for Muster Mark?

Later in spring 1964, James Bjorken and Sheldon Glashow crossed paths in Copenhagen, on leave from Harvard and Stanford, working at Niels Bohr’s Institute for Theoretical Physics. Seeking to establish lepton–hadron symmetry, they needed a fourth quark because a fourth lepton – the muon neutrino – had been discovered in 1962 at Brookhaven. Bjorken and Glashow were early adherents of the idea that hadrons were made of quarks, but based their arguments on SU(4) symmetry rather than SU(3). “We called the new quark flavour ‘charm,’ completing two weak doublets of quarks to match two weak doublets of leptons, and establishing lepton–quark symmetry, which holds to this day,” recalled Glashow (CERN Courier January/February 2025 p35). Their Physics Letters article appeared that summer, but it took another decade before solid evidence for charm turned up in the famous J/ψ discovery at Brookhaven and SLAC. The charm quark they had predicted in 1964 was the central player in the so-called November Revolution a decade later that led to widespread acceptance of the Standard Model of particle physics.

In the same year, Oscar Greenberg at the University of Maryland was wrestling with the difficult problem of how to confine three supposedly identical quarks within a volume hardly larger than a proton. According to the sacrosanct Pauli exclusion principle, identical spin–1/2 fermions could never occupy the exact same quantum state. So how, for example, could one ever cram three strange quarks inside an Ω baryon?

One possible solution, Greenberg realised, was that quarks carry a new physical property that distinguishes them from one another, so that they are not in fact identical. Instead of a single quark triplet, that is, there could be three distinct triplets of what he dubbed “paraquarks”. He published his ideas in November 1964, capping an extraordinary year of insights into hadrons. We now recognise his insight as anticipating the existence of “coloured” quarks, where colour is the source of the relentless QCD force binding them within mesons and baryons.

The origin of mass

Although it took more than a decade for experiments to verify them, these insights unravelled the nature of hadrons, revealing a new family of fermions and hinting at the nature of the strong force. Yet they were not necessarily the most important ideas developed in particle physics in 1964. During that summer, three theorists – Robert Brout, François Englert and Peter Higgs – formulated an innovative technique to generate particle masses using spontaneous symmetry breaking of non-Abelian Yang–Mills gauge theories – a class of field theories that would later describe the electroweak and strong forces in the Standard Model.

Murray Gell-Mann and Yuval Ne’eman

Inspired by successful theories of superconductivity, symmetry-breaking ideas had been percolating among the few theorists still working on quantum field theory, then in deep decline in particle physics, but they foundered whenever masses were introduced “by hand” into the theories. Or, as Yoichiro Nambu and Jeffrey Goldstone realised in the early 1960s, massless bosons appeared in the theories that did not correspond to anything observed in experiments.

If they existed, the W (and later, Z) bosons carrying the short-range weak force had to be extremely massive (as is now well known). Brout and Englert – and independently Higgs – found they could generate the masses of such vector bosons if the gauge symmetry governing their behaviour was instead spontaneously broken, preserving the underlying symmetry of the laws while allowing for distinctive, asymmetric particle states. In solid-state physics, for example, the magnetic moments within a ferromagnetic domain spontaneously align along a single direction, breaking the rotational symmetry of the underlying laws. Brout and Englert published their solution in June 1964, while Higgs followed suit a month later (after his paper was rejected by Physics Letters). Higgs subsequently showed that this symmetry breaking required a scalar boson to exist that was soon named after him. Dubbed the “Higgs mechanism,” this mass-generating process became a crucial feature of the unification of the weak and electromagnetic forces a few years later by Steven Weinberg and Abdus Salam. And after their electroweak theory was shown in 1971 to be renormalisable, and hence calculable, the theoretical floodgates opened wide, leading to today’s dominant Standard Model paradigm.
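
In modern textbook notation – a schematic sketch rather than the formulation used in the 1964 papers – the mechanism can be summarised by a scalar potential whose minimum sits away from zero, and by the gauge-boson mass that the resulting vacuum expectation value generates:

```latex
% Schematic abelian Higgs mechanism: a symmetry-breaking scalar potential
% and the gauge-boson mass generated by the vacuum expectation value v.
\begin{align}
  V(\phi) &= -\mu^{2}\,|\phi|^{2} + \lambda\,|\phi|^{4} ,
  \qquad \langle\phi\rangle = \frac{v}{\sqrt{2}} , \quad v = \sqrt{\mu^{2}/\lambda} , \\
  |D_{\mu}\phi|^{2} &\supset \tfrac{1}{2}\, g^{2} v^{2}\, A_{\mu}A^{\mu}
  \;\Longrightarrow\; m_{A} = g\,v .
\end{align}
```

In the electroweak theory the same bookkeeping, applied to an SU(2)×U(1) scalar doublet with v ≈ 246 GeV, gives mW = gv/2, while one neutral scalar degree of freedom survives as the physical Higgs boson.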

Surprise, surprise!

Besides the quark model and the Higgs mechanism, 1964 witnessed two surprising discoveries that would light up almost any other year in the history of science. That summer saw the publication of an epochal experiment leading to the discovery of CP violation in the decays of long-lived neutral kaons. The Brookhaven experiment, led by Princeton physicists Jim Cronin and Val Fitch, had discerned a small but non-negligible fraction – 0.2% – of two-body decays into a pair of pions, instead of into the dominant CP-conserving three-body decays. For months, the group wrestled with trying to understand this surprising result before publishing it that July in Physical Review Letters.

Robert Brout and François Englert

It took almost another decade before Japanese theorists Makoto Kobayashi and Toshihide Maskawa proved that such a small amount of CP violation was the natural result of the Standard Model if there were three quark-lepton families instead of the two then known to exist. Whether this phenomenon has any causal relation to the dominance of matter in the universe is still up for grabs decades later. “Indeed, it is almost certain that the CP violation observed in the K-meson system is not directly responsible for the matter dominance of the universe,” wrote Cronin in the early 1990s, “but one would wish that it is related to whatever the mechanism was that created [this] matter dominance.”

Robert W Wilson and Arno Penzias

Another epochal 1964 observation was not published until 1965, but it deserves mention here because of its tremendous significance for the subsequent marriage of particle physics and cosmology. That summer, Arno Penzias and Robert W Wilson of Bell Telephone Labs were in the process of converting a large microwave antenna in Holmdel, NJ, for use in radio astronomy. Shaped like a giant alpenhorn lying on its side, the device had been developed for early satellite communications. But the microwave signals that it was receiving included a faint, persistent “hiss” no matter the direction in which the horn was pointed; they at first interpreted the hiss as background noise – possibly due to some smelly pigeon droppings that had accumulated inside, which they removed. Still it persisted. Penzias and Wilson were at a complete loss to explain it.

Cosmological consequences

It so happened that a Princeton group led by Robert Dicke and James Peebles was just then building a radiometer to search for the uniform microwave radiation that should suffuse the universe had it begun in a colossal fireball, as a few cosmologists had been arguing for decades. In the spring of 1965, Penzias read a preprint of a paper by Peebles on the subject and called Dicke to suggest he come to Holmdel to view their results. After arriving and realising they had been scooped, the Princeton physicists soon confirmed the Bell Labs results using their own rooftop radiometer.

Besides the quark model and the Higgs mechanism, 1964 witnessed two surprising discoveries that would light up almost any other year in the history of science

The results were published as back-to-back letters in the Astrophysical Journal on 7 May 1965. The Princeton group wrote extensively about the cosmological consequences of the discovery, while Penzias and Wilson submitted just a brief, dry description of their work, “A measurement of excess antenna temperature at 4080 Mc/s”, ruling out other possible interpretations of the uniform signal, which corresponded to the radiation expected from a 3.5 K blackbody.

Subsequent measurements at many other frequencies have established that this is indeed the cosmic background radiation expected from the Big Bang birth of the universe, confirming that it had in fact occurred. That incredibly brief, hot, dense phase of the universe’s existence has prodded many particle physicists to take up the study of its evolution and remnants. This discovery of the cosmic background radiation therefore serves as a fitting capstone on what was truly a pivotal year for particle physics.

Mixed signals from X17 https://cerncourier.com/a/mixed-signals-from-x17/ Tue, 09 Sep 2025 08:20:13 +0000 https://cerncourier.com/?p=114339 A decade after the initial report of the ATOMKI anomaly, MEG II and PADME present conflicting findings on the proposed X17 boson.

The post Mixed signals from X17 appeared first on CERN Courier.

MEG II and PADME experiments

Almost a decade after ATOMKI researchers reported an unexpected peak in electron–positron pairs from beryllium nuclear transitions, the case for a new “X17” particle remains open. Proposed as a light boson with a mass of about 17 MeV and very weak couplings, it would belong to the sometimes-overlooked low-energy frontier of physics beyond the Standard Model. Two recent results now pull in opposite directions: the MEG II experiment at the Paul Scherrer Institute found no signal in the same transition, while the PADME experiment at INFN Frascati reports a modest excess in electron–positron scattering at the corresponding mass.

The story of the elusive X17 particle began at the Institute for Nuclear Research (ATOMKI) in Debrecen, Hungary, where nuclear physicist Attila János Krasznahorkay and colleagues set out to study the de-excitation of a beryllium-8 state. Their target was the dark photon – a particle hypothesised to mediate interactions between ordinary and dark matter. In their setup, a beam of protons strikes a lithium-7 target, producing an excited beryllium nucleus that releases a proton or de-excites to the beryllium-8 ground state by emitting an 18.1 MeV gamma ray – or, very rarely, an electron–positron pair.

Controversial anomaly

In 2015, ATOMKI claimed to have observed an excess of electron–positron pairs with a statistical significance of 6.8σ. Follow-up measurements with different nuclei were also reported to yield statistically significant excesses at the same mass. The team claimed the excess was consistent with the creation of a short-lived neutral boson with a mass of about 17 MeV. Given that it would be produced in nuclear transitions and decay into electron–positron pairs, the X17 should couple to nucleons, electrons and positrons. But many relevant constraints squeeze the parameter space for new physics at low energies, and independent tests are essential to resolve an unexpected and controversial anomaly that is now a decade old.
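
To see how a 17 MeV state would reveal itself in electron–positron pairs, the pair’s invariant mass can be reconstructed from the lepton energies and their opening angle. The sketch below is a rough, illustrative calculation – it neglects the electron mass and the nuclear recoil, assumes the 18.1 MeV transition energy is shared equally between the two leptons, and is not taken from the ATOMKI analysis:

```python
import math

def pair_invariant_mass(e_plus, e_minus, opening_angle_deg):
    """Invariant mass (GeV) of an e+e- pair, neglecting the electron mass."""
    theta = math.radians(opening_angle_deg)
    return math.sqrt(2.0 * e_plus * e_minus * (1.0 - math.cos(theta)))

# Toy numbers: an 18.1 MeV transition energy shared symmetrically by the pair,
# ignoring the small amount carried by the recoiling nucleus.
e_each = 0.0181 / 2.0  # GeV per lepton
for angle_deg in (60, 100, 140, 170):
    m_mev = 1000.0 * pair_invariant_mass(e_each, e_each, angle_deg)
    print(f"opening angle {angle_deg:3d} deg  ->  pair mass ~ {m_mev:4.1f} MeV")
```

In this simplified picture, a particle with a mass of around 17 MeV corresponds to pairs clustering at unusually large opening angles.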

In November 2024, MEG II announced a direct cross-check of the anomaly, publishing their results in July 2025. Designed for high-precision tracking and calorimetry, the experiment combines dedicated background monitors with a spectrometer based on a lightweight, single-volume drift chamber that records the ionisation trails of charged particles. The detector is designed to search for evidence of the rare lepton-flavour-violating decay μ+ → e+γ, with the collaboration recently reporting world-leading limits at EPS-HEP (see “High-energy physics meets in Marseille”). It is also well suited to probing electron–positron final states, and has the mass resolution required to test the narrow-resonance interpretation of the ATOMKI anomaly.

Motivated by interest in X17, the collaboration directed a proton beam with energy up to 1.1 MeV onto a lithium-7 target, to study the same nuclear process as ATOMKI. Their data disfavours the ATOMKI hypothesis and imposes an upper limit on the branching ratio of 1.2 × 10⁻⁵ at 90% confidence.

“While the result does not close the case,” notes Angela Papa of INFN, the University of Pisa and the Paul Scherrer Institute, “it weakens the simplest interpretations of the anomaly.”

But MEG II is not the only cross check in progress. In May, the PADME collaboration reported an independent test that doesn’t repeat the ATOMKI experiment, but seeks to disentangle the X17 question from the complexities of nuclear physics.

For theorists, X17 is an awkward fit

Initially designed to search for evidence of states that decay invisibly, like dark photons or axion-like particles, PADME collides a positron beam with energies reaching 550 MeV with a 100 µm-thick active diamond target. Annihilations of positrons with electrons bound in the target material are reconstructed by detecting the resulting photons, with any peak in the missing-mass spectrum signalling an unseen product. The photon energy and impact position are measured by a finely segmented electromagnetic calorimeter with crystals refurbished from the L3 experiment at LEP.

“The PADME approach relies only on the suggested interaction of X17 with electrons and positrons,” remarks spokesperson Venelin Kozhuharov of Sofia University and INFN Frascati. “Since the ATOMKI excess was observed in electron–positron final states, this is the minimal possible assumption that can be made for X17.”

Instead of searching for evidence of unseen particles, PADME varied the beam energy to look for an electron-positron resonance in the expected X17 mass range. The collaboration claims that the combined dataset displays an excess near 16.90 MeV with a local significance of 2.5σ.
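
Because the target electrons are essentially at rest, the centre-of-mass energy of each collision is fixed by the positron beam energy alone, so scanning the beam energy amounts to scanning the e+e– invariant mass. A back-of-the-envelope check using standard two-body kinematics (the beam-energy value that comes out is our own estimate, not a number quoted by PADME):

```python
import math

M_E = 0.511  # electron mass in MeV

def sqrt_s(beam_energy_mev):
    """Centre-of-mass energy (MeV) for a positron beam hitting electrons at rest."""
    return math.sqrt(2.0 * M_E**2 + 2.0 * M_E * beam_energy_mev)

def beam_energy_for(resonance_mass_mev):
    """Beam energy (MeV) needed to sit on an s-channel resonance of the given mass."""
    return (resonance_mass_mev**2 - 2.0 * M_E**2) / (2.0 * M_E)

print(f"E_beam for a 16.90 MeV resonance: ~{beam_energy_for(16.90):.0f} MeV")
print(f"sqrt(s) at the maximum 550 MeV beam energy: {sqrt_s(550.0):.1f} MeV")
```

A resonance near 16.9 MeV would thus be produced at a beam energy of roughly 280 MeV, comfortably within the 550 MeV reach mentioned above.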

For theorists, X17 is an awkward fit. Most consider dark photons and axions to be the best motivated candidates for low mass, weakly coupled new physics states, says Claudio Toni of LAPTh. Another possibility, he says, is a bound state of known particles, though QCD states such as pions are about eight times heavier, and pure QED effects usually occur at much lower scales than 17 MeV.

“We should be cautious,” says Toni. “Since X17 is expected to couple to both protons and electrons, the absence of signals elsewhere forces any theoretical proposal to respect stringent constraints. We should focus on its phenomenology.”

ATLAS confirms top–antitop excess https://cerncourier.com/a/atlas-confirms-top-antitop-excess/ Tue, 09 Sep 2025 08:20:11 +0000 https://cerncourier.com/?p=114346 The ATLAS collaboration has confirmed the threshold excess in top–antitop production first reported by CMS, consistent with the formation of toponium.

The post ATLAS confirms top–antitop excess appeared first on CERN Courier.

Quasi-bound candidate

At the LHC, almost all top–antitop pairs are produced in a smooth invariant-mass spectrum described by perturbative QCD. In March, the CMS collaboration announced the discovery of an additional contribution, at the level of about 1%, localised near the energy threshold for producing a top quark and its antiquark (CERN Courier May/June 2025 p7). The ATLAS collaboration has now confirmed this observation.

“The measurement was challenging due to the small cross section and the limited mass resolution of about 20%,” says Tomas Dado of the ATLAS collaboration and CERN. “Sensitivity was achieved by exploiting high statistics, lepton angular variables sensitive to spin correlations, and by carefully constraining modelling uncertainties.”

Toponium

The simplest explanation for the excess appears to be a spectrum of “quasi-bound” states of a top quark and its antiquark that are often collectively referred to as toponium, by analogy with the charmonium states discovered in the November Revolution of 1974 and the bottomonium states found three years later (see “Memories of quarkonia”). But there the similarities end. Thanks to the unique properties of the most massive fundamental particle yet discovered, toponium is expected to be exceptionally broad rather than exceptionally narrow in energy spectra, and to disintegrate via the weak decay of its constituent quarks rather than via their mutual annihilation.

“Historically, it was assumed that the LHC would never reach the sensitivity required to probe such effects, but ATLAS and CMS have shown that this expectation was too pessimistic,” says Benjamin Fuks of the Sorbonne. “This regime corresponds to the production of a slowly moving top–antitop pair that has time to exchange multiple gluons before one of the top quarks decays. The invariant mass of the system lies slightly below the open top–antitop threshold, which implies that at least one of the top quarks is off-shell. This contrasts with conventional top–antitop production, where the tops are typically produced far above threshold, move relativistically and do not experience significant non-relativistic gluon dynamics.”

While CMS fitted a pseudo-scalar resonance that couples to gluons and top quarks – the essential features of the ground state of toponium – the new ATLAS analysis employs a model recently published by Fuks and his collaborators that additionally includes all S-wave excitations. ATLAS reports a cross-section for such quasi-bound excitations of 9.0 ± 1.3 pb, consistent with CMS’s measurement of 8.8 ± 1.3 pb. ATLAS’s measurement rises to 13.9 ± 1.9 pb when applying the same signal model as CMS.
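
As a naive consistency check of the quoted numbers – one that ignores correlated systematic uncertainties and the fact that the two results are extracted with different signal models – the two measurements agree well within their uncertainties:

```python
import math

atlas, atlas_err = 9.0, 1.3   # pb, ATLAS, S-wave quasi-bound-state model
cms,   cms_err   = 8.8, 1.3   # pb, CMS, pseudoscalar ground-state model

# Naive significance of the difference, treating the two uncertainties as
# uncorrelated and Gaussian (a simplification: correlations and the
# different signal models are ignored).
delta = abs(atlas - cms)
sigma = math.sqrt(atlas_err**2 + cms_err**2)
print(f"difference: {delta:.1f} pb  (~{delta / sigma:.1f} sigma)")
```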

Future measurements of top quark–antiquark pairs will compare the threshold excess to the expectations of non-relativistic QCD, search for the possible presence of new fields beyond the Standard Model, and study the quantum entanglement of the top and antitop quarks.

“At the High-Luminosity LHC, the main objective is to exploit the much larger dataset to go beyond a single-bin description of the sub-threshold top–antitop invariant mass distribution,” says Fuks. “At a future electron–positron collider, the top–antitop threshold scan has long been recognised as a cornerstone measurement, with toponium contributions playing an essential role.”

For Dado, this story reflects a satisfying interplay between theorists and the LHC experiments.

“Theorists proposed entanglement studies, ATLAS demonstrated entangled top–antitop pairs and CMS applied spin-sensitive observables to reveal the quasi-bound-state effect,” he says. “The next step is for theory to deliver a complete description of the top–antitop threshold.”

US publishes 40-year vision for particle physics https://cerncourier.com/a/us-publishes-40-year-vision-for-particle-physics/ Tue, 09 Sep 2025 08:20:08 +0000 https://cerncourier.com/?p=114354 In June, the US National Academies of Sciences, Engineering, and Medicine published an unprecedented 40-year strategy for US particle physics.

The post US publishes 40-year vision for particle physics appeared first on CERN Courier.

Elementary Particle Physics: The Higgs and Beyond

Big science requires long-term planning. In June, the US National Academies of Sciences, Engineering, and Medicine published an unprecedented 40-year strategy for US particle physics titled Elementary Particle Physics: The Higgs and Beyond. Its recommendations include participating in the proposed Future Circular Collider at CERN and hosting the world’s highest-energy elementary particle collider around the middle of the century (see “Eight recommendations” panel). The report assesses that a 10 TeV muon collider would complement the discovery potential of a 100 TeV proton collider.

“The shift to a 40-year horizon in the new report reflects a recognition that modern particle-physics projects and scientific questions are of unprecedented scale and complexity, demanding a much longer-term strategic commitment, international cooperation and investment for continued leadership,” says report co-chair Maria Spiropulu of the California Institute of Technology. “A staggered approach towards large research-infrastructure projects, rich in scientific advancement, technological breakthroughs and collaboration, can shield the field from stagnation.”

Eight recommendations

1. The US should host the world’s highest-energy elementary particle collider around the middle of the century. This requires the immediate creation of a national muon collider R&D programme to enable the construction of a demonstrator of the key new technologies and their integration.

2. The US should participate in the international Future Circular Collider Higgs factory currently under study at CERN to unravel the physics of the Higgs boson.

3. The US should continue to pursue and develop new approaches to questions ranging from neutrino physics and tests of fundamental symmetries to the mysteries of dark matter, dark energy, cosmic inflation and the excess of matter over antimatter in the universe.

4. The US should explore new synergistic partnerships across traditional science disciplines and funding boundaries.

5. The US should invest for the long journey ahead with sustained R&D funding in accelerator science and technology, advanced instrumentation, all aspects of computing, emerging technologies from other disciplines and a healthy core research programme.

6. The federal government should provide the means and the particle-physics community should take responsibility for recruiting, training, mentoring and retaining the highly motivated student and postdoctoral workforce required for the success of the field’s ambitious science goals.

7. The US should engage internationally through existing and new partnerships, and explore new cooperative planning mechanisms.

8. Funding agencies, national laboratories and universities should work to minimise the environmental impact of particle-physics research and facilities.

Source: National Academies of Sciences, Engineering, and Medicine 2025 Elementary Particle Physics: The Higgs and Beyond. Washington, DC: The National Academies Press.

The report is authored by a committee of leading scientists selected by the National Academies. Its mandate complements the grassroots-led Snowmass process and the budget-conscious P5 process (CERN Courier January/February 2024 p7). The previous report in this series, Revealing the Hidden Nature of Space and Time: Charting the Course for Elementary Particle Physics, was published in 2006. It called for the full exploitation of the LHC, a strategic focus on linear-collider R&D, expanding particle astrophysics, and pursuing an internationally coordinated, staged programme in neutrino physics.

Two conclusions underpin the new report’s recommendations. The first identifies three workforce issues currently threatening the future of particle physics: the morale of early-career scientists, a shortfall in the number of accelerator scientists, and growing barriers to international exchanges. The second urges US leadership in elementary particle physics, citing benefits to science, the nation and humanity.

Full coherence at fifty https://cerncourier.com/a/full-coherence-at-fifty/ Tue, 09 Sep 2025 08:20:06 +0000 https://cerncourier.com/?p=114330 The CONUS+ collaboration presents evidence for CEνNS in the fully coherent regime.

The post Full coherence at fifty appeared first on CERN Courier.

The most common neutrino interactions are the most difficult to detect. But thanks to advances in detector technology, coherent elastic neutrino–nucleus scattering (CEνNS) is emerging from behind backgrounds, 50 years after it was first hypothesised. These low-energy interactions are insensitive to the intricacies of nuclear or nucleon structure, making them a promising tool for precision searches for physics beyond the Standard Model. They also offer a route to miniaturising neutrino detectors.

“I am convinced that we are seeing the beginning of a new field in neutrino physics based on CEνNS observations,” says Manfred Lindner (Max Planck Institute for Nuclear Physics in Heidelberg), the spokesperson for the CONUS+ experiment, which reported the first evidence for fully coherent CEνNS in July. “The technology of CONUS+ is mature and seems scalable. I believe that we are at the beginning of precision neutrino physics with CEνNS and CONUS+ is one of the door openers!”

Act of hubris

Daniel Z Freedman is not best known for CEνNS, but in 1974 the future supergravity architect suggested that experimenters search for evidence of neutrinos interacting not with individual nucleons but “coherently” with entire nuclei. This process should dominate when the de Broglie wavelength of the neutrino is comparable to or larger than the diameter of the nucleus. Because the amplitudes for the incoming neutrino to exchange a Z boson with each individual neutron add coherently, rather than the probabilities, the rate grows as the square of the neutron number, N². As a result, CEνNS cross sections are typically enhanced by a factor of between 100 and 1000.
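
In its simplest textbook form – quoted here for orientation rather than taken from the CONUS+ analysis, and valid in the fully coherent limit where the nuclear form factor is close to unity – the total cross section reads:

```latex
% Tree-level CEvNS cross section in the fully coherent limit
% (nuclear form factor F(Q^2) ~ 1, recoil energies far below E_nu)
\sigma_{\mathrm{CE\nu NS}} \;\simeq\; \frac{G_{F}^{2}}{4\pi}\, Q_{W}^{2}\, E_{\nu}^{2} ,
\qquad
Q_{W} \;=\; N - \left(1 - 4\sin^{2}\theta_{W}\right) Z \;\approx\; N .
```

Since sin²θW ≈ 0.24, the proton contribution nearly cancels and the rate scales essentially as N² – the origin of the enhancement factor of 100 to 1000 quoted above.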

Freedman noted that his proposal may have been an “act of hubris”, because the interaction rate, detector resolution and backgrounds would all pose grave experimental difficulties. His caveat was perspicacious. It took until 2017 for indisputable evidence for CEνNS to emerge at Oak Ridge National Laboratory in the US, where the COHERENT experiment observed CEνNS by neutrinos with a maximum energy of 52 MeV, emerging from pion decays at rest (CERN Courier October 2017 p8). At these energies, the coherence condition is only partially fulfilled, and nuclear structure still plays a role.

The CONUS+ collaboration now presents evidence for CEνNS in the fully coherent regime. The experiment – one of many launched at nuclear reactors following the COHERENT demonstration – uses reactor electron antineutrinos with energies below 10 MeV, recorded across 119 days of data taking at the Leibstadt Nuclear Power Plant in Switzerland. The team observed 395 ± 106 neutrino events compared to a Standard Model expectation of 347 ± 59 events, corresponding to a statistical significance for the observation of CEνNS of 3.7σ.
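
A back-of-the-envelope reading of those numbers (the published result comes from a full likelihood analysis; this is only a naive cross-check of the quoted values):

```python
# Naive significance estimate from the quoted CONUS+ numbers: an extracted
# signal of 395 events with a total uncertainty of 106 events. The real
# significance is derived from a likelihood fit, not from this simple ratio.
signal, signal_err = 395.0, 106.0
print(f"signal / uncertainty ~ {signal / signal_err:.1f} sigma")  # ~3.7 sigma
```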

I am convinced that we are seeing the beginning of a new field in neutrino physics based on CEνNS observations

It is no wonder that detection took 50 years. The only signal of CEνNS is a gentle nuclear recoil – often compared to the impact of a ping-pong ball on a tanker. In CONUS+, the nuclear recoils of the CEνNS interactions are detected using the ionisation signal of point-contact high-purity germanium detectors with energy thresholds as low as 160 eV.

The team has now increased the mass of their four semiconductor detectors from 1 to 2.4 kg to provide better statistics and potentially a lower threshold energy. CONUS+ is highly sensitive to physics beyond the Standard Model, says the team, including non-standard interaction parameters, new light mediators and electromagnetic properties of the neutrino such as electrical millicharges or neutrino magnetic moments. Lindner estimates that the CONUS+ technology could be scaled up to 100 kg, potentially yielding 100,000 CEνNS events per year of operation.

Into the neutrino fog

One researcher’s holy grail is another’s curse. In 2024, dark-matter experiments reported entering the “neutrino fog”, as their sensitivity to nuclear recoils crossed the threshold to detect a background of solar-neutrino CEνNS interactions. The PandaX-4T and XENONnT collaborations reported 2.6σ and 2.7σ evidence for CEνNS interactions in their liquid–xenon time projection chambers, based on estimated signals of 79 and 11 interactions, respectively. These were the first direct measurements of nuclear recoils from solar neutrinos with dark-matter detectors. Boron-8 solar neutrinos have slightly higher energies than those detected by CONUS+, and are also in the fully coherent regime.

CEνNS has promise for nuclear-reactor monitoring

“The neutrino flux in CONUS+ is many orders of magnitude bigger than in dark-matter detectors,” notes Lindner, who is also co-spokesperson of the XENON collaboration. “This is compensated by a much larger target mass, a larger CEνNS cross section due to the larger number of neutrons in xenon versus germanium, a longer running time and differences in detection efficiencies. Both experiments have in common that all backgrounds of natural or imposed radioactivity must be suppressed by many orders of magnitude such that the CEνNS process can be extracted over backgrounds.”

The current experimental frontier for CEνNS is towards low energy thresholds, concludes COHERENT spokesperson Kate Scholberg of Duke University. “The coupling of recoil energy to observable energy can be in the form of a dim flash of light picked up by light sensors, a tiny zap of charge collected in a semiconductor detector, or a small thermal pulse observed in a bolometer. A number of collaborations are pursuing novel technologies with sub-keV thresholds, among them cryogenic bolometers. A further goal is measurement over a range of nuclei, as this will test the SM prediction of an N² dependence of the CEνNS cross section. And for higher-energy neutrino sources, for which the coherence is not quite perfect, there are opportunities to learn about nuclear structure. Another future possibility is directional recoil detection. If we are lucky, nature may give us a supernova burst of CEνNS recoils. As for societal applications, CEνNS has promise for nuclear-reactor monitoring for nonproliferation purposes due to its large cross section and interaction threshold below that for inverse-beta-decay of 1.8 MeV.”

The post Full coherence at fifty appeared first on CERN Courier.

News The CONUS+ collaboration presents evidence for CEνNS in the fully coherent regime. https://cerncourier.com/wp-content/uploads/2025/09/CCSepOct25_NA_CONUS.jpg
Einstein Probe detects exotic gamma-ray bursts https://cerncourier.com/a/einstein-probe-detects-exotic-gamma-ray-bursts/ Tue, 09 Sep 2025 08:20:06 +0000 https://cerncourier.com/?p=114371 Early results from the Einstein Probe identify soft X-ray events, questioning standard gamma-ray burst emission models.

The post Einstein Probe detects exotic gamma-ray bursts appeared first on CERN Courier.

Supernovae are some of the most well-known astrophysical phenomena. The energies involved in these powerful explosions are, however, dwarfed by those of gamma-ray bursts (GRBs). These extragalactic events are the most powerful electromagnetic explosions in the universe and play an important role in its evolution. First detected in 1967, they consist of a bright pulse of gamma rays, lasting from several seconds to several minutes. This is followed by an afterglow emission that can be measured from X-rays down to radio energies for days or even months. Thanks to 60 years of observations of these events by a range of detectors, we now know that the longer GRBs are an extreme version of a core-collapse supernova. In GRBs, the death of the heavy star is accompanied by two powerful relativistic jets. If such a jet points towards Earth, we can detect gamma-ray photons even from GRBs at distances of billions of light years. Thanks to detailed observations, the afterglow is now understood to be the result of synchrotron emission produced as the jet crashes into the interstellar medium.

Dedicated gamma-ray satellites have now detected the gamma-ray components of more than 10,000 GRBs, and the most common models associate the longer ones with supernovae. This association has been confirmed by detections of afterglow emission coinciding with supernova events in other galaxies. The exact characteristics that cause some heavy stars to produce a GRB remain, however, poorly understood. Furthermore, many open questions remain regarding the nature and origin of the relativistic jets and how the gamma rays are produced within them.

While the emission has been studied extensively in gamma rays, detections at soft X-ray energies are limited. This changed in early 2024 with the launch of the Einstein Probe (EP) satellite. EP is a novel X-ray telescope, developed by the Chinese Academy of Sciences (CAS) in collaboration with ESA, the Max Planck Institute for Extraterrestrial Physics and the Centre National d’Études Spatiales. EP is unique in its wide field of view (1/11th of the sky) in soft X-rays, made possible thanks to complex X-ray optics. As GRBs occur at random positions in the sky at random times, the large field of view increases its chance of observing them. Within its first year, EP detected several GRB events, most of which challenge our understanding of these phenomena.

One of these occurred on 14 April 2024. It consisted of a bright flash of X-rays lasting about 2.5 minutes. The event was also observed by ground-based optical and radio telescopes that were alerted to its location in the sky by EP. These observations at lower photon energies were consistent with a weak afterglow together with the signatures from a relatively standard supernova-like event. The supernova emission showed it to originate from a star which, prior to its death, had already shed its outer layers of hydrogen and helium. Along with the spectrum detected by EP, the detection of an afterglow indicates the existence of a relativistic jet. The overall picture is therefore consistent with a GRB. However, a crucial part was missing: a gamma-ray component.

In addition, the emission spectrum observed by EP is significantly softer, peaking at keV rather than the hundreds of keV typical for GRBs. The results hint at this being an explosion that produced a relativistic jet which – for unknown reasons – was not energetic enough to produce the standard gamma-ray emission. The progenitor star therefore appears to bridge the stellar populations that cause a “simple” core-collapse supernova and those that produce GRBs.

Another event, detected on 15 March 2024, produced soft X-ray emission in six separate episodes spread over 17 minutes. Here, a gamma-ray component was detected by NASA’s Swift BAT instrument, confirming it to be a GRB. However, unlike any other GRB, the gamma-ray emission started long after the onset of the X-ray emission. This lack of gamma-ray emission in the early stages is difficult to reconcile with standard emission models, in which the emission comes from a single uniform jet and the highest energies are emitted at the start, when the jet is at its most energetic.

In their publication in Nature Astronomy, the EP collaboration suggests the possibility that the early X-ray emission comes either from shocks produced by the supernova explosion itself or from weaker relativistic jets preceding the main powerful jet. Other proposed explanations invoke complex jet structures and posit that EP observed the jet far away from its centre. In this explanation, the matter in the jet moves fastest at the centre, while at the edges its Lorentz factor (or velocity) is significantly lower, producing a lower-energy, longer-lasting emission that was undetectable before the launch of EP.
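
One way to see why a slower jet, or the slower edge of a structured jet, produces softer and longer-lasting emission is through the relativistic Doppler factor; the Lorentz factors below are purely illustrative and not taken from the EP analysis:

```python
import math

# For emission along the line of sight the Doppler factor is
# delta = 1 / (Gamma * (1 - beta)); observed photon energies scale roughly
# like delta, while observed durations scale like 1/delta.
def doppler_on_axis(gamma):
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return 1.0 / (gamma * (1.0 - beta))

for gamma in (300, 30):   # assumed: a "classical" GRB jet core vs a much slower component
    print(f"Gamma = {gamma:4d}  ->  delta ~ {doppler_on_axis(gamma):.0f}")
# A factor ~10 drop in Gamma shifts the emission down in energy and stretches it in time.
```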

Overall, the two detections appear to indicate that the GRBs detected over the last 60 years, where the emission was dominated by gamma rays, were only a subset of a more complex phenomenon. At a time when two of the most important instruments in GRB astronomy of the last two decades, NASA’s Fermi and Swift missions, are proposed to be switched off, EP is taking over an important role and opening the window to soft X-ray observations.

The post Einstein Probe detects exotic gamma-ray bursts appeared first on CERN Courier.

News Early results from the Einstein Probe identify soft X-ray events, questioning standard gamma-ray burst emission models. https://cerncourier.com/wp-content/uploads/2025/09/CCSepOct25_NA_Astro.jpg
CP symmetry in diphoton Higgs decays https://cerncourier.com/a/cp-symmetry-in-diphoton-higgs-decays/ Tue, 09 Sep 2025 08:19:13 +0000 https://cerncourier.com/?p=114441 The CMS collaboration analysed Higgs-boson decays to two photons, setting limits on anomalous couplings that would violate CP symmetry.

The post CP symmetry in diphoton Higgs decays appeared first on CERN Courier.

CMS figure 1

In addition to giving mass to elementary particles, the Brout–Englert–Higgs mechanism provides a testing ground for the fundamental symmetries of nature. In a recent analysis, the CMS collaboration searched for violations of charge–parity (CP) symmetry in the decays of Higgs bosons into two photons. The results set some of the strongest limits to date on anomalous Higgs-boson couplings that violate CP symmetry.

CP symmetry is particularly interesting as violations reveal fundamental differences in the behaviour of matter and antimatter, potentially explaining why the former appears to be much more abundant in the observed universe. While the Standard Model predicts that CP symmetry should be violated, the effect is not sufficient to account for the observed imbalance, motivating searches for additional sources of CP violation. CP symmetry requires that the laws of physics remain the same when particles are replaced by their corresponding antiparticles (C symmetry) and their spatial coordinates are reflected as in a mirror (P symmetry). In 1967, Andrei Sakharov established CP violation as one of three necessary requirements for a cosmic imbalance between matter and antimatter.

The CMS collaboration probed Higgs-boson interactions with electroweak bosons and gluons, using decays into two energetic photons. This final state is particularly precise: photons are well reconstructed thanks to the energy resolution of the CMS electromagnetic calorimeter and backgrounds can be accurately estimated. The analysis employed 138 fb⁻¹ of proton–proton collision data at a centre-of-mass energy of 13 TeV and focused on two main channels. Electroweak production of the Higgs boson, via vector boson fusion (VBF) or in association with a W or Z boson (VH), tests the Higgs boson’s couplings to electroweak gauge bosons. Gluon fusion, which occurs through loops dominated by the top quark, is sensitive to possible CP-violating interactions with fermions. A full angular analysis was performed to separate different coupling hypotheses, exploiting both the kinematic properties of the photons from the Higgs boson decay and the particles produced alongside it.

The matrix element likelihood approach (MELA) was used to minimise the number of observables, while retaining all essential information. Deep neural networks and boosted decision trees classified events based on their topology and kinematic properties, isolating signal-like events from background or alternative new-physics scenarios. Events were then grouped into analysis categories, each optimised to enhance sensitivity to anomalous couplings for a specific production mode.
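
Analyses of this type usually express a possible CP-odd contribution as an effective cross-section fraction. As a hedged illustration of the parameterisation commonly used in HVV anomalous-coupling studies (not quoted verbatim from the CMS paper):

$$
f_{a3} \;=\; \frac{|a_{3}|^{2}\,\sigma_{3}}{|a_{1}|^{2}\,\sigma_{1} + |a_{3}|^{2}\,\sigma_{3}},
$$

where $a_1$ and $a_3$ denote the CP-even and CP-odd HVV couplings, $\sigma_i$ is the cross section each coupling would produce alone, and $f_{a3}=0$ corresponds to the pure Standard Model, CP-conserving case.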

The data favour the Standard Model configuration, with no significant deviation from its predictions (see figure 1). By placing some of the most stringent constraints yet on CP-violating interactions between the Higgs boson and vector bosons, the study highlights how precise measurements in simple final states can yield insights into the symmetries governing particle physics. With the upcoming data from Run 3 of the LHC and the High-Luminosity LHC, CMS is well positioned to push these limits further and potentially uncover hidden aspects of the Higgs sector.

The post CP symmetry in diphoton Higgs decays appeared first on CERN Courier.

News The CMS collaboration analysed Higgs-boson decays to two photons, setting limits on anomalous couplings that would violate CP symmetry. https://cerncourier.com/wp-content/uploads/2025/09/CCSepOct25_EF_CMS_feature.jpg
Charming energy–energy correlators https://cerncourier.com/a/charming-energy-energy-correlators/ Tue, 09 Sep 2025 08:19:11 +0000 https://cerncourier.com/?p=114448 The ALICE collaboration has measured energy–energy correlators of charm-quark jets for the first time, observing the expected suppression at small angles.

The post Charming energy–energy correlators appeared first on CERN Courier.

ALICE figure 1

Narrow sprays of particles called jets erupt from high-energy quarks and gluons. The ALICE collaboration has now measured so-called energy–energy correlators (EECs) of charm-quark jets for the first time – revealing new details of the elusive “dead cone” effect.

Unlike in quantum electrodynamics, the quantum chromodynamics (QCD) coupling constant gets weaker at higher energies – a feature known as asymptotic freedom. This allows high-energy partons to scatter and radiate additional partons, forming showers. As their energy splits between more and more products, decreasing toward the characteristic QCD confinement scale, interactions grow strong enough to bind partons within colour-neutral hadrons. The structure, energy profile and angular distribution of particles within the jets bear traces of the initial collision and the parton-to-hadron transitions, making them powerful probes of both perturbative and non-perturbative QCD effects. To understand the interplay between these two regimes, researchers track how jet properties vary with the mass and colour of the initiating partons.

Due to the gluon’s larger colour charge, QCD predicts gluon-initiated jets to be broader and contain more low-momentum particles than those from quarks. Additionally, the significant mass of heavy quarks should suppress collinear gluon emission, inducing the so-called “dead-cone” effect at small angles. These expectations can be tested by comparing jet substructure across flavours. A key observable for this purpose is the EEC, which measures how energy is distributed within a jet as a function of the angular separation RL between particle pairs. The large-RL region is dominated by early partonic splittings, reflecting perturbative dynamics, while a small RL value corresponds to later radiation shaped by final-state hadrons. The intermediate-RL region captures the transition where hadronisation begins to affect the jet structure. This characteristic shape enables the separation of perturbative and non-perturbative regimes, revealing flavour-dependent dynamics of jet formation and hadronisation.
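
Conceptually, the EEC is a pair-weighted histogram: every particle pair in a jet contributes the product of its momentum fractions at its angular separation RL. A minimal sketch with made-up particle kinematics (not ALICE code or data), using transverse momenta as energy proxies as is common at hadron colliders, illustrates the bookkeeping:

```python
import itertools
import math

# Toy charged-particle kinematics for one jet: (pt in GeV, eta, phi).
particles = [(20.0, 0.05, 0.10), (12.0, -0.10, 0.30), (5.0, 0.20, -0.15)]
pt_jet = sum(p[0] for p in particles)

def delta_r(p1, p2):
    deta = p1[1] - p2[1]
    dphi = math.remainder(p1[2] - p2[2], 2 * math.pi)   # wrap phi difference
    return math.hypot(deta, dphi)

# Each pair contributes pt_i * pt_j / pt_jet^2 at its angular separation R_L.
eec = [(delta_r(p1, p2), p1[0] * p2[0] / pt_jet**2)
       for p1, p2 in itertools.combinations(particles, 2)]
for r_l, weight in sorted(eec):
    print(f"R_L = {r_l:.3f}   weight = {weight:.3f}")
# In the measurement these weights are histogrammed in R_L over many jets.
```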

The ALICE Collaboration measured the EEC for charm–quark jets tagged with D0 mesons, reconstructed via the D0 K π+ decay mode (branching ratio 3.93 ± 0.04%), in proton–proton collisions at centre-of-mass energy 13 TeV. Jets are inferred from charged-particle tracks using the anti-kT algorithm, clustering products in momentum space with a resolution parameter R = 0.4.

At low transverse momentum, where the effect of the charm-quark mass is most prominent, the EEC amplitude is found to be significantly suppressed for charm jets relative to inclusive jets initiated by light quarks and gluons. The difference is more pronounced at small angles due to the dead-cone effect (see figure 1). Despite the sizable charm-quark mass, the distribution peak position remains similar across the two populations, pointing to a complex mix of parton flavour effects in the shower evolution and enhanced non-perturbative contributions such as hadronisation. Perturbative QCD calculations reproduce the general shape at large RL but show tension near the peak, indicating the need for theoretical improvements for heavy-quark jets. The upward trend in the ratio of charm to inclusive jets as a function of RL, reproduced with PYTHIA 8, suggests differences in their fragmentation.
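
The angular scale of this suppression can be estimated from the usual dead-cone relation, θ ≈ mQ/EQ; the numbers below are illustrative and not taken from the ALICE analysis:

```python
# The dead cone suppresses gluon radiation at angles below roughly m_Q / E_Q.
# Assumed numbers: a charm-quark mass of 1.27 GeV and a few example energies.
m_charm = 1.27   # GeV
for energy in (5.0, 10.0, 20.0):   # GeV
    print(f"E = {energy:5.1f} GeV  ->  dead-cone angle ~ {m_charm / energy:.2f} rad")
# The suppression therefore appears at small R_L and fades for more energetic jets,
# consistent with low-pT charm-jet EECs being most affected.
```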

This first measurement of the heavy-flavour jet EEC helps disentangle perturbative and non-perturbative QCD effects in jet formation, constraining theoretical models. Furthermore, it provides an essential vacuum baseline for future studies in heavy-ion collisions, where the quark–gluon plasma is expected to alter jet properties.

The post Charming energy–energy correlators appeared first on CERN Courier.

News The ALICE collaboration has measured energy–energy correlators of charm-quark jets for the first time, observing the expected suppression at small angles. https://cerncourier.com/wp-content/uploads/2025/09/CCSepOct25_EF_ALICE_feature.jpg
Mapping rare Higgs-boson decays https://cerncourier.com/a/mapping-rare-higgs-boson-decays/ Tue, 09 Sep 2025 08:19:08 +0000 https://cerncourier.com/?p=114453 The ATLAS collaboration reports on combined Run-2 and Run-3 results on the rare Higgs boson decay channel H→μμ and H→Zγ.

The post Mapping rare Higgs-boson decays appeared first on CERN Courier.

ATLAS figure 1

Rare, unobserved decays of the Higgs boson are natural places to search for new physics. At the EPS-HEP conference, the ATLAS collaboration presented new improved measurements of two highly suppressed Higgs decays: into a pair of muons; and into a Z boson accompanied by a photon. Producing a single event of either H → μμ or H → Zγ→ (ee/μμ) γ at the LHC requires, on average, around 10 trillion proton–proton collisions. The H → μμ and H → Zγ signals appear as narrow resonances in the dimuon and Zγ invariant mass spectra, atop backgrounds some three orders of magnitude larger.
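
The quoted "around 10 trillion collisions" per event can be checked with round numbers for the cross sections and branching fractions (assumed values for illustration, not the ATLAS bookkeeping):

```python
# Order-of-magnitude check of "around 10 trillion collisions per event".
# Assumed round numbers: total Higgs cross section ~60 pb at 13.6 TeV,
# inelastic pp cross section ~80 mb, BR(H->mumu) ~ 2.2e-4 and
# BR(H->Zgamma) x BR(Z->ee/mumu) ~ 1.5e-3 x 0.067.
sigma_inel_pb = 80e-3 * 1e12          # 80 mb expressed in pb
sigma_higgs_pb = 60.0
for label, br in [("H->mumu", 2.2e-4), ("H->Zgamma->llgamma", 1.5e-3 * 0.067)]:
    n_collisions = sigma_inel_pb / (sigma_higgs_pb * br)
    print(f"{label:22s}  ~{n_collisions:.1e} pp collisions per event")
# Both land in the ballpark of 10^13, i.e. roughly ten trillion collisions.
```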

In the Standard Model, the Brout–Englert–Higgs mechanism gives mass to the muon through its Yukawa coupling to the Higgs field, which can be tested via the rare H → μμ decay. An indirect comparison with the well-known muon mass, determined to 22 parts per billion, provides a stringent test of the mechanism in the second fermion generation and is a powerful probe of new physics. With a branching ratio of just 0.02%, and a large background dominated by the Drell–Yan production of muon pairs through virtual photons or Z bosons, the inclusive signal-over-background ratio plunges to the level of one part in a thousand. To single out its decay signature, the ATLAS collaboration employed machine-learning techniques for background suppression and generated over five billion Drell–Yan Monte Carlo events at next-to-leading-order accuracy in QCD, all passed through the full detector simulation. This high-precision sample provides templates to refine the background model and minimise bias on the tiny H → μμ signal.

The Higgs boson can decay into a Z boson and a photon via loop diagrams involving W bosons and heavy charged fermions, like the top quark. Detecting this rare process would complete the suite of established decays into electroweak boson pairs and offer a window on physics beyond the Standard Model. To reduce QCD background and improve sensitivity, the ATLAS analysis focused on Z bosons further decaying into electron or muon pairs, with an overall branching fraction of 7%. This additional selection reduces the event rate to about one in 10,000 Higgs decays, with an inclusive signal-over-background ratio at the per-mille level. The low momenta of final-state particles, combined with the high-luminosity conditions of LHC Run 3, pose additional challenges for signal extraction and suppression of Z + jets backgrounds. To enhance signal significance, the ATLAS collaboration improved background modelling techniques, optimised event categorisation by Higgs production mode, and employed machine learning to boost sensitivity.

The two ATLAS searches are based on 165 fb⁻¹ of LHC Run 3 proton–proton collision data collected between 2022 and 2024 at √s = 13.6 TeV, with a rigorous blinding procedure in place to prevent biases. Both channels show excesses at the Higgs-boson mass of 125.09 GeV, with observed (expected) 2.8σ (1.8σ) significance for H → μμ and 1.4σ (1.5σ) for H → Zγ. These results are strengthened by combining them with 140 fb⁻¹ of Run-2 data collected at √s = 13 TeV, updating the H → μμ and H → Zγ observed (expected) significances to 3.4σ (2.5σ) and 2.5σ (1.9σ), respectively (see figure 1). The measured signal strengths are consistent with the Standard Model within uncertainties.
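
As a rough consistency check, significances from independent datasets combine approximately in quadrature; taking the earlier Run-2 H → μμ sensitivity to be about 2σ (an assumed round number) reproduces the quoted combination:

```python
import math

# Independent datasets: significances add approximately in quadrature.
# The ~2.0 sigma Run-2 H->mumu input is an assumed round number for illustration;
# the quoted 3.4 sigma comes from the full statistical combination.
z_run3_mumu = 2.8
z_run2_mumu = 2.0
print(f"quadrature combination ~ {math.hypot(z_run2_mumu, z_run3_mumu):.1f} sigma")  # ~3.4
```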

These results mark the ATLAS collaboration’s first evidence for the H → μμ decay, following the earlier claim by CMS based on Run-2 data (see CERN Courier September/October 2020 p7). Meanwhile, the H → Zγ search achieves a 19% increase in expected significance with respect to the combined ATLAS–CMS Run-2 analysis, which first reported evidence for this process. As Run 3 data-taking continues, the LHC experiments are closing in on establishing these two rare Higgs decay channels. Both will remain statistically limited throughout the LHC’s lifetime, with ample room for discovery in the high-luminosity phase.

The post Mapping rare Higgs-boson decays appeared first on CERN Courier.

News The ATLAS collaboration reports on combined Run-2 and Run-3 results on the rare Higgs boson decay channel H→μμ and H→Zγ. https://cerncourier.com/wp-content/uploads/2025/09/CCSepOct25_EF_ATLAS_feature.jpg
Closing the gap on axion-like particles https://cerncourier.com/a/closing-the-gap-on-axion-like-particles/ Tue, 09 Sep 2025 08:19:00 +0000 https://cerncourier.com/?p=114437 The LHCb collaboration searched for axion-like particles decaying into photon pairs, setting bounds on their couplings in the low-mass region.

The post Closing the gap on axion-like particles appeared first on CERN Courier.

LHCb figure 1

Axion-like particles (ALPs) are some of the most promising candidates for physics beyond the Standard Model. At the LHC, searches for ALPs that couple to gluons and photons have so far been limited to masses above 10 GeV due to trigger requirements that reduce low-energy sensitivity. In its first ever analysis of a purely neutral final state, the LHCb collaboration has now extended this experimental reach and set new bounds on the ALP parameter space.

When a global symmetry is spontaneously broken, it gives rise to massless excitations called Goldstone bosons, which reflect the system’s freedom to transform continuously without changing its energy. ALPs are thought to arise via a similar mechanism, though they acquire a small mass because the symmetries from which they originate are only approximate. Depending on the underlying theory, they could contribute to dark matter, solve the strong-CP problem, or mediate interactions with a hidden sector. Their coupling to known particles varies across models, leading to a range of potential experimental signatures. Among the most compelling are those involving gluons and photons.

Thanks to the magnitude of the strong coupling constant, even a small interaction with gluons can dominate the production and decay of ALPs. This makes searches at the LHC challenging since low-energy jets in proton–proton collisions are often indistinguishable from the expected ALP decay signature. In this environment, a more effective approach is to focus on the photon channel and search for ALPs that are produced in proton–proton collisions – mostly via gluon–gluon fusion – and that decay into photon pairs. These processes have been investigated at the LHC, but previous searches were limited by trigger thresholds requesting photons with large momentum components transverse to the beam. This is particularly restrictive for low-mass ALPs, whose decay products are often too soft to pass these thresholds.

The new search, based on Run-2 data collected in 2018, overcomes this limitation by leveraging the LHCb detector’s flexible software-based trigger system, lower pile-up and forward geometry. The latter enhances sensitivity to products with a small momentum component transverse to the beam, making it well suited to probe resonances in the 4.9 to 19.4 GeV mass region. This is the first LHCb analysis of a purely neutral final state, hence requiring a new trigger and selection strategy, as well as a dedicated calibration procedure. Candidate photon pairs are identified from two high-energy calorimeter clusters, produced in isolation from the rest of the event, which could not originate from charged particles or neutral pions. ALP decays are then sought using maximum likelihood fits that scan the photon-pair invariant mass spectrum for peaks.

No photon-pair excess is observed over the background-only hypothesis, and upper limits are set on the ALP production cross-section times the branching fraction for the decay into photons. These results constrain the ALP decay rate and its coupling to photons, probing a region of parameter space that has so far remained unexplored (see figure 1). The investigated mass range is also of interest beyond ALP searches: alongside the main analysis, the study targeted two-photon decays of B0(s) mesons and of the little-studied ηb meson, almost reaching the sensitivity required for detection of the latter.
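
For orientation, the coupling being constrained is usually defined through the standard ALP–photon interaction, which also fixes the diphoton decay width (a textbook relation, not quoted from the LHCb paper):

$$
\mathcal{L} \;\supset\; -\tfrac{1}{4}\,g_{a\gamma\gamma}\,a\,F_{\mu\nu}\tilde{F}^{\mu\nu},
\qquad
\Gamma(a\to\gamma\gamma) \;=\; \frac{g_{a\gamma\gamma}^{2}\,m_{a}^{3}}{64\pi},
$$

so that, for a given mass $m_a$, an upper limit on the production cross-section times branching fraction can be translated into an excluded range of the coupling $g_{a\gamma\gamma}$.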

The upgraded LHCb detector, which began operations with Run 3 in 2022, is expected to deliver another boost in sensitivity. This will allow future analyses to benefit from the extended flexibility of its purely software trigger, significantly larger datasets and a wider energy coverage of the upgraded calorimeter.

The post Closing the gap on axion-like particles appeared first on CERN Courier.

News The LHCb collaboration searched for axion-like particles decaying into photon pairs, setting bounds on their couplings in the low-mass region. https://cerncourier.com/wp-content/uploads/2025/09/CCSepOct25_EF_LHCb_feature.jpg
Four reasons dark energy should evolve with time https://cerncourier.com/a/four-reasons-dark-energy-should-evolve-with-time/ Tue, 09 Sep 2025 08:18:35 +0000 https://cerncourier.com/?p=114578 Robert Brandenberger argues that the unchanging cosmological constant of the ΛCDM model is theoretically problematic.

The post Four reasons dark energy should evolve with time appeared first on CERN Courier.

In the late 1990s, observational evidence accumulated that the universe is currently undergoing an accelerating expansion. The cause of this acceleration remains a major mystery for physics. The term “dark energy” was coined to explain the data; however, we have no idea what dark energy is. All we know is that it makes up about 70% of the energy density of the universe, and that it does not behave like regular matter – if it is indeed matter and not a modification of the laws of gravity on cosmological scales. If it is matter, then it must have a pressure close to p = –ρ, where ρ is its energy density. The cosmological constant in Einstein’s equations for spacetime acts precisely this way, and a cosmological constant has therefore long been regarded as the simplest explanation for the observations. It is the bedrock of the prevailing ΛCDM model of cosmology – a setup where dark energy is time-independent. But recent observations by the Dark Energy Spectroscopic Instrument provide tantalising evidence that dark energy might be time-dependent, with its pressure slightly increasing over time (CERN Courier May/June 2025 p11). If upcoming data confirm these results, it would require a paradigm shift in cosmology, ruling out the ΛCDM model.

Mounting evidence

From the point of view of fundamental theory, there are at least four good reasons to believe that dark energy must be time-dependent and cannot be a cosmological constant.

The first piece of evidence is well known: if there is a cosmological constant induced by a particle-physics description of matter, then its value should be 120 orders of magnitude larger than observations indicate. This is the famous cosmological constant problem.
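
The mismatch can be reproduced with round numbers by comparing the naive effective-field-theory estimate of the vacuum energy density, of order the Planck mass to the fourth power, with the observed dark-energy density, of order (2.3 meV)⁴ (assumed round values, for illustration only):

```python
import math

# Naive QFT estimate of the vacuum energy density vs the observed value.
m_planck_gev = 2.4e18         # reduced Planck mass in GeV
rho_theory = m_planck_gev**4
rho_observed = (2.3e-12)**4   # 2.3 meV expressed in GeV, to the fourth power
print(f"mismatch ~ 10^{math.log10(rho_theory / rho_observed):.0f}")   # ~10^120
```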

Robert H Brandenberger

A second argument is the “infrared instability” of a spacetime induced by a cosmological constant. Alexander Polyakov (Princeton) has forcefully argued that inhomogeneities on very large length scales would gradually mask a preexisting cosmological constant, making it appear to vary over time.

Recently, other arguments have been put forward indicating that dark energy must be time-dependent. Since quantum matter generates a large cosmological constant when treated as an effective field theory, it should be expected that the cosmological constant problem can only be addressed in a quantum theory of all forces. The best candidate we have is superstring theory. There is mounting evidence that – at least in the regions of the theory under mathematical control – it is impossible to obtain a positive cosmological constant corresponding to the observed accelerating expansion. But one can obtain time-dependent dark energy, for example in quintessence toy models.

Recent observations provide tantalising evidence that dark energy might be time-dependent

The final reason is known as the trans-Planckian censorship conjecture. As the nature of dark energy remains a complete mystery, it is often treated as an effective field theory. This means that one expands all fields in Fourier modes and quantises each field as a harmonic oscillator. The modes one uses have wavelengths that increase in proportion to the scale of space. This creates a theoretical headache at the highest energies. To avoid infinities, an “ultraviolet cutoff” is required at or below the Planck mass. This must be at a fixed physical wavelength. In order to maintain this cutoff in an expanding space, it is necessary to continuously create new modes at the cutoff scale as the wavelength of the previously present modes increases. This implies a violation of unitarity. If dark energy were a cosmological constant, then modes with wavelength equal to the cutoff scale at the present time would become classical at some time in the future, and the violation of unitarity would be visible in hypothetical future observations. To avoid this problem, we conclude that dark energy must be time-dependent.

Because of its deep implications for fundamental physics, we are eagerly awaiting new observational results that will shine more light on the issue of the time-dependence of dark energy.

The post Four reasons dark energy should evolve with time appeared first on CERN Courier.

Opinion Robert Brandenberger argues that the unchanging cosmological constant of the ΛCDM model is theoretically problematic. https://cerncourier.com/wp-content/uploads/2025/09/CCSepOct25_VIEW_DES.jpg
High-energy physics meets in Marseille https://cerncourier.com/a/high-energy-physics-meets-in-marseille/ Tue, 09 Sep 2025 08:18:00 +0000 https://cerncourier.com/?p=114472 The 2025 European Physical Society Conference on High Energy Physics took place in Marseille from 7 to 11 July.

The post High-energy physics meets in Marseille appeared first on CERN Courier.

EPS-HEP 2025

The 2025 European Physical Society Conference on High Energy Physics (EPS-HEP), held in Marseille from 7 to 11 July, took centre stage in this pivotal year for high-energy physics as the community prepares to make critical decisions on the next flagship collider at CERN to enable major leaps at the high-precision and high-energy frontiers. The meeting showcased the remarkable creativity and innovation in both experiment and theory, driving progress across all scales of fundamental physics. It also highlighted the growing interplay between particle, nuclear, astroparticle physics and cosmology.

Advancing the field relies on the ability to design, build and operate increasingly complex instruments that push technological boundaries. This requires sustained investment from funding agencies, laboratories, universities and the broader community to support careers and recognise leadership in detectors, software and computing. Such support must extend across construction, commissioning and operation, and include strategic and basic R&D. The implementation of detector R&D (DRD) collaborations, as outlined in the 2021 ECFA roadmap, is an important step in this direction.

Physics thrives on precision, and a prime example this year came from the Muon g–2 collaboration at Fermilab, which released its final result combining all six data runs, achieving an impressive 127 parts-per-billion precision on the muon anomalous magnetic moment (CERN Courier July/August 2025 p7). The result agrees with the latest lattice–QCD predictions for the leading hadronic–vacuum-polarisation term, albeit with a theoretical uncertainty four times larger than the experimental one. Continued improvements to lattice QCD and to the traditional dispersion-relation method based on low-energy e+e– and τ data are expected in the coming years.

Runaway success

After the remarkable success of LHC Run 2, Run 3 has now surpassed it in delivered luminosity. Using the full available Run-2 and Run-3 datasets, ATLAS reported 3.4σ evidence for the rare Higgs decay to a muon pair, and a new result on the quantum-loop mediated decay into a Z boson and a photon, now more consistent with the Standard Model prediction than the earlier ATLAS and CMS Run-2 combination (see “Mapping rare Higgs-boson decays”). ATLAS also presented an updated study of Higgs pair production with decays into two b-quarks and two photons, whose sensitivity was increased beyond statistical gains thanks to improved reconstruction and analysis. CMS released a new Run-2 search for Higgs decays to charm quarks in events produced with a top-quark pair, reaching sensitivity comparable to the traditional weak-boson-associated production. Both collaborations also released new combinations of nearly all their Higgs analyses from Run 2, providing a wide set of measurements. While ATLAS sees overall agreement with predictions, CMS observes some non-significant tensions.

Advancing the field relies on the ability to design, build and operate increasingly complex instruments that push technological boundaries

A highlight in top-quark physics this year was the observation by CMS of an excess in top-pair production near threshold, confirmed at the conference by ATLAS (see “ATLAS confirms top–antitop excess”). The physics of the strong interaction predicts highly compact, colour-singlet, quasi-bound pseudoscalar top–antitop state effects arising from gluon exchange. Unlike bottomonium or charmonium, no proper bound state is formed due to the rapid weak decay of the top quark (see “Memories of quarkonia”). This “toponium” effect can be modelled with the use of non-relativistic QCD. Both experiments observed a cross section about 100 times smaller than for inclusive top-quark pair production. The subtle signal and complex threshold modelling make the analysis challenging, and warrant further theoretical and experimental investigation.

A major outcome of LHC Run 2 is the lack of compelling evidence for physics beyond the Standard Model. In Run 3, ATLAS and CMS continue their searches, aided by improved triggers, reconstruction and analysis techniques, as well as a dataset more than twice as large, enabling a more sensitive exploration of rare or suppressed signals. The experiments are also revisiting excesses seen in Run 2, for example, a CMS hint of a new resonance decaying into a Higgs and another scalar was not confirmed by a new ATLAS analysis including Run-3 data.

Hadron spectroscopy has seen a renaissance since Belle’s 2003 discovery of the exotic X(3872), with landmark advances at the LHC, particularly by LHCb. CMS recently reported three new four-charm-quark states decaying into J/ψ pairs between 6.6 and 7.1 GeV. Spin-parity analysis suggests they are tightly bound tetraquarks rather than loosely bound molecular states (CERN Courier November/December 2024 p33).

Rare observations

Flavour physics continues to test the Standard Model with high sensitivity. Belle-II and LHCb reported new CP violation measurements in the charm sector, confirming the expected small effects. LHCb observed, for the first time, CP violation in the baryon sector via Λb decays, a milestone in CP violation history. NA62 at CERN’s SPS achieved the first observation of the ultra-rare kaon decay K+ → π+νν̄ with a branching ratio of 1.3 × 10⁻¹⁰, consistent with the Standard Model prediction. MEG-II at PSI set the most stringent limit to date on the lepton-flavour-violating decay μ → eγ, excluding branching fractions above 1.5 × 10⁻¹³. Both experiments continue data taking until 2026.

Heavy-ion collisions at the LHC provide a rich environment to study the quark–gluon plasma, a hot, dense state of deconfined quarks and gluons, forming a collective medium that flows as a relativistic fluid with an exceptionally low viscosity-to-entropy ratio. Flow in lead–lead collisions, quantified by Fourier harmonics of spatial momentum anisotropies, is well described by hydrodynamic models for light hadrons. Hadrons containing heavier charm and bottom quarks show weaker collectivity, likely due to longer thermalisation times, while baryons exhibit stronger flow than mesons due to quark coalescence. ALICE reported the first LHC measurement of charm–baryon flow, consistent with these effects.
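
For reference, the flow coefficients mentioned here are defined through the standard Fourier expansion of the azimuthal particle distribution (a textbook definition rather than anything specific to these results):

$$
\frac{\mathrm{d}N}{\mathrm{d}\varphi} \;\propto\; 1 + 2\sum_{n=1}^{\infty} v_{n}\cos\!\big[n\,(\varphi - \Psi_{n})\big],
$$

where $v_2$ is the elliptic-flow coefficient and $\Psi_n$ are the symmetry-plane angles of the collision.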

Spin-parity analysis suggests the states are tightly bound tetraquarks

Neutrino physics has made major strides since oscillations were confirmed 27 years ago, with flavour mixing parameters now known to a few percent.  Crucial questions still remain: are neutrinos their own antiparticles (Majorana fermions)? What is the mass ordering – normal or inverted? What is the absolute mass scale and how is it generated? Does CP violation occur? What are the properties of the right-handed neutrinos? These and other questions have wide-ranging implications for particle physics, astrophysics and cosmology.

Neutrinoless double-beta decay, if observed, would confirm that neutrinos are Majorana particles. Experiments using xenon and germanium are beginning to constrain the inverted mass ordering, which predicts higher decay rates. Recent combined data from the long-baseline experiments T2K and NOvA show no clear preference for either ordering, but exclude vanishing CP violation at over 3σ in the inverted scenario. The KM3NeT detector in the Mediterranean, with its ORCA and ARCA components, has delivered its first competitive oscillation results, and detected a striking ~220 PeV muon neutrino, possibly from a blazar (CERN Courier March/April 2025 p7). The next-generation large-scale neutrino experiments JUNO (China), Hyper-Kamiokande (Japan) and LBNF/DUNE (USA) are progressing in construction, with data-taking expected to begin in 2025, 2028 and 2031, respectively. LBNF/DUNE is best positioned to determine the neutrino mass ordering, while Hyper-Kamiokande will be the most sensitive to CP violation. All three will also search for proton decay, a possible messenger of grand unification.

There is compelling evidence for dark matter from gravitational effects across cosmic times and scales, as well as indications that it is of particle origin. Its possible forms span a vast mass range, up to the ~100 TeV unitarity limit for a thermal relic, and may involve a complex, structured “dark sector”. The wide complementarity among the search strategies gives the field a unifying character. Direct detection experiments looking for tiny, elastic nuclear recoils, such as XENONnT (Italy), LZ (USA) and PandaX-4T (China), have set world-leading constraints on weakly interacting massive particles. XENONnT and PandaX-4T have also reported first signals from boron-8 solar neutrinos, part of the so-called “neutrino fog” that will challenge future searches. Axions, introduced theoretically to suppress CP violation in strong interactions, could be viable dark-matter candidates. They would be produced in the early universe with enormous number density, behaving, on galactic scales, as a classical, nonrelativistic, coherently oscillating bosonic field, effectively equivalent to cold dark matter. Axions can be detected via their conversion into photons in strong magnetic fields. Experiments using microwave cavities have begun to probe the relevant μeV mass range of relic QCD axions, but the detection becomes harder at higher masses. New concepts, using dielectric disks or wire-based plasmonic resonance, are under development to overcome these challenges.
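
The link between the axion mass range and the microwave cavities mentioned above is simply the resonance condition ν = mac²/h; the masses below are illustrative only:

```python
# A haloscope cavity resonates when its frequency matches the axion mass,
# nu = m_a c^2 / h. Illustrative masses, not specific experimental targets.
H_PLANCK_EV_S = 4.136e-15            # Planck constant in eV*s
for m_a_ev in (1e-6, 4e-6, 40e-6):   # 1, 4 and 40 micro-eV
    nu_ghz = m_a_ev / H_PLANCK_EV_S / 1e9
    print(f"m_a = {m_a_ev*1e6:5.1f} ueV  ->  nu ~ {nu_ghz:6.2f} GHz")
# Higher masses mean higher frequencies and smaller cavities, which is why new
# concepts such as dielectric disks are pursued beyond a few tens of micro-eV.
```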

Cosmological constraints

Cosmology featured prominently at EPS-HEP, driven by new results from the analysis of DESI DR2 baryon acoustic oscillation (BAO) data, which include 14 million redshifts. Like the cosmic microwave background (CMB), BAO also provides a “standard ruler” to trace the universe’s expansion history – much like supernovae (SNe) do as standard candles. Cosmological surveys are typically interpreted within the ΛCDM model, a six-parameter framework that remarkably accounts for 13.8 billion years of cosmic evolution, from inflation and structure formation to today’s energy content, despite offering no insight into the nature of dark matter, dark energy or the inflationary mechanism. Recent BAO data, when combined with CMB and SNe surveys, show a preference for a form of dark energy that weakens over time. Tensions also persist in the Hubble expansion rate derived from early-universe (CMB and BAO) and late-universe (SN type-Ia) measurements (CERN Courier March/April 2025 p28). However, anchoring SN Ia distances in redshift remains challenging, and further work is needed before drawing firm conclusions.

Cosmological fits also constrain the sum of neutrino masses. The latest CMB and BAO-based results within ΛCDM appear inconsistent with the lower limit implied by oscillation data for inverted mass ordering. However, firm conclusions are premature, as the result may reflect limitations in ΛCDM itself. Upcoming surveys from the Euclid satellite and the Vera C. Rubin Observatory (LSST) are expected to significantly improve cosmological constraints.

Cristinel Diaconu and Thomas Strebler, chairs of the local organising committee, together with all committee members and many volunteers, succeeded in delivering a flawlessly organised and engaging conference in the beautiful setting of the Palais du Pharo overlooking Marseille’s old port. They closed the event with a phrase in memory of British cyclist Tom Simpson: “There is no mountain too high.”

The post High-energy physics meets in Marseille appeared first on CERN Courier.

Meeting report The 2025 European Physical Society Conference on High Energy Physics took place in Marseille from 7 to 11 July. https://cerncourier.com/wp-content/uploads/2025/09/CCSepOct25_FN_EPS-HEP_feature.jpg
Probing the dark side from Kingston https://cerncourier.com/a/probing-the-dark-side-from-kingston/ Tue, 09 Sep 2025 08:17:58 +0000 https://cerncourier.com/?p=114491 The international conference Dark Matter and Stars: Multi-Messenger Probes of Dark Matter and Modified Gravity was held at Queen’s University in Kingston, Ontario, Canada, from 14 to 16 July.

The post Probing the dark side from Kingston appeared first on CERN Courier.

The nature of dark matter remains one of the greatest unresolved questions in modern physics. While ground-based experiments persist in their quest for direct detection, astrophysical observations and multi-messenger studies have emerged as powerful complementary tools for constraining its properties. Stars across the Milky Way and beyond – including neutron stars, white dwarfs, red giants and main-sequence stars – are increasingly recognised as natural laboratories for probing dark matter through its interactions with stellar interiors, notably via neutron-star cooling, asteroseismic diagnostics of solar oscillations and gravitational-wave emission.

The international conference Dark Matter and Stars: Multi-Messenger Probes of Dark Matter and Modified Gravity (ICDMS) was held at Queen’s University in Kingston, Ontario, Canada, from 14 to 16 July. The meeting brought together around 70 researchers from across astrophysics, cosmology, particle physics and gravitational theory. The goal was to foster interdisciplinary dialogue on how observations of stellar systems, gravitational waves and cosmological data can help shed light on the dark sector. The conference was specifically dedicated to exploring how astrophysical and cosmological systems can be used to probe the nature of dark matter.

The first day centred on compact objects as natural laboratories for dark-matter physics. Giorgio Busoni (University of Adelaide) opened with a comprehensive overview of recent theoretical progress on dark-matter accumulation in neutron stars and white dwarfs, highlighting refinements in the treatment of relativistic effects, optical depth, Fermi degeneracy and light mediators – all of which have shaped the field in recent years. Melissa Diamond (Queen’s University) followed with a striking talk – with a nod to Dr. Strangelove – exploring how accumulated dark matter might trigger thermonuclear instability in white dwarfs. Sandra Robles (Fermilab) shifted the perspective from neutron stars to white dwarfs, showing how they constrain dark-matter properties. One of the authors highlighted postmerger gravitational-wave observations as a tool to distinguish neutron stars from low-mass black holes, offering a promising avenue for probing exotic remnants potentially linked to dark matter. Axions featured prominently throughout the day, alongside extensive discussions of the different ways in which dark matter affects neutron stars and their mergers.

ICDMS continues to strengthen the interface between fundamental physics and astrophysical observations

On the second day, attention turned to the broader stellar population and planetary systems as indirect detectors. Isabelle John (University of Turin) questioned whether the anomalously long lifetimes of stars near the galactic centre might be explained by dark-matter accumulation. Other talks revisited stellar systems – white dwarfs, red giants and even speculative dark stars – with a focus on modelling dark-matter transport and its effects on stellar heat flow. Complementary detection strategies also took the stage, including neutrino emission, stochastic gravitational waves and gravitational lensing, all offering potential access to otherwise elusive energy scales and interaction strengths.

The final day shifted toward galactic structure and the increasingly close interplay between theory and observation. Lina Necib (MIT) shared stellar kinematics data used to map the Milky Way’s dark-matter distribution, while other speakers examined the reliability of stellar stream analyses and subtle anomalies in galactic rotation curves. The connection to terrestrial experiments grew stronger, with talks tying dark matter to underground detectors, atomic-precision tools and cosmological observables such as the Lyman-alpha forest and baryon acoustic oscillations. Early-career researchers contributed actively across all sessions, underscoring the field’s growing vitality and introducing a fresh influx of ideas that is expanding its scope.

The ICDMS series is now in its third edition. It began in 2018 at Instituto Superior Técnico, Portugal, and is poised to become an annual event. The next conference will take place at the University of Southampton, UK, in 2026, followed by the Massachusetts Institute of Technology in the US in 2027. With increasing participation and growing international interest, the ICDMS series continues to strengthen the interface between fundamental physics and astrophysical observations in the quest to understand the nature of dark matter.

The post Probing the dark side from Kingston appeared first on CERN Courier.

Meeting report The international conference Dark Matter and Stars: Multi-Messenger Probes of Dark Matter and Modified Gravity was held at Queen’s University in Kingston, Ontario, Canada, from 14 to 16 July. https://cerncourier.com/wp-content/uploads/2025/09/CCSepOct25_FN_EPS-ICDMS.jpg
Loopsummit returns to Cadenabbia https://cerncourier.com/a/loopsummit-returns-to-cadenabbia/ Tue, 09 Sep 2025 08:17:29 +0000 https://cerncourier.com/?p=114494 Loopsummit-2 2025 was held on the banks of Lake Como from 20 to 25 July.

The post Loopsummit returns to Cadenabbia appeared first on CERN Courier.

Measurements at high-energy colliders such as the LHC, the Electron–Ion Collider (EIC) and the FCC will be performed at the highest luminosities. The analysis of the high-precision data taken there will require a significant increase in the accuracy of theoretical predictions. To achieve this, new mathematical and algorithmic technologies are needed. Developments in precision Standard Model calculations have been rapid since experts last met for Loopsummit-1 at Cadenabbia on the banks of Lake Como in 2021 (CERN Courier November/December 2021 p24). Loopsummit-2, held in the same location from 20 to 25 July this year, summarised this formidable body of work.

Just as higher experimental precision relies on new technologies, new theory results require better algorithms, both on the mathematical and computer-algebraic side, and new techniques in quantum field theory. The central software package for perturbative calculations, FORM, now has a new major release, FORM 5. Progress has also been achieved in integration-by-parts reduction, which is of central importance for reducing the vast numbers of Feynman integrals appearing in such calculations to a much smaller set of master integrals. New developments were also reported in analytic and numerical Feynman-diagram integration using Mellin–Barnes techniques, new compact function classes such as Feynman–Fox integrals, and modern summation technologies and methods to establish and solve gigantic recursions and differential equations of degree 4000 and order 100. The latest results on elliptic integrals and progress on the correct treatment of the γ5-problem in real dimensions were also presented. These technologies allow the calculation of processes up to five loops and in the presence of more scales at two- and three-loop order. New results for single-scale quantities like quark condensates and the ρ-parameter were also reported.
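
As a reminder of the kind of identity on which the Mellin–Barnes techniques mentioned above rest (a textbook formula, not a new result reported at the meeting):

$$
\frac{1}{(A+B)^{\lambda}} \;=\; \frac{1}{\Gamma(\lambda)}\,\frac{1}{2\pi i}\int_{-i\infty}^{+i\infty}\mathrm{d}z\;\Gamma(-z)\,\Gamma(\lambda+z)\,\frac{B^{z}}{A^{\lambda+z}},
$$

with the integration contour separating the poles of $\Gamma(-z)$ from those of $\Gamma(\lambda+z)$; repeated application splits massive propagators into pieces that can be integrated analytically.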

In the loop

Measurements at future colliders will depend on the precise knowledge of parton distribution functions, the strong coupling constant αs(MZ) and the heavy-quark masses. Experience suggests that going from one loop order to the next in the massless and massive cases takes 15 years or more, as new technologies must be developed. By now, most of the space-like four-loop splitting functions governing scaling violations are known to good precision, as well as new results for the three-loop time-like splitting functions. The massive three-loop Wilson coefficients for deep-inelastic scattering are now complete, requiring far larger and different integral spaces compared with the massless case. Related to this are the Wilson coefficients of semi-inclusive deep-inelastic scattering at next-to-next-to-leading order (NNLO), which will be important to tag individual flavours at the EIC. For the αs(MZ) measurement at low-scale processes, the correct treatment of renormalon contributions is necessary. Collisions at high energies also allow the detailed study of scattering processes in the forward region of QCD. Other long-term projects concern NNLO corrections for jet production at e+e– and hadron colliders, and other related processes like Higgs-boson and top-quark production, in some cases with a large number of partons in the final state. This also includes the use of effective Lagrangians.

Many more steps lie ahead if we are to match the precision of measurements at high-luminosity colliders

The complete calculation of difficult processes at NNLO and beyond always drives the development of term-reduction algorithms and analytic or numerical integration technologies. Many more steps lie ahead in the coming years if we are to match the precision of measurements at high-luminosity colliders. Some of these will doubtless be reported at Loopsummit-3 in summer 2027.

The post Loopsummit returns to Cadenabbia appeared first on CERN Courier.

Meeting report Loopsummit-2 2025 was held on the banks of Lake Como from 20 to 25 July. https://cerncourier.com/wp-content/uploads/2025/09/CCSepOct25_FN_loopsummit.jpg
Geneva witnesses astroparticle boom https://cerncourier.com/a/geneva-witnesses-astroparticle-boom/ Tue, 09 Sep 2025 08:17:11 +0000 https://cerncourier.com/?p=114482 The 39th edition of the International Cosmic Ray Conference was held in Geneva from 15 to 24 July.

The post Geneva witnesses astroparticle boom appeared first on CERN Courier.

ICRC 2025

The 39th edition of the International Cosmic Ray Conference (ICRC), a key biennial conference in astroparticle physics, was held in Geneva from 15 to 24 July. Plenary talks covered solar, galactic and ultra-high-energy cosmic rays. A strong multi-messenger perspective combined measurements of charged particles, neutrinos, gamma rays and gravitational waves. Talks were informed by limits from the LHC and elsewhere on dark-matter particles and primordial black holes. The bundle of constraints has improved very significantly over the past few years, allowing more meaningful and stringent tests.

Solar modelling

The Sun and its heliosphere, where the solar wind offers insights into magnetic reconnection, shock acceleration and diffusion, are now studied in situ thanks to the Solar Orbiter and Parker Solar Probe spacecraft. Long-term PAMELA and AMS data, spanning over an 11-year solar cycle, allow precise modelling of solar modulation of cosmic-ray fluxes below a few tens of GeV. AMS solar proton data show a 27-day periodicity up to 20 GV, caused by corotating interaction regions where fast solar wind overtakes slower wind, creating shocks. AMS has recorded 46 solar energetic particle (SEP) events, the most extreme reaching a few GV, from magnetic-reconnection flares or fast coronal mass ejections. While isotope data once suggested such extreme events occur every 1500 years, Kepler observations of Sun-like stars indicate they may happen every 100 years, releasing more than 10³⁴ erg, often during weak solar minima, and linked to intense X-ray flares.

The spectrum of galactic cosmic rays, studied with high-precision measurements from satellites (DAMPE) and ISS-based experiments (AMS-02, CALET, ISS-CREAM), is not a single power law but shows breaks and slope changes, signatures of diffusion or source effects. A hardening at about 500 GV, common to all primaries, and a softening at 10 TV, are observed in proton and He spectra by all experiments – and for the first time also in DAMPE’s O and C. As the hardening appears in the primary spectra at the same rigidity – a quantity that scales with charge, not mass – as in the secondary-to-primary ratios, the breaks are attributed to propagation in the galaxy and not to source-related effects. This is supported by secondary (Li, Be, B) spectra with breaks about twice as strong as primaries (He, C, O). A second hardening at 150 TV was reported by ISS-CREAM (p) and DAMPE (p + He) for the first time, broadly consistent – within large hadronic-model and statistical uncertainties – with indirect ground-based results from GRAPES and LHAASO.
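
Spectral breaks of this kind are commonly described by a smoothly broken power law; a minimal sketch with illustrative parameters (not a fit to any of the datasets mentioned here) is:

```python
# Smoothly broken power law in rigidity R: below the break the flux falls as
# R^-gamma1, above it as R^-gamma2, with s controlling the sharpness of the break.
def broken_power_law(R, norm=1.0, R_break=500.0, gamma1=2.85, gamma2=2.7, s=5.0):
    return norm * R**(-gamma1) * (1 + (R / R_break)**s)**((gamma1 - gamma2) / s)

for rigidity in (100.0, 500.0, 5000.0):   # GV, illustrative values
    print(f"R = {rigidity:7.1f} GV  ->  flux (arb. units) = {broken_power_law(rigidity):.3e}")
```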

A strong multi-messenger perspective combined measurements of charged particles, neutrinos, gamma rays and gravitational waves

Ratios of secondary over primary species versus rigidity R (momentum per unit charge) probe the ratio of the galactic halo size H to the energy-dependent diffusion coefficient D(R), and so measure the “grammage” of material through which cosmic rays propagate. Unstable/stable secondary isotope ratios probe the escape times of cosmic rays from the halo (H²/D(R)), so from both measurements H and D(R) can be derived. The flattening suggested by the highest-energy point, at 10 to 12 GeV/nucleon, of the ¹⁰Be/⁹Be ratio as a function of energy hints at a halo possibly extending beyond 5 kpc, larger than previously believed, to be tested by HELIX. AMS-02 spectra of single elements will soon allow separation of the primary and secondary fractions for each nucleus, also based on spallation cross-sections. Anomalies remain, such as a flattening at ~7 TeV/nucleon in Li/C and B/C, possibly indicating reacceleration or source grammage. AMS-02’s ⁷Li/⁶Li ratio disagrees with pure secondary models, but cross-section uncertainties preclude firm conclusions on a possible Li primary component, which would be produced by a new population of sources.

The muon puzzle

The dependence of ground-based cosmic-ray measurements on hadronic models has been widely discussed by Boyd and Pierog, highlighting the need for more measurements at CERN, such as the recent proton–oxygen run being analysed by LHCf. The EPOS–LHC model, based on the core–corona approach, shows reduced muon discrepancies, producing more muons and deeper shower maxima (+20 g/cm²) than earlier models, which corresponds to a heavier inferred composition. This clarifies the muon puzzle raised by the Pierre Auger Observatory a few years ago: a larger muon content in atmospheric showers than predicted by simulations. A fork-like structure remains in the knee region of the proton spectrum, where the new measurements presented by LHAASO are in agreement with IceTop/IceCube, and could lead to a higher content of protons beyond the knee than hinted at by KASCADE and the first results of GRAPES. Despite the higher proton fluxes, a dominance of He above the knee is observed, which requires a special kind of nearby source to be hypothesised.

Multi-messenger approaches

Gamma-ray and neutrino astrophysics were widely discussed at the conference, highlighting the relevance of multi-messenger approaches. LHAASO produced impressive results on UHE astrophysics, revealing a new class of pevatrons: microquasars alongside young massive clusters, pulsar wind nebulae (PWNe) and supernova remnants.

Microquasars are gamma-ray binaries containing a stellar-mass black hole that drives relativistic jets while accreting matter from its companion star. Outstanding examples include Cyg X-3, a potential PeV microquasar, from which the flux of PeV photons is 5–10 times higher than from the rest of the Cygnus bubble.

Five other microquasars are observed beyond 100 TeV: SS 433, V4641 Sgr, GRS 1915 + 105, MAXI J1820 + 070 and Cygnus X-1. SS 433 is a microquasar with two gamma-ray emitting jets nearly perpendicular to our line of sight, terminating about 40 pc from the black hole (BH), as identified by HESS and LHAASO at energies beyond 10 TeV. Due to the Klein–Nishina effect, the inverse Compton flux above ~10 TeV is gradually suppressed, and an additional spectral component is needed to explain the flux around 100 TeV.

Gamma-ray and neutrino astrophysics were widely discussed at the conference

Beyond 100 TeV, LHAASO also identifies a source coincident with a giant molecular cloud; this component may be due to protons accelerated close to the BH or in the lobes. These results demonstrate the ability to resolve the morphology of extended galactic sources. Similarly, ALMA has discovered two hotspots, both at 0.28° (about 50 pc) from GRS 1915 + 105 in opposite directions from its BH. These may be interpreted as two lobes, or the extended nature of the LHAASO source may instead be due to the spatial distribution of the surrounding gas, if the emission from GRS 1915 + 105 is dominated by hadronic processes.

Further discussions addressed pulsar halos and PWNe as unique laboratories for studying the diffusion of electrons, as well as mysterious, as-yet-unidentified pevatrons such as MGRO J1908+06, which is coincident with both a supernova remnant (the favoured association) and a pulsar. One of these sources may finally reveal an excess of neutrinos in KM3NeT or IceCube, directly proving their nature as cosmic-ray accelerators.

The identification and subtraction of source fluxes in the galactic plane is also important for IceCube’s measurement of the galactic-plane neutrino flux. This currently assumes a fixed spectral index of E^–2.7, while authors such as Grasso et al. presented a spectrum becoming as hard as E^–2.4 closer to the galactic centre. Precise measurements of gamma-ray source fluxes and of the diffuse emission from galactic cosmic rays interacting with interstellar matter lead to better constraints on neutrino observations and on cosmic-ray fluxes around the knee.

Cosmogenic origins

KM3NeT presented a neutrino with an energy well beyond the diffuse cosmic neutrino flux measured by IceCube, which does not extend beyond 10 PeV (CERN Courier March/April 2025 p7). Its origin was widely discussed at the conference. The large error on its estimated energy – 220 PeV, with a 1σ confidence interval of 110 to 790 PeV – nevertheless makes it compatible with the flux observed by IceCube, for which a break at 30 TeV was first hypothesised at this conference. If events of this kind are confirmed, they could have transient or dark-matter origins, but a cosmogenic origin is improbable given the IceCube and Pierre Auger limits on the cosmogenic neutrino flux.

Quantum gravity beyond frameworks https://cerncourier.com/a/quantum-gravity-beyond-frameworks/ Tue, 09 Sep 2025 08:17:00 +0000 https://cerncourier.com/?p=114487 The third Quantum Gravity conference took place at Penn State University from 21 to 25 July 2025, bringing together researchers across the quantum gravity landscape.

Matvej Bronštejn

Reconciling general relativity and quantum mechanics remains a central problem in fundamental physics. Though successful in their own domains, the two theories resist unification and offer incompatible views of space, time and matter. The field of quantum gravity, which has sought to resolve this tension for nearly a century, is still plagued by conceptual challenges, limited experimental guidance and a crowded landscape of competing approaches. Now in its third instalment, the “Quantum Gravity” conference series addresses this fragmentation by promoting open dialogue across communities. Organised under the auspices of the International Society for Quantum Gravity (ISQG), the 2025 edition took place from 21 to 25 July at Penn State University. The event gathered researchers working across a variety of frameworks – from random geometry and loop quantum gravity to string theory, holography and quantum information. At its core was the recognition that, regardless of specific research lines or affiliations, what matters is solving the puzzle.

One step to get there requires understanding the origin of dark energy, which drives the accelerated expansion of the universe and is typically modelled by a cosmological constant Λ. Yasaman K Yazdi (Dublin Institute for Advanced Studies) presented a case for causal set theory, reducing spacetime to a discrete collection of events, partially ordered to capture cause–effect relationships. In this context, like a quantum particle’s position and momentum, the cosmological constant and the spacetime volume are conjugate variables. This leads to the so-called “ever-present Λ” models, where fluctuations in the former scale as the inverse square root of the latter, decreasing over time but never vanishing. The intriguing agreement between the predicted size of these fluctuations and the observed amount of dark energy, while far from resolving quantum cosmology, stands as a compelling motivation for pursuing the approach.

In the spirit of John Wheeler’s “it from bit” proposal, Jakub Mielczarek (Jagiellonian University) suggested that our universe may itself evolve by computing – or at least admit a description in terms of quantum information processing. In loop quantum gravity, space is built from granular graphs known as spin networks, which capture the quantum properties of geometry. Drawing on ideas from tensor networks and holography, Mielczarek proposed that these structures can be reinterpreted as quantum circuits, with their combinatorial patterns reflected in the logic of algorithms. This dictionary offers a natural route to simulating quantum geometry, and could help clarify quantum theories that, like general relativity, do not rely on a fixed background.

Quantum clues

What would a genuine quantum theory of spacetime achieve, though? According to Esteban Castro Ruiz (IQOQI), it may have to recognise that reference frames, which are idealised physical systems used to define spatio-temporal distances, must themselves be treated as quantum objects. In the framework of quantum reference frames, notions such as entanglement, localisation and superposition become observer-dependent. This leads to a perspective-neutral formulation of quantum mechanics, which may offer clues for describing physics when spacetime is not only dynamical, but quantum.

The conference’s inclusive vocation came through most clearly in the thematic discussion sessions, including one on the infamous black-hole information problem chaired by Steve Giddings (UC Santa Barbara). A straightforward reading of Stephen Hawking’s 1974 result suggests that black holes radiate, shrink and ultimately destroy information – a process that is incompatible with standard quantum mechanics. Any proposed resolution must face sharp trade-offs: allowing information to escape challenges locality, losing it breaks unitarity and storing it in long-lived remnants undermines theoretical control. Giddings described a mild violation of locality as the lesser evil, but the controversy is far from settled. Still, there is growing consensus that dissolving the paradox may require new physics to appear well before the Planck scale, where quantum-gravity effects are expected to dominate.

Among the few points of near-universal agreement in the quantum-gravity community has long been the virtual impossibility of detecting a graviton, the hypothetical quantum of the gravitational field. According to Igor Pikovski (Stockholm University), things may be less bleak than once thought. While the probability of seeing graviton-induced atomic transitions is negligible due to the weakness of gravity, the situation is different for massive systems. By cooling a macroscopic object close to absolute zero, Pikovski suggested, the effect could be amplified enough to be observed, with current interferometers simultaneously monitoring gravitational waves in the correct frequency window. Such a signal would not amount to definitive proof of gravity’s quantisation, just as the photoelectric effect could not definitively establish the existence of photons, nor would it single out a specific ultraviolet model. It could, however, constrain concrete predictions and put semiclassical theories under pressure.

Giulia Gubitosi (University of Naples Federico II) tackled phenomenology from a different angle, exploring possible deviations from special relativity in models where spacetime becomes non-commutative. There, coordinates are treated like quantum operators, leading to effects such as decoherence, modified particle speeds and soft departures from locality. Although such signals tend to be faint, they could be enhanced by high-energy astrophysical sources: observations of neutrinos associated with gamma-ray bursts are now starting to close in on these scenarios. Both talks reflected a broader cultural shift: quantum gravity, once the domain of pure theory, has become eager to engage with experiment.

Quantum Gravity 2025 offered a wide snapshot of a field still far from closure, yet increasingly shaped by common goals, the convergence of approaches and cross-pollination. As intended, no single framework took centre stage, with a dialogue-based format keeping focus on the central, pressing issue at hand: understanding the quantum nature of spacetime. With limited experimental guidance, open exchange remains key to clarifying assumptions and avoiding duplication of efforts. Building on previous editions, the meeting pointed toward a future where quantum-gravity researchers will recognise themselves as part of a single, coherent scientific community.

Ultra-peripheral physics in the ultraperiphery https://cerncourier.com/a/ultra-peripheral-physics-in-the-ultraperiphery/ Tue, 09 Sep 2025 08:16:44 +0000 https://cerncourier.com/?p=114500 In June 2025, physicists met at Saariselkä, Finland, to discuss recent progress in the field of ultra-peripheral collisions.

In June 2025, physicists met at Saariselkä, Finland, to discuss recent progress in the field of ultra-peripheral collisions (UPCs). All the major LHC experiments measure UPCs – events where two colliding nuclei miss each other, but nevertheless interact via the mediation of photons that can propagate long distances. In a case of life imitating science, almost 100 delegates propagated to a distant location in one of the most popular hiking destinations in northern Lapland to experience 24-hour daylight and discuss UPCs in Finnish saunas.

UPC studies have expanded significantly since the first UPC workshop in Mexico in December 2023. The opportunity to study scattering processes in a clean photon–nucleus environment at collider energies has inspired experimentalists to examine both inclusive and exclusive scattering processes, and to look for signals of collectivity and even the formation of quark–gluon plasma (QGP) in this unique environment.

For many years, experimental activity in UPCs was mainly focused on exclusive processes and QED phenomena, including photon–photon scattering. This year, fresh inclusive particle-production measurements gained significant attention, as did various signatures of QGP-like behaviour observed by different experiments at RHIC and at the LHC. The importance of having complementary experiments perform similar measurements was also highlighted. In particular, the ATLAS experiment joined the ongoing activities to measure exclusive vector-meson photoproduction, finding a cross section that disagrees with previous ALICE measurements by almost 50%. After long and detailed discussions, it was agreed that the different experimental groups need to work together closely to resolve this tension before the next UPC workshop.

Experimental and theoretical developments very effectively guide each other in the field of UPCs. This includes physics within and beyond the Standard Model (BSM), such as nuclear modifications to the partonic structure of protons and neutrons, gluon-saturation phenomena predicted by QCD (CERN Courier January/February 2025 p31), and precision tests for BSM physics in photon–photon collisions. The expanding activity in the field of UPCs, together with the construction of the Electron Ion Collider (EIC) at Brookhaven National Laboratory in the US, has also made it crucial to develop modern Monte Carlo event generators to the level where they can accurately describe various aspects of photon–photon and photon–nucleus scatterings.

As a photon collider, the LHC complements the EIC. While the centre-of-mass energy at the EIC will be lower, there is some overlap between the kinematic regions probed by these two very different collider projects thanks to the varying energy spectra of the photons. This allows the theoretical models needed for the EIC to be tested against UPC data, thereby reducing theoretical uncertainty on the predictions that guide the detector designs. This complementarity will enable precision studies of QCD phenomena and BSM physics in the 2030s.

Becoming T-shaped https://cerncourier.com/a/becoming-t-shaped/ Tue, 09 Sep 2025 08:16:23 +0000 https://cerncourier.com/?p=114550 For Heike Riel, IBM fellow and head of science and technology at IBM Research, successful careers in science are built not by choosing between academia and industry, but by moving fluidly between them.

Heike Riel

For Heike Riel, IBM fellow and head of science and technology at IBM Research, successful careers in science are built not by choosing between academia and industry, but by moving fluidly between them. With a background in semiconductor physics and a leadership role in one of the world’s top industrial research labs, Riel learnt to harness the skills she picked up in academia, and now uses them to build real-world applications. Today, IBM collaborates with academia and industry partners on projects ranging from quantum computing and cybersecurity to developing semiconductor chips for AI hardware.

“I chose semiconductor physics because I wanted to build devices, use electronics and understand photonics,” says Riel, who spent her academic years training to be an applied physicist. “There’s fundamental science to explore, but also something that can be used as a product to benefit society. That combination was very motivating.”

Hands-on mindset

For experimental physicists, this hands-on mindset is crucial. But experiments also require infrastructure that can be difficult to access in purely academic settings. “To do experiments, you need cleanrooms, fabrication tools and measurement systems,” explains Riel. “These resources are expensive and not always available in university labs.” During her first industry job at Hewlett-Packard in Palo Alto, Riel realised just how much she could achieve if given the right resources and support. “I felt like I was then the limit, not the lab,” she recalls.

This experience led Riel to proactively combine academic and industrial research in her PhD with IBM, where cutting-edge experiments are carried out towards a clear, purpose-driven goal within a structured research framework, leaving lots of leeway for creativity. “We explore scientific questions, but always with an application in mind,” says Riel. “Whether we’re improving a product or solving a practical problem, we aim to create knowledge and turn it into impact.”

Shifting gears

According to Riel, once you understand the foundations of fundamental physics, and feel as though you have learnt all the skills you can glean from it, then it’s time to consider shifting gears and expanding your skills with economics or business. In her role, understanding economic value and organisational dynamics is essential. But Riel advises against independently pursuing an MBA. “Studying economics or an MBA later is very doable,” she says. “In fact, your company might even financially support you. But going the other way – starting with economics and trying to pick up quantum physics later – is much harder.”

Riel sees university as a precious time to master complex subjects like quantum mechanics, relativity and statistical physics – topics that are difficult to revisit later in life. “It’s much easier to learn theoretical physics as a student than to go back to it later,” she says. “It builds something more important than just knowledge: it builds your tolerance for frustration, and your capacity for deep logical thinking. You become extremely analytical and much better at breaking down problems. That’s something every employer values.”

In demand

High-energy physicists are in high demand even in fields like consulting, says Riel. A high-achieving academic has a really good chance of being hired, as long as they present their job applications effectively. When scouring applications, recruiters look for specific keywords and transferable skills, so regardless of the depth or quality of your academic research, the way you present yourself really counts. Physics, Riel argues, teaches a kind of thinking that’s both analytical and resilient. An experimental physicist’s application can be tailored towards hands-on experience and tangible solutions to real-world problems; a theoretical physicist’s should demonstrate logical problem-solving and thinking outside the box. “The winning combination is having aspects of both,” says Riel.

On top of that, research in physics increases your “frustration tolerance”. Every physicist has faced failure at one point during their academic career. But their determination to persevere is what makes them resilient. Whether this is through constantly thinking on your feet, or coming up with new solutions to the same problems, this resilience is what can make a physicist’s application pierce through the others. “In physics, you face problems every day that don’t have easy answers, and you learn how to deal with that,” explains Riel. “That mindset is incredibly useful, whether you’re solving a semiconductor design problem or managing a business unit.”

Riel champions the idea of the “T-shaped person”: someone with deep expertise in one area (the vertical stroke of the T) and broad knowledge across fields (the horizontal bar of the T). “You start by going deep – becoming the go-to person for something,” says Riel. This deep knowledge builds your credibility in your desired field: you become the expert. But after that, you need to broaden your scope and understanding.

That breadth can include moving between fields, working on interdisciplinary projects, or applying physics in new domains. “A T-shaped person brings something unique to every conversation,” adds Riel. “You’re able to connect dots that others might not even see, and that’s where a lot of innovation happens.”

Adding the bar on the T means that you can move fluidly between different fields, including through academia and industry. For this reason, Riel believes that the divide between academia and industry is less rigid than people assume, especially in large research organisations like IBM. “We sit in that middle ground,” she explains. “We publish papers. We work with universities on fundamental problems. But we also push toward real-world solutions, products and economic value.”

The difficult part is making the leap from academia to industry. “You need the confidence to make the decision, to choose between working in academia or industry,” says Riel. “At some point in your PhD, your first post-doc, or maybe even your second, you need to start applying your practical skills to industry.” Companies like IBM offer internships, PhDs, research opportunities and temporary contracts for physicists all the way from masters students to high-level post-docs. These are ideal ways to get your foot in the door of a project, get work published, grow your network and garner some of those industry-focused practical skills, regardless of the stage you are at in your academic career. “You can learn from your colleagues about economy, business strategy and ethics on the job,” says Riel. “If your team can see you using your practical skills and engaging with the business, they will be eager to help you up-skill. This may mean supporting you through further study, whether it’s an online course, or later an MBA.”

Applied knowledge

Riel notes that academic research is often driven by curiosity and knowledge gain, while industrial research is shaped by application. “US funding is often tied to applications, and they are much stronger at converting research into tangible products, whereas in Europe there is still more of a divide between knowledge creation and the next step to turn this into products,” she says. “But personally, I find it most satisfying when I can apply what I learn to something meaningful.”

That applied focus is also cyclical, she says. “At IBM, projects to develop hardware often last five to seven years. Software development projects have a much faster turnaround. You start with an idea, you prove the concept, you innovate the path to solve the engineering challenges and eventually it becomes a product. And then you start again with something new.” This is different to most projects in academia, where a researcher contributes to a small part of a very long-term project. Regardless of the timeline of the project, the skills gained from academia are invaluable.

For early-career researchers, especially those in high-energy physics, Riel’s message is reassuring: “Your analytical training is more useful than you think. Whether you stay in academia, move to industry, or float between both, your skills are always relevant. Keep learning and embracing new technologies.”

The key, she says, is to stay flexible, curious and grounded in your foundations. “Build your depth, then your breadth. Don’t be afraid of crossing boundaries. That’s where the most exciting work happens.”

The history of heavy ions https://cerncourier.com/a/the-history-of-heavy-ions/ Tue, 09 Sep 2025 08:16:10 +0000 https://cerncourier.com/?p=114561 Across a career that accompanied the emergence of heavy-ion physics at CERN, Hans Joachim Specht was often a decisive voice in shaping the experimental agenda and the institutional landscape in Europe.

Across a career that accompanied the emergence of heavy-ion physics at CERN, Hans Joachim Specht was often a decisive voice in shaping the experimental agenda and the institutional landscape in Europe. Before he passed away last May, he and fellow editors Sanja Damjanovic (GSI), Volker Metag (University of Giessen) and Jürgen Schukraft (Yale University) finalised the manuscript for Scientist and Visionary – a new biographical work that offers both a retrospective on Specht’s wide-ranging scientific contributions and a snapshot of four decades of evolving research at CERN, GSI and beyond.

Precision and rigour

Specht began his career in nuclear physics under the mentorship of Heinz Maier-Leibnitz at the Technische Universität München. His early work was grounded in precision measurements and experimental rigour. Among his most celebrated early achievements were the discoveries of superheavy quasi-molecules and quasi-atoms, where electrons can be bound for short times to a pair of heavy ions, and nuclear-shape isomerism, where nuclei exhibit long-lived prolate or oblate deformations. These milestones significantly advanced the understanding of atomic and nuclear structure. Around 1979, he shifted focus, joining the emerging efforts at CERN to explore the new frontier of ultra-relativistic heavy-ion collisions, which was started five years earlier at Berkeley by the GSI-LBL collaboration. It was Bill Willis, one of CERN’s early advocates for high-energy nucleus–nucleus collisions, who helped draw Specht into this developing field. That move proved foundational for both Specht and CERN.

From the early 1980s through to 2010, Specht played leading roles in four CERN nuclear-collision experiments: R807/808 at the Intersecting Storage Rings, and HELIOS, CERES/NA45 and NA60 at the Super Proton Synchrotron (SPS). As the book describes, he was instrumental not only in shaping their scientific goals – namely, to search for the highest temperatures of the newly formed hot, dense QCD matter, exceeding the well-established Hagedorn limiting temperature for hadronic matter of roughly 160 MeV – but also in the design and execution of the detectors themselves. The overarching aim was to establish that quasi-thermalised gluon matter, and even quark–gluon matter, can be created at the SPS. At the Universität Heidelberg, he built a heavy-ion research group and became a key voice in securing German support for CERN’s heavy-ion programme.

As spokesperson of the HELIOS experiment from 1984 onwards, Specht gained recognition as a community leader. But it was CERES, his brainchild, that stood out for its bold concept: to look for thermal dileptons using a hadron-blind detector – a novel idea at the time, which introduced this concept into heavy-ion collision experiments. Despite considerable scepticism, CERES was approved in 1989 and built in under two years. Its results on sulphur–gold collisions became some of the most cited of the SPS era, offering strong evidence for thermal lepton-pair production, potentially from a quark–gluon plasma – a hot and deconfined state of QCD matter then hypothesised to exist at high temperatures and densities, such as in the early universe. Such high temperatures, above the Hagedorn limiting hadron temperature of 160 MeV, had not yet been experimentally demonstrated at LBNL’s Bevalac or Brookhaven’s Alternating Gradient Synchrotron.

Advising ALICE

In the early 1990s, while CERES was being upgraded for lead–gold runs, Specht co-led a European Committee for Future Accelerators working group that laid the groundwork for ALICE, the LHC’s dedicated heavy-ion experiment. His Heidelberg group formally joined ALICE in 1993. Even after becoming scientific director of GSI in 1992, Specht remained closely involved as an advisor.

Specht’s next major CERN project was NA60, which collided a range of nuclei in a fixed-target experiment at the SPS and pushed dilepton measurements to new levels of precision. The experiment achieved two breakthroughs. First, it measured a nearly perfect thermal spectrum, consistent with blackbody radiation at temperatures of 240 to 270 MeV – some 100 MeV above the Hagedorn limiting hadron temperature of 160 MeV. Second, it observed clear evidence of in-medium modification of the ρ meson, caused by meson collisions with nucleons and heavy baryon resonances, showing that the medium is not only hot but also has a high net baryon density. These results were widely seen as strong confirmation of the lattice-QCD-inspired quark–gluon plasma hypothesis. Many chapter authors, some of whom were direct collaborators, others long-time interpreters of heavy-ion signals, highlight the impact NA60 had on the field. Earlier claims, based on competing hadronic signals for deconfinement – such as strong collective hydrodynamic flow, J/ψ melting and quark recombination – were often also described by hadronic transport theory, without assuming deconfinement.

Hans Joachim Specht: Scientist and Visionary

Specht didn’t limit himself to fundamental research. As director of GSI, he oversaw Europe’s first clinical ion-beam cancer-therapy programme using carbon ions. The treatment of the first 450 patients at GSI was a breakthrough moment for medical physics and led to the creation of the Heidelberg Ion Therapy centre, the first hospital-based hadron-therapy centre in Europe. Specht later recalled the first successful treatment as one of the happiest moments of his career. In their essays, Jürgen Debus, Hartmut Eickhoff and Thomas Nilsson outline how Specht steered GSI’s mission into applied research without losing its core scientific momentum.

Specht was also deeply engaged in institutional planning, helping to shape the early stages of the Facility for Antiproton and Ion Research, a new facility to study heavy ion collisions, which is expected to start operations at GSI at the end of the decade. He also initiated plasma-physics programmes, and contributed to the development of detector technologies used far beyond CERN or GSI. In parallel, he held key roles in international science policy, including within the Nuclear Physics Collaboration Committee, as a founding board member of the European Centre for Theoretical Studies in Nuclear Physics in Trento, and at CERN as chair of the Proton Synchrotron and Synchro-Cyclotron Committee, and as a decade-long member of the Scientific Policy Committee.

The book doesn’t shy away from more unusual chapters either. In later years, Specht developed an interest in the neuroscience of music. Collaborating with Hans Günter Dosch and Peter Schneider, he explored how the brain processes musical structure – an example of his lifelong intellectual curiosity and openness to interdisciplinary thinking.

Importantly, Scientist and Visionary is not a hagiography. It includes a range of perspectives and technical details that will appeal to both physicists who lived through these developments and younger researchers unfamiliar with the history behind today’s infrastructure. At its best, the book serves as a reminder of how much experimental physics depends not just on ideas, but on leadership, timing and institutional navigation.

That being said, it is not a typical scientific biography. It’s more of a curated mosaic, constructed through personal reflections and contextual essays. Readers looking for deep technical analysis will find it in parts, especially in the sections on CERES and NA60, but its real value lies in how it tracks the development of large-scale science across different fields, from high-energy physics to medical applications and beyond.

For those interested in the history of CERN, the rise of heavy-ion physics, or the institutional evolution of European science, this is a valuable read. And for those who knew or worked with Hans Specht, it offers a fitting tribute – not through nostalgia, but through careful documentation of the many ways Hans shaped the physics and the institutions we now take for granted.

Two takes on the economics of big science https://cerncourier.com/a/two-takes-on-the-economics-of-big-science/ Tue, 09 Sep 2025 08:15:44 +0000 https://cerncourier.com/?p=114569 Economist from the University of Milan, Massimo Florio, reviews "Big Science, Innovation & Societal Contributions" and "The Economics of Big Science 2.0".

At the 2024 G7 conference on research infrastructure in Sardinia, participants were invited to think about the potential socio-economic impact of the Einstein Telescope. Most physicists would have no expectation that a deeper knowledge of gravitational waves will have any practical usage in the foreseeable future. What, then, will be the economic impact of building a gravitational-wave detector hundreds of metres underground in some abandoned mines? What will be the societal impact of several kilometres of lasers and mirrors?

Such questions are strategically important for the future of fundamental science, which is increasingly often big science. Two new books tackle its socio-economic impacts head on, though with quite different approaches, one more qualitative in its research, and the other more quantitative. What are the pros and cons of qualitative versus quantitative analysis in social sciences? Personally, as an economist, at a certain point I would tend to say show me the figures! But, admittedly, when assessing the socio-economic impact of large-scale research infrastructures, if good statistical data is not available, I would always prefer a fine-grained qualitative analysis to quantitative models based on insufficient data.

Big Science, Innovation & Societal Contributions, edited by Shantha Liyanage (CERN), Markus Nordberg (CERN) and Marilena Streit-Bianchi (vice president of ARSCIENCIA), takes the qualitative route – a journey into mostly uncharted territory, asking difficult questions about the socio-economic impact of large-scale research infrastructures.

Big Science, Innovation & Societal Contributions

Some figures about the book may be helpful: the three editors were able to collect 15 chapters, with about 100 figures and tables, to involve 34 authors, to list more than 700 references, and to cover a wide range of scientific fields, including particle physics, astrophysics, medicine and computer science. A cursory reading of the list of about 300 acronyms, from AAI (Architecture Adaptive Integrator) to ZEPLIN (ZonEd Proportional scintillation in Liquid Noble gas detector), would be a good test to see how many research infrastructures and collaborations you already know.

After introducing the LHC, a chapter on new accelerator technologies explores a remarkable array of applications of accelerator physics. To name a few: CERN’s R&D in superconductivity is being applied in nuclear fusion; the CLOUD experiment uses particle beams to model atmospheric processes relevant to climate change (CERN Courier January/February 2025 p5); and the ELISA linac is being used to date Australian rock art, helping determine whether it originates from the Pleistocene or Holocene epochs (CERN Courier March/April 2025 p10).

The authors go on to explore innovation with a straightforward six-step model: scanning, codification, abstraction, diffusion, absorption and impacting. This is a helpful compass to build a narrative. Other interesting issues discussed in this part of the book include governance mechanisms and leadership of large-scale scientific organisations, including in gravitational-wave astronomy. No chapter better illustrates the impact of science on human wellbeing than the survey of medical applications by Mitra Safavi-Naeini and co-authors, which covers three major domains of applications in medical physics: medical imaging with X-rays and PET; radiotherapy targeting cancer cells internally with radioactive drugs or externally using linacs; and more advanced but expensive particle-therapy treatments with beams of protons, helium ions and carbon ions. Personally, I would expect that some of these applications will be enhanced by artificial intelligence, which in turn will have an impact on science itself in terms of digital data interpretation and forecasting.

Sociological perspectives

The last part of the book takes a more sociological perspective, with discussions about cultural values, the social responsibility to make sure big data is open data, and social entrepreneurship. In his chapter on the social responsibility of big science, Steven Goldfarb stresses the importance of the role of big science for learning processes and cultural enhancement. This topic is particularly dear to me, as my previous work on the cost–benefit analysis of the LHC revealed that the value of human capital accumulation for early-stage researchers is among the biggest contributions to the machine’s return on investment.

I recommend Big Science, Innovation & Societal Contributions as a highly informative, non-technical and updated introduction to the landscape of big science, but I would suggest complementing it with another very recent book, The Economics of Big Science 2.0, edited by Johannes Gutleber and Panagiotis Charitos, both currently working at CERN. Charitos was also the co-editor of the volume’s predecessor, The Economics of Big Science, which focuses more on science policy, as well as public investment in science.

Why a “2.0” book? There is a shift of angle. The Economics of Big Science 2.0 builds upon the prior volume, but offers a more quantitative perspective on big science. Notably, it takes advantage of a larger share of contributions by economists, including myself as co-author of a chapter about the public’s perception of CERN.

The Economics of Big Science 2.0

It is worth clarifying that economics, as one domain within the broader paradigm of the social sciences, has its own rules of the game and style. The social sciences can be seen as an umbrella encompassing sociology, political science, anthropology, history, management and communication studies, linguistics, psychology and more. Within this family, the role of economics is to build quantitative models and to test them against statistical evidence, a practice also known as econometrics.

Here, the authors excel. The Economics of Big Science 2.0 offers a wide-ranging exploration of how large-scale research infrastructures generate socio-economic value, primarily driven by quantitative analysis. The authors deploy a diverse range of empirical methods, from cost–benefit analysis to econometric modelling, allowing them to assess the tangible effects of big science across multiple fields. There is a unique challenge for applied economics here, as big-science centres by definition do not come in large numbers; the authors, however, involve large numbers of stakeholders, allowing for a statistical analysis of impacts and the estimation of expected values, standard errors and confidence intervals.

Societal impact

The Economics of Big Science 2.0 examines the socio-economic impact of ESA’s space programmes, the local economic benefits from large-scale facilities and the efficiency benefits from open science. The book measures public attitudes toward and awareness of science within the context of CERN, offering insights into science’s broader societal impacts. It grounds its analyses in a series of focused case studies, including particle colliders such as the LHC and FCC, synchrotron light sources like ESRF and ALBA, and radio telescopes such as SARAO, illustrating the economic impacts of big science through a quantitative lens. In contrast to the more narrative and qualitative approach of Big Science, Innovation & Societal Contributions, The Economics of Big Science 2.0 distinguishes itself through a strong reliance on empirical data.

Ivan Todorov 1933–2025 https://cerncourier.com/a/ivan-todorov-1933-2025/ Tue, 09 Sep 2025 08:15:14 +0000 https://cerncourier.com/?p=114517 Ivan Todorov, theoretical physicist of outstanding academic achievements and a man of remarkable moral integrity, passed away on 14 February 2025.

Ivan Todorov, theoretical physicist of outstanding academic achievements and a man of remarkable moral integrity, passed away on 14 February in his hometown of Sofia. He is best known for his prominent works on the group-theoretical methods and the mathematical foundations of quantum field theory.

Ivan was born on 26 October 1933 into a family of literary scholars who played an active role in Bulgarian academic life. After graduating from the University of Sofia in 1956, he spent several years at JINR in Dubna and at IAS Princeton, before joining INRNE in Sofia. In 1974 he became a full member of the Bulgarian Academy of Sciences.

Ivan contributed substantially to the development of conformal quantum field theories in arbitrary dimensions. The classification and the complete description of the unitary representations of the conformal group have been collected in two well known and widely used monographs by him and his collaborators. Ivan’s research on constructive quantum field theories and the books devoted to the axiomatic approach have largely influenced modern developments in this area. His early scientific results related to the analytic properties of higher loop Feynman diagrams have also found important applications in perturbative quantum field theory.

The highly successful international conferences and schools organised in Bulgaria under Ivan’s guidance during the Cold War served as meeting grounds for leading Russian and East European theoretical physicists and their West European and American colleagues. They were crucial for the development of theoretical physics in Bulgaria.

Everybody who knew Ivan was impressed by his vast culture and acute intellectual curiosity. His profound and deep knowledge of modern mathematics allowed him to remain constantly in tune with new trends and ideas in theoretical physics. Ivan’s courteous and smiling way of discussing physics, always peppered with penetrating comments and suggestions, was inimitable. His passing is a great loss for theoretical physics, especially in Bulgaria, where he mentored a generation of researchers.

Jonathan L Rosner 1941–2025 https://cerncourier.com/a/jonathan-l-rosner-1941-2025/ Tue, 09 Sep 2025 08:14:28 +0000 https://cerncourier.com/?p=114512 Jonathan L Rosner, a distinguished theoretical physicist and professor emeritus at the University of Chicago, passed away on 24 May 2025.

Jon Rosner

Jonathan L Rosner, a distinguished theoretical physicist and professor emeritus at the University of Chicago, passed away on 24 May 2025. He made profound contributions to particle physics, particularly in quark dynamics and the Standard Model.

Born in New York City, Rosner grew up in Yonkers, NY. He earned his Bachelor of Arts in Physics from Swarthmore College in 1962 and completed his PhD at Princeton University in 1965 with Sam Treiman as his thesis advisor. His early academic appointments included positions at the University of Washington and Tel Aviv University. In 1969 he joined the faculty at the University of Minnesota, where he served until 1982. That year, he became a professor at the University of Chicago, where he remained a central figure in the Enrico Fermi Institute and the Department of Physics until his retirement in 2011.

Rosner’s research spanned a broad spectrum of topics in particle physics, with a focus on the properties and interactions of quarks and leptons in the Standard Model and beyond.

In a highly influential paper in 1969, he pointed out that the duality between hadronic s-channel scattering and t-channel exchanges could be understood graphically, in terms of quark worldlines. Approximately three months before the “November revolution”, i.e. the experimental discovery of charm–anticharm particles, together with the late Mary K Gaillard and Benjamin W Lee, Jon published a seminal paper predicting the properties of hadronic states containing charm quarks.

He made significant contributions to the study of mesons and baryons, exploring their spectra and decay processes. His work on quarkonium systems, particularly the charmonium and bottomonium states, provided critical insights into the strong force that binds quarks together. He also made masterful use of algebraic methods in predicting and analysing CP-violating observables.

In more recent years, Jon focused on exotic combinations of quarks and antiquarks, tetraquarks and pentaquarks. In 2017 he co-authored a Physical Review Letters paper that provided the first robust prediction of a bbūd̄ tetraquark that would be stable under the strong interaction (CERN Courier November/December 2024 p33).

What truly set Jon apart was his rare ability to seamlessly integrate theoretical acumen with practical experimental engagement. While primarily a theoretician, he held a deep appreciation for experimental data and actively participated in the experimental endeavour. A prime example of this was his long-standing involvement with the CLEO collaboration at Cornell University.

He also collaborated on studies related to the detection of cosmic-ray air showers and contributed to the development of prototype systems for detecting radio pulses associated with these high-energy events. His interdisciplinary approach bridged theoretical predictions with experimental observations, enhancing the coherence between theory and practice in high-energy physics.

Unusually for a theorist, Jon was a high-level expert in electronics, an expertise rooted in his deep, lifelong interest in amateur short-wave radio. As with everything else, he pursued it very thoroughly, from physics analysis to travelling to solar eclipses to take advantage of the increased propagation range of electromagnetic waves caused by changes in the ionosphere. Rosner was also deeply committed to public service within the scientific community. He served as chair of the Division of Particles and Fields of the American Physical Society in 2013, during which time he played a central role in organising the “Snowmass on the Mississippi” conference. This event was an essential part of the long-term strategic planning for the US high-energy physics programme. His leadership and vision were widely recognised and appreciated by his peers.

Throughout his career, Rosner received numerous accolades. He was a fellow of the American Physical Society and was awarded fellowships from the Alfred P. Sloan Foundation and the John Simon Guggenheim Memorial Foundation. His publication record includes more than 500 theoretical papers, reflecting his prolific and highly impactful career in physics. He is survived by his wife, Joy, their two children, Hannah and Benjamin, and a granddaughter, Sadie.

César Gómez 1954–2025 https://cerncourier.com/a/cesar-gomez-1954-2025/ Tue, 09 Sep 2025 08:13:36 +0000 https://cerncourier.com/?p=114508 César Gómez, whose deep contributions to gauge theory and quantum gravity were matched by his scientific leadership, passed away on 7 April 2025.

César Gómez, whose deep contributions to gauge theory and quantum gravity were matched by his scientific leadership, passed away on 7 April 2025 after a short fight against illness, leaving his friends and colleagues with a deep sense of loss.

César gained his PhD in 1981 from Universidad de Salamanca, where he became professor after working at Harvard, the Institute for Advanced Study and CERN. He held an invited professorship at the Université de Genève between 1987 and 1991, and in this same year, he moved to Consejo Superior de Investigaciones Científicas (CSIC) in Madrid, where he eventually became a founding member of the Instituto de Física Teórica (IFT) UAM–CSIC. He became emeritus in 2024.

Among the large number of topics he worked on during his scientific career, César was initially fascinated by the dynamics of gauge theories. He dedicated his postdoctoral years to problems concerning the structure of the quantum vacuum in QCD, making some crucial contributions.

Focusing in the 1990s on the physics of two-dimensional conformal field theories, he used his special gifts to squeeze physics out of formal structures, leaving his mark in works ranging from superstrings to integrable models, and co-authoring with Martí Ruiz-Altaba and Germán Sierra the book Quantum Groups in Two-Dimensional Physics (Cambridge University Press, 1996). With the new century and the rise of holography, César returned to the topics of his youth: the renormalisation group and gauge theories, now with a completely different perspective.

Far from settling down, in the last decade we discover a very daring César, plunging together with Gia Dvali and other collaborators into a radical approach to understand symmetry breaking in gauge theories, opening new avenues in the study of black holes and the emergence of spacetime in quantum gravity. The magic of von Neumann algebras inspired him to propose an elegant, deep and original understanding of inflationary universes and their quantum properties. This research programme led him to one of his most fertile and productive periods, sadly truncated by his unexpected passing at a time when he was bursting with ideas and projects.

César’s influence went beyond his papers. After his arrival at CSIC as an international leader in string theory, he acted as a pole of attraction. His impact was felt both through the training of graduate students and through the many courses he taught, which left a lasting impression on new generations.

Contrasting with his abstract scientific style, César also had a pragmatic side, full of vision, momentum and political talent. A major part of his legacy is the creation of the IFT, whose existence would be unthinkable without César among the small group of theoretical physicists from Universidad Autónoma de Madrid and CSIC who made a dream come true. For him, the IFT was more than his research institute, it was the home he helped to build.

Philosophy was a true second career for César, dating back to his PhD in Salamanca and strengthened at Harvard, where he started a lifelong friendship with Hilary Putnam. The philosophy of language was one of his favourite subjects for philosophical musings, and he dedicated to it an inspiring book in Spanish in 2003.

César’s impressive and eclectic knowledge of physics always transformed blackboard discussions into a delightful and fascinating experience, while his extraordinary ability to establish connections between apparently remote notions was extremely motivating at the early stages of a project. A regular presence at seminars and journal clubs, and always conspicuous by his many penetrating and inspiring questions, he was a beloved character among graduate students, who felt the excitement of knowing that he could turn every seminar into a unique event.

César was an excellent scientist with a remarkable personality. He was a wonderful conversationalist on any possible topic, encouraging open discussions free of prejudice, and building bridges with all conversational partners. He cherished his wife Carmen and daughters Ana and Pepa, who survive him.

Farewell, dear friend. May you rest in peace, and may your memory be our blessing.

Quantum simulators in high-energy physics https://cerncourier.com/a/quantum-simulators-in-high-energy-physics/ Wed, 09 Jul 2025 07:12:13 +0000 https://cerncourier.com/?p=113530 Enrique Rico Ortega and Sofia Vallecorsa explain how quantum computing will allow physicists to model complex dynamics, from black-hole evaporation to neutron-star interiors.

In 1982 Richard Feynman posed a question that challenged computational limits: can a classical computer simulate a quantum system? His answer: not efficiently. The complexity of the computation increases rapidly, rendering realistic simulations intractable. To understand why, consider the basic units of classical and quantum information.

A classical bit can exist in one of two states: |0> or |1>. A quantum bit, or qubit, exists in a superposition α|0> + β|1>, where α and β are complex amplitudes with real and imaginary parts. This superposition is the core feature that distinguishes quantum bits and classical bits. While a classical bit is either |0> or |1>, a quantum bit can be a blend of both at once. This is what gives quantum computers their immense parallelism – and also their fragility.
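
As a minimal sketch of this bookkeeping – using plain NumPy rather than any real quantum-computing framework – a qubit can be stored as a normalised pair of complex amplitudes, with |α|² and |β|² giving the probabilities of measuring |0> and |1>:

```python
import numpy as np

# A qubit as a normalised complex 2-vector: alpha|0> + beta|1>
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)    # equal superposition with a relative phase
state = np.array([alpha, beta])

# Normalisation: |alpha|^2 + |beta|^2 = 1
assert np.isclose(np.vdot(state, state).real, 1.0)

print("P(0) =", abs(alpha) ** 2, " P(1) =", abs(beta) ** 2)   # 0.5 and 0.5
```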

The difference becomes profound with scale. Two classical bits have four possible states, and are always in just one of them at a time. Two qubits simultaneously encode a complex-valued superposition of all four states.

Resources scale exponentially. N classical bits encode N boolean values, but N qubits encode 2^N complex amplitudes. Simulating 50 qubits with double-precision real numbers for each part of the complex amplitudes would require more than a petabyte of memory, beyond the reach of even the largest supercomputers.
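
A quick back-of-the-envelope check of that claim (a minimal sketch, assuming 16 bytes per amplitude – one double-precision number each for the real and imaginary parts – and ignoring any overhead a real simulator would add):

```python
# Memory needed to store the full state vector of an N-qubit system:
# 2**N complex amplitudes, each taking 16 bytes (two 8-byte doubles).
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 50):
    print(f"{n} qubits: {state_vector_bytes(n) / 1024**3:,.0f} GiB")
# 30 qubits: 16 GiB
# 40 qubits: 16,384 GiB  (~16 TiB)
# 50 qubits: 16,777,216 GiB  (~16 PiB)
```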

Direct mimicry

Feynman proposed a different approach to quantum simulation. If a classical computer struggles, why not use one quantum system to emulate the behaviour of another? This was the conceptual birth of the quantum simulator: a device that harnesses quantum mechanics to solve quantum problems. For decades, this visionary idea remained in the realm of theory, awaiting the technological breakthroughs that are now rapidly bringing it to life. Today, progress in quantum hardware is driving two main approaches: analog and digital quantum simulation, in direct analogy to the history of classical computing.

Optical tweezers

In analog quantum simulators, the physical parameters of the simulator directly correspond to the parameters of the quantum system being studied. Think of it like a wind tunnel for aeroplanes: you are not calculating air resistance on a computer but directly observing how air flows over a model.

A striking example of an analog quantum simulator traps excited Rydberg atoms in precise configurations using highly focused laser beams known as “optical tweezers”. Rydberg atoms have one electron excited to an energy level far from the nucleus, giving them an exaggerated electric dipole moment that leads to tunable long-range dipole–dipole interactions – an ideal setup for simulating particle interactions in quantum field theories (see “Optical tweezers” figure).

The positions of the Rydberg atoms discretise the space inhabited by the quantum fields being modelled. At each point in the lattice, the local quantum degrees of freedom of the simulated fields are embodied by the internal states of the atoms. Dipole–dipole interactions simulate the dynamics of the quantum fields. This technique has been used to observe phenomena such as string breaking, where the force between particles pulls so strongly that the vacuum spontaneously creates new particle–antiparticle pairs. Such quantum simulations model processes that are notoriously difficult to calculate from first principles using classical computers (see “A philosophical dimension” panel).

Universal quantum computation

Digital quantum simulators operate much like classical digital computers, though using quantum rather than classical logic gates. While classical logic manipulates classical bits, quantum logic manipulates qubits. Because quantum logic gates obey the Schrödinger equation, they preserve information and are reversible, whereas most classical gates, such as “AND” and “OR”, are irreversible. Many quantum gates have no classical equivalent, because they manipulate phase, superposition or entanglement – a uniquely quantum phenomenon in which two or more qubits share a combined state. In an entangled system, the state of each qubit cannot be described independently of the others, even if they are far apart: the global description of the quantum state is more than the combination of the local information at every site.
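
Two of these statements can be illustrated concretely (again a sketch in plain NumPy, not any particular quantum-computing library): reversibility corresponds to the gates being unitary matrices, and entanglement corresponds to a two-qubit state that cannot be factored into two single-qubit states:

```python
import numpy as np

# Quantum gates are unitary, hence reversible: applying U followed by its
# conjugate transpose recovers the input exactly.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard gate
assert np.allclose(H.conj().T @ H, np.eye(2))

# The Bell state (|00> + |11>)/sqrt(2) is entangled: reshaping its four
# amplitudes into a 2x2 matrix gives Schmidt rank 2, so it cannot be
# written as a product of two independent single-qubit states.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print("Schmidt rank:", np.linalg.matrix_rank(bell.reshape(2, 2)))   # 2 -> entangled
```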

A philosophical dimension

The discretisation of space by quantum simulators echoes the rise of lattice QCD in the 1970s and 1980s. Confronted with the non-perturbative nature of the strong interaction, Kenneth Wilson introduced a method to discretise spacetime, enabling numerical solutions to quantum chromodynamics beyond the reach of perturbation theory. Simulations on classical supercomputers have since deepened our understanding of quark confinement and hadron masses, catalysed advances in high-performance computing, and inspired international collaborations. It has become an indispensable tool in particle physics (see “Fermilab’s final word on muon g-2”).

In classical lattice QCD, the discretisation of spacetime is just a computational trick – a means to an end. But in quantum simulators this discretisation becomes physical. The simulator is a quantum system governed by the same fundamental laws as the target theory.

This raises a philosophical question: are we merely modelling the target theory or are we, in a limited but genuine sense, realising it? If an array of neutral atoms faithfully mimics the dynamical behaviour of a specific gauge theory, is it “just” a simulation, or is it another manifestation of that theory’s fundamental truth? Feynman’s original proposal was, in a sense, about using nature to compute itself. Quantum simulators bring this abstract notion into concrete laboratory reality.

By applying sequences of quantum logic gates, a digital quantum computer can model the time evolution of any target quantum system. This makes digital quantum simulators flexible and scalable in pursuit of universal quantum computation – logic able to run any algorithm allowed by the laws of quantum mechanics, given enough qubits and sufficient time. Universal quantum computing requires only a small subset of the many quantum logic gates that can be conceived, for example Hadamard, T and CNOT. The Hadamard gate creates a superposition: |0> → (|0> + |1>)/√2. The T gate applies a 45° phase rotation: |1> → e^(iπ/4)|1>. And the CNOT gate entangles qubits by flipping a target qubit if a control qubit is |1>. These three suffice to prepare any quantum state from a trivial reference state: |ψ> = U1 U2 U3 … UN |0000…000>.
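The gates named above are small unitary matrices, so a few lines of NumPy suffice to verify their defining properties and to assemble an entangled Bell state. This is an illustrative sketch rather than code from any quantum-hardware toolkit.

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard
T = np.diag([1, np.exp(1j * np.pi / 4)])              # T: 45-degree phase on |1>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                        # flip the target if the control is |1>

# Quantum gates are unitary, hence reversible (U^dagger U = 1)
for U in (H, T, CNOT):
    assert np.allclose(U.conj().T @ U, np.eye(U.shape[0]))

# Example: prepare the entangled Bell state (|00> + |11>)/sqrt(2) from |00>
ket00 = np.zeros(4); ket00[0] = 1.0
bell = CNOT @ np.kron(H, np.eye(2)) @ ket00
print(np.round(bell, 3))    # [0.707 0.    0.    0.707]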

Trapped ions

To bring frontier physics problems within the scope of current quantum computing resources, the distinction between analog and digital quantum simulations is often blurred. The complexity of simulations can be reduced by combining digital gate sequences with analog quantum hardware that aligns with the interaction patterns relevant to the target problem. This is feasible as quantum logic gates usually rely on native interactions similar to those used in analog simulations. Rydberg atoms are a common choice. Alongside them, two other technologies are becoming increasingly dominant in digital quantum simulation: trapped ions and superconducting qubit arrays.

Trapped ions offer the greatest control. Individual charged ions can be suspended in free space using electromagnetic fields. Lasers manipulate their quantum states, inducing interactions between them. Trapped-ion systems are renowned for their high fidelity (meaning operations are accurate) and long coherence times (meaning they maintain their quantum properties for longer), making them excellent candidates for quantum simulation (see “Trapped ions” figure).

Superconducting qubit arrays promise the greatest scalability. These tiny circuits, fabricated from superconducting materials, act as qubits when cooled to extremely low temperatures and manipulated with microwave pulses. This technology is at the forefront of efforts to build quantum simulators and digital quantum computers for universal quantum computation (see “Superconducting qubits” figure).

The noisy intermediate-scale quantum era

Despite rapid progress, these technologies are at an early stage of development and face three main limitations.

The first problem is that qubits are fragile. Interactions with their environment quickly compromise their superposition and entanglement, making computations unreliable. Preventing “decoherence” is one of the main engineering challenges in quantum technology today.

The second challenge is that quantum logic gates have low fidelity. Over a long sequence of operations, errors accumulate, corrupting the result.

Finally, quantum simulators currently have a very limited number of qubits – typically only a few hundred. This is far fewer than what is needed for high-energy physics (HEP) problems.

Superconducting qubits

This situation is known as the “noisy intermediate-scale quantum” (NISQ) era: we are no longer doing proof-of-principle experiments with a few tens of qubits, but neither can we control thousands of them. These limitations mean that current digital simulations are often restricted to “toy” models, such as QED simplified to have just one spatial and one time dimension. Even with these constraints, small-scale devices have successfully reproduced non-perturbative aspects of the theories in real time and have verified the preservation of fundamental physical principles such as gauge invariance, the symmetry that underpins the fundamental forces of the Standard Model.

Quantum simulators may chart a similar path to classical lattice QCD, but with even greater reach. Lattice QCD struggles with real-time evolution and finite-density physics due to the infamous “sign problem”, wherein quantum interference between classically computed amplitudes causes exponentially worsening signal-to-noise ratios. This renders some of the most interesting problems unsolvable on classical machines.
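A hedged toy model, far simpler than lattice QCD, captures the essence of the difficulty: when Monte Carlo weights carry an oscillating phase, their average (the “average sign”) shrinks exponentially while the statistical noise does not, so the relative error of any estimate blows up.

import numpy as np

rng = np.random.default_rng(0)
M = 500_000                                  # number of Monte Carlo samples
x = rng.normal(size=M)

for lam in (1.0, 2.0, 3.0):                  # lam stands in for system size or inverse temperature
    phases = np.exp(1j * lam * x)
    avg_sign = phases.real.mean()            # tends to exp(-lam**2 / 2)
    noise = phases.real.std() / np.sqrt(M)   # statistical error on that average
    print(f"lam={lam}: <sign> = {avg_sign:.1e} +- {noise:.1e} "
          f"(relative error {noise / abs(avg_sign):.1%})")
# Pushing lam higher drives <sign> below its own error bar, so the number of
# samples needed grows exponentially: the essence of the sign problem.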

Quantum simulators do not suffer from the sign problem because they evolve naturally in real time, just like the physical systems they emulate. This promises to open new frontiers such as the simulation of early-universe dynamics, black-hole evaporation and the dense interiors of neutron stars.

Quantum simulators will powerfully augment traditional theoretical and computational methods, offering profound insights when Feynman diagrams become intractable, when dealing with real-time dynamics and when the sign problem renders classical simulations exponentially difficult. Just as the lattice revolution required decades of concerted community effort to reach its full potential, so will the quantum revolution, but the fruits will again transform the field. As the aphorism attributed to Mark Twain goes: history never repeats itself, but it often rhymes.

Quantum information

One of the most exciting and productive developments in recent years is the unexpected, yet profound, convergence between HEP and quantum information science (QIS). For a long time these fields evolved independently. HEP explored the universe’s smallest constituents and grandest structures, while QIS focused on harnessing quantum mechanics for computation and communication. One of the pioneers in studying the interface between these fields was John Bell, a theoretical physicist at CERN.

HEP and QIS are now deeply intertwined. As quantum simulators advance, there is a growing demand for theoretical tools that combine the rigour of quantum field theory with the concepts of QIS. For example, tensor networks were developed in condensed-matter physics to represent highly entangled quantum states, and have now found surprising applications in lattice gauge theories and “holographic dualities” between quantum gravity and quantum field theory. Another example is quantum error correction – a vital QIS technique to protect fragile quantum information from noise, and now a major focus for quantum simulation in HEP.

This cross-disciplinary synthesis is not just conceptual; it is becoming institutional. Initiatives like the US Department of Energy’s Quantum Information Science Enabled Discovery (QuantISED) programme, CERN’s Quantum Technology Initiative (QTI) and Europe’s Quantum Flagship are making substantial investments in collaborative research. Quantum algorithms will become indispensable for theoretical problems just as quantum sensors are becoming indispensable to experimental observation (see “Sensing at quantum limits”).

The result is the emergence of a new breed of scientist: one equally fluent in the fundamental equations of particle physics and the practicalities of quantum hardware. These “hybrid” scientists are building the theoretical and computational scaffolding for a future where quantum simulation is a standard, indispensable tool in HEP. 

The post Quantum simulators in high-energy physics appeared first on CERN Courier.

]]>
Feature Enrique Rico Ortega and Sofia Vallecorsa explain how quantum computing will allow physicists to model complex dynamics, from black-hole evaporation to neutron-star interiors. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_QSIM_fidelity.jpg
Four ways to interpret quantum mechanics https://cerncourier.com/a/four-ways-to-interpret-quantum-mechanics/ Wed, 09 Jul 2025 07:11:50 +0000 https://cerncourier.com/?p=113474 Carlo Rovelli describes the major schools of thought on how to make sense of a purely quantum world.

The post Four ways to interpret quantum mechanics appeared first on CERN Courier.

]]>
One hundred years after its birth, quantum mechanics is the foundation of our understanding of the physical world. Yet debates on how to interpret the theory – especially the thorny question of what happens when we make a measurement – remain as lively today as during the 1930s.

The latest recognition of the fertility of studying the interpretation of quantum mechanics was the award of the 2022 Nobel Prize in Physics to Alain Aspect, John Clauser and Anton Zeilinger. The motivation for the prize pointed out that the bubbling field of quantum information, with its numerous current and potential technological applications, largely stems from the work of John Bell at CERN in the 1960s and 1970s, which in turn was motivated by the debate on the interpretation of quantum mechanics.

The majority of scientists use a textbook formulation of the theory that distinguishes the quantum system being studied from “the rest of the world” – including the measuring apparatus and the experimenter, all described in classical terms. Used in this orthodox manner, quantum theory describes how quantum systems react when probed by the rest of the world. It works flawlessly.

Sense and sensibility

The problem is that the rest of the world is quantum mechanical as well. There are of course regimes in which the behaviour of a quantum system is well approximated by classical mechanics. One may even be tempted to think that this suffices to solve the difficulty. But this leaves us in the awkward position of having a general theory of the world that only makes sense under special approximate conditions. Can we make sense of the theory in general?

Today, variants of four main ideas stand at the forefront of efforts to make quantum mechanics more conceptually robust. They are known as physical collapse, hidden variables, many worlds and relational quantum mechanics. Each appears to me to be viable a priori, but each comes with a conceptual price to pay. The latter two may be of particular interest to the high-energy community as the first two do not appear to fit well with relativity.

Probing physical collapse

The idea of the physical collapse is simple: we are missing a piece of the dynamics. There may exist a yet-undiscovered physical interaction that causes the wavefunction to “collapse” when the quantum system interacts with the classical world in a measurement. The idea is empirically testable. So far, all laboratory attempts to find violations of the textbook Schrödinger equation have failed (see “Probing physical collapse” figure), and some models for these hypothetical new dynamics have been ruled out by measurements.

The second possibility, hidden variables, follows on from Einstein’s belief that quantum mechanics is incomplete. It posits that its predictions are exactly correct, but that there are additional variables describing what is going on, besides those in the usual formulation of the theory: the reason why quantum predictions are probabilistic is our ignorance of these other variables.

The work of John Bell shows that the dynamics of any such theory will have some degree of non-locality (see “Non-locality” image). In the non-relativistic domain, there is a good example of a theory of this sort, which goes under the name of de Broglie–Bohm or pilot-wave theory. This theory has non-local but deterministic dynamics capable of reproducing the predictions of non-relativistic quantum-particle dynamics. As far as I am aware, all existing theories of this kind break Lorentz invariance, and the extension of hidden variable theories to quantum-field theoretical domains appears cumbersome.

Relativistic interpretations

Let me now come to the two ideas that are naturally closer to relativistic physics. The first is the many-worlds interpretation – a way of making sense of quantum theory without either changing its dynamics or adding extra variables. It is described in detail in this edition of CERN Courier by one of its leading contemporary proponents (see “The minimalism of many worlds“), but the main idea is the following: being a genuine quantum system, the apparatus that makes a quantum measurement does not collapse the superposition of possible measurement outcomes – it becomes a quantum superposition of the possibilities, as does any human observer.

Non-locality

If we observe a singular outcome, says the many-worlds interpretation, it is not because one of the probabilistic alternatives has actualised in a mysterious “quantum measurement”. Rather, it is because we have split into a quantum superposition of ourselves, and we just happen to be in one of the resulting copies. The world we see around us is thus only one of the branches of a forest of parallel worlds in the overall quantum state of everything. The price to pay to make sense of quantum theory in this manner is to accept the idea that the reality we see is just a branch in a vast collection of possible worlds that include innumerable copies of ourselves.

Relational interpretations are the most recent of the four kinds mentioned. They similarly avoid physical collapse or hidden variables, but do so without multiplying worlds. They stay closer to the orthodox textbook interpretation, but with no privileged status for observers. The idea is to think of quantum theory in a manner closer to the way it was initially conceived by Born, Jordan, Heisenberg and Dirac: namely in terms of transition amplitudes between observations rather than quantum states evolving continuously in time, as emphasised by Schrödinger’s wave mechanics (see “A matter of taste” image).

Observer relativity

The alternative to taking the quantum state as the fundamental entity of the theory is to focus on the information that an arbitrary system can have about another arbitrary system. This information is embodied in the physics of the apparatus: the position of its pointer variable, the trace in a bubble chamber, a person’s memory or a scientist’s logbook. After a measurement, these physical quantities “have information” about the measured system as their value is correlated with a property of the observed systems.

Quantum theory can be interpreted as describing the relative information that systems can have about one another. The quantum state is interpreted as a way of coding the information about a system available to another system. What looks like a multiplicity of worlds in the many-worlds interpretation becomes nothing more than a mathematical accounting of possibilities and probabilities.

A matter of taste

The relational interpretation reduces the content of the physical theory to be about how systems affect other systems. This is like the orthodox textbook interpretation, but made democratic. Instead of a preferred classical world, any system can play a role that is a generalisation of the Copenhagen observer. Relativity teaches us that velocity is a relative concept: an object has no velocity by itself, but only relative to another object. Similarly, quantum mechanics, interpreted in this manner, teaches us that all physical variables are relative. They are not properties of a single object, but ways in which an object affects another object.

The QBism version of the interpretation restricts its attention to observing systems that are rational agents: they can use observations and make probabilistic predictions about the future. Probability is interpreted subjectively, as the expectation of a rational agent. The relational interpretation proper does not accept this restriction: it considers the information that any system can have about any other system. Here, “information” is understood in the simple physical sense of correlation described above.

Like many worlds – to which it is not unrelated – the relational interpretation does not add new dynamics or new variables. Unlike many worlds, it does not ask us to think about parallel worlds either. The conceptual price to pay is a radical weakening of a strong form of realism: the theory does not give us a picture of a unique objective sequence of facts, but only perspectives on the reality of physical systems, and how these perspectives interact with one another. Only quantum states of a system relative to another system play a role in this interpretation. The many-worlds interpretation is very close to this. It supplements the relational interpretation with an overall quantum state, interpreted realistically, achieving a stronger version of realism at the price of multiplying worlds. In this sense, the many worlds and relational interpretations can be seen as two sides of the same coin.

I have only sketched here the most discussed alternatives, and have tried to be as neutral as possible in a field of lively debates in which I have my own strong bias (towards the fourth solution). Empirical testing, as I have mentioned, can only test the physical collapse hypothesis.

There is nothing wrong, in science, in using different pictures for the same phenomenon. Conceptual flexibility is itself a resource. Specific interpretations often turn out to be well adapted to specific problems. In quantum optics it is sometimes convenient to think that there is a wave undergoing interference, as well as a particle that follows a single trajectory guided by the wave, as in the pilot-wave hidden-variable theory. In quantum computing, it is convenient to think that different calculations are being performed in parallel in different worlds. My own field of loop quantum gravity treats spacetime regions as quantum processes: here, the relational interpretation merges very naturally with general relativity, because spacetime regions themselves become quantum processes, affecting each other.

Richard Feynman famously wrote that “every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics. He knows that they are all equivalent, and that nobody is ever going to be able to decide which one is right at that level, but he keeps them in his head, hoping that they will give him different ideas for guessing.” I think that this is where we are, in trying to make sense of our best physical theory. We have various ways to make sense of it. We do not yet know which of these will turn out to be the most fruitful in the future.

The post Four ways to interpret quantum mechanics appeared first on CERN Courier.

]]>
Feature Carlo Rovelli describes the major schools of thought on how to make sense of a purely quantum world. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_INTERP_Helgoland.jpg
Sensing at quantum limits https://cerncourier.com/a/sensing-at-quantum-limits/ Wed, 09 Jul 2025 07:11:30 +0000 https://cerncourier.com/?p=113517 Quantum sensors have become important tools in low-energy particle physics. Michael Doser explores opportunities to exploit their unparalleled precision at higher energies.

The post Sensing at quantum limits appeared first on CERN Courier.

]]>
Atomic energy levels. Spin orientations in a magnetic field. Resonant modes in cryogenic, high-quality-factor radio-frequency cavities. The transition from superconducting to normal conducting, triggered by the absorption of a single infrared photon. These are all simple yet exquisitely sensitive quantum systems with discrete energy levels. Each can serve as the foundation for a quantum sensor – instruments that detect single photons, measure individual spins or record otherwise imperceptible energy shifts.

Over the past two decades, quantum sensors have taken on leading roles in the search for ultra-light dark matter and in precision tests of fundamental symmetries. Examples include the use of atomic clocks to probe whether Earth is sweeping through oscillating or topologically structured dark-matter fields, and cryogenic detectors to search for electric dipole moments – subtle signatures that could reveal new sources of CP violation. These areas have seen rapid progress, as challenges related to detector size, noise, sensitivity and complexity have been steadily overcome, opening new phase space in which to search for physics beyond the Standard Model. Could high-energy particle physics benefit next?

Low-energy particle physics

Most of the current applications of quantum sensors are at low energies, where their intrinsic sensitivity and characteristic energy scales align naturally with the phenomena being probed. For example, within the Project 8 experiment at the University of Washington, superconducting sensors are being developed to tackle a longstanding challenge: to distinguish the tiny mass of the neutrino from zero (see “Quantum-noise limited” image). Inward-looking phased arrays of quantum-noise-limited microwave receivers allow spectroscopy of cyclotron radiation from beta-decay electrons as they spiral in a magnetic field. The shape of the endpoint of the spectrum is sensitive to the mass of the neutrino and such sensors have the potential to be sensitive to neutrino masses as low as 40 meV.

Quantum-noise limited

Beyond the Standard Model, superconducting sensors play a central role in the search for dark matter. At the lowest mass scales (peV to meV), experiments search for ultralight bosonic dark-matter candidates such as axions and axion-like particles (ALPs) through excitations of the vacuum field inside high–quality–factor microwave and millimetre-wave cavities (see “Quantum sensitivity” image). In the meV range, light-shining-through-wall experiments aim to reveal brief oscillations into weakly coupled hidden-sector particles such as dark photons or ALPs, and may employ quantum sensors for detecting reappearing photons, depending on the detection strategy. In the MeV to sub-GeV mass range, superconducting sensors are used to detect individual photons and phonons in cryogenic scintillators, enabling sensitivity to dark-matter interactions via electron recoils. At higher masses, reaching into the GeV regime, superfluid helium detectors target nuclear recoils from heavier dark matter particles such as WIMPs.

These technologies also find broad application beyond fundamental physics. For example, in superconducting and other cryogenic sensors, the ability to detect single quanta with high efficiency and ultra-low noise is essential. The same capabilities are the technological foundation of quantum communication.

Raising the temperature

While many superconducting quantum sensors require ultra-low temperatures of a few mK, some spin-based quantum sensors can function at or near room temperature. Spin-based sensors, such as nitrogen-vacancy (NV) centres in diamonds and polarised rubidium atoms, are excellent examples.

NV centres are defects in the diamond lattice where a missing carbon atom – the vacancy – is adjacent to a lattice site where a carbon atom has been replaced by a nitrogen atom. The electronic spin states in NV centres have unique energy levels that can be probed by laser excitation and detection of spin-dependent fluorescence.

Rubidium is promising for spin-based sensors because it has unpaired electrons. In the presence of an external magnetic field, its atomic energy levels are split by the Zeeman effect. When optically pumped with laser light, spin-polarised “dark” sublevels – those not excited by the light – become increasingly populated. These aligned spins precess in magnetic fields, forming the basis of atomic magnetometers and other quantum sensors.
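The numbers involved are easy to estimate. The sketch below evaluates the Larmor precession frequency ν = gF μB B/h using the textbook g-factor of the rubidium-87 ground state; the values are order-of-magnitude illustrations, not the specification of any particular magnetometer.

# Hedged order-of-magnitude sketch of the precession that such magnetometers track
MU_B = 9.274e-24      # Bohr magneton, J/T
H_PLANCK = 6.626e-34  # Planck constant, J s
G_F = 0.5             # approximate g-factor for the 87Rb F = 2 ground-state level

def larmor_frequency_hz(b_tesla):
    return G_F * MU_B * b_tesla / H_PLANCK

print(f"{larmor_frequency_hz(1e-9):.1f} Hz per nanotesla")   # about 7 Hz/nT
# A femtotesla-level field change shifts the precession by only ~7 microhertz,
# which is why long coherence times and low noise are essential.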

Being exquisite magnetometers, both devices make promising detectors for ultralight bosonic dark-matter candidates such as axions. Fermion spins may interact with spatial or temporal gradients of the axion field, leading to tiny oscillating energy shifts. The coupling of axions to gluons could also show up as an oscillating nuclear electric dipole moment. These interactions could manifest as oscillating energy-level shifts in NV centres, or as time-varying NMR-like spin precession signals in the atomic magnetometers.

Large-scale detectors

The situation is completely different in high-energy physics detectors, which require numerous interactions between a particle and a detector. Charged particles cause many ionisation events, and when a neutral particle interacts it produces charged particles that result in similarly numerous ionisations. Even if quantum control were possible within individual units of a massive detector, the number of individual quantum sub-processes to be monitored would exceed the possibilities of any realistic device.

Increasingly, however, researchers are exploring how quantum-control techniques – such as manipulating individual atoms or spins using lasers or microwaves – can be integrated into high-energy-physics detectors. These methods could enhance detector sensitivity, tune detector response or enable entirely new ways of measuring particle properties. While these quantum-enhanced or hybrid detection approaches are still in their early stages, they hold significant promise.

Quantum dots

Quantum dots are nanoscale semiconductor crystals – typically a few nanometres in diameter – that confine charge carriers (electrons and holes) in all three spatial dimensions. This quantum confinement leads to discrete, atom-like energy levels and results in optical and electronic properties that are highly tunable with size, shape and composition. Originally studied for their potential in optoelectronics and biomedical imaging, quantum dots have more recently attracted interest in high-energy physics due to their fast scintillation response, narrow-band emission and tunability. Their emission wavelength can be precisely controlled through nanostructuring, making them promising candidates for engineered detectors with tailored response characteristics.

Chromatic calorimetry

While their radiation hardness is still under debate and needs to be resolved, engineering their composition, geometry, surface and size can yield very narrow-band (20 nm) emitters across the optical spectrum and into the infrared. Quantum dots such as these could enable the design of a “chromatic calorimeter”: a stack of quantum-dot layers, each tuned to emit at a distinct wavelength; for example red in the first layer, orange in the second and progressing through the visible spectrum to violet. Each layer would absorb higher energy photons quite broadly but emit light in a narrow spectral band. The intensity of each colour would then correspond to the energy absorbed in that layer, while the emission wavelength would encode the position of energy deposition, revealing the shower shape (see “Chromatic calorimetry” figure). Because each layer is optically distinct, hermetic isolation would be unnecessary, reducing the overall material budget.
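The readout logic of such a device would be almost trivial, which is part of its appeal. The hedged Python sketch below, with invented wavelengths and an assumed light yield, maps narrow-band intensities back onto the energy deposited per layer.

# Toy decode of a "chromatic calorimeter": each layer is tagged by a distinct
# narrow emission band, so the intensity in each band gives the energy at that
# depth. Wavelengths and the calibration constant are purely illustrative.
layer_wavelength_nm = {0: 620, 1: 600, 2: 550, 3: 480, 4: 420}   # red ... violet
photons_per_gev = 1.0e4                                          # assumed light yield

def decode_shower_profile(spectrum_counts):
    # Map measured narrow-band photon counts back to energy per layer (GeV)
    return {layer: spectrum_counts.get(wl, 0) / photons_per_gev
            for layer, wl in layer_wavelength_nm.items()}

# Example readout: most light in the 550 and 480 nm bands, so the shower
# maximum sits around the third and fourth layers
measured = {620: 5_000, 600: 18_000, 550: 42_000, 480: 30_000, 420: 9_000}
print(decode_shower_profile(measured))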

Rather than improving the energy resolution of existing calorimeters, quantum dots could provide additional information on the shape and development of particle showers if embedded in existing scintillators. Initial simulations and beam tests by CERN’s Quantum Technology Initiative (QTI) support the hypothesis that the spectral intensity of quantum-dot emission can carry information about the energy and species of incident particles. Ongoing work aims to explore their capabilities and limitations.

Beyond calorimetry, quantum dots could be formed within solid semiconductor matrices, such as gallium arsenide, to form a novel class of “photonic trackers”. Scintillation light from electronically tunable quantum dots could be collected by photodetectors integrated directly on top of the same thin semiconductor structure, such as in the DoTPiX concept. Thanks to a highly compact, radiation-tolerant scintillating pixel tracking system with intrinsic signal amplification and minimal material budget, photonic trackers could provide a scintillation-light-based alternative to traditional charge-based particle trackers.

Living on the edge

Low temperatures also offer opportunities at scale – and cryogenic operation is a well-established technique in both high-energy and astroparticle physics, with liquid argon (boiling point 87 K) widely used in time projection chambers and some calorimeters, and some dark-matter experiments using liquid helium (boiling point 4.2 K) to reach even lower temperatures. A range of solid-state detectors, including superconducting sensors, operate effectively at these temperatures and below, and offer significant advantages in sensitivity and energy resolution.

Single-photon phase transitions

Magnetic microcalorimeters (MMCs) and transition-edge sensors (TESs) both register the tiny temperature rise produced when a particle deposits its energy in an absorber. A TES is operated in the narrow temperature range where a superconducting film undergoes a rapid transition from zero resistance to finite values: because the transition is extremely steep, even a minute temperature change leads to a detectable change in resistance, allowing precise calorimetry. An MMC instead reads out the temperature rise as a change in the magnetisation of a paramagnetic sensor, measured with a SQUID.
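An illustrative back-of-the-envelope sketch, using an assumed heat capacity rather than the parameters of any real device, shows why the approach is so sensitive: the temperature rise ΔT = E/C from a single X-ray is comparable to the width of the superconducting transition itself.

# Hedged estimate of the temperature rise dT = E / C in a tiny absorber
E_X_RAY_EV = 6_000              # a 6 keV X-ray, e.g. from an exotic-atom transition
EV_TO_JOULE = 1.602e-19
HEAT_CAPACITY_J_PER_K = 1e-12   # ~1 pJ/K, an illustrative value for a mK-scale absorber

delta_T = E_X_RAY_EV * EV_TO_JOULE / HEAT_CAPACITY_J_PER_K
print(f"Temperature rise: {delta_T * 1e3:.2f} mK")   # about 1 mK
# A superconducting transition only ~1 mK wide turns a rise of this size into a
# large, easily measurable change in resistance.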

Functioning at millikelvin temperatures, TESs provide much higher energy resolution than solid-state detectors made from high-purity germanium crystals, which work by collecting electron–hole pairs created when ionising radiation interacts with the crystal lattice. TESs are increasingly used in high-resolution X-ray spectroscopy of pionic, muonic or antiprotonic atoms, and in photon detection for observational astronomy, despite the technical challenges associated with maintaining ultra-low operating temperatures.

By contrast, superconducting nanowire and microwire single-photon detectors (SNSPDs and SMSPDs) register only a change in state – from superconducting to normal conducting – allowing them to operate at higher temperatures than traditional low-temperature sensors. When made from high–critical-temperature (Tc) superconductors, operation at temperatures as high as 10 K is feasible, while maintaining excellent sensitivity to energy deposited by charged particles and ultrafast switching times on the order of a few picoseconds. Recent advances include the development of large-area devices with up to 400,000 micron-scale pixels (see “Single-photon phase transitions” figure), fabrication of high-Tc SNSPDs and successful beam tests of SMSPDs. These technologies are promising candidates for detecting milli-charged particles – hypothetical particles arising in “hidden sector” extensions of the Standard Model – or for high-rate beam monitoring at future colliders.

Rugged, reliable and reproducible

Quantum sensor-based experiments have vastly expanded the phase space that has been searched for new physics. This is just the beginning of the journey, as larger-scale efforts build on the initial gold rush and new quantum devices are developed, perfected and brought to bear on the many open questions of particle physics.

To fully profit from their potential, a vigorous R&D programme is needed to scale up quantum sensors for future detectors. Ruggedness, reliability and reproducibility are key – as well as establishing “proof of principle” for the numerous imaginative concepts that have already been conceived. Challenges range from access to test infrastructures, to standardised test protocols for fair comparisons. In many cases, the largest challenge is to foster an open exchange of ideas given the numerous local developments that are happening worldwide. Finding a common language to discuss developments in different fields that at first glance may have little in common, builds on a willingness to listen, learn and exchange.

The European Committee for Future Accelerators (ECFA) detector R&D roadmap provides a welcome framework for addressing these challenges collaboratively through the Detector R&D (DRD) collaborations established in 2023 and now coordinated at CERN. Quantum sensors and emerging technologies are covered within the DRD5 collaboration, which ties together 112 institutes worldwide, many of them leaders in their particular field. Only a third stem from the traditional high-energy physics community.

These efforts build on the widespread expertise and enthusiastic efforts at numerous institutes and tie in with the quantum programmes being spearheaded at high-energy-physics research centres, among them CERN’s QTI. Partnering with neighbouring fields such as quantum computing, quantum communication and manufacturing is of paramount importance. The best approach may prove to be “targeted blue-sky research”: a willingness to explore completely novel concepts while keeping their ultimate usefulness for particle physics firmly in mind.

The post Sensing at quantum limits appeared first on CERN Courier.

]]>
Feature Quantum sensors have become important tools in low-energy particle physics. Michael Doser explores opportunities to exploit their unparalleled precision at higher energies. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_QSENSING_ADMX.jpg
A new probe of radial flow https://cerncourier.com/a/a-new-probe-of-radial-flow/ Tue, 08 Jul 2025 20:23:57 +0000 https://cerncourier.com/?p=113587 The ATLAS and ALICE collaborations have announced the first results of a new way to measure the “radial flow” of quark–gluon plasma.

The post A new probe of radial flow appeared first on CERN Courier.

]]>
Radial-flow fluctuations

The ATLAS and ALICE collaborations have announced the first results of a new way to measure the “radial flow” of quark–gluon plasma (QGP). The two analyses offer a fresh perspective into the fluid-like behaviour of QCD matter under extreme conditions, such as those that prevailed after the Big Bang. The measurements are highly complementary, with ALICE drawing on their detector’s particle-identification capabilities and ATLAS leveraging the experiment’s large rapidity coverage.

At the Large Hadron Collider, lead–ion collisions produce matter at temperatures and densities so high that quarks and gluons momentarily escape their confinement within hadrons. The resulting QGP is believed to have filled the universe during its first few microseconds, before cooling and fragmenting into mesons and baryons. In the laboratory, these streams of particles allow researchers to reconstruct the dynamical evolution of the QGP, which has long been known to transform anisotropies of the initial collision geometry into anisotropic momentum distributions of the final-state particles.

Compelling evidence

Differential measurements of the azimuthal distributions of produced particles over the last decades have provided compelling evidence that the outgoing momentum distribution reflects a collective response driven by initial pressure gradients. The isotropic expansion component, typically referred to as radial flow, has instead been inferred from the slope of particle spectra (see figure 1). Despite its fundamental role in driving the QGP fireball, radial flow lacked a differential probe comparable to those of its anisotropic counterparts.

ATLAS measurements of radial flow

That situation has now changed. The ALICE and ATLAS collaborations recently employed the novel observable v0(pT) to investigate radial flow directly. Their independent results demonstrate, for the first time, that the isotropic expansion of the QGP in heavy-ion collisions exhibits clear signatures of collective behaviour. The isotropic expansion of the QGP and its azimuthal modulations ultimately depend on the hydrodynamic properties of the QGP, such as shear or bulk viscosity, and can thus be measured to constrain them.

Traditionally, radial flow has been inferred from the slope of pT-spectra, with the pT-integrated radial-flow extracted via fits to “blast wave” models. The newly introduced differential observable v0(pT) captures fluctuations in spectral shape across pT bins. v0(pT) retains differential sensitivity, since it is defined as the correlation (technically the normalised covariance) between the fraction of particles in a given pT-interval and the mean transverse momentum of the collision products within a single event, [pT]. Roughly speaking, a fluctuation raising [pT] produces a positive v0(pT) at high pT due to the fractional yield increasing; conversely, the fractional yield decreasing at low pT causes a negative v0(pT). A pseudorapidity gap between the measurement of mean pT and the particle yields is used to suppress short-range correlations and isolate the long-range, collective signal. Previous studies observed event-by-event fluctuations in [pT], related to radial flow over a wide pT range and quantified by the coefficient v0ref, but they could not establish whether these fluctuations were correlated across different pT intervals – a crucial signature of collective behaviour.
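To see the logic of the observable in action, the hedged toy below generates simplified events and correlates the fractional yield in a pT window with the event-wise mean [pT]. The exact normalisation and the pseudorapidity-gap procedure used by ATLAS and ALICE are omitted, so only the qualitative sign pattern should be read from the output.

import numpy as np

rng = np.random.default_rng(7)
n_events = 10_000
event_mean_pt = rng.normal(0.70, 0.03, size=n_events)   # event-wise [pT] fluctuations, GeV

def v0_estimate(pt_lo, pt_hi):
    frac, mean_pt = [], []
    for mu in event_mean_pt:
        pts = rng.exponential(mu, size=300)              # 300 particles per toy event
        frac.append(np.mean((pts > pt_lo) & (pts < pt_hi)))
        mean_pt.append(pts.mean())
    frac, mean_pt = np.array(frac), np.array(mean_pt)
    return np.cov(frac, mean_pt)[0, 1] / (frac.mean() * np.std(mean_pt))   # one plausible normalisation

print(f"v0(0.0-0.5 GeV) ~ {v0_estimate(0.0, 0.5):+.3f}")   # negative: low-pT yield anti-correlates with [pT]
print(f"v0(2.0-3.0 GeV) ~ {v0_estimate(2.0, 3.0):+.3f}")   # positive: high-pT yield rises with [pT]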

Origins

The ATLAS collaboration performed a measurement of v0(pT) in the 0.5 to 10 GeV range, identifying three signatures of the collective origin of radial flow (see figure 2). First, correlations between the particle yield at fixed pT and the event-wise mean [pT] in a reference interval show that the two-particle radial flow factorises into single-particle coefficients as v0(pT) × v0ref for pT < 4 GeV, independent of the reference choice (left panel). Second, the data display no dependence on the rapidity gap between correlated particles, suggesting a long-range effect intrinsic to the entire system (middle panel). Finally, the centrality dependence of the ratio v0(pT)/v0ref followed a consistent trend from head-on to peripheral collisions, effectively cancelling initial geometry effects and supporting the interpretation of a collective QGP response (right panel). At higher pT, a decrease in v0(pT) and a splitting with respect to centrality suggest the onset of non-thermal effects such as jet quenching. This may reveal fluctuations in jet energy loss – an area warranting further investigation.

ALICE measurements of radial flow

Using more than 80 million collisions at a centre-of-mass energy of 5.02 TeV, ALICE extracted v0(pT) for identified pions, kaons and protons across a broad range of centralities. ALICE observes v0(pT) to be negative at low pT, reflecting the influence of mean-pT fluctuations on the spectral shape (see figure 3). The data display a clear mass ordering at low pT, from protons to kaons to pions, consistent with expectations from collective radial expansion. This mass ordering reflects the greater “push” heavier particles experience in the rapidly expanding medium. The picture changes above 3 GeV, where protons have larger v0(pT) values than pions and kaons, perhaps indicating the contribution of recombination processes in hadron production.

The two collaborations’ measurements of the new v0(pT) observable highlight its sensitivity to the bulk-transport properties of the QGP medium. Comparisons with hydrodynamic calculations show that v0(pT) varies with bulk viscosity and the speed of sound, but that it has a weaker dependence on shear viscosity. Hydrodynamic predictions reproduce the data well up to about 2 GeV, but diverge at higher momenta. The deviation of non-collective models like HIJING from the data underscores the dominance of final-state, hydrodynamic-like effects in shaping radial flow.

These results advance our understanding of one of the most extreme regimes of QCD matter, strengthening the case for the formation of a strongly interacting, radially expanding QGP medium in heavy-ion collisions. Differential measurements of radial flow offer a new tool to probe this fluid-like expansion in detail, establishing its collective origin and complementing decades of studies of anisotropic flow.

The post A new probe of radial flow appeared first on CERN Courier.

]]>
News The ATLAS and ALICE collaborations have announced the first results of a new way to measure the “radial flow” of quark–gluon plasma. https://cerncourier.com/wp-content/uploads/2025/07/ATLAS-PHOTO-2018-001-1.png
Neutron stars as fundamental physics labs https://cerncourier.com/a/neutron-stars-as-fundamental-physics-labs/ Tue, 08 Jul 2025 20:12:41 +0000 https://cerncourier.com/?p=113630 Fifty experts on nuclear physics, particle physics and astrophysics met at CERN from 9 to 13 June to discuss how to use extreme environments as precise laboratories for fundamental physics.

The post Neutron stars as fundamental physics labs appeared first on CERN Courier.

]]>
Neutron stars are truly remarkable systems. They pack between one and two times the mass of the Sun into a radius of about 10 kilometres. Teetering on the edge of gravitational collapse into a black hole, they exhibit some of the strongest gravitational forces in the universe. They feature extreme densities exceeding those of atomic nuclei. And due to their high densities they produce weakly interacting particles such as neutrinos. Fifty experts on nuclear physics, particle physics and astrophysics met at CERN from 9 to 13 June to discuss how to use these extreme environments as precise laboratories for fundamental physics.

Perhaps the most intriguing open question surrounding neutron stars is what is actually inside them. Clearly they are primarily composed of neutrons, but many theories suggest that other forms of matter should appear in the highest density regions near the centre of the star, including free quarks, hyperons and kaon or pion condensates. Diverse data can constrain these hypotheses, including astronomical inferences of the masses and radii of neutron stars, observations of the mergers of neutron stars by LIGO, and baryon production patterns and correlations in heavy-ion collisions at the LHC. Theoretical consistency is critical here. Several talks highlighted the importance of low-energy nuclear data to understand the behaviour of nuclear matter at low densities, though also emphasising that at very high densities and energies any description should fall within the realm of QCD – a theory that beautifully describes the dynamics of quarks and gluons at the LHC.

Another key question for neutron stars is how fast they cool. This depends critically on their composition. Quarks, hyperons, nuclear resonances, pions or muons would each lead to different channels to cool the neutron star. Measurements of the temperatures and ages of neutron stars might thereby be used to learn about their composition.

The workshop revealed that research into neutron stars has progressed so rapidly in recent years that it now allows key tests of fundamental physics, including searches for particles beyond the Standard Model such as the axion: a very light and weakly coupled dark-matter candidate that was initially postulated to explain the “strong CP problem” of why strong interactions are identical for particles and antiparticles. The workshop allowed particle theorists to appreciate the various possible uncertainties in their theoretical predictions and propagate them into new channels that may allow sharper tests of axions and other weakly interacting particles. An intriguing question that the workshop left open is whether the canonical QCD axion could condense inside neutron stars.

While many uncertainties remain, the workshop revealed that the field is open and exciting, and that upcoming observations of neutron stars, including neutron-star mergers or the next galactic supernova, hold unique opportunities to understand fundamental questions from the nature of dark matter to the strong CP problem.

The post Neutron stars as fundamental physics labs appeared first on CERN Courier.

]]>
Meeting report Fifty experts on nuclear physics, particle physics and astrophysics met at CERN from 9 to 13 June to discuss how to use extreme environments as precise laboratories for fundamental physics. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_FN_Neutron.jpg
The battle of the Big Bang https://cerncourier.com/a/the-battle-of-the-big-bang/ Tue, 08 Jul 2025 20:05:48 +0000 https://cerncourier.com/?p=113664 Battle of the Big Bang provides an entertaining update on the collective obsessions and controlled schizophrenias in cosmology, writes Will Kinney.

The post The battle of the Big Bang appeared first on CERN Courier.

]]>
As Arthur Koestler wrote in his seminal 1959 work The Sleepwalkers, “The history of cosmic theories … may without exaggeration be called a history of collective obsessions and controlled schizophrenias; and the manner in which some of the most important individual discoveries were arrived at, reminds one more of a sleepwalker’s performance than an electronic brain’s.” Koestler’s trenchant observation about the state of cosmology in the first half of the 20th century is perhaps even more true of cosmology in the first half of the 21st, and Battle of the Big Bang: The New Tales of Our Cosmic Origins provides an entertaining – and often refreshingly irreverent – update on the state of current collective obsessions and controlled schizophrenias in cosmology’s effort to understand the origin of the universe. The product of a collaboration between a working cosmologist (Afshordi) and a science communicator (Halper), Battle of the Big Bang tells the story of our modern efforts to comprehend the nature of the first moments of time, back to the moment of the Big Bang and even before.

Rogues gallery

The story told by the book combines lucid explanations of a rogues’ gallery of modern cosmological theories, some astonishingly successful, others less so, interspersed with anecdotes culled from Halper’s numerous interviews with key players in the game. These stories of the real people behind the theories add humanistic depth to the science, and the balance between Halper’s engaging storytelling and Afshordi’s steady-handed illumination of often esoteric scientific ideas is mostly a winning combination; the book is readable, without sacrificing too much scientific depth. In this respect, Battle of the Big Bang is reminiscent of Dennis Overbye’s 1991 Lonely Hearts of the Cosmos. As with Overbye’s account of the famous conference-banquet fist fight between Rocky Kolb and Gary Steigman, there is no shortage here of renowned scientists behaving like children, and the “mean girls of cosmology” angle makes for an entertaining read. The story of University of North Carolina professor Paul Frampton getting catfished by cocaine smugglers posing as model Denise Milani and ending up in an Argentine prison, for example, is not one you see coming.

Battle of the Big Bang: The New Tales of Our Cosmic Origins

A central conflict propelling the narrative is the longstanding feud between Andrei Linde and Alan Guth, both originators of the theory of cosmological inflation, and Paul Steinhardt, also an originator of the theory who later transformed into an apostate and bitter critic of the theory he helped establish.

Inflation – a hypothesised period of exponential cosmic expansion by more than 26 orders of magnitude that set the initial conditions for the hot Big Bang – is the gorilla in the room, a hugely successful theory that over the past several decades has racked up win after win when confronted by modern precision cosmology. Inflation is rightly considered by most cosmologists to be a central part of the “standard” cosmology, and its status as a leading theory inevitably makes it a target of critics like Steinhardt, who argue that inflation’s inherent flexibility means that it is not a scientific theory at all. Inflation is introduced early in the book, and for the remainder, Afshordi and Halper ably lead the reader through a wild mosaic of alternative theories to inflation: multiverses, bouncing universes, new universes birthed from within black holes, extra dimensions, varying light speed and “mirror” universes with reversed time all make appearances, a dizzying inventory of our most recent collective obsessions and schizophrenias.

In the later chapters, Afshordi describes some of his own efforts to formulate an alternative to inflation, and it is here that the book is at its strongest; the voice of a master of the craft confronting his own unconscious assumptions and biases makes for compelling reading. I have known Niayesh as a friend and colleague for more than 20 years. He is a fearlessly creative theorist with deep technical skill, but he has the heart of a rebel and a poet, and I found myself wishing that the book gave his unique voice more room to shine, instead of burying it beneath too many mundane pop-science tropes; the book could have used more of the science and less of the “science communication”. At times the pop-culture references come so thick that the reader feels as if he is having to shake them off his leg.

Compelling arguments

Anyone who reads science blogs or follows science on social media is aware of the voices, some of them from within mainstream science and many from further out on the fringe, arguing that modern theoretical physics suffers from a rigid orthodoxy that serves to crowd out worthy alternative ideas to understand problems such as dark matter, dark energy and the unification of gravity with quantum mechanics. This has been the subject of several books such as Lee Smolin’s The Trouble with Physics and Peter Woit’s Not Even Wrong. A real value in Battle of the Big Bang is to provide a compelling counterargument to that pessimistic narrative. In reality, ambitious scientists like nothing better than overturning a standard paradigm, and theorists have put the standard model of cosmology in the cross hairs with the gusto of assassins gunning for John Wick. Despite – or perhaps because of – its focus on conflict, this book ultimately paints a picture of a vital and healthy scientific process, a kind of controlled chaos, ripe with wild ideas, full of the clash of egos and littered with the ashes of failed shots at glory.

What the book is not is a reliable scholarly work on the history of science. Not only was the manuscript rather haphazardly copy-edited (the renowned Mount Palomar telescope, for example, is not “two hundred foot”, but in fact 200 inches), but the historical details are sometimes smoothed over to fit a coherent narrative rather than presented in their actual messy accuracy. While I do not doubt the anecdote of David Spergel saying “we’re dead”, referring to cosmic strings when data from the COBE satellite was first released, it was not COBE that killed cosmic strings. The blurry vision of COBE could accommodate either strings or inflation as the source of fluctuations in the cosmic microwave background (CMB), and it took a clearer view to make the distinction. The final nail in the coffin came from BOOMERanG nearly a decade later, with the observation of the second acoustic peak in the CMB. And it was not, as claimed here, BOOMERanG that provided the first evidence for a flat geometry to the cosmos; that happened a few years earlier, with the Saskatoon and CAT experiments.

The book makes a point of the premature death of Dave Wilkinson, when in fact he died at age 67, not (as is implied in the text) in his 50s. Wilkinson – who was my freshman physics professor – was a great scientist and a gifted teacher, and it is appropriate to memorialise him, but he had a long and productive career.

Besides these points of detail, there are some more significant omissions. The book relates the story of how the Ukrainian physicist Alex Vilenkin, blacklisted from physics and working as a zookeeper in Kharkiv, escaped the Soviet Union. Vilenkin moved to SUNY Buffalo, where I am currently a professor, because he had mistaken Mendel Sachs, a condensed matter theorist, for Ray Sachs, who originally predicted fluctuations in the CMB. It’s a funny story, and although the authors note that Vilenkin was blacklisted for refusing to be an informant for the KGB, they omit the central context that he was Jewish, one of many Jews banished from academic life by Soviet authorities who escaped the stifling anti-Semitism of the Soviet Union for scientific freedom in the West. This history resonates today in light of efforts by some scientists to boycott Israeli institutes and even blacklist Israeli colleagues. Unlike the minutiae of CMB physics, this matters, and Battle of the Big Bang should have been more careful to tell the whole story.

The post The battle of the Big Bang appeared first on CERN Courier.

]]>
Review Battle of the Big Bang provides an entertaining update on the collective obsessions and controlled schizophrenias in cosmology, writes Will Kinney. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_Rev_Steinhardt.jpg
Quantum theory returns to Helgoland https://cerncourier.com/a/quantum-theory-returns-to-helgoland/ Tue, 08 Jul 2025 20:01:35 +0000 https://cerncourier.com/?p=113617 The takeaway from Helgoland 2025 was that the foundations of quantum mechanics, though strongly built on Helgoland 100 years ago, remain open to interpretation.

The post Quantum theory returns to Helgoland appeared first on CERN Courier.

]]>
In June 1925, Werner Heisenberg retreated to the German island of Helgoland seeking relief from hay fever and the conceptual disarray of the old quantum theory. On this remote, rocky outpost in the North Sea, he laid the foundations of matrix mechanics. Later, his “island epiphany” would pass through the hands of Max Born, Wolfgang Pauli, Pascual Jordan and several others, and become the first mature formulation of quantum theory. From 9 to 14 June 2025, almost a century later, hundreds of researchers gathered on Helgoland to mark the anniversary – and to deal with pressing and unfinished business.

Alfred D Stone (Yale University) called upon participants to challenge the folklore surrounding quantum theory’s birth. Philosopher Elise Crull (City College of New York) drew overdue attention to Grete Hermann, who hinted at entanglement before it had a name and anticipated Bell in identifying a flaw in von Neumann’s no-go theorem, which had been taken as proof that hidden-variable theories are impossible. Science writer Philip Ball questioned Heisenberg’s epiphany itself: he didn’t invent matrix mechanics in a flash, claims Ball, nor immediately grasp its relevance, and it took months, and others, to see his contribution for what it was (see “Lend me your ears” image).

Building on a strong base

A clear takeaway from Helgoland 2025 was that the foundations of quantum mechanics, though strongly built on Helgoland 100 years ago, nevertheless remain open to interpretation, and any future progress will depend on excavating them directly (see “Four ways to interpret quantum mechanics“).

Does the quantum wavefunction represent an objective element of reality or merely an observer’s state of knowledge? On this question, Helgoland 2025 could scarcely have been more diverse. Christopher Fuchs (UMass Boston) passionately defended quantum Bayesianism, which recasts the Born probability rule as a consistency condition for rational agents updating their beliefs. Wojciech Zurek (Los Alamos National Laboratory) presented the Darwinist perspective, for which classical objectivity emerges from redundant quantum information encoded across the environment. Although Zurek himself maintains a more agnostic stance, his decoherence-based framework is now widely embraced by proponents of many-worlds quantum mechanics (see “The minimalism of many worlds“).

Markus Aspelmeyer (University of Vienna) made the case that a signature of gravity’s long-speculated quantum nature may soon be within experimental reach. Building on the “gravitational Schrödinger’s cat” thought experiment proposed by Feynman in the 1950s, he described how placing a massive object in a spatial superposition could entangle a nearby test mass through their gravitational interaction. Such a scenario would produce correlations that are inexplicable by classical general relativity alone, offering direct empirical evidence that gravity must be described quantum-mechanically. Realising this type of experiment requires ultra-low pressures and cryogenic temperatures to suppress decoherence, alongside extremely low-noise measurements of gravitational effects at short distances. Recent advances in optical and optomechanical techniques for levitating and controlling nanoparticles suggest a path forward – one that could bring evidence for quantum gravity not from black holes or the early universe, but from laboratories on Earth.

Information insights

Quantum information was never far from the conversation. Isaac Chuang (MIT) offered a reconstruction of how Heisenberg might have arrived at the principles of quantum information, had his inspiration come from Shannon’s Mathematical Theory of Communication. He recast Heisenberg’s original insights into three broad principles: observations act on systems; local and global perspectives are in tension; and the order of measurements matters. Starting from these ingredients, one could in principle recover the structure of the qubit and the foundations of quantum computation. Taking the analogy one step further, he suggested that similar tensions between memorisation and generalisation – or robustness and adaptability – may one day give rise to a quantum theory of learning.
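
As an illustration of the third principle – a minimal sketch in Python, not taken from Chuang’s talk – sequential projective measurements of the non-commuting observables Z and X on the state |0⟩ give different joint-outcome probabilities depending on which is measured first:

```python
# Minimal illustration of "the order of measurements matters":
# the joint probability of obtaining (+1, +1) from two projective
# measurements depends on whether Z or X is measured first on |0>.
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def plus_projector(obs):
    """Projector onto the +1 eigenspace of a 2x2 Hermitian observable."""
    vals, vecs = np.linalg.eigh(obs)       # eigenvalues in ascending order
    v = vecs[:, int(np.argmax(vals))]      # eigenvector for eigenvalue +1
    return np.outer(v, v.conj())

def prob_plus_plus(state, first, second):
    """Born-rule probability of outcome (+1, +1), measuring `first` then `second`."""
    amp = plus_projector(second) @ plus_projector(first) @ state
    return float(np.vdot(amp, amp).real)

ket0 = np.array([1, 0], dtype=complex)
print("P(+1,+1), Z then X:", prob_plus_plus(ket0, Z, X))  # 0.5
print("P(+1,+1), X then Z:", prob_plus_plus(ket0, X, Z))  # 0.25
```

Swapping the order changes the joint probability from 1/2 to 1/4 – the operational content of the statement that measurement order matters.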

Helgoland 2025 illustrated just how much quantum mechanics has diversified since its early days. No longer just a framework for explaining atomic spectra, the photoelectric effect and black-body radiation, it is at once a formalism describing high-energy particle scattering, a handbook for controlling the most exotic states of matter, the foundation for information technologies now driving national investment plans, and a source of philosophical conundrums that, after decades at the margins, has once again taken centre stage in theoretical physics.

The post Quantum theory returns to Helgoland appeared first on CERN Courier.

]]>
Meeting report The takeaway from Helgoland 2025 was that the foundations of quantum mechanics, though strongly built on Helgoland 100 years ago, remain open to interpretation. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_FN_bornpauli.jpg
Exceptional flare tests blazar emission models https://cerncourier.com/a/exceptional-flare-tests-blazar-emission-models/ Tue, 08 Jul 2025 19:51:49 +0000 https://cerncourier.com/?p=113571 A new analysis of BL Lacertae by NASA’s Imaging X-ray Polarimetry Explorer sheds light on the emission mechanisms of active galactic nuclei.

The post Exceptional flare tests blazar emission models appeared first on CERN Courier.

]]>
Active galactic nuclei (AGNs) are extremely energetic regions at the centres of galaxies, powered by accretion onto a supermassive black hole. Some AGNs launch plasma outflows moving near light speed. Blazars are a subclass of AGNs whose jets are pointed almost directly at Earth, making them appear exceptionally bright across the electromagnetic spectrum. A new analysis of an exceptional flare of BL Lacertae by NASA’s Imaging X-ray Polarimetry Explorer (IXPE) has now shed light on their emission mechanisms.

The spectral energy distribution of blazars generally has two broad peaks. The low-energy peak from radio to X-rays is well explained by synchrotron radiation from relativistic electrons spiraling in magnetic fields, but the origin of the higher-energy peak from X-rays to γ-rays is a longstanding point of contention, with two classes of models, dubbed hadronic and leptonic, vying to explain it. Polarisation measurements offer a key diagnostic tool, as the two models predict distinct polarisation signatures.
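
For readers unfamiliar with polarimetric observables, the polarisation degrees quoted later in this article are derived from the Stokes parameters I, Q and U. The following generic Python sketch – not IXPE’s analysis code, and with hypothetical Stokes values – shows the standard relations for the linear polarisation degree and angle:

```python
# Linear polarisation degree and angle from the Stokes parameters I, Q, U.
# The Stokes values below are hypothetical, chosen only to give a degree
# close to the record optical polarisation reported for BL Lacertae.
import math

def linear_polarisation(I, Q, U):
    """Return (degree, angle in degrees) of linear polarisation."""
    degree = math.sqrt(Q**2 + U**2) / I
    angle = 0.5 * math.degrees(math.atan2(U, Q))
    return degree, angle

deg, ang = linear_polarisation(I=1.0, Q=0.45, U=0.15)   # hypothetical inputs
print(f"polarisation degree = {deg:.1%}, angle = {ang:.1f} deg")  # ~47.4%, ~9.2 deg
```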

Model signatures

In hadronic models, high-energy emission is produced by protons, either through synchrotron radiation or via photo-hadronic interactions that generate secondary particles. Hadronic models predict that X-ray polarisation should be as high as that in the optical and millimetre bands, even in complex jet structures.

Leptonic models are powered by inverse Compton scattering, wherein relativistic electrons “upscatter” low-energy photons, boosting them to higher energies with low polarisation. Leptonic models can be further subdivided by the source of the inverse-Compton-scattered photons. If initially generated by synchrotron radiation in the AGN (synchrotron self-Compton, SSC), modest polarisation (~50%) is expected due to the inherent polarisation of synchrotron photons, with further reductions if the emission comes from inhomogeneous or multiple emitting regions. If initially generated by external sources (external Compton, EC), isotropic photon fields from the surrounding structures are expected to average out their polarisation.

IXPE launched on 9 December 2021, seeking to resolve such questions. It is designed to have 100-fold better sensitivity to the polarisation of X-rays in astrophysical sources than the last major X-ray polarimeter, which was launched half a century ago (CERN Courier July/August 2022 p10). In November 2023, it participated in a coordinated multiwavelength campaign spanning the radio, millimetre, optical and X-ray bands that targeted the blazar BL Lacertae, whose X-ray emission arises mostly from the high-energy component, with its low-energy synchrotron component lying mainly at infrared energies. The campaign captured an exceptional flare, providing a rare opportunity to test competing emission models.

Optical telescopes recorded a peak optical polarisation of 47.5 ± 0.4%, the highest ever measured in a blazar. The short-mm (1.3 mm) polarisation also rose to about 10%, with both bands showing similar trends in polarisation angle. IXPE measured no significant polarisation in the 2 to 8 keV X-ray band, placing a 3σ upper limit of 7.4%.

The striking contrast between the high polarisation in optical and mm bands, and a strict upper limit in X-rays, effectively rules out all single-zone and multi-region hadronic models. Had these processes dominated, the X-ray polarisation would have been comparable to the optical. Instead, the observations strongly support a leptonic origin, specifically the SSC model with a stratified or multi-zone jet structure that naturally explains the low X-ray polarisation.

A key feature of the flare was the rapid rise and fall of optical polarisation. Initially, it was low, of order 5%, and aligned with the jet direction, suggesting the dominance of poloidal or turbulent fields. A sharp increase to nearly 50%, while retaining alignment, indicates the sudden injection of a compact, toroidally dominated magnetic structure.

The authors of the analysis propose a “magnetic spring” model wherein a tightly wound toroidal field structure is injected into the jet, temporarily ordering the magnetic field and raising the optical polarisation. As the structure travels outward, it relaxes, likely through kink instabilities, causing the polarisation to decline over about two weeks. This resembles an elastic system, briefly stretched and then returning to equilibrium.

A magnetic spring would also explain the multiwavelength flaring. The injection boosted the total magnetic field strength, triggering an unprecedented mm-band flare powered by low-energy electrons with long cooling times. The modest rise in mm-wavelength polarisation (green points) suggests emission from a large, turbulent region. Meanwhile, optical flaring (black points) was suppressed due to the rapid synchrotron cooling of high-energy electrons, consistent with the observed softening of the optical spectrum. No significant γ-ray enhancement was observed, as these photons originate from the same rapidly cooling electron population.

Turning point

These findings mark a turning point in high-energy astrophysics. The data definitively favour leptonic emission mechanisms in BL Lacertae during this flare, ruling out efficient proton acceleration and thus any associated high-energy neutrino or cosmic-ray production. The ability of the jet to sustain nearly 50% polarisation across parsec scales implies a highly ordered, possibly helical magnetic field extending far from the supermassive black hole.

The results cement polarimetry as a definitive tool in identifying the origin of blazar emission. The dedicated Compton Spectrometer and Imager (COSI) γ-ray polarimeter, scheduled for launch by NASA in 2027, will complement IXPE at even higher energies. Coordinated campaigns will be crucial for probing jet composition and plasma processes in AGNs, helping us understand the most extreme environments in the universe.

The post Exceptional flare tests blazar emission models appeared first on CERN Courier.

]]>
News A new analysis of BL Lacertae by NASA’s Imaging X-ray Polarimetry Explorer sheds light on the emission mechanisms of active galactic nuclei. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_NA_Astro.jpg
Fermilab’s final word on muon g-2 https://cerncourier.com/a/fermilabs-final-word-on-muon-g-2/ Tue, 08 Jul 2025 19:40:43 +0000 https://cerncourier.com/?p=113549 In parallel, theorists have published an updated Standard Model prediction based purely on lattice QCD.

The post Fermilab’s final word on muon g-2 appeared first on CERN Courier.

]]>
Fermilab’s Muon g-2 collaboration has given its final word on the magnetic moment of the muon. The new measurement agrees closely with a significantly revised Standard Model (SM) prediction. Though the experimental measurement will likely now remain stable for several years, theorists expect to make rapid progress to reduce uncertainties and resolve tensions underlying the SM value. One of the most intriguing anomalies in particle physics is therefore severely undermined, but not yet definitively resolved.

The muon g-2 anomaly dates back to the late 1990s and early 2000s, when measurements at Brookhaven National Laboratory (BNL) uncovered a possible discrepancy by comparison to theoretical predictions of the so-called muon anomaly, aμ = (g-2)/2. aμ expresses the magnitude of quantum loop corrections to the leading-order prediction of the Dirac equation, which multiplies the classical gyromagnetic ratio of fundamental fermions by a “g-factor” of precisely two. Loop corrections of aμ ~ 0.1% quantify the extent to which virtual particles emitted by the muon further increase the strength of its interaction with magnetic fields. Were measurements to be shown to deviate from SM predictions, this would indicate the influence of virtual fields beyond the SM.

Move on up

In 2013, the BNL experiment’s magnetic storage ring was transported from Long Island, New York, to Fermilab in Batavia, Illinois. After years of upgrades and improvements, the new experiment began in 2017. It now reports a final precision of 127 parts per billion (ppb), bettering the experiment’s design precision of 140 ppb and improving on the sensitivity of the BNL result by a factor of four.

“First and foremost, an increase in the number of stored muons allowed us to reduce our statistical uncertainty to 98 ppb compared to 460 ppb for BNL,” explains co-spokesperson Peter Winter of Argonne National Laboratory, “but a lot of technical improvements to our calorimetry, tracking, detector calibration and magnetic-field mapping were also needed to improve on the systematic uncertainties from 280 ppb at BNL to 78 ppb at Fermilab.”

The final Fermilab measurement is (116592070.5 ± 11.4 (stat.) ± 9.1 (syst.) ± 2.1 (ext.)) × 10–11, fully consistent with the previous BNL measurement. This formidable precision throws down the gauntlet to the Muon g-2 Theory Initiative (TI), which was founded to achieve an international consensus on the theoretical prediction.

The calculation is difficult, featuring contributions from all sectors of the SM (CERN Courier March/April 2025 p21). The TI published its first whitepaper in 2020, reporting aμ = (116591810 ± 43) × 10–11, based exclusively on a data-driven analysis of cross-section measurements at electron–positron colliders (WP20). In May, the TI updated its prediction, publishing a value aμ = (116592033 ± 62) × 10–11, statistically incompatible with the previous prediction at the level of three standard deviations, and with an increased uncertainty of 530 ppb (WP25). The new prediction is based exclusively on numerical SM calculations. This was made possible by rapid progress in the use of lattice QCD to control the dominant source of uncertainty, which arises due to the contribution of so-called hadronic vacuum polarisation (HVP). In HVP, the photon representing the magnetic field interacts with the muon during a brief moment when a virtual photon erupts into a difficult-to-model cloud of quarks and gluons.
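
As a quick cross-check of the numbers quoted above – naive arithmetic that ignores correlations and the rounding of the published inputs – one can combine the Fermilab uncertainties in quadrature and compare the measurement with the two whitepaper predictions:

```python
# Quadrature combination of the quoted Fermilab uncertainties and naive
# pulls against the WP20 and WP25 predictions (correlations neglected).
import math

a_exp = 116592070.5e-11
s_exp = math.sqrt(11.4**2 + 9.1**2 + 2.1**2) * 1e-11      # stat + syst + ext

a_wp20, s_wp20 = 116591810e-11, 43e-11    # 2020 whitepaper, data-driven HVP
a_wp25, s_wp25 = 116592033e-11, 62e-11    # 2025 whitepaper, lattice-QCD HVP

print(f"total precision: {s_exp / a_exp * 1e9:.0f} ppb")   # ~126 ppb, consistent with the quoted 127 ppb
print(f"experiment vs WP25: {(a_exp - a_wp25) / math.hypot(s_exp, s_wp25):.1f} sigma")  # ~0.6 sigma
print(f"WP25 vs WP20: {(a_wp25 - a_wp20) / math.hypot(s_wp20, s_wp25):.1f} sigma")      # ~3.0 sigma
```

The output reproduces the close agreement between the measurement and the lattice-based WP25 value, and the roughly three-standard-deviation shift between the two whitepaper predictions described above.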

Significant shift

“The switch from using the data-driven method for HVP in WP20 to lattice QCD in WP25 results in a significant shift in the SM prediction,” confirms Aida El-Khadra of the University of Illinois, chair of the TI, who believes that it is not unreasonable to expect significant error reductions in the next couple of years. “There still are puzzles to resolve, particularly around the experimental measurements that are used in the data-driven method for HVP, which prevent us, at this point in time, from obtaining a new prediction for HVP in the data-driven method. This means that we also don’t yet know if the data-driven HVP evaluation will agree or disagree with lattice–QCD calculations. However, given the ongoing dedicated efforts to resolve the puzzles, we are confident we will soon know what the data-driven method has to say about HVP. Regardless of the outcome of the comparison with lattice QCD, this will yield profound insights.”

On the experimental side, attention now turns to the Muon g-2/EDM experiment at J-PARC in Tokai, Japan. While the Fermilab experiment used the “magic gamma” method first employed at CERN in the 1970s to cancel the effect of electric fields on spin precession in a magnetic field (CERN Courier September/October 2024 p53), the J-PARC experiment seeks to control systematic uncertainties by exercising particularly tight control of its muon beam. In the Japanese experiment, antimatter muons will pick up atomic electrons to form muonium, which is then ionised with a laser so that the muons can be reaccelerated for a traditional precession measurement with sensitivity to both the muon’s magnetic moment and its electric dipole moment (CERN Courier July/August 2024 p8).

“We are making plans to improve experimental precision beyond the Fermilab experiment, though their precision is quite tough to beat,” says spokesperson Tsutomu Mibe of KEK. “We also plan to search for the electric dipole moment of the muon with an unprecedented precision of roughly 10–21 e cm, improving the sensitivity of the last results from BNL by a factor of 70.”

With theoretical predictions from high-order loop processes expected to be of the order 10–38 e cm, any observation of an electric dipole moment would be a clear indication of new physics.

“Construction of the experimental facility is currently ongoing,” says Mibe. “We plan to start data taking in 2030.”

The post Fermilab’s final word on muon g-2 appeared first on CERN Courier.

]]>
News In parallel, theorists have published an updated Standard Model prediction based purely on lattice QCD. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_NA_fermilab.jpg
STAR hunts QCD critical point https://cerncourier.com/a/star-hunts-qcd-critical-point/ Tue, 08 Jul 2025 19:38:28 +0000 https://cerncourier.com/?p=113561 The STAR collaboration at BNL has narrowed the search for a long-sought-after “critical point” in the still largely conjectural phase diagram of QCD.

The post STAR hunts QCD critical point appeared first on CERN Courier.

]]>
Phases of QCD

Just as water takes the form of ice, liquid or vapour, QCD matter exhibits distinct phases. But while the phase diagram of water is well established, the QCD phase diagram remains largely conjectural. The STAR collaboration at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC) recently completed a new beam-energy scan (BES-II) of gold–gold collisions. The results narrow the search for a long-sought-after “critical point” in the QCD phase diagram.

“BES-II precision measurements rule out the existence of a critical point in the regions of the QCD phase diagram accessed at LHC and top RHIC energies, while still allowing the possibility at lower collision energies,” says Bedangadas Mohanty of the National Institute of Science Education and Research in India, who co-led the analysis. “The results refine earlier BES-I indications, now with much reduced uncertainties.”

At low temperatures and densities, quarks and gluons are confined within hadrons. Heating QCD matter leads to the formation of a deconfined quark–gluon plasma (QGP), while increasing the density at low temperatures is expected to give rise to more exotic states such as colour superconductors. Above a certain threshold in baryon density, the transition from hadron gas to QGP is expected to be first-order – a sharp, discontinuous change akin to water boiling. As density decreases, this boundary gives way to a smooth crossover where the two phases blend. A hypothetical critical point marks the shift between these regimes, much like the endpoint of the liquid–gas coexistence line in the phase diagram of water (see “Phases of QCD” figure).

Heavy-ion collisions offer a way to observe this phase transition directly. At the Large Hadron Collider, the QGP created in heavy-ion collisions transitions smoothly to a hadronic gas as it cools, but the lower energies explored by RHIC probe the region of phase space where the critical point may lie.

To search for possible signatures of a critical point, the STAR collaboration measured gold–gold collisions at centre-of-mass energies between 7.7 and 27 GeV per nucleon pair. The collaboration reports that their data deviate from frameworks that do not include a critical point, including the hadronic transport model, thermal models with canonical ensemble treatment, and hydrodynamic approaches with excluded-volume effects. Depending on the choice of observable and non-critical baseline model, the significance of the deviations ranges from two to five standard deviations, with the largest effects seen in head-on collisions when using peripheral collisions as a reference.

“None of the existing theoretical models fully reproduce the features observed in the data,” explains Mohanty. “To interpret these precision measurements, it is essential that dynamical model calculations that include critical-point physics be developed.” The STAR collaboration is now mapping lower energies and higher baryon densities using a fixed target (FXT) mode, wherein a 1 mm gold foil sits 2 cm below the beam axis.

“The FXT data are a valuable opportunity to explore QCD matter at high baryon density,” says Mohanty. “Data taking will conclude later this year when RHIC transitions to the Electron–Ion Collider. The Compressed Baryonic Matter experiment at FAIR in Germany will then pick up the study of the QCD critical point towards the end of the 2020s.”

The post STAR hunts QCD critical point appeared first on CERN Courier.

]]>
News The STAR collaboration at BNL has narrowed the search for a long-sought-after “critical point” in the still largely conjectural phase diagram of QCD. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_NA_phases_feature.jpg
Double plasma progress at DESY https://cerncourier.com/a/double-plasma-progress-at-desy/ Tue, 08 Jul 2025 19:33:57 +0000 https://cerncourier.com/?p=113556 New developments tackle two of the biggest challenges in plasma-wave acceleration: beam quality and bunch rate.

The post Double plasma progress at DESY appeared first on CERN Courier.

]]>
What if, instead of using tonnes of metal to accelerate electrons, they were to “surf” on a wave of charge displacements in a plasma? This question, posed in 1979 by Toshiki Tajima and John Dawson, planted the seed for plasma wakefield acceleration (PWA). Scientists at DESY now report some of the first signs that PWA is ready to compete with traditional accelerators at low energies. The results tackle two of the biggest challenges in PWA: beam quality and bunch rate.

“We have made great progress in the field of plasma acceleration,” says Andreas Maier, DESY’s lead scientist for plasma acceleration, “but this is an endeavour that has only just started, and we still have a bit of homework to do to get the system integrated with the injector complexes of a synchrotron, which is our final goal.”

Riding a wave

PWA has the potential to radically miniaturise particle accelerators. Plasma waves are generated when a laser pulse or particle beam ploughs through a millimetres-long hydrogen-filled capillary, displacing electrons and creating a wake of alternating positive and negative charge regions behind it. The process is akin to flotsam and jetsam being accelerated in the wake of a speedboat, and the plasma “wakefields” can be thousands of times stronger than the electric fields in conventional accelerators, allowing particles to gain hundreds of MeV in just a few millimetres. But beam quality and intensity are significant challenges in such narrow confines.
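
The quoted gradients can be estimated from first principles. The sketch below evaluates the cold, non-relativistic wave-breaking field E0 = me c ωp/e for an assumed plasma density – a representative value, not a figure quoted by DESY:

```python
# Back-of-the-envelope estimate of the accelerating gradient in a plasma wake.
import math

m_e, q_e, c, eps0 = 9.109e-31, 1.602e-19, 2.998e8, 8.854e-12   # SI constants

n_e = 1e24                                        # electron density [m^-3] (= 1e18 cm^-3), assumed
omega_p = math.sqrt(n_e * q_e**2 / (eps0 * m_e))  # plasma frequency [rad/s]
E0 = m_e * c * omega_p / q_e                      # cold wave-breaking field [V/m]

print(f"E0 ~ {E0 / 1e9:.0f} GV/m")                    # ~96 GV/m
print(f"gain over 5 mm ~ {E0 * 5e-3 / 1e6:.0f} MeV")  # a few hundred MeV
```

At this density the field is of order 100 GV/m, consistent with the “thousands of times stronger” comparison above, so a few millimetres of plasma can indeed impart hundreds of MeV.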

In a first study, a team from the LUX experiment at DESY and the University of Hamburg demonstrated, for the first time, a two-stage correction system to dramatically reduce the energy spread of accelerated electron beams. The first stage stretches the longitudinal extent of the beam from a few femtoseconds to several picoseconds using a series of four zigzagging bending magnets called a magnetic chicane. Next, a radio-frequency cavity reduces the energy variation to below 0.1%, bringing the beam quality in line with conventional accelerators.

“We basically trade beam current for energy stability,” explains Paul Winkler, lead author of a recent publication on active energy compression. “But for the intended application of a synchrotron injector, we would need to stretch the electron bunches anyway. As a result, we achieved performance levels so far only associated with conventional accelerators.”
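
The principle can be captured in a toy longitudinal phase-space model. All numbers below – bunch length, energy spread, chicane R56 and the matched RF dechirping strength – are illustrative assumptions rather than LUX parameters, and the perfectly linear dechirp ignores RF curvature and timing jitter:

```python
# Toy model of the two-stage energy compression: a chicane maps energy
# deviation into arrival time (stretching the bunch), then an RF cavity at
# zero crossing removes the now time-correlated energy deviation.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
c = 2.998e8

t = rng.normal(0.0, 3e-15, N)          # initial arrival time: 3 fs rms (assumed)
delta = rng.normal(0.0, 1e-2, N)       # initial relative energy spread: 1% rms (assumed)

R56 = 0.03                             # chicane momentum compaction [m] (assumed)
t_stretched = t + R56 * delta / c      # bunch is now ~1 ps long and strongly chirped

k = c / R56                            # RF dechirp strength matched to the chicane
delta_out = delta - k * t_stretched    # linearised zero-crossing RF kick

print(f"bunch length: {np.std(t)*1e15:.1f} fs -> {np.std(t_stretched)*1e12:.2f} ps")
print(f"energy spread: {np.std(delta)*100:.2f}% -> {np.std(delta_out)*100:.3f}%")
```

In this idealised model the bunch length grows by three orders of magnitude while the correlated energy spread is removed – the “beam current for energy stability” trade described above – with RF curvature and jitter limiting the residual spread to the sub-0.1% level in practice.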

But producing high-quality beams is only half the battle. To make laser-driven PWA a practical proposition, bunches must be accelerated not just once a second, like at LUX, but hundreds or thousands of times per second. This has now been demonstrated by KALDERA, DESY’s new high-power laser system (see “Beam quality and bunch rate” image).

“Already, on the first try, we were able to accelerate 100 electron bunches per second,” says principal investigator Manuel Kirchen, who emphasises the complementarity of the two advances. The team now plans to scale up the energy and deploy “active stabilisation” to improve beam quality. “The next major goal is to demonstrate that we can continuously run the plasma accelerators with high stability,” he says.

With the exception of CERN’s AWAKE experiment (CERN Courier May/June 2024 p25), almost all plasma-wakefield accelerators are designed with medical or industrial applications in mind. Medical applications are particularly promising as they require lower beam energies and place less demanding constraints on beam quality. Advances such as those reported by LUX and KALDERA raise confidence in this new technology and could eventually open the door to cheaper and more portable X-ray equipment, allowing medical imaging and cancer therapy to take place in university labs and hospitals.

The post Double plasma progress at DESY appeared first on CERN Courier.

]]>
News New developments tackle two of the biggest challenges in plasma-wave acceleration: beam quality and bunch rate. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_NA_desy.jpg
Plotting the discovery of Higgs pairs on Elba https://cerncourier.com/a/plotting-the-discovery-of-higgs-pairs-on-elba/ Tue, 08 Jul 2025 19:31:41 +0000 https://cerncourier.com/?p=113648 150 physicists convened on Elba from 11 to 17 May for the Higgs Pairs 2025 workshop.

The post Plotting the discovery of Higgs pairs on Elba appeared first on CERN Courier.

]]>
Precise measurements of the Higgs self-coupling and its effects on the Higgs potential will play a key role in testing the validity of the Standard Model (SM). Some 150 physicists discussed the required experimental and theoretical manoeuvres on the serene island of Elba from 11 to 17 May at the Higgs Pairs 2025 workshop.

The conference mixed updates on theoretical developments in Higgs-boson pair production, searches for new physics in the scalar sector, and the most recent results from Run 2 and Run 3 of the LHC. Among the highlights was the first Run 3 analysis released by ATLAS on the search for di-Higgs production in the bbγγ final state – a particularly sensitive channel for probing the Higgs self-coupling. This result builds on earlier Run 2 analyses and demonstrates significantly improved sensitivity, now comparable to the full Run 2 combination of all channels. These gains were driven by the use of new b-tagging algorithms, improved mass resolution through updated analysis techniques, and the availability of nearly twice the dataset.

Complementing this, CMS presented the first search for ttHH production – a rare process that would provide additional sensitivity to the Higgs self-coupling and Higgs–top interactions. Alongside this, ATLAS presented first experimental searches for triple Higgs boson production (HHH), one of the rarest processes predicted by the SM. Work on more traditional final states such as bbττ and bbbb is ongoing at both experiments, and continues to benefit from improved reconstruction techniques and larger datasets. 

Beyond current data, the workshop featured discussions of the latest combined projection study by ATLAS and CMS, prepared as part of the input to the upcoming European Strategy Update. It extrapolates results of the Run 2 analyses to expected conditions of the High-Luminosity LHC (HL-LHC), estimating future sensitivities to the Higgs self-coupling and di-Higgs cross-section in scenarios with vastly higher luminosity and upgraded detectors. Under these assumptions, the combined sensitivity of ATLAS and CMS to di-Higgs production is projected to reach a significance of 7.6σ, firmly establishing the process. 

These projections provide crucial input for analysis strategy planning and detector design for the next phase of operations at the HL-LHC. Beyond the HL-LHC, efforts are already underway to design experiments at future colliders that will enhance sensitivity to the production of Higgs pairs, and offer new insights into electroweak symmetry breaking.

The post Plotting the discovery of Higgs pairs on Elba appeared first on CERN Courier.

]]>
Meeting report 150 physicists convened on Elba from 11 to 17 May for the Higgs Pairs 2025 workshop. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_FN_Higgs.jpg
New frontiers in science in the era of AI https://cerncourier.com/a/new-frontiers-in-science-in-the-era-of-ai/ Tue, 08 Jul 2025 19:25:45 +0000 https://cerncourier.com/?p=113671 New Frontiers in Science in the Era of AI arrives with a clear mission: to contextualise AI within the long arc of scientific thought and current research frontiers.

The post New frontiers in science in the era of AI appeared first on CERN Courier.

]]>
New Frontiers in Science in the Era of AI

At a time when artificial intelligence is more buzzword than substance in many corners of public discourse, New Frontiers in Science in the Era of AI arrives with a clear mission: to contextualise AI within the long arc of scientific thought and current research frontiers. This book is not another breathless ode to ChatGPT or deep learning, nor a dry compilation of technical papers. Instead, it’s a broad and ambitious survey, spanning particle physics, evolutionary biology, neuroscience and AI ethics, that seeks to make sense of how emerging technologies are reshaping not only the sciences but knowledge and society more broadly.

The book’s chapters, written by established researchers from diverse fields, aim to avoid jargon while attracting non-specialists, without compromising depth. The book offers an insight into how physics remains foundational across scientific domains, and considers the social, ethical and philosophical implications of AI-driven science.

The first section, “New Physics World”, will be the most familiar terrain for physicists. Ugo Moschella’s essay, “What Are Things Made of? The History of Particles from Thales to Higgs”, opens with a sweeping yet grounded narrative of how metaphysical questions have persisted alongside empirical discoveries. He draws a bold parallel between the ancient idea of mass emerging from a cosmic vortex and the Higgs mechanism, a poetic analogy that holds surprising resonance. Thales, who lived roughly from 624 to 545 BCE, proposed that water is the fundamental substance out of which all others are formed. Following his revelation, Pythagoras and Empedocles added three more items to complete the list of the elements: earth, air and fire. Aristotle added a fifth element: the “aether”. The physical foundation of the standard cosmological model of the ancient world is then rooted in the Aristotelian conceptions of movement and gravity, argues Moschella. His essay lays the groundwork for future chapters that explore entanglement, computation and the transition from thought experiments to quantum technology and AI.

The second and third sections venture into evolutionary genetics, epigenetics (the study of heritable changes in gene expression) and neuroscience – areas more peripheral to physics, but timely nonetheless. Contributions by Eva Jablonka, evolutionary theorist and geneticist from Tel Aviv University, and Telmo Pievani, a biologist from the University of Padua, explore the biological implications of gene editing, environmental inheritance and self-directed evolution, as well as the ever-blurring boundaries between what is considered “natural” versus “artificial”. The authors propose that the human ability to edit genes is itself an evolutionary agent – a novel and unsettling idea, as this would be an evolution driven by a will and not by chance. Neuroscientist Jason D Runyan reflects compellingly on free will in the age of AI, blending empirical work with philosophical questions. These chapters enrich the central inquiry of what it means to be a “knowing agent”: someone who acts on nature according to its will, influenced by biological, cognitive and social factors. For physicists, the lesson may be less about adopting specific methods and more about recognising how their own field’s assumptions – about determinism, emergence or complexity – are echoed and challenged in the life sciences.

Perspectives on AI

The fourth section, “Artificial Intelligence Perspectives”, most directly addresses the book’s central theme. Quality, scientific depth and rigour are not equally distributed between these chapters, but the chapters are stimulating nonetheless. Topics range from the role of open-source AI in student-led projects at CERN’s IdeaSquare to real-time astrophysical discovery. Michael Coughlin and colleagues’ chapter on accelerated AI in astrophysics stands out for its technical clarity and relevance, a solid entry point for physicists curious about AI beyond popular discourse. Absent is an in-depth treatment of current AI applications in high-energy physics, such as anomaly detection in LHC triggers or generative models for simulation. Given the book’s CERN affiliations, this omission is surprising and leaves out some of the most active intersections of AI and high-energy physics (HEP) research.

The final sections address cosmological mysteries and the epistemological limits of human cognition. David H Wolpert’s epilogue, “What Can We Know About That Which We Cannot Even Imagine?”, serves as a reminder that even as AI expands our modelling capacity, the epistemic limits of human cognition – including conceptual blind spots and unprovable truths – may remain permanent. This tension is not a contradiction but a sobering reflection on the intrinsic boundaries of scientific – and more widely human – knowledge.

This eclectic volume is best read as a reflective companion to one’s own work. For advanced students, postdocs and researchers open to thinking beyond disciplinary boundaries, the book is an enriching, if at times uneven, read.

To a professional scientist, the book occasionally romanticises interdisciplinary exchange between specialised fields without fully engaging with the real methodological difficulties of translating complex concepts to the other sciences. Topics including the limitations of current large-language models, the reproducibility crisis in AI research, and the ethical risks of data-driven surveillance would have benefited from deeper treatment. Ethical questions in HEP may be less prominent in the public eye, but still exist. To mention a few, there are the environmental impact of large-scale facilities, the question of spending a substantial amount of public money on such mega-science projects, the potential dual-use concerns of the technologies developed, the governance of massive international collaborations and data transparency. These deserve more attention, and the book could have explored them more thoroughly.

A timely snapshot

Still, the book doesn’t pretend to be exhaustive. Its strength lies in curating diverse voices and offering a timely snapshot of science, as well as shedding light on ethical and philosophical questions associated with science that are less frequently discussed.

There is a vast knowledge gap in today’s society. Researchers often become so absorbed in their specific domains that they lose sight of their work’s broader philosophical and societal context and the need to explain it to the public. Meanwhile, public misunderstanding of science, and the resulting confusion between fact, theory and opinion, is growing. This gulf provides fertile ground for political manipulation and ideological extremism. New Frontiers in Science in the Era of AI has the immense merit of trying to bridge that gap. The editors and contributors deserve credit for producing a work of both scientific and societal relevance.

The post New frontiers in science in the era of AI appeared first on CERN Courier.

]]>
Review New Frontiers in Science in the Era of AI arrives with a clear mission: to contextualise AI within the long arc of scientific thought and current research frontiers. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_Rev_Frontiers.jpg
Quantum culture https://cerncourier.com/a/quantum-culture/ Tue, 08 Jul 2025 19:24:12 +0000 https://cerncourier.com/?p=113653 Kanta Dihal explores why quantum mechanics captures the imagination of writers – and how ‘quantum culture’ affects the public understanding of science.

The post Quantum culture appeared first on CERN Courier.

]]>
Kanta Dihal

How has quantum mechanics influenced culture in the last 100 years?

Quantum physics offers an opportunity to make the impossible seem plausible. For instance, if your superhero dies dramatically but the actor is still on the payroll, you have a few options available. You could pretend the hero miraculously survived the calamity of the previous instalment. You could also pretend the events of the previous instalment never happened. And then there is Star Wars: “Somehow, Palpatine returned.”

These days, however, quantum physics tends to come to the rescue. Because quantum physics offers the wonderful option to maintain that all previous events really happened, and yet your hero is still alive… in a parallel universe. Much is down to the remarkable cultural impact of the many-worlds interpretation of quantum physics, which has been steadily growing in fame (or notoriety) since Hugh Everett introduced it in 1957.

Is quantum physics unique in helping fiction authors make the impossible seem possible?

Not really! Before the “quantum” handwave, there was “nuclear”: think of Dr Atomic from Watchmen, or Godzilla, as expressions of the utopian and dystopian expectations of that newly discovered branch of science. Before nuclear, there was electricity, with Frankenstein’s monster as perhaps its most important product. We can go all the way back to the invention of hydraulics in the ancient world, which led to an explosion of tales of liquid-operated automata – early forms of artificial intelligence – such as the bronze soldier Talos in ancient Greece. We have always used our latest discoveries to dream of a future in which our ancient tales of wonder could come true.

Is the many-worlds interpretation the most common theory used in science fiction inspired by quantum mechanics?

Many-worlds has become Marvel’s favourite trope. It allows them to expand on an increasingly entangled web of storylines that borrow from a range of remakes and reboots, as well as introducing gender and racial diversity into old stories. Marvel may have mainstreamed this interpretation, but the viewers of the average blockbuster may not realise exactly how niche it is, and how many alternatives there are. With many interpretations vying for acceptance, every once in a while a brave social scientist ventures to survey quantum-physicists’ preferences. These studies tend to confirm the dominance of the Copenhagen interpretation, with its collapse of the wavefunction rather than the branching universes characteristic of the Everett interpretation. In a 2016 study, for instance, only 6% of quantum physicists claimed that Everett was their favourite interpretation. In 2018 I looked through a stack of popular quantum-physics books published between 1980 and 2017, and found that more than half of these books endorse the many-worlds interpretation. A non-physicist might be forgiven for thinking that quantum physicists are split between two equal-sized enemy camps of Copenhagenists and Everettians.

What makes the many-worlds interpretation so compelling?

Answering this brings us to a fundamental question that fiction has enjoyed exploring since humans first told each other stories: what if? “What if the Nazis won the Second World War?” is pretty much an entire genre by itself these days. Before that, there were alternate histories of the American Civil War and many other key historical events. This means that the many-worlds interpretation fits smoothly into an existing narrative genre. It suggests that these alternate histories may be real, that they are potentially accessible to us and simply happening in a different dimension. Even the specific idea of branching alternative universes existed in fiction before Hugh Everett applied it to quantum mechanics. One famous example is the 1941 short story The Garden of Forking Paths by the Argentinian writer Jorge Luis Borges, in which a writer tries to create a novel in which everything that could happen, happens. His story anticipated the many-worlds interpretation so closely that Bryce DeWitt used an extract from it as the epigraph to his 1973 edited collection The Many-Worlds Interpretation of Quantum Mechanics. But the most uncanny example is, perhaps, Andre Norton’s science-fiction novel The Crossroads of Time, from 1956 – published when Everett was writing his thesis. In her novel, a group of historians invents a “possibility worlds” theory of history. The protagonist, Blake Walker, discovers that this theory is true when he meets a group of men from a parallel universe who are on the hunt for a universe-travelling criminal. Travelling with them, Blake ends up in a world where Hitler won the Battle of Britain. Of course, in fiction, only worlds in which a significant change has taken place are of any real interest to the reader or viewer. (Blake also visits a world inhabited by metal dinosaurs.) The truly uncountable number of slightly different universes Everett’s theory implies are extremely difficult to get our heads around. Nonetheless, our storytelling mindsets have long primed us for a fascination with the many-worlds interpretation.

Have writers put other interpretations to good use?

For someone who really wants to put their physics degree to use in their spare time, I’d recommend the works of Greg Egan: although his novel Quarantine uses the controversial conscious collapse interpretation, he always ensures that the maths checks out. Egan’s attitude towards the scientific content of his novels is best summed up by a quote on his blog: “A few reviewers complained that they had trouble keeping straight [the science of his novel Incandescence]. This leaves me wondering if they’ve really never encountered a book that benefits from being read with a pad of paper and a pen beside it, or whether they’re just so hung up on the idea that only non-fiction should be accompanied by note-taking and diagram-scribbling that it never even occurred to them to do this.”

What other quantum concepts are widely used and abused?

We have Albert Einstein to thank for the extremely evocative description of quantum entanglement as “spooky action at a distance”. As with most scientific phenomena, a catchy nickname such as this one is extremely effective for getting a concept to stick in the popular imagination. While Einstein himself did not initially believe quantum entanglement could be a real phenomenon, as it would violate local causality, we now have both evidence and applications of entanglement in the real world, most notably in quantum cryptography. But in science fiction, the most common application of quantum entanglement is in faster-than-light communication. In her 1966 novel Rocannon’s World, Ursula K Le Guin describes a device called the “ansible”, which interstellar travellers use to instantaneously communicate with each other across vast distances. Her term was so influential that it now regularly appears in science fiction as a widely accepted name for a faster-than-light communications device, the same way we have adopted the word “robot” from the 1920 play R.U.R. by Karel Čapek.

How were cultural interpretations of entanglement influenced by the development of quantum theory?

It wasn’t until the 1970s that no-signalling theorems conclusively proved that entanglement correlations, while instantaneous, cannot be controlled or used to send messages. Explaining why is a lot more complex than communicating the notion that observing a particle here has an effect on a particle there. Once again, quantum physics seemingly provides just enough scientific justification to resolve an issue that has plagued science fiction ever since the speed of light was discovered: how can we travel through space, exploring galaxies, settling on distant planets, if we cannot communicate with each other? This same line of thought has sparked another entanglement-related invention in fiction: what if we can send not just messages but also people, or even entire spaceships, across faster-than-light distances using entanglement? Conveniently, quantum physicists had come up with another extremely evocative term that fit this idea perfectly: quantum teleportation. Real quantum teleportation only transfers information. But the idea of teleportation is so deeply embedded in our storytelling past that we can’t help extrapolating it. From stories of gods that could appear anywhere at will to tales of portals that lead to strange new worlds, we have always felt limited by the speeds of travel we have managed to achieve – and once again, the speed of light seems to be a hard limit that quantum teleportation might be able to get us around. In his 2003 novel Timeline, Michael Crichton sends a group of researchers back in time using quantum teleportation, and the videogame Half-Life 2 contains teleportation devices that similarly seem to work through quantum entanglement.

What quantum concepts have unexplored cultural potential?

Clearly, interpretations other than many worlds have a PR problem, so is anyone willing to write a chart topper based on the relational interpretation or QBism? More generally, I think that any question we do not yet have an answer to, or any theory that remains untestable, is a potential source for an excellent story. Richard Feynman famously said, “I think I can safely say that nobody understands quantum mechanics.” Ironically, it is precisely because of this that quantum physics has become such a widespread building block of science fiction: it is just hard enough to understand, just unresolved and unexplained enough to keep our hopes up that one day we might discover that interstellar communication or inter-universe travel might be possible. Few people would choose the realities of theorising over these ancient dreams. That said, the theorising may never have happened without the dreams. How many of your colleagues are intimately acquainted with the very science fiction they criticise for having unrealistic physics? We are creatures of habit and convenience held together by stories, physicists no less than everyone else. This is why we come up with catchy names for theories, and stories about dead-and-alive cats. Fiction may often get the science wrong, but that is often because the story it tries to tell existed long before the science.

The post Quantum culture appeared first on CERN Courier.

]]>
Opinion Kanta Dihal explores why quantum mechanics captures the imagination of writers – and how ‘quantum culture’ affects the public understanding of science. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_INT_dihal_feature.jpg
A scientist in sales https://cerncourier.com/a/a-scientist-in-sales/ Tue, 08 Jul 2025 19:22:15 +0000 https://cerncourier.com/?p=113683 Massimiliano Pindo discusses opportunities for high-energy physicists in marketing and sales.

The post A scientist in sales appeared first on CERN Courier.

]]>
Massimiliano Pindo

The boundary between industry and academia can feel like a chasm. Opportunity abounds for those willing to bridge the gap.

Massimiliano Pindo began his career working on silicon pixel detectors at the DELPHI experiment at the Large Electron–Positron Collider. While at CERN, Pindo developed analytical and technical skills that would later become crucial in his career. But despite his passion for research, doubts clouded his hopes for the future.

“I wanted to stay in academia,” he recalls. “But at that time, it was getting really difficult to get a permanent job.” Pindo moved from his childhood home in Milan to Geneva, before eventually moving back in with his parents while applying for his next research grant. “The golden days of academia where people got a fixed position immediately after a postdoc or PhD were over.”

The path forward seemed increasingly unstable, defined by short-term grants, constant travel and an inability to plan long-term. There was always a constant stream of new grant applications, but permanent contracts were few and far between. With competition increasing, job stability seemed further and further out of reach. “You could make a decent living,” Pindo says, “but the real problem was you could not plan your life.”

Translatable skills

Faced with the unpredictability of academic work, Pindo transitioned into industry – a leap that eventually led him to his current role as marketing and sales director at Renishaw, France, a global engineering and scientific technology company. Pindo was confident that his technical expertise would provide a strong foundation for a job beyond academia, and indeed he found that “hard” skills such as analytical thinking, problem-solving and a deep understanding of technology, which he had honed at CERN alongside soft skills such as teamwork, languages and communication, translated well to his work in industry.

“When you’re a physicist, especially a particle physicist, you’re used to breaking down complex problems, selecting what is really meaningful amongst all the noise, and addressing these issues directly,” Pindo says. His experience in academia gave him the confidence that industry challenges would pale in comparison. “I was telling myself that in the academic world, you are dealing with things that, at least on paper, are more complex and difficult than what you find in industry.”

Initially, these technical skills helped Pindo become a device engineer for a hardware company, before making the switch to sales. The gradual transition from academia to something more hands-on allowed him to really understand the company’s product on a technical level, which made him a more desirable candidate when transitioning into marketing.

“When you are in B2B [business-to-business] mode and selling technical products, it’s always good to have somebody who has technical experience in the industry,” explains Pindo. “You have to have a technical understanding of what you’re selling, to better understand the problems customers are trying to solve.”

However, this experience also allowed him to recognise gaps in his knowledge. As he began gaining more responsibility in his new, more business-focused role, Pindo decided to go back to university and get an MBA. During the programme, he was able to familiarise himself with the worlds of human resources, business strategy and management – skills that aren’t typically the focus in a physics lab.

Pindo’s journey through industry hasn’t been a one-way ticket out of academia. Today, he still maintains a foothold in the academic world, teaching strategy as an affiliated professor at the Sorbonne. “In the end you never leave the places you love,” he says. “I got out through the door – now I’m getting back in through the window!”

Transitioning between industry and academia was not entirely seamless. Misconceptions loomed on both sides, and it took Pindo a while to find a balance between the two.

“There is a stereotype that scientists are people who can’t adapt to industrial environments – that they are too abstract, too theoretical,” Pindo explains. “People think scientists are always in the clouds, disconnected from reality. But that’s not true. The science we make is not the science of cartoons. Scientists can be people who plan and execute practical solutions.”

The misunderstanding, he says, goes both ways. “When I talk to alumni still in academia, many think that industry is a nightmare – boring, routine, uninteresting. But that’s also false,” Pindo says. “There’s this wall of suspicion. Academics look at industry and think, ‘What do they want? What’s the real goal? Are they just trying to make more money?’ There is no trust.”

Tight labour markets

For Pindo, this divide is frustrating and entirely unnecessary. Now with years of experience navigating both worlds, he envisions a more fluid connection between academia and industry – one that leverages the strengths of both. “Industry is currently facing tight labour markets for highly skilled talent, and academia doesn’t have access to the money and practical opportunities that industry can provide,” says Pindo. “Both sides need to work together.”

To bridge this gap, Pindo advocates a more open dialogue and a revolving door between the two fields – one that allows both academics and industry professionals to move fluidly back and forth, carrying their expertise across boundaries. Both sides have much to gain from shared knowledge and collaboration. One way to achieve this, he suggests, is through active participation in alumni networks and university events, which can nurture lasting relationships and mutual understanding. If more professionals embraced this mindset, it could help alleviate the very instability that once pushed him out of academia, creating a landscape where the boundaries between science and industry blur to the benefit of both.

“Everything depends on active listening. You always have to learn from the person in front of you, so give them the chance to speak. We have a better world to build, and that comes only from open dialogue and communication.”

The post A scientist in sales appeared first on CERN Courier.

]]>
Careers Massimiliano Pindo discusses opportunities for high-energy physicists in marketing and sales. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_CAR_Pindo_feature.jpg
Hadronic decays confirm long-lived Ωc0 baryon https://cerncourier.com/a/hadronic-decays-confirm-long-lived-%cf%89c0-baryon/ Tue, 08 Jul 2025 19:19:39 +0000 https://cerncourier.com/?p=113601 A new LHCb analysis of hadronic decays confirms that the Ωc0 baryon lives longer than once thought.

The post Hadronic decays confirm long-lived Ω<sub>c</sub><sup>0</sup> baryon appeared first on CERN Courier.

]]>
LHCb figure 1

In 2018 and 2019, the LHCb collaboration published surprising measurements of the Ξc0 and Ωc0 baryon lifetimes, which were inconsistent with previous results and overturned the established hierarchy between the two. A new analysis of their hadronic decays now confirms this observation, promising insights into the dynamics of baryons.

The Λc+, Ξc+, Ξc0 and Ωc0 baryons – each composed of one charm and two lighter up, down or strange quarks – are the only ground-state singly charmed baryons that decay predominantly via the weak interaction. The main contribution to this process comes from the charm quark transitioning into a strange quark, with the other constituents acting as passive spectators. Consequently, at leading order, their lifetimes should be the same. Differences arise from higher-order effects, such as W-boson exchange between the charm and spectator quarks and quantum interference between identical particles, known as “Pauli interference”. Charm hadron lifetimes are more sensitive to these effects than beauty hadrons because of the smaller charm quark mass compared to the bottom quark, making them a promising testing ground to study these effects.

Measurements of the Ξc0 and Ωc0 lifetimes prior to the start of the LHCb experiment resulted in the PDG averages shown in figure 1. The first LHCb analysis, using charm baryons produced in semi-leptonic decays of beauty baryons, was in tension with the established values, giving a Ωc0 lifetime four times larger than the previous average. The inconsistencies were later confirmed by another LHCb measurement, using an independent data set with charm baryons produced directly (prompt) in the pp collision (CERN Courier July/August 2021 p17). These results changed the ordering of the four single-charm baryons when arranged according to their lifetimes, triggering a scientific discussion on how to treat higher-order effects in decay rate calculations.

Using the full Run 1 and 2 datasets, LHCb has now measured the Ξc0 and Ωc0 lifetimes with a third independent data sample, based on fully reconstructed Ξb– → Ξc0 (→ pK–K–π+) π– and Ωb– → Ωc0 (→ pK–K–π+) π– decays. The selection of these hadronic decay chains exploits the long lifetime of the beauty baryons, such that the selection efficiency is almost independent of the charm baryon decay time. To cancel out the small remaining acceptance effects, the measurement is normalised to the kinematically and topologically similar B– → D0 (→ K+K–π+π–) π– channel, minimising the uncertainties with only a small additional correction from simulation.

The signal decays are separated from the remaining background by fits to the Ξc0 π– and Ωc0 π– invariant mass spectra, providing 8260 ± 100 Ξc0 and 355 ± 26 Ωc0 candidates. The decay time distributions are obtained with two independent methods: by determining the yield in each of a specific set of decay time intervals, and by employing a statistical technique that uses the covariance matrix from the fit to the mass spectra. The two methods give consistent results, confirming LHCb’s earlier measurements. Combining the three measurements from LHCb, while accounting for their correlated uncertainties, gives τ(Ξc0) = 150.7 ± 1.6 fs and τ(Ωc0) = 274.8 ± 10.5 fs. These new results will serve as experimental guidance on how to treat higher-order effects in weak baryon decays, particularly regarding the approach-dependent sign and magnitude of Pauli interference terms.
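
For a sense of scale, the combined lifetimes translate into proper decay lengths of tens of micrometres; the Lorentz boost used in the arithmetic below is a hypothetical round number for illustration, not an LHCb-quoted value:

```python
# Proper decay length c*tau and an indicative lab-frame flight distance
# for the combined lifetimes quoted above (the boost value is illustrative).
c = 2.998e8   # m/s

for name, tau_fs in [("Xi_c0", 150.7), ("Omega_c0", 274.8)]:
    ctau = c * tau_fs * 1e-15            # proper decay length [m]
    beta_gamma = 20                      # assumed boost, for illustration only
    print(f"{name}: c*tau = {ctau*1e6:.0f} um, "
          f"flight length at beta*gamma = 20: {beta_gamma*ctau*1e3:.1f} mm")
```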

The post Hadronic decays confirm long-lived Ω<sub>c</sub><sup>0</sup> baryon appeared first on CERN Courier.

]]>
News A new LHCb analysis of hadronic decays confirms that the Ωc0 baryon lives longer than once thought. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_EF_LHCb_feature.jpg
Decoding the Higgs mechanism with vector bosons https://cerncourier.com/a/decoding-the-higgs-mechanism-with-vector-bosons/ Tue, 08 Jul 2025 19:18:25 +0000 https://cerncourier.com/?p=113595 The CMS collaboration jointly analysed all vector boson scattering channels.

The post Decoding the Higgs mechanism with vector bosons appeared first on CERN Courier.

]]>
CMS figure 1

The discovery of the Higgs boson at the LHC in 2012 provided strong experimental support for the Brout–Englert–Higgs mechanism of spontaneous electroweak symmetry breaking (EWSB) as predicted by the Standard Model. EWSB explains how the W and Z bosons, the mediators of the weak interaction, acquire mass: their longitudinal polarisation states emerge from the Goldstone modes of the Higgs field, linking the mass generation of the vector bosons directly to the dynamics of EWSB.

Yet, its ultimate origins remain unknown and the Standard Model may only offer an effective low-energy description of a more fundamental theory. Exploring this possibility requires precise tests of how EWSB operates, and vector boson scattering (VBS) provides a particularly sensitive probe. In VBS, two electroweak gauge bosons scatter off one another. The cross section remains finite at high energies only because there is an exact cancellation between the pure gauge-boson interactions and the Higgs-boson mediated contributions, an effect analogous to the role of the Z boson propagator in WW production at electron–positron colliders. Deviations from the expected behaviour could signal new dynamics, such as anomalous couplings, strong interactions in the Higgs sector or new particles at higher energy scales.

This result lays the groundwork for future searches for new physics hidden within the electroweak sector

VBS interactions are among the rarest observed so far at the LHC, with cross sections as low as one femtobarn. To disentangle them from the background, researchers rely on the distinctive experimental signature of two high-energy jets in the forward detector regions produced by the initial quarks that radiate the bosons, with minimal hadronic activity between them. Using the full data set from Run 2 of the LHC at a centre-of-mass energy of 13 TeV, the CMS collaboration carried out a comprehensive set of VBS measurements across several production modes: WW (with both same and opposite charges), WZ and ZZ, studied in five final states where both bosons decay leptonically and in two semi-leptonic configurations where one boson decays into leptons and the other into quarks. To enhance sensitivity further, the data from all the measurements have now been combined in a single joint fit, with a complete treatment of uncertainty correlations and a careful handling of events selected by more than one analysis. 

All modes, one analysis

To account for possible deviations from the expected predictions, each process is characterised by a signal strength parameter (μ), defined as the ratio of the measured production rate to the cross section predicted by the Standard Model. A value of μ near unity indicates consistency with the Standard Model, while significant deviations may suggest new physics. The results, summarised in figure 1, display good agreement with the Standard Model predictions: all measured signal strengths are consistent with unity within their respective uncertainties. A mild excess with respect to the leading-order theoretical predictions is observed across several channels, highlighting the need for more accurate modelling, in particular for the measurements that have reached a level of precision where systematic effects dominate. By presenting the first evidence for all charged VBS production modes from a single combined statistical analysis, this CMS result lays the groundwork for future searches for new physics hidden within the electroweak sector.
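
To make the signal-strength idea concrete, here is a minimal single-bin sketch of extracting μ by maximising a Poisson likelihood. The yields are invented for illustration; the actual CMS combination fits many channels simultaneously with correlated systematic uncertainties.

```python
# Single-bin toy model: the observed yield is Poisson(mu*s + b) and the signal
# strength mu is estimated by minimising the negative log-likelihood.
# All yields below are invented placeholders.
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize_scalar

s_expected = 40.0     # expected signal yield for mu = 1
b_expected = 120.0    # expected background yield
n_observed = 170      # observed events in the signal region

def nll(mu):
    """Negative log-likelihood of the observed count for a given mu."""
    return -poisson.logpmf(n_observed, mu * s_expected + b_expected)

fit = minimize_scalar(nll, bounds=(0.0, 5.0), method="bounded")
mu_hat = fit.x

# Approximate 68% interval from the points where the NLL rises by 0.5
scan = np.linspace(0.0, 5.0, 2001)
allowed = scan[nll(scan) < nll(mu_hat) + 0.5]
print(f"mu = {mu_hat:.2f}, 68% interval ~ [{allowed.min():.2f}, {allowed.max():.2f}]")
```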

The post Decoding the Higgs mechanism with vector bosons appeared first on CERN Courier.

]]>
News The CMS collaboration jointly analysed all vector boson scattering channels. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_EF_CMS_feature.jpg
Slovenia, Ireland and Chile tighten ties with CERN https://cerncourier.com/a/slovenia-ireland-and-chile-tighten-ties-with-cern/ Tue, 08 Jul 2025 19:16:38 +0000 https://cerncourier.com/?p=113567 Slovenia becomes CERN’s 25th Member State, and Ireland and Chile have signed agreements to become Associate Member States.

The post Slovenia, Ireland and Chile tighten ties with CERN appeared first on CERN Courier.

]]>
Slovenia became CERN’s 25th Member State on 21 June, formalising a relationship of over 30 years. Full membership confers voting rights in the CERN Council and opportunities for Slovenian enterprises and citizens.

“Slovenia’s full membership in CERN is an exceptional recognition of our science and researchers,” said Igor Papič, Slovenia’s Minister of Higher Education, Science and Innovation. “Furthermore, it reaffirms and strengthens Slovenia’s reputation as a nation building its future on knowledge and science. Indeed, apart from its beautiful natural landscapes, knowledge is the only true natural wealth of our country. For this reason, we have allocated record financial resources to science, research and innovation. Moreover, we have enshrined the obligation to increase these funds annually in the Scientific Research and Innovation Activities Act.”

“On behalf of the CERN Council, I warmly welcome Slovenia as the newest Member State of CERN,” said Costas Fountas, president of the CERN Council. “Slovenia has a longstanding relationship with CERN, with continuous involvement of the Slovenian science community over many decades in the ATLAS experiment in particular.”

On 8 and 16 May, respectively, Ireland and Chile signed agreements to become Associate Member States of CERN, pending the completion of national ratification processes. They join Türkiye, Pakistan, Cyprus, Ukraine, India, Lithuania, Croatia, Latvia and Brazil as Associate Members – a status introduced by the CERN Council in 2010. In the same period, the Organization has also concluded international cooperation agreements with Qatar, Sri Lanka, Nepal, Kazakhstan, the Philippines, Thailand, Paraguay, Bosnia and Herzegovina, Honduras, Bahrain and Uruguay.

The post Slovenia, Ireland and Chile tighten ties with CERN appeared first on CERN Courier.

]]>
News Slovenia becomes CERN’s 25th Member State, and Ireland and Chile have signed agreements to become Associate Member States. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_NA_slovenia.jpg
Advances in very-high-energy astrophysics https://cerncourier.com/a/advances-in-very-high-energy-astrophysics/ Tue, 08 Jul 2025 19:14:12 +0000 https://cerncourier.com/?p=113677 Advances in Very High Energy Astrophysics summarises the progress made by the third generation of imaging atmospheric Cherenkov telescopes.

The post Advances in very-high-energy astrophysics appeared first on CERN Courier.

]]>
Advances in Very High Energy Astrophysics: The Science Program of the Third Generation IACTs for Exploring Cosmic Gamma Rays

Imaging atmospheric Cherenkov telescopes (IACTs) are designed to detect very-high-energy gamma rays, enabling the study of a range of both galactic and extragalactic gamma-ray sources. By capturing Cherenkov light from gamma-ray-induced air showers, IACTs help trace the origins of cosmic rays and probe fundamental physics, including questions surrounding dark matter and Lorentz invariance. Since the first gamma-ray source detection by the Whipple telescope in 1989, the field has rapidly advanced through instruments like HESS, MAGIC and VERITAS. Building on these successes, the Cherenkov Telescope Array Observatory (CTAO) represents the next generation of IACTs, with greatly improved sensitivity and energy coverage. The northern CTAO site on La Palma is already collecting data, and major infrastructure development is now underway at the southern site in Chile, where telescope construction is set to begin soon.

Considering the looming start to CTAO telescope construction, Advances in Very High Energy Astrophysics, edited by Reshmi Mukherjee of Barnard College and Roberta Zanin, from the University of Barcelona, is very timely. World-leading experts tackle the almost impossible task of summarising the progress made by the third-generation IACTs: HESS, MAGIC and VERITAS.

The range of topics covered is vast, spanning the last 20 years of progress in IACT instrumentation, data-analysis techniques, all aspects of high-energy astrophysics, cosmic-ray astrophysics and gamma-ray cosmology. The authors are necessarily selective, so the depth of coverage in each area is limited, but I believe that the essential concepts are properly introduced and the most important highlights captured. The primary focus of the book lies in gamma-ray astronomy and high-energy physics, cosmic rays and ongoing research into dark matter.

It appears, however, that the individual chapters were all written independently of each other by different authors, leading to some duplications. Source classes and high-energy radiation mechanisms are introduced multiple times, sometimes with different terminology and notation in the different chapters, which could lead to confusion for novices in the field. But though internal coordination could have been improved, a positive aspect of this independence is that each chapter is self-contained and can be read on its own. I recommend the book to emerging researchers looking for a broad overview of this rapidly evolving field.

The post Advances in very-high-energy astrophysics appeared first on CERN Courier.

]]>
Review Advances in Very High Energy Astrophysics summarises the progress made by the third generation of imaging atmospheric Cherenkov telescopes. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_Rev_Advances_feature.jpg
Hadrons in Porto Alegre https://cerncourier.com/a/hadrons-in-porto-alegre/ Tue, 08 Jul 2025 19:11:51 +0000 https://cerncourier.com/?p=113636 The 16th International Workshop on Hadron Physics welcomed 135 physicists to the Federal University of Rio Grande do Sul in Porto Alegre, Brazil.

The post Hadrons in Porto Alegre appeared first on CERN Courier.

]]>
The 16th International Workshop on Hadron Physics (Hadrons 2025) welcomed 135 physicists to the Federal University of Rio Grande do Sul (UFRGS) in Porto Alegre, Brazil. Delayed by four months due to a tragic flood that devastated the city, the triennial conference took place from 10 to 14 March, maintaining, despite the adversity, its long tradition as a forum for collaboration among Brazilian and international researchers at different stages of their careers.

The workshop’s scientific programme included field theoretical approaches to QCD, the behaviour of hadronic and quark matter in astrophysical contexts, hadronic structure and decays, lattice QCD calculations, recent experimental developments in relativistic heavy-ion collisions, and the interplay of strong and electroweak forces within the Standard Model.

Fernanda Steffens (University of Bonn) explained how deep-inelastic-scattering experiments and theoretical developments are revealing the internal structure of the proton. Kenji Fukushima (University of Tokyo) addressed the theoretical framework and phase structure of strongly interacting matter, with particular emphasis on the QCD phase diagram and its relevance to heavy-ion collisions and neutron stars. Chun Shen (Wayne State University) presented a comprehensive overview of the state-of-the-art techniques used to extract the transport properties of quark–gluon plasma from heavy-ion collision data, emphasising the role of Bayesian inference and machine learning in constraining theoretical models. Li-Sheng Geng (Beihang University) explored exotic hadrons through the lens of hadronic molecules, highlighting symmetry multiplets such as pentaquarks, the formation of multi-hadron states and the role of femtoscopy in studying unstable particle interactions.

This edition of Hadrons was dedicated to the memory of two individuals who left a profound mark on the Brazilian hadronic-physics community: Yogiro Hama, a distinguished senior researcher and educator whose decades-long contributions were foundational to the development of the field in Brazil, and Kau Marquez, an early-career physicist whose passion for science remained steadfast despite her courageous battle with spinal muscular atrophy. Both were remembered with deep admiration and respect, not only for their scientific dedication but also for their personal strength and impact on the community.

Its mission is to cultivate a vibrant and inclusive scientific environment

Since its creation in 1988, the Hadrons workshop has played a central role in developing Brazil’s scientific capacity in particle and nuclear physics. Its structure facilitates close interaction between master’s and doctoral students, and senior researchers, thus enhancing both technical training and academic exchange. This model continues to strengthen the foundations of research and collaboration throughout the Brazilian scientific community.

This is the main event for the Brazilian particle- and nuclear-physics communities, reflecting a commitment to advancing research in this highly interactive field. By rotating the venue across multiple regions of Brazil, the series renews with each edition its mission to cultivate a vibrant and inclusive scientific environment. This edition was closed by a public lecture on QCD by Tereza Mendes (University of São Paulo), who engaged local students with the foundational questions of strong-interaction physics.

The next edition of the Hadrons series will take place in Bahia in 2028.

The post Hadrons in Porto Alegre appeared first on CERN Courier.

]]>
Meeting report The 16th International Workshop on Hadron Physics welcomed 135 physicists to the Federal University of Rio Grande do Sul in Porto Alegre, Brazil. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_FN_Hadrons.jpg
Muons under the microscope in Cincinnati https://cerncourier.com/a/muons-under-the-microscope-in-cincinnati/ Tue, 08 Jul 2025 19:11:11 +0000 https://cerncourier.com/?p=113641 The 23rd edition of Flavor Physics and CP Violation (FPCP) attracted 100 physicists to Cincinnati, USA, from 2 to 6 June 2025.

The post Muons under the microscope in Cincinnati appeared first on CERN Courier.

]]>
The 23rd edition of Flavor Physics and CP Violation (FPCP) attracted 100 physicists to Cincinnati, USA, from 2 to 6 June 2025. The conference reviews recent experimental and theoretical developments in CP violation, rare decays, Cabibbo–Kobayashi–Maskawa matrix elements, heavy-quark decays, flavour phenomena in charged leptons and neutrinos, and the interplay between flavour physics and high-pT physics at the LHC.

The highlight of the conference was new results on the muon magnetic anomaly. The Muon g-2 experiment at Fermilab released its final measurement of aμ = (g-2)/2 on 3 June, while the conference was in progress, reaching a precision of 127 ppb. This uncertainty is more than four times smaller than that reported by the previous experiment. One week earlier, on 27 May, the Muon g-2 Theory Initiative published its second calculation of the same quantity, following that published in summer 2020. A major difference between the two calculations is that the earlier one used experimental data and the dispersion integral to evaluate the hadronic contribution to aμ, whereas the update uses a purely theoretical approach based on lattice QCD. The strong tension between the earlier calculation and experiment is no longer present: the new calculation is compatible with the experimental result. Thus, no new-physics discovery can be claimed, though the reason for the difference between the two approaches must be understood (see “Fermilab’s final word on muon g-2“).
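
For readers who want the arithmetic behind statements of “tension”, the snippet below shows the usual recipe of dividing the difference between measurement and prediction by their combined uncertainty. The inputs are placeholders, not the published Fermilab or Theory Initiative numbers.

```python
# Quantifying a tension in standard deviations, assuming uncorrelated
# uncertainties. The values below are invented placeholders in units of 1e-11.
from math import hypot

def tension_sigma(measured, sigma_meas, predicted, sigma_pred):
    """Discrepancy divided by the combined uncertainty."""
    return abs(measured - predicted) / hypot(sigma_meas, sigma_pred)

print(f"{tension_sigma(116592071.0, 15.0, 116592033.0, 62.0):.1f} sigma")
```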

The MEG II collaboration presented an important update to their limit on the branching fraction for the lepton-flavour-violating decay μ → eγ. Their new upper bound of 1.5 × 10⁻¹³ is determined from data collected in 2021 and 2022. The experiment recorded additional data from 2023 to 2024 and expects to continue data taking for two more years. These data will be sensitive to a branching fraction four to five times smaller than the current limit.

LHCb, Belle II, BESIII and NA62 all discussed recent results in quark flavour physics. Highlights include the first measurement of CP violation in a baryon decay by LHCb and improved limits on CP violation in D-meson decay to two pions by Belle II. With more data, the latter measurements could potentially show that the observed CP violation in charm is from a non-Standard-Model source. 

The Belle II collaboration now plans to collect a sample of between 5 and 10 ab⁻¹ by the early 2030s before undergoing an upgrade to collect a 30 to 50 ab⁻¹ sample by the early 2040s. LHCb plans to run to the end of the High-Luminosity LHC and collect 300 fb⁻¹. LHCb recorded almost 10 fb⁻¹ of data last year – more than in all its previous running, and now with a fully software-based trigger with much higher efficiency than the previous hardware-based first-level trigger. Future results from Belle II and the LHCb upgrade are eagerly anticipated.

The 24th FPCP conference will be held from 18 to 22 May 2026 in Bad Honnef, Germany. 

The post Muons under the microscope in Cincinnati appeared first on CERN Courier.

]]>
Meeting report The 23rd edition of Flavor Physics and CP Violation (FPCP) attracted 100 physicists to Cincinnati, USA, from 2 to 6 June 2025. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_FN_FPCP.jpg
A new phase for the FCC https://cerncourier.com/a/a-new-phase-for-the-fcc/ Tue, 08 Jul 2025 19:09:25 +0000 https://cerncourier.com/?p=113623 FCC Week 2025 took place in Vienna from 19 to 23 May.

The post A new phase for the FCC appeared first on CERN Courier.

]]>
FCC Week 2025 gathered more than 600 participants from 34 countries in Vienna from 19 to 23 May. The meeting was the first following the submission of the FCC’s feasibility study to the European Strategy for Particle Physics (CERN Courier May/June 2025 p9). Comprising three volumes – covering physics and detectors, accelerators and infrastructure, and civil engineering and sustainability – the study represents the most comprehensive blueprint to date for a next-generation collider facility. The next phase will focus on preparing a robust implementation strategy, via technical design, cost assessment, environmental planning and global engagement.

CERN Director-General Fabiola Gianotti said the integral FCC programme would offer unparalleled opportunities to explore physics at the shortest distances, and noted growing support and enthusiasm for the programme within the community. That enthusiasm is reflected in the collaboration’s growth: it now includes 162 institutes from 38 countries, with 28 new Memoranda of Understanding signed in the past year. These include new partnerships in Latin America, Asia and Ukraine, as well as Statements of Intent from the US and Canada. The FCC vision has also gained visibility in high-level policy dialogues, including the Draghi report on European competitiveness. Scientific plenaries and parallel sessions highlighted updates on simulation tools, rare-process searches and strategies to probe beyond the Standard Model. Detector R&D has progressed significantly, with prototyping, software development and AI-driven simulations advancing rapidly.

In accelerator design, developments included updated lattice and optics concepts involving global “head-on” compensation (using opposing beam interactions) and local chromaticity corrections (to the dependence of beam optics on particle energy). Refinements were also presented to injection schemes, beam collimation and the mitigation of collective effects. A central tool in these efforts is the Xsuite simulation platform, whose capabilities now include spin tracking and modelling based on real collider environments such as SuperKEKB.

Technical innovations also came to the fore. The superconducting RF system for FCC-ee includes 400 MHz Nb/Cu cavities for low-energy operation and 800 MHz Nb cavities for higher-energy modes. The introduction of reverse-phase operation and new RF source concepts – such as the tristron, with energy efficiencies above 90% (CERN Courier May/June 2025 p30) – represent major design advances.

Design developments

Vacuum technologies based on ultrathin NEG coating and discrete photon stops, as well as industrialisation strategies for cost control, are under active development. For FCC-hh, high-field magnet R&D continues on both Nb₃Sn prototypes and high-temperature superconductors.

Sessions on technical infrastructure explored everything from grid design, cryogenics and RF power to heat recovery, robotics and safety systems. Sustainability concepts, including renewable energy integration and hydrogen storage, showcased the project’s interdisciplinary scope and long-term environmental planning.

FCC Week 2025 extended well beyond the conference venue, turning Vienna into a vibrant hub for public science outreach

The Early Career Researchers forum drew nearly 100 participants for discussions on sustainability, governance and societal impact. The session culminated in a commitment to inclusive collaboration, echoed by the quote from Austrian-born artist, architect and environmentalist Friedensreich Hundertwasser (1928–2000): “Those who do not honour the past lose the future. Those who destroy their roots cannot grow.”

This spirit of openness and public connection also defined the week’s city-wide engagement. FCC Week 2025 extended well beyond the conference venue, turning Vienna into a vibrant hub for public science outreach. In particular, the “Big Science, Big Impact” session – co-organised with the Austrian Federal Economic Chamber (WKO) – highlighted CERN’s broader role in economic development. Daniel Pawel Zawarczynski (WKO) shared examples of small and medium enterprise growth and technology transfer, noting that CERN participation can open new markets, from tunnelling to aerospace. Economist Gabriel Felbermayr referred to a recent WIFO analysis indicating a benefit-to-cost ratio for the FCC greater than 1.2 under conservative assumptions. The FCC is not only a tool for discovery, observed Johannes Gutleber (CERN), but also a platform enabling technology development, open software innovation and workforce training.

The FCC awards celebrate the creativity, rigour and passion that early-career researchers bring to the programme. This year, Tsz Hong Kwok (University of Zürich) and Audrey Piccini (CERN) won poster prizes, Sara Aumiller (TU München) and Elaf Musa (DESY) received innovation awards, and Ivan Karpov (CERN) and Nicolas Vallis (PSI) were honoured with paper prizes sponsored by Physical Review Accelerators and Beams. As CERN Council President Costas Fountas reminded participants, the FCC is not only about pushing the frontiers of knowledge, but also about enabling a new generation of ideas, collaborations and societal progress.

The post A new phase for the FCC appeared first on CERN Courier.

]]>
Meeting report FCC Week 2025 took place in Vienna from 19 to 23 May. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_FN_FCC.jpg
Mary K Gaillard 1939–2025 https://cerncourier.com/a/mary-k-gaillard-1939-2025/ Tue, 08 Jul 2025 19:05:22 +0000 https://cerncourier.com/?p=113693 Mary K Gaillard, a key figure in the development of the Standard Model of particle physics, passed away on 23 May 2025.

The post Mary K Gaillard 1939–2025 appeared first on CERN Courier.

]]>
Mary K Gaillard, a key figure in the development of the Standard Model of particle physics, passed away on 23 May 2025. She was born in 1939 to a family of academics who encouraged her inquisitiveness and independence. She graduated in 1960 from Hollins College, a small college in Virginia, where her physics professor recognised her talent, helping her get jobs in the Leprince-Ringuet laboratory at l’École Polytechnique during a junior year abroad and for two summers at the Brookhaven National Laboratory. In 1961 she obtained a master’s degree from Columbia University and in 1968 a doctorate in theoretical physics from the University of Paris at Orsay. Mary K was a research scientist with the French CNRS and a visiting scientist at CERN for most of the 1970s. From 1981 until she retired in 2009, she was a senior scientist at the Lawrence Berkeley National Laboratory and a professor of physics at the University of California at Berkeley, where she was the first woman in the department.

Mary K was a theoretical physicist of great power, gifted both with a deep physical intuition and a very high level of technical mastery. She used her gifts to great effect and made many important contributions to the development of the Standard Model of elementary particle physics that was established precisely during the course of her career. She pursued her love of physics with powerful determination, in the face of overt discrimination that went well beyond what may still exist today. She fought these battles and produced beautiful, important physics, all while raising three children as a devoted mother.

Undeniable impact

After obtaining her master’s degree at Columbia, Mary K accompanied her first husband, Jean-Marc Gaillard, to Paris, where she was rebuffed in many attempts to obtain a position in an experimental group. She next tried and failed, multiple times, to find an advisor in theoretical physics, which she actually preferred to experimental physics but had not pursued because it was regarded as an even more unlikely career for a woman. Eventually, and fortunately for the development of elementary particle physics, Bernard d’Espagnat agreed to supervise her doctoral research at the University of Paris. While she quickly succeeded in producing significant results in her research, respect and recognition were still slow to come. She suffered many slights from a culture that could not understand or countenance the possibility of a woman theoretical physicist and put many obstacles in her way. Respect and recognition did finally come in appropriate measure, however, by virtue of the undeniable impact of her work.

Her contributions to the field are numerous. During an intensely productive period in the mid-1970s, she completed a series of projects that established the framework for the decades to follow that would culminate in the Standard Model. Famously, during a one-year visit to Fermilab in 1973, using the known properties of the “strange” K mesons, she successfully predicted the mass scale of the fourth “charm” quark a few months prior to its discovery. Back at CERN a few years later, she also predicted, in the framework of grand unified theories, the mass of the fifth “bottom” quark – a successful though still speculative prediction. Other impactful work, extracting the experimental consequences of theoretical constructs, laid down the paths that were followed to experimentally validate the charm-quark discovery and to search for the Higgs boson required to complete the Standard Model. Another key contribution showed how “jets”, streams of particles created in high-energy accelerators, could be identified as manifestations of the “gluon” carriers of the strong force of the Standard Model.

In the 1980s in Berkeley, when the Superconducting Super Collider and the Large Hadron Collider were under discussion, she showed that they could successfully uncover the mechanism of electroweak symmetry breaking required to understand the Standard Model weak force, even if it was “dynamical” – an experimentally much more challenging possibility than breaking by a Higgs boson. For the remainder of her career, she focused principally on work to address issues that are still unresolved by the Standard Model. Much of this research involved “supersymmetry” and its extension to encompass the gravitational force, theoretical constructs that originated in the work of her second husband, the late Bruno Zumino, who also moved from CERN to Berkeley.

Mary K’s accomplishments were recognised by numerous honorary societies and awards, including the National Academy of Sciences, the American Academy of Arts and Sciences, and the J. J. Sakurai Prize for Theoretical Particle Physics of the American Physical Society. She served on numerous governmental and academic advisory panels, including six years on the National Science Board. She tells her own story in a memoir, A Singularly Unfeminine Profession, published in 2015. Mary K Gaillard will surely be remembered when the final history of elementary particle physics is written.

The post Mary K Gaillard 1939–2025 appeared first on CERN Courier.

]]>
News Mary K Gaillard, a key figure in the development of the Standard Model of particle physics, passed away on 23 May 2025. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_OBITS_Gaillard.jpg
Fritz Caspers 1950–2025 https://cerncourier.com/a/fritz-caspers-1950-2025/ Tue, 08 Jul 2025 19:03:51 +0000 https://cerncourier.com/?p=113696 Friedhelm “Fritz” Caspers, a master of beam cooling, passed away on 12 March 2025.

The post Fritz Caspers 1950–2025 appeared first on CERN Courier.

]]>
Friedhelm “Fritz” Caspers, a master of beam cooling, passed away on 12 March 2025.

Born in Bonn, Germany in 1950, Fritz studied electrical engineering at RWTH Aachen. He joined CERN in 1981, first as a fellow and then as a staff member. During the 1980s Fritz contributed to stochastic cooling in CERN’s antiproton programme. In the team of Georges Carron and Lars Thorndahl, he helped devise ultra-fast microwave stochastic cooling systems for the then new antiproton cooler ring. He also initiated the development of power field-effect transistors that are still operational today in CERN’s Antiproton Decelerator ring. Fritz conceived novel geometries for pickups and kickers, such as slits cut into ground plates, as now used for the GSI FAIR project, and meander-type electrodes. From 1988 to 1995, Fritz was responsible for all 26 stochastic-cooling systems at CERN. In 1990 he became a senior member of the Institute of Electrical and Electronics Engineers (IEEE), before being distinguished as an IEEE Life Fellow later in his career.

Pioneering diagnostics

In the mid-2000s, Fritz proposed enamel-based clearing electrodes and initiated pertinent collaborations with several German companies. At about the same time, he carried out ultrasound diagnostics on soldered junctions on LHC interconnects. Among the roughly 1000 junctions measured, he and his team found a single non-conforming junction. In 2008 Fritz suggested non-elliptical superconducting crab cavities for the HL-LHC. He also proposed and performed pioneering electron-cloud diagnostics and mitigation using microwaves. For the LHC, he predicted a “magnetron effect”, where coherently radiating cloud electrons might quench the LHC magnets at specific values of their magnetic field. His advice on laboratory impedance measurements and electromagnetic interference was highly sought after.

Throughout the past three decades, Fritz was active and held in high esteem not only at CERN but all around the world. For example, he helped develop the stochastic cooling systems for GSI in Darmstadt, Germany, where his main contact was Fritz Nolden. He contributed to the construction and commissioning of stochastic cooling for GSI’s Experimental Storage Ring, including the successful demonstration of the stochastic cooling of heavy ions in 1997. Fritz also helped develop the stochastic cooling of rare isotopes for the RI Beam Factory project at RIKEN, Japan.

He helped develop the power field-effect transistors still operational today in CERN’s AD ring

Fritz was a long-term collaborator of IMP Lanzhou at the Chinese Academy of Sciences (CAS). In 2015, stochastic cooling was commissioned at the Cooling Storage Ring with his support. Always kind and willing to help anyone who needed him, Fritz also provided valuable suggestions and hands-on experience with impedance measurements for IMP’s HIAF project, especially the titanium-alloy-loaded thin-wall vacuum chamber and magnetic-alloy-loaded RF cavities. In 2021, Fritz was elected as a Distinguished Scientist of the CAS President’s International Fellowship Initiative and awarded the Dieter Möhl Award by the International Committee for Future Accelerators for his contributions to beam cooling.

In 2013, the axion dark-matter research centre IBS-CAPP was established at KAIST, Korea. For this new institute, Fritz proved to be just the right lecturer. Every spring, he visited Korea for a week of intensive lectures on RF techniques, noise measurements and much more. His lessons, which were open to scientists from all over Korea, transformed Korean researchers from RF amateurs into professionals, and his contributions helped propel IBS–CAPP to the forefront of research.

Fritz was far more than just a brilliant scientist. He was a generous mentor, a trusted colleague and a dear friend who lit up a room when he entered, and his absence will be deeply felt by all of us who had the privilege of knowing him. Always on the hunt for novel ideas, Fritz was a polymath and a fully open-minded scientist. His library at home was a visit into the unknown, containing “dark matter”, as we often joked. We will remember Fritz as a gentleman who was full of inspiration for the young and the not-so-young alike. His death is a loss to the whole accelerator world.

The post Fritz Caspers 1950–2025 appeared first on CERN Courier.

]]>
News Friedhelm “Fritz” Caspers, a master of beam cooling, passed away on 12 March 2025. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_OBITS_Caspers.jpg
Sandy Donnachie 1936–2025 https://cerncourier.com/a/sandy-donnachie-1936-2025/ Tue, 08 Jul 2025 19:03:05 +0000 https://cerncourier.com/?p=113702 A particle theorist and scientific leader.

The post Sandy Donnachie 1936–2025 appeared first on CERN Courier.

]]>
Sandy Donnachie, a particle theorist and scientific leader, passed away on 7 April 2025.

Born in 1936 and raised in Kilmarnock, Scotland, Sandy received his BSc and PhD degrees from the University of Glasgow before taking up a lectureship at University College London in 1963. He was a CERN research associate from 1965 to 1967, and then senior lecturer at the University of Glasgow until 1969, when he took up a chair at the University of Manchester and played a leading role in developing the scientific programme at NINA, the electron synchrotron at the nearby Daresbury National Laboratory. Sandy then served as head of the Department of Physics and Astronomy at the University from 1989 to 1994, and as dean of the Faculty of Science and Engineering from 1994 to 1997. He had a formidable reputation – if a staff member or student asked to see him, he would invite them to come at 8 a.m., to test whether what they wanted to discuss was truly important.

Sandy played a leading role in the international scientific community, maintaining strong connections with CERN throughout his career, as scientific delegate to the CERN Council from 1989 to 1994, chair of the SPS committee from 1988 to 1992, and member of the CERN Scientific Policy Committee from 1988 to 1993. In the UK, he chaired the UK’s Nuclear Physics Board from 1989 to 1993, and served as a member of the Science and Engineering Research Council from 1989 to 1994. He also served as an associate editor for Physical Review Letters from 2010 to 2016. In recognition of his leadership and scientific contributions, he was awarded the UK’s Institute of Physics Glazebrook Medal in 1997.

The “Donnachie–Landshoff pomeron” is known to all those working in the field

Sandy is perhaps best known for his body of work with Peter Landshoff on elastic and diffractive scattering: the “Donnachie–Landshoff pomeron” is known to all those working in the field. The collaboration began half a century ago and when email became available, they were among its early and most enthusiastic users. Sandy only knew Fortran and Peter only knew C, but somehow they managed to collaborate and together wrote more than 50 publications, including a book Pomeron Physics and QCD with Günter Dosch and Otto Nachtmann published in 2004. The collaboration lasted until, so sadly, Sandy was struck with Parkinson’s disease and was no longer able to use email. Earlier in his career, Sandy had made significant contributions to the field of low-energy hadron scattering, in particular through a collaboration with Claud Lovelace, which revealed many hitherto unknown baryon states in pion–nucleon scattering, and through a series of papers on meson photoproduction, initially with Graham Shaw and then with Frits Berends and other co-workers.

Throughout his career, Sandy was notable for his close collaborations with experimental physics groups, including a long association with the Omega Photon Collaboration at CERN, with whom he co-authored 27 published papers. He and Shaw also produced three books, culminating in Electromagnetic Interactions and Hadronic Structure with Frank Close, which was published in 2007.

In his leisure time, Sandy was a great lover of classical music and a keen sailor, golfer and country walker.

The post Sandy Donnachie 1936–2025 appeared first on CERN Courier.

]]>
News A particle theorist and scientific leader. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_OBITS_Donnachie.jpg
Fritz A Ferger 1933–2025 https://cerncourier.com/a/fritz-a-ferger-1933-2025/ Tue, 08 Jul 2025 19:01:59 +0000 https://cerncourier.com/?p=113699 A multi-talented engineer who had a significant impact on the technical development and management of CERN.

The post Fritz A Ferger 1933–2025 appeared first on CERN Courier.

]]>
Fritz Ferger, a multi-talented engineer who had a significant impact on the technical development and management of CERN, passed away on 22 March 2025.

Born in Reutlingen, Germany, on 5 April 1933, Fritz obtained his electrical engineering degree in Stuttgart and a doctorate at the University of Grenoble. A contract with General Electric in his pocket, he visited CERN, curious about the 25 GeV Proton Synchrotron, the construction of which was receiving the finishing touches in the late 1950s. He met senior CERN staff and was offered a contract that he, impressed by the visit, accepted in early 1959.

Fritz’s first assignment was the development of a radio-frequency (RF) accelerating cavity for a planned fixed-field alternating-gradient (FFAG) accelerator. This was abandoned in early 1960 in favour of the study of a 2 × 25 GeV proton–proton collider, the Intersecting Storage Rings (ISR). As a first step, the CERN Electron Storage and Accumulation Ring (CESAR) was constructed to test high-vacuum technology and RF accumulation schemes; Fritz designed and constructed the RF system. With CESAR in operation, he moved on to the construction and tests of the high-power RF system of the ISR, a project that was approved in 1965.

After the smooth running-in of the ISR and, for a while having been responsible for the General Engineering Group, he became division leader of the ISR in 1974, a position he held until 1982. Under his leadership the ISR unfolded its full potential with proton beam currents up to 50 A and a luminosity 35 times the design value, leading CERN to acquire the confidence that colliders were the way to go. Due to his foresight, the development of new technologies was encouraged for the accelerator, including superconducting quadrupoles and pumping by cryo- and getter surfaces. Both were applied on a grand scale in LEP and are still essential for the LHC today.

Under his ISR leadership CERN acquired the confidence that colliders were the way to go

When the resources of the ISR Division were refocussed on LEP in 1983, Fritz became the leader of the Technical Inspection and Safety Commission. This absorbed the activities of the previous health and safety groups, but its main task was to scrutinise the LEP project from all technical and safety aspects. Fritz’s responsibility widened considerably when he became leader of the Technical Support Division in 1986. All of the CERN civil engineering, the tunnelling for the 27 km circumference LEP ring, its auxiliary tunnels, the concreting of the enormous caverns for the experiments and the construction of a dozen surface buildings were in full swing and brought to a successful conclusion in the following years. New buildings on the Meyrin site were added, including the attractive Building 40 for the large experimental groups, in which he took particular pride. At the same time, and under pressure to reduce expenditure, he had to manage several difficult outsourcing contracts.

When he retired in 1997, he could look back on almost 40 years dedicated to CERN, his scientific and technical competence paired with exceptional organisational and administrative talent. We shall always remember him as an exacting colleague with a wide range of interests, and as a friend, appreciated for his open and helpful attitude.

We grieve his loss and offer our sincere condolences to his widow Catherine and their daughters Sophie and Karina.

The post Fritz A Ferger 1933–2025 appeared first on CERN Courier.

]]>
News A multi-talented engineer who had a significant impact on the technical development and management of CERN. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_OBITS_Ferger.jpg
The minimalism of many worlds https://cerncourier.com/a/the-minimalism-of-many-worlds/ Wed, 02 Jul 2025 11:29:05 +0000 https://cerncourier.com/?p=113491 David Wallace argues for the ‘decoherent view’ of quantum mechanics, where at the fundamental level there is neither probability nor wavefunction collapse.

The post The minimalism of many worlds appeared first on CERN Courier.

]]>
Physicists have long been suspicious of the “quantum measurement problem”: the supposed puzzle of how to make sense of quantum mechanics. Everyone agrees (don’t they?) on the formalism of quantum mechanics (QM); any additional discussion of the interpretation of that formalism can seem like empty words. And Hugh Everett III’s infamous “many-worlds interpretation” looks more dubious than most: not just unneeded words but unneeded worlds. Don’t waste your time on words or worlds; shut up and calculate.

But the measurement problem has driven more than philosophy. Questions of how to understand QM have always been entangled, so to speak, with questions of how to apply and use it, and even how to formulate it; the continued controversies about the measurement problem are also continuing controversies in how to apply, teach and mathematically describe QM. The Everett interpretation emerges as the natural reading of one strategy for doing QM, which I call the “decoherent view” and which has largely supplanted the rival “lab view”, and so – I will argue – the Everett interpretation can and should be understood not as a useless adjunct to modern QM but as part of the development in our understanding of QM over the past century.

The view from the lab

The lab view has its origins in the work of Bohr and Heisenberg, and it takes the word “observable” that appears in every QM textbook seriously. In the lab view, QM is not a theory like Newton’s or Einstein’s that aims at an objective description of an external world subject to its own dynamics; rather, it is essentially, irreducibly, a theory of observation and measurement. Quantum states, in the lab view, do not represent objective features of a system in the way that (say) points in classical phase space do: they represent the experimentalist’s partial knowledge of that system. The process of measurement is not something to describe within QM: ultimately it is external to QM. And the so-called “collapse” of quantum states upon measurement represents not a mysterious stochastic process but simply the updating of our knowledge upon gaining more information.

Valued measurements

The lab view has led to important physics. In particular, the “positive operator valued measure” idea, central to many aspects of quantum information, emerges most naturally from the lab view. So do the many extensions, total and partial, to QM of concepts initially from the classical theory of probability and information. Indeed, in quantum information more generally it is arguably the dominant approach. Yet outside that context, it faces severe difficulties. Most notably: if quantum mechanics describes not physical systems in themselves but some calculus of measurement results, if a quantum system can be described only relative to an experimental context, what theory describes those measurement results and experimental contexts themselves?

Dynamical probes

One popular answer – at least in quantum information – is that measurement is primitive: no dynamical theory is required to account for what measurement is, and the idea that we should describe measurement in dynamical terms is just another Newtonian prejudice. (The “QBist” approach to QM fairly unapologetically takes this line.)

One can criticise this answer on philosophical grounds, but more pressingly: that just isn’t how measurement is actually done in the lab. Experimental kit isn’t found scattered across the desert (each device perhaps stamped by the gods with the self-adjoint operator it measures); it is built using physical principles (see “Dynamical probes” figure). The fact that the LHC measures the momentum and particle spectra of various decay processes, for instance, is something established through vast amounts of scientific analysis, not something simply posited. We need an account of experimental practice that allows us to explain how measurement devices work and how to build them.

Perhaps this was viable in the 1930s, but today measurement devices rely on quantum principles

Bohr had such an account: quantum measurements are to be described through classical mechanics. The classical is ineliminable from QM precisely because it is to classical mechanics we turn when we want to describe the experimental context of a quantum system. To Bohr, the quantum–classical transition is a conceptual and philosophical matter as much as a technical one, and classical ideas are unavoidably required to make sense of any quantum description.

Perhaps this was viable in the 1930s. But today it is not only the measured systems but the measurement devices themselves that essentially rely on quantum principles, beyond anything that classical mechanics can describe. And so, whatever the philosophical strengths and weaknesses of this approach – or of the lab view in general – we need something more to make sense of modern QM, something that lets us apply QM itself to the measurement process.

Practice makes perfect

We can look to physics practice to see how. As von Neumann glimpsed, and Everett first showed clearly, nothing prevents us from modelling a measurement device itself inside unitary quantum mechanics. When we do so, we find that the measured system becomes entangled with the device, so that (for instance) if a measured atom is in a weighted superposition of spins with respect to some axis, after measurement then the device is in a similarly-weighted superposition of readout values.
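
A minimal sketch of this unitary account of measurement, assuming a CNOT-like coupling between the measured system and a two-level “pointer”, is given below; the amplitudes are arbitrary examples.

```python
# Von Neumann-style measurement modelled unitarily: a CNOT-like interaction
# copies the system's basis state into the pointer, so a weighted superposition
# of the system becomes a similarly weighted entangled superposition of
# readout values; no collapse is invoked anywhere.
import numpy as np

alpha, beta = 0.6, 0.8                       # example amplitudes, |a|^2 + |b|^2 = 1
system = np.array([alpha, beta])
pointer = np.array([1.0, 0.0])               # device in its "ready" state

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])              # flips the pointer if the system is |1>

joint = np.kron(system, pointer)             # initial product state
after = CNOT @ joint                         # unitary "measurement" interaction

# Amplitudes of the |system, pointer> basis states |00>, |01>, |10>, |11>:
print(after)                                 # -> [0.6 0.  0.  0.8]
```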

Origins

In principle, this courts infinite regress: how is that new superposition to be interpreted, save by a still-larger measurement device? In practice, we simply treat the mod-squared amplitudes of the various readout values as probabilities, and compare them with observed frequencies. This sounds a bit like the lab view, but there is a subtle difference: these probabilities are understood not with respect to some hypothetical measurement, but as the actual probabilities of the system being in a given state.

Of course, if we could always understand mod-squared amplitudes that way, there would be no measurement problem! But interference precludes this. Set up, say, a Mach–Zehnder interferometer, with a particle beam split in two and then re-interfered, and two detectors after the re-interference (see “Superpositions are not probabilities” figure). We know that if either of the two paths is blocked, so that any particle detected must have gone along the other path, then each of the two outcomes is equally likely: for each particle sent through, detector A fires with 50% probability and detector B with 50% probability. So whichever path the particle went down, we get A with 50% probability and B with 50% probability. And yet we know that if the interferometer is properly tuned and both paths are open, we can get A with 100% probability or 0% probability or anything in between. Whatever microscopic superpositions are, they are not straightforwardly probabilities of classical goings-on.
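
The behaviour described here can be reproduced with a few lines of linear algebra. The sketch below models an idealised Mach–Zehnder interferometer with two symmetric 50/50 beam splitters and a tunable relative phase between the arms; blocking an arm is modelled crudely by zeroing its amplitude.

```python
# Idealised Mach-Zehnder interferometer: with both paths open the detection
# probabilities depend on the relative phase (anything from 0% to 100%),
# whereas blocking either path gives 50/50 for the particles that are detected.
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)       # symmetric 50/50 beam splitter

def detector_probs(phi, blocked_arm=None):
    state = BS @ np.array([1.0, 0.0])                # after the first beam splitter
    if blocked_arm is not None:
        state[blocked_arm] = 0.0                     # absorb the amplitude in one arm
    state = np.array([np.exp(1j * phi), 1.0]) * state  # relative phase between arms
    out = BS @ state                                 # recombine at the second splitter
    probs = np.abs(out) ** 2
    return probs / probs.sum()                       # condition on a detection

for phi in (0.0, np.pi / 2, np.pi):
    print(f"phi = {phi:.2f}  both open: {detector_probs(phi).round(2)}"
          f"  one path blocked: {detector_probs(phi, blocked_arm=0).round(2)}")
```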

Unfeasible interference

But macroscopic superpositions are another matter. There, interference is unfeasible (good luck reinterfering the two states of Schrödinger’s cat); nothing formally prevents us from treating mod-squared amplitudes like probabilities.

And decoherence theory has given us a clear understanding of just why interference is invisible in large systems, and more generally when we can and cannot get away with treating mod-squared amplitudes as probabilities. As the work of Zeh, Zurek, Gell-Mann, Hartle and many others (drawing inspiration from Everett and from work on the quantum/classical transition as far back as Mott) has shown, decoherence – that is, the suppression of interference – is simply an aspect of non-equilibrium statistical mechanics. The large-scale, collective degrees of freedom of a quantum system, be it the needle on a measurement device or the centre-of-mass of a dust mote, are constantly interacting with a much larger number of small-scale degrees of freedom: the short-wavelength phonons inside the object itself; the ambient light; the microwave background radiation. We can still find autonomous dynamics for the collective degrees of freedom, but because of the constant transfer of information to the small scale, the coherence of any macroscopic superposition rapidly bleeds into microscopic degrees of freedom, where it is dynamically inert and in practice unmeasurable.
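
A toy model makes the scaling explicit. In the sketch below each environment mode is assumed to record the branch imperfectly, with the same overlap per mode, so the off-diagonal (interference) terms of the qubit’s reduced density matrix are suppressed exponentially in the number of modes.

```python
# Toy decoherence model: a qubit in an equal superposition entangles with
# n_env environment modes, each pair of environment states overlapping by
# `overlap`. Tracing out the environment leaves off-diagonal terms that are
# suppressed by overlap**n_env, i.e. exponentially fast.
import numpy as np

def reduced_density_matrix(n_env, overlap=0.9):
    coherence = 0.5 * overlap ** n_env        # <E1|E0>**n_env times a0*conj(a1)
    return np.array([[0.5, coherence],
                     [coherence, 0.5]])

for n in (1, 10, 100):
    rho = reduced_density_matrix(n)
    print(f"N = {n:3d}   off-diagonal element = {rho[0, 1]:.2e}")
```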

Emergence and scale

Decoherence can be understood in the familiar language of emergence and scale separation. Quantum states are not fundamentally probabilistic, but they are emergently probabilistic. That emergence occurs because for macroscopic systems, the timescale by which energy is transferred from macroscopic to residual degrees of freedom is very long compared to the timescale of the macroscopic system’s own dynamics, which in turn is very long compared to the timescale by which information is transferred. (To take an extreme example, information about the location of the planet Jupiter is recorded very rapidly in the particles of the solar wind, or even the photons of the cosmic background radiation, but Jupiter loses only an infinitesimal fraction of its energy to either.) So the system decoheres very rapidly, but having done so it can still be treated as autonomous.

On this decoherent view of QM, there is ultimately only the unitary dynamics of closed systems; everything else is a limiting or special case. Probability and classicality emerge through dynamical processes that can be understood through known techniques of physics: understanding that emergence may be technically challenging but poses no problem of principle. And this means that the decoherent view can address the lab view’s deficiencies: it can analyse the measurement process quantum mechanically; it can apply quantum mechanics even in cosmological contexts where the “measurement” paradigm breaks down; it can even recover the lab view within itself as a limited special case. And so it is the decoherent view, not the lab view, that – I claim – underlies the way quantum theory is for the most part used in the 21st century, including in its applications in particle physics and cosmology (see “Two views of quantum mechanics” table).

Two views of quantum mechanics

Dynamics. Lab view: unitary (i.e. governed by the Schrödinger equation) only between measurements. Decoherent view: always unitary.

Quantum/classical transition. Lab view: a conceptual jump between fundamentally different systems. Decoherent view: purely dynamical, with classical physics a limiting case of quantum physics.

Measurements. Lab view: cannot be treated internal to the formalism. Decoherent view: just one more dynamical interaction.

Role of the observer. Lab view: conceptually central. Decoherent view: just one more physical system.

But if the decoherent view is correct, then at the fundamental level there is neither probability nor wavefunction collapse; nor is there a fundamental difference between a microscopic superposition like those in interference experiments and a macroscopic superposition like Schrödinger’s cat. The differences are differences of degree and scale: at the microscopic level, interference is manifest; as we move to larger and more complex systems it hides away more and more effectively; in practice it is invisible for macroscopic systems. But even if we cannot detect the coherence of the superposition of a live and dead cat, it does not thereby vanish. And so according to the decoherent view, the cat is simultaneously alive and dead in the same way that the superposed atom is simultaneously in two places. We don’t need a change in the dynamics of the theory, or even a reinterpretation of the theory, to explain why we don’t see the cat as alive and dead at once: decoherence has already explained it. There is a “live cat” branch of the quantum state, entangled with its surroundings to an ever-increasing degree; there is likewise a “dead cat” branch; the interference between them is rendered negligible by all that entanglement.

Many worlds

At last we come to the “many worlds” interpretation: for when we observe the cat ourselves, we too enter a superposition of seeing a live and a dead cat. But these “worlds” are not added to QM as exotic new ontology: they are discovered, as emergent features of collective degrees of freedom, simply by working out how to use QM in contexts beyond the lab view and then thinking clearly about its content. The Everett interpretation – the many-worlds theory – is just the decoherent view taken fully seriously. Interference explains why superpositions cannot be understood simply as parameterising our ignorance; unitarity explains how we end up in superpositions ourselves; decoherence explains why we have no awareness of it.

Superpositions are not probabilities

(Forty-five years ago, David Deutsch suggested testing the Everett interpretation by simulating an observer inside a quantum computer, so that we could recohere them after they made a measurement. Then, it was science fiction; in this era of rapid progress on AI and quantum computation, perhaps less so!)

Could we retain the decoherent view and yet avoid any commitment to “worlds”? Yes, but only in the same sense that we could retain general relativity and yet refuse to commit to what lies behind the cosmological event horizon: the theory gives a perfectly good account of the other Everett worlds, and the matter beyond the horizon, but perhaps epistemic caution might lead us not to overcommit. But even so, the content of QM includes the other worlds, just as the content of general relativity includes beyond-horizon physics, and we will only confuse ourselves if we avoid even talking about that content. (Thus Hawking, who famously observed that when he heard about Schrödinger’s cat he reached for his gun, was nonetheless happy to talk about Everettian branches when doing quantum cosmology.)

Alternative views

Could there be a different way to make sense of the decoherent view? Never say never; but the many-worlds perspective results almost automatically from simply taking that view as a literal description of quantum systems and how they evolve, so any alternative would have to be philosophically subtle, taking a different and less literal reading of QM. (Perhaps relationalism, discussed in this issue by Carlo Rovelli, see “Four ways to interpret quantum mechanics“, offers a way to do it, though in many ways it seems more a version of the lab view. The physical collapse and hidden variables interpretations modify the formalism, and so fall outside either category.)

The Everett interpretation is just the decoherent view taken fully seriously

Does the apparent absurdity, or the ontological extravagance, of the Everett interpretation force us, as good scientists, to abandon many-worlds, or if necessary the decoherent view itself? Only if we accept some scientific principle that throws out theories that are too strange or that postulate too large a universe. But physics accepts no such principle, as modern cosmology makes clear.

Are there philosophical problems for the Everett interpretation? Certainly: how are we to think of the emergent ontology of worlds and branches; how are we to understand probability when all outcomes occur? But problems of this kind arise across all physical theories. Probability is philosophically contested even apart from Everett, for instance: is it frequency, rational credence, symmetry or something else? In any case, these problems pose no barrier to the use of Everettian ideas in physics.

The case for the Everett interpretation is that it is the conservative, literal reading of the version of quantum mechanics we actually use in modern physics, and there is no scientific pressure for us to abandon that reading. We could, of course, look for alternatives. Who knows what we might find? Or we could shut up and calculate – within the Everett interpretation.

The post The minimalism of many worlds appeared first on CERN Courier.

]]>
Feature David Wallace argues for the ‘decoherent view’ of quantum mechanics, where at the fundamental level there is neither probability nor wavefunction collapse. https://cerncourier.com/wp-content/uploads/2025/07/CCJulAug25_MANY_probes.jpg
Discovering the neutrino sky https://cerncourier.com/a/discovering-the-neutrino-sky/ Mon, 19 May 2025 08:01:22 +0000 https://cerncourier.com/?p=113109 Lu Lu looks forward to the next two decades of neutrino astrophysics, exploring the remarkable detector concepts needed to probe ultra-high energies from 1 EeV to 1 ZeV.

The post Discovering the neutrino sky appeared first on CERN Courier.

]]>
Lake Baikal, the Mediterranean Sea and the deep, clean ice at the South Pole: trackers. The atmosphere: a calorimeter. Mountains and even the Moon: targets. These will be the tools of the neutrino astrophysicist in the next two decades. Potentially observable energies dwarf those of the particle physicist doing repeatable experiments, rising up to 1 ZeV (10^21 eV) for some detector concepts.

The natural accelerators of the neutrino astrophysicist are also humbling. Consider, for instance, the extraordinary relativistic jets emerging from the supermassive black hole in Messier 87 – an accelerator that stretches for about 5000 light years, or roughly 315 million times the distance from the Earth to the Sun.

Alongside gravitational waves, high-energy neutrinos have opened up a new chapter in astronomy. They point to the most extreme events in the cosmos. They can escape from regions where high-energy photons are attenuated by gas and dust, such as NGC 1068, the first steady neutrino emitter to be discovered (see “The neutrino sky” figure). Their energies can rise orders of magnitude above 1 PeV (10^15 eV), where the universe becomes opaque to photons due to pair production with the cosmic microwave background. Unlike charged cosmic rays, they are not deflected by magnetic fields, preserving their original direction.

Breaking into the exascale calls for new thinking

High-energy neutrinos therefore offer a unique window into some of the most profound questions in modern physics. Are there new particles beyond the Standard Model at the highest energies? What acceleration mechanisms allow nature to propel them to such extraordinary energies? And is dark matter implicated in these extreme events? With the observation of a 220 (+570/−110) PeV neutrino confounding the limits set by prior observatories and opening up the era of ultra-high-energy neutrino astronomy (CERN Courier March/April 2025 p7), the time is ripe for a new generation of neutrino detectors on an even grander scale (see “Thinking big” table).

A cubic-kilometre ice cube

Detecting high-energy neutrinos is a serious challenge. Though the neutrino–nucleon cross section increases a little less than linearly with neutrino energy, the flux of cosmic neutrinos drops as the inverse square or faster, reducing the event rate by nearly an order of magnitude per decade. A cubic-kilometre-scale detector is required to measure cosmic neutrinos beyond 100 TeV, and the Earth itself starts to become opaque as energies rise beyond a PeV or so, at which point the odds of a neutrino being absorbed on its way through the planet are roughly even, depending on the direction of the event.
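
To put a rough number on this trade-off (the spectral indices used here are illustrative assumptions, not values quoted in this article): if the neutrino–nucleon cross section grows as σ ∝ E^0.4 in this regime while the differential flux falls as dΦ/dE ∝ E^–γ, the number of events expected per decade of energy scales as

\[
N_{\text{decade}}(E) \;\propto\; E\,\sigma(E)\,\frac{d\Phi}{dE} \;\propto\; E^{\,1 + 0.4 - \gamma},
\]

a drop of roughly a factor of four per decade for γ = 2, and approaching an order of magnitude per decade for the steeper spectra typically measured.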

Thinking big

The journey of cosmic neutrino detection began off the coast of the Hawaiian Islands in the 1980s, led by John Learned of the University of Hawaii at Mānoa. The DUMAND (Deep Underwater Muon And Neutrino Detector) project sought to use both an array of optical sensors to measure Cherenkov light and acoustic detectors to measure the pressure waves generated by energetic particle cascades in water. It was ultimately cancelled in 1995 due to engineering difficulties related to deep-sea installation, data transmission over long underwater distances and sensor reliability under high pressure.

The next generation of cubic-kilometre-scale neutrino detectors built on DUMAND’s experience. The IceCube Neutrino Observatory has pioneered neutrino astronomy at the South Pole since 2011, probing energies from 10 GeV to 100 PeV, and is now being joined by experiments under construction such as KM3NeT in the Mediterranean Sea, which observed the 220 PeV candidate, and Baikal–GVD in Lake Baikal, the deepest lake on Earth. All three experiments watch for the deep inelastic scattering of high-energy neutrinos, using optical sensors to detect Cherenkov photons emitted by secondary particles.

Exascale from above

A decade of data-taking from IceCube has been fruitful. The Milky Way has been observed in neutrinos for the first time. A neutrino candidate event has been observed that is consistent with the Glashow resonance – the resonant production in the ice of a real W boson by a 6.3 PeV electron–antineutrino – confirming a longstanding prediction from 1960. Neutrino emission has been observed from supermassive black holes in NGC 1068 and TXS 0506+056. A diffuse neutrino flux has been discovered beyond 10 TeV. Neutrino mixing parameters have been measured. And flavour ratios have been constrained: due to the averaging of neutrino oscillations over cosmological distances, significant deviations from a 1:1:1 ratio of electron, muon and tau neutrinos could imply new physics such as the violation of Lorentz invariance, non-standard neutrino interactions or neutrino decay.
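
For readers wondering where the 6.3 PeV figure comes from, it follows from elementary kinematics (a standard textbook relation rather than anything specific to this article): the centre-of-mass energy of an electron–antineutrino striking an atomic electron at rest reaches the W mass when

\[
E_\nu = \frac{m_W^2}{2 m_e} \approx \frac{(80.4\ \text{GeV})^2}{2 \times 0.511\ \text{MeV}} \approx 6.3\ \text{PeV}.
\]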

The sensitivity and global coverage of water-Cherenkov neutrino observatories is set to increase still further. The Pacific Ocean Neutrino Experiment (P-ONE) aims to establish a cubic-kilometre-scale deep-sea neutrino telescope off the coast of Canada; IceCube will expand the volume of its optical array by a factor eight; and the TRIDENT and HUNT experiments, currently being prototyped in the South China Sea, may offer the largest detector volumes of all. These detectors will improve sky coverage, enhance angular resolution, and increase statistical precision in the study of neutrino sources from 1 TeV to 10 PeV and above.

Breaking into the exascale calls for new thinking.

Into the exascale

Optical Cherenkov detectors have been exceptionally successful in establishing neutrino astronomy; however, the attenuation of optical photons in water and ice limits the horizontal spacing of photodetectors to a few hundred metres at most, constraining the scalability of the technology. To achieve sensitivity to ultra-high energies measured in EeV (10^18 eV), an instrumented area of order 100 km² would be required. Constructing an optical-based detector on such a scale is impractical.

Earth skimming

One solution is to exchange the tracking volume of IceCube and its siblings with a larger detector that uses the atmosphere as a calorimeter: the deposited energy is sampled on the Earth’s surface.

The Pierre Auger Observatory in Argentina epitomises this approach. If IceCube is presently the world’s largest detector by volume, the Pierre Auger Observatory is the world’s largest detector by area. Over an area of 3000 km², 1660 water Cherenkov detectors and 24 fluorescence telescopes sample the particle showers generated when cosmic rays with energies beyond 10 EeV strike the atmosphere, producing billions of secondary particles. Among the showers it detects are surely events caused by ultra-high-energy neutrinos, but how might they be identified?

Out on a limb

One of the most promising approaches is to filter events based on where the air shower reaches its maximum development in the atmosphere. Cosmic rays tend to interact after traversing much less atmosphere than neutrinos, since the weakly interacting neutrinos have a much smaller cross-section than the hadronically interacting cosmic rays. In some cases, tau neutrinos can even skim the Earth’s atmospheric edge or “limb” as seen from space, interacting to produce a strongly boosted tau lepton that emerges from the rock (unlike an electron) to produce an upward-going air shower when it decays tens of kilometres later – though not so much later (unlike a muon) that it has escaped the atmosphere entirely. This signature is not possible for charged cosmic rays. So far, Auger has detected no neutrino candidate events of either topology, imposing stringent upper limits on the ultra-high-energy neutrino flux that are compatible with limits set by IceCube. The AugerPrime upgrade, soon expected to be fully operational, will equip each surface detector with scintillator panels and improved electronics.

Pole position

Experiments in space are being developed to detect these rare showers with an even larger instrumentation volume. POEMMA (Probe of Extreme Multi-Messenger Astrophysics) is a proposed satellite mission designed to monitor the Earth’s atmosphere from orbit. Two satellites equipped with fluorescence and Cherenkov detectors will search for ultraviolet photons produced by extensive air showers (see “Exascale from above” figure). EUSO-SPB2 (Extreme Universe Space Observatory on a Super Pressure Balloon 2) will test the same detection methods from the vantage point of high-atmosphere balloons. These instruments can help distinguish cosmic rays from neutrinos by identifying shallow showers and up-going events.

Another way to detect ultra-high-energy neutrinos is by using mountains and valleys as natural neutrino targets. This Earth-skimming technique also primarily relies on tau neutrinos, as the tau leptons produced via deep inelastic scattering in the rock can emerge from Earth’s crust and decay within the atmosphere to generate detectable particle showers in the air.

The Giant Radio Array for Neutrino Detection (GRAND) aims to detect radio signals from these tau-induced air showers using a large array of radio antennas spread over thousands of square kilometres (see “Earth skimming” figure). GRAND is planned to be deployed in multiple remote, mountainous locations, with the first site in western China, followed by others in South America and Africa. The Tau Air-Shower Mountain-Based Observatory (TAMBO) has been proposed to be deployed on the face of the Colca Canyon in the Peruvian Andes, where an array of scintillators will detect the electromagnetic signals from tau-induced air showers.

Another proposed strategy that builds upon the Earth-skimming principle is the Trinity experiment, which employs an array of Cherenkov telescopes to observe nearby mountains. Ground-based air Cherenkov detectors are known for their excellent angular resolution, allowing for precise pointing to trace back to the origin of the high-energy primary particles. Trinity is a proposed system of 18 wide-field Cherenkov telescopes optimised for detecting neutrinos in the 10 PeV–1000 PeV energy range from the direction of nearby mountains – an approach validated by experiments such as Ashra–NTA, deployed on Hawaii’s Big Island utilising the natural topography of the Mauna Loa, Mauna Kea and Hualālai volcanoes.

Diffuse neutrino landscape

All these ultra-high-energy experiments detect particle showers as they develop in the atmosphere, whether from above, below or skimming the surface. But “Askaryan” detectors operate deep within the ice of the Earth’s poles, where both the neutrino interaction and detection occur.

In 1962 Soviet physicist Gurgen Askaryan reasoned that electromagnetic showers must acquire a net negative charge excess as they develop, due to the Compton scattering of photons off atomic electrons and the ionisation of atoms by charged particles in the shower. As the charged shower propagates faster than the phase velocity of light in the medium, it should emit radiation in a manner analogous to Cherenkov light. However, there are key differences: Cherenkov radiation is typically incoherent and emitted by individual charged particles, while Askaryan radiation is coherent, being produced by a macroscopic buildup of charge, and is significantly stronger at radio frequencies. The Askaryan effect was experimentally confirmed at SLAC in 2001.
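
The practical consequence of coherence is a strong scaling with shower size. At radio wavelengths, which are longer than the dimensions of the shower, the fields from the excess charges add in phase, so the emitted power grows roughly as the square of the net charge excess, whereas incoherent emission grows only linearly with the number of particles:

\[
P_{\text{coherent}} \propto (\Delta N)^2 \qquad \text{versus} \qquad P_{\text{incoherent}} \propto N,
\]

with the excess ΔN typically of order 20% of the shower particles – a figure taken from the general Askaryan-effect literature rather than from this article.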

Optimised arrays

Because the attenuation length of radio waves is an order of magnitude longer than for optical photons, it becomes feasible to build much sparser arrays of radio antennas to detect the Askaryan signals than the compact optical arrays used in deep ice Cherenkov detectors. Such detectors are optimised to cover thousands of square kilometres, with typical energy thresholds beyond 100 PeV.

The Radio Neutrino Observatory in Greenland (RNO-G) is a next-generation in-ice radio detector currently under construction on the ~3 km-thick ice sheet above central Greenland, operating at frequencies in the 150–700 MHz range. RNO-G will consist of a sparse array of 35 autonomous radio detector stations, each separated by 1.25 km, making it the first large-scale radio neutrino array in the northern hemisphere.

Moon skimming

In the southern hemisphere, the proposed IceCube-Gen2 will complement the aforementioned eightfold expanded optical array with a radio component covering a remarkable 500 km². The cold Antarctic ice provides an optimal medium for radio detection, with radio attenuation lengths of roughly 2 km facilitating cost-efficient instrumentation of the large volumes needed to measure the low ultra-high-energy neutrino flux. The radio array will combine in-ice omnidirectional antennas 150 m below the surface with high-gain antennas at a depth of 15 m and upward-facing antennas on the surface to veto the cosmic-ray background.

The IceCube-Gen2 radio array will have the sensitivity to probe features of the spectrum of astrophysical neutrinos beyond the PeV scale, addressing the tension between upper limits from Auger and IceCube, and KM3NeT’s 220 (+570/−110) PeV neutrino candidate – the sole ultra-high-energy neutrino yet observed. Extrapolating an isotropic and diffuse flux, IceCube should have detected 75 events in the 72–2600 PeV energy range over its operational period. However, no events have been observed above 70 PeV.

Perhaps the most ambitious way to observe ultra-high-energy neutrinos is to use the Moon as a target

If the detected KM3NeT event has a neutrino energy of around 100 PeV, it could originate from the same astrophysical sources responsible for accelerating ultra-high-energy cosmic rays. In this case, interactions between accelerated protons and ambient photons from starlight or synchrotron radiation would produce pions that decay into ultra-high-energy neutrinos. Alternatively, if its true energy is closer to 1 EeV, it is more likely cosmogenic: arising from the Greisen–Zatsepin–Kuzmin process, in which ultra-high-energy cosmic rays interact with cosmic microwave background photons, producing a Δ-resonance that decays into pions and ultimately neutrinos. IceCube-Gen2 will resolve the spectral shape from PeV to 10 EeV and differentiate between these two possible production mechanisms (see “Diffuse neutrino landscape” figure).

Moonshots

Remarkably, the Radar Echo Telescope (RET) is exploring using radar to actively probe the ice for transient signals. Unlike Askaryan-based detectors, which passively listen for radio pulses generated by charge imbalances in particle cascades, RET’s concept is to beam a radar signal and watch for reflections off the ionisation caused by particle showers. SLAC’s T576 experiment demonstrated the concept in the lab in 2022 by observing a radar echo from a beam of high-energy electrons scattering off a plastic target. RET has now been deployed in Greenland, where it seeks echoes from down-going cosmic rays as a proof of concept.

Full-sky coverage

Perhaps the most ambitious way to observe ultra-high-energy neutrinos foresees using the Moon as a target. When neutrinos with energies above 100 EeV interact near the rim of the Moon, they can induce particle cascades that generate coherent Askaryan radio emission which could be detectable on Earth (see “Moon skimming” figure). Observations could be conducted from Earth-based radio telescopes or from satellites orbiting the Moon to improve detection sensitivity. Lunar Askaryan detectors could potentially be sensitive to neutrinos up to 1 ZeV (10^21 eV). No confirmed detections have been reported so far.

Neutrino network

Proposed neutrino observatories are distributed across the globe – a necessary requirement for full sky coverage, given the Earth is not transparent to ultra-high-energy neutrinos (see “Full-sky coverage” figure). A network of neutrino telescopes ensures that transient astrophysical events can always be observed as the Earth rotates. This is particularly important for time-domain multi-messenger astronomy, enabling coordinated observations with gravitational wave detectors and electromagnetic counterparts. The ability to track neutrino signals in real time will be key to identifying the most extreme cosmic accelerators and probing fundamental physics at ultra-high energies.

The post Discovering the neutrino sky appeared first on CERN Courier.

]]>
Feature Lu Lu looks forward to the next two decades of neutrino astrophysics, exploring the remarkable detector concepts needed to probe ultra-high energies from 1 EeV to 1 ZeV. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_NEUTRINOS_sky.jpg
Accelerators on autopilot https://cerncourier.com/a/accelerators-on-autopilot/ Mon, 19 May 2025 07:57:43 +0000 https://cerncourier.com/?p=113076 Verena Kain highlights four ways machine learning is making the LHC more efficient.

The post Accelerators on autopilot appeared first on CERN Courier.

]]>
The James Webb Space Telescope and the LHC

Particle accelerators can be surprisingly temperamental machines. Expertise, specialisation and experience is needed to maintain their performance. Nonlinear and resonant effects keep accelerator engineers and physicists up late into the night. With so many variables to juggle and fine-tune, even the most seasoned experts will be stretched by future colliders. Can artificial intelligence (AI) help?

Proposed solutions take inspiration from space telescopes. The two fields have been jockeying to innovate since the Hubble Space Telescope launched with minimal automation in 1990. In the 2000s, multiple space missions tested AI for fault detection and onboard decision-making, before the LHC took a notable step forward for colliders in the 2010s by incorporating machine learning (ML) in trigger decisions. Most recently, the James Webb Space Telescope launched in 2021 using AI-driven autonomous control systems for mirror alignment, thermal balancing and scheduling science operations with minimal intervention from the ground. The new Efficient Particle Accelerators project at CERN, which I have led since its approval in 2023, is now rolling out AI at scale across CERN’s accelerator complex (see “Dynamic and adaptive” image).

AI-driven automation will only become more necessary in the future. As well as being unprecedented in size and complexity, future accelerators will also have to navigate new constraints such as fluctuating energy availability from intermittent sources like wind and solar power, requiring highly adaptive and dynamic machine operation. This would represent a step change in complexity and scale. A new equipment integration paradigm would automate accelerator operation, equipment maintenance, fault analysis and recovery. Every item of equipment will need to be fully digitalised and able to auto-configure, auto-stabilise, auto-analyse and auto-recover. Like a driverless car, instrumentation and software layers must also be added for safe and efficient performance.

On-site human intervention could be treated as a last resort – or perhaps designed out entirely

The final consideration is full virtualisation. While space telescopes are famously inaccessible once deployed, a machine like the Future Circular Collider (FCC) would present similar challenges. Given the scale and number of components, on-site human intervention should be treated as a last resort – or perhaps designed out entirely. This requires a new approach: equipment must be engineered for autonomy from the outset – with built-in margins, high reliability, modular designs and redundancy. Emerging technologies like robotic inspection, automated recovery systems and digital twins will play a central role in enabling this. A digital twin – a real-time, data-driven virtual replica of the accelerator – can be used to train and constrain control algorithms, test scenarios safely and support predictive diagnostics. Combined with differentiable simulations and layered instrumentation, these tools will make autonomous operation not just feasible, but optimal.

The field is moving fast. Recent advances allow us to rethink how humans interact with complex machines – not by tweaking hardware parameters, but by expressing intent at a higher level. Generative pre-trained transformers, a class of large language models, open the door to prompting machines with concepts rather than step-by-step instructions. While further R&D is needed for robust AI copilots, tailor-made ML models have already become standard tools for parameter optimisation, virtual diagnostics and anomaly detection across CERN’s accelerator landscape.

Progress is diverse. AI can reconstruct LHC bunch profiles using signals from wall current monitors, analyse camera images to spot anomalies in the “dump kickers” that safely remove beams, or even identify malfunctioning beam-position monitors. In the following, I identify four different types of AI that have been successfully deployed across CERN’s accelerator complex. They are merely the harbingers of a whole new way of operating CERN’s accelerators.

1. Beam steering with reinforcement learning

In 2020, LINAC4 became the new first link in the LHC’s modernised proton accelerator chain – and quickly became an early success story for AI-assisted control in particle accelerators.

Small deviations in a particle beam’s path within the vacuum chamber can have a significant impact, including beam loss, equipment damage or degraded beam quality. Beams must stay precisely centred in the beampipe to maintain stability and efficiency. But their trajectory is sensitive to small variations in magnet strength, temperature, radiofrequency phase and even ground vibrations. Worse still, errors typically accumulate along the accelerator, compounding the problem. Beam-position monitors (BPMs) provide measurements at discrete points – often noisy – while steering corrections are applied via small dipole corrector magnets, typically using model-based correction algorithms.

Beam steering

In 2019, the reinforcement learning (RL) algorithm normalised advantage function (NAF) was trained online to steer the H⁻ beam (negative hydrogen ions) in the horizontal plane of LINAC4 during commissioning. In RL, an agent learns by interacting with its environment and receiving rewards that guide it toward better decisions. NAF uses a neural network to model the so-called Q-function – which estimates the expected reward of taking a given action in a given state – and uses this to continuously refine its control policy.

Initially, the algorithm required many attempts to find an effective strategy, and in early iterations it occasionally worsened the beam trajectory, but as training progressed, performance improved rapidly. Eventually, the agent achieved a final trajectory better aligned than the goal of an RMS of 1 mm (see “Beam steering” figure).

This experiment demonstrated that RL can learn effective control policies for accelerator-physics problems within a reasonable amount of time. The agent was fully trained after about 300 iterations, or 30 minutes of beam time, making online training feasible. Since 2019, the use of AI techniques has expanded significantly across accelerator labs worldwide, targeting more and more problems that don’t have any classical solution. At CERN, tools such as GeOFF (Generic Optimisation Framework and Front­end) have been developed to standardise and scale these approaches throughout the accelerator complex.
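
To give a concrete, if deliberately simplified, picture of what such a setup involves, the sketch below builds a toy steering problem: a linear “beamline” described by a random response matrix, with the state given by noisy BPM readings, the action by corrector kicks and the reward by the negative RMS of the resulting orbit. It fits a NAF-style quadratic advantage so that the greedy action is available in closed form. Everything here – the response matrix, network sizes, noise levels and the one-shot (single-step) setting – is an invented illustration, not the LINAC4 implementation.

```python
# Toy illustration only: a single-step, NAF-style steering policy on a fake linear beamline.
# State  = noisy BPM readings (one per monitor), action = corrector kicks,
# reward = negative RMS of the orbit after the kicks are applied.
import torch
import torch.nn as nn

N_BPM, N_COR = 6, 6
R = torch.randn(N_BPM, N_COR) * 0.5              # invented orbit-response matrix (assumed linear optics)

def environment(orbit, kicks):
    """Apply corrector kicks to the toy beamline; return the new orbit and reward = -RMS."""
    new_orbit = orbit + kicks @ R.T + 0.02 * torch.randn_like(orbit)
    return new_orbit, -new_orbit.pow(2).mean(dim=-1).sqrt()

class NAFHead(nn.Module):
    """Q(s,a) = V(s) - 0.5 (a - mu(s))^T P(s) (a - mu(s)): the greedy action is simply mu(s)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(N_BPM, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, N_COR)         # proposed correction
        self.v = nn.Linear(hidden, 1)              # state value
        self.l = nn.Linear(hidden, N_COR * N_COR)  # Cholesky factor of the curvature matrix P(s)

    def forward(self, s, a):
        h = self.body(s)
        mu, v = self.mu(h), self.v(h).squeeze(-1)
        L = torch.tril(self.l(h).view(-1, N_COR, N_COR))
        P = L @ L.transpose(1, 2)
        d = (a - mu).unsqueeze(-1)
        adv = -0.5 * (d.transpose(1, 2) @ P @ d).squeeze(-1).squeeze(-1)
        return v + adv, mu

net = NAFHead()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    s = torch.randn(32, N_BPM)                     # a batch of random distorted orbits
    with torch.no_grad():                          # current greedy proposal (action argument is a dummy)
        _, mu = net(s, torch.zeros(32, N_COR))
    a = mu + 0.3 * torch.randn_like(mu)            # exploration noise around the proposal
    _, r = environment(s, a)
    q, _ = net(s, a)
    loss = (q - r).pow(2).mean()                   # one-step problem: the target for Q is the reward itself
    opt.zero_grad(); loss.backward(); opt.step()

# After training, the correction for a freshly measured orbit is just mu(s):
s_test = torch.randn(1, N_BPM)
_, kick = net(s_test, torch.zeros(1, N_COR))
rms_before = s_test.pow(2).mean().sqrt().item()
rms_after = environment(s_test, kick)[1].neg().item()
print(f"orbit RMS before {rms_before:.3f}, after correction {rms_after:.3f}")
```

In the real machine the response is neither linear nor precisely known and drifts with time, which is exactly why a policy learned online from the machine’s own feedback is attractive.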

2. Efficient injection with Bayesian optimisation

Bayesian optimisation (BO) is a global optimisation technique that uses a probabilistic model to find the optimal parameters of a system by balancing exploration and exploitation, making it ideal for expensive or noisy evaluations. A game-changing example of its use is the record-breaking LHC ion run in 2024. BO was extensively used all along the ion chain, and made a significant difference in LEIR (the low-energy ion ring, the first synchrotron in the chain) and in the Super Proton Synchrotron (SPS, the last accelerator before the LHC). In LEIR, most processes are no longer manually optimised, but the multi-turn injection process is still non-trivial and depends on various longitudinal and transverse parameters from its injector LINAC3.

Quick recovery

In heavy-ion accelerators, particles are injected in a partially stripped charge state and must be converted to higher charge states at different stages for efficient acceleration. In the LHC ion injector chain, the stripping foil between LINAC3 and LEIR raises the charge of the lead ions from Pb27+ to Pb54+. A second stripping foil, between the PS and SPS, fully ionises the beam to Pb82+ ions for final acceleration toward the LHC. These foils degrade over time due to thermal stress, radiation damage and sputtering, and must be remotely exchanged using a rotating wheel mechanism. Because each new foil has slightly different stripping efficiency and scattering properties, beam transmission must be re-optimised – a task that traditionally required expert manual tuning.

In 2024 it was successfully demonstrated that BO with embedded physics constraints can efficiently optimise the 21 most important parameters between LEIR and the LINAC3 injector. Following a stripping foil exchange, the algorithm restored the accumulated beam intensity in LEIR to better than nominal levels within just a few dozen iterations (see “Quick recovery” figure).

This example shows how AI can now match or outperform expert human tuning, significantly reducing recovery time, freeing up operator bandwidth and improving overall machine availability.
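
As a flavour of what such an optimisation loop looks like in code, the toy below uses the open-source scikit-optimize package (one off-the-shelf choice, assumed here purely for illustration) to tune four made-up “knobs” against a noisy synthetic intensity signal. The objective function, parameter names and ranges are invented; the operational tools at CERN, such as GeOFF, wrap their own optimisers and physics-informed constraints around the same basic idea.

```python
# Toy sketch of Bayesian optimisation for injection tuning (not the CERN production setup).
# A Gaussian-process surrogate proposes settings; "measure_intensity" is an entirely
# made-up stand-in for an accumulated-intensity measurement.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

rng = np.random.default_rng(42)
optimum = rng.uniform(-1, 1, size=4)            # hidden "golden" settings of four knobs

def measure_intensity(x):
    """Fake measurement: intensity peaks at the hidden optimum, with some noise."""
    x = np.asarray(x)
    intensity = np.exp(-4.0 * np.sum((x - optimum) ** 2)) + 0.02 * rng.normal()
    return -intensity                           # gp_minimize minimises, so return the negative

space = [Real(-1.0, 1.0, name=f"knob_{i}") for i in range(4)]

result = gp_minimize(measure_intensity, space, n_calls=40,
                     n_initial_points=10, acq_func="EI", random_state=0)

print("best settings found:", np.round(result.x, 3))
print("hidden optimum:     ", np.round(optimum, 3))
print("best intensity seen:", -result.fun)
```

The probabilistic surrogate is what makes the approach sample-efficient: each (noisy) measurement updates a Gaussian-process model of the response, and the acquisition function decides whether the next trial should explore or exploit.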

3. Adaptively correcting the 50 Hz ripple

In high-precision accelerator systems, even tiny perturbations can have significant effects. One such disturbance is the 50 Hz ripple in power supplies – small periodic fluctuations in current that originate from the electrical grid. While these ripples were historically only a concern for slow-extracted proton beams sent to fixed-target experiments, 2024 revealed a broader impact.

SPS intensity

In the SPS, adaptive Bayesian optimisation (ABO) was deployed to control this ripple in real time. ABO extends BO by learning the objective not only as a function of the control parameters, but also as a function of time, which then allows continuous control through forecasting.

The algorithm generated shot-by-shot feed-forward corrections to inject precise counter-noise into the voltage regulation of one of the quadrupole magnet circuits. This approach was already in use for the North Area proton beams, but in summer 2024 it was discovered that even for high-intensity proton beams bound for the LHC, the same ripple could contribute to beam losses at low energy.

Thanks to existing ML frameworks, prior experience with ripple compensation and available hardware for active noise injection, the fix could be implemented quickly. While the gains for protons were modest – around 1% improvement in losses – the impact for LHC ion beams was far more dramatic. Correcting the 50 Hz ripple increased ion transmission by more than 15%. ABO is therefore now active whenever ions are accelerated, improving transmission and supporting the record beam intensity achieved in 2024 (see “SPS intensity” figure).
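
Stripped of the Bayesian machinery that tunes it shot by shot, the core of any such feed-forward correction is to estimate the amplitude and phase of the 50 Hz component and inject its negative. The snippet below demonstrates that idea on synthetic data with a simple lock-in-style projection; it is a conceptual illustration only, not the SPS voltage-regulation code, and all numbers are invented.

```python
# Conceptual illustration of 50 Hz ripple compensation (not the SPS implementation):
# estimate the amplitude and phase of a 50 Hz component in a measured signal, then
# synthesise an equal-and-opposite waveform to feed forward into the regulation.
import numpy as np

fs, f_ripple = 10_000.0, 50.0                   # sample rate and mains frequency in Hz
t = np.arange(0, 1.0, 1.0 / fs)

# Fake measurement: a 50 Hz ripple of unknown amplitude/phase buried in broadband noise
true_amp, true_phase = 0.8, 0.6
signal = true_amp * np.sin(2 * np.pi * f_ripple * t + true_phase) + 0.5 * np.random.randn(t.size)

# Lock-in style estimate: project onto sine and cosine at exactly 50 Hz
i_comp = 2.0 * np.mean(signal * np.sin(2 * np.pi * f_ripple * t))
q_comp = 2.0 * np.mean(signal * np.cos(2 * np.pi * f_ripple * t))
est_amp, est_phase = np.hypot(i_comp, q_comp), np.arctan2(q_comp, i_comp)

# Feed-forward correction: inject the negative of the estimated ripple
correction = -est_amp * np.sin(2 * np.pi * f_ripple * t + est_phase)
residual = signal + correction

print(f"estimated amplitude {est_amp:.2f} (true {true_amp}), phase {est_phase:.2f} (true {true_phase})")
print(f"50 Hz spectral amplitude before/after: "
      f"{np.abs(np.fft.rfft(signal)[50]):.0f} / {np.abs(np.fft.rfft(residual)[50]):.0f}")
```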

4. Predicting hysteresis with transformers

Another outstanding issue in today’s multi-cycling synchrotrons with iron-dominated electromagnets is correcting for magnetic hysteresis – a phenomenon where the magnetic field depends not only on the current but also on its cycling history. Cumbersome mitigation strategies include playing dummy cycles and manually re-tuning parameters after each change in magnetic history.

SPS hysteresis

While phenomenological hysteresis models exist, their accuracy is typically insufficient for precise beam control. ML offers a path forward, especially when supported by high-quality field measurement data. Recent work using temporal fusion transformers – a deep-learning architecture designed for multivariate time-series prediction – has demonstrated that ML-based models can accurately predict field deviations from the programmed transfer function across different SPS magnetic cycles (see “SPS hysteresis” figure). This hysteresis model is now used in the SPS control room to provide feed-forward corrections – pre-emptive adjustments to magnet currents based on the predicted magnetic state – ensuring field stability without waiting for feedback from beam measurements and manual adjustments.
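
To make the input–output structure of the problem concrete, the sketch below trains a small, generic transformer encoder to map a window of programmed-current history onto a field deviation, using entirely synthetic data with a crude built-in memory effect. The production model is a temporal fusion transformer trained on measured field data; this stand-in, including all sizes and the toy “hysteresis” generator, is an assumption for illustration only.

```python
# Generic stand-in for hysteresis prediction (the real SPS model is a temporal fusion
# transformer trained on measured field data; this toy uses synthetic data throughout).
# Input: a window of programmed-current history. Output: the predicted field deviation now.
import torch
import torch.nn as nn

WINDOW, D_MODEL = 64, 32

class FieldDeviationModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(1, D_MODEL)       # one current sample -> embedding vector
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4,
                                           dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, 1)        # regression head: one deviation value

    def forward(self, currents):                 # currents: (batch, WINDOW, 1)
        h = self.encoder(self.embed(currents))
        return self.head(h[:, -1, :]).squeeze(-1)

def synthetic_batch(batch=64):
    """Toy 'hysteresis': deviation depends on the present current and a lagged memory term."""
    i = torch.cumsum(0.1 * torch.randn(batch, WINDOW, 1), dim=1)   # random current programmes
    memory = torch.tanh(i[:, :-8, 0].mean(dim=1))                  # crude history dependence
    return i, 0.05 * i[:, -1, 0] + 0.02 * memory

model = FieldDeviationModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):
    x, y = synthetic_batch()
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()

x, y = synthetic_batch(4)
print("predicted:", model(x).detach().numpy().round(4))
print("target:   ", y.numpy().round(4))
```

Because the prediction is available before the cycle is played, the correction can be applied feed-forward – the key advantage over waiting for beam-based feedback.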

A blueprint for the future

With the Efficient Particle Accelerators project, CERN is developing a blueprint for the next generation of autonomous equipment. This includes concepts for continuous self-analysis, anomaly detection and new layers of “Internet of Things” instrumentation that support auto-configuration and predictive maintenance. The focus is on making it easier to integrate smart software layers. Full results are expected by the end of LHC Run 3, with robust frameworks ready for deployment in Run 4.

AI can now match or outperform expert human tuning, significantly reducing recovery time and improving overall machine availability

The goal is ambitious: to reduce maintenance effort by at least 50% wherever these frameworks are applied. This is based on a realistic assumption – already today, about half of all interventions across the CERN accelerator complex are performed remotely, a number that continues to grow. With current technologies, many of these could be fully automated.

Together, these developments will not only improve the operability and resilience of today’s accelerators, but also lay the foundation for CERN’s future machines, where human intervention during operation may become the exception rather than the rule. AI is set to transform how we design, build and operate accelerators – and how we do science itself. It opens the door to new models of R&D, innovation and deep collaboration with industry. 

The post Accelerators on autopilot appeared first on CERN Courier.

]]>
Feature Verena Kain highlights four ways machine learning is making the LHC more efficient. https://cerncourier.com/wp-content/uploads/2025/05/hara_andrew7-scaled.jpg
Powering into the future https://cerncourier.com/a/powering-into-the-future/ Mon, 19 May 2025 07:55:18 +0000 https://cerncourier.com/?p=113089 Nuria Catalan Lasheras and Igor Syratchev explain why klystrons are strategically important to the future of the field – and how CERN plans to boost their efficiency above 90%.

The post Powering into the future appeared first on CERN Courier.

]]>
The Higgs boson is the most intriguing and unusual object yet discovered by fundamental science. There is no higher experimental priority for particle physics than building an electron–positron collider to produce it copiously and study it precisely. Given the importance of energy efficiency and cost effectiveness in the current geopolitical context, this gives unique strategic importance to developing a humble technology called the klystron – a technology that will consume the majority of site power at every major electron–positron collider under consideration, but which has historically only achieved 60% energy efficiency.

The klystron was invented in 1937 by two American brothers, Russell and Sigurd Varian. The Varians wanted to improve aircraft radar systems. At the time, there was a growing need for better high-frequency amplification to detect objects at a distance using radar, a critical technology in the lead-up to World War II.

The Varians’ RF source operated at around 3.2 GHz, or a wavelength of about 9.4 cm, in the microwave region of the electromagnetic spectrum. At the time, this was an extraordinarily high frequency – conventional vacuum tubes struggled beyond 300 MHz. Microwave wavelengths promised better resolution, less noise, and the ability to penetrate rain and fog. Crucially, antennas could be small enough to fit on ships and planes. But the source was far too weak for radar.

Klystrons are ubiquitous in medical, industrial and research accelerators – and not least in the next generation of Higgs factories

The Varians’ genius was to invent a way to amplify the electromagnetic signal by up to 30 dB, or a factor of 1000. The US and British military used the klystron for airborne radar, submarine detection of U-boats in the Atlantic and naval gun targeting beyond visual range. Radar helped win the Battle of Britain, the Battle of the Atlantic and Pacific naval battles, making surprise attacks harder by giving advance warning. Winston Churchill called radar “the secret weapon of WWII”, and the klystron was one of its enabling technologies.

With its high gain and narrow bandwidth, the klystron was the first practical microwave amplifier and became foundational in radio-frequency (RF) technology. This was the first time anyone had efficiently amplified microwaves with stability and directionality. Klystrons have since been used in satellite communication, broadcasting and particle accelerators, where they power the resonant RF cavities that accelerate the beams. Klystrons are therefore ubiquitous in medical, industrial and research accelerators – and not least in the next generation of Higgs factories, which are central to the future of high-energy physics.

Klystrons and the Higgs

Hadron colliders like the LHC tend to be circular. Their fundamental energy limit is given by the maximum strength of the bending magnets and the circumference of the tunnel. A handful of RF cavities repeatedly accelerate beams of protons or ions after hundreds or thousands of bending magnets force the beams to loop back through them.

Operating principle

Thanks to their clean and precisely controllable collisions, all Higgs factories under consideration are electron–positron colliders. Electron–positron colliders can be either circular or linear in construction. The dynamics of circular electron–positron colliders are radically different from those of hadron machines, as the particles are some 2000 times lighter than protons. The strength required from the bending magnets is relatively low for any practical circumference; however, the energy of the particles must be continually replenished, as they radiate away energy in the bends through synchrotron radiation, requiring hundreds of RF cavities. RF cavities are equally important in the linear case: here, all the energy must be imparted in a single pass, with each cavity accelerating the beam only once, requiring hundreds or even thousands of RF cavities.

Either way, 50 to 60% of the total energy consumed by an electron-positron collider is used for RF acceleration, compared to a relatively small fraction in a hadron collider. Efficiently powering the RF cavities is of paramount importance to the energy efficiency and cost effectiveness of the facility as a whole. RF acceleration is therefore of far greater significance at electron–positron colliders than at hadron colliders.

From a pen to a mid-size car

RF cavities cannot simply be plugged into the wall. These finely tuned resonant structures must be excited by RF power – an alternating microwave electromagnetic field that is supplied through waveguides at the appropriate frequency. Due to the geometry of resonant cavities, this excites an on-axis oscillating electrical field. Particles that arrive when the electrical field has the right direction are accelerated. For this reason, particles in an accelerator travel in bunches separated by a long distance, during which the RF field is not optimised for acceleration.
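
In the simplest textbook picture, a particle crossing a cavity gains an energy that depends on its arrival phase φ relative to the oscillating field (a standard relation, quoted here for orientation):

\[
\Delta E \approx q\,V_{\text{cav}}\cos\varphi ,
\]

where V_cav is the effective accelerating voltage. Bunches are timed so that they arrive close to the accelerating phase, which is why the beam must be bunched rather than continuous.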

CLIC klystron

Despite the development of modern solid-state amplifiers, the Varians’ klystron is still the most practical technology for generating RF when the power required is at the MW level. Klystrons can be as small as a pen or as large and heavy as a mid-size car, depending on the frequency and power required. Linear colliders use higher frequencies, which support higher accelerating gradients and so shorten the linac, whereas a circular collider does not need high gradients because the energy to be restored each turn is smaller.

Klystrons fall under the general classification of vacuum tubes – fully enclosed miniature electron accelerators with their own source, accelerating path and “interaction region” where the RF field is produced. Their name is derived from the Greek verb describing the action of waves crashing against the seashore. In a klystron, RF power is generated when electrons crash against a decelerating electric field.

Every klystron contains at least two cavities: an input and an output. The input cavity is powered by a weak RF source that must be amplified. The output cavity generates the strongly amplified RF signal generated by the klystron. All this comes encapsulated in an ultra-high vacuum volume inside the field of a solenoid for focusing (see “Operating principle” figure).

Thanks to the efforts made in recent years, high-efficiency klystrons are now approaching the ultimate theoretical limit

Inside the klystron, electrons leave a heated cathode and are accelerated by a high voltage applied between the cathode and the anode. As they are being pushed forward, a small input RF signal is applied to the input cavity, either accelerating or decelerating the electrons according to their time of arrival. After a long drift, late-emitted accelerated electrons catch up with early-emitted decelerated electrons, intersecting with those that did not see any net accelerating force. This is called velocity bunching.

A second, passive accelerating cavity is placed at the location where maximum bunching occurs. Though of a comparable design, this cavity behaves in an inverse fashion to those used in particle accelerators. Rather than converting the energy of an electromagnetic field into the kinetic energy of particles, the kinetic energy of particles is converted into RF electromagnetic waves. This process can be enhanced by the presence of other passive cavities in between the already mentioned two, as well as by several iterations of bunching and de-bunching before reaching the output cavity. Once decelerated, the spent beam finishes its life in a dump or a water-cooled collector.

Optimising efficiency

Klystrons are ultimately RF amplifiers with a very high gain, of the order of 30 to 60 dB, and a very narrow bandwidth. They can be built at any frequency from a few hundred MHz to tens of GHz, but each operates within a very small range of frequencies called the bandwidth. Once broadcasting moved to wider-bandwidth vacuum tubes, particle accelerators were left as a small market for high-power klystrons. Most klystrons for science are manufactured by a handful of companies offering a limited number of models that have been in operation for decades. Their frequency, power and duty cycle may not correspond to the specifications of a new accelerator being considered – and in most cases, little or no thought has been given to energy efficiency or carbon footprint.

Battling space charge

When searching for suitable solutions for the next particle-physics collider, however, optimising the energy efficiency of klystrons and other devices that will determine the final energy bill and CO2 emissions is a task of the utmost importance. Therefore, nearly a decade ago, RF experts at CERN and the University of Lancaster began the High-Efficiency Klystron (HEK) project to maximise beam-to-RF efficiency: the fraction of the power contained in the klystron’s electron beam that is converted into RF power by the output cavity.

The complexity of klystrons resides in the very nonlinear fields to which the electrons are subjected. In the cathode and the first stages of electrostatic acceleration, the collective effect of “space-charge” forces between the electrons determines the strongly nonlinear dynamics of the beam. The same is true when the bunching tightens along the tube, with mutual repulsion between the electrons preventing optimal bunching at the output cavity.

For this reason, designing klystrons is not amenable to simple analytical calculations. Since 2017, CERN has developed a code called KlyC that simulates the beam along the klystron channel and optimises parameters such as frequency and distance between cavities 100 to 1000 times faster than commercial 3D codes. KlyC is available in the public domain and is being used by an ever-growing list of labs and industrial partners.

Perveance

The main characteristic of a klystron is an obscure quantity inherited from electron-gun design called perveance. For small perveances, space-charge forces are small, due to either high energy or low intensity, making bunching easy. For large perveances, space-charge forces oppose bunching, lowering beam-to-RF efficiency. High-power klystrons require large currents and therefore high perveances. One way to produce highly efficient, high-power klystrons is therefore for multiple cathodes to generate multiple low-perveance electron beams in a “multi-beam” (MB) klystron.
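
For reference, perveance is defined from the space-charge-limited current–voltage relation of the electron gun (a standard definition, quoted here for orientation):

\[
K = \frac{I}{V^{3/2}},
\]

usually expressed in units of micro-perveance (μA V^–3/2). A high current extracted at modest voltage means high perveance, strong space-charge repulsion and hence more difficult bunching; splitting the same current over many low-perveance beamlets is precisely what a multi-beam klystron exploits.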

High-luminosity gains

Overall, there is an almost linear dependence between perveance and efficiency. Thanks to the efforts made in recent years, high-efficiency klystrons are now outperforming industrial klystrons by 10% in efficiency for all values of perveance, and approaching the ultimate theoretical limit (see “Battling space charge” figure).

One of the first designs to be brought to life was based on the E37113, a pulsed klystron with 6 MW peak power working in the X-band at 12 GHz, commercialised by CANON ETD. This klystron is currently used in the test facility at CERN for validating CLIC RF prototypes, which could greatly benefit from higher power. As part of a collaboration with CERN, CANON ETD built a new tube, according to the design optimised at CERN, to reach a beam-to-RF efficiency of 57% instead of the original 42% (see “CLIC klystron” image and CERN Courier September/October 2022 p9).

As its interfaces with the high-voltage (HV) source and solenoid were kept identical, one can now benefit from 8 MW of RF power for the same energy consumption as before. As changes in the manufacturing of the tube channel are just a small fraction of the manufacture of the instrument, its price should not increase considerably, even if more accurate production methods are required.

In pursuit of power

Towards an FCC klystron

Another successful example of re-designing a tube for high efficiency is the TH2167 – the klystron behind the LHC, which is manufactured by Thales. Originally exhibiting a beam-to-RF efficiency of 60%, it was re-designed by the CERN team to gain 10% and reach 70% efficiency, while again using the same HV source and solenoid. The tube prototype has been built and is currently at CERN, where it has demonstrated the capacity to generate 350 kW of RF power with the same input energy as previously required to produce 300 kW. This power will be decisive when dealing with the higher intensity beam expected after the LHC luminosity upgrade. And all this again for a price comparable to previous models (see “High-luminosity gains” image).

The quest for the highest efficiency is not over yet. The CERN team is currently working on a design that could power the proposed Future Circular Collider (FCC). With about a hundred accelerating cavities, the electron and positron beams will need to be replenished with 100 MW of RF power, and energy efficiency is imperative.

The quest for the highest efficiency is not over yet

Although the same tube in use for the LHC, now boosted to 70% efficiency, could be used to power the FCC, CERN is working towards a vacuum tube that could reach an efficiency over 80%. A two-stage multi-beam klystron was initially designed that was capable of reaching 86% efficiency and generating 1 MW of continuous-wave power (see “Towards an FCC klystron” figure).

Motivated by recent changes in FCC parameters, we have rediscovered an old device called a tristron, which is not a conventional klystron but a “gridded tube” where the electron beam bunching mechanism is different. Tristrons have a lower power gain but much greater flexibility. Simulations have confirmed that they can reach efficiencies as high as 90%. This could be a disruptive technology with applications well beyond accelerators. Manufacturing a prototype is an excellent opportunity for knowledge transfer from fundamental research to industrial applications.

The post Powering into the future appeared first on CERN Courier.

]]>
Feature Nuria Catalan Lasheras and Igor Syratchev explain why klystrons are strategically important to the future of the field – and how CERN plans to boost their efficiency above 90%. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_KLYSTRONS_frontis.jpg
Charting DESY’s future https://cerncourier.com/a/charting-desys-future/ Mon, 19 May 2025 07:34:51 +0000 https://cerncourier.com/?p=113176 DESY’s new chair, Beate Heinemann, reflects on the laboratory’s evolving role in science and society – from building next-generation accelerators to navigating Europe’s geopolitical landscape.

The post Charting DESY’s future appeared first on CERN Courier.

]]>
How would you describe DESY’s scientific culture?

DESY is a large laboratory with just over 3000 employees. It was founded 65 years ago as an accelerator lab, and at its heart it remains one, though what we do with the accelerators has evolved over time. It is fully funded by Germany.

In particle physics, DESY has performed many important studies, for example to understand the charm quark following the November Revolution of 1974. The gluon was discovered here in the late 1970s. In the 1980s, DESY ran the first experiments to study B mesons, laying the groundwork for core programmes such as LHCb at CERN and the Belle II experiment in Japan. In the 1990s, the HERA accelerator focused on probing the structure of the proton, which, incidentally, was the subject of my PhD, and those results have been crucial for precision studies of the Higgs boson.

Over time, DESY has become much more than an accelerator and particle-physics lab. Even in the early days, it used what is called synchrotron radiation, the light emitted when electrons change direction in the accelerator. This light is incredibly useful for studying matter in detail. Today, our accelerators are used primarily for this purpose: they generate X-rays that image tiny structures, for example viruses.

DESY’s culture is shaped by its very engaged and loyal workforce. People often call themselves “DESYians” and strongly identify with the laboratory. At its heart, DESY is really an engineering lab. You need an amazing engineering workforce to be able to construct and operate these accelerators.

Which of DESY’s scientific achievements are you most proud of?

The discovery of the gluon is, of course, an incredible achievement, but actually I would say that DESY’s greatest accomplishment has been building so many cutting-edge accelerators: delivering them on time, within budget, and getting them to work as intended.

Take the PETRA accelerator, for example – an entirely new concept when it was first proposed in the 1970s. The decision to build it was made in 1975; construction was completed by 1978; and by 1979 the gluon was discovered. So in just four years, we went from approving a 2.3 km accelerator to making a fundamental discovery, something that is absolutely crucial to our understanding of the universe. That’s something I’m extremely proud of.

I’m also very proud of the European X-ray Free-Electron Laser (XFEL), completed in 2017 and now fully operational. Before that, in 2005 we launched the world’s first free-electron laser, FLASH, and of course in the 1990s HERA, another pioneering machine. Again and again, DESY has succeeded in building large, novel and highly valuable accelerators that have pushed the boundaries of science.

What can we look forward to during your time as chair?

We are currently working on 10 major projects in the next three years alone! PETRA III will be running until the end of 2029, but our goal is to move forward with PETRA IV, the world’s most advanced X-ray source. Securing funding for that first, and then building it, is one of my main objectives. In Germany, there’s a roadmap process, and by July this year we’ll know whether an independent committee has judged PETRA IV to be one of the highest-priority science projects in the country. If all goes well, we aim to begin operating PETRA IV in 2032.

Our FLASH soft X-ray facility is also being upgraded to improve beam quality, and we plan to relaunch it in early September. That will allow us to serve more users and deliver better beam quality, increasing its impact.

In parallel, we’re contributing significantly to the HL-LHC upgrade. More than 100 people at DESY are working on building trackers for the ATLAS and CMS detectors, and parts of the forward calorimeter of CMS. That work needs to be completed by 2028.

Hunting axions

Astroparticle physics is another growing area for us. Over the next three years we’re completing telescopes for the Cherenkov Telescope Array and building detectors for the IceCube upgrade. For the first time, DESY is also constructing a space camera for the satellite UltraSat, which is expected to launch within the next three years.

At the Hamburg site, DESY is diving further into axion research. We’re currently running the ALPS II experiment, which has a fascinating “light shining through a wall” setup. Normally, of course, light can’t pass through something like a thick concrete wall. But in ALPS II, light inside a magnet can convert into an axion, a hypothetical dark-matter particle that can travel through matter almost unhindered. On the other side, another magnet converts the axion back into light. So, it appears as if the light has passed through the wall, when in fact it was briefly an axion. We started the experiment last year. As with most experiments, we began carefully, because not everything works at once, but two more major upgrades are planned in the next two years, and that’s when we expect ALPS II to reach its full scientific potential.

We’re also developing additional axion experiments. One of them, in collaboration with CERN, is called BabyIAXO. It’s designed to look for axions from the Sun, where you have both light and magnetic fields. We hope to start construction before the end of the decade.

Finally, DESY also has a strong and diverse theory group. Their work spans many areas, and it’s exciting to see what ideas will emerge from them over the coming years.

How does DESY collaborate with industry to deliver benefits to society?

We already collaborate quite a lot with industry. The beamlines at PETRA, in particular, are of strong interest. For example, BioNTech conducted some of its research for the COVID-19 vaccine here. We also have a close relationship with the Fraunhofer Society in Germany, which focuses on translating basic research into industrial applications. They famously developed the MP3 format, for instance. Our collaboration with them is quite structured, and there have also been several spinoffs and start-ups based on technology developed at DESY. Looking ahead, we want to significantly strengthen our ties with industry through PETRA IV. With much higher data rates and improved beam quality, it will be far easier to obtain results quickly. Our goal is for 10% of PETRA IV’s capacity to be dedicated to industrial use. Furthermore, we are developing a strong ecosystem for innovation on the campus and the surrounding area, with DESY in the centre, called the Science City Hamburg Bahrenfeld.

What’s your position on “dual use” research, which could have military applications?

The discussion around dual-use research is complicated. Personally, I find the term “dual use” a bit odd – almost any high-tech equipment can be used for both civilian and military purposes. Take a transistor for example, which has countless applications, including military ones, but it wasn’t invented for that reason. At DESY, we’re currently having an internal discussion about whether to engage in projects that relate to defence. This is part of an ongoing process where we’re trying to define under what conditions, if any, DESY would take on targeted projects related to defence. There are a range of views within DESY, and I think that diversity of opinion is valuable. Some people are firmly against this idea, and I respect that. Honestly, it’s probably how I would have felt 10 or 20 years ago. But others believe DESY should play a role. Personally, I’m open to it.

If our expertise can help people defend themselves and our freedom in Europe, that’s something worth considering. Of course, I would love to live in a world without weapons, where no one attacks anyone. But if I were attacked, I’d want to be able to defend myself. I prefer to work on shields, not swords, like in Asterix and Obelix, but, of course, it’s never that simple. That’s why we’re taking time with this. It’s a complex and multifaceted issue, and we’re engaging with experts from peace and security research, as well as the social sciences, to help us understand all dimensions. I’ve already learned far more about this than I ever expected to. We hope to come to a decision on this later this year.

You are DESY’s first female chair. What barriers do you think still exist for women in physics, and how can institutions like DESY address them?

There are two main barriers, I think. The first is that, in my opinion, society at large still discourages girls from going into maths and science.

Certainly in Germany, if you stopped a hundred people on the street, I think most of them would still say that girls aren’t naturally good at maths and science. Of course, there are always exceptions: you do find great teachers and supportive parents who go against this narrative. I wouldn’t be here today if I hadn’t received that kind of encouragement.

That’s why it’s so important to actively counter those messages. Girls need encouragement from an early age, they need to be strengthened and supported. On the encouragement side, DESY is quite active. We run many outreach activities for schoolchildren, including a dedicated school lab. Every year, more than 13,000 school pupils visit our campus. We also take part in Germany’s “Zukunftstag”, where girls are encouraged to explore careers traditionally considered male-dominated, and boys do the same for fields seen as female-dominated.

Looking ahead, we want to significantly strengthen our ties with industry

The second challenge comes later, at a different career stage, and it has to do with family responsibilities. Often, family work still falls more heavily on women than men in many partnerships. That imbalance can hold women back, particularly during the postdoc years, which tend to coincide with the time when many people are starting families. It’s a tough period, because you’re trying to advance your career.

Workplaces like DESY can play a role in making this easier. We offer good childcare options, flexibility with home–office arrangements, and even shared leadership positions, which help make it more manageable to balance work and family life. We also have mentoring programmes. One example is dynaMENT, where female PhD students and postdocs are mentored by more senior professionals. I’ve taken part in that myself, and I think it’s incredibly valuable.

Do you have any advice for early-career women physicists?

If I could offer one more piece of advice, it’s about building a strong professional network. That’s something I’ve found truly valuable. I’m fortunate to have a fantastic international network, both male and female colleagues, including many women in leadership positions. It’s so important to have people you can talk to, who understand your challenges, and who might be in similar situations. So if you’re a student, I’d really recommend investing in your network. That’s very important, I think.

What are your personal reflections on the next-generation colliders?

Our generation has a responsibility to understand the electroweak scale and the Higgs boson. These questions have been around for almost 90 years, since 1935 when Hideki Yukawa explored the idea that forces might be mediated by the exchange of massive particles. While we’ve made progress, a true understanding is still out of reach. That’s what the next generation of machines is aiming to tackle.

The problem, of course, is cost. All the proposed solutions are expensive, and it is very challenging to secure investments for such large-scale projects, even though the return on investment from big science is typically excellent: these projects drive innovation, build high-tech capability and create a highly skilled workforce.

From a scientific point of view, the FCC is the most comprehensive option. As a Higgs factory, it offers a broad and strong programme to analyse the Higgs and electroweak gauge bosons. But who knows if we’ll be able to afford it? And it’s not just about money. The timeline and the risks also matter. The FCC feasibility report was just published and is still under review by an expert committee. I’d rather not comment further until I’ve seen the full information. I’m part of the European Strategy Group and we’ll publish a new report by the end of the year. Until then, I want to understand all the details before forming an opinion.

It’s good to have other options too. The muon collider is not yet as technically ready as the FCC or linear collider, but it’s an exciting technology and could be the machine after next. Another could be using plasma-wakefield acceleration, which we’re very actively working on at DESY. It could enable us to build high-energy colliders on a much smaller scale. This is something we’ll need, as we can’t keep building ever-larger machines forever. Investing in accelerator R&D to develop these next-gen technologies is crucial.

Still, I really hope there will be an intermediate machine in the near future, a Higgs factory that lets us properly explore the Higgs boson. There are still many mysteries there. I like to compare it to an egg: you have to crack it open to see what’s inside. And that’s what we need to do with the Higgs.

One thing that is becoming clearer to me is the growing importance of Europe. With the current uncertainties in the US, which are already affecting health and climate research, we can’t assume fundamental research will remain unaffected. That’s why Europe’s role is more vital than ever.

I think we need to build more collaborations between European labs. Sharing expertise, especially through staff exchanges, could be particularly valuable in engineering, where we need a huge number of highly skilled professionals to deliver billion-euro projects. We've got one such project coming up ourselves, and the technical expertise for that will be critical.

I believe science has a key role to play in strengthening Europe, not just culturally, but economically too. It’s an area where we can and should come together.

The post Charting DESY’s future appeared first on CERN Courier.

]]>
Opinion DESY’s new chair, Beate Heinemann, reflects on the laboratory’s evolving role in science and society – from building next-generation accelerators to navigating Europe’s geopolitical landscape. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_INT_Heinemann.jpg
Clean di-pions reveal vector mesons https://cerncourier.com/a/clean-di-pions-reveal-vector-mesons/ Mon, 19 May 2025 07:32:21 +0000 https://cerncourier.com/?p=113155 LHCb has isolated a precisely measured, high-statistics sample of di-pions.

The post Clean di-pions reveal vector mesons appeared first on CERN Courier.

]]>
LHCb figure 1

Heavy-ion collisions usually have very high multiplicities due to colour flow and multiple nucleon interactions. However, when the ions are separated by more than about twice their radius, in so-called ultra-peripheral collisions (UPC), electromagnetically induced interactions dominate. In these colour-neutral interactions, the ions remain intact and a central system with few particles is produced, whose summed transverse momentum, related by a Fourier transform to the transverse distance between the ions, is typically less than 100 MeV/c.
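
The quoted momentum scale can be recovered from a rough uncertainty-principle estimate: the summed transverse momentum is conjugate to the transverse separation of the ions, which in an ultra-peripheral collision exceeds roughly twice the lead radius. The short Python sketch below illustrates this order-of-magnitude argument; the nuclear-radius parametrisation and the specific numbers are illustrative and are not part of the LHCb analysis.

```python
# Back-of-envelope estimate of the transverse-momentum scale in ultra-peripheral
# Pb-Pb collisions: the summed pT of the produced system is conjugate to the
# transverse distance between the ions, so pT ~ hbar*c / b with b >~ 2*R_Pb.
HBARC_MEV_FM = 197.327            # hbar*c in MeV*fm
R_PB_FM = 1.2 * 208 ** (1 / 3)    # rough nuclear radius, R = 1.2 fm * A^(1/3) ~ 7.1 fm

b_min = 2 * R_PB_FM               # minimum impact parameter for an ultra-peripheral collision
pt_scale = HBARC_MEV_FM / b_min   # characteristic transverse momentum in MeV/c

print(f"R(Pb) ~ {R_PB_FM:.1f} fm, b_min ~ {b_min:.1f} fm")
print(f"pT scale ~ {pt_scale:.0f} MeV/c (well below the quoted 100 MeV/c)")
```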

In the photoproduction of vector mesons, a photon, radiated from one of the ions, fluctuates into a virtual vector meson long before it reaches the target and then interacts with one or more nucleons in the other ion. The production of ρ mesons has been measured at the LHC by ALICE in PbPb and XeXe collisions, while J/ψ mesons have been measured in PbPb collisions by ALICE, CMS and LHCb. Now, LHCb has isolated a precisely measured, high-statistics sample of di-pions with backgrounds below 1% in which several vector mesons are seen.

Figure 1 shows the invariant-mass distribution of the pions, and the fit to the data requires contributions from the ρ meson, continuum ππ, the ω meson and two higher-mass resonances at about 1.35 and 1.80 GeV, consistent with excited ρ mesons. The higher-mass structure was also discernible in previous measurements by STAR and ALICE. Since its discovery in 1961, the ρ meson has proved challenging to describe because of its broad width and because of interference effects. More data in the di-pion channel, particularly when practically background-free down almost to the production threshold, are therefore welcome. These data may help with hadronic corrections to the prediction of the muon g-2: the dip-and-bump structure at high masses seen by LHCb is qualitatively similar to that observed by BaBar in e+e– → π+π– scattering (CERN Courier March/April 2025 p21). From the invariant-mass spectrum, LHCb has measured the cross-sections for the ρ, ω, ρ′ and ρ′′ as a function of rapidity in photoproduction on lead nuclei.
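
As an illustration of why interference matters in such a fit, the snippet below builds a coherent sum of simple Breit–Wigner amplitudes for the ρ(770), ω(782) and two excited ρ states. The masses, widths and complex couplings are placeholder values chosen only to show how relative phases generate dip-and-bump structures in the summed intensity; the actual LHCb fit uses a more complete model, including the continuum ππ contribution and mass-dependent widths.

```python
import numpy as np

def bw(m, m0, gamma):
    """Fixed-width Breit-Wigner amplitude (illustrative only)."""
    return m0 * gamma / (m0**2 - m**2 - 1j * m0 * gamma)

# Placeholder masses and widths in GeV, and complex couplings setting relative
# magnitudes and phases; in a real fit these would all be free parameters.
resonances = {"rho(770)": (0.775, 0.149), "omega(782)": (0.783, 0.0085),
              "rho(1450)": (1.35, 0.30), "rho(1700)": (1.80, 0.25)}
couplings = {"rho(770)": 1.0, "omega(782)": 0.02 * np.exp(1.7j),
             "rho(1450)": 0.15 * np.exp(2.5j), "rho(1700)": 0.08 * np.exp(0.5j)}

m = np.linspace(0.3, 2.5, 1000)     # di-pion invariant mass in GeV
amplitude = sum(c * bw(m, *resonances[k]) for k, c in couplings.items())
intensity = np.abs(amplitude) ** 2  # coherent sum: relative phases create dips and bumps

peak = m[np.argmax(intensity)]
print(f"dominant peak near {peak:.2f} GeV")  # expect a peak near the rho(770) mass
```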

Naively, the photoproduction cross-section on the nucleus should simply scale with the number of nucleons relative to that on the proton, and can be calculated in the impulse approximation, which takes into account only the nuclear form factor and neglects all other potential nuclear effects.

However, nuclear shadowing, caused by multiple interactions as the meson passes through the nucleus, leads to a suppression (CERN Courier January/February 2025 p31). In addition, there may be further non-linear QCD effects at play.

Elastic re-scattering is usually described through a Glauber calculation that takes account of multiple elastic scatters. This is extended in the GKZ model using Gribov's formalism to include inelastic scatters. The inset in figure 1 shows the measured differential cross-section for the ρ meson as a function of rapidity for LHCb data compared to the GKZ prediction, to a prediction from the STARlight generator, and to ALICE data at central rapidities. Additional suppression due to nuclear effects is observed above that predicted by GKZ.

The post Clean di-pions reveal vector mesons appeared first on CERN Courier.

]]>
News LHCb has isolated a precisely measured, high-statistics sample of di-pions. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_EF-LHCb_feature.jpg
European strategy update: the community speaks https://cerncourier.com/a/european-strategy-update-the-community-speaks/ Mon, 19 May 2025 07:18:23 +0000 https://cerncourier.com/?p=113032 A total of 263 submissions range from individual to national perspectives.

The post European strategy update: the community speaks appeared first on CERN Courier.

]]>
Community input themes of the European Strategy process

The deadline for submitting inputs to the 2026 update of the European Strategy for Particle Physics (ESPP) passed on 31 March. A total of 263 submissions, ranging from individual to national perspectives, express the priorities of the high-energy physics community (see “Community inputs” figure). These inputs will be distilled by expert panels in preparation for an Open Symposium that will be held in Venice from 23 to 27 June (CERN Courier March/April 2025 p11).

Launched by the CERN Council in March 2024, the stated aim of the 2026 update to the ESPP is to develop a visionary and concrete plan that greatly advances human knowledge in fundamental physics, in particular through the realisation of the next flagship project at CERN. The community-wide process, which is due to submit recom­mendations to Council by the end of the year, is also expected to prioritise alternative options to be pursued if the preferred project turns out not to be feasible or competitive.

“We are heartened to see so many rich and varied contributions, in particular the national input and the various proposals for the next large-scale accelerator project at CERN,” says strategy secretary Karl Jakobs of the University of Freiburg, speaking on behalf of the European Strategy Group (ESG). “We thank everyone for their hard work and rigour.”

Two proposals for flagship colliders are at an advanced stage: a Future Circular Collider (FCC) and a Linear Collider Facility (LCF). As recommended in the 2020 strategy update, a feasibility study for the FCC was released on 31 March, describing a 91 km-circumference infrastructure that could host an electron–positron Higgs and electroweak factory followed by an energy-frontier hadron collider at a later stage. Inputs for an electron–positron LCF cover potential starting configurations based on Compact Linear Collider (CLIC) or International Linear Collider (ILC) technologies. It is proposed that the latter LCF could be upgraded using CLIC, Cool Copper Collider, plasma-wakefield or energy-recovery technologies and designs. Other proposals outline a muon collider and a possible plasma-wakefield collider, as well as potential “bridging” projects to a future flagship collider. Among the latter are LEP3 and LHeC, which would site an electron–positron and an electron–proton collider, respectively, in the existing LHC tunnel. For the LHeC, an additional energy-recovery linac would need to be added to CERN’s accelerator complex.

Future choices

In probing beyond the Standard Model and more deeply studying the Higgs boson and its electroweak domain, next-generation colliders will pick up where the High-Luminosity LHC (HL-LHC) leaves off. In a joint submission, the ATLAS and CMS collaborations presented physics projections which suggest that the HL-LHC will be able to: observe the H → µ+µ– and H → Zγ decays of the Higgs boson; observe Standard Model di-Higgs production; and measure the Higgs' trilinear self-coupling with a precision better than 30%. The joint document also highlights the need for further progress in high-precision theoretical calculations aligned with the demands of the HL-LHC and serves as important input to the discussion on the choice of a future collider at CERN.

Neutrinos and cosmic messengers, dark matter and the dark sector, strong interactions and flavour physics also attracted many inputs, allowing priorities in non-collider physics to complement collider programmes. Underpinning the community’s physics aspirations are numerous submissions in the categories of accelerator science and technology, detector instrumentation and computing. Progress in these technologies is vital for the realisation of a post-LHC collider, which was also reflected by the recommendation of the 2020 strategy update to define R&D roadmaps. The scientific and technical inputs will be reviewed by the Physics Preparatory Group (PPG), which will conduct comparative assessments of the scientific potential of various proposed projects against defined physics benchmarks.

Key to the ESPP 2026 update are 57 national and national-laboratory submissions, including some from outside Europe. Most identify the FCC as the preferred project to succeed the LHC. If the FCC is found to be unfeasible, many national communities propose that a linear collider at CERN should be pursued, while taking into account the global context: a 250 GeV linear collider may not be competitive if China decides to proceed with a Circular Electron Positron Collider at a comparable energy on the anticipated timescale, potentially motivating a higher energy electron–positron machine or a proton–proton collider instead.

Complex process

In its review, the ESG will take the physics reach of proposed colliders as well as other factors into account. This complex process will be undertaken by seven working groups, addressing: national inputs; diversity in European particle physics; project comparison; implementation of the strategy and deliverability of large projects; relations with other fields of physics; sustainability and environmental impact; public engagement, education, communication and social and career aspects for the next generation; and knowledge and technology transfer. “The ESG and the PPG have their work cut out and we look forward to further strong participation by the full community, in particular at the Open Symposium,” says Jakobs.

A briefing book prepared by the PPG based on the community input and discussions at the Open Symposium will be submitted to the ESG by the end of September for consideration during a five-day-long drafting session, which is scheduled to take place from 1 to 5 December. The CERN Council will then review the final ESG recommendations ahead of a special session to be held in Budapest in May 2026.

The post European strategy update: the community speaks appeared first on CERN Courier.

]]>
News A total of 263 submissions range from individual to national perspectives. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_NA_ESPP.png
Machine learning in industry https://cerncourier.com/a/machine-learning-in-industry/ Mon, 19 May 2025 07:10:04 +0000 https://cerncourier.com/?p=113165 Antoni Shtipliyski offers advice on how early-career researchers can transition into machine-learning roles in industry.

The post Machine learning in industry appeared first on CERN Courier.

]]>
Antoni Shtipliyski

In the past decade, machine learning has surged into every corner of industry, from travel and transport to healthcare and finance. For early-career researchers, who have spent their PhDs and postdocs coding, a job in machine learning may seem a natural next step.

“Scientists often study nature by attempting to model the world around us into math­ematical models and computer code,” says Antoni Shtipliyski, engineering manager at Skyscanner. “But that’s only one part of the story if the aim is to apply these models to large-scale research questions or business problems. A completely orthogonal set of challenges revolves around how people collaborate to build and operate these systems. That’s where the real work begins.”

Used to large-scale experiments and collaborative problem solving, particle physicists are uniquely well-equipped to step into machine-learning roles. Shtipliyski worked on upgrades for the level-1 trigger system of the CMS experiment at CERN, before leaving to lead the machine-learning operations team in one of the biggest travel companies in the world.

Effective mindset

“At CERN, building an experimental detector is just the first step,” says Shtipliyski. “To be useful, it needs to be operated effectively over a long period of time. That’s exactly the mindset needed in industry.”

During his time as a physicist, Shtipliyski gained multiple skills that continue to help him at work today, but there were also a number of other areas he developed to succeed in machine learning in industry. One critical gap in a physicist's portfolio, he notes, is that many people interpret machine-learning careers as purely algorithmic development and model training.

“At Skyscanner, my team doesn’t build models directly,” he says. “We look after the platform used to push and serve machine-learning models to our users. We oversee the techno-social machine that delivers these models to travellers. That’s the part people underestimate, and where a lot of the challenges lie.”

An important factor for physicists transitioning out of academia is to understand the entire lifecycle of a machine-learning project. This includes not only developing an algorithm, but deploying it, monitoring its performance, adapting it to changing conditions and ensuring that it serves business or user needs.

“In practice, you often find new ways that machine-learning models surprise you,” says Shtipliyski. “So having flexibility and confidence that the evolved system still works is key. In physics we’re used to big experiments like CMS being designed 20 years before being built. By the time it’s operational, it’s adapted so much from the original spec. It’s no different with machine-learning systems.”

This ability to live with ambiguity and work through evolving systems is one of the strongest foundations physicists can bring. But large complex systems cannot be built alone, so companies will be looking for examples of soft skills: teamwork, collaboration, communication and leadership.

“Most people don’t emphasise these skills, but I found them to be among the most useful,” Shtipliyski says. “Learning to write and communicate yourself is incredibly powerful. Being able to clearly express what you’re doing and why you’re doing it, especially in high-trust environments, makes everything else easier. It’s something I also look for when I do hiring.”

Industry may not offer the same depth of exploration as academia, but it does offer something equally valuable: breadth, variety and a dynamic environment. Work evolves fast, deadlines come more readily and teams are constantly changing.

“In academia, things tend to move more slowly. You’re encouraged to go deep into one specific niche,” says Shtipliyski. “In industry, you often move faster and are sometimes more shallow. But if you can combine the depth of thought from academia with the breadth of experience from industry, that’s a winning combination.”

Applied skills

For physicists eyeing a career in machine learning, the best first step is to familiarise themselves with the tools and practices for building and deploying models. Show that you can take the skills developed in academia and apply them in other environments. This tells recruiters that you have a willingness to learn, and is a simple but effective way of demonstrating commitment to a project from start to finish, beyond your assigned work.

“People coming from physics or mathematics might want to spend more time on implementation,” says Shtipliyski. “Even if you follow a guided walkthrough online, or complete classes on Coursera, going through the whole process of implementing things from scratch teaches you a lot. This puts you in a position to reason about the big picture and shows employers your willingness to stretch yourself, to make trade-offs and to evaluate your work critically.”

A common misconception is that practising machine learning outside of academia is somehow less rigorous or less meaningful. But in many ways, it can be more demanding.

"Scientific development is often driven by arguments of beauty and robustness. In industry, there's less patience for that," he says. "You have to apply it to a real-world domain – finance, travel, healthcare. That domain shapes everything: your constraints, your models, even your ethics."

Shtipliyski emphasises that the technical side of machine learning is only one half of the equation. The other half is organisational: helping teams work together, navigate constraints and build systems that evolve over time. Physicists would benefit from exploring different business domains to understand how machine learning is used in different contexts. For example, GDPR constraints make privacy a critical issue in healthcare and tech. Learning how government funding is distributed throughout each project, as well as understanding how to build a trusting relationship between the funding agencies and the team, is equally important.

“A lot of my day-to-day work is just passing information, helping people build a shared mental model,” he says. “Trust is earned by being vulnerable yourself, which allows others to be vulnerable in turn. Once that happens, you can solve almost any problem.”

Taking the lead

Particle physicists are used to working in high-stakes, international teams, so this collaborative mindset is engrained in their training. But many may not have had the opportunity to lead, manage or take responsibility for an entire project from start to finish.

“In CMS, I did not have a lot of say due to the complexity and scale of the project, but I was able to make meaningful contributions in the validation and running of the detector,” says Shtipliyski. “But what I did not get much exposure to was the end-to-end experience, and that’s something employers really want to see.”

This does not mean you need to be a project manager to gain leadership experience. Early-career researchers have the chance to up-skill when mentoring a newcomer, help improve the team’s workflow in a proactive way, or network with other physicists and think outside the box.

“Even if you just shadow an existing project, if you can talk confidently about what was done, why it was done and how it might be done differently – that’s huge.”

Many early-career researchers hesitate prior to leaving academia. They worry about making the “wrong” choice, or being labelled as a “finance person” or “tech person” as soon as they enter another industry. This is something Shtipliyski struggled to reckon with, but eventually realised that such labels do not define you.

“It was tough at CERN trying to anticipate what comes next,” he admits. “I thought that I could only have one first job. What if it’s the wrong one? But once a scientist, always a scientist. You carry your experiences with you.”

Shtipliyski quickly learnt that industry operates under a different set of rules: everyone comes from a different background, and the level of expertise differs depending on the person you speak to next. Having faced intense imposter syndrome at CERN, where he shared spaces with world-leading experts, he found that industry offered a more level playing field.

“In academia, there’s a kind of ladder: the longer you stay, the better you get. In industry, it’s not like that,” says Shtipliyski. “You can be the dedicated expert in the room, even if you’re new. That feels really empowering.”

Industry rewards adaptability as much as expertise. For physicists stepping beyond academia, the challenge is not abandoning their training, but expanding it – learning to navigate ambiguity, communicate clearly and understand the full lifecycle of real-world systems. Harnessing a scientist’s natural curiosity, and demonstrating flexibility, allows the transition to become less about leaving science behind, and more about discovering new ways to apply it.

“You are the collection of your past experiences,” says Shtipliyski. “You have the freedom to shape the future.”

The post Machine learning in industry appeared first on CERN Courier.

]]>
Careers Antoni Shtipliyski offers advice on how early-career researchers can transition into machine-learning roles in industry. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_CAR_Shtipliyski_feature.jpg
DESI hints at evolving dark energy https://cerncourier.com/a/desi-hints-at-evolving-dark-energy/ Fri, 16 May 2025 16:57:24 +0000 https://cerncourier.com/?p=113047 The new data could indicate a deviation from the ΛCDM model.

The post DESI hints at evolving dark energy appeared first on CERN Courier.

]]>
The dynamics of the universe depend on a delicate balance between gravitational attraction from matter and the repulsive effect of dark energy. A universe containing only matter would eventually slow down its expansion due to gravitational forces and possibly recollapse. However, observations of Type Ia supernovae in the late 1990s revealed that our universe’s expansion is in fact accelerating, requiring the introduction of dark energy. The standard cosmological model, called the Lambda Cold Dark Matter (ΛCDM) model, provides an elegant and robust explanation of cosmological observations by including normal matter, cold dark matter (CDM) and dark energy. It is the foundation of our current understanding of the universe.

Cosmological constant

In ΛCDM, Λ refers to the cosmological constant – a parameter introduced by Albert Einstein to counter the effect of gravity in his pursuit of a static universe. With the knowledge that the universe is accelerating, Λ is now used to quantify this acceleration. An important parameter that describes dark energy, and therefore influences the evolution of the universe, is its equation-of-state parameter, w. This value relates the pressure dark energy exerts on the universe, p, to its energy density, ρ, via p = wρ. Within ΛCDM, w is –1 and ρ is constant – a combination that has to date explained observations well. However, new results by the Dark Energy Spectroscopic Instrument (DESI) put these assumptions under increasing stress.
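
For a constant equation of state, the continuity equation gives the standard scaling ρ ∝ a^(–3(1+w)) for the dark-energy density as a function of the scale factor a, so w = –1 corresponds to the constant density of a cosmological constant. The short Python sketch below simply evaluates this relation for a few illustrative values of w; the numbers are not DESI results.

```python
import numpy as np

def rho_de(a, w):
    """Dark-energy density relative to today for a constant equation-of-state
    parameter w: rho(a)/rho_0 = a**(-3*(1+w)) (standard FLRW scaling)."""
    return a ** (-3.0 * (1.0 + w))

a = np.array([0.5, 1.0, 2.0])          # scale factor (a = 1 today)
for w in (-1.0, -0.9, -1.1):           # illustrative values only
    print(f"w = {w:+.1f}: rho/rho_0 at a = 0.5, 1, 2 ->", np.round(rho_de(a, w), 3))
# w = -1 gives a constant density (a cosmological constant); w > -1 dilutes with
# expansion, while w < -1 ("phantom") grows as the universe expands.
```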

These new results are part of the second data release (DR2) from DESI. Mounted on the Nicholas U Mayall 4-metre telescope at Kitt Peak National Observatory in Arizona, DESI is optimised to measure the spectra of a large number of objects in the sky simultaneously. Joint observations are possible thanks to 5000 optical fibres controlled by robots, which continuously optimise the focal plane of the instrument. Combined with a highly efficient processing pipeline, this allows DESI to build a catalogue of distance measurements based on the velocity-induced shift in wavelength, or redshift, of each object. For its first data release, DESI used 6 million such redshifts, allowing it to show that w was several sigma away from its expected value of –1 (CERN Courier May/June 2024 p11). For DR2, 14 million measurements are used, enough to provide strong hints of w changing with time.

The first studies of the expansion rate of the universe were based on redshift measurements of local objects, such as supernovae. As the objects are relatively close, they provide data on the acceleration at small redshifts. An alternative method is to use the cosmic microwave background (CMB), which allows for measurements of the evolution of the early universe through complex imprints left on the current distribution of the CMB. The significantly smaller expansion rate measured through the CMB compared to local measurements resulted in a “Hubble tension”, prompting novel measurements to resolve or explain the observed difference (CERN Courier March/April 2025 p28). One such attempt comes from DESI, which aims to provide a detailed 3D map of the universe focusing on the distance between galaxies to measure the expansion (see “3D map” figure).

Tension with ΛCDM

The 3D map produced by DESI can be used to study the evolution of the universe, as it holds imprints of small fluctuations in the density of the early universe. These density fluctuations have been studied through their imprint on the CMB; however, they also left imprints in the distribution of baryonic matter, which propagated as acoustic waves until the epoch of recombination. The variations in baryonic density grew over time into the varying densities of galaxies and other large-scale structures that are observed today.

The regions originally containing higher baryon densities are now those with larger densities of galaxies. Exactly how the matter-density fluctuations evolved into variations in galaxy densities throughout the universe depends on a range of parameters from the ΛCDM model, including w. The detailed map of the universe produced by DESI, which contains a range of objects with redshifts up to 2.5, can therefore be fitted against the ΛCDM model.

Among other studies, the latest data from DESI were combined with CMB observations and fitted to the ΛCDM model. This worked relatively well, although it requires a lower matter-density parameter than is found from CMB data alone. However, the resulting cosmological parameters give a poor match to the low-redshift data coming from supernova measurements. Similarly, fitting the ΛCDM model using the supernova data results in poor agreement with both the DESI and CMB data, thereby putting some strain on the ΛCDM model. Things don't get significantly better when adding some freedom in these analyses by allowing w to differ from –1 while remaining constant in time.

An adaptation of the ΛCDM model that results in agreement with all three datasets requires w to evolve with redshift, or time. The implications for the acceleration of the universe based on these results are shown in the "Tension with ΛCDM" figure, which shows the deceleration parameter q of the expansion of the universe as a function of redshift; q < 0 implies an accelerating universe. In the ΛCDM model, the acceleration increases with time, as the redshift approaches 0. The DESI data suggest that the acceleration of the universe started earlier, but is currently weaker than that predicted by ΛCDM.

Although this model matches the data well, a theoretical explanation is difficult. In particular, the data implies that w(z) was below –1, which translates into an energy density that increases with the expansion; however, the energy density seems to have peaked at a redshift of 0.45 and is now decreasing.
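
A common way to let w evolve is the two-parameter form w(a) = w0 + wa(1 – a), for which the continuity equation integrates exactly to ρ(a)/ρ0 = a^(–3(1+w0+wa)) exp(–3wa(1 – a)); the density then peaks where w(a) crosses –1. The sketch below evaluates this for illustrative values of w0 and wa (not the DESI best-fit numbers) to show how such a parametrisation can produce a dark-energy density that peaked at moderate redshift and is decreasing today, qualitatively as described above.

```python
import numpy as np

def w_cpl(a, w0, wa):
    """Two-parameter equation of state w(a) = w0 + wa*(1 - a)."""
    return w0 + wa * (1.0 - a)

def rho_de(a, w0, wa):
    """Dark-energy density relative to today for the w(a) above
    (exact integral of the continuity equation)."""
    return a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))

w0, wa = -0.75, -0.85                  # illustrative parameters, not a DESI fit
z = np.linspace(0.0, 2.0, 2001)
a = 1.0 / (1.0 + z)
rho = rho_de(a, w0, wa)

z_peak = z[np.argmax(rho)]             # density peaks where w(a) crosses -1
print(f"w(z=0) = {w_cpl(1.0, w0, wa):.2f}, density peaks near z = {z_peak:.2f}")
```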

Overall, the new data release provides significant evidence of a deviation from the ΛCDM model. The exact significance depends on the specific analysis and which datasets are combined; however, all such studies give similar results. As no 5σ discrepancy has been found yet, there is no reason to discard ΛCDM, though this could change with another two years of DESI data coming up, along with data from the European Euclid mission, the Vera C Rubin Observatory and the Nancy Grace Roman Space Telescope. Each will provide new insights into the expansion over various redshift ranges.

The post DESI hints at evolving dark energy appeared first on CERN Courier.

]]>
News The new data could indicate a deviation from the ΛCDM model. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_NA-DESI.jpg
FCC feasibility study complete https://cerncourier.com/a/fcc-feasibility-study-complete/ Fri, 16 May 2025 16:40:37 +0000 https://cerncourier.com/?p=113038 The final report of a study investigating the technical and financial feasibility of a Future Circular Collider at CERN was released on 31 March.

The post FCC feasibility study complete appeared first on CERN Courier.

]]>
The final report of a detailed study investigating the technical and financial feasibility of a Future Circular Collider (FCC) at CERN was released on 31 March. Building on a conceptual design study conducted between 2014 and 2018, the three-volume report is authored by over 1400 scientists and engineers in more than 400 institutes worldwide, and covers aspects of the project ranging from civil engineering to socioeconomic impact. As recommended in the 2020 update to the European Strategy for Particle Physics (ESPP), it was completed in time to serve as an input to the ongoing 2026 update to the ESPP (see “European strategy update: the community speaks“).

The FCC is a proposed collider infrastructure that could succeed the LHC in the 2040s. Its scientific motivation stems from the discovery in 2012 of the final particle of the Standard Model (SM), the Higgs boson, with a mass of just 125 GeV, and the wealth of precision measurements and exploratory searches during 15 years of LHC operations that have excluded many signatures of new physics at the TeV scale. The report argues that the FCC is particularly well equipped to study the Higgs and associated electroweak sectors in detail and that it provides a broad and powerful exploratory tool that would push the limits of the unknown as far as possible.

The report describes how the FCC will seek to address key domains formulated in the 2013 and 2020 ESPP updates, including: mapping the properties of the Higgs and electroweak gauge bosons with accuracies orders of magnitude better than today to probe the processes that led to the emergence of the Brout–Englert–Higgs field’s nonzero vacuum expectation value; ensuring a comprehensive and accurate campaign of precision electroweak, quantum chromodynamics, flavour and top-quark measurements sensitive to tiny deviations from the SM, probing energy scales far beyond the direct kinematic reach; improving by orders of magnitude the sensitivity to rare and elusive phenomena at low energies, including the possible discovery of light particles with very small couplings such as those relevant to the search for dark matter; and increasing by at least an order of magnitude the direct discovery reach for new particles at the energy frontier.

The FCC research programme outlines two possible stages: an electron–positron collider (FCC-ee) running at several centre-of-mass energies to serve as a Higgs, electroweak and top-quark factory, followed at a later stage by a proton–proton collider (FCC-hh) operating at an unprecedented collision energy. An FCC-ee with four detectors is judged to be "the electroweak, Higgs and top factory project with the highest luminosity proposed to date", able to produce 6 × 10¹² Z bosons, 2.4 × 10⁸ W pairs, almost 3 × 10⁶ Higgs bosons, and 2 × 10⁶ top-quark pairs over 15 years of operations. Its versatile RF system would enable flexibility in the running sequence, states the report, allowing experimenters to move between physics programmes and scan through energies with ease. The report also outlines how the FCC-ee injector offers opportunities for other branches of science, including the production of spatially coherent photon beams with a brightness several orders of magnitude higher than that of any existing or planned light source.

The estimated cost of the construction of the FCC-ee is CHF 15.3 billion. This investment, which would be distributed over a period of about 15 years starting from the early 2030s, includes civil engineering, technical infrastructure, electron and positron accelerators, and four detectors.

Ready for construction

The report describes how key FCC-ee design approaches, such as a double-ring layout, top-up injection with a full-energy booster, a crab-waist collision scheme, and precise energy calibration, have been demonstrated at several previous or presently operating colliders. The FCC-ee is thus "technically ready for construction" and is projected to deliver four-to-five orders of magnitude higher luminosity per unit electrical power than LEP. During operation, its energy consumption is estimated to vary from 1.1 to 1.8 TWh/y depending on the operation mode, compared to CERN's current consumption of about 1.3 TWh/y. Decarbonised energy, including an ever-growing contribution from renewable sources, would be the main source of energy for the FCC. Ongoing technology R&D aims at further increasing FCC-ee's energy efficiency (see "Powering into the future").

Assuming 14 T Nb3Sn magnet technology as a baseline design, a subsequent hadron collider with a centre-of-mass energy of 85 TeV entering operation in the early 2070s would extend the energy frontier by a factor of six and provide an integrated luminosity five to 10 times higher than that of the HL-LHC during 25 years of operation. With four detectors, FCC-hh would increase the mass reach of direct searches for new particles to several tens of TeV, probing a broad spectrum of beyond-the-SM theories and potentially identifying the sources of any deviations found in precision measurements at FCC-ee, especially those involving the Higgs boson. An estimated sample of more than 20 billion Higgs bosons would allow the absolute determination of the Higgs couplings to muons, to photons, to the top quark and to Zγ below the percent level, while di-Higgs production would bring the uncertainty on the Higgs self-coupling below the 5% level. FCC-hh would also significantly advance understanding of the hot QCD medium by enabling lead–lead and other heavy-ion collisions at unprecedented energies, and could be configured to provide electron–proton and electron–ion collisions, says the report.

The FCC-hh design is based on LHC experience and would leverage a substantial amount of the technical infrastructure built for the first FCC stage. Two hadron injector options are under study involving a superconducting machine in either the LHC or SPS tunnel. For the purpose of a technical feasibility analysis, a reference scenario based on 14 T Nb3Sn magnets cooled to 1.9 K was considered, yielding 2.4 MW of synchrotron radiation and a power consumption of 360 MW or 2.3 TWh/y – a comparable power consumption to FCC-ee.

FCC-hh's power consumption might be reduced below 300 MW if the magnet temperature can be raised to 4.5 K. Outlining the potential use of high-temperature superconductors for 14 to 20 T dipole magnets operating at temperatures between 4.5 K and 20 K, the report notes that such technology could either extend the centre-of-mass energy of FCC-hh to 120 TeV or lead to significantly improved operational sustainability at the same collision energy. "The time window of more than 25 years opened by the lepton-collider stage is long enough to bring that technology to market maturity," says FCC study leader Michael Benedikt (CERN). "High-temperature superconductors have significant potential for industrial and societal applications, and particle accelerators can serve as pilots for market uptake, as was the case with the Tevatron and the LHC for NbTi technology."

Society and sustainability

The report details the concepts and paths to keep the FCC’s environmental footprint low while boosting new technologies to benefit society and developing territorial synergies such as energy reuse. The civil construction process for FCC-ee, which would also serve FCC-hh, is estimated to result in about 500,000 tCO2(eq) over a period of 10 years, which the authors say corresponds to approximately one-third of the carbon budget of the Paris Olympic Games. A socio-economic impact assessment of the FCC integrating environmental aspects throughout its entire lifecycle reveals a positive cost–benefit ratio, even under conservative assumptions and adverse implementation conditions.

A major achievement of the FCC feasibility study has been the development of the layout and placement of the collider ring and related infrastructure, which have been optimised for scientific benefit while taking into account territorial compatibility, environmental and construction constraints, and cost. No fewer than 100 scenarios were developed and analysed before settling on the preferred option: a ring circumference of 90.7 km with shaft depths ranging between 200 and 400 m, with eight surface sites and four experiments. Throughout the study, CERN has been accompanied by its host states, France and Switzerland, working with entities at the local, regional and national levels to ensure a constructive dialogue with territorial stakeholders.

The final report of the FCC feasibility study together with numerous referenced technical documents have been submitted to the ongoing ESPP 2026 update, along with studies of alternative projects proposed by the community. The CERN Council may take a decision around 2028.

“After four years of effort, perseverance and creativity, the FCC feasibility study was concluded on 31 March 2025,” says Benedikt. “The actual journey towards the realisation of the FCC starts now and promises to be at least as fascinating as the successive steps that brought us to the present state.”

The post FCC feasibility study complete appeared first on CERN Courier.

]]>
News The final report of a study investigating the technical and financial feasibility of a Future Circular Collider at CERN was released on 31 March. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_NA-FCC.jpg
Gravitational remnants in the sky https://cerncourier.com/a/gravitational-remnants-in-the-sky/ Fri, 16 May 2025 16:36:48 +0000 https://cerncourier.com/?p=113216 Relic Gravitons, by Massimo Giovannini of INFN Milan Bicocca, offers a timely and authoritative guide to one of the most exciting frontiers in modern cosmology and particle physics.

The post Gravitational remnants in the sky appeared first on CERN Courier.

]]>
Astrophysical gravitational waves have revolutionised astronomy; the eventual detection of cosmological gravitons promises to open an otherwise inaccessible window into the universe’s earliest moments. Such a discovery would offer profound insights into the hidden corners of the early universe and physics beyond the Standard Model. Relic Gravitons, by Massimo Giovannini of INFN Milan Bicocca, offers a timely and authoritative guide to the most exciting frontiers in modern cosmology and particle physics.

Giovannini is an esteemed scholar and household name in the fields of theoretical cosmology and early-universe physics. He has written influential research papers, reviews and books on cosmology, providing detailed discussions on several aspects of the early universe. He also authored 2008’s A Primer on the Physics of the Cosmic Microwave Background – a book most cosmologists are very familiar with.

In Relic Gravitons, Giovannini provides a comprehensive exploration of recent developments in the field, striking a remarkable balance between clarity, physical intuition and rigorous mathematical formalism. As such, it serves as an excellent reference – equally valuable for both junior researchers and seasoned experts seeking depth and insight into theoretical cosmology and particle physics.

Relic Gravitons opens with an overview of cosmological gravitons, offering a broad perspective on gravitational waves across different scales and cosmological epochs, while drawing parallels with the electromagnetic spectrum. This graceful introduction sets the stage for a well-contextualised and structured discussion.

Gravitational rainbow

Relic gravitational waves from the early universe span 30 orders of magnitude, from attohertz to gigahertz. Their wavelengths are constrained from above by the Hubble radius, setting a lower frequency bound of 10⁻¹⁸ Hz. At the lowest frequencies, measurements of the cosmic microwave background (CMB) provide the most sensitive probe of gravitational waves. In the nanohertz range, pulsar timing arrays serve as powerful astrophysical detectors. At intermediate frequencies, laser and atomic interferometers are actively probing the spectrum. At higher frequencies, only wide-band interferometers such as LIGO and Virgo currently operate, primarily within the audio band spanning from a few hertz to several kilohertz.
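
The quoted lower bound follows from requiring the wavelength to fit within the Hubble radius c/H0, which corresponds to a minimum frequency of order H0 itself. The few lines below check this order of magnitude using a standard value of the Hubble constant; the exact prefactor depends on conventions and is not important here.

```python
# Order-of-magnitude check of the lowest relic-graviton frequency: a wavelength
# equal to the Hubble radius c/H0 corresponds to a frequency f ~ c/(c/H0) = H0.
H0_KM_S_MPC = 67.4                      # Hubble constant (illustrative standard value)
MPC_IN_KM = 3.0857e19                   # one megaparsec in kilometres

H0_SI = H0_KM_S_MPC / MPC_IN_KM         # H0 in 1/s
print(f"f_min ~ H0 ~ {H0_SI:.1e} Hz")   # ~2e-18 Hz, i.e. of order 10^-18 Hz
```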

Relic Gravitons

The theoretical foundation begins with a clear and accessible introduction to tensor modes in flat spacetime, followed by spherical harmonics and polarisations. With these basics in place, tensor modes in curved spacetime are also explored, before progressing to effective action, the quantum mechanics of relic gravitons and effective energy density. This structured progression builds a solid framework for phenomenological applications.

The second part of the book covers the signals of the concordance paradigm, including discussions of Sakharov oscillations and of short, intermediate and long wavelengths, before entering technical interludes in the next section. Here, Giovannini emphasises that, because the evolution of the comoving Hubble radius is uncertain, the spectral energy density and other observables require approximate methods. The chapter expands to include conventional results using the Wentzel–Kramers–Brillouin approach, which is particularly useful when early-universe dynamics deviate from standard inflation.

Phenomenological implications are discussed in the final section, starting with the low-frequency branch, which covers the lowest-frequency domain. Giovannini then examines the intermediate- and high-frequency ranges. The concordance paradigm suggests that large-scale inhomogeneities originate from quantum mechanics, with travelling waves transforming into standing waves. The penultimate chapter addresses the hot topic of the "quantumness" of relic gravitons, before diving into the conclusion. The book finishes with five appendices covering a range of useful topics, from notation to related basics of general relativity and cosmic perturbations.

Relic Gravitons is a must-read for anyone intrigued by the gravitational-wave background and its unparalleled potential to unveil new physics. It is an invaluable resource for those interested in gravitational waves and the unique potential to explore the unknown parts of particle physics and cosmology.

The post Gravitational remnants in the sky appeared first on CERN Courier.

]]>
Review Relic Gravitons, by Massimo Giovannini of INFN Milan Bicocca, offers a timely and authoritative guide to one of the most exciting frontiers in modern cosmology and particle physics. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_Rev_GreenBank.jpg
Colour information diffuses in Frankfurt https://cerncourier.com/a/colour-information-diffuses-in-frankfurt/ Fri, 16 May 2025 16:35:40 +0000 https://cerncourier.com/?p=113057 The 31st Quark Matter conference was the best attended in the series’ history, with more than 1000 participants.

The post Colour information diffuses in Frankfurt appeared first on CERN Courier.

]]>
Quark Matter 2025

The 31st Quark Matter conference took place from 6 to 12 April at Goethe University in Frankfurt, Germany. This edition of the world’s flagship conference for ultra-relativistic heavy-ion physics was the best attended in the series’ history, with more than 1000 participants.

A host of experimental measurements and theoretical calculations targeted fundamental questions in many-body QCD. These included the search for a critical point along the QCD phase diagram, the extraction of the properties of the deconfined quark–gluon plasma (QGP) medium created in heavy-ion collisions, and the search for signatures of the formation of this deconfined medium in smaller collision systems.

Probing thermalisation

New results highlighted the ability of the strong force to thermalise the out-of-equilibrium QCD matter produced during the collisions. Thermalisation can be probed by taking advantage of spatial anisotropies in the initial collision geometry which, due to the rapid onset of strong interactions at early times, result in pressure gradients across the system. These pressure gradients in turn translate into a momentum-space anisotropy of produced particles in the bulk, which can be experimentally measured by taking a Fourier transform of the azimuthal distribution of final-state particles with respect to a reference event axis.
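
As a minimal illustration of how such a Fourier coefficient is extracted, the toy Monte Carlo below generates particles with an azimuthal distribution dN/dφ ∝ 1 + 2v2 cos 2(φ – Ψ) and recovers v2 as the average of cos 2(φ – Ψ). In a real analysis the event-plane angle Ψ is not known and must be estimated, or replaced by multi-particle correlations; the values used here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy event sample: azimuthal angles drawn from dN/dphi ∝ 1 + 2*v2*cos(2*(phi - psi)).
v2_true, psi = 0.10, 0.4               # illustrative elliptic-flow coefficient and event plane
phi = rng.uniform(-np.pi, np.pi, 200_000)
accept = rng.uniform(0.0, 1.0 + 2.0 * v2_true, phi.size) < \
    1.0 + 2.0 * v2_true * np.cos(2.0 * (phi - psi))
phi = phi[accept]

# Second Fourier coefficient of the azimuthal distribution w.r.t. the event axis.
v2_measured = np.mean(np.cos(2.0 * (phi - psi)))
print(f"input v2 = {v2_true:.2f}, recovered v2 = {v2_measured:.3f}")
```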

An area of active experimental and theoretical interest is to quantify the degree to which heavy quarks, such as charm and beauty, participate in this collective behaviour, which informs on the diffusion properties of the medium. The ALICE collaboration presented the first measurement of the second-order coefficient of the momentum anisotropy of charm baryons in Pb–Pb collisions, showing significant collective behaviour and suggesting that charm quarks undergo some degree of thermalisation. This collective behaviour appears to be stronger in charm baryons than charm mesons, following similar observations for light flavour.

Due to the nature of thermalisation and the long hydrodynamic phase of the medium in Pb–Pb collisions, signatures of the microscopic dynamics giving rise to the thermalisation are often washed out in bulk observables. However, local excitations of the hydrodynamic medium, caused by the propagation of a high-energy jet through the QGP, can offer a window into such dynamics. Due to coupling to the coloured medium, the jet loses energy to the QGP, which in turn re-excites the thermalised medium. These excited states quickly decay and dissipate, and the local perturbation can partially thermalise. This results in a correlated response of the medium in the direction of the propagating jet, the distribution of which allows measurement of the thermalisation properties of the medium in a more controlled manner.

In this direction, the CMS collaboration presented the first measurement of an event-wise two-point energy–energy correlator, for events containing a Z boson, in both pp and Pb–Pb collisions. The two-point correlator represents the energy-weighted cross section of the angle between particle pairs in the event and can separate out QCD effects at different scales, as these populate different regions in angular phase space. In particular, the correlated response of the medium is expected to appear at large angles in the correlator in Pb–Pb collisions.
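
A bare-bones version of such a correlator can be written in a few lines: for each event, histogram the angular separation of all particle pairs weighted by the product of their energies (here transverse momenta), normalised per event. The function below is only a schematic sketch under those assumptions; the binning, weighting and normalisation choices of the actual CMS measurement differ in detail.

```python
import numpy as np

def two_point_eec(pt, phi, eta, bins):
    """Energy-weighted histogram of pairwise angular separations in one event --
    a schematic two-point correlator, not the exact CMS observable."""
    dphi = np.abs(phi[:, None] - phi[None, :])
    dphi = np.minimum(dphi, 2.0 * np.pi - dphi)          # wrap azimuthal differences
    deta = eta[:, None] - eta[None, :]
    dr = np.hypot(dphi, deta)                            # pairwise angular separation
    weights = pt[:, None] * pt[None, :]                  # energy (here pT) weighting
    mask = ~np.eye(len(pt), dtype=bool)                  # drop self-pairs
    hist, edges = np.histogram(dr[mask], bins=bins, weights=weights[mask])
    return hist / np.sum(pt) ** 2, edges                 # normalise per event

# Toy event: 50 particles with random pT (GeV), azimuth and pseudorapidity.
rng = np.random.default_rng(0)
pt = rng.exponential(2.0, 50)
phi = rng.uniform(-np.pi, np.pi, 50)
eta = rng.normal(0.0, 1.5, 50)
eec, edges = two_point_eec(pt, phi, eta, bins=np.linspace(0.0, 3.0, 31))
print(np.round(eec[:5], 4))
```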

The use of a colourless Z boson, which does not interact in the QGP, allows CMS to compare events with similar initial virtuality scales in pp and Pb–Pb collisions, without incurring biases due to energy loss in the QCD probes. The collaboration showed modifications in the two-point correlator at large angles, from pp to Pb–Pb collisions, alluding to a possible signature of the correlated response of the medium to the traversing jets. Such measurements can help guide models into capturing the relevant physical processes underpinning the diffusion of colour information in the medium.

Looking to the future

The next edition of this conference series will take place in 2027 in Jeju, South Korea, and should notably feature the latest results from the upgraded Run 3 detectors at the LHC and the newly commissioned sPHENIX detector at RHIC. New collision systems like O–O at the LHC will help shed light on many of the properties of the QGP, including its thermalisation, by varying the lifetime of the pre-equilibrium and hydrodynamic phases in the collision evolution.

The post Colour information diffuses in Frankfurt appeared first on CERN Courier.

]]>
Meeting report The 31st Quark Matter conference was the best attended in the series’ history, with more than 1000 participants. https://cerncourier.com/wp-content/uploads/2025/05/CCMayJun25_FN_Quark_feature.jpg
PhyStat turns 25 https://cerncourier.com/a/phystat-turns-25/ Fri, 16 May 2025 16:31:48 +0000 https://cerncourier.com/?p=112707 On 16 January, physicists and statisticians met in the CERN Council Chamber to celebrate 25 years of the PhyStat series of conferences, workshops and seminars.

The post PhyStat turns 25 appeared first on CERN Courier.

]]>
Confidence intervals

On 16 January, physicists and statisticians met in the CERN Council Chamber to celebrate 25 years of the PhyStat series of conferences, workshops and seminars, which bring together physicists, statisticians and scientists from related fields to discuss, develop and disseminate methods for statistical data analysis and machine learning.

The special symposium heard from the founder and primary organiser of the PhyStat series, Louis Lyons (Imperial College London and University of Oxford), who together with Fred James and Yves Perrin initiated the movement with the "Workshop on Confidence Limits" in January 2000. According to Lyons, the aim of the series was to bring together physicists and statisticians, a philosophy that has been followed and extended throughout the 22 PhyStat workshops and conferences, as well as numerous seminars and "informal reviews". Speakers also called attention to the field's recognition in the Royal Statistical Society's pictorial timeline of statistics, which starts with the use of averages by Hippias of Elis in 450 BC and culminates with the 2012 discovery of the Higgs boson at 5σ significance.

Lyons and Bob Cousins (UCLA) offered their views on the evolution of statistical practice in high-energy physics, starting in the 1960s bubble-chamber era, strongly influenced by the 1971 book Statistical Methods in Experimental Physics by W T Eadie et al., its 2006 second edition by symposium participant Fred James (CERN), as well as Statistics for Nuclear and Particle Physics (1985) by Louis Lyons – reportedly the most stolen book from the CERN library. Both Lyons and Cousins noted the interest of the PhyStat community not only in practical solutions to concrete problems but also in foundational questions in statistics, with the focus on frequentist methods setting high-energy physics somewhat apart from the Bayesian approach more widely used in astrophysics.

Giving his view of the PhyStat era, ATLAS physicist and director of the University of Wisconsin Data Science Institute Kyle Cranmer emphasised the enormous impact that PhyStat has had on the field, noting important milestones such as the ability to publish full likelihood models through the statistical package RooStats, the treatment of systematic uncertainties with profile-likelihood ratio analyses, methods for combining analyses, and the reuse of published analyses to place constraints on new physics models. Regarding the next 25 years, Cranmer predicted the increasing use of methods that have emerged from PhyStat, such as simulation-based inference, and pointed out that artificial intelligence (the elephant in the room) could drastically alter how we use statistics.
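
A flavour of the statistical machinery that PhyStat has helped standardise is given by the textbook counting experiment below: the discovery significance for n observed events on a known background b, using the asymptotic likelihood-ratio formula Z = sqrt(2(n ln(n/b) – (n – b))). The numbers are illustrative, and the formula ignores systematic uncertainties, which is exactly where profile-likelihood methods come in.

```python
from math import log, sqrt

def discovery_significance(n_obs, b):
    """Asymptotic discovery significance for a Poisson counting experiment with
    a known expected background b (no systematic uncertainties)."""
    if n_obs <= b:
        return 0.0
    return sqrt(2.0 * (n_obs * log(n_obs / b) - (n_obs - b)))

# Example: 30 events observed on an expected background of 10.
print(f"Z = {discovery_significance(30, 10):.2f} sigma")  # roughly 5 sigma
```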

Statistician Mikael Kuusela (CMU) noted that PhyStat workshops have provided important two-way communication between the physics and statistics communities, citing simulation-based inference as an example where many key ideas were first developed in physics and later adopted by statisticians. In his view, the use of statistics in particle physics has emerged as "phystatistics", a proper subfield with distinct problems and methods.

Another important feature of the PhyStat movement has been to encourage active participation and leadership by younger members of the community. With its 25th anniversary, the torch is now passed from Louis Lyons to Olaf Behnke (DESY), Lydia Brenner (NIKHEF) and a younger team, who will guide PhyStat into the next 25 years and beyond.

The post PhyStat turns 25 appeared first on CERN Courier.

]]>
Meeting report On 16 January, physicists and statisticians met in the CERN Council Chamber to celebrate 25 years of the PhyStat series of conferences, workshops and seminars. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_FN_phystat_feature.jpg
Gaseous detectors school at CERN https://cerncourier.com/a/gaseous-detectors-school-at-cern/ Fri, 16 May 2025 16:29:04 +0000 https://cerncourier.com/?p=112717 DRD1 is a new worldwide collaborative framework of more than 170 institutes focused on R&D for gaseous detectors.

The post Gaseous detectors school at CERN appeared first on CERN Courier.

]]>
How do wire-based detectors compare to resistive-plate chambers? How well do micropattern gaseous detectors perform? Which gas mixtures optimise operation? How will detectors face the challenges of future, more powerful accelerators?

Thirty-two students attended the first DRD1 Gaseous Detectors School at CERN last November. The EP-DT Gas Detectors Development (GDD) lab hosted academic lectures and varied hands-on laboratory exercises. Students assembled their own detectors, learnt about their operating characteristics and explored radiation-imaging methods with state-of-the-art readout approaches – all under the instruction of more than 40 distinguished lecturers and tutors, including renowned scientists, pioneers of innovative technologies and emerging experts.

DRD1 is a new worldwide collaborative framework of more than 170 institutes focused on R&D for gaseous detectors. The collaboration is dedicated to knowledge sharing and scientific exchange, in addition to the development of novel gaseous-detector technologies to address the needs of future experiments. This instrumentation school, initiated in DRD1’s first year, marks the start of a series of regular training events for young researchers that will also serve to exchange ideas between research groups and encourage collaboration.

The school will take place annually, with future editions hosted at different DRD1 member institutes to reach students from a number of regions and communities.

Planning for precision at Moriond https://cerncourier.com/a/planning-for-precision-at-moriond/ Fri, 16 May 2025 16:26:44 +0000 https://cerncourier.com/?p=113063 Particle physics today benefits from a wealth of high-quality data at the same time as powerful new ideas are boosting the accuracy of theoretical predictions.

Since 1966 the Rencontres de Moriond has been one of the most important conferences for theoretical and experimental particle physicists. The Electroweak Interactions and Unified Theories session of the 59th edition attracted about 150 participants to La Thuile, Italy, from 23 to 30 March, to discuss electroweak, Higgs-boson, top-quark, flavour, neutrino and dark-matter physics, and the field’s links to astrophysics and cosmology.

Particle physics today benefits from a wealth of high-quality data at the same time as powerful new ideas are boosting the accuracy of theoretical predictions. These are particularly important while the international community discusses future projects, basing projections on current results and technology. The conference heard how theoretical investigations of specific models and “catch all” effective field theories are being sharpened to constrain a broader spectrum of possible extensions of the Standard Model. Theoretical parametric uncertainties are being greatly reduced by collider precision measurements and lattice QCD. Perturbative calculations of short-distance amplitudes are reaching percent-level precision, while hadronic long-distance effects are being investigated in B-, D- and K-meson decays as well as in the modelling of collider events.

Comprehensive searches

Throughout Moriond 2025 we heard how a broad spectrum of experiments at the LHC, B factories, neutrino facilities, and astrophysical and cosmological observatories are planning upgrades to search for new physics at both low- and high-energy scales. Several fields promise qualitative progress in understanding nature in the coming years. Neutrino experiments will measure the neutrino mass hierarchy and CP violation in the neutrino sector. Flavour experiments will exclude or confirm flavour anomalies. Searches for QCD axions and axion-like particles will seek hints to the solution of the strong CP problem and possible dark-matter candidates.

The Standard Model has so far been confirmed to be the theory that describes physics at the electroweak scale (up to a few hundred GeV) to a remarkable level of precision. All the particles predicted by the theory have been discovered, and the consistency of the theory has been proven with high precision, including all calculable quantum effects. No direct evidence of new physics has been found so far. Still, big open questions remain that the Standard Model cannot answer, from understanding the origin of neutrino masses and their hierarchy, to identifying the origin and nature of dark matter and dark energy, and explaining the dynamics behind the baryon asymmetry of the universe.

Several fields promise qualitative progress in understanding nature in the coming years

The discovery of the Higgs boson has been crucial to confirming the Standard Model as the theory of particle physics at the electroweak scale, but it does not explain why the scalar Brout–Englert–Higgs (BEH) potential takes the form of a Mexican hat, why the electroweak scale is set by a Higgs vacuum expectation value of 246 GeV, or what the nature is of the Yukawa interactions that couple the BEH field to quarks and leptons and give rise to their bizarre hierarchy of masses. Gravity is also not a component of the Standard Model, and a unified theory escapes us.

At the LHC today, the ATLAS and CMS collaborations are delivering Run 1 and Run 2 results on Higgs-boson properties and electroweak precision measurements with accuracies beyond expectations. Projections for the high-luminosity phase of the LHC are being updated and Run 3 analyses are in full swing. At Moriond 2025 the LHCb collaboration presented another milestone in flavour physics: the first observation of CP violation in baryon decays. First results from its rebuilt Run 3 detector, with triggerless readout and a full software trigger, were also reported at this conference.

Several talks presented scenarios of new physics that could be revealed in today’s data given theoretical guidance of sufficient accuracy. These included models with light weakly interacting particles, vector-like fermions and additional scalar particles. Other talks discussed how revisiting established quantum properties such as entanglement with fresh eyes could offer unexplored avenues to new theoretical paradigms and overlooked new-physics effects.

Pinpointing polarisation in vector-boson scattering https://cerncourier.com/a/pinpointing-polarisation-in-vector-boson-scattering/ Fri, 16 May 2025 16:20:59 +0000 https://cerncourier.com/?p=113145 Interactions involving longitudinally polarised W and Z bosons provide a stringent test of the SM.

In the Standard Model (SM), W and Z bosons acquire mass and longitudinal polarisation through electroweak (EW) symmetry breaking, where the Brout–Englert–Higgs mechanism transforms Goldstone bosons into their longitudinal components. One of the most powerful ways to probe this mechanism is through vector-boson scattering (VBS), a rare process represented in figure 1, where two vector bosons scatter off each other. At high (TeV-scale) energies, interactions involving longitudinally polarised W and Z bosons provide a stringent test of the SM. Without the Higgs boson’s couplings to these polarisation states, their interaction rates would grow uncontrollably with energy, eventually violating unitarity, indicating a complete breakdown of the SM.

Measuring the polarisation of same electric charge (same sign) W-boson pairs in VBS directly tests the predicted EW interactions at high energies through precision measurements. Furthermore, beyond-the-SM scenarios predict modifications to VBS, some affecting specific polarisation states, rendering such measurements valuable avenues for uncovering new physics.

ATLAS figure 2

Using the full proton–proton collision dataset from LHC Run 2 (2015–2018, 140 fb–1 at 13 TeV), the ATLAS collaboration recently published the first evidence for longitudinally polarised W bosons in the electroweak production of same-sign W-boson pairs in final states including two same-sign leptons (electrons or muons) and missing transverse momentum, along with two jets (EW W±W±jj). This process is categorised by the polarisation states of the W bosons: fully longitudinal (WL±WL±jj), mixed (WL±WT±jj), and fully transverse (WT±WT±jj). Measuring the polarisation states is particularly challenging due to the rarity of the VBS events, the presence of two undetected neutrinos, and the absence of a single kinematic variable that efficiently distinguishes between polarisation states. To overcome this, deep neural networks (DNNs) were trained to exploit the complex correlations between event kinematic variables that characterise different polarisations. This approach enabled the separation of the fully longitudinal WL±WL±jj from the combined WT±W±jj (WL±WT±jj plus WT±WT±jj) processes as well as the combined WL±W±jj (WL±WL±jj plus WL±WT±jj) from the purely transverse WT±WT±jj contribution.

To measure the production of WL±WL±jj and WL±W±jj processes, a first DNN (inclusive DNN) was trained to distinguish EW W±W±jj events from background processes. Variables such as the invariant mass of the two highest-energy jets provide strong discrimination for this classification. In addition, two independent DNNs (signal DNNs) were trained to extract polarisation information, separating either WL±WL±jj from WT±W±jj or WL±W±jj from WT±WT±jj, respectively. Angular variables, such as the azimuthal angle difference between the leading leptons and the pseudorapidity difference between the leading and subleading jets, are particularly sensitive to the scattering angles of the W bosons, enhancing the separation power of the signal DNNs. Each DNN is trained using up to 20 kinematic variables, leveraging correlations among them to improve sensitivity.
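To make the classifier stage concrete, the self-contained sketch below trains a small neural network on toy event-level kinematic variables. The inputs, network size and training settings are placeholders chosen for illustration, not the configuration used by ATLAS.

```python
# Minimal sketch of a binary classifier trained on event kinematic variables,
# in the spirit of the DNNs described above. Toy data and settings only.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_events, n_vars = 10_000, 20   # e.g. dijet invariant mass, Delta-phi(ll), Delta-eta(jj), ...

# Toy "signal" and "background" samples: same variables, slightly shifted means.
signal = torch.randn(n_events, n_vars) + 0.3
background = torch.randn(n_events, n_vars)
x = torch.cat([signal, background])
y = torch.cat([torch.ones(n_events, 1), torch.zeros(n_events, 1)])

model = nn.Sequential(
    nn.Linear(n_vars, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),            # raw score; the sigmoid is applied inside the loss
)
loss_fn = nn.BCEWithLogitsLoss()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):          # full-batch training is fine for a toy this small
    optimiser.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimiser.step()

# The sigmoid of the network output plays the role of the "DNN score" whose
# distribution is then used in the maximum-likelihood fits described below.
scores = torch.sigmoid(model(x)).detach()
print(f"final loss {loss.item():.3f}, mean signal score {scores[:n_events].mean().item():.2f}")
```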

The signal DNN distributions, within each inclusive DNN region, were used to extract the WL±WL±jj and WL±W±jj polarisation fractions through two independent maximum-likelihood fits. The excellent separation between the WL±W±jj and WT±WT±jj processes can be seen in figure 2 for the WL±W±jj fit, with the separation improving at higher signal-DNN scores, shown on the x-axis. An observed (expected) significance of 3.3 (4.0) standard deviations was obtained for WL±W±jj, providing the first evidence of same-sign WW production with at least one of the W bosons longitudinally polarised. No significant excess of events consistent with WL±WL±jj production was observed, leading to the most stringent 95% confidence-level upper limits to date on the WL±WL±jj cross section: 0.45 (0.70) fb observed (expected).
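The fit step itself can be illustrated with a toy binned maximum-likelihood fit of a “longitudinal” fraction to a DNN-score distribution, as sketched below. The templates and yields are invented; the real analysis uses full detector simulation and profiled systematic uncertainties.

```python
# Schematic binned maximum-likelihood fit extracting a "longitudinal" fraction
# from a DNN-score distribution, using invented templates (not ATLAS data).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

rng = np.random.default_rng(1)
bins = np.linspace(0.0, 1.0, 11)

# Toy templates: the longitudinal-enriched component peaks at high DNN score,
# the transverse-dominated component at low score. Shapes are illustrative only.
t_long, _ = np.histogram(rng.beta(4, 2, 50_000), bins=bins, density=True)
t_trans, _ = np.histogram(rng.beta(2, 4, 50_000), bins=bins, density=True)

# Pseudo-data generated with a true longitudinal fraction of 0.2.
n_tot, f_true = 2_000, 0.2
expected_true = n_tot * (f_true * t_long + (1 - f_true) * t_trans) * np.diff(bins)
data = rng.poisson(expected_true)

def nll(f_long):
    """Poisson negative log-likelihood of the binned data for a given fraction."""
    expected = n_tot * (f_long * t_long + (1 - f_long) * t_trans) * np.diff(bins) + 1e-9
    return -poisson.logpmf(data, expected).sum()

fit = minimize_scalar(nll, bounds=(0.0, 1.0), method="bounded")
print(f"fitted longitudinal fraction: {fit.x:.3f} (true value {f_true})")
```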

There is still much to understand about the electroweak sector of the Standard Model, and the measurement presented in this article remains limited by the size of the available data sample. The techniques developed in this analysis open new avenues for studying W- and Z-boson polarisation in VBS processes during the LHC Run 3 and beyond.

Particle Cosmology and Astrophysics https://cerncourier.com/a/particle-cosmology-and-astrophysics/ Fri, 16 May 2025 16:10:30 +0000 https://cerncourier.com/?p=113221 In Particle Cosmology and Astrophysics, Dan Hooper captures the rapid developments in particle cosmology over the past three decades.

Particle Cosmology and Astrophysics

In 1989, Rocky Kolb and Mike Turner published The Early Universe – a seminal book that offered a comprehensive introduction to the then-nascent field of particle cosmology, laying the groundwork for a generation of physicists to explore the connections between the smallest and largest scales of the universe. Since then, the interfaces between particle physics, astrophysics and cosmology have expanded enormously, fuelled by an avalanche of new data from ground-based and space-borne observatories.

In Particle Cosmology and Astrophysics, Dan Hooper follows in their footsteps, providing a much-needed update that captures the rapid developments of the past three decades. Hooper, now a professor at the University of Wisconsin–Madison, addresses the growing need for a text that introduces the fundamental concepts and synthesises the vast array of recent discoveries that have shaped our current understanding of the universe.

Hooper’s textbook opens with 75 pages of “preliminaries”, covering general relativity, cosmology, the Standard Model of particle physics, thermodynamics and high-energy processes in astrophysics. Each of these disciplines is typically introduced in a full semester of dedicated study, supported by comprehensive texts. For example, students seeking a deeper understanding of high-energy phenomena are likely to benefit from consulting Longair’s High Energy Astrophysics or Sigl’s Astroparticle Physics. Similarly, those wishing to advance their knowledge in particle physics will find that more detailed treatments are available in Griffiths’ Introduction to Elementary Particles or Peskin and Schroeder’s An Introduction to Quantum Field Theory, to mention just a few textbooks recommended by the author.

A much-needed update that captures the rapid developments of the past three decades

By distilling these complex subjects into just enough foundational content, Hooper makes the field accessible to those who have been exposed to only a fraction of the standard coursework. His approach provides an essential stepping stone, enabling students to embark on research in particle cosmology and astrophysics with a well calibrated introduction while still encouraging further study through more specialised texts.

Part II, “Cosmology”, follows a similarly pragmatic approach, providing an updated treatment that parallels Kolb and Turner while incorporating a range of topics that have, in the intervening years, become central to modern cosmology. The text now covers areas such as cosmic microwave background (CMB) anisotropies, the evidence for dark matter and its potential particle candidates, the inflationary paradigm, and the evidence and possible nature of dark energy.

Hooper doesn’t shy away from complex subjects, even when they resist simple expositions. The discussion on CMB anisotropies serves as a case in point: anyone who has attempted to condense this complex topic into a few graduate lectures is aware of the challenge in maintaining both depth and clarity. Instead of attempting an exhaustive technical introduction, Hooper offers a qualitative description of the evolution of density perturbations and how one extracts cosmological parameters from CMB observations. This approach, while not substituting for the comprehensive analysis found in texts such as Dodelson’s Modern Cosmology or Baumann’s Cosmology, provides students with a valuable overview that successfully charts the broad landscape of modern cosmology and illustrates the interconnectedness of its many subdisciplines.

Part III, “Particle Astrophysics”, contains a selection of topics that largely reflect the scientific interests of the author, a renowned expert in the field of dark matter. Some colleagues might raise an eyebrow at the book devoting 10 pages each to entire fields such as cosmic rays, gamma rays and neutrino astrophysics, and 50 pages to dark-matter candidates and searches. Others might argue that a book titled Particle Cosmology and Astrophysics is incomplete without detailing the experimental techniques behind the extraordinary advances witnessed in these fields and without at least a short introduction to the booming field of gravitational-wave astronomy. But the truth is that, in the author’s own words, particle cosmology and astrophysics have become “exceptionally multidisciplinary,” and it is impossible in a single textbook to do complete justice to domains that intersect nearly all branches of physics and astronomy. I would also contend that it is not only acceptable but indeed welcome for authors to align the content of their work with their own scientific interests, as this contributes to the diversity of textbooks and offers more choice to lecturers who wish to supplement a standard curriculum with innovative, interdisciplinary perspectives.

Ultimately, I recommend the book as a welcome addition to the literature and an excellent introductory textbook for graduate students and junior scientists entering the field.

ALICE measures a rare Ω baryon https://cerncourier.com/a/alice-measures-a-rare-%cf%89-baryon/ Fri, 16 May 2025 16:08:24 +0000 https://cerncourier.com/?p=113150 These results will improve the theoretical description of excited baryons.

ALICE figure 1

Since the discovery of the electron and proton over 100 years ago, physicists have observed a “zoo” of different types of particles. While some of these particles have been fundamental, like neutrinos and muons, many are composite hadrons consisting of quarks bound together by the exchange of gluons. Studying the zoo of hadrons – their compositions, masses, lifetimes and decay modes – allows physicists to understand the details of the strong interaction, one of the fundamental forces of nature.

The Ω(2012) was discovered by the Belle collaboration in 2018. The ALICE collaboration recently reported a signal consistent with this state, with a significance of 15σ, in proton–proton (pp) collisions at a centre-of-mass energy of 13 TeV. This is the first observation of the Ω(2012) by another experiment.

While the details of its internal structure are still up for debate, the Ω(2012) consists, at minimum, of three strange quarks bound together. It is a heavier, excited version of the ground-state Ω baryon discovered in 1964, which also contains three strange quarks. Multiple theoretical models predicted a spectrum of excited Ω baryons, with some calling for a state with a mass around 2 GeV. Following the discovery of the Ω(2012), theoretical work has attempted to describe its internal structure, with hypotheses including a simple three-quark baryon or a hadronic molecule.

Using a sample of a billion pp collisions, ALICE has measured the decay of Ω(2012) baryons to ΞK0S pairs. After travelling a few centimetres, these hadrons decay in turn, eventually producing a proton and four charged pions that are tracked by the ALICE detector.

ALICE’s measurements of the mass and width of the Ω(2012) are consistent with Belle’s, with superior precision on the mass. ALICE has also confirmed the rather narrow width of around 6 MeV, which indicates that the Ω(2012) is fairly long-lived for a particle that decays via the strong interaction. The width measurements from Belle and ALICE also lend support to the conclusion that the Ω(2012) has a spin-parity configuration of JP = 3/2–.

ALICE also measured the number of Ω(2012) decays to ΞK0S pairs. By comparing this to the total Ω(2012) yield based on statistical thermal model calculations, ALICE has estimated the absolute branching ratio for the Ω(2012) → ΞK0 decay. A branching ratio is the probability of decay to a given mode. The ALICE results indicate that Ω(2012) undergoes two-body (ΞK) decays more than half the time, disfavouring models of the Ω(2012) structure that require large branching ratios for three-body decays.

The present ALICE results will help to improve the theoretical description of the structure of excited baryons. They can also serve as baseline measurements in searches for modifications of Ω-baryon properties in nucleus–nucleus collisions. In the future, Ω(2012) baryons may also serve as new probes to study the strangeness enhancement effect observed in proton–proton and nucleus–nucleus collisions.

Exographer https://cerncourier.com/a/exographer/ Fri, 16 May 2025 15:45:52 +0000 https://cerncourier.com/?p=113226 Exographer puts you in the shoes of a scientist with a barrage of apparatus to investigate the world, writes our reviewer.

Exographer

Try conveying the excitement of subatomic particle discovery in a lecture to physics students, and you might inspire several future physicists. Lecture a layperson on physics, and you might get a completely different response. Not everyone is excited about particle physics by listening to lectures alone. Sometimes video games can help.

Exographer, the brainchild of Raphael Granier de Cassagnac (CERN Courier March/April 2025 p48), puts you in the shoes of an investigator in a world where scientists are fascinated by what their planet is made of, and have made a barrage of apparatus to investigate it. Your role is to traverse this beautiful realm and solve puzzles that may lead to future discoveries, encountering frustration and excitement along the way.

The puzzles are neither nerve-racking nor too difficult, but solving each one brings immense satisfaction, much like the joy of discoveries in particle physics. These eureka moments make up for the hundreds of times when you fell to your death because you forgot to use the item that could have saved you.

The most important part of the game is taking pictures, particularly inside particle detectors. These reveal the tracks of particles, reminiscent of Feynman diagrams. It’s your job to figure out what particles leave these tracks. Is it a known particle? Is it new? Can we add it to our collection?

I am sure that the readers of CERN Courier will be familiar with particle discoveries throughout the past century, but as a particle physicist I still found awe and joy in rediscovering them whilst playing the game. It feels like walking through a museum, with each apparatus you encounter more sophisticated than the last. The game also hides an immensely intriguing lore of scientists from our own world. Curious gamers who spend extra time unravelling these stories are rewarded with various achievements.

All in all, this game is a nice introduction to the world of particle-physics discovery – an enjoyable puzzle/platformer game you should try, regardless of whether or not you are a physicist. 

Tau leptons from light resonances https://cerncourier.com/a/tau-leptons-from-light-resonances/ Fri, 16 May 2025 15:40:37 +0000 https://cerncourier.com/?p=113136 Among the fundamental particles, tau leptons occupy a curious spot.

CMS figure 1

Among the fundamental particles, tau leptons occupy a curious spot. They participate in the same sort of reactions as their lighter lepton cousins, electrons and muons, but their large mass means that they can also decay into a shower of pions and that they interact more strongly with the Higgs boson. In many new-physics theories, additional Higgs-like particles – beyond the Higgs boson of the Standard Model – are introduced to explain the mass hierarchy or to serve as possible portals to dark matter.

Because of their large mass, tau leptons are especially useful in searches for new physics. However, identifying taus is challenging, as in most cases they decay into a final state of one or more pions and an undetected neutrino. A crucial step in the identification of a tau lepton in the CMS experiment is the hadrons-plus-strips (HPS) algorithm. In the standard CMS reconstruction, a minimum momentum threshold of 20 GeV is imposed, such that the taus have enough momentum to make their decay products fall into narrow cones. However, this requirement reduces sensitivity to low-momentum taus. As a result, previous searches for a Higgs-like resonance φ decaying into two tau leptons required a φ-mass of more than 60 GeV.
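As a toy illustration of why low-momentum taus are awkward, the snippet below implements a generic pT-dependent (“shrinking”) signal cone of the kind used in hadronic-tau reconstruction; the functional form and bounds are indicative of the general approach, not the exact CMS HPS configuration.

```python
# Toy illustration of a pT-dependent ("shrinking") signal cone for hadronic-tau
# reconstruction. The constants below are indicative only, not the CMS values.
def signal_cone_dR(tau_pt_gev, r_min=0.05, r_max=0.10, k=3.0):
    """Return the signal-cone half-opening Delta-R for a tau of given transverse momentum."""
    return min(r_max, max(r_min, k / tau_pt_gev))

# Low-pT taus saturate the upper bound: their decay products spread beyond a narrow cone.
for pt in (20, 40, 100, 200):
    print(f"pT = {pt:3d} GeV  ->  signal cone dR = {signal_cone_dR(pt):.3f}")
```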

CMS figure 2

The CMS experiment has now been able to extend the φ-mass range down to 20 GeV. To improve sensitivity to low-momentum tau decays, machine learning is used to determine a dynamic cone algorithm that expands the cone size as needed. The new algorithm, requiring one tau decaying into a muon and two neutrinos and one tau decaying into hadrons and a neutrino, is implemented in the CMS Scouting trigger system. Scouting extends CMS’s reach into previously inaccessible phase space by retaining only the most relevant information about the event, and thus facilitating much higher event rates.

The sensitivity of the new algorithm is so high that even the upsilon (Υ) meson, a bound state of the bottom quark and its antiquark, can be seen. Figure 1 shows the distribution of the mass of the visible decay products of the tau pair (Mvis), in this case a muon from one tau lepton and either one or three pions from the other. A clear resonance structure is visible at Mvis = 6 GeV, in agreement with the expectation for the Υ meson. The peak is not at the actual mass of the Υ meson (9.46 GeV) due to the presence of neutrinos in the decay. While Υ → ττ decays have been observed at electron–positron colliders, this marks the first evidence at a hadron collider and serves as an important benchmark for the analysis.
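The visible mass itself is simply the invariant mass of the muon plus charged-pion system, with the undetected neutrinos ignored. The short sketch below shows the arithmetic with invented example momenta.

```python
# Sketch of the visible-mass (Mvis) calculation: the invariant mass of the muon
# plus charged-pion system, neglecting the neutrinos. Example momenta are invented.
import math

M_MU, M_PI = 0.1057, 0.1396   # masses in GeV

def four_vector(pt, eta, phi, mass):
    """(E, px, py, pz) from transverse momentum, pseudorapidity, azimuth and mass."""
    px, py, pz = pt * math.cos(phi), pt * math.sin(phi), pt * math.sinh(eta)
    return (math.sqrt(px**2 + py**2 + pz**2 + mass**2), px, py, pz)

def invariant_mass(vectors):
    e, px, py, pz = (sum(v[i] for v in vectors) for i in range(4))
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

muon = four_vector(25.0, 0.3, 0.1, M_MU)
pion = four_vector(18.0, 0.5, 2.9, M_PI)   # hadronic tau decaying to one charged pion
print(f"Mvis = {invariant_mass([muon, pion]):.2f} GeV")
```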

Given the high sensitivity of the new algorithm, CMS performed a search for a possible resonance in the range between 20 and 60 GeV using the data recorded in the years 2022 and 2023, and set competitive exclusion limits (see figure 2). For the 2024 and 2025 data taking, the algorithm was further improved, enhancing the sensitivity even more.

Walter Oelert 1942–2024 https://cerncourier.com/a/walter-oelert-1942-2024/ Fri, 16 May 2025 15:36:26 +0000 https://cerncourier.com/?p=113189 Walter Oelert, founding spokesperson of COSY-11 and an experimentalist of rare foresight in the study of antimatter, passed away on 25 November 2024.

Walter Oelert

Walter Oelert, founding spokesperson of COSY-11 and an experimentalist of rare foresight in the study of antimatter, passed away on 25 November 2024.

Walter was born in Dortmund on 14 July 1942. He studied physics in Hamburg and Heidelberg, achieving his diploma on solid-state detectors in 1969 and his doctoral thesis on transfer reactions on samarium isotopes in 1973. He spent the years from 1973 to 1975 working on transfer reactions of rare-earth elements as a postdoc in Pittsburgh under Bernie Cohen, after which he continued his nuclear-physics experiments at the Jülich cyclotron.

With the decision to build the “Cooler Synchrotron” (COSY) at Forschungszentrum Jülich (FZJ), he terminated his work on transfer reactions, summarised it in a review article, and switched to the field of medium-energy physics. At the end of 1985 he conducted a research stay at CERN, contributing to the PS185 and the JETSET (PS202) experiments at the antiproton storage ring LEAR, while also collaborating with Swedish partners at the CELSIUS synchrotron in Uppsala. In 1986 he habilitated at Ruhr University Bochum, where he was granted an APL professorship in 1996.

With the experience gained at CERN, Oelert proposed the construction of the international COSY-11 experiment as spokesperson, leading the way on studies of threshold production with full acceptance for the reaction products. From first data in 1996, COSY-11 operated successfully for 11 years, producing important results in several meson-production channels.

At CERN, Walter proposed the production of antihydrogen in the interaction of the antiproton beam with a xenon cluster target – the last experiment before the shutdown of LEAR. The experiment was performed in 1995, resulting in the production of nine antihydrogen atoms. This result was an important factor in the decision by CERN management to build the Antiproton Decelerator (AD). In order to continue antihydrogen studies, he received substantial support from Jülich for a partnership in the new ATRAP experiment, which aimed at CPT-violation studies through antihydrogen spectroscopy.

Walter retired in 2008, but kept active in antiproton activities at the AD for more than 10 years, during which time he was affiliated with the Johannes Gutenberg University of Mainz. He was one of the main driving forces on the way to the extra-low-energy antiproton ring (ELENA), which was finally built within time and financial constraints, and drastically improved the performance of the antimatter experiments. He also received a number of honours, notably the Merentibus Medal of the Jagiellonian University of Kraków, and was elected as an external member of the Polish Academy of Arts and Sciences.

Walter’s personality – driven, competent, visionary, inspiring, open minded and caring – was the type of glue that made proactive, successful and happy collaborations.

Grigory Vladimirovich Domogatsky 1941–2024 https://cerncourier.com/a/grigory-vladimirovich-domogatsky-1941-2024/ Fri, 16 May 2025 15:34:38 +0000 https://cerncourier.com/?p=113195 Grigory Vladimirovich Domogatsky, spokesman of the Baikal Neutrino Telescope project, passed away on 17 December 2024 at the age of 83.

Grigory Vladimirovich Domogatsky, spokesman of the Baikal Neutrino Telescope project, passed away on 17 December 2024 at the age of 83.

Born in Moscow in 1941, Domogatsky obtained his PhD in 1970 from Moscow Lomonosov University and then worked at the Moscow Lebedev Institute. There, he studied the processes of the interaction of low-energy neutrinos with matter and neutrino emission during the gravitational collapse of stars. His work was essential for defining the scientific programme of the Baksan Neutrino Observatory. Already at that time, he had put forward the idea of a network of underground detectors to register neutrinos from supernovae, a programme realised decades later by the current SuperNova Early Warning System, SNEWS. Together with his co-author Dmitry Nadyozhin, he showed that neutrinos released in star collapses are drivers in the formation of isotopes such as Li-7, Be-8 and B-11 in the supernova shell, and that these processes play an important role in cosmic nucleosynthesis.

In 1980 Domogatsky obtained his doctor of science (equivalent to the Western habilitation) and in the same year became the head of the newly founded Laboratory of Neutrino Astrophysics at High Energies at the Institute for Nuclear Research of the Russian Academy of Sciences, INR RAS. The central goal of this laboratory was, and is, the construction of an underwater neutrino telescope in Lake Baikal, a task to which he devoted all his life from that point on. He created a team of enthusiastic young experimentalists, starting site explorations in the following year and obtaining first physics results with test configurations later in the 1980s. At the end of the 1980s, the plan for a neutrino telescope comprising about 200 photomultipliers (NT200) was born, and realised together with German collaborators in the 1990s. The economic crisis following the breakdown of the Soviet Union would surely have ended the project if not for Domogatsky’s unshakable will and strong leadership. With the partial configuration of the project deployed in 1994, first neutrino candidates were identified in 1996: the proof of concept for underwater neutrino telescopes had been delivered.

He shaped the image of the INR RAS and the field of neutrino astronomy

NT200 was shut down a decade ago, by which time a new cubic-kilometre telescope in Lake Baikal was already under construction. This project was christened Baikal–GVD, with GVD standing for Gigaton Volume Detector, though these letters could equally well denote Domogatsky’s initials. Thus far it has reached about half the size of the IceCube neutrino telescope at the South Pole.

Domogatsky was born to a family of artists and was surrounded by an artistic atmosphere whilst growing up. His grandfather was a famous sculptor, his father a painter, woodcrafter and book illustrator. His brother followed in his father’s footsteps, while Grigory himself married Svetlana, an art historian. He possessed an outstanding literary, historical and artistic education, and all who met him were struck by his knowledge, his old-fashioned noblesse and his intellectual charm.

Domogatsky was a corresponding member of the Russian Academy of Sciences and the recipient of many prestigious awards, most notably the Bruno Pontecorvo Prize and the Pavel Cherenkov Prize. With his leadership in the Baikal project, Grigory Domogatsky shaped the scientific image of the INR RAS and the field of neutrino astronomy. He will be remembered as a scientist who weighed every judgement carefully, as a person of incredible stamina, and as the unforgettable father figure of the Baikal project.

Elena Accomando 1965–2025 https://cerncourier.com/a/elena-accomando-1965-2025/ Fri, 16 May 2025 15:01:24 +0000 https://cerncourier.com/?p=113203 Elena Accomando, a distinguished collider phenomenologist, passed away on 7 January 2025.

Elena Accomando

Elena Accomando, a distinguished collider phenomenologist, passed away on 7 January 2025.

Elena received her laurea in physics from the Sapienza University of Rome in 1993, followed by a PhD from the University of Torino in 1997. Her early career included postdoctoral positions at Texas A&M University and the Paul Scherrer Institute, as well as a staff position at the University of Torino. In 2009 she joined the University of Southampton as a lecturer, earning promotions to associate professor in 2018 and professor in 2022.

Elena’s research focused on the theory and phenomenology of particle physics at colliders, searching for new forces and exotic supersymmetric particles at the Large Hadron Collider. She explored a wide range of Beyond the Standard Model (BSM) scenarios at current and future colliders. Her work included studies of new gauge bosons such as the Z′, extra-dimensional models, and CP-violating effects in BSM frameworks, as well as dark-matter scattering on nuclei and quantum corrections to vector-boson scattering. She was also one of the authors of “WPHACT”, a Monte Carlo event generator developed for four-fermion physics at electron–positron colliders, which remains a valuable tool for precision studies. Elena investigated novel signatures in decays of the Higgs boson, aiming to uncover deviations from Standard Model expectations, and was known for connecting theory with experimental applications, proposing phenomenological strategies that were both realistic and impactful. She was well known as a research collaborator at CERN and other international institutions.

She co-authored the WPHACT Monte Carlo event generator, which remains a valuable tool for precision studies

Elena played an integral role in shaping the academic community at Southampton and was greatly admired as a teacher. Her remarkable professional achievements were paralleled by strength and optimism in the face of adversity. Despite her long illness, she remained a positive presence, planning ahead for her work and her family. Her colleagues and students remember her as a brilliant scientist, an inspiring mentor and a warm and compassionate person. She will also be missed by her longstanding colleagues from the CMS collaboration at Rutherford Appleton Laboratory.

Elena is survived by her devoted husband, Francesco, and their two daughters.

Shoroku Ohnuma 1928–2024 https://cerncourier.com/a/shoroku-ohnuma-1928-2024/ Fri, 16 May 2025 14:51:11 +0000 https://cerncourier.com/?p=113207 Shoroku Ohnuma, who made significant contributions to accelerator physics in the US and Japan, passed away on 4 February 2024, at the age of 95.

Shoroku Ohnuma

Shoroku Ohnuma, who made significant contributions to accelerator physics in the US and Japan, passed away on 4 February 2024, at the age of 95.

Born on 19 April 1928, in Akita Prefecture, Japan, Ohnuma graduated from the University of Tokyo’s Physics Department in 1950. After studying with Yoichiro Nambu at Osaka University, he came to the US as a Fulbright scholar in 1953, obtaining his doctorate from the University of Rochester in 1956. He maintained a lifelong friendship with neutrino astrophysicist Masatoshi Koshiba, who received his degree from Rochester in the same period. A photo published in the Japanese national newspaper Asahi Shimbun shows him with Koshiba, Richard Feynman and Nambu when the latter won the Nobel Prize in Physics – Ohnuma would often joke that he was the only one pictured who did not win a Nobel.

Ohnuma spent three years doing research at Yale University before returning to Japan to teach at Waseda University. In 1962 he returned to the US with his wife and infant daughter Keiko to work on linear accelerators at Yale. In 1970 he joined the Fermi National Accelerator Laboratory (FNAL), where he contributed significantly to the completion of the Tevatron before moving to the University of Houston in 1986, where he worked on the Superconducting Super Collider (SSC). While he claimed to have moved to Texas because his work at FNAL was done, he must have had high hopes for the SSC, which the first Bush administration slated to be built in Dallas in 1989. Young researchers who worked with him, including me, made up an energetic but inexperienced working team of accelerator researchers. With many FNAL-linked people such as Helen Edwards in the leadership of SSC, we frequently invited professor Ohnuma to Dallas to review the overall design. He was a mentor to me for more than 35 years after our work together at the Texas Accelerator Center in 1988.

Ohnuma reviewed accelerator designs and educated students and young researchers in the US and Japan

After Congress cancelled the SSC in 1993, Ohnuma continued his research at the University of Houston until 1999. Starting in the late 1990s, he visited the JHF, later J-PARC, accelerator group led by Yoshiharu Mori at the University of Tokyo’s Institute for Nuclear Study almost every year. As a member of JHF’s first International Advisory Committee, he reviewed the accelerator design and educated students and young researchers, whom he considered his grandchildren. Indeed, his guidance had grown gentler and more grandfatherly.

In 2000, in semi-retirement, Ohnuma settled at the University of Hawaii, where he continued to frequent the campus most weekdays until his death. Even after the loss of his wife in 2021, he continued walking every day, taking a bus to the university, doing volunteer work at a senior facility, and visiting the Buddhist temple every Sunday. His interest in Zen Buddhism had grown after retirement, and he resolved to copy the Heart Sutra a thousand times on rice paper, with the sumi brush and ink prepared from scratch. We were entertained by his panic at having nearly achieved his goal too soon before his death. The Heart Sutra is a foundational text in Zen Buddhism, chanted on every formal occasion. Undertaking to copy it 1000 times exemplified his considerable tenacity and dedication. Whatever he undertook in the way of study, he was unhurried and unworried, optimistic and cheerful, and persistent.

Leading the industry in Monte Carlo simulations for accelerator applications https://cerncourier.com/a/leading-the-industry-in-monte-carlo-simulations-for-accelerator-applications/ Mon, 12 May 2025 14:07:13 +0000 https://cerncourier.com/?p=113263 Particle-beam technology has wide applications in science and industry. Specifically, high-energy x-ray prod­uction is being investigated for FLASH radiotherapy, 14 MeV neutrons are being produced for fusion energy production, and compact electron accelerators are being built for medical-device sterilisation. In each instance it is critical to guarantee that the particle beam is delivered to the end […]

Figure 1

Particle-beam technology has wide applications in science and industry. Specifically, high-energy x-ray production is being investigated for FLASH radiotherapy, 14 MeV neutrons are being produced for fusion energy production, and compact electron accelerators are being built for medical-device sterilisation. In each instance it is critical to guarantee that the particle beam is delivered to the end user with the correct makeup, and also to ensure that secondary particles created from scattering interactions are shielded from technicians and sensitive equipment. There is no precise way to predict the random walk of any individual particle as it encounters materials and alloys of different shapes within a complicated apparatus. Monte Carlo methods simulate the random paths of many millions of independent particles, revealing the tendencies of these particles in aggregate. Assessing shielding effectiveness is particularly challenging computationally, as the very nature of shielding means that only a tiny fraction of the simulated particles ever emerges on the far side.

Figure 2

A common technique for shielding calculations takes these random-walk simulations a step further by applying variance-reduction techniques. These introduce carefully chosen biases into the simulation to increase the number of particles emerging from the shielding, while keeping the estimate faithful to the true particle flux. In some regions within the shielding, particles are split into independent “daughter” particles with independent pathways but a common history, and each is given a statistical weight so that the overall flux of particles is kept constant. In this way, it is possible to predict the behaviour of a one-in-a-million event without having to simulate one million particle trajectories. The performance of these techniques is shown in figure 2.
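The splitting-and-weighting idea can be sketched in a deliberately simple one-dimensional, absorption-only toy model: each time a particle crosses a predefined plane inside the shield it is replaced by two daughters carrying half its statistical weight, so the estimated transmission stays unbiased while many more tracks survive to probe the deep region. This is a sketch of the principle only, not the implementation used in any production Monte Carlo code.

```python
# Toy model of particle splitting as a variance-reduction technique in a 1D,
# absorption-only shielding problem. Illustrates the weighting idea only.
import random

random.seed(42)
MEAN_FREE_PATH = 1.0                 # arbitrary units
SHIELD_THICKNESS = 10.0              # ten mean free paths: transmission ~ exp(-10) ~ 4.5e-5
SPLIT_PLANES = [2.0, 4.0, 6.0, 8.0]  # split each surviving particle in two at these depths

def transport(depth, weight):
    """Follow one weighted particle; return the weight that escapes the shield."""
    escaped = 0.0
    planes = iter(p for p in SPLIT_PLANES if p > depth)
    next_plane = next(planes, None)
    collision_depth = depth + random.expovariate(1.0 / MEAN_FREE_PATH)
    while next_plane is not None and next_plane < collision_depth:
        # The particle reaches a splitting plane before its next collision: replace it
        # by two daughters, each with half the weight, so the total flux is conserved.
        escaped += transport(next_plane, weight / 2.0)   # follow one daughter recursively
        weight /= 2.0                                    # keep the other one in this loop
        next_plane = next(planes, None)
    if collision_depth >= SHIELD_THICKNESS:
        escaped += weight            # the remaining weight leaks through the shield
    return escaped                   # otherwise the particle is absorbed at the collision

n_source = 100_000
transmitted = sum(transport(0.0, 1.0) for _ in range(n_source))
print(f"estimated transmission probability: {transmitted / n_source:.2e}")
```

Running the same toy without any splitting planes gives the same answer on average, but with far fewer tracks reaching the back of the shield – which is exactly the variance problem that splitting addresses.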

Figure 3

These kinds of simulations take on new importance with the global race to develop fusion reactors for energy production. Materials will be exposed to conditions they’ve never seen before, mere feet from the fusion reactions that sustain stars. It is imperative to understand the neutron flux from fusion reactions and how it affects critical components if fusion facilities are to operate sustainably and meet our ever-growing energy needs. Monte Carlo simulation packages are capable of both distributed-memory (MPI) and shared-memory (OpenMP) parallel computation on the world’s largest supercomputers, engaging hundreds of thousands of cores at once. This enables simulations of billions of particle histories. Together with variance reduction, these powerful simulation tools enable precise estimation of particle fluxes in even the most deeply shielded regions.

RadiaSoft offers modelling of neutron radiation transport, with parallel computation and variance-reduction capabilities, running on Sirepo, its browser-based interface. Examples of fusion tokamak simulations can be seen above. RadiaSoft is also available for comprehensive consultation in x-ray production, radiation shielding and dose-delivery simulations across a wide range of applications.

An international year like no other https://cerncourier.com/a/an-international-year-like-no-other/ Thu, 03 Apr 2025 09:41:02 +0000 https://cerncourier.com/?p=112713 The International Year of Quantum inaugural event was organised at UNESCO Headquarters in Paris in February 2025.

Last June, the United Nations and UNESCO proclaimed 2025 the International Year of Quantum (IYQ): here is why it really matters.

Everything started a century ago, when scientists like Niels Bohr, Max Planck and Wolfgang Pauli, but also Albert Einstein, Erwin Schrödinger and many others, came up with ideas that would revolutionise our description of the subatomic world. This is when physics transitioned from being a deterministic discipline to a mostly probabilistic one, at least when we look at subatomic scales. Brave predictions of weird behaviours started to attract the attention of an ever larger part of the scientific community, and continued to appear decade after decade. The most popular among them are particle entanglement, the superposition of states and the tunnelling effect. These are also some of the most impactful quantum effects, in terms of the technologies that emerged from them.

One hundred years on, and the scientific community is somewhat acclimatised to observing and measuring the probabilistic nature of particles and quanta. Lasers, MRI and even sliding doors would not exist without the pioneering studies on quantum mechanics. However, it is widely held that today we are on the edge of a second quantum revolution.

“International years” are proclaimed to raise awareness, focus global attention, encourage cooperation and mobilise resources towards a certain topic or research domain. The International Year of Quantum also aims to reverse the approach taken with artificial intelligence (AI), a technology that arrived faster than any attempt to educate and prepare the layperson for its adoption. This has created a great deal of scepticism towards AI, which is often felt to be too complex and designed to take control away from its users.

The second quantum revolution has begun and we are at the dawn of future powerful applications

The second quantum revolution has begun in recent years and, while we are rapidly moving from simply using the properties of the quantum world to controlling individual quantum systems, we are still at the dawn of future powerful applications. Some quantum sensors are already being used, and quantum cryptography is quite well understood. However, quantum bits need further study, and the exploration of other quantum domains has not even started yet.

Unlike with AI, we still have time to push for a more inclusive approach to the development of the new technology. During the international year, hundreds of events, workshops and initiatives will emphasise the role of global collaboration in the development of accessible quantum technologies. Through initiatives like the Quantum Technology Initiative (QTI) and the Open Quantum Institute (OQI), CERN is actively contributing not only to scientific research but also to advancing the technology’s applications for the benefit of society.

The IYQ inaugural event was organised at UNESCO Headquarters in Paris in February 2025. At CERN, this year’s public event season is devoted to the quantum year, and will present talks, performances, a film festival and more. The full programme is available at visit.cern/events.

CMS observes top–antitop excess https://cerncourier.com/a/cms-observes-top-antitop-excess-2/ Wed, 02 Apr 2025 10:20:07 +0000 https://cerncourier.com/?p=112962 The signal could be caused by a quasi-bound top–antitop meson commonly called "toponium".

Threshold excess

CERN’s Large Hadron Collider continues to deliver surprises. While searching for additional Higgs bosons, the CMS collaboration may have instead uncovered evidence for the smallest composite particle yet observed in nature – a “quasi-bound” hadron made up of the most massive and shortest-lived fundamental particle known to science and its antimatter counterpart. The findings, which do not yet constitute a discovery claim and could also be susceptible to other explanations, were reported this week at the Rencontres de Moriond conference in the Italian Alps.

Almost all of the Standard Model’s shortcomings motivate the search for additional Higgs bosons. Their properties are usually assumed to be simple. Much as the 125 GeV Higgs boson discovered in 2012 appears to interact with each fundamental fermion with a strength proportional to the fermion’s mass, theories postulating additional Higgs bosons generally expect them to couple more strongly to heavier quarks. This puts the singularly massive top quark at centre stage. If an additional Higgs boson has a mass greater than about 345 GeV and can therefore decay to a top quark–antiquark pair, this channel should dominate its decays inside detectors. Hunting for bumps in the invariant mass spectrum of top–antitop pairs is therefore often considered to be the key experimental signature of additional Higgs bosons above the top–antitop production threshold.
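For reference, the 345 GeV figure quoted above is simply twice the top-quark mass (taking mt ≈ 172.5 GeV):

\[
m_{t\bar{t}}^{\,\mathrm{threshold}} = 2\,m_t \approx 2 \times 172.5~\mathrm{GeV} = 345~\mathrm{GeV}.
\]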

The CMS experiment has observed just such a bump. Intriguingly, however, it is located at the lower limit of the search, right at the top-quark pair production threshold itself, leading CMS to also consider an alternative hypothesis long considered difficult to detect: a top–antitop quasi-bound state known as toponium (see “Threshold excess” figure).

The toponium hypothesis is very exciting as we previously did not expect to be able to see it at the LHC

“When we started the project, toponium was not even considered as a background to this search,” explains CMS physics coordinator Andreas Meyer (DESY). “In our analysis today we are only using a simplified model for toponium – just a generic spin-0 colour-singlet state with a pseudoscalar coupling to top quarks. The toponium hypothesis is very exciting as we previously did not expect to be able to see it at the LHC.”

Though other explanations can’t be ruled out, CMS finds the toponium hypothesis to be sufficient to explain the observed excess. The size of the excess is consistent with the latest theoretical estimate of the cross section to produce pseudoscalar toponium of around 6.4 pb.

“The cross section we obtain for our simplified hypothesis is 8.8 pb with an uncertainty of about 15%,” explains Meyer. “One can infer that this is significantly above five sigma.”
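As a rough cross-check of that statement, treating the quoted 15% uncertainty as Gaussian gives

\[
\frac{8.8~\mathrm{pb}}{0.15 \times 8.8~\mathrm{pb}} \approx 6.7
\]

standard deviations from zero – comfortably beyond the conventional five-sigma threshold – although the quoted significance ultimately comes from the full likelihood analysis rather than this back-of-the-envelope ratio.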

The smallest hadron

If confirmed, toponium would be the final example of quarkonium – a term for quark–antiquark states formed from heavy charm, bottom and perhaps top quarks. Charmonium (charm–anticharm) mesons were discovered at SLAC and Brookhaven National Laboratory in the November Revolution of 1974. Bottomonium (bottom–antibottom) mesons were discovered at Fermilab in 1977. These heavy quarks move relatively slowly compared to the speed of light, allowing the strong interaction to be modelled by a static potential as a function of the separation between them. When the quarks are far apart, the potential is proportional to their separation due to the self-interacting gluons forming an elongating flux tube, yielding a constant force of attraction. At close separations, the potential is due to the exchange of individual gluons and is Coulomb-like in form, and inversely proportional to separation, leading to an inverse-square force of attraction. This is the domain where compact quarkonium states are formed, in a near perfect QCD analogy to positronium, wherein an electron and a positron are bound by photon exchange. The Bohr radii of the ground states of charmonium and bottomonium are approximately 0.3 fm and 0.2 fm, and bottomonium is thought to be the smallest hadron yet discovered. Given its larger mass, toponium’s Bohr radius would be an order of magnitude smaller.
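The static potential sketched above is often written in the Cornell form, whose Coulomb-like part sets a positronium-like Bohr radius; the expressions below are the standard leading-order ones, with numerical values indicative only:

\[
V(r) \simeq -\frac{4}{3}\frac{\alpha_s}{r} + \sigma r,
\qquad
a \sim \frac{1}{\mu\,\alpha_{\mathrm{eff}}} = \frac{3}{2\,\alpha_s\,m_q},
\quad \mu = \frac{m_q}{2},\ \ \alpha_{\mathrm{eff}} = \frac{4}{3}\alpha_s .
\]

Since the radius scales roughly as 1/(αs mq), replacing the bottom quark (mb ≈ 4.8 GeV) with the top quark (mt ≈ 172.5 GeV) shrinks it by roughly an order of magnitude, as stated above.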

Angular analysis

For a long time it was thought that toponium bound states were unlikely to be detected in hadron–hadron collisions. The top quark is the most massive and the shortest-lived of the known fundamental particles. It decays into a bottom quark and a real W boson in the time it takes light to travel just 0.1 fm, leaving little time for a hadron to form. Toponium would be unique among quarkonia in that its decay would be triggered by the weak decay of one of its constituent quarks rather than the annihilation of its constituent quarks into photons or gluons. Toponium is expected to decay at twice the rate of the top quark itself, with a width of approximately 3 GeV.
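The quoted width follows directly from the top-quark width (Γt ≈ 1.4 GeV): because the weak decay of either constituent destroys the bound state, the two widths add,

\[
\Gamma_{\mathrm{toponium}} \approx 2\,\Gamma_t \approx 2 \times 1.4~\mathrm{GeV} \approx 2.8~\mathrm{GeV},
\]

consistent with the approximately 3 GeV quoted above.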

CMS first saw a 3.5 sigma excess in a 2019 search studying the mass range above 400 GeV, based on 35.9 fb−1 of proton–proton collisions at 13 TeV from 2016. Now armed with 138 fb–1 of collisions from 2016 to 2018, the collaboration extended the search down to the top–antitop production threshold at 345 GeV. Searches are complicated by the possibility that quantum interference between background and Higgs signal processes could generate an experimentally challenging peak–dip structure with a more or less pronounced bump.

“The signal reported by CMS, if confirmed, could be due either to a quasi-bound top–antitop meson, commonly called ‘toponium’, or possibly an elementary spin-zero boson such as appears in models with additional Higgs bosons, or conceivably even a combination of the two,” says theorist John Ellis of King’s College London. “The mass of the lowest-lying toponium state can be calculated quite accurately in QCD, and is expected to lie just below the nominal top–antitop threshold. However, this threshold is smeared out by the short lifetime of the top quark, as well as the mass resolution of an LHC detector, so toponium would appear spread out as a broad excess of events in the final states with leptons and jets that generally appear in top decays.”

Quantum numbers

An important task of the analysis is to investigate the quantum numbers of the signal. It could be a scalar particle, like the Higgs boson discovered in 2012, or a pseudoscalar particle – a different type of spin-0 object with odd rather than even parity. To measure its spin-parity, CMS studied the angular correlations of the top-quark-pair decay products, which retain information on the original quantum state. The decays bear all the experimental hallmarks of a pseudoscalar particle, consistent with toponium (see “Angular analysis” figure) or the pseudoscalar Higgs bosons common to many theories featuring extended Higgs sectors.

“The toponium state produced at the LHC would be a pseudoscalar boson, whose decays into these final states would have characteristic angular distributions, and the excess of events reported by CMS exhibits the angular correlations expected for such a pseudoscalar state,” explains Ellis. “Similar angular correlations would be expected in the decays of an elementary pseudoscalar boson, whereas scalar-boson decays would exhibit different angular correlations that are disfavoured by the CMS analysis.”

Two main challenges now stand in the way of definitively identifying the nature of the excess. The first is to improve the modelling of the creation of top-quark pairs at the LHC, including the creation of bound states at the threshold. The second challenge is to obtain consistency with the ATLAS experiment. “ATLAS had similar studies in the past but with a more conservative approach on the systematic uncertainties,” says ATLAS physics coordinator Fabio Cerutti (LBNL). “This included, for example, larger uncertainties related to parton showers and other top-modelling effects. To shed more light on the CMS observation, be it a new boson, a top quasi-bound state, or some limited understanding of the modelling of top–antitop production at threshold, further studies are needed on our side. We have several analysis teams working on that. We expect to have new results with improved modelling of the top-pair production at threshold and additional variables sensitive to either a new pseudoscalar boson or a top quasi-bound state very soon.”

Whatever the true cause of the excess, the analyses reflect a vibrant programme of sensitive measurements at the LHC – and the possibility of a timely discovery.

“Discovering toponium 50 years after the November Revolution would be an unanticipated and welcome golden anniversary present for its charmonium cousin that was discovered in 1974,” concludes Ellis. “The prospective observation and measurement of the vector state of toponium in e⁺e⁻ collisions around 350 GeV have been studied in considerable theoretical detail, but there have been rather fewer studies of the observability of pseudoscalar toponium at the LHC. In addition to the angular correlations observed by CMS, the effective production cross section of the observed threshold effect is consistent with non-relativistic QCD calculations. More detailed calculations will be desirable for confirmation that another quarkonium family member has made its appearance, though the omens are promising.”

The post CMS observes top–antitop excess appeared first on CERN Courier.

The Hubble tension https://cerncourier.com/a/the-hubble-tension/ Wed, 26 Mar 2025 15:22:42 +0000 https://cerncourier.com/?p=112638 Vivian Poulin asks if the tension between a direct measurement of the Hubble constant and constraints from the early universe could be resolved by new physics.

The post The Hubble tension appeared first on CERN Courier.

]]>

Just like particle physics, cosmology has its own standard model. It is also powerful in prediction, and brings new mysteries and profound implications. The first was the realisation in 1917 that a homogeneous and isotropic universe must be expanding. This led Einstein to modify his general theory of relativity by introducing a cosmological constant (Λ) to counteract gravity and achieve a static universe – an act he labelled his greatest blunder when Edwin Hubble provided observational proof of the universe’s expansion in 1929. Sixty-nine years later, Saul Perlmutter, Adam Riess and Brian Schmidt went further. Their observations of Type Ia supernovae (SN Ia) showed that the universe’s expansion was accelerating. Λ was revived as “dark energy”, now estimated to account for 68% of the total energy density of the universe.

The second dominant component of the model emerged not from theory but from 50 years of astrophysical sleuthing. From the “missing mass problem” in the Coma galaxy cluster in the 1930s to anomalous galaxy-rotation curves in the 1970s, evidence built up that additional gravitational heft was needed to explain the formation of the large-scale structure of galaxies that we observe today. The 1980s therefore saw the proposal of cold dark matter (CDM), now estimated to account for 27% of the energy density of the universe, and actively sought by diverse experiments across the globe and in space.

Dark energy and CDM supplement the remaining 5% of normal matter to form the ΛCDM model. ΛCDM is a remarkable six-parameter framework that models 13.8 billion years of cosmic evolution from quantum fluctuations during an initial phase of “inflation” – a hypothesised expansion of the universe by 26 to 30 orders of magnitude in roughly 10⁻³⁶ seconds at the beginning of time. ΛCDM successfully models cosmic microwave background (CMB) anisotropies, the large-scale structure of the universe, and the redshifts and distances of SN Ia. It achieves this despite big open questions: the nature of dark matter, the nature of dark energy and the mechanism for inflation.

The Hubble tension

Cosmologists are eager to guide beyond-ΛCDM model-building efforts by testing its end-to-end predictions, and the model now seems to be failing the most important: predicting the expansion rate of the universe.

One of the main predictions of ΛCDM is the average energy density of the universe today. This determines its current expansion rate, otherwise known as the Hubble constant (H0). The most precise ΛCDM prediction comes from a fit to CMB data from ESA’s Planck satellite (operational 2009 to 2013), which yields H0 = 67.4 ± 0.5 km/s/Mpc. This can be tested against direct measurements in our local universe, revealing a surprising discrepancy (see “The Hubble tension” figure).

At sufficiently large distances, the dominant motion of galaxies is the Hubble flow – the expansion of the fabric of space itself. Directly measuring the expansion rate of the universe calls for fitting the increase in the recession velocity of galaxies deep within the Hubble flow as a function of distance. The gradient is H0.
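
As an illustration of this fitting procedure, the short sketch below extracts H0 as the gradient of a straight-line fit of recession velocity against distance; the galaxy sample is synthetic and purely illustrative, not real survey data.

```python
import numpy as np

# Synthetic galaxies deep in the Hubble flow (illustrative numbers only):
# distances in Mpc, velocities in km/s, generated around an assumed
# "true" expansion rate of 70 km/s/Mpc with some observational scatter.
rng = np.random.default_rng(1)
distance = rng.uniform(100.0, 600.0, size=50)              # Mpc
velocity = 70.0 * distance + rng.normal(0.0, 500.0, 50)    # km/s

# The Hubble constant is the gradient of velocity versus distance;
# a least-squares fit of a line through the origin gives it directly.
H0 = np.sum(distance * velocity) / np.sum(distance**2)
print(f"fitted H0 = {H0:.1f} km/s/Mpc")
```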

Receding supernovae

While high-precision spectroscopy allows recession velocity to be precisely measured using the redshifts (z) of atomic spectra, it is more difficult to measure the distance to astrophysical objects. Geometrical methods such as parallax are imprecise at large distances, but “standard candles” with somewhat predictable luminosities such as cepheids and SN Ia allow distance to be inferred using the inverse-square law. Cepheids are pulsating post-main-sequence stars whose radius and observed luminosity oscillate over a period of one to 100 days, driven by the ionisation and recombination of helium in their outer layers, which increases opacity and traps heat; their period increases with their true luminosity. Before going supernova, SN Ia were white dwarf stars in binary systems; when the white dwarf accretes enough mass from its companion star, runaway carbon fusion produces a nearly standardised peak luminosity for a period of one to two weeks. Only SN Ia are bright enough to be seen deep in the Hubble flow, allowing precise measurements of H0. When cepheids are observable in the same galaxies, they can be used to calibrate the SN Ia.
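
The inverse-square law behind the standard-candle method can be stated in one line: for a source of calibrated luminosity L observed with flux F,

$$ F = \frac{L}{4\pi d^2} \quad\Rightarrow\quad d = \sqrt{\frac{L}{4\pi F}}, $$

so calibrating L for cepheids or SN Ia converts a measured brightness directly into a distance d.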

Distance ladder

At present, the main driver of the Hubble tension is a 2022 measurement of H0 by the SH0ES (Supernova H0 for the Equation of State) team led by Adam Riess. As the SN Ia luminosity is not known from first principles, SH0ES built a “distance ladder” to calibrate the luminosity of 42 SN Ia within 37 host galaxies. The SN Ia are calibrated against intermediate-distance cepheids, and the cepheids are calibrated against four nearby “geometric anchors” whose distance is known through a geometric method (see “Distance ladder” figure). The geometric anchors are: Milky Way parallaxes from ESA’s Gaia mission; detached eclipsing binaries in the Large and Small Magellanic Clouds (LMC and SMC); and the “megamaser” galaxy host NGC4258, where water molecules in the accretion disk of a supermassive black hole emit Doppler-shifting microwave maser photons.

The great strength of the SH0ES programme is its use of NASA and ESA’s Hubble Space Telescope (HST, 1990–) at all three rungs of the distance ladder, bypassing the need for cross-calibration between instruments. SN Ia can be calibrated out to 40 Mpc. As a result, in 2022 SH0ES used measurements of 300 or so high-z SN Ia deep within the Hubble flow to measure H0 = 73.04 ± 1.04 km/s/Mpc. This is in more than 5σ tension with Planck’s ΛCDM prediction of 67.4 ± 0.5 km/s/Mpc.

Baryon acoustic oscillation

The sound horizon

The value of H0 obtained from fitting Planck CMB data has been shown to be robust in two key ways.

First, Planck data can be bypassed by combining CMB data from NASA’s WMAP probe (2001–2010) with observations by ground-based telescopes. WMAP in combination with the Atacama Cosmology Telescope (ACT, 2007–2022) yields H0 = 67.6 ± 1.1 km/s/Mpc. WMAP in combination with the South Pole Telescope (SPT, 2007–) yields H0 = 68.2 ± 1.1 km/s/Mpc. Second, and more intriguingly, CMB data can be bypassed altogether.

In the early universe, Compton scattering between photons and electrons was so prevalent that the universe behaved as a plasma. Quantum fluctuations from the era of inflation propagated like sound waves until the era of recombination, when the universe had cooled sufficiently for CMB photons to escape the plasma when protons and electrons combined to form neutral atoms. This propagation of inflationary perturbations left a characteristic scale known as the sound horizon in both the acoustic peaks of the CMB and in “baryon acoustic oscillations” (BAOs) seen in the large-scale structure of galaxy surveys (see “Baryon acoustic oscillation” figure). The sound horizon is the distance travelled by sound waves in the primordial plasma.
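
Schematically, the sound horizon is the comoving distance sound can travel before recombination; in its standard form (not spelled out in the text above),

$$ r_s = \int_{z_*}^{\infty} \frac{c_s(z)}{H(z)}\,\mathrm{d}z, $$

where z_* ≈ 1100 is the redshift of recombination, c_s is the sound speed of the photon–baryon plasma (close to c/√3 at early times) and H(z) is the expansion rate, which in ΛCDM is fixed by the baryon and dark-matter densities.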

While the SH0ES measurement relies on standard candles, ΛCDM predictions rely instead on using the sound horizon as a “standard ruler” against which to compare the apparent size of BAOs at different redshifts, and thereby deduce the expansion rate of the universe. Under ΛCDM, the only two free parameters entering the computation of the sound horizon are the baryon density and the dark-matter density. Planck evaluates both by studying the CMB, but they can be obtained independently of the CMB by combining BAO measurements of the dark-matter density with Big Bang nucleosynthesis (BBN) measurements of the baryon density (see “Sound horizon” figure). The latest measurement by the Dark Energy Spectroscopic Instrument in Arizona (DESI, 2021–) yields H0 = 68.53 ± 0.80 km/s/Mpc, in 3.4σ tension with SH0ES and fully independent of Planck.

Sound horizon

The next few years will be crucial for understanding the Hubble tension, and may decide the fate of the ΛCDM model. ACT, SPT and the Simons Observatory in Chile (2024–) will release new CMB data. DESI, the Euclid space telescope (2023–) and the forthcoming LSST wide-field optical survey in Chile will release new galaxy surveys. “Standard siren” measurements from gravitational waves with electromagnetic counterparts may also contribute to the debate, although the original excitement has dampened with a lack of new events after GW170817. More accurate measurements of the age of the oldest objects may also provide an important new test. If H0 increases, the age of the universe decreases, and the SH0ES measurement favours less than 13.1 billion years at 2σ significance.
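
The age argument follows from integrating the flat-ΛCDM expansion history, t0 = ∫ dz / [(1+z)H(z)]; the sketch below is illustrative, with an assumed round matter density of Ωm = 0.31 rather than a fitted value.

```python
import numpy as np
from scipy.integrate import quad

def age_gyr(H0, omega_m=0.31):
    """Age of a flat LambdaCDM universe in Gyr, for H0 in km/s/Mpc."""
    omega_l = 1.0 - omega_m   # flat universe: dark energy fills the rest
    # dt = dz / [(1+z) H(z)], with H(z) = H0 * sqrt(Om*(1+z)^3 + OL)
    integrand = lambda z: 1.0 / ((1.0 + z) * np.sqrt(omega_m * (1.0 + z)**3 + omega_l))
    integral, _ = quad(integrand, 0.0, np.inf)
    return (977.8 / H0) * integral   # 977.8/H0 is the Hubble time in Gyr

print(age_gyr(67.4))   # close to 13.8 Gyr for the Planck-like value
print(age_gyr(73.0))   # roughly 12.8 Gyr for the SH0ES-like value
```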

The SH0ES measurement is also being checked directly. A key approach is to test the three-step calibration by seeking alternative intermediate standard candles besides cepheids. One candidate is the peak-luminosity “tip” of the red giant branch (TRGB) caused by the sudden start of helium fusion in low-mass stars. The TRGB is bright enough to be seen in distant galaxies that host SN Ia, though at distances smaller than that of cepheids.

Settling the debate

In 2019 the Carnegie–Chicago Hubble Program (CCHP) led by Wendy Freedman and Barry Madore calibrated SN Ia using the TRGB within the LMC and NGC4258 to determine H0 = 69.8 ± 0.8 (stat) ± 1.7 (syst) km/s/Mpc. An independent reanalysis including authors from the SH0ES collaboration later reported H0 = 71.5 ± 1.8 (stat + syst) km/s/Mpc. The difference in the results suggests that updated measurements with the James Webb Space Telescope (JWST) may settle the debate.

James Webb Space Telescope

Launched into space on 25 December 2021, JWST is perfectly adapted to improve measurements of the expansion rate of the universe thanks to its improved capabilities in the near infrared band, where the impact of dust is reduced (see “Improved resolution” figure). Its four-times-better spatial resolution has already been used to re-observe a subsample of the 37 host galaxies home to the 42 SN Ia studied by SH0ES and the geometric anchor NGC4258.

So far, all observations suggest good agreement with the previous observations by HST. SH0ES used JWST observations to obtain up to a factor of 2.5 reduction in the dispersion of the period–luminosity relation for cepheids, with no indication of a bias in HST measurements. Most importantly, they were able to exclude the confusion of cepheids with other stars as being responsible for the Hubble tension at 8σ significance.

Meanwhile, the CCHP team provided new measurements based on three distance indicators: cepheids, the TRGB and a new “population based” method using the J-region of the asymptotic giant branch (JAGB) of carbon-rich stars, for which the magnitude of the mode of the luminosity function can serve as a distance indicator (see the last three rows of “The Hubble tension” figure).

Galaxies used to measure the Hubble constant

The new CCHP results suggest that cepheids may show a bias compared to JAGB and TRGB, though this conclusion was rapidly challenged by SH0ES, who identified a missing source of uncertainty and argued that the size of the sample of SN Ia within hosts with primary distance indicators is too small to provide competitive constraints: they claim that sample variations of order 2.5 km/s/Mpc could explain why the JAGB and TRGB yield a lower value. Agreement may be reached when JWST has observed a larger sample of galaxies – across both teams, 19 of the 37 calibrated by SH0ES have been remeasured so far, plus the geometric anchor NGC4258 (see “The usual suspects” figure).

At this stage, no single systematic error seems likely to fully explain the Hubble tension, and the problem is more severe than it appears. When calibrated, SN Ia and BAOs constrain not only H0, but the entire redshift range out to z ~ 1. This imposes strong constraints on any new physics introduced in the late universe. For example, recent DESI results suggest that the dynamics of dark energy at late times may not be exactly that of a cosmological constant, but the behaviour needed to reconcile Planck and SH0ES is strongly excluded.

Comparison of JWST and HST views

Rather than focusing on the value of the expansion rate, most proposals now focus on altering the calibration of either SN Ia or BAOs. For example, an unknown systematic error could alter the luminosity of SN Ia in our local vicinity, but we have no indication that their magnitude changes with redshift, and this solution appears to be very constrained.

The most promising solution appears to be that some new physics may have altered the value of the sound horizon in the early universe. As the sound horizon is used to calibrate both the CMB and BAOs, reducing it by 10 Mpc could match the value of H0 favoured by SH0ES (see “Sound horizon” figure). This can be achieved either by increasing the redshift of recombination or by increasing the energy density in the pre-recombination universe, giving the sound waves less time to propagate.

The best motivated models invoke additional relativistic species in the early universe such as a sterile neutrino or a new type of “dark radiation”. Another intriguing possibility is that dark energy played a role in the pre-recombination universe, boosting the expansion rate at just the right time. The wide variety and high precision of the data make it hard to find a simple mechanism that is not strongly constrained or finely tuned, but existing models have some of the right features. Future data will be decisive in testing them.

Do muons wobble faster than expected? https://cerncourier.com/a/do-muons-wobble-faster-than-expected/ Wed, 26 Mar 2025 15:08:49 +0000 https://cerncourier.com/?p=112616 With a new measurement imminent, the Courier explores the experimental results and theoretical calculations used to predict ‘muon g-2’ – one of particle physics’ most precisely known quantities and the subject of a fast-evolving anomaly.

The post Do muons wobble faster than expected? appeared first on CERN Courier.

]]>
Vacuum fluctuation

Fundamental charged particles have spins that wobble in a magnetic field. This is just one of the insights that emerged from the equation Paul Dirac wrote down in 1928. Almost 100 years later, calculating how much they wobble – their “magnetic moment” – strains the computational sinews of theoretical physicists to a level rarely matched. The challenge is to sum all the possible ways in which the quantum fluctuations of the vacuum affect their wobbling.

The particle in question here is the muon. Discovered in cosmic rays in 1936, muons are more massive but ephemeral cousins of the electron. Their greater mass is expected to amplify the effect of any undiscovered new particles shimmering in the quantum haze around them, and measurements have disagreed with theoretical predictions for nearly 20 years. This suggests a possible gap in the Standard Model (SM) of particle physics, potentially providing a glimpse of deeper truths beyond it.

In the coming weeks, Fermilab is expected to present the final results of a seven-year campaign to measure this property, reducing uncertainties to a remarkable one part in 10¹⁰ on the magnetic moment of the muon, and 0.1 parts per million on the quantum corrections. Theorists are racing to match this with an updated prediction of comparable precision. The calculation is in good shape, except for the incredibly unusual eventuality that the muon briefly emits a cloud of quarks and gluons at just the moment it absorbs a photon from the magnetic field. But in quantum mechanics all possibilities count all the time, and the experimental precision is such that the fine details of “hadronic vacuum polarisation” (HVP) could be the difference between reinforcing the SM and challenging it.

Quantum fluctuations

The Dirac equation predicts that fundamental spin s = ½ particles have a magnetic moment given by g(eħ/2m)s, where the gyromagnetic ratio (g) is precisely equal to two. For the electron, this remarkable result was soon confirmed by atomic spectroscopy, before more precise experiments in 1947 indicated a deviation from g = 2 of a few parts per thousand. Expressed as a = (g-2)/2, the shift was a surprise and was named the magnetic anomaly or the anomalous magnetic moment.

Quantum fluctuation

This marked the beginning of an enduring dialogue between experiment and theory. It became clear that a relativistic field theory like the developing quantum electrodynamics (QED) could produce quantum fluctuations, shifting g from two. In 1948, Julian Schwinger calculated the first correction to be a = α/2π ≈ 0.00116, aligning beautifully with 1947 experimental results. The emission and absorption of a virtual photon creates a cloud around the electron, altering its interaction with the external magnetic field (see “Quantum fluctuation” figure). Soon, other particles would be seen to influence the calculations. The SM’s limitations suggest that undiscovered particles could also affect these calculations. Their existence might be revealed by a discrepancy between the SM prediction for a particle’s anomalous magnetic moment and its measured value.
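
In compact form, restating the numbers already quoted:

$$ a \equiv \frac{g-2}{2}, \qquad a^{(1)} = \frac{\alpha}{2\pi} \approx \frac{1}{2\pi \times 137.04} \approx 0.00116. $$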

As noted, the muon is an even more promising target than the electron, as its sensitivity to physics beyond QED is generically enhanced by the square of the ratio of their masses: a factor of around 43,000. In 1957, inspired by Tsung-Dao Lee and Chen-Ning Yang’s proposal that parity is violated in the weak interaction, Richard Garwin, Leon Lederman and Marcel Weinrich studied the decay of muons brought to rest in a magnetic field at the Nevis cyclotron at Columbia University. As well as showing that parity is broken in both pion and muon decays, they found g to be close to two for muons by studying their “precession” in the magnetic field as their spins circled around the field lines.
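
The quoted enhancement is just the squared ratio of the lepton masses:

$$ \left(\frac{m_\mu}{m_e}\right)^{2} \approx \left(\frac{105.7~\mathrm{MeV}}{0.511~\mathrm{MeV}}\right)^{2} \approx 4.3\times10^{4}. $$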

Precision

This iconic experiment was the prototype of muon-precession projects at CERN (see CERN Courier September/October 2024 p53), later at Brookhaven National Laboratory and now Fermilab (see “Precision” figure). By the end of the Brookhaven project, a disagreement between the measured value of “aμ” – the subscript indicating g-2 for the muon rather than the electron – and the SM prediction was too large to ignore, motivating the present round of measurements at Fermilab and rapidly improving theory refinements.

g-2 and the Standard Model

Today, a prediction for aμ must include the effects of all three of the SM’s interactions and all of its elementary particles. The leading contributions are from electrons, muons and tau leptons interacting electromagnetically. These QED contributions can be computed in an expansion where each successive term contributes only around 1% of the previous one. QED effects have been computed to fifth order, yielding an extraordinary precision of 0.9 parts per billion – significantly more precise than needed to match measurements of the muon’s g-2, though not the electron’s. It took over half a century to achieve this theoretical tour de force.
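
Schematically, the QED part is an expansion in powers of α/π, now known through fifth order; the coefficients C_n are known numbers not reproduced here, with the first being Schwinger’s:

$$ a_\mu^{\mathrm{QED}} = \sum_{n=1}^{5} C_n \left(\frac{\alpha}{\pi}\right)^{n}, \qquad C_1 = \frac{1}{2}. $$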

The weak interaction gives the smallest contribution to aμ, a million times less than QED. These contributions can also be computed in an expansion. Second order suffices. All SM particles except gluons need to be taken into account.

Gluons are responsible for the strong interaction and appear in the third and last set of contributions. These are described by QCD and are called “hadronic” because quarks and gluons form hadrons at the low energies relevant for the muon g-2 (see “Hadronic contributions” figure). HVP is the largest, though 10,000 times smaller than the corrections due to QED. “Hadronic light-by-light scattering” (HLbL) is a further 100 times smaller due to the exchange of an additional photon. The challenge is that the strong-interaction effects cannot be approximated by a perturbative expansion. QCD is highly nonlinear and different methods are needed.

Data or the lattice?

Even before QCD was formulated, theorists sought to subdue the wildness of the strong force using experimental data. In the case of HVP, this triggered experimental investigations of e⁺e⁻ annihilation into hadrons and later hadronic tau–lepton decays. Though apparently disparate, the production of hadrons in these processes can be related to the clouds of virtual quarks and gluons that are responsible for HVP.

Hadronic contributions

A more recent alternative makes use of massively parallel numerical simulations to directly solve the equations of QCD. To compute quantities such as HVP or HLbL, “lattice QCD” requires hundreds of millions of processor-core hours on the world’s largest supercomputers.

In preparation for Fermilab’s first measurement in 2021, the Muon g-2 Theory Initiative, spanning more than 120 collaborators from over 80 institutions, was formed to provide a reference SM prediction that was published in a 2020 white paper. The HVP contribution was obtained with a precision of a few parts per thousand using a compilation of measurements of e⁺e⁻ annihilation into hadrons. The HLbL contribution was determined from a combination of data-driven and lattice–QCD methods. Though even more complex to compute, HLbL is needed only to 10% precision, as its contribution is smaller.

After summing all contributions, the prediction of the 2020 white paper sits over five standard deviations below the most recent experimental world average (see “Landscape of muon g-2” figure). Such a deviation would usually be interpreted as a discovery of physics beyond the SM. However, in 2021 the result of the first lattice calculation of the HVP contribution with a precision comparable to that of the data-driven white paper was published by the Budapest–Marseille–Wuppertal collaboration (BMW). The result, labelled BMW 2020 as it was uploaded to the preprint archive the previous year, is much closer to the experimental average (green band on the figure), suggesting that the SM may still be in the race. The calculation relied on methods developed by dozens of physicists since the seminal work of Tom Blum (University of Connecticut) in 2002 (see CERN Courier May/June 2021 p25).

Landscape of muon g-2

In 2020, the uncertainties on the data-driven and lattice-QCD predictions for the HVP contribution were still large enough that both could be correct, but BMW’s 2021 paper showed them to be explicitly incompatible in an “intermediate-distance window” accounting for approximately 35% of the HVP contribution, where lattice QCD is most reliable.

This disagreement was the first sign that the 2020 consensus had to be revised. To move forward, the sources of the various disagreements – more numerous now – and the relative limitations of the different approaches must be understood better. Moreover, uncertainty on HVP already dominated the SM prediction in 2020. As well as resolving these discrepancies, its uncertainty must be reduced by a factor of three to fully leverage the coming measurement from Fermilab. Work on the HVP is therefore even more critical than before, as elsewhere the theory house is in order: Sergey Volkov (KITP) recently verified the fifth-order QED calculation of Tatsumi Aoyama, Toichiro Kinoshita and Makiko Nio, identifying an oversight not numerically relevant at current experimental sensitivities; new HLbL calculations remain consistent; and weak contributions have already been checked and are precise enough for the foreseeable future.

News from the lattice

Since BMW’s 2020 lattice results, a further eight lattice-QCD computations of the dominant up-and-down-quark (u + d) contribution to HVP’s intermediate-distance window have been performed with similar precision, with four also including all other relevant contributions. Agreement is excellent and the verdict is clear: the disagreement between the lattice and data-driven approaches is confirmed (see “Intermediate window” figure).

Intermediate window

Work on the short-distance window (about 10% of the HVP contribution) has also advanced rapidly. Seven computations of the u + d contribution have appeared, with four including all other relevant contributions. No significant disagreement is observed.

The long-distance window (around 55% of the total) is by far the most challenging, with the largest uncertainties. In recent weeks three calculations of the dominant u + d contribution have appeared, by the RBC–UKQCD, Mainz and FHM collaborations. Though some differences are present, none can be considered significant for the time being.

With all three windows cross-validated, the Muon g-2 Theory Initiative is combining results to obtain a robust lattice–QCD determination of the HVP contribution. The final uncertainty should be slightly below 1%, still quite far from the 0.2% ultimately needed.

The BMW–DMZ and Mainz collaborations have also presented new results for the full HVP contribution to aμ, and the RBC–UKQCD collaboration, which first proposed the multi-window approach, is also in a position to make a full calculation. (The corresponding result in the “Landscape of muon g-2” figure combines contributions reported in their publications.) Mainz obtained a result with 1% precision using the three windows described above. BMW–DMZ divided its new calculation into five windows and replaced the lattice–QCD computation of the longest distance window – “the tail”, encompassing just 5% of the total – with a data-driven result. This pragmatic approach allows a total uncertainty of just 0.46%, with the collaboration showing that all e⁺e⁻ datasets contributing to this long-distance tail are entirely consistent. This new prediction differs from the experimental measurement of aμ by only 0.9 standard deviations.

These new lattice results, which have not yet been published in refereed journals, make the disagreement with the 2020 data-driven result even more blatant. However, the analysis of the annihilation of e⁺e⁻ into hadrons is also evolving rapidly.

News from electron–positron annihilation

Many experiments have measured the cross-section for e⁺e⁻ annihilation to hadrons as a function of centre-of-mass energy (√s). The dominant contribution to a data-driven calculation of aμ, and over 70% of its uncertainty budget, is provided by the e⁺e⁻ → π⁺π⁻ process, in which the final-state pions are produced via the ρ resonance (see “Two-pion channel” figure).
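
The link between the measured cross-section and HVP is a dispersion integral; in one common convention (the kernel K(s) is a known QED weight whose explicit form is omitted here),

$$ a_\mu^{\mathrm{HVP,\,LO}} = \frac{1}{4\pi^{3}} \int_{s_{\mathrm{thr}}}^{\infty} \mathrm{d}s\, K(s)\, \sigma^{0}_{e^+e^-\to\,\mathrm{hadrons}}(s), $$

where K(s) falls roughly as 1/s, which is why the low-energy ρ → π⁺π⁻ region dominates both the value and the uncertainty.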

The most recent measurement, by the CMD-3 energy-scan experiment in Novosibirsk, obtained a cross-section on the peak of the ρ resonance that is larger than all previous ones, significantly changing the picture in the π⁺π⁻ channel. Scrutiny by the Theory Initiative has identified no major problem.

Two-pion channel

CMD-3’s approach contrasts with that used by KLOE, BaBar and BESIII, which study e⁺e⁻ annihilation with a hard photon emitted from the initial state (radiative return) at facilities with fixed √s. BaBar has innovated by calibrating the luminosity of the initial-state radiation using the μ⁺μ⁻ channel and using a unique “next-to-leading-order” approach that accounts for extra radiation from either the initial or the final state – a necessary step at the required level of precision.

In 1997, Ricard Alemany, Michel Davier and Andreas Höcker proposed an alternative method that employs the τ⁻ → π⁻π⁰ντ decay while requiring some additional theoretical input. The decay rate has been precisely measured as a function of the two-pion invariant mass by the ALEPH and OPAL experiments at LEP, as well as by the Belle and CLEO experiments at B factories, under very different conditions. The measurements are in good agreement. ALEPH offers the best normalisation and Belle the best shape measurement.

KLOE and CMD-3 differ by more than five standard deviations on the ρ peak, precluding a combined analysis of e⁺e⁻ → π⁺π⁻ cross-sections. BaBar and τ data lie between them. All measurements are in good agreement at low energies, below the ρ peak. BaBar, CMD-3 and τ data are also in agreement above the ρ peak. To help clarify this unsatisfactory situation, in 2023 BaBar performed a careful study of radiative corrections to e⁺e⁻ → π⁺π⁻. That study points to the possible underestimate of systematic uncertainties in radiative-return experiments that rely on Monte Carlo simulations to describe extra radiation, as opposed to the in situ studies performed by BaBar.

The future

While most contributions to the SM prediction of the muon g-2 are under control at the level of precision required to match the forthcoming Fermilab measurement, in trying to reduce the uncertainties of the HVP contribution to a commensurate degree, theorists and experimentalists shattered a 20 year consensus. This has triggered an intense collective effort that is still in progress.

New analyses of e⁺e⁻ data are underway at BaBar, Belle II, BESIII and KLOE, experiments are continuing at CMD-3, and Belle II is also studying τ decays. At CERN, the longer-term “MUonE” project will extract HVP by analysing how muons scatter off electrons – a very challenging endeavour given the unusual accuracy required, both in the control of experimental systematic uncertainties and in the theoretical treatment of the radiative corrections.

At the same time, lattice-QCD calculations have made enormous progress in the last five years and provide a very competitive alternative. The fact that several groups are involved with somewhat independent techniques is allowing detailed cross checks. The complementarity of the data-driven and lattice-QCD approaches should soon provide a reliable value for the g-2 theoretical prediction at unprecedented levels of precision.

There is still some way to go to reach that point, but the prospect of testing the limits of the SM through high-precision measurements generates considerable impetus. A new white paper is expected in the coming weeks. The ultimate aim is to reach a level of precision in the SM prediction that allows us to fully leverage the potential of the muon anomalous magnetic moment in the search for new fundamental physics, in concert with the final results of Fermilab’s Muon g-2 experiment and the projected Muon g-2/EDM experiment at J-PARC in Japan, which will implement a novel technique.

Educational accelerator open to the public https://cerncourier.com/a/educational-accelerator-open-to-the-public/ Wed, 26 Mar 2025 14:37:38 +0000 https://cerncourier.com/?p=112590 What better way to communicate accelerator physics to the public than using a functioning particle accelerator?

The post Educational accelerator open to the public appeared first on CERN Courier.

]]>
What better way to communicate accelerator physics to the public than using a functioning particle accelerator? From January, visitors to CERN’s Science Gateway were able to witness a beam of protons being accelerated and focused before their very eyes. Its designers believe it to be the first working proton accelerator to be exhibited in a museum.

“ELISA gives people who visit CERN a chance to really see how the LHC works,” says Science Gateway’s project leader Patrick Geeraert. “This gives visitors a unique experience: they can actually see a proton beam in real time. It then means they can begin to conceptualise the experiments we do at CERN.”

The model accelerator is inspired by a component of LINAC 4 – the first stage in the chain of accelerators used to prepare beams of protons for experiments at the LHC. Hydrogen is injected into a low-pressure chamber and ionised; a one-metre-long RF cavity accelerates the protons to 2 MeV before they pass through a thin vacuum-sealed window. In dim light, the protons ionise gas molecules in the air, producing visible light and allowing members of the public to follow the beam’s progress (see “Accelerating education” figure).
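
To give a sense of scale (a simple kinematic estimate, not a figure quoted by the designers), a 2 MeV proton is still far from relativistic:

$$ \gamma = 1 + \frac{T}{m_p c^{2}} = 1 + \frac{2}{938.3} \approx 1.002, \qquad \beta = \sqrt{1 - 1/\gamma^{2}} \approx 0.065, $$

i.e. the protons emerge at about 6.5% of the speed of light, roughly 20,000 km/s.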

ELISA – the Experimental Linac for Surface Analysis – will also be used to analyse the composition of cultural artefacts, geological samples and objects brought in by members of the public. This is an established application of low-energy proton accelerators: for example, a particle accelerator is hidden 15 m below the famous glass pyramids of the Louvre in Paris, though it is almost 40 m long and not freely accessible to the public.

“The proton-beam technique is very effective because it has higher sensitivity and lower backgrounds than electron beams,” explains applied physicist and lead designer Serge Mathot. “You can also perform the analysis in the ambient air, instead of in a vacuum, making it more flexible and better suited to fragile objects.”

For ELISA’s first experiment, researchers from the Australian Nuclear Science and Technology Organisation and from Oxford’s Ashmolean Museum have proposed a joint research project on the optimisation of ELISA’s analysis of paint samples designed to mimic ancient cave art. The ultimate goal is to work towards a portable accelerator that can be taken to regions of the world that don’t have access to proton beams.

Game on for physicists https://cerncourier.com/a/game-on-for-physicists/ Wed, 26 Mar 2025 14:35:42 +0000 https://cerncourier.com/?p=112787 Raphael Granier de Cassagnac discusses opportunities for particle physicists in the gaming industry.

The post Game on for physicists appeared first on CERN Courier.

]]>
Raphael Granier de Cassagnac and Exographer

“Confucius famously may or may not have said: ‘When I hear, I forget. When I see, I remember. When I do, I understand.’ And computer-game mechanics can be inspired directly by science. Study it well, and you can invent game mechanics that allow you to engage with and learn about your own reality in a way you can’t when simply watching films or reading books.”

So says Raphael Granier de Cassagnac, a research director at France’s CNRS and Ecole Polytechnique, as well as a member of the CMS collaboration at CERN. Granier de Cassagnac is also the creative director of Exographer, a science-fiction computer game that draws on concepts from particle physics and is available on Steam, Switch, PlayStation 5 and Xbox.

“To some extent, it’s not too different from working at a place like CMS, which is also a super complicated object,” explains Granier de Cassagnac. Developing a game often requires graphic artists, sound designers, programmers and science advisors. To keep a detector like CMS running, you need engineers, computer scientists, accelerator physicists and funding agencies. And that’s to name just a few. Even if you are not the primary game designer or principal investigator, understanding the fundamentals is crucial to keep the project running efficiently.

Root skills

Most physicists already have some familiarity with structured programming and data handling, which eases the transition into game development. Just as tools like ROOT and Geant4 serve as libraries for analysing particle collisions, game engines such as Unreal, Unity or Godot provide a foundation for building games. Prebuilt functionalities are used to refine the game mechanics.

“Physicists are trained to have an analytical mind, which helps when it comes to organising a game’s software,” explains Granier de Cassagnac. “The engine is merely one big library, and you never have to code anything super complicated, you just need to know how to use the building blocks you have and code in smaller sections to optimise the engine itself.”

While coding is an essential skill for game production, it is not enough to create a compelling game. Game design demands storytelling, character development and world-building. Structure, coherence and the ability to guide an audience through complex information are also required.

“Some games are character-driven, others focus more on the adventure or world-building,” says Granier de Cassagnac. “I’ve always enjoyed reading science fiction and playing role-playing games like Dungeons and Dragons, so writing for me came naturally.”

Entrepreneurship and collaboration are also key skills, as it is increasingly rare for developers to create games independently. Universities and startup incubators can provide valuable support through funding and mentorship. Incubators can help connect entrepreneurs with industry experts, and bridge the gap between scientific research and commercial viability.

“Managing a creative studio and a company, as well as selling the game, was entirely new for me,” recalls Granier de Cassagnac. “While working at CMS, we always had long deadlines and low pressure. Physicists are usually not prepared for the speed of the industry at all. Specialised offices in most universities can help with valorisation – taking scientific research and putting it on the market. You cannot forget that your academic institutions are still part of your support network.”

The industry is fiercely competitive, with more games being released than players can consume, but a well-crafted game with a unique vision can still break through. A common mistake made by first-time developers is releasing their game too early. No matter how innovative the concept or engaging the mechanics, a game riddled with bugs frustrates players and damages its reputation. Even with strong marketing, a rushed release can lead to negative reviews and refunds – sometimes sinking a project entirely.

“In this industry, time is money and money is time,” explains Granier de Cassagnac. But though challenging to break into, opportunity abounds for those willing to upskill, with the gaming industry worth almost $200 billion a year and reaching more than three billion players worldwide by Granier de Cassagnac’s estimation. The most important aspects for making a successful game are originality, creativity, marketing and knowing the engine, he says.

“Learning must always be part of the process; without it we cannot improve,” adds Granier de Cassagnac, referring to his own upskilling for the company’s next project, which will be even more ambitious in its scientific coverage. “In the next game we want to explore the world as we know it, from the Big Bang to the rise of technology. We want to tell the story of humankind.”

The beauty of falling https://cerncourier.com/a/the-beauty-of-falling/ Wed, 26 Mar 2025 14:34:00 +0000 https://cerncourier.com/?p=112815 Kurt Hinterbichler reviews Claudia de Rham's first-hand and personal glimpse into the life of a theoretical physicist and the process of discovery.

The post The beauty of falling appeared first on CERN Courier.

]]>
The Beauty of Falling

A theory of massive gravity is one in which the graviton, the particle that is believed to mediate the force of gravity, has a small mass. This contrasts with general relativity, our current best theory of gravity, which predicts that the graviton is exactly massless. In 2011, Claudia de Rham (Imperial College London), Gregory Gabadadze (New York University) and Andrew Tolley (Imperial College London) revitalised interest in massive gravity by uncovering the structure of the best possible (in a technical sense) theory of massive gravity, now known as the dRGT theory, after these authors.

Claudia de Rham has now written a popular book on the physics of gravity. The Beauty of Falling is an enjoyable and relatively quick read: a first-hand and personal glimpse into the life of a theoretical physicist and the process of discovery.

De Rham begins by setting the stage with the breakthroughs that led to our current paradigm of gravity. The Michelson–Morley experiment and special relativity, Einstein’s description of gravity as geometry leading to general relativity and its early experimental triumphs, black holes and cosmology are all described in accessible terms using familiar analogies. De Rham grips the reader by weaving in a deeply personal account of her own life and upbringing, illustrating what inspired her to study these ideas and pursue a career in theoretical physics. She has led an interesting life, from growing up in various parts of the world, to learning to dive and fly, to training as an astronaut and coming within a hair’s breadth of becoming one. Her account of the training and selection process for European Space Agency astronauts is fascinating, and worth the read in its own right.

Moving closer to the present day, de Rham discusses the detection of gravitational waves at gravitational-wave observatories such as LIGO, the direct imaging of black holes by the Event Horizon Telescope, and the evidence for dark matter and the accelerating expansion of the universe with its concomitant cosmological constant problem. As de Rham explains, this latter discovery underlies much of the interest in massive gravity; there remains the lingering possibility that general relativity may need to be modified to account for the observed accelerated expansion.

In the second part of the book, de Rham warns us that we are departing from the realm of well tested and established physics, and entering the world of more uncertain ideas. A pet peeve of mine is popular accounts that fail to clearly make this distinction, a temptation to which this book does not succumb. 

Here, the book offers something that is hard to find: a first-hand account of the process of thought and discovery in theoretical physics. When reading the latest outrageously overhyped clickbait headlines coming out of the world of fundamental physics, it is easy to get the wrong impression about what theoretical physicists do. This part of the book illustrates how ideas come about: by asking questions of established theories and tugging on their loose threads, we uncover new mathematical structures and, in the process, gain a deeper understanding of the structures we have.

Massive gravity, the focus of this part of the book, is a prime example: by starting with a basic question, “does the graviton have to be massless?”, a new structure was revealed. This structure may or may not have any direct relevance to gravity in the real world, but even if it does not, our study of it has significantly enhanced our understanding of the structure of general relativity. And, as has occurred countless times before with intriguing mathematical structures, it may ultimately prove useful for something completely different and unforeseen – something that its originators did not have even remotely in mind. Here, de Rham offers invaluable insights both into uncovering a new theoretical structure and into what happens next, as the results are challenged and built upon by others in the community.

CMS peers inside heavy-quark jets https://cerncourier.com/a/cms-peers-inside-heavy-quark-jets/ Wed, 26 Mar 2025 14:31:07 +0000 https://cerncourier.com/?p=112764 The CMS collaboration has shed light on the role of the quark mass in parton showers.

The post CMS peers inside heavy-quark jets appeared first on CERN Courier.

]]>
CMS figure 1

Ever since quarks and gluons were discovered, scientists have been gathering clues about their nature and behaviour. When quarks and gluons – collectively called partons – are produced at particle colliders, they shower to form jets – sprays of composite particles called hadrons. The study of jets has been indispensable towards understanding quantum chromodynamics (QCD) and the description of the final state using parton shower models. Recently, particular focus has been on the study of the jet substructure, which provides further input about the modelling of parton showers.

Jets initiated by heavy charm quarks (c-jets) or bottom quarks (b-jets) provide insight into the role of the quark mass as an additional energy scale in QCD calculations. Heavy-flavour jets are not only used to test QCD predictions; they are also a key part of the study of other particles, such as the top quark and the Higgs boson. Understanding the internal structure of heavy-quark jets is thus crucial for both the identification of these heavier objects and the interpretation of QCD properties. One such property is the presence of a “dead cone” around the heavy quark, where collinear gluon emissions are suppressed in the direction of motion of the quark.

CMS has shed light on the role of the quark mass in the parton shower with two new results focusing on c- and b-jets, respectively. Heavy-flavour hadrons in these jets are typically long-lived, and decay at a small but measurable distance from the primary interaction vertex. In c-jets, the D⁰ meson is reconstructed in the K±π∓ decay channel by combining pairs of charged hadrons that do not appear to come from the primary interaction vertex. In the case of b-jets, a novel technique is employed. Instead of reconstructing the b hadron in a given decay channel, its charged decay daughters are identified using a multivariate analysis. In both cases, the decay daughters are replaced by the mother hadron in the jet constituents.

Jets are reconstructed by clustering particles in a pairwise manner, leading to a clustering tree that mimics the parton shower process. Substructure techniques are then employed to decompose the jet into two subjets, which correspond to the heavy quark and a gluon being emitted from it. Two of those algorithms are soft drop and late-kT. They select the first and last emission in the jet clustering tree, respectively, capturing different aspects of the QCD shower. Looking at the angle between the two subjets (see figure 1), denoted as Rg for soft drop and θ for late-kT, demonstrates the dead-cone effect, as the small angle emissions of b-jets (left) and c-jets (right) are suppressed compared to the inclusive jet case. The effect is captured better by the late-kT algorithm than soft drop in the case of c-jets.
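
To illustrate how such a declustering works in practice, the sketch below applies the standard soft-drop condition to a pre-computed list of declustering steps; it is schematic pseudocode with hypothetical inputs and parameter values, not the CMS analysis code.

```python
def soft_drop_angle(steps, z_cut=0.1, beta=0.0, R0=0.4):
    """Return the opening angle Rg of the first declustering step that
    passes the soft-drop condition z > z_cut * (dR / R0)**beta.

    `steps` is a list of (pt_soft, pt_hard, dR) tuples, ordered from the
    widest-angle declustering to the narrowest (illustrative input).
    """
    for pt_soft, pt_hard, dR in steps:
        z = pt_soft / (pt_soft + pt_hard)   # momentum fraction of the softer subjet
        if z > z_cut * (dR / R0) ** beta:
            return dR                       # angle between the two retained subjets
        # otherwise the softer branch is groomed away and declustering continues
    return None                             # no step passed: the jet fails soft drop

# Hypothetical declustering history of one jet: (pt_soft, pt_hard, dR) in GeV, GeV, radians
example = [(3.0, 80.0, 0.35), (9.0, 71.0, 0.12), (20.0, 51.0, 0.05)]
print(soft_drop_angle(example))   # prints the Rg of the first accepted split
```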

These measurements serve to refine the tuning of Monte Carlo event generators relating to the heavy-quark mass and strong coupling. Identifying the onset of the dead cone in the vacuum also opens up possibilities for substructure studies in heavy-ion collisions, where emissions induced by the strongly interacting quark–gluon plasma can be isolated.

Salam’s dream visits the Himalayas https://cerncourier.com/a/salams-dream-visits-the-himalayas/ Wed, 26 Mar 2025 14:28:34 +0000 https://cerncourier.com/?p=112728 The BCVSPIN programme aims to facilitate interactions between researchers from Bangladesh, China, Vietnam, Sri Lanka, Pakistan, India and Nepal and the broader international community.

The post Salam’s dream visits the Himalayas appeared first on CERN Courier.

]]>
After winning the Nobel Prize in Physics in 1979, Abdus Salam wanted to bring world-class physics research opportunities to South Asia. This was the beginning of the BCSPIN programme, encompassing Bangladesh, China, Sri Lanka, Pakistan, India and Nepal. The goal was to provide scientists in South and Southeast Asia with new opportunities to learn from leading experts about developments in particle physics, astroparticle physics and cosmology. Together with Jogesh Pati, Yu Lu and Qaisar Shafi, Salam initiated the programme in 1989. This first edition was hosted by Nepal. Vietnam joined in 2009 and BCSPIN became BCVSPIN. Over the years, the conference has been held as far afield as Mexico.

The most recent edition attracted more than 100 participants to the historic Hotel Shanker in Kathmandu, Nepal, from 9 to 13 December 2024. The conference aimed to facilitate interactions between researchers from BCVSPIN countries and the broader international community, covering topics such as collider physics, cosmology, gravitational waves, dark matter, neutrino physics, particle astrophysics, physics beyond the Standard Model and machine learning. Participants ranged from renowned professors from across the globe to aspiring students.

Speaking of aspiring students, the main event was preceded by the BCVSPIN-2024 Masterclass in Particle Physics and Workshop in Machine Learning, hosted at Tribhuvan University from 4 to 6 December. The workshop provided 34 undergraduate and graduate students from around Nepal with a comprehensive introduction to particle physics, high-energy physics (HEP) experiments and machine learning. In addition to lectures, the workshop engaged students in hands-on sessions, allowing them to experience real research by exploring core concepts and applying machine-learning techniques to data from the ATLAS experiment. The students’ enthusiasm was palpable as they delved into the intricacies of particle physics and machine learning. The interactive sessions were particularly engaging, with students eagerly participating in discussions and practical exercises. Highlights included a special talk on artificial intelligence (AI) and a career development session focused on crafting CVs, applications and research statements. These sessions ensured participants were equipped with both academic insights and practical guidance. The impact on students was profound, as they gained valuable skills and networking opportunities, preparing them for future careers in HEP.

The BCVSPIN conference officially started the following Monday. In the spirit of BCVSPIN, the first plenary session featured a talk on the status and prospects of HEP in Nepal, providing valuable insights for both locals and newcomers to the initiative. Then, the latest and the near-future physics highlights of experiments such as ATLAS, ALICE, CMS, as well as Belle, DUNE and IceCube, were showcased. From physics performance such as ATLAS nailing b-tagging with graph neural networks, to the most elaborate measurement of the W boson mass by CMS, not to mention ProtoDUNE’s runs exceeding expectations, the audience were offered comprehensive reviews of the recent breakthroughs on the experimental side. The younger physicists willing to continue or start hardware efforts surely appreciated the overview and schedule of the different upgrade programmes. The theory talks covered, among others, dark-matter models, our dear friend the neutrino and the interactions between the two. A special talk on AI invited the audience to reflect on what AI really is and how – in the midst of the ongoing revolution – it impacts the fields of physics and physicists themselves. Overviews of long-term future endeavours such as the Electron–Ion Collider and the Future Circular Collider concluded the programme.

A special highlight of the conference was a public lecture “Oscillating Neutrinos” by the 2015 Nobel Laureate Takaaki Kajita. The event was held near the historical landmark of Patan Durbar Square, in the packed auditorium of the Rato Bangala School. This centre of excellence is known for its innovative teaching methods and quality instruction. More than half the room was filled with excited students from schools and universities, eager to listen to the keynote speaker. After a very pedagogical introduction explaining the “problem of solar neutrinos”, Kajita shared his insights on the discovery of neutrino oscillations and its implications for our understanding of the universe. His presentation included historical photographs of the experiments in Kamioka, Japan, as well as his participation at BCVSPIN in 1994. After encouraging the students to become scientists and answering as many questions as time allowed, he was swept up in a crowd of passionate Nepali youth, thrilled to be in the presence of such a renowned physicist.

The BCVSPIN initiative has changed the landscape of HEP in South and Southeast Asia. With participation made affordable for students, it is a stepping stone for the younger generation of scientists, offering them precious connections with physicists from the international community.

CDF addresses W-mass doubt https://cerncourier.com/a/cdf-addresses-w-mass-doubt/ Wed, 26 Mar 2025 14:24:15 +0000 https://cerncourier.com/?p=112584 Ongoing cross-checks at the Tevatron experiment reinforce its 2022 measurement of the mass of the W boson, which stands seven standard deviations above the Standard Model prediction

The post CDF addresses W-mass doubt appeared first on CERN Courier.

]]>
The CDF II experiment

It’s tough to be a lone dissenting voice, but the CDF collaboration is sticking to its guns. Ongoing cross-checks at the Tevatron experiment reinforce its 2022 measurement of the mass of the W boson, which stands seven standard deviations above the Standard Model (SM) prediction. All other measurements are statistically compatible with the SM, though slightly higher, including the most recent by the CMS collaboration at the LHC, which almost matched CDF’s stated precision of 9.4 MeV (CERN Courier November/December 2024 p7).

With CMS’s measurement came fresh scrutiny for the CDF collaboration, which had established one of the most interesting anomalies in fundamental science – a higher-than-expected W mass might reveal the presence of undiscovered heavy virtual particles. Particular scrutiny focused on the quoted momentum resolution of the CDF detector, which the collaboration claims exceeds the precision of any other collider detector by more than a factor of two. A new analysis by CDF verifies the stated accuracy of 25 parts per million by constraining possible biases using a large sample of cosmic-ray muons.

“The publication lays out the ‘warts and all’ of the tracking aspect and explains why the CDF measurement should be taken seriously despite being in disagreement with both the SM and silicon-tracker-based LHC measurements,” says spokesperson David Toback of Texas A&M University. “The paper should be seen as required reading for anyone who truly wants to understand, without bias, the path forward for these incredibly difficult analyses.”

The 2022 W-mass measurement exclusively used information from CDF’s drift chamber – a descendant of the multiwire proportional chamber invented at CERN by Georges Charpak in 1968 – and discarded information from its inner silicon vertex detector as it offered only marginal improvements to momentum resolution. The new analysis by CDF collaborator Ashutosh Kotwal of Duke University studies possible geometrical defects in the experiment’s drift chamber that could introduce unsuspected biases in the measured momenta of the electrons and muons emitted in the decays of W bosons.

“Silicon trackers have replaced wire-based technology in many parts of modern particle detectors, but the drift chamber continues to hold its own as the technology of choice when high accuracy is required over large tracking volumes for extended time periods in harsh collider environments,” opines Kotwal. “The new analysis demonstrates the efficiency and stability of the CDF drift chamber and its insensitivity to radiation damage.”

The CDF II detector operated at Fermilab’s Tevatron collider from 1999 to 2011. Its cylindrical drift chamber was coaxial with the colliding proton and antiproton beams, and immersed in an axial 1.4 T magnetic field. A helical fit to the recorded hits yielded the track parameters.
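To put these numbers in perspective, here is a back-of-the-envelope sketch – not CDF code – relating transverse momentum to the radius of curvature in the quoted 1.4 T field, and translating the quoted 25 parts-per-million momentum-scale accuracy into an equivalent shift at the W-mass scale. The 40 GeV lepton momentum is an assumed typical value for W decays.

```python
# Back-of-envelope check (not CDF code): track curvature in the 1.4 T solenoid
# and what a 25 ppm momentum-scale accuracy means at the W-mass scale.

B = 1.4             # axial magnetic field in tesla (quoted above)
pT = 40.0           # assumed typical lepton transverse momentum from W decay, in GeV

# pT [GeV] ~ 0.3 * B [T] * R [m] for a singly charged particle
R = pT / (0.3 * B)
print(f"radius of curvature: {R:.0f} m")              # ~95 m: the track is nearly straight

mW = 80.4e3         # approximate W mass in MeV
scale = 25e-6       # quoted momentum-scale accuracy (25 ppm)
print(f"25 ppm at the W mass: {mW * scale:.1f} MeV")  # ~2 MeV
```

The resulting ~2 MeV is comfortably below the 9.4 MeV total uncertainty quoted for the 2022 measurement, which is why controlling such biases at this level matters.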

The post CDF addresses W-mass doubt appeared first on CERN Courier.

]]>
News Ongoing cross-checks at the Tevatron experiment reinforce its 2022 measurement of the mass of the W boson, which stands seven standard deviations above the Standard Model prediction https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_NA_CDF_feature.jpg
Boost for compact fast radio bursts https://cerncourier.com/a/boost-for-compact-fast-radio-bursts/ Wed, 26 Mar 2025 14:21:58 +0000 https://cerncourier.com/?p=112596 New results from the CHIME telescope support the hypothesis that fast radio bursts originate in close proximity to the turbulent magnetosphere of a central engine.

The post Boost for compact fast radio bursts appeared first on CERN Courier.

]]>
Fast radio bursts (FRBs) are short but powerful bursts of radio waves that are believed to be emitted by dense astrophysical objects such as neutron stars or black holes. They were discovered by Duncan Lorimer and his student David Narkevic in 2007 while studying archival data from the Parkes radio telescope in Australia. Since then, more than a thousand FRBs have been detected, located both within and beyond the Milky Way. These bursts usually last only a few milliseconds but can release enormous amounts of energy – an FRB detected in 2022 gave off more energy in a millisecond than the Sun does in 30 years. The exact mechanism underlying their creation, however, remains a mystery.
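As a rough illustration of what that comparison implies – my own arithmetic, assuming the standard solar luminosity and taking the quoted numbers at face value, with no beaming corrections:

```python
# Rough arithmetic (illustrative, not from the paper): the energy the Sun
# radiates in 30 years, released instead in a single millisecond.

L_sun = 3.83e26                # solar luminosity in watts (standard value)
seconds = 30 * 3.156e7         # 30 years in seconds
E = L_sun * seconds            # ~3.6e35 J
L_frb = E / 1e-3               # implied luminosity if emitted over 1 ms
print(f"energy: {E:.1e} J, implied luminosity: {L_frb:.1e} W")
```

That implied luminosity is roughly a trillion times that of the Sun, if only for a millisecond.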

Inhomogeneities caused by the presence of gas and dust in the interstellar medium scatter the radio waves coming from an FRB. This creates a stochastic interference pattern on the signal, called scintillation – a phenomenon akin to the twinkling of stars. In a recent study, astronomer Kenzie Nimmo and her colleagues used scintillation data from FRB 20221022A to constrain the size of its emission region. FRB 20221022A is a 2.5 millisecond burst from a galaxy about 200 million light-years away. It was detected on 22 October 2022 by the Canadian Hydrogen Intensity Mapping Experiment Fast Radio Burst project (CHIME/FRB).

The CHIME telescope is currently the world’s leading FRB detector, discovering an average of three new FRBs every day. It consists of four stationary 20 m-wide and 100 m-long semi-cylindrical paraboloidal reflectors with a focal length of 5 m (see “Right on CHIME” figure). The 256 dual-polarisation feeds suspended along each axis give it a field of view of more than 200 square degrees. With a wide bandwidth, high sensitivity and a high-performance correlator to pinpoint where in the sky signals are coming from, CHIME is an excellent instrument for the detection of FRBs. The telescope receives radio waves in the frequency range of 400 to 800 MHz.

Two main classes of models compete to explain the emission mechanisms of FRBs. Near-field models hypothesise that emission occurs in close proximity to the turbulent magnetosphere of a central engine, while far-away models hypothesise that emission occurs in relativistic shocks that propagate out to large radial distances. Nimmo and her team measured two distinct scintillation scales in the frequency spectrum of FRB 20221022A: one originating from its host galaxy or local environment, and another from a scattering site within the Milky Way. By using these scattering sites as astrophysical lenses, they were able to constrain the size of the FRB’s emission region to better than 30,000 km. This emission size contradicted expectations from far-away models. It is more consistent with an emission process occurring within or just beyond the magnetosphere of a central compact object – the first clear evidence for the near-field class of models.

Additionally, FRB 20221022A’s detection paper notes a striking change in the burst’s polarisation angle – an “S-shaped” swing covering about 130° – over a mere 2.5 milliseconds. The authors interpret this as the emission beam physically sweeping across our line of sight, much like a lighthouse beam passing by an observer, and conclude that it hints at a magnetospheric origin of the emission, as highly magnetised regions can twist or shape how radio waves are emitted. The scintillation studies by Nimmo et al. independently support this conclusion, narrowing the possible sources and mechanisms that power FRBs. Moreover, they highlight the potential of the scintillation technique to explore the emission mechanisms in FRBs and understand their environments.

The field of FRB physics looks set to grow by leaps and bounds. CHIME can already identify host galaxies for FRBs, but an “outrigger” programme using similar detectors geographically displaced from the main telescope at the Dominion Radio Astrophysical Observatory near Penticton, British Columbia, aims to strengthen its localisation capabilities to a precision of tens of milliarcseconds. CHIME recently finished deploying its third outrigger telescope in northern California.

The post Boost for compact fast radio bursts appeared first on CERN Courier.

]]>
News New results from the CHIME telescope support the hypothesis that fast radio bursts originate in close proximity to the turbulent magnetosphere of a central engine. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_NA_Chime.jpg
Charm jets lose less energy https://cerncourier.com/a/charm-jets-lose-less-energy/ Wed, 26 Mar 2025 14:17:29 +0000 https://cerncourier.com/?p=112750 New results from the ALICE collaboration highlight the quark-mass and colour-charge dependence of energy loss in the quark-gluon plasma.

The post Charm jets lose less energy appeared first on CERN Courier.

]]>
ALICE figure 1

Collisions between lead ions at the LHC generate the hottest and densest system ever created in the laboratory. Under these extreme conditions, quarks and gluons are no longer confined inside hadrons but instead form a quark–gluon plasma (QGP). Being heavier than the more abundantly produced light quarks, charm quarks play a special role in probing the plasma since they are created in the collision before the plasma is formed and interact with the plasma as they traverse the collision zone. Charm jets, which are clusters of particles originating from charm quarks, have been investigated for the first time by the ALICE collaboration in Pb–Pb collisions at the LHC, using D0 mesons (which contain a charm quark) as tags.

The primary interest lies in measuring the extent of energy loss experienced by different types of particles as they traverse the plasma, referred to as “in-medium energy loss”. This energy loss depends on the particle’s type, mass and colour charge, and therefore differs between quarks and gluons. Due to their larger mass, charm quarks at low transverse momentum do not reach the speed of light and lose substantially less energy than light quarks through both collisional and radiative processes, as gluon radiation by massive quarks is suppressed: the so-called “dead-cone effect”. Additionally, gluons, which carry a larger colour charge than quarks, experience greater energy loss in the QGP, as quantified by the Casimir factors CA = 3 for gluons and CF = 4/3 for quarks – radiative energy loss is thus expected to be roughly CA/CF = 9/4 times larger for gluons than for quarks. This makes the charm quark an ideal probe of the QGP, and ALICE is well suited to study its in-medium energy loss, which depends on both its mass and its colour charge.

The production yield of charm jets tagged with fully reconstructed D0 mesons (D0 → K−π+) in central Pb–Pb collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair during LHC Run 2 was measured by ALICE. The results are reported in terms of the nuclear modification factor (RAA), which is the ratio of the particle production rate in Pb–Pb collisions to that in proton–proton collisions, scaled by the number of binary nucleon–nucleon collisions. A measured nuclear modification factor of unity would indicate the absence of final-state effects.
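Written out schematically, with ⟨Ncoll⟩ the average number of binary nucleon–nucleon collisions, the definition described above reads:

```latex
R_{\mathrm{AA}}(p_T) \;=\; \frac{1}{\langle N_{\mathrm{coll}}\rangle}\,
\frac{\mathrm{d}N_{\mathrm{PbPb}}/\mathrm{d}p_T}{\mathrm{d}N_{pp}/\mathrm{d}p_T},
\qquad R_{\mathrm{AA}} = 1 \;\Rightarrow\; \text{no final-state modification.}
```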

The results, shown in figure 1, reveal a clear suppression (RAA < 1) for both charm jets and inclusive jets (which mainly originate from light quarks and gluons) due to energy loss. Importantly, the charm jets exhibit less suppression than the inclusive jets within the transverse momentum range of 20 to 50 GeV, consistent with the expected mass and colour-charge dependence of energy loss.

The measured results are compared with theoretical model calculations that include mass effects in the in-medium energy loss. Among the different models, LIDO incorporates both the dead-cone effect and the colour-charge effects, which are essential for describing the energy-loss mechanisms. Consequently, it shows reasonable agreement with experimental data, reproducing the observed hierarchy between charm jets and inclusive jets.

The present finding provides a hint of the flavour-dependent energy loss in the QGP, suggesting that charm jets lose less energy than inclusive jets. This highlights the quark-mass and colour-charge dependence of the in-medium energy-loss mechanisms.

The post Charm jets lose less energy appeared first on CERN Courier.

]]>
News New results from the ALICE collaboration highlight the quark-mass and colour-charge dependence of energy loss in the quark-gluon plasma. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_EF_ALICE_feature.jpg
Chamonix looks to CERN’s future https://cerncourier.com/a/chamonix-looks-to-cerns-future/ Wed, 26 Mar 2025 14:15:37 +0000 https://cerncourier.com/?p=112738 CERN’s accelerator and experimental communities converged on Chamonix to chart a course for the future.

The post Chamonix looks to CERN’s future appeared first on CERN Courier.

]]>
The Chamonix Workshop 2025, held from 27 to 30 January, brought together CERN’s accelerator and experimental communities to reflect on achievements, address challenges and chart a course for the future. As the discussions made clear, CERN is at a pivotal moment. The past decade has seen transformative developments across the accelerator complex, while the present holds significant potential and opportunity.

The workshop opened with a review of accelerator operations, supported by input from December’s Joint Accelerator Performance Workshop. Maintaining current performance levels requires an extraordinary effort across all the facilities. Performance data from the ongoing Run 3 shows steady improvements in availability and beam delivery. These results are driven by dedicated efforts from system experts, operations teams and accelerator physicists, all working to ensure excellent performance and high availability across the complex.

Electron clouds parting

Attention is now turning to Run 4 and the High-Luminosity LHC (HL-LHC) era. Several challenges have been identified, including the demand for high-intensity beams, radiofrequency (RF) power limitations and electron-cloud effects. In the latter case, synchrotron-radiation photons strike the beam-pipe walls, releasing electrons which are then accelerated by proton bunches, triggering a cascading electron-cloud buildup. Measures to address these issues will be implemented during Long Shutdown 3 (LS3), ensuring CERN’s accelerators continue to meet the demands of its diverse physics community.
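The buildup mechanism can be caricatured with a toy model – purely illustrative, with an assumed effective multiplication factor per bunch passage and a constant photoelectron seed, and nothing like the dedicated simulation codes used for real electron-cloud studies:

```python
# Toy model of electron-cloud buildup (illustration only): each bunch passage
# multiplies the existing cloud by an effective secondary-emission factor,
# while photoemission seeds a constant number of new electrons. Real studies
# track electron energies, fields and surface properties in detail.

def cloud_buildup(n_bunches, delta_eff, seed):
    """Return the electron population after each bunch passage."""
    n, history = 0.0, []
    for _ in range(n_bunches):
        n = delta_eff * n + seed      # multiplication plus fresh photoelectrons
        history.append(n)
    return history

# delta_eff > 1 gives runaway growth until (unmodelled) space charge saturates it
print(cloud_buildup(n_bunches=10, delta_eff=1.3, seed=1.0)[-1])
```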

LS3 will be a crucial period for CERN. In addition to the deployment of the HL-LHC and major upgrades to the ATLAS and CMS experiments, it will see a widespread programme of consolidation, maintenance and improvements across the accelerator complex to secure future exploitation over the coming decades.

Progress on the HL-LHC upgrade was reviewed in detail, with a focus on key systems – magnets, cryogenics and beam instrumentation – and on the construction of critical components such as crab cavities. The next two years will be decisive, with significant system testing scheduled to ensure that these technologies meet ambitious performance targets.

Planning for LS3 is already well advanced. Coordination between all stakeholders has been key to aligning complex interdependencies, and the experienced teams are making strong progress in shaping a resource-loaded plan. The scale of LS3 will require meticulous coordination, but it also represents a unique opportunity to build a more robust and adaptable accelerator complex for the future. Looking beyond LS3, CERN’s unique accelerator complex is well positioned to support an increasingly diverse physics programme. This diversity is one of CERN’s greatest strengths, offering complementary opportunities across a wide range of fields.

The high demand for beam time at ISOLDE, n_TOF, AD-ELENA and the North and East Areas underscores the need for a well-balanced approach that supports a broad range of physics. The discussions highlighted the importance of balancing these demands while ensuring that the full potential of the accelerator complex is realised.

Future opportunities such as those highlighted by the Physics Beyond Colliders study will be shaped by discussions being held as part of the update of the European Strategy for Particle Physics (ESPP). Defining the next generation of physics programmes entails striking a careful balance between continuity and innovation, and the accelerator community will play a central role in setting the priorities.

A forward-looking session at the workshop focused on the Future Circular Collider (FCC) Feasibility Study and the next steps. The physics case was presented alongside updates on territorial implementation and civil-engineering investigations and plans. How the FCC-ee injector complex would fit into the broader strategic picture was examined in detail, along with the goals and deliverables of the pre-technical design report (pre-TDR) phase that is planned to follow the Feasibility Study’s conclusion.

While the FCC remains a central focus, other future projects were also discussed in the context of the ESPP update. These include mature linear-collider proposals, the potential of a muon collider and plasma wakefield acceleration. Development of key technologies, such as high-field magnets and superconducting RF systems, will underpin the realisation of future accelerator-based facilities.

The next steps – preparing for Run 4, implementing the LS3 upgrade programmes and laying the groundwork for future projects – are ambitious but essential. CERN’s future will be shaped by how well we seize these opportunities.

The shared expertise and dedication of CERN’s personnel, combined with a clear strategic vision, provide a solid foundation for success. The path ahead is challenging, but with careful planning, collaboration and innovation, CERN’s accelerator complex will remain at the heart of discovery for decades to come.

The post Chamonix looks to CERN’s future appeared first on CERN Courier.

]]>
Meeting report CERN’s accelerator and experimental communities converged on Chamonix to chart a course for the future. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_FN_Chamonix.jpg
The triggering of tomorrow https://cerncourier.com/a/the-triggering-of-tomorrow/ Wed, 26 Mar 2025 14:14:12 +0000 https://cerncourier.com/?p=112724 The third TDHEP workshop explored how triggers can cope with high data rates.

The post The triggering of tomorrow appeared first on CERN Courier.

]]>
The third edition of Triggering Discoveries in High Energy Physics (TDHEP) attracted 55 participants to Slovakia’s High Tatras mountains from 9 to 13 December 2024. The workshop is the only conference dedicated to triggering in high-energy physics, and follows previous editions in Jammu, India in 2013 and Puebla, Mexico in 2018. Given the upcoming High-Luminosity LHC (HL-LHC) upgrade, discussions focused on how trigger systems can be enhanced to manage high data rates while preserving physics sensitivity.

Triggering systems play a crucial role in filtering the vast amounts of data generated by modern collider experiments. A good trigger design selects features in the event sample that greatly enrich the proportion of the desired physics processes in the recorded data. The key considerations are timing and selectivity. Timing has long been at the core of experiment design – detectors must capture data at the appropriate time to record an event. Selectivity has been a feature of triggering for almost as long. Recording an event makes demands on running time and data-acquisition bandwidth, both of which are limited.

Evolving architecture

Thanks to detector upgrades and major changes in the cost and availability of fast data links and storage, the past 10 years have seen an evolution in LHC triggers away from hardware-based decisions using coarse-grain information.

Detector upgrades mean higher granularity and better time resolution, improving the precision of the trigger algorithms and the ability to resolve the problem of having multiple events in a single LHC bunch crossing (“pileup”). Such upgrades allow more precise initial-level hardware triggering, bringing the event rate down to a level where events can be reconstructed for further selection via high-level trigger (HLT) systems.

To take advantage of modern computer architecture more fully, HLTs use both graphics processing units (GPUs) and central processing units (CPUs) to process events. In ALICE and LHCb this leads to essentially triggerless access to all events, while in ATLAS and CMS hardware selections are still important. All HLTs now use machine learning (ML) algorithms, with the ATLAS and CMS experiments even considering their use at the first hardware level.

ATLAS and CMS are primarily designed to search for new physics. At the end of Run 3, upgrades to both experiments will significantly enhance granularity and time resolution to handle the high-luminosity environment of the HL-LHC, which will deliver up to 200 interactions per LHC bunch crossing. Both experiments achieved efficient triggering in Run 3, but higher luminosities, difficult-to-distinguish physics signatures, upgraded detectors and increasingly ambitious physics goals call for advanced new techniques. The step change will be significant. At HL-LHC, the first-level hardware trigger rate will increase from the current 100 kHz to 1 MHz in ATLAS and 760 kHz in CMS. The price to pay is increasing the latency – the time delay between input and output – to 10 µsec in ATLAS and 12.5 µsec in CMS.

The proposed trigger systems for ATLAS and CMS are predominantly FPGA-based, employing highly parallelised processing to crunch huge data streams efficiently in real time. Both will be two-level triggers: a hardware trigger followed by a software-based HLT. The ATLAS hardware trigger will utilise full-granularity calorimeter and muon signals in the global-trigger-event processor, using advanced ML techniques for real-time event selection. In addition to calorimeter and muon data, CMS will introduce a global track trigger, enabling real-time tracking at the first trigger level. All information will be integrated within the global-correlator trigger, which will extensively utilise ML to enhance event selection and background suppression.

Substantial upgrades

The other two big LHC experiments already implemented substantial trigger upgrades at the beginning of Run 3. The ALICE experiment is dedicated to studying the strong interactions of the quark–gluon plasma – a state of matter in which quarks and gluons are not confined in hadrons. The detector was upgraded significantly for Run 3, including the trigger and data-acquisition systems. The ALICE continuous readout can cope with 50 kHz for lead ion–lead ion (PbPb) collisions and several MHz for proton–proton (pp) collisions. In PbPb collisions the full data is continuously recorded and stored for offline analysis, while for pp collisions the data is filtered.

Unlike in Run 2, where the hardware trigger reduced the data rate to several kHz, Run 3 uses an online software trigger that is a natural part of the common online–offline computing framework. The raw data from detectors is streamed continuously and processed in real time using high-performance FPGAs and GPUs. ML plays a crucial role in the heavy-flavour software trigger, which is one of the main physics interests. Boosted decision trees are used to identify displaced vertices from heavy quark decays. The full chain from saving raw data in a 100 PB buffer to selecting events of interest and removing the original raw data takes about three weeks and was fully employed last year.
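As a flavour of how such a selection works, the sketch below trains a boosted decision tree on toy data. It is purely illustrative and not the ALICE trigger code: the two features (decay-length significance and impact-parameter significance) and their distributions are assumptions chosen only to show the workflow.

```python
# Toy boosted-decision-tree selection of displaced heavy-flavour candidates.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000

# toy features: [decay-length significance, impact-parameter significance]
bkg = np.column_stack([rng.exponential(1.0, n), rng.exponential(1.0, n)])
sig = np.column_stack([rng.exponential(4.0, n), rng.exponential(3.0, n)])
X = np.vstack([bkg, sig])
y = np.concatenate([np.zeros(n), np.ones(n)])

bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3)
bdt.fit(X, y)

# probability that a well-displaced candidate is signal-like
print(bdt.predict_proba([[6.0, 4.0]])[0, 1])
```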

The third edition of TDHEP suggests that innovation in this field is only set to accelerate

The LHCb experiment focuses on precision measurements in heavy-flavour physics. A typical example is measuring the probability of a particle decaying into a particular channel. In Run 2 the hardware trigger tended to saturate in many hadronic channels as the instantaneous luminosity increased. To solve this issue for Run 3, a high-level software trigger was developed that can handle 30 MHz event readout with a 4 TB/s data flow. A GPU-based partial event reconstruction and primary selection of displaced tracks and vertices (HLT1) reduces the output data rate to 1 MHz. The calibration and detector alignment (embedded into the trigger system) are calculated during data taking just after HLT1 and feed the full-event reconstruction (HLT2), which reduces the output rate to 20 kHz. This represents 10 GB/s written to disk for later analysis.
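Dividing the quoted throughput figures gives the average event sizes they imply – simple arithmetic on the numbers above, assuming the averages are representative:

```python
# Average event sizes implied by the quoted LHCb Run 3 rates and bandwidths.
readout_rate = 30e6     # events per second into HLT1
input_bw     = 4e12     # bytes per second (4 TB/s)
hlt2_rate    = 20e3     # events per second written out
output_bw    = 10e9     # bytes per second (10 GB/s)

print(f"average raw event size:       {input_bw / readout_rate / 1e3:.0f} kB")  # ~130 kB
print(f"average persisted event size: {output_bw / hlt2_rate / 1e3:.0f} kB")    # ~500 kB
```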

Away from the LHC, trigger requirements differ considerably. Contributions from other areas covered heavy-ion physics at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC), fixed-target physics at CERN and future experiments at the Facility for Antiproton and Ion Research at GSI Darmstadt and Brookhaven’s Electron–Ion Collider (EIC). NA62 at CERN and STAR at RHIC both use conventional trigger strategies to arrive at their final event samples. The forthcoming CBM experiment at FAIR and the ePIC experiment at the EIC deal with high intensities but aim for “triggerless” operation.

Requirements were reported to be even more diverse in astroparticle physics. The Pierre Auger Observatory combines local and global trigger decisions at three levels to manage the problem of trigger distribution and data collection over 3000 km² of fluorescence and Cherenkov detectors.

These diverse requirements will lead to new approaches being taken, and evolution as the experiments are finalised. The third edition of TDHEP suggests that innovation in this field is only set to accelerate.

The post The triggering of tomorrow appeared first on CERN Courier.

]]>
Meeting report The third TDHEP workshop explored how triggers can cope with high data rates. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_FN_TDHEP.jpg
Space oddities https://cerncourier.com/a/space-oddities/ Wed, 26 Mar 2025 14:11:01 +0000 https://cerncourier.com/?p=112823 In his new popular book, Harry Cliff tackles the thorny subject of anomalies in fundamental science.

The post Space oddities appeared first on CERN Courier.

]]>
Space Oddities

Space Oddities takes readers on a journey through the mysteries of modern physics, from the smallest subatomic particles to the vast expanse of stars and space. Harry Cliff – an experimental particle physicist at Cambridge University – unravels some of the most perplexing anomalies challenging the Standard Model (SM), with behind-the-scenes scoops from eight different experiments. The most intriguing stories concern lepton universality and the magnetic moment of the muon.

The muon’s magnetic moment is predicted theoretically with extreme precision and has been verified experimentally to an astonishing 11 significant figures. Over the last few years, however, experimental measurements have suggested a slight discrepancy – the devil lying in the 12th digit. Measurements at Fermilab in 2021 disagreed with theory predictions at 4σ. Not enough to cause a “scientific earthquake”, as Cliff puts it, but enough to suggest that new physics might be at play.

Just as everything seemed to be edging towards a new discovery, Cliff introduces the “villains” of the piece. Groundbreaking lattice-QCD predictions from the Budapest–Marseille–Wuppertal collaboration were published on the same day as a new measurement from Fermilab. If correct, these would destroy the anomaly by contradicting the data-driven theory consensus. (“Yeah, bullshit,” said one experimentalist to Cliff when it was put to him that the timing wasn’t intended to steal the experiment’s thunder.) The situation is still unresolved, though many new theoretical predictions have been made and a new theoretical consensus is imminent (see “Do muons wobble faster than expected“). Regardless of the outcome, Cliff emphasises that this research will pave the way for future discoveries, and none of it should be taken for granted – even if the anomaly disappears.

“One of the challenging aspects of being part of a large international project is that your colleagues are both collaborators and competitors,” Cliff notes. “When it comes to analysing the data with the ultimate goal of making discoveries, each research group will fight to claim ownership of the most interesting topics.”

This spirit of spurring collaborator- competitors on to greater heights of precision is echoed throughout Cliff’s own experience of working in the LHCb collaboration, where he studies “lepton universality”. All three lepton flavours – electron, muon and tau – should interact almost identically, except for small differences due to their masses. However, over the past decade several experimental results suggested that this theory might not hold in B-meson decays, where muons seemed to be appearing less frequently than electrons. If confirmed, this would point to physics beyond the SM.

Having been involved himself in a complementary but less sensitive analysis of B-meson decay channels involving strange quarks, Cliff recalls the emotional rollercoaster experienced by some of the key protagonists: the “RK” team from Imperial College London. After a year of rigorous testing, RK unblinded a sanity check of their new computational toolkit: a reanalysis of the prior measurement that yielded a perfectly consistent R value of 0.72 with an uncertainty of about 0.08, upholding a 3σ discrepancy. Now was the time to put the data collected since then through the same pasta machine: if it agreed, the tension between the SM and their overall measurement would cross the 5σ threshold. After an anxious wait while the numbers were crunched, the team received the results for the new data: 0.93 with an uncertainty of 0.09.

“Dreams of a major discovery evaporated in an instant,” recalls Cliff. “Anyone who saw the RK team in the CERN cafeteria that day could read the result from their faces.” The lead on the RK team, Mitesh Patel, told Cliff that they felt “emotionally train wrecked”.

One day we might make the right mistake and escape the claustrophobic clutches of the SM

With both results combined, the ratio averaged out to 0.85 ± 0.06, just shy of 3σ away from unity. While the experimentalists were deflated, Cliff notes that for theorists this result may have been more exciting than the initial anomaly, as it was easier to explain using new particles or forces. “It was as if we were spying the footprints of a great, unknown beast as it crashed about in a dark jungle,” writes Cliff.

Space Oddities is a great defence of irrepressible experimentation. Even “failed” anomalies are far from useless: if they evaporate, the effort required to investigate them pushes the boundaries of experimental precision, enhances collaboration between scientists across the world, and refines theoretical frameworks. Through retellings and interviews, Cliff helps the public experience the excitement of near breakthroughs, the heartbreak of failed experiments, and the dynamic interactions between theoretical and experimental physicists. Dispelling myths that physicists are cold, calculating figures working in isolation, Cliff sheds light on a community driven by curiosity, ambition and (healthy) competition. His book is a story of hope that one day we might make the right mistake and escape the claustrophobic clutches of the SM.

“I’ve learned so much from my mistakes,” read a poster above Cliff’s undergraduate tutor’s desk. “I think I’ll make another.”

The post Space oddities appeared first on CERN Courier.

]]>
Review In his new popular book, Harry Cliff tackles the thorny subject of anomalies in fundamental science. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_REV_Space_feature.jpg
Probing the quark–gluon plasma in Nagasaki https://cerncourier.com/a/probing-the-quark-gluon-plasma-in-nagasaki/ Wed, 26 Mar 2025 14:08:03 +0000 https://cerncourier.com/?p=112733 The 12th edition of the International Conference on Hard and Electromagnetic Probes attracted over 300 physicists to Nagasaki, Japan.

The post Probing the quark–gluon plasma in Nagasaki appeared first on CERN Courier.

]]>
The 12th edition of the International Conference on Hard and Electromagnetic Probes attracted 346 physicists to Nagasaki, Japan, from 22 to 27 September 2024. Delegates discussed the recent experimental and theoretical findings on perturbative probes of the quark–gluon plasma (QGP) – a hot and deconfined state of matter formed in ultrarelativistic heavy-ion collisions.

The four main LHC experiments played a prominent role at the conference, presenting a large set of newly published results from studies performed on data collected during LHC Run 2, as well as several new preliminary results performed on the new data samples from Run 3.

Jet modifications

A number of significant results on the modification of jets in heavy-ion collisions were presented. Splitting functions characterising the evolution of parton showers are expected to be modified in the presence of the QGP, providing experimental access to the medium properties. A more differential look at these modifications was presented through a correlated measurement of the shared momentum fraction and opening angle of the first splitting in jets that satisfies the “soft drop” condition. Additionally, energy–energy correlators have recently emerged as promising observables on which the properties of jet modification in the medium might be imprinted at different scales.
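For reference, the soft-drop condition mentioned above retains the first splitting of a jet of radius R0 into subjets with transverse momenta pT1 and pT2, separated by an angle ΔR12, that satisfies the criterion below; zcut ≈ 0.1 and β = 0 are typical choices, and the notation follows the standard grooming literature rather than the specific talks:

```latex
z \;\equiv\; \frac{\min(p_{T,1},\,p_{T,2})}{p_{T,1}+p_{T,2}}
\;>\; z_{\mathrm{cut}}\left(\frac{\Delta R_{12}}{R_{0}}\right)^{\beta}
```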

The first measurements of the two-particle energy–energy correlators in p–Pb and Pb–Pb collisions were presented, showing modifications in both the small- and large-angle correlations for both systems compared to pp collisions. A long-sought-after effect of energy exchanges between the jet and the medium is a correlated response of the medium in the jet direction. For the first time, measurements of hadron–boson correlations in events containing photons or Z bosons showed a clear depletion of the bulk medium in the direction of the Z boson, providing direct evidence of a medium response correlated with the propagating back-to-back jet. In pp collisions, the first direct measurement of the dead cone of beauty quarks, using novel machine-learning methods to reconstruct the beauty hadron from partial decay information, was also shown.

Several new results from studies of particle production in ultraperipheral heavy-ion collisions were discussed. These studies allow us to investigate the possible onset of gluon saturation at low Bjorken-x values. In this context, new results of charm photoproduction, with measurements of incoherent and coherent J/ψ mesons, as well as of D0 mesons, were released. Photonuclear production cross-sections of di-jets, covering a large interval of photon energies to scan over different regions of Bjorken-x, were also presented. These measurements pave the way for setting constraints on the gluon component of nuclear parton distribution functions at low Bjorken-x values, over a wide Q2 range, in the absence of significant final-state effects.

New experiments will explore higher-density regions of the QCD–matter phase diagram

During the last few years, a significant enhancement of charm and beauty-baryon production in proton–proton collisions was observed, compared to measurements in e+e− and ep collisions. These observations have challenged the assumption of the universality of heavy-quark fragmentation across different collision systems. Several intriguing measurements on this topic were released at the conference. In addition to an extended set of charm meson-to-meson and baryon-to-meson production yield ratios, the first measurements of the production of Σc0,++(2520) relative to Σc0,++(2455) at the LHC, obtained exploiting the new Run 3 data samples, were discussed. New insights on the structure of the exotic χc1(3872) state and its hadronisation mechanism were garnered by measuring the ratio of its production yield to that of ψ(2S) mesons in hadronic collisions.

Additionally, strange-to-non-strange production-yield ratios for charm and beauty mesons as a function of the collision multiplicity were released, pointing toward an enhanced strangeness production in a higher colour-density environment. Several theoretical approaches implementing modified hadronisation mechanisms with respect to in-vacuum fragmentation have proven to be able to reproduce at least part of the measurements, but a comprehensive description of the heavy-quark hadronisation, in particular for the baryonic sector, is still to be reached.

A glimpse into the future of the experimental opportunities in this field was also provided. A new and intriguing set of physics observables for a complete characterisation of the QGP with hard probes will become accessible with the planned upgrades of the ALICE, ATLAS, CMS and LHCb detectors, both during the next long LHC shutdown and in the more distant future. New experiments at CERN, such as NA60+, or in other facilities like the Electron–Ion Collider in the US and J-PARC-HI in Japan, will explore higher-density regions of the QCD–matter phase diagram.

The next edition of this conference series is scheduled to be held in Nashville, US, from 1 to 5 June 2026.

The post Probing the quark–gluon plasma in Nagasaki appeared first on CERN Courier.

]]>
Meeting report The 12th edition of the International Conference on Hard and Electromagnetic Probes attracted over 300 physicists to Nagasaki, Japan. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_FN_HP2024.jpg
Encounters with artists https://cerncourier.com/a/encounters-with-artists/ Wed, 26 Mar 2025 13:45:55 +0000 https://cerncourier.com/?p=112793 Over the past 10 years, Mónica Bello facilitated hundreds of encounters between artists and scientists as curator of the Arts at CERN programme.

The post Encounters with artists appeared first on CERN Courier.

]]>
Why should scientists care about art?

Throughout my experiences in the laboratory, I have seen how art is an important part of a scientist’s life. By being connected with art, scientists recognise that their activities are very embedded in contemporary culture. Science is culture. Through art and dialogues with artists, people realise how important science is for society and for culture in general. Science is an important cultural pillar in our society, and these interactions bring scientists meaning.

Are science and art two separate cultures?

Today, if you ask anyone: “What is nature?” they describe everything in scientific terms. The way you describe things, the mysteries of your research: you are actually answering the questions that are present in everyone’s life. In this case, scientists have a sense of responsibility. I think art helps to open this dialogue from science into society.

Do scientists have a responsibility to communicate their research?

All of us have a social responsibility in everything we produce. Ideas don’t belong to anyone, so it’s a collective endeavour. I think that scientists don’t have the responsibility to communicate the research themselves, but that their research cannot be isolated from society. I think it’s a very joyful experience to see that someone cares about what you do.

Why should artists care about science?

If you go to any academic institution, there’s always a scientific component, very often also a technological one. A scientific aspect of your life is always present. This is happening because we’re all on the same course. It’s a consequence of this presence of science in our culture. Artists have an important role in our society, and they help to spark conversations that are important to everyone. Sometimes it might seem as though they are coming from a very individual lens, but in fact they have a very large reach and impact. Not immediately, not something that you can count with data, but there is definitely an impact. Artists open these channels for communicating and thinking about a particular aspect of science, which is difficult to see from a scientific perspective. Because in any discipline, it’s amazing to see your activity from the eyes of others.

Creativity and curiosity are the parameters and competencies that make up artists and scientists

A few years back we did a little survey, and most of the scientists thought that by spending time with artists, they took a step back to think about their research from a different lens, and this changed their perspective. They thought of this as a very positive experience. So I think art is not only about communicating to the public, but about exploring the personal synergies of art and science. This is why artists are so important.

Do experimental and theoretical physicists have different attitudes towards art?

Typically, we think that theorists are much more open to artists, but I don’t agree. In my experiences at CERN, I found many engineers and experimental physicists being highly theoretical. Both value artistic perspectives and their ability to consider questions and scientific ideas in an unconventional way. Experimental physicists would emphasise engagement with instruments and data, while theoretical physicists would focus on conceptual abstraction.

By being with artists, many experimentalists feel that they have the opportunity to talk about things beyond their research. For example, we often talk about the “frontiers of knowledge”. When asked about this, experimentalists or theoretical physicists might tell us about something other than particle physics – like neuroscience, or the brain and consciousness. A scientist is a scientist. They are very curious about everything.

Do these interactions help to blur the distinction between art and science?

Well, here I’m a bit radical because I know that creativity is something we define. Creativity and curiosity are the parameters and competencies that make up artists and scientists. But to become a scientist or an artist you need years of training – it’s not that you can become one just because you are a curious and creative person.

Chroma VII work of art

Not many people can chat about particle physics, but scientists very often chat with artists. I saw artists speaking for hours with scientists about the Higgs field. When you see two people speaking about the same thing, but with different registers, knowledge and background, it’s a precious moment.

When facilitating these discussions between physicists and artists, we don’t speak only about physics, but about everything that worries them. Through that, grows a sort of intimacy that often becomes something else: a friendship. This is the point at which a scientist stops being an information point for an artist and becomes someone who deals with big questions alongside an artist – who is also a very knowledgeable and curious person. This is a process rich in contrast, and you get many interesting surprises out of these interactions.

But even in this moment, they are still artists and scientists. They don’t become this blurred figure that can do anything.

Can scientific discovery exist without art?

That’s a very tricky question. I think that art is a component of science, therefore science cannot exist without art – without the qualities that the artist and scientist have in common. To advance science, you have to create a question that needs to be answered experimentally.

Did discoveries in quantum mechanics affect the arts?

Everything is subjected to quantum mechanics. Maybe what it changed was an attitude towards uncertainty: what we see and what we think is there. There was an increased sense of doubt and general uncertainty in the arts.

Do art and science evolve together or separately?

I think there have been moments of convergence – you can clearly see it in any of the avant garde. The same applies to literature; for example, modernist writers showed a keen interest in science. Poets such as T S Eliot approached poetry with a clear resonance of the first scientific revolutions of the century. There are references to the contributions of Faraday, Maxwell and Planck. You can tell these artists and poets were informed and eager to follow what science was revealing about the world.

You can also note the influence of science in music, as physicists get a better understanding of the physical aspects of sound and matter. Physics became less about viewing the world through a lens, and instead focused on the invisible: the vibrations of matter, electricity, the innermost components of materials. At the end of the 19th and 20th centuries, these examples crop up constantly. It’s not just representing the world as you see it through a particular lens, but being involved in the phenomena of the world and these uncensored realities.

From the 1950s to the 1970s you can see these connections in every single moment. Science is very present in the work of artists, but my feeling is that we don’t have enough literature about it. We really need to conduct more research on this connection between humanities and science.

What are your favourite examples of art influencing science?

Feynman diagrams are one example. Feynman was amazing – a prodigy. Many people before him tried to represent things that escaped our intuition visually and failed. We also have the Pauli Archives here at CERN. Pauli was not the most popular father of quantum mechanics, but he was determined to not only understand mathematical equations but to visualise them, and share them with his friends and colleagues. This sort of endeavour goes beyond just writing – it is about the possibility of creating a tangible experience. I think scientists do that all the time by building machines, and then by trying to understand these machines statistically. I see that in the laboratory constantly, and it’s very revealing because usually people might think of these statistics as something no one cares about – that the visuals are clumsy and nerdy. But they’re not.

Even Leonardo da Vinci was known as a scientist and an artist, but his anatomical sketches were not discovered until hundreds of years after his other works. Newton was also paranoid about expressing his true scientific theories because of the social standards and politics of the time. His views were unorthodox, and he did not want to ruin his prestigious reputation.

Today’s culture also influences how we interpret history. We often think of Aristotle as a philosopher, yet he is also recognised for contributions to natural history. The same with Democritus, whose ideas laid foundations for scientific thought.

So I think that opening laboratories to artists is very revealing about the influence of today’s culture on science.

When did natural philosophy branch out into art and science?

I believe it was during the development of the scientific method: observation, analysis and the evolution of objectivity. The departure point was definitely when we developed a need to be objective. It took centuries to get where we are now, but I think there is a clear division: a line with philosophy, natural philosophy and natural history on one side, and modern science on the other. Today, I think art and science have different purposes. They convene at different moments, but there is always this detour. Some artists are very scientific minded, and some others are more abstract, but they are both bound to speculate massively.

It’s really good news for everyone that labs want to include non-scientists

For example, at our Arts at CERN programme we have had artists who were interested in niche scientific aspects. Erich Berger, an artist from Finland, was interested in designing a detector, and scientists whom he met kept telling him that he would need to calibrate the detector. The scientist and the artist here had different goals. For the scientist, the most important thing is that the detector has precision in the greatest complexity. And for the artist, it’s not. It’s about the process of creation, not the analysis.

Do you think that science is purely an objective medium while art is a subjective one?

No. It’s difficult to define subjectivity and objectivity. But art can be very objective. Artists create artefacts to convey their intended message. It’s not that these creations are standing alone without purpose. No, we are beyond that. Now art seeks meaning that is, in this context, grounded in scientific and technological expertise.

How do you see the future of art and science evolving?

There are financial threats to both disciplines. We are still in this moment where things look a bit bleak. But I think our programme is pioneering, because many scientific labs are developing their own arts programmes inspired by the example of Arts at CERN. This is really great, because unless you are in a laboratory, you don’t see what doing science is really about. We usually read science in the newspapers or listen to it on a podcast – everything is very much oriented to the communication of science, but making science is something very specific. It’s really good news for everyone that laboratories want to include non-scientists. Arts at CERN works mostly with visual artists, but you could imagine filmmakers, philosophers, those from the humanities, poets or almost anyone at all, depending on the model that one wants to create in the lab.

The post Encounters with artists appeared first on CERN Courier.

]]>
Opinion Over the past 10 years, Mónica Bello facilitated hundreds of encounters between artists and scientists as curator of the Arts at CERN programme. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_INT_Bello.jpg
Breaking new ground in flavour universality https://cerncourier.com/a/breaking-new-ground-in-flavour-universality/ Wed, 26 Mar 2025 13:43:49 +0000 https://cerncourier.com/?p=112758 A new result from the LHCb collaboration further tightens constraints on the lepton-flavour-universality violation in rare B decays.

The post Breaking new ground in flavour universality appeared first on CERN Courier.

]]>
LHCb figure 1

A new result from the LHCb collaboration supports the hypothesis that the rare decays B± → K±e+e− and B± → K±µ+µ− occur at the same rate, further tightening constraints on the magnitude of lepton flavour universality (LFU) violation in rare B decays. The new measurement is the most precise to date in the high-q2 region and the first of its kind at a hadron collider.

LFU is an accidental symmetry of the Standard Model (SM). Under LFU, each generation of lepton ℓ (electron, muon and tau lepton) is equally likely to interact with the W boson in decay processes such as B± → K±ℓ+ℓ−. This symmetry leads to the prediction that the ratio of branching fractions for these decay channels should be unity except for kinematic effects due to the different masses of the charged leptons. The most straightforward ratio to measure is that between the muon and electron decay modes, known as RK. Any significant deviation from RK = 1 could only be explained by the existence of new physics (NP) particles that preferentially couple to one lepton generation over another, violating LFU.
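Restoring the superscripts, RK for charged B mesons is defined as the ratio of decay rates integrated over a chosen window of dilepton invariant mass squared, q2:

```latex
R_K \;=\;
\frac{\displaystyle\int_{q^2_{\min}}^{q^2_{\max}}
      \frac{\mathrm{d}\Gamma(B^{+}\to K^{+}\mu^{+}\mu^{-})}{\mathrm{d}q^{2}}\,\mathrm{d}q^{2}}
     {\displaystyle\int_{q^2_{\min}}^{q^2_{\max}}
      \frac{\mathrm{d}\Gamma(B^{+}\to K^{+}e^{+}e^{-})}{\mathrm{d}q^{2}}\,\mathrm{d}q^{2}}
\;\simeq\; 1 \quad \text{in the SM.}
```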

B± → K±ℓ+ℓ− decays are a powerful probe for virtual NP particles. These decays involve an underlying b → s quark transition – an example of a flavour-changing neutral current (FCNC). FCNC transitions are extremely rare in the SM, as they occur only through higher-order Feynman diagrams. This makes them particularly sensitive to contributions from NP particles, which could significantly alter the characteristics of the decays. In this case, the mass of the NP particles could be much larger than can be produced directly at the LHC. “Indirect” searches for NP, such as measuring the precisely predicted ratio RK, can probe mass scales beyond the reach of direct-production searches with current experimental resources.

The new measurement is the most precise to date in the high-q2 region

In the decay process B± → K±ℓ+ℓ−, the final-state leptons can also originate from an intermediate resonant state, such as a J/ψ or ψ(2S). These resonant channels occur through tree-level Feynman diagrams. Their contributions significantly outnumber the non-resonant FCNC processes and are not expected to be affected by NP. RK is therefore measured in ranges of dilepton invariant mass-squared (q2), which exclude these resonances, to preserve sensitivity to potential NP effects in FCNC processes.

The new result from the LHCb collaboration measures RK in the high-q2 region, above the ψ(2S) resonance. The high-q2 region data has a different composition of backgrounds compared to the low-q2 data, leading to different strategies for their rejection and modelling, and different systematic effects. With RK expected to be unity in all domains in the SM, low-q2 and high-q2 measurements offer powerfully complementary constraints on the magnitude of LFU-violating NP in rare B decays.

The new measurement of RK agrees with the SM prediction of unity and is the most precise to date in the high-q2 region (figure 1). It complements a refined analysis below the J/ψ resonance published by LHCb in 2023, which also reported RK consistent with unity. Both results use the complete proton–proton collision data collected by LHCb from 2011 to 2018. They lay the groundwork for even more precise measurements with data from Run 3 and beyond.

The post Breaking new ground in flavour universality appeared first on CERN Courier.

]]>
News A new result from the LHCb collaboration further tightens constraints on the lepton-flavour-universality violation in rare B decays. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_EF_LHCb_feature.jpg
A new record for precision on B-meson lifetimes https://cerncourier.com/a/a-new-record-for-precision-on-b-meson-lifetimes/ Wed, 26 Mar 2025 13:24:30 +0000 https://cerncourier.com/?p=112771 As direct searches for physics beyond the Standard Model continue to push frontiers at the LHC, the b-hadron physics sector remains a crucial source of insight for testing established theoretical models.

The post A new record for precision on B-meson lifetimes appeared first on CERN Courier.

]]>
ATLAS figure 1

As direct searches for physics beyond the Standard Model continue to push frontiers at the LHC, the b-hadron physics sector remains a crucial source of insight for testing established theoretical models.

The ATLAS collaboration recently published a new measurement of the B0 lifetime using B0 → J/ψK*0 decays from the entire Run-2 dataset it has recorded at 13 TeV. The result improves the precision of previous world-leading measurements by the CMS and LHCb collaborations by a factor of two.

Studies of b-hadron lifetimes probe our understanding of the weak interaction. The lifetimes of b-hadrons can be systematically computed within the heavy-quark expansion (HQE) framework, where b-hadron observables are expressed as a perturbative expansion in inverse powers of the b-quark mass.
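Schematically – suppressing the perturbative coefficients and operator matrix elements that the real calculation requires – the HQE expresses an inclusive b-hadron decay width as a series of power corrections, with the leading correction arising only at order 1/mb2:

```latex
\Gamma(H_b) \;=\; \Gamma_0\left(1
  + \frac{c_2\,\langle O_2\rangle}{m_b^{2}}
  + \frac{c_3\,\langle O_3\rangle}{m_b^{3}} + \dots\right)
```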

ATLAS measures the “effective” B0 lifetime, which represents the average decay time incorporating effects from mixing and CP contributions, as τ(B0) = 1.5053 ± 0.0012 (stat.) ± 0.0035 (syst.) ps. The result is consistent with previous measurements published by ATLAS and other experiments, as summarised in figure 1. It also aligns with theoretical predictions from HQE and lattice QCD, as well as with the experimental world average.

The analysis benefitted from the large Run-2 dataset and a refined trigger selection, enabling the collection of an extensive sample of 2.5 million B0 J/ψK*0 decays. Events with a J/ψ meson decaying into two muons with sufficient transverse momentum are cleanly identified in the ATLAS Muon Spectrometer by the first-level hardware trigger. In the next-level software trigger, exploiting the full detector information, these muons are then combined with two tracks measured by the Inner Detector, ensuring they originate from the same vertex.

The B0-meson lifetime is determined through a two-dimensional unbinned maximum-likelihood fit, utilising the measured B0-candidate mass and decay time, and accounting for both signal and background components. The limited hadronic particle-identification capability of ATLAS requires careful modelling of the significant backgrounds from other processes that produce J/ψ mesons. The sensitivity of the fit is increased by estimating the uncertainty of the decay-time measurement provided by the ATLAS tracking and vertexing algorithms on a per-candidate basis. The resulting lifetime measurement is limited by systematic uncertainties, with the largest contributions arising from the correlation between B0 mass and lifetime, and ambiguities in modelling the mass distribution. 
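The sketch below is a deliberately minimal, signal-only illustration of an unbinned maximum-likelihood lifetime fit: one dimension, one global resolution parameter, no background. The real ATLAS fit is two-dimensional in mass and decay time, includes background components and uses per-candidate decay-time uncertainties; the toy values of τ and σ here are assumptions.

```python
# Minimal unbinned maximum-likelihood lifetime fit on toy data (illustration
# only): an exponential decay convolved analytically with a Gaussian resolution.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import erfc

rng = np.random.default_rng(1)
tau_true, sigma = 1.505, 0.10                       # toy lifetime and resolution, in ps
t = rng.exponential(tau_true, 100_000) + rng.normal(0.0, sigma, 100_000)

def nll(tau):
    # pdf of an exponential decay smeared by a Gaussian of width sigma
    pdf = (0.5 / tau) * np.exp(sigma**2 / (2 * tau**2) - t / tau) \
          * erfc((sigma**2 / tau - t) / (np.sqrt(2) * sigma))
    return -np.sum(np.log(pdf))

fit = minimize_scalar(nll, bounds=(1.0, 2.0), method="bounded")
print(f"fitted lifetime: {fit.x:.4f} ps")           # close to the input 1.505 ps
```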

ATLAS combined its measurement with the average decay width (Γs) of the light and heavy Bs-meson mass eigenstates, also measured by ATLAS, to determine the ratio of decay widths as Γd/Γs = 0.9905 ± 0.0022 (stat.) ± 0.0036 (syst.) ± 0.0057 (ext.). The result is consistent with unity and provides a stringent test of QCD predictions, which also support a value near unity.
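Adding the three quoted uncertainties in quadrature – a simplification that ignores any correlations between them – shows how far the measured ratio sits from unity:

```python
# Quick consistency check of the quoted ratio with unity.
ratio = 0.9905
stat, syst, ext = 0.0022, 0.0036, 0.0057
total = (stat**2 + syst**2 + ext**2) ** 0.5
print(f"total uncertainty: {total:.4f}")                      # ~0.0071
print(f"pull from unity:   {(1 - ratio) / total:.1f} sigma")  # ~1.3 sigma
```

A pull of about 1.3σ is indeed consistent with the near-unity QCD expectation quoted above.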

The post A new record for precision on B-meson lifetimes appeared first on CERN Courier.

]]>
News As direct searches for physics beyond the Standard Model continue to push frontiers at the LHC, the b-hadron physics sector remains a crucial source of insight for testing established theoretical models. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_EF_ATLAS_feature.jpg
Beyond Bohr and Einstein https://cerncourier.com/a/beyond-bohr-and-einstein/ Wed, 26 Mar 2025 13:22:32 +0000 https://cerncourier.com/?p=112808 Jim Al-Khalili reviews Quantum Drama, a new book by physicist and science writer Jim Baggott and the late historian of science John L Heilbron.

The post Beyond Bohr and Einstein appeared first on CERN Courier.

]]>
When I was an undergraduate physics student in the mid-1980s, I fell in love with the philosophy of quantum mechanics. I devoured biographies of the greats of early-20th-century atomic physics – physicists like Bohr, Heisenberg, Schrödinger, Pauli, Dirac, Fermi and Born. To me, as I was struggling with the formalism of quantum mechanics, there seemed to be something so exciting, magical even, about that era, particularly those wonder years of the mid-1920s when its mathematical framework was being developed and the secrets of the quantum world were revealing themselves.

I went on to do a PhD in nuclear reaction theory, which meant I spent most of my time working through mathematical derivations, becoming familiar with S-matrices, Green’s functions and scattering amplitudes, scribbling pages of angular-momentum algebra and coding in Fortran 77. And I loved that stuff. There certainly seemed to be little time for worrying about what was really going on inside atomic nuclei. Indeed, I was learning that even the notion of something “really going on” was a vague one. My generation of theoretical physicists were still being very firmly told to “shut up and calculate”, as many adherents of the Copenhagen school of quantum mechanics were keen to advocate. To be fair, so much progress has been made over the past century, in nuclear and particle physics, quantum optics, condensed-matter physics and quantum chemistry, that philosophical issues were seen as an unnecessary distraction. I recall one senior colleague, frustrated by my abiding interest in interpretational matters, admonishing me with: “Jim, an electron is an electron is an electron. Stop trying to say more about it.” And there certainly seemed to be very little in the textbooks I was reading about unresolved issues arising from such topics as the EPR (Einstein–Podolsky–Rosen) paradox and the measurement problem, let alone any analysis of the work of Hugh Everett and David Bohm, who were regarded as mavericks. The Copenhagen hegemony ruled supreme.

What I wasn’t aware of until later in my career was that a community of physicists had indeed continued to worry and think about such matters. These physicists were doing more than just debating and philosophising – they were slowly advancing our understanding of the quantum world. Experimentalists such as Alain Aspect, John Clauser and Anton Zeilinger were devising ingenious experiments in quantum optics – all three of whom were only awarded the Nobel Prize for their work on tests of John Bell’s famous inequality in 2022, which says a lot about how we are only now acknowledging their contribution. Meanwhile, theorists such as Wojciech Zurek, Erich Joos, Dieter Zeh, Abner Shimony and Asher Peres, to name just a few, were formalising ideas on entanglement and decoherence theory. It is certainly high time that quantum-mechanics textbooks – even advanced undergraduate ones – contained their new insights.

Quantum Drama

All of which brings me to Quantum Drama, a new popular-science book and collaboration between the physicist and science writer Jim Baggott and the late historian of science John L Heilbron. In terms of level, the book is at the higher end of the popular-science market and, as such, will probably be of most interest to, for example, readers of CERN Courier. If I have a criticism of the book it is that its level is not consistent. For it tries to be all things. On occasion, it has wonderful biographical detail, often of less well-known but highly deserving characters. It is also full of wit and new insights. But then sometimes it can get mired in technical detail, such as in the lengthy descriptions of the different Bell tests, which I imagine only professional physicists are likely to fully appreciate.

Having said that, the book is certainly timely. This year the world celebrates the centenary of quantum physics, since the publication of the momentous papers of Heisenberg and Schrödinger on matrix and wave mechanics, in 1925 and 1926, respectively. Progress in quantum information theory and in the development of new quantum technologies is also gathering pace right now, with the promise of quantum computers, quantum sensing and quantum encryption getting ever closer. This all provides an opportunity for the philosophy of quantum mechanics to finally emerge from the shadows into mainstream debate again.

A new narrative

So, what makes Quantum Drama stand out from other books that retell the story of quantum mechanics? Well, I would say that most historical accounts tend to focus only on that golden age between 1900 and 1927, which came to an end at the Solvay Conference in Brussels and those well-documented few days when Einstein and Bohr had their debate about what it all means. While these two giants of 20th-century physics make the front cover of the book, Quantum Drama takes the story on beyond that famous conference. Other accounts, both popular and scholarly, tend to push the narrative that Bohr won the argument, leaving generations of physicists with the idea that the interpretational issues had been resolved – apart, that is, from the odd dissenting voices from the likes of Everett or Bohm who tried, unsuccessfully it was argued, to put a spanner in the Copenhagen works. All the real progress in quantum foundations after 1927, or so we were told, was in the development of quantum field theories, such as QED and QCD, the excitement of high-energy physics and the birth of the Standard Model, with the likes of Murray Gell-Mann and Steven Weinberg replacing Heisenberg and Schrödinger at centre stage. Quantum Drama takes up the story after 1927, showing that there has been a lively, exciting and ongoing dispute over what it all means, long after the death of those two giants of physics. In fact, the period up to Solvay 1927 is all dealt with in Act I of the book. The subtitle puts it well: From the Bohr–Einstein Debate to the Riddle of Entanglement.

The Bohr–Einstein debate is still very much alive and kicking

All in all, Quantum Drama delivers something remarkable, for it shines a light on all the muddle, complexity and confusion surrounding a century of debate about the meaning of quantum mechanics and the famous “Copenhagen spirit”, treating the subject with thoroughness and genuine scholarship, and showing that the Bohr–Einstein debate is still very much alive and kicking.

The post Beyond Bohr and Einstein appeared first on CERN Courier.

]]>
Review Jim Al-Khalili reviews Quantum Drama, a new book by physicist and science writer Jim Baggott and the late historian of science John L Heilbron. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_REV_Bell.jpg
Guido Barbiellini 1936–2024 https://cerncourier.com/a/guido-barbiellini-1936-2024/ Wed, 26 Mar 2025 13:21:08 +0000 https://cerncourier.com/?p=112839 Guido Barbiellini Amidei, who passed away on 15 November 2024, made fundamental contributions to both particle physics and astrophysics.

The post Guido Barbiellini 1936–2024 appeared first on CERN Courier.

]]>
Guido Barbiellini

Guido Barbiellini Amidei, who passed away on 15 November 2024, made fundamental contributions to both particle physics and astrophysics.

In 1959 Guido earned a degree in physics from Rome University with a thesis on electron bremsstrahlung in monocrystals under Giordano Diambrini, a skilled experimentalist and excellent teacher. Another key mentor was Marcello Conversi, spokesperson for one of the detectors at the Adone electron–positron collider at INFN Frascati, where Guido became a staff member and developed the first luminometer based on small-angle electron–positron scattering – a technique still used today. Together with Shuji Orito, he also built the first double-tagging system for studying gamma-ray collisions.

Guido later spent several years at CERN, collaborating with Carlo Rubbia, first on the study of K-meson decays at the Proton Synchrotron and then on small-angle proton–proton scattering at the Intersecting Storage Rings. In 1974 he proposed an experiment in a new field for him: neutrino-electron scattering, a fundamental but extremely rare phenomenon known from a handful of events seen in Gargamelle. To distinguish electromagnetic showers from hadronic ones, the CHARM collaboration built a “light” calorimeter made of 150 tonnes of Carrara marble. From 1979 to 1983, 200 electron–neutrino scattering events were recorded.

In 1980 Guido remarked to his friend Ugo Amaldi: “Why don’t we start our own collaboration for LEP instead of joining others?” This suggestion sparked the genesis of the DELPHI collaboration, in which Guido played a pivotal role in defining its scientific objectives and overseeing the construction of the barrel electromagnetic calorimeter. He also contributed significantly to the design of the luminosity monitors. Above all, Guido was a constant driving force within the experiment, offering innovative ideas for fundamental physics during the transition to LEP’s higher-energy phase, and engaging tirelessly with both young students and senior colleagues.

Guido’s insatiable scientific curiosity also extended to CP symmetry violation. In 1989 he co-organised a workshop, with Konrad Kleinknecht and Walter Hoogland, exploring the possibility of an electron–positron ϕ-factory to study CP violation in neutral kaon decays. Two of his papers, with Claudio Santoni, laid the groundwork for constructing the DAΦNE collider in Frascati.

The year 1987 was a turning point for Guido. Firstly, he became a professor at the University of Trieste. Secondly, the detection of neutrinos produced by Supernova 1987A inspired a letter, published in Nature in collaboration with Giuseppe Cocconi, in which it was established that neutrinos have a charge smaller than 10⁻¹⁷ elementary charges. Thirdly, Guido presented a new idea to mount silicon detectors (which he had encountered through work done in DELPHI by Bernard Hyams and Peter Weilhammer) on the International Space Station or a spacecraft to detect cosmic rays and their showers, which led to a seminal paper.

At the beginning of the 1990s, an international collaboration for a large NASA space mission focused on gamma-ray astrophysics (initially named GLAST) began to form, led by SLAC scientists. Guido was among the first proponents and later was the national representative of many INFN groups. The mission, later renamed Fermi, was launched in 2008 and continues to produce significant insights in topics ranging from neutron stars and black holes to dark-matter annihilation.

Beyond GLAST, Guido was captivated by the application of silicon sensors to a new programme of small space missions initiated by the Italian Space Agency. The AGILE gamma-ray astrophysics mission, for which Guido was co-principal investigator, was conceived and approved during this period. Launched in 2007, AGILE made numerous discoveries over nearly 17 years, including identifying the origin of hadronic cosmic rays in supernova remnants and discovering novel, rapid particle acceleration phenomena in the Crab Nebula.

Guido’s passion for physics made him inexhaustible. He always brought fresh insights and thoughtful judgments, fostering a collaborative environment that enriched all the projects he took part in. He was not only a brilliant physicist but also a true gentleman of calm and mild manners, widely appreciated as a teacher and as director of INFN Trieste. Intellectually free and always smiling, he conveyed determination and commitment with grace and a profound dedication to nurturing young talents. He will be deeply missed.

The post Guido Barbiellini 1936–2024 appeared first on CERN Courier.

]]>
News Guido Barbiellini Amidei, who passed away on 15 November 2024, made fundamental contributions to both particle physics and astrophysics. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_Obits_Barbiellini_feature.jpg
Meinhard Regler 1941–2024 https://cerncourier.com/a/meinhard-regler-1941-2024/ Wed, 26 Mar 2025 13:20:13 +0000 https://cerncourier.com/?p=112845 Meinhard Regler, an expert in detector development and software analysis, passed away on 22 September 2024 at the age of 83.

The post Meinhard Regler 1941–2024 appeared first on CERN Courier.

]]>
Meinhard Regler

Meinhard Regler, an expert in detector development and software analysis, passed away on 22 September 2024 at the age of 83.

Born and raised in Vienna, Meinhard studied physics at the Technical University Vienna (TUW) and completed his master’s thesis on deuteron acceleration in a linac at CERN. In 1966 he joined the newly founded Institute of High Energy Physics (HEPHY) of the Austrian Academy of Sciences. He settled in Geneva to participate in a counter experiment at the CERN Proton Synchrotron, and in 1970 obtained his PhD with distinction from TUW.

In 1970 Meinhard became staff member in CERN’s data-handling division. He joined the Split Field Magnet experiment at the Intersecting Storage Rings and, together with HEPHY, contributed specially designed multi-wire proportional chambers. Early on, he realised the importance of rigorous statistical methods for track and vertex reconstruction in complex detectors, resulting in several seminal papers.

In 1975 Meinhard returned to Vienna as leader of HEPHY’s experimental division. From 1993 until his retirement at the end of 2006 he was deputy director and responsible for the detector development and software analysis groups. As a faculty member of TUW he created a series of specialised lectures and practical courses, which shaped a generation of particle physicists. In 1978 Meinhard and Georges Charpak founded the Wire Chamber Conference, now known as the Vienna Conference on Instrumentation (VCI).

Meinhard continued his participation in experiments at CERN, including WA6, UA1 and the European Hybrid Spectrometer. After joining the DELPHI experiment at LEP, he realised the emerging potential of semiconductor tracking devices and established this technology at HEPHY. First applied at DELPHI’s Very Forward Tracker, this expertise was successfully continued with important contributions to the CMS tracker at LHC, the Belle vertex detector at KEKB and several others.

Meinhard is author and co-author of several hundred scientific papers. His and his group’s contributions to track and vertex reconstruction are summarised in the standard textbook Data Analysis Techniques for High-Energy Physics, published by Cambridge University Press and translated into Russian and Chinese.

All that would suffice for a lifetime achievement, but not so for Meinhard. Inspired by the fall of the Iron Curtain, he envisaged the creation of an international centre of excellence in the Vienna region. Initially planned as a spallation neutron source, the project eventually transmuted into a facility for cancer therapy by proton and carbon-ion beams, called MedAustron. Financed by the province of Lower Austria and the hosting city of Wiener Neustadt, and with crucial scientific and engineering support from CERN and Austrian institutes, clinical treatment started in 2016.

Meinhard received several prizes and was rewarded with the highest scientific decoration of Austria

Meinhard was invited as a lecturer to many international conferences and post-graduate schools worldwide. He chaired the VCI series, organised several accelerator schools and conferences in Austria, and served on the boards of the European Physical Society’s international group on accelerators. For his tireless scientific efforts and in particular the realisation of MedAustron, Meinhard received several prizes and was rewarded with the highest scientific decoration of Austria – the Honorary Cross for Science and Arts of First Class.

He was also a co-founder and long-term president of a non-profit organisation in support of mentally handicapped people. His character was incorruptible, strictly committed to truth and honesty, and responsive to loyalty, independent thinking and constructive criticism.

In Meinhard Regler we have lost an enthusiastic scientist, visionary innovator, talented organiser, gifted teacher, great humanist and good friend. His legacy will forever stay with us.

The post Meinhard Regler 1941–2024 appeared first on CERN Courier.

]]>
News Meinhard Regler, an expert in detector development and software analysis, passed away on 22 September 2024 at the age of 83. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_Obits_Regler_feature.jpg
Iosif Khriplovich 1937–2024 https://cerncourier.com/a/iosif-khriplovich-1937-2024/ Wed, 26 Mar 2025 13:19:30 +0000 https://cerncourier.com/?p=112842 Renowned theorist Iosif Khriplovich passed away on 26 September 2024, aged 87.

The post Iosif Khriplovich 1937–2024 appeared first on CERN Courier.

]]>
Renowned Soviet/Russian theorist Iosif Khriplovich passed away on 26 September 2024, aged 87. Born in 1937 in Ukraine to a Jewish family, he graduated from Kiev University and moved to the newly built Academgorodok in Siberia. From 1959 to 2014 he was a prominent member of the theory department at the Budker Institute of Nuclear Physics. He combined his research with teaching at Novosibirsk University, where he also held a professorship in 1983–2009. In 2014 he moved to St. Petersburg to take up a professorial position at Petersburg University and was a corresponding member of the Russian Academy of Sciences from 2000.

In a paper published in 1969, Khriplovich was the first to discover the phenomenon of anti-screening in the SU(2) Yang–Mills theory by calculating the first loop correction to the charge renormalisation. This immediately translates into the crucial first coefficient (–22/3) of the Gell-Mann–Low function and asymptotic freedom of the theory.

Regrettably, Khriplovich did not follow this interpretation of his result even after the key SLAC experiment on deep inelastic scattering and its subsequent partonic interpretation by Feynman. The honour of the discovery of asymptotic freedom in QCD went to three authors of papers published in 1973, who seemingly did not know of Khriplovich’s calculations.

In the early 1970s, Khriplovich’s interests turned to fundamental questions on the way towards the Standard Model. One was whether the electroweak theory is described by the Weinberg–Salam model, with neutral currents interacting via Z bosons, or the Georgi–Glashow model without them. While neutrino scattering on nucleons was soon confirmed, the electron interaction with nucleons was still unchecked. One practical way to find out was to use atomic spectroscopy to look for any mixing between states of opposite parity. Actively entering this area, Khriplovich and his students worked out quantitative predictions for the rotation of laser polarisation due to the weak interaction between electrons and nucleons. Their predictions were triumphantly confirmed in experiments, firstly by Barkov and Zolotorev at the Budker Institute. The same parity violating interaction was later observed at SLAC in 1978, proving the Z-exchange and the Weinberg–Salam model beyond any doubt. In 1973, together with Arkady Vainshtein, Khriplovich also derived the first solid limit on the mass of the charm quark that was unexpectedly discovered the following year.

He became engaged in Yang–Mills theories at a time when very few people were interested in them

The work of Khriplovich and his group significantly advanced the theory of many-electron atoms and contributed to the subsequent studies of the violation of fundamental symmetries in processes involving elementary particles, atoms, molecules and atomic nuclei. His students and later close collaborators, such as Victor Flambaum, Oleg Sushkov and Maxim Pospelov, grew into strong physicists who made important contributions to various subfields of theoretical physics. He was awarded the Silver Dirac Medal by the University of New South Wales (Sydney) and the Pomeranchuk Prize by the Institute of Theoretical and Experimental Physics (Moscow).

Yulik, as he was affectionately known, had his own style in physics. He was feisty and focused on issues where he could become a trailblazer, unafraid to cut relations with scientists of any rank if he felt their behaviour did not match his high ethical standards. This is why he became engaged in Yang–Mills theories at a time when very few people were interested in them. Yet, Yulik was always graceful and respectful in his interactions with others, and smiling, as we would like to remember him.

The post Iosif Khriplovich 1937–2024 appeared first on CERN Courier.

]]>
News Renowned theorist Iosif Khriplovich passed away on 26 September 2024, aged 87. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_Obits_Khriplovich.jpg
Strategy symposium shapes up https://cerncourier.com/a/strategy-symposium-shapes-up/ Wed, 26 Mar 2025 13:17:46 +0000 https://cerncourier.com/?p=112593 The Open Symposium of the 2026 update to the European Strategy for Particle Physics will see scientists from around the world debate the future of the field.

The post Strategy symposium shapes up appeared first on CERN Courier.

]]>
Registration is now open for the Open Symposium of the 2026 update to the European Strategy for Particle Physics (ESPP). It will take place from 23 to 27 June at Lido di Venezia in Italy, and see scientists from around the world debate the inputs to the ESPP (see “A call to engage”).

The symposium will begin by surveying the implementation of the last strategy process, whose recommendations were approved by the CERN Council in June 2020. In-depth working-group discussions on all areas of physics and technology will follow.

The rest of the week will see plenary sessions on the different physics and technology areas, starting with various proposals for possible large accelerator projects at CERN, and the status and plans in other regions of the world. Open questions, as well as how they can be addressed by the proposed projects, will be presented in rapporteur talks. This will be followed by longer discussion blocks where the full community can get engaged. On the final day, members of the European Strategy Group will summarise the national inputs and other overarching topics to the ESPP.

The post Strategy symposium shapes up appeared first on CERN Courier.

]]>
News The Open Symposium of the 2026 update to the European Strategy for Particle Physics will see scientists from around the world debate the future of the field. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_NA_symposium.jpg
Karel Šafařík 1953–2024 https://cerncourier.com/a/karel-safarik-1953-2024/ Wed, 26 Mar 2025 13:16:21 +0000 https://cerncourier.com/?p=112849 Karel Šafařík, one of the founding members of the ALICE collaboration, passed away on 7 October 2024.

The post Karel Šafařík 1953–2024 appeared first on CERN Courier.

]]>
Karel Šafařík, one of the founding members of the ALICE collaboration, passed away on 7 October 2024.

Karel graduated in theoretical physics in Bratislava, Slovakia (then Czechoslovakia) in 1976 and worked at JINR Dubna for over 10 years, participating in experiments in Serpukhov and doing theoretical studies on the phenomenology of particle production at high energies. In 1990 he joined Collège de France and the heavy-ion programme at CERN, soon becoming one of the most influential scientists in the Omega series of heavy-ion experiments (WA85, WA94, WA97, NA57) at the CERN Super Proton Synchrotron (SPS). In 2002 Karel was awarded the Slovak Academy of Sciences Prize for his contributions to the observation of the enhancement of the production of multi-strange particles in heavy-ion collisions at the SPS. In 2013 he was awarded the medal of the Czech Physical Society.

As early as 1991, Karel was part of the small group who designed the first heavy-ion detector for the LHC, which later became ALICE. He played a central role in shaping the ALICE experiment, from the definition of physics topics and the detector layout to the design of the data format, tracking, data storage and data analysis. He was pivotal in convincing the collaboration to introduce two layers of pixel detectors to reconstruct decays of charm hadrons only a few tens of microns from the primary vertex in central lead–lead collisions at the LHC – an idea considered by many to be impossible in heavy-ion collisions, but that is now one of the pillars of the ALICE physics programme. He was the ALICE physics coordinator for many years leading up to and including first data taking. Over the years, he also made multiple contributions to ALICE upgrade studies and became known as the “wise man” to be consulted on the trickiest questions.

Karel was a top-class physicist, with a sharp analytical mind, a legendary memory, a seemingly unlimited set of competences ranging from higher mathematics to formal theory, and from detector physics to high-performance computing. At the same time he was a generous, caring and kind colleague who supported, helped, mentored and guided a large number of ALICE collaborators. We miss him dearly.

The post Karel Šafařík 1953–2024 appeared first on CERN Courier.

]]>
News Karel Šafařík, one of the founding members of the ALICE collaboration, passed away on 7 October 2024. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_Obits_Safarik.jpg
Günter Wolf 1937–2024 https://cerncourier.com/a/gunter-wolf-1937-2024/ Wed, 26 Mar 2025 13:15:16 +0000 https://cerncourier.com/?p=112852 Günter Wolf, who played a leading role in the planning, construction and data analysis of experiments that were instrumental in establishing the Standard Model, passed away on 29 October 2024 at the age of 86.

The post Günter Wolf 1937–2024 appeared first on CERN Courier.

]]>
Günter Wolf

Günter Wolf, who played a leading role in the planning, construction and data analysis of experiments that were instrumental in establishing the Standard Model, passed away on 29 October 2024 at the age of 86. He significantly shaped and contributed to the research programme of DESY, and knew better than almost anyone how to form international collaborations and lead them to the highest achievements.

Born in Ulm, Germany in 1937, Wolf studied physics in Tübingen. At the urging of his supervisor Helmut Faissner, he went to Hamburg in 1961 where the DESY synchrotron was being built under DESY founder Willibald Jentschke. Together with Erich Lohrmann and Martin Teucher, he was involved in the preparation of the bubble-chamber experiments there and at the same time took part in experiments at CERN.

The first phase of experiments with high-energy photons at the DESY synchrotron, in which he was involved, had produced widely recognised results on the electromagnetic interactions of elementary particles. In 1967 Wolf seized the opportunity to continue this research at the higher energies of the recently completed linear accelerator at Stanford University (SLAC). He became the spokesperson for an experiment with a polarised gamma beam, which provided new insights into the nature of vector mesons.

In 1971, Jentschke succeeded in bringing Wolf back to Hamburg as senior scientist. He remained associated with DESY for the rest of his life and became a leader in the planning, construction and analysis of key DESY experiments.

Together with Bjørn Wiik, as part of an international collaboration, Wolf designed and realised the DASP detector for DORIS, the first electron–positron storage ring at DESY. This led to the discovery of the excited states of charmonium in 1975 and thus to the ultimate confirmation that quarks are particles. For the next, larger electron–positron storage ring, PETRA, he designed the TASSO detector, again together with Wiik. In 1979, the TASSO collaboration was able to announce the discovery of the gluon through its spokesperson Wolf, for which he, together with colleagues from TASSO, was awarded the High Energy Particle Physics Prize of the European Physical Society.

Wolf’s negotiating skills and deep understanding of physics and technology served particle physics worldwide

In 1982 Wolf became the chair of the experiment selection committee for the planned LEP collider at CERN. His deep understanding of physics and technology, and his negotiating skills, were an essential foundation for the successful LEP programme, just one example of how Wolf has served particle physics worldwide as a member of international scientific committees.

At the same time, Wolf was involved in the planning of the physics programme for the electron–proton collider HERA. The ZEUS general-purpose detector for experiments at HERA was the work of an international collaboration of more than 400 scientists, which Wolf brought together and led as its spokesperson for many years. The experiments at HERA ran from 1992 to 2007, producing outstanding results that include the direct demonstration of the unification of the weak and electromagnetic force at high momentum transfers, the precise measurement of the structure of the proton, which is determined by quarks and gluons, and the surprising finding that there are collisions in which the proton remains intact even at the highest momentum transfers. In 2011 Wolf was awarded the Stern–Gerlach Medal of the German Physical Society, its highest award for achievements in experimental physics.

When dealing with colleagues and staff, Günter Wolf was always friendly, helpful, encouraging and inspiring, but at the same time demanding and insistent on precision and scientific excellence. He took the opinions of others seriously, but only a thorough and competent analysis could convince him. As a result, he enjoyed the greatest respect from everyone and became a role model and friend to many. DESY owes its reputation in the international physics community not least to people like him.

The post Günter Wolf 1937–2024 appeared first on CERN Courier.

]]>
News Günter Wolf, who played a leading role in the planning, construction and data analysis of experiments that were instrumental in establishing the Standard Model, passed away on 29 October 2024 at the age of 86. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_Obits_Wolf_feature.jpg
A call to engage https://cerncourier.com/a/a-call-to-engage/ Mon, 24 Mar 2025 08:47:33 +0000 https://cerncourier.com/?p=112676 The secretary of the 2026 European strategy update, Karl Jakobs, talks about the strong community involvement needed to reach a consensus for the future of our field.

The post A call to engage appeared first on CERN Courier.

]]>
The European strategy for particle physics is the cornerstone of Europe’s decision-making process for the long-term future of the field. In March 2024 CERN Council launched the programme for the third update of the strategy. The European Strategy Group (ESG) and the strategy secretariat for this update were established by CERN Council in June 2024 to organise the full process. Over the past few months, important aspects of the process have been set up, and these are described in more detail on the strategy web pages at europeanstrategyupdate.web.cern.ch/welcome.

The Physics Preparatory Group (PPG) will play an important role in distilling the community’s scientific input and scientific discussions at the open symposium in Venice in June 2025 into a “physics briefing book”. At its meeting in September 2024, CERN Council appointed eight members of the PPG, four on the recommendation of the scientific policy committee and four on the recommendation of the European Committee for Future Accelerators (ECFA). In addition, the PPG has one representative from CERN and two representatives each from the Americas and Asia.

The strategy secretariat also proposed to form nine working groups to cover the full range of physics topics as well as the technology areas of accelerators, detectors and computing. The work of these groups will be co-organised by two conveners, with one of them being a member of the PPG. In addition, an early-career researcher has been appointed to each group to act as a scientific secretary. Both the appointments of the co-conveners and of the early-career researchers are important to increase the engagement by the broader community in the current update. The full composition of the PPG, the co-conveners and the scientific secretaries of the working groups is available on the strategy web pages.

Karl Jakobs

The strategy secretariat has also devised guidelines for input by the community. Any submitted documents must be no more than 10 pages long and provide a comprehensive and self-contained summary of the input. Additional information and details can be submitted in a separate backup document that can be consulted on by the PPG if clarification on any aspect is required. A backup document is not, however, mandatory.

A major component of the process is the set of inputs from national high-energy physics communities, which are expected to be collected individually by each country, and in some cases by region. The information collected from different countries and regions will be most useful if it is as coherent and uniform as possible when addressing the key issues. To assist with this, the ECFA has put together a set of guidelines.

It is anticipated that a number of proposals for large-scale research projects will be submitted as input to the strategy process, including, but not limited to, particle colliders and collider detectors. These proposals are likely to vary in scale, anticipated timeline and technical maturity. In addition to studying the scientific potential of these projects, the ESG wishes to evaluate the sequence of delivery steps and the challenges associated with delivery, and to understand how each project could fit into the wider roadmap for European particle physics. In order to allow a straightforward comparison of projects, we therefore request that all large-scale projects submit a standardised set of technical data in addition to their physics case and technical description.

It is anticipated that a number of proposals for large-scale research projects will be submitted as input to the strategy

To allow the community to take into account and to react to the submissions collected by March 2025 and to the content of the briefing book, national communities are offered further opportunities for input: first ahead of the open symposium (see p11), with a deadline of 26 May 2025; and then ahead of the drafting session, with a deadline of 14 November 2025.

In this strategy process the community must converge on a preferred option for the next collider at CERN and identify a prioritised list of alternative options. The outcome of the process will provide the basis for the decision by CERN Council in 2027 or 2028 on the construction of the next large collider at CERN, following the High-Luminosity LHC. Areas of priority for exploration complementary to colliders and for other experiments to be considered at CERN and other laboratories in Europe will also be identified, as well as priorities for participation in projects outside Europe.

Given the importance of this process and its outcomes, I encourage strong community involvement throughout to reach a consensus for the future of our field.

The post A call to engage appeared first on CERN Courier.

]]>
Opinion The secretary of the 2026 European strategy update, Karl Jakobs, talks about the strong community involvement needed to reach a consensus for the future of our field. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_VIEW-motti.jpg
Edoardo Amaldi and the birth of Big Science https://cerncourier.com/a/edoardo-amaldi-and-the-birth-of-big-science/ Mon, 24 Mar 2025 08:45:02 +0000 https://cerncourier.com/?p=112656 In an interview drawing on memories from childhood and throughout his own distinguished career at CERN, Ugo Amaldi offers deeply personal insights into his father Edoardo’s foundational contributions to international cooperation in science.

The post Edoardo Amaldi and the birth of Big Science appeared first on CERN Courier.

]]>
Ugo Amaldi beside a portrait of his father Edoardo

Should we start with your father’s involvement in the founding of CERN?

I began hearing my father talk about a new European laboratory while I was still in high school in Rome. Our lunch table was always alive with discussions about science, physics and the vision of this new laboratory. Later, I learned that between 1948 and 1949, my father was deeply engaged in these conversations with two of his friends: Gilberto Bernardini, a well-known cosmic-ray expert, and Bruno Ferretti, a professor of theoretical physics at Rome University. I was 15 years old and those table discussions remain vivid in my memory.

So, the idea of a European laboratory was already being discussed before the 1950 UNESCO meeting?

Yes, indeed. Several eminent European physicists, including my father, Pierre Auger, Lew Kowarski and Francis Perrin, recognised that Europe could only be competitive in nuclear physics through collaborative efforts. All the actors wanted to create a research centre that would stop the post-war exodus of physics talent to North America and help rebuild European science. I now know that my father’s involvement began in 1946 when he travelled to Cambridge, Massachusetts, for a conference. There, he met Nobel Prize winner John Cockcroft, and their conversations planted in his mind the first seeds for a European laboratory.

Parallel to scientific discussions, there was an important political initiative led by Swiss philosopher and writer Denis de Rougemont. After spending the war years at Princeton University, he returned to Europe with a vision of fostering unity and peace. He established the Institute of European Culture in Lausanne, Switzerland, where politicians from France, Britain and Germany would meet. In December 1949, during the European Cultural Conference in Lausanne, French Nobel Prize winner Louis de Broglie sent a letter advocating for a European laboratory where scientists from across the continent could work together peacefully.

The Amaldi family in 1948

My father strongly believed in the importance of accelerators to advance the new field that, at the time, was at the crossroads between nuclear physics and cosmic-ray physics. Before the war, in 1936, he had travelled to Berkeley to learn about cyclotrons from Ernest Lawrence. He even attempted to build a cyclotron in Italy in 1942, profiting from the World’s Fair that was to be held in Rome. Moreover, he was deeply affected by the exodus of talented Italian physicists after the war, including Bruno Rossi, Gian Carlo Wick and Giuseppe Cocconi. He saw CERN as a way to bring these scientists back and rebuild European physics.

How did Isidor Rabi’s involvement come into play?

In 1950 my father was corresponding with Gilberto Bernardini, who was spending a year at Columbia University. There Bernardini mentioned the idea of a European laboratory to Isidor Rabi, who, at the same time, was in contact with other prominent figures in this decentralised and multi-centred initiative. Together with Norman Ramsey, Rabi had previously succeeded, in 1947, in persuading nine northeastern US universities to collaborate under the banner of Associated Universities, Inc, which led to the establishment of Brookhaven National Laboratory.

What is not generally known is that before Rabi gave his famous speech at the fifth assembly of UNESCO in Florence in June 1950, he came to Rome and met with my father. They discussed how to bring this idea to fruition. A few days later, Rabi’s resolution at the UNESCO meeting calling for regional research facilities was a crucial step in launching the project. Rabi considered CERN a peaceful compensation for the fact that physicists had built the nuclear bomb.

How did your father and his colleagues proceed after the UNESCO resolution?

Following the UNESCO meeting, Pierre Auger, at that time director of exact and natural sciences at UNESCO, and my father took on the task of advancing the project. In September 1950 Auger spoke of it at a nuclear physics conference in Oxford, and at a meeting of the International Union of Pure and Applied Physics (IUPAP), my father – one of the vice presidents – urged the executive committee to consider how best to implement the Florence resolution. In May 1951, Auger and my father organised a meeting of experts at UNESCO headquarters in Paris, where a compelling justification for the European project was drafted.

The cost of such an endeavour was beyond the means of any single nation. This led to an intergovernmental conference under the auspices of UNESCO in December 1951, where the foundations for CERN were laid. Funding, totalling $10,000 for the initial meetings of the board of experts, came from Italy, France and Belgium. This was thanks to the financial support of men like Gustavo Colonnetti, president of the Italian Research Council, who had already – a year before – donated the first funds to UNESCO.

Were there any significant challenges during this period?

Not everyone readily accepted the idea of a European laboratory. Eminent physicists like Niels Bohr, James Chadwick and Hendrik Kramers questioned the practicality of starting a new laboratory from scratch. They were concerned about the feasibility and allocation of resources, and preferred the coordination of many national laboratories and institutions. Through skilful negotiation and compromise, Auger and my father incorporated some of the concerns raised by the sceptics into a modified version of the project, ensuring broader support. In February 1952 the first agreement setting up a provisional council for CERN was written and signed, and my father was nominated secretary general of the provisional CERN.

Enrico and Giulio Fermi, Ginestra Amaldi, Laura Fermi, Edoardo and Ugo Amaldi

He worked tirelessly, travelling through Europe to unite the member states and start the laboratory’s construction. In particular, the UK was reluctant to participate fully. They had their own advanced facilities, like the 40 MeV cyclotron at the University of Liverpool. In December 1952 my father visited John Cockcroft, at the time director of the Harwell Atomic Energy Research Establishment, to discuss this. There’s an interesting episode where my father, with Cockcroft, met Frederick Lindemann, Baron Cherwell, a long-time scientific advisor to Winston Churchill. Cherwell dismissed CERN as another “European paper mill.” My father, usually composed, lost his temper and passionately defended the project. During the following visit to Harwell, Cockcroft reassured him that his reaction was appropriate. From that point on, the UK contributed to CERN, albeit initially as a series of donations rather than as the result of a formal commitment. It may be interesting to add that, during the same visit to London and Harwell, my father met the young John Adams and was so impressed that he immediately offered him a position at CERN.

What were the steps following the ratification of CERN’s convention?

Robert Valeur, chairman of the council during the interim period, and Ben Lockspeiser, chairman of the interim finance committee, used their authority to stir up early initiatives and create an atmosphere of confidence that attracted scientists from all over Europe. As Lew Kowarski noted, there was a sense of “moral commitment” to leave secure positions at home and embark on this new scientific endeavour.

During the interim period from May 1952 to September 1954, the council convened three sessions in Geneva whose primary focus was financial management. The organisation began with an initial endowment of approximately 1 million Swiss Francs, which – as I said – included a contribution from the UK known as the “observer’s gift”. At each subsequent session, the council increased its funding, reaching around 3.7 million Swiss Francs by the end of this period. When the permanent organisation was established, an initial sum of 4.1 million Swiss Francs was made available.

Giuseppe Fidecaro, Edoardo Amaldi and Werner Heisenberg at CERN in 1960

In 1954, my father was worried that if the parliaments didn’t approve the convention before winter, then construction would be delayed because of the wintertime. So he took a bold step and, with the approval of the council president, authorised the start of construction on the main site before the convention was fully ratified.

This led to Lockspeiser jokingly remarking later that council “has now to keep Amaldi out of jail”. The provisional council, set up in 1952, was dissolved when the European Organization for Nuclear Research officially came into being in 1954, though the acronym CERN (Conseil Européen pour la Recherche Nucléaire) was retained. By the conclusion of the interim period, CERN had grown significantly. A critical moment occurred on 29 September  1954, when a specific point in the ratification procedure was reached, rendering all assets temporarily ownerless. During this eight-day period, my father, serving as secretary general, was the sole owner on behalf of the newly forming permanent organisation. The interim phase concluded with the first meeting of the permanent council, marking the end of CERN’s formative years.

Did your father ever consider becoming CERN’s Director-General?

People asked him to be Director-General, but he declined for two reasons. First, he wanted to return to his students and his cosmic-ray research in Rome. Second, he didn’t want people to think he had done all this to secure a prominent position. He believed in the project for its own sake.

When the convention was finally ratified in 1954, the council offered the position of Director-General to Felix Bloch, a Swiss–American physicist and Nobel Prize winner for his work on nuclear magnetic resonance. Bloch accepted but insisted that my father serve as his deputy. My father, dedicated to CERN’s success, agreed to this despite his desire to return to Rome full time.

How did that arrangement work out?

My father agreed but Bloch wasn’t at that time rooted in Europe. He insisted on bringing all his instruments from Stanford so he could continue his research on nuclear magnetic resonance at CERN. He found it difficult to adapt to the demands of leading CERN and soon resigned. The council then elected Cornelis Jan Bakker, a Dutch physicist who had led the synchrocyclotron group, as the new Director-General. From the beginning, he was the person my father thought would have been the ideal director for the initial phase of CERN. Tragically though, Bakker died in a plane crash a year and a half later. I well remember how hard my father was hit by this loss.

How did the development of accelerators at CERN progress?

The decision to adopt the strong focusing principle for the Proton Synchrotron (PS) was a pivotal moment. In August 1952 Otto Dahl, leader of the Proton Synchrotron study group, Frank Goward and Rolf Widerøe visited Brookhaven just as Ernest Courant, Stanley Livingston and Hartland Snyder were developing this new principle. They were so excited by this development that they returned to CERN determined to incorporate it into the PS design. In 1953 Mervyn Hine, a long-time friend of John Adams with whom he had moved to CERN, studied potential issues with misalignment in strong focusing magnets, which led to further refinements in the design. Ultimately, the PS became operational before the comparable accelerator at Brookhaven, marking a significant achievement for European science.

Edoardo Amaldi and Victor Weisskopf in 1974

It’s important here to recognise the crucial contributions of the engineers, who often don’t receive the same level of recognition as physicists. They are the ones who make the work of experimental physicists and theorists possible. “Viki” Weisskopf, Director-General of CERN from 1961 to 1965, compared the situation to the discovery of America. The machine builders are the captains and shipbuilders. The experimentalists are those fellows on the ships who sailed to the other side of the world and wrote down what they saw. The theoretical physicists are those who stayed behind in Madrid and told Columbus that he was going to land in India.

Your father also had a profound impact on the development of other Big Science organisations in Europe

Yes, in 1958 my father was instrumental, together with Pierre Auger, in the founding of the European Space Agency. In a letter written in 1958 to his friend Luigi Crocco, who was professor of jet propulsion in Princeton, he wrote that “it is now very much evident that this problem is not at the level of the single states like Italy, but mainly at the continental level. Therefore, if such an endeavour is to be pursued, it must be done on a European scale, as already done for the building of the large accelerators for which CERN was created… I think it is absolutely imperative for the future organisation to be neither military nor linked to any military organisation. It must be a purely scientific organisation, open – like CERN – to all forms of cooperation and outside the participating countries.” This document reflects my father’s vision of peaceful and non-military European science.

How is it possible for one person to contribute so profoundly to science and global collaboration?

My father’s ability to accept defeats and keep pushing forward was key to his success. He was an exceptional person with a clear vision and unwavering dedication. I hope that by sharing these stories, others might be inspired to pursue their goals with the same persistence and passion.

Could we argue that he was not only a visionary but also a relentless advocate?

He travelled extensively, talked to countless people, and was always cheerful and energetic. He accepted setbacks but kept moving forwards. In this connection, I want to mention Eliane Bertrand, later de Modzelewska, his secretary in Rome who later became secretary of the CERN Council for about 20 years, serving under several Director-Generals. She left a memoir about those early days, highlighting how my father was always travelling, talking and never stopping. It’s a valuable piece of history that, I think, should be published.

Eliane de Modzelewska

International collaboration has been a recurring theme in your own career. How do you view its importance today?

International collaboration is more critical than ever in today’s world. Science has always been a bridge between cultures and nations, and CERN’s history is a testimony of what this brings to humanity. It transcends political differences and fosters mutual understanding. I hope CERN and the broader scientific community will find ways to maintain these vital connections with all countries. I’ve always believed that fostering a collaborative and inclusive environment is one of the main goals of us scientists. It’s not just about achieving results but also about how we work together and support each other along the way.

Looking ahead, what are your thoughts on the future of CERN and particle physics?

I firmly believe that pursuing higher collision energies is essential. While the Large Hadron Collider has achieved remarkable successes, there’s still much we haven’t uncovered – especially regarding supersymmetry. Even though minimal supersymmetry does not apply, I remain convinced that supersymmetry might manifest in ways we haven’t yet understood. Exploring higher energies could reveal supersymmetric particles or other new phenomena.

Like most European physicists, I support the initiative of the Future Circular Collider and starting with an electron–positron collider phase so as to explore new frontiers at two very different energy levels. However, if geopolitical shifts delay or complicate these plans, we should consider pushing hard on alternative strategies like developing the technologies for muon colliders.

Ugo Amaldi first arrived at CERN as a fellow in September 1961. Then, for 10 years at the ISS in Rome, he opened two new lines of research: quasi-free electron scattering on nuclei and atoms. Back at CERN, he developed the Roman pots experimental technique, was a co-discoverer of the rise of the proton–proton cross-section with energy, measured the polarisation of muons produced by neutrinos, proposed the concept of a superconducting electron–positron linear collider, and led LEP’s DELPHI Collaboration. Today, he advances the use of accelerators in cancer treatment as the founder of the TERA Foundation for hadron therapy and as president emeritus of the National Centre for Oncological Hadrontherapy (CNAO) in Pavia. He continues his mother and father’s legacy of authoring high-school physics textbooks used by millions of Italian pupils. His motto is: “Physics is beautiful and useful.”

This interview first appeared in the newsletter of CERN’s experimental physics department. It has been edited for concision.

The post Edoardo Amaldi and the birth of Big Science appeared first on CERN Courier.

]]>
Feature In an interview drawing on memories from childhood and throughout his own distinguished career at CERN, Ugo Amaldi offers deeply personal insights into his father Edoardo’s foundational contributions to international cooperation in science. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_AMALDI_Ugo_feature.jpg
Isospin symmetry broken more than expected https://cerncourier.com/a/isospin-symmetry-broken-more-than-expected/ Mon, 24 Mar 2025 08:42:50 +0000 https://cerncourier.com/?p=112578 The NA61/SHINE collaboration have observed a strikingly large imbalance between charged and neutral kaons in argon–scandium collisions.

The post Isospin symmetry broken more than expected appeared first on CERN Courier.

]]>
In the autumn of 2023, Wojciech Brylinski was analysing data from the NA61/SHINE collaboration at CERN for his thesis, when he noticed an unexpected anomaly – a strikingly large imbalance between charged and neutral kaons in argon–scandium collisions. Instead of producing roughly equal numbers, he found that charged kaons were produced 18.4% more often. This suggested that the “isospin symmetry” between up (u) and down (d) quarks might be broken by more than expected due to the differences in their electric charges and masses – a discrepancy that existing theoretical models would struggle to explain. Known sources of isospin asymmetry only predict deviations of a few percent.

“When Wojciech got started, we thought it would be a trivial verification of the symmetry,” says Marek Gaździcki of Jan Kochanowski University of Kielce, spokesperson of NA61/SHINE at the time of the discovery. “We expected it to be closely obeyed – though we had previously measured discrepancies at NA49, they had large uncertainties and were not significant.”

Isospin symmetry is one facet of flavour symmetry, whereby the strong interaction treats all quark flavours identically, except for kinematic differences arising from their different masses. Strong interactions should therefore generate nearly equal yields of charged K+ (us̄) and K− (ūs), and neutral K0 (ds̄) and K̄0 (d̄s), given the similar masses of the two lightest quarks. NA61/SHINE’s data contradict the hypothesis of equal yields with 4.7σ significance.
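
Written as a yield ratio, the statement being tested is sketched below; exact isospin symmetry would give a value close to one, whereas the reported 18.4% excess of charged kaons corresponds to roughly 1.18 (a schematic definition – the published analysis specifies its observable and corrections in detail):

R_K = \frac{\langle K^{+}\rangle + \langle K^{-}\rangle}{\langle K^{0}\rangle + \langle \bar{K}^{0}\rangle}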

“I see two options to interpret the results,” says Francesco Giacosa, a theoretical physicist at Jan Kochanowski University working with NA61/SHINE. “First, we substantially underestimate the role of electromagnetic interactions in creating quark–antiquark pairs. Second, strong interactions do not obey flavour symmetry – if so, this would falsify QCD.” Isospin is not a symmetry of the electromagnetic interaction as up and down quarks have different electric charges.

While the experiment routinely measures particle yields in nuclear collisions, finding a discrepancy in isospin symmetry was not something researchers were actively looking for. NA61/SHINE’s primary focus is studying the phase diagram of high-energy nuclear collisions using a range of ion beams. This includes looking at the onset of deconfinement, the formation of a quark-gluon plasma fireball, and the search for the hypothesised QCD critical point where the transition between hadronic matter and quark–gluon plasma changes from a smooth crossover to a first-order phase transition. Data is also shared with neutrino and cosmic-ray experiments to help refine their models.

The collaboration is now planning additional studies using different projectiles, targets and collision energies to determine whether this effect is unique to certain heavy-ion collisions or a more general feature of high-energy interactions. They have also put out a call to theorists to help explain what might have caused such an unexpectedly large asymmetry.

“The observation of the rather large isospin violation stands in sharp contrast to its validity in a wide range of physical systems,” says Rob Pisarski, a theoretical physicist from Brookhaven National Laboratory. “Any explanation must be special to heavy-ion systems at moderate energy. NA61/SHINE’s discrepancy is clearly significant, and shows that QCD still has the power to surprise our naive expectations.”

The post Isospin symmetry broken more than expected appeared first on CERN Courier.

]]>
News The NA61/SHINE collaboration have observed a strikingly large imbalance between charged and neutral kaons in argon–scandium collisions. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_NA_NA61.jpg
Cosmogenic candidate lights up KM3NeT https://cerncourier.com/a/cosmogenic-candidate-lights-up-km3net/ Mon, 24 Mar 2025 08:40:44 +0000 https://cerncourier.com/?p=112563 Strings of photodetectors anchored to the seabed off the coast of Sicily have detected the most energetic neutrino ever observed, smashing previous records.

The post Cosmogenic candidate lights up KM3NeT appeared first on CERN Courier.

]]>
Muon neutrino

On 13 February 2023, strings of photodetectors anchored to the seabed off the coast of Sicily detected the most energetic neutrino ever observed, smashing previous records. The result was embargoed until the publication of a paper in Nature last month; the KM3NeT collaboration believes the observation may have originated in a novel cosmic accelerator, or may even be the first detection of a “cosmogenic” neutrino.

“This event certainly comes as a surprise,” says KM3NeT spokesperson Paul de Jong (Nikhef). “Our measurement, converted into a flux, exceeds the limits set by IceCube and the Pierre Auger Observatory. If it is a statistical fluctuation, it would correspond to an upward fluctuation at the 2.2σ level. That is unlikely, but not impossible.” With an estimated energy of a remarkable 220 PeV, the neutrino observed by KM3NeT surpasses IceCube’s record by almost a factor of 30.

The existence of ultra-high-energy cosmic neutrinos has been theorised since the 1960s, when astrophysicists began to conceive ways that extreme astrophysical environments could generate particles with very high energies. At about the same time, Arno Penzias and Robert Wilson discovered “cosmic microwave background” (CMB) photons emitted in the era of recombination, when the primordial plasma cooled down and the universe became electrically neutral. Cosmogenic neutrinos were soon hypothesised to result from ultra-high-energy cosmic rays interacting with the CMB. They are expected to have energies above 100 PeV (10¹⁷ eV); however, their abundance is uncertain, as it depends on the cosmic rays themselves, whose sources are still cloaked in intrigue (CERN Courier July/August 2024 p24).

A window to extreme events

But how might they be detected? In this regard, neutrinos present a dichotomy: though outnumbered in the cosmos only by photons, they are notoriously elusive. However, it is precisely their weakly interacting nature that makes them ideal for investigating the most extreme regions of the universe. Cosmic neutrinos travel vast cosmic distances without being scattered or absorbed, providing a direct window into their origins, and enabling scientists to study phenomena such as black-hole jets and neutron-star mergers. Such extreme astrophysical sources test the limits of the Standard Model at energy scales many times higher than is possible in terrestrial particle accelerators.

Because they are so weakly interacting, studying cosmic neutrinos requires giant detectors. Today, three large-scale neutrino telescopes are in operation: IceCube, in Antarctica; KM3NeT, under construction deep in the Mediterranean Sea; and Baikal–GVD, under construction in Lake Baikal in southern Siberia. So far, IceCube, whose construction was completed over 10 years ago, has enabled significant advancements in cosmic-neutrino physics, including the first observation of the Glashow resonance, wherein a 6 PeV electron antineutrino interacts with an electron in the ice sheet to form an on-shell W boson, and the discovery of neutrinos emitted by “active galaxies” powered by a supermassive black hole accreting matter. The previous record-holder for the highest recorded neutrino energy, IceCube has also searched for cosmogenic neutrinos but has not yet observed neutrino candidates above 10 PeV.

Its new northern-hemisphere colleague, KM3NeT, consists of two subdetectors: ORCA, designed to study neutrino properties, and ARCA, which made this detection, designed to detect high-energy cosmic neutrinos and find their astronomical counterparts. Its deep-sea arrays of optical sensors detect Cherenkov light emitted by charged particles created when a neutrino interacts with a quark or electron in the water. At the time of the 2023 event, ARCA comprised 21 vertical detection units, each around 700 m in length. Its location 3.5 km deep under the sea reduces background noise, and its sparse layout over one cubic kilometre optimises the detector for neutrinos of higher energies.

The event that KM3NeT observed in 2023 is thought to be a single muon created by the charged-current interaction of an ultra-high-energy muon neutrino. The muon then crossed horizontally through the entire ARCA detector, emitting Cherenkov light that was picked up by a third of its active sensors. “If it entered the sea as a muon, it would have travelled some 300 km water-equivalent in water or rock, which is impossible,” explains de Jong. “It is most likely the result of a muon neutrino interacting with sea water some distance from the detector.”

The network will improve the chances of detecting new neutrino sources

The best estimate for the neutrino energy of 220 PeV hides substantial uncertainties, given the unknown interaction point and the need to correct for an undetected hadronic shower. The collaboration expects the true value to lie between 110 and 790 PeV with 68% confidence. “The neutrino energy spectrum is steeply falling, so there is a tug-of-war between two effects,” explains de Jong. “Low-energy neutrinos must give a relatively large fraction of their energy to the muon and interact close to the detector, but they are numerous; high-energy neutrinos can interact further away, and give a smaller fraction of their energy to the muon, but they are rare.”

More data is needed to understand the sources of ultra-high-energy neutrinos such as that observed by KM3NeT, where construction has continued in the two years since this remarkable early detection. So far, 33 of 230 ARCA detection units and 24 of 115 ORCA detection units have been installed. Once construction is complete, likely by the end of the decade, KM3NeT will be similar in size to IceCube.

“Once KM3NeT and Baikal–GVD are fully constructed, we will have three large-scale neutrino telescopes of about the same size in operation around the world,” adds Mauricio Bustamante, theoretical astroparticle physicist at the Niels Bohr Institute of the University of Copenhagen. “This expanded network will monitor the full sky with nearly equal sensitivity in any direction, improving the chances of detecting new neutrino sources, including faint ones in new regions of the sky.”

The post Cosmogenic candidate lights up KM3NeT appeared first on CERN Courier.

]]>
News Strings of photodetectors anchored to the seabed off the coast of Sicily have detected the most energetic neutrino ever observed, smashing previous records. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_NA_KM3NeT_feature.jpg
CERN gears up for tighter focusing https://cerncourier.com/a/cern-gears-up-for-tighter-focusing/ Mon, 24 Mar 2025 08:38:19 +0000 https://cerncourier.com/?p=112574 New quadrupole magnets for the High-Luminosity LHC will use Nb3Sn conductors for the first time in an accelerator.

The post CERN gears up for tighter focusing appeared first on CERN Courier.

]]>
When it comes online in 2030, the High-Luminosity LHC (HL-LHC) will feel like a new collider. The hearts of the ATLAS and CMS detectors, and 1.2 km of the 27 km-long Large Hadron Collider (LHC) ring will have been transplanted with cutting-edge technologies that will push searches for new physics into uncharted territory.

On the accelerator side, one of the most impactful upgrades will be the brand-new final focusing systems just before the proton or ion beams arrive at the interaction points. In the new “inner triplets”, particles will slalom in a more focused and compacted way than ever before towards collisions inside the detectors.

To achieve the required focusing strength, the new quadrupole magnets will use Nb3Sn conductors for the first time in an accelerator. Nb3Sn will allow fields as high as 11.5 T, compared to 8.5 T for the conventional NbTi bending magnets used elsewhere in the LHC. As they are a new technology, an integrated test stand of the full 60 m-long inner-triplet assembly is essential – and work is now in full swing.

Learning opportunity

“The main challenge at this stage is the interconnections between the magnets, particularly the interfaces between the magnets and the cryogenic line,” explains Marta Bajko, who leads work on the inner-triplet-string test facility. “During this process, we have encountered nonconformities, out-of-tolerance components, and other difficulties – expected challenges given that these connections are being made for the first time. This phase is a learning opportunity for everyone involved, allowing us to refine the installation process.”

The last magnet – one of two built in the US – is expected to be installed in May. Before then, the so-called N lines, which enable the electrical connections between the different magnets, will be pulled through the entire magnet chain to prepare for splicing the cables together. Individual system tests and short-circuit tests have already been successfully performed and a novel alignment system developed for the HL-LHC is being installed on each magnet. Mechanical transfer function measurements of some magnets are ongoing, while electrical integrity tests in a helium environment have been successfully completed, along with the pressure and leak test of the superconducting link.

“Training the teams is at the core of our focus, as this setup provides the most comprehensive and realistic mock-up before the installations are to be done in the tunnel,” says Bajko. “The surface installation, located in a closed and easily accessible building near the teams’ workshops and laboratories, offers an invaluable opportunity for them to learn how to perform their tasks effectively. This training often takes place alongside other teams, under real installation constraints, allowing them to gain hands-on experience in a controlled yet authentic environment.”

The inner triplet string is composed of a separation and recombination dipole, a corrector-package assembly and a quadrupole triplet. The dipole combines the two counter-rotating beams into a single channel; the corrector package fine-tunes beam parameters; and the quadrupole triplet focuses the beam onto the interaction point.

Quadrupole triplets have been a staple of accelerator physics since they were first implemented in the early 1950s at synchrotrons such as the Brookhaven Cosmotron and CERN’s Proton Synchrotron. Quadrupole magnets are like lenses that are convex (focusing) in one transverse plane and concave (defocusing) in the other, transporting charged particles like beams of light on an optician’s bench. In a quadrupole triplet, the focusing plane alternates with each quadrupole magnet. The effect is to precisely focus the particle beams onto tight spots within the LHC experiments, maximising the number of particles that interact, and increasing the statistical power available to experimental analyses.
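To see why an alternating-gradient triplet squeezes the beam in both transverse planes at once, the thin-lens matrices of textbook beam optics are enough. The short sketch below is purely illustrative: the focal lengths, spacing and Python formulation are assumptions chosen for the example, not the HL-LHC inner-triplet optics.

```python
# Minimal thin-lens sketch of alternating-gradient (triplet) focusing.
# Focal lengths and drift lengths are made-up illustrative numbers, not HL-LHC values.
import numpy as np

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    # f > 0 focuses in this transverse plane, f < 0 defocuses;
    # the same magnet has the opposite sign of f in the other plane.
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

f_outer, f_centre, L = 50.0, 25.0, 10.0  # metres (hypothetical values)

def triplet(sign):
    # Rightmost matrix acts first: outer quad, drift, central quad, drift, outer quad
    return (thin_quad(sign * f_outer) @ drift(L) @ thin_quad(-sign * f_centre)
            @ drift(L) @ thin_quad(sign * f_outer))

for plane, sign in [("horizontal", +1), ("vertical", -1)]:
    M = triplet(sign)
    # M[1, 0] equals -1/f_effective, so a negative value means net focusing
    print(f"{plane}: 1/f_eff = {-M[1, 0]:.4f} per metre (positive means net focusing)")
```

With these toy numbers both planes come out with a positive effective focusing strength, which is the essence of how a triplet delivers a tight spot at the interaction point.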

Nb3Sn is strategically important because it lays the foundation for future high-energy colliders

Though quadrupole triplets are a time-honoured technique, Nb3Sn brings new challenges. The HL-LHC magnets are the first Nb3Sn accelerator magnets to be built at lengths of up to 7 m, and the technical teams at CERN and in the US collaboration – each of which is responsible for half the total “cold mass” production – have decided to produce two variants, primarily driven by differences in available production and testing infrastructure.

Since 2011, engineers and accelerator physicists have been hard at work designing and testing the new magnets and their associated powering, vacuum, alignment, cryogenic, cooling and protection systems. Each component of the HL-LHC will be individually tested before installation in the LHC tunnel. However, this is only half the story, as all components must be integrated and operated together within the machine, where they will share a common electrical and cooling circuit. Throughout the rest of 2025, the inner-triplet string will test the integration of all these components, evaluating them in terms of their collective behaviour, in preparation for hardware commissioning and nominal operation.

“We aim to replicate the operational processes of the inner-triplet string using the same tools planned for the HL-LHC machine,” says Bajko. “The control systems and software packages are in an advanced stage of development, prepared through extensive collaboration across CERN, involving three departments and nine equipment groups. The inner-triplet-string team is coordinating these efforts and testing them as if operating from the control room – launching tests in short-circuit mode and verifying system performance to provide feedback to the technical teams and software developers. The test programme has been integrated into a sequencer, and testing procedures are being approved by the relevant stakeholders.”

Return on investment

While Nb3Sn offers significant advantages over NbTi, manufacturing magnets with it presents several challenges. It requires high-temperature heat treatment after winding, and is brittle and fragile, making it more difficult to handle than the ductile NbTi. As the HL-LHC Nb3Sn magnets operate at higher current and energy densities, quench protection is more challenging, and the possibility of a sudden loss of superconductivity requires a faster and more robust protection system.

The R&D required to meet these challenges will provide returns long into the future, says Susana Izquierdo Bermudez, who is responsible at CERN for the new HL-LHC magnets.

“CERN’s investment in R&D for Nb3Sn is strategically important because it lays the foundation for future high-energy colliders. Its increased field strength is crucial for enabling more powerful focusing and bending magnets, allowing for higher beam energies and more compact accelerator designs. This R&D also strengthens CERN’s expertise in advanced superconducting materials and technology, benefitting applications in medical imaging, energy systems and industrial technologies.”

The inner-triplet string will remain an installation on the surface at CERN and is expected to operate until early 2027. Four identical assemblies will be installed underground in the LHC tunnel from 2028 to 2029, during Long Shutdown 3. They will be located 20 m away on either side of the ATLAS and CMS interaction points.

The post CERN gears up for tighter focusing appeared first on CERN Courier.

]]>
News New quadrupole magnets for the High-Luminosity LHC will use Nb3Sn conductors for the first time in an accelerator. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_NA_corrector.jpg
How to unfold with AI https://cerncourier.com/a/how-to-unfold-with-ai/ Mon, 27 Jan 2025 08:00:50 +0000 https://cerncourier.com/?p=112161 Inspired by high-dimensional data and the ideals of open science, high-energy physicists are using artificial intelligence to reimagine the statistical technique of ‘unfolding’.

The post How to unfold with AI appeared first on CERN Courier.

]]>
Open-science unfolding

All scientific measurements are affected by the limitations of measuring devices. To make a fair comparison between data and a scientific hypothesis, theoretical predictions must typically be smeared to approximate the known distortions of the detector. Data is then compared with theory at the level of the detector’s response. This works well for targeted measurements, but the detector simulation must be reapplied to the underlying physics model for every new hypothesis.

The alternative is to try to remove detector distortions from the data, and compare with theoretical predictions at the level of the theory. Once detector effects have been “unfolded” from the data, analysts can test any number of hypotheses without having to resimulate or re-estimate detector effects – a huge advantage for open science and data preservation that allows comparisons between datasets from different detectors. Physicists without access to the smearing functions can only use unfolded data.

No simple task

But unfolding detector distortions is no simple task. If the mathematical problem is solved through a straightforward inversion, using linear algebra, noisy fluctuations are amplified, resulting in large uncertainties. Some sort of “regularisation” must be imposed to smooth the fluctuations, but algorithms vary substantially and none is preeminent. Their scope has also remained limited for decades: no traditional algorithm is capable of reliably unfolding detector distortions from data in more than a few observables at a time.

In the past few years, a new technique has emerged. Rather than unfolding detector effects from only one or two observables, it can unfold detector effects from multiple observables in a high-dimensional space; and rather than unfolding detector effects from binned histograms, it unfolds detector effects from an unbinned distribution of events. This technique is inspired by both artificial-intelligence techniques and the uniquely sparse and high-dimensional data sets of the LHC.

An ill-posed problem

Unfolding is used in many fields. Astronomers unfold point-spread functions to reveal true sky distributions. Medical physicists unfold detector distortions from CT and MRI scans. Geophysicists use unfolding to infer the Earth’s internal structure from seismic-wave data. Economists attempt to unfold the true distribution of opinions from incomplete survey samples. Engineers use deconvolution methods for noise reduction in signal processing. But in recent decades, no field has had a greater need to innovate unfolding techniques than high-energy physics, given its complex detectors, sparse datasets and stringent standards for statistical rigour.

In traditional unfolding algorithms, analysers first choose which quantity they are interested in measuring. An event generator then creates a histogram of the true values of this observable for a large sample of events in their detector. Next, a Monte Carlo simulation simulates the detector response, accounting for noise, background modelling, acceptance effects, reconstruction errors, misidentification errors and energy smearing. A matrix is constructed that transforms the histogram of the true values of the observable into the histogram of detector-level events. Finally, analysts “invert” the matrix and apply it to data, to unfold detector effects from the measurement.
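The toy sketch below illustrates why this final inversion step is so delicate: a smooth spectrum is folded through an assumed Gaussian-migration response matrix, Poisson noise is added, and the matrix is inverted naively. Every number is invented for illustration and corresponds to no real detector.

```python
# Toy unfolding-by-inversion example showing noise amplification.
import numpy as np

rng = np.random.default_rng(1)
n_bins = 20
centres = np.arange(n_bins) + 0.5

# Smooth, steeply falling "true" histogram
truth = 1e4 * np.exp(-centres / 5.0)

# Response matrix with Gaussian bin-to-bin migration; columns are normalised so
# that every true event lands in some reconstructed bin
R = np.array([[np.exp(-0.5 * ((i - j) / 1.5) ** 2) for j in range(n_bins)]
              for i in range(n_bins)])
R /= R.sum(axis=0, keepdims=True)

# Detector-level expectation with Poisson-fluctuated "measured" counts
measured = rng.poisson(R @ truth)

# Naive unfolding: invert the response matrix directly
unfolded = np.linalg.solve(R, measured.astype(float))

print("truth    (last bins):", np.round(truth[-5:], 1))
print("unfolded (last bins):", np.round(unfolded[-5:], 1))
# The unfolded bins typically oscillate wildly, even going negative -
# statistical noise has been amplified by the ill-conditioned inversion.
```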

How to unfold traditionally

Diverse algorithms have been invented to unfold distortions from data, with none yet achieving preeminence.

• Developed by Soviet mathematician Andrey Tikhonov in the late 1940s, Tikhonov regularisation (TR) frames unfolding as a minimisation problem with a penalty term added to suppress fluctuations in the solution.

• In the 1950s, statistical mechanic Edwin Jaynes took inspiration from information theory to seek solutions with maximum entropy, seeking to minimise bias beyond the data constraints.

• Between the 1960s and the 1990s, high-energy physicists increasingly drew on the linear algebra of 19th-century mathematicians Eugenio Beltrami and Camille Jordan to develop singular value decomposition as a pragmatic way to suppress noisy fluctuations.

• In the 1990s, Giulio D’Agostini and other high-energy physicists developed iterative Bayesian unfolding (IBU) – a similar technique to Lucy–Richardson deconvolution, which was developed independently in astronomy in the 1970s. An explicitly probabilistic approach well suited to complex detectors, IBU may be considered a forerunner of the neural-network-based technique described in this article.

IBU and TR are the most widely-used approaches in high-energy physics today, with the RooUnfold tool started by Tim Adye serving countless analysts.
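As a concrete illustration of the iterative Bayesian idea, the sketch below implements its core update step, reusing the toy response matrix R and measured spectrum from the previous example. It is a bare-bones caricature that ignores efficiency, acceptance and background corrections, and it is not the RooUnfold implementation.

```python
# Minimal sketch of iterative Bayesian (D'Agostini-style) unfolding.
import numpy as np

def iterative_bayesian_unfold(R, measured, prior, n_iterations=4):
    """R[i, j] = P(reco bin i | true bin j); returns an estimated true spectrum."""
    estimate = prior.astype(float).copy()
    for _ in range(n_iterations):
        folded = R @ estimate                         # folded prediction of the current estimate
        posterior = (R * estimate) / folded[:, None]  # Bayes: P(true j | reco i)
        estimate = posterior.T @ measured             # redistribute measured counts to truth bins
    return estimate

# e.g. starting from a flat prior over the toy example above:
# unfolded_ibu = iterative_bayesian_unfold(R, measured, np.ones(len(measured)))
# Fewer iterations mean stronger regularisation (more bias, less variance).
```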

At this point in the analysis, the ill-posed nature of the problem presents a major challenge. A simple matrix inversion seldom suffices as statistical noise produces large changes in the estimated input. Several algorithms have been proposed to regularise these fluctuations. Each comes with caveats and constraints, and there is no consensus on a single method that outperforms the rest (see “How to unfold traditionally” panel).

While these approaches have been successfully applied to thousands of measurements at the LHC and beyond, they have limitations. Histogramming is an efficient way to describe the distributions of one or two observables, but the number of bins grows exponentially with the number of parameters, restricting the number of observables that can be simultaneously unfolded. When unfolding only a few observables, model dependence can creep in, for example due to acceptance effects, and if another scientist wants to change the bin sizes or measure a different observable, they will have to redo the entire process.

New possibilities

AI opens up new possibilities for unfolding particle-physics data. Choosing good parameterisations in a high-dimensional space is difficult for humans, and binning is a way to limit the number of degrees of freedom in the problem, making it more tractable. Machine learning (ML) offers flexibility due to the large number of parameters in a deep neural network. Dozens of observables can be unfolded at once, and unfolded datasets can be published as an unbinned collection of individual events that have been corrected for detector distortions as an ensemble.

Unfolding performance

One way to represent the result is as a set of simulated events with weights that encode information from the data. For example, if there are 10 times as many simulated events as real events, the average weight would be about 0.1, with the distribution of weights correcting the simulation to match reality, and errors on the weights reflecting the uncertainties inherent in the unfolding process. This approach gives maximum flexibility to future analysts, who can recombine the weighted events into any binning or combination they desire. The weights can be used to build histograms or compute statistics. The full covariance matrix can also be extracted from the weights, which is important for downstream fits.
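As a sketch of how a downstream analyst might consume such a result, the snippet below histograms a published set of weighted events with an arbitrary binning and propagates the per-bin statistical uncertainty from the sum of squared weights. The event sample and weights are invented stand-ins for the example.

```python
# Consuming an unbinned, unfolded result published as weighted simulated events.
import numpy as np

rng = np.random.default_rng(2)
events = rng.exponential(scale=3.0, size=100_000)   # some observable, toy values
weights = np.full_like(events, 0.1)                 # stand-in for unfolding weights

# Any binning can be chosen after publication
bin_edges = np.linspace(0.0, 15.0, 31)
hist, _ = np.histogram(events, bins=bin_edges, weights=weights)

# Per-bin statistical uncertainty from the sum of squared weights
err = np.sqrt(np.histogram(events, bins=bin_edges, weights=weights**2)[0])
print(hist[:3], err[:3])
```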

But how do we know the unfolded values are capturing the truth, and not just “hallucinations” from the AI model?

An important validation step for these analyses is to perform tests on synthetic data with a known answer. Analysts take new simulation models, different from the one being used for the primary analysis, and treat them as if they were real data. By unfolding these alternative simulations, researchers are able to compare their results to a known answer. If the biases are large, analysts will need to refine their methods to reduce the model-dependency. If the biases are small compared to the other uncertainties, then this remaining difference can be added into the total uncertainty estimate, which is calculated in the traditional way using hundreds of simulations. In unfolding problems, the choice of regularisation method and strength always involves some tradeoff between bias and variance.

Just as unfolding in two dimensions instead of one with traditional methods can reduce model dependence by incorporating more aspects of the detector response, ML methods use the same underlying principle to include as much of the detector response as possible. Learning differences between data and simulation in high-dimensional spaces is the kind of task that ML excels at, and the results are competitive with established methods (see “Better performance” figure).

Neural learning

In the past few years, AI techniques have proven to be useful in practice, yielding publications from the LHC experiments, the H1 experiment at HERA and the STAR experiment at RHIC. The key idea underpinning the strategies used in each of these results is to use neural networks to learn a function that can reweight simulated events to look like data. The neural network is given a list of relevant features about an event such as the masses, energies and momenta of reconstructed objects, and trained to output the probability that it is from a Monte Carlo simulation or the data itself. Neural connections that reweight and combine the inputs across multiple layers are iteratively adjusted depending on the network’s performance. The network thereby learns the relative densities of the simulation and data throughout phase space. The ratio of these densities is used to transform the simulated distribution into one that more closely resembles real events (see “OmniFold” figure).

Illustration of AI unfolding using the OmniFold algorithm
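The sketch below caricatures this classifier-based reweighting (often called the likelihood-ratio trick) on one-dimensional toy data: a small network is trained to separate “simulation” from “data”, and its output is converted into per-event weights. It is a minimal illustration of the underlying idea, not the OmniFold code used by the experiments.

```python
# Classifier-based reweighting on toy 1D data (illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
sim = rng.normal(0.0, 1.0, size=(50_000, 1))    # "simulation"
data = rng.normal(0.3, 1.1, size=(50_000, 1))   # "data" with a shifted, wider shape

X = np.vstack([sim, data])
y = np.concatenate([np.zeros(len(sim)), np.ones(len(data))])

clf = MLPClassifier(hidden_layer_sizes=(64, 64)).fit(X, y)

# The classifier output estimates p(data | x); p/(1-p) approximates the density
# ratio data/simulation, so these weights pull the simulation towards the data.
p = np.clip(clf.predict_proba(sim)[:, 1], 1e-6, 1 - 1e-6)
weights = p / (1.0 - p)
print("mean weight:", weights.mean())
```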

As this is a recently-developed technique, there are plenty of opportunities for new developments and improvements. These strategies are in principle capable of handling significant levels of background subtraction as well as acceptance and efficiency effects, but existing LHC measurements using AI-based unfolding generally have small backgrounds. And as with traditional methods, there is a risk in trying to estimate too many parameters from not enough data. This is typically controlled by stopping the training of the neural network early, combining multiple trainings into a single result, and performing cross validations on different subsets of the data.

Beyond the “OmniFold” methods we are developing, an active community is also working on alternative techniques, including ones based on generative AI. Researchers are also considering creative new ways to use these unfolded results that aren’t possible with traditional methods. One possibility in development is unfolding not just a selection of observables, but the full event. Another intriguing direction could be to generate new events with the corrections learnt by the network built-in. At present, the result of the unfolding is a reweighted set of simulated events, but once the neural network has been trained, its reweighting function could be used to simulate the unfolded sample from scratch, simplifying the output.

The post How to unfold with AI appeared first on CERN Courier.

]]>
Feature Inspired by high-dimensional data and the ideals of open science, high-energy physicists are using artificial intelligence to reimagine the statistical technique of ‘unfolding’. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_AI_feature.jpg
CERN and ESA: a decade of innovation https://cerncourier.com/a/cern-and-esa-a-decade-of-innovation/ Mon, 27 Jan 2025 07:59:01 +0000 https://cerncourier.com/?p=112108 Enrico Chesta, Véronique Ferlet-Cavrois and Markus Brugger highlight seven ways CERN and ESA are working together to further fundamental exploration and innovation in space technologies.

The post CERN and ESA: a decade of innovation appeared first on CERN Courier.

]]>
Sky maps

Particle accelerators and spacecraft both operate in harsh radiation environments, extreme temperatures and high vacuum. Each must process large amounts of data quickly and autonomously. Much can be gained from cooperation between scientists and engineers in each field.

Ten years ago, the European Space Agency (ESA) and CERN signed a bilateral cooperation agreement to share expertise and facilities. The goal was to expand the limits of human knowledge and keep Europe at the leading edge of progress, innovation and growth. A decade on, CERN and ESA have collaborated on projects ranging from cosmology and planetary exploration to Earth observation and human spaceflight, supporting new space-tech ventures and developing electronic systems, radiation-monitoring instruments and irradiation facilities.

1. Mapping the universe

The Euclid space telescope is exploring the dark universe by mapping the large-scale structure of billions of galaxies out to 10 billion light-years across more than a third of the sky. With tens of petabytes expected in its final data set – already a substantial reduction of the 850 billion bits of compressed images Euclid processes each day – it will generate more data than any other ESA mission by far.

With many CERN cosmologists involved in testing theories of beyond-the-Standard-Model physics, Euclid first became a CERN-recognised experiment in 2015. CERN also contributes to the development of Euclid’s “science ground segment” (SGS), which processes raw data received from the Euclid spacecraft into usable scientific products such as galaxy catalogues and dark-matter maps. CERN’s virtual-machine file system (CernVM-FS) has been integrated into the SGS to allow continuous software deployment across Euclid’s nine data centres and on developers’ laptops.

The telescope was launched in July 2023 and began observations in February 2024. The first piece of its great map of the universe was released in October 2024, showing millions of stars and galaxies from observations and covering 132 square degrees of the southern sky (see “Sky map” figure). Based on just two weeks of observations, it accounts for only 1% of the project’s six-year survey, which will be the largest cosmic map ever made.

Future CERN–ESA collaborations on cosmology, astrophysics and multimessenger astronomy are likely to include the Laser Interferometer Space Antenna (LISA) and the NewAthena X-ray observatory. LISA will be the first space-based observatory to study gravitational waves. NewAthena will study the most energetic phenomena in the universe. Both projects are expected to be ready to launch about 10 years from now.

2. Planetary exploration

Though planetary exploration is conceptually far from fundamental physics, its technical demands require similar expertise. A good example is the Jupiter Icy Moons Explorer (JUICE) mission, which will make detailed observations of the gas giant and its three large ocean-bearing moons Ganymede, Callisto and Europa.

Jupiter’s magnetosphere is a million times greater in volume than Earth’s, trapping large fluxes of highly energetic electrons and protons. Before JUICE, the direct and indirect impact of high-energy electrons on modern electronic devices, and in particular their ability to cause “single event effects”, had never been studied. Two test campaigns took place in the VESPER facility, which is part of the CERN Linear Electron Accelerator for Research (CLEAR) project. Components were tested with tuneable beam energies between 60 and 200 MeV, and average fluxes of roughly 10⁸ electrons per square centimetre per second, mirroring expected radiation levels in the Jovian system.

JUICE radiation-monitor measurements

JUICE was successfully launched in April 2023, starting an epic eight-year journey to Jupiter including several flyby manoeuvres that will be used to commission the onboard instruments (see “Flyby” figure). JUICE should reach Jupiter in July 2031. It remains to be seen whether test results obtained at CERN have successfully de-risked the mission.

Another interesting example of cooperation on planetary exploration is the Mars Sample Return mission, which must operate in low temperatures during eclipse phases. CERN supported the main industrial partner, Thales Alenia Space, in qualifying the orbiter’s thermal-protection systems in cryogenic conditions.

3. Earth observation

Earth observation from orbit has applications ranging from environmental monitoring to weather forecasting. CERN and ESA collaborate both on developing the advanced technologies required by these applications and ensuring they can operate in the harsh radiation environment of space.

In 2017 and 2018, ESA teams came to CERN’s North Area with several partner companies to test the performance of radiation monitors, field-programmable gate arrays (FPGAs) and electronics chips in ultra-high-energy ion beams at the Super Proton Synchrotron. The tests mimicked the ultra-high-energy part of the galactic cosmic-ray spectrum, whose effects had never previously been measured on the ground beyond 10 GeV/nucleon. In 2017, ESA’s standard radiation-environment monitor and several FPGAs and multiprocessor chips were tested with xenon ions. In 2018, the highlight of the campaign was the testing of Intel’s Myriad-2 artificial intelligence (AI) chip with lead ions (see “Space AI” figure). Following its radiation characterisation and qualification, in 2020 the chip embarked on the φ-sat-1 mission to autonomously detect clouds using images from a hyperspectral camera.

Myriad 2 chip testing

More recently, CERN joined Edge SpAIce – an EU project to monitor ecosystems onboard the Balkan-1 satellite and track plastic pollution in the oceans. The project will use CERN’s high-level synthesis for machine learning (hls4ml) AI technology to run inference models on an FPGA that will be launched in 2025.

Looking further ahead, ESA’s φ-lab and CERN’s Quantum Technology Initiative are sponsoring two PhD programmes to study the potential of quantum machine learning, generative models and time-series processing to advance Earth observation. Applications may accelerate the task of extracting features from images to monitor natural disasters, deforestation and the impact of environmental effects on the lifecycle of crops.

4. Dosimetry for human spaceflight

In space, nothing is more important than astronauts’ safety and wellbeing. To this end, in August 2021 ESA astronaut Thomas Pesquet activated the LUMINA experiment inside the International Space Station (ISS), as part of the ALPHA mission (see “Space dosimetry” figure). Developed under the coordination of the French Space Agency and the Laboratoire Hubert Curien at the Université Jean-Monnet-Saint-Étienne and iXblue, LUMINA uses two several-kilometre-long phosphorous-doped optical fibres as active dosimeters to measure ionising radiation aboard the ISS.

ESA astronaut Thomas Pesquet

When exposed to radiation, optical fibres experience a partial loss of transmitted power. Using a reference control channel, the radiation-induced attenuation can be accurately measured and related to the total ionising dose, with the sensitivity of the device primarily governed by the length of the fibre. Having studied optical-fibre-based technologies for many years, CERN helped optimise the architecture of the dosimeters and performed irradiation tests to calibrate the instrument, which will operate on the ISS for a period of up to five years.

LUMINA complements dosimetry measurements performed on the ISS using CERN’s Timepix technology – an offshoot of the hybrid-pixel-detector technology developed for the LHC experiments (CERN Courier September/October 2024 p37). Timepix dosimeters have been integrated in multiple NASA payloads since 2012.

5. Radiation-hardness assurance

It’s no mean feat to ensure that CERN’s accelerator infrastructure functions in increasingly challenging radiation environments. Similar challenges are found in space. Damage can be caused by accumulating ionising doses, single-event effects (SEEs) or so-called displacement damage dose, which dislodges atoms within a material’s crystal lattice rather than ionising them. Radiation-hardness assurance (RHA) reduces radiation-induced failures in space through environment simulations, part selection and testing, radiation-tolerant design, worst-case analysis and shielding definition.

Since its creation in 2008, CERN’s Radiation to Electronics project has amplified the work of many equipment and service groups in modelling, mitigating and testing the effect of radiation on electronics. A decade later, joint test campaigns with ESA demonstrated the value of CERN’s facilities and expertise to RHA for spaceflight. This led to the signing of a joint protocol on radiation environments, technologies and facilities in 2019, which also included radiation detectors and radiation-tolerant systems, and components and simulation tools.

CHARM facility

Among CERN’s facilities is CHARM: the CERN high-energy-accelerator mixed-field facility, which offers an innovative approach to low-cost RHA. CHARM’s radiation field is generated by the interaction between a 24 GeV/c beam from the Proton Synchrotron and a metallic target. CHARM offers a uniquely wide spectrum of radiation types and energies, the possibility to adjust the environment using mobile shielding, and enough space to test a medium-sized satellite in full operating conditions.

Radiation testing is particularly challenging for the new generation of rapidly developed and often privately funded “new space” projects, which frequently make use of commercial and off-the-shelf (COTS) components. Here, RHA relies on testing and mitigation rather than radiation hardening by design. For “flip chip” configurations, which have their active circuitry facing inward toward the substrate, and dense three-dimensional structures that cannot be directly exposed without compromising their performance, heavy-ion beams accelerated to between 10 and 100 MeV/nucleon are the only way to induce SEE in the sensitive semiconductor volumes of the devices.

To enable testing of highly integrated electronic components, ESA supported studies to develop the CHARM heavy ions for micro-electronics reliability-assurance facility – CHIMERA for short (see “CHIMERA” figure). ESA has sponsored key feasibility activities such as: tuning the ion flux in a large dynamic range; tuning the beam size for board-level testing; and reducing beam energy to maximise the frequency of SEE while maintaining a penetration depth of a few millimetres in silicon.

6. In-orbit demonstrators

Weighing 1 kg and measuring just 10 cm on each side – a nanosatellite standard – the CELESTA satellite was designed to study the effects of cosmic radiation on electronics (see “CubeSat” figure). Initiated in partnership with the University of Montpellier and ESA, and launched in July 2022, CELESTA was CERN’s first in-orbit technology demonstrator.

Radiation-testing model of the CELESTA satellite

As well as providing the first opportunity for CHARM to test a full satellite, CELESTA offered the opportunity to flight-qualify SpaceRadMon, which counts single-event upsets (SEUs) and single-event latchups (SELs) in static random-access memory while using a field-effect transistor for dose monitoring. (SEUs are temporary errors caused by a high-energy particle flipping a bit and SELs are short circuits induced by high-energy particles.) More than 30 students contributed to the mission development, partially in the frame of ESA’s Fly Your Satellite Programme. Built from COTS components calibrated in CHARM, SpaceRadMon has since been adopted by other ESA missions such as Trisat and GENA-OT, and could be used in the future as a low-cost predictive maintenance tool to reduce space debris and improve space sustainability.

The maiden flight of the Vega-C launcher placed CELESTA on an atypical quasi-circular medium-Earth orbit in the middle of the inner Van Allen proton belt at roughly 6000 km. Two months of flight data sufficed to validate the performance of the payload and the ground-testing procedure in CHARM, though CELESTA will fly for thousands of years in a region of space where debris is not a problem due to the harsh radiation environment.

The CELESTA approach has since been adopted by industrial partners to develop radiation-tolerant cameras, radios and on-board computers.

7. Stimulating the space economy

Space technology is a fast-growing industry replete with opportunities for public–private cooperation. The global space economy will be worth $1.8 trillion by 2035, according to the World Economic Forum – up from $630 billion in 2023 and growing at double the projected rate for global GDP.

Whether spun off from space exploration or particle physics, ESA and CERN look to support start-up companies and high-tech ventures in bringing to market technologies with positive societal and economic impacts (see “Spin offs” figure). The use of CERN’s Timepix technology in space missions is a prime example. Private company Advacam collaborated with the Czech Technical University to provide a Timepix-based radiation-monitoring payload called SATRAM to ESA’s Proba-V mission to map land cover and vegetation growth across the entire planet every two days.

The Hannover Messe fair

Advacam is now testing a pixel-detector instrument on JoeySat – an ESA-sponsored technology demonstrator for OneWeb’s next-generation constellation of satellites designed to expand global connectivity. Advacam is also working with ESA on radiation monitors for Space Rider and NASA’s Lunar Gateway. Space Rider is a reusable spacecraft whose maiden voyage is scheduled for the coming years, and Lunar Gateway is a planned space station in lunar orbit that could act as a staging post for Mars exploration.

Another promising example is SigmaLabs – a Polish startup founded by CERN alumni specialising in radiation detectors and predictive-maintenance R&D for space applications. SigmaLabs was recently selected by ESA and the Polish Space Agency to provide one of the experiments expected to fly on Axiom Mission 4 – a private spaceflight to the ISS in 2025 that will include Polish astronaut and CERN engineer Sławosz Uznański (CERN Courier May/June 2024 p55). The experiment will assess the scalability and versatility of the SpaceRadMon radiation-monitoring technology initially developed at CERN for the LHC and flight tested on the CELESTA CubeSat.

In radiation-hardness assurance, the CHIMERA facility is associated with the High-Energy Accelerators for Radiation Testing and Shielding (HEARTS) programme sponsored by the European Commission. Its 2024 pilot user run is already stimulating private innovation, with high-energy heavy ions used to perform business-critical research on electronic components for a dozen aerospace companies.

The post CERN and ESA: a decade of innovation appeared first on CERN Courier.

]]>
Feature Enrico Chesta, Véronique Ferlet-Cavrois and Markus Brugger highlight seven ways CERN and ESA are working together to further fundamental exploration and innovation in space technologies. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_CERNandESA_pesquet.jpg
A word with CERN’s next Director-General https://cerncourier.com/a/a-word-with-cerns-next-director-general/ Mon, 27 Jan 2025 07:56:07 +0000 https://cerncourier.com/?p=112181 Mark Thomson, CERN's Director General designate for 2025, talks to the Courier about the future of particle physics.

The post A word with CERN’s next Director-General appeared first on CERN Courier.

]]>
Mark Thomson

What motivates you to be CERN’s next Director-General?

CERN is an incredibly important organisation. I believe my deep passion for particle physics, coupled with the experience I have accumulated in recent years, including leading the Deep Underground Neutrino Experiment, DUNE, through a formative phase, and running the Science and Technology Facilities Council in the UK, has equipped me with the right skill set to lead CERN through a particularly important period.

How would you describe your management style?

That’s a good question. My overarching approach is built around delegating and trusting my team. This has two advantages. First, it builds an empowering culture, which in my experience provides the right environment for people to thrive. Second, it frees me up to focus on strategic planning and engagement with numerous key stakeholders. I like to focus on transparency and openness, to build trust both internally and externally.

How will you spend your familiarisation year before you take over in 2026?

First, by getting a deep understanding of CERN “from within”, to plan how I want to approach my mandate. Second, by lending my voice to the scientific discussion that will underpin the third update to the European strategy for particle physics. The European strategy process is a key opportunity for the particle-physics community to provide genuine bottom-up input and shape the future. This is going to be a really varied and exciting year.

What open question in fundamental physics would you most like to see answered in your lifetime?

I am going to have to pick two. I would really like to understand the nature of dark matter. There are a wide range of possibilities, and we are addressing this question from multiple angles; the search for dark matter is an area where the collider and non-collider experiments can both contribute enormously. The second question is the nature of the Higgs field. The Higgs boson is just so different from anything else we’ve ever seen. It’s not just unique – it’s unique and very strange. There are just so many deep questions, such as whether it is fundamental or composite. I am confident that we will make progress in the coming years. I believe the High-Luminosity LHC will be able to make meaningful measurements of the self-coupling at the heart of the Higgs potential. If you’d asked me five years ago whether this was possible, I would have been doubtful. But today I am very optimistic because of the rapid progress with advanced analysis techniques being developed by the brilliant scientists on the LHC experiments.

What areas of R&D are most in need of innovation to meet our science goals?

Artificial intelligence is changing how we look at data in all areas of science. Particle physics is the ideal testing ground for artificial intelligence, because our data is complex and there are none of the issues around the sensitive nature of the data that exist in other fields. Complex multidimensional datasets are where you’ll benefit the most from artificial intelligence. I’m also excited by the emergence of new quantum technologies, which will open up fresh opportunities for our detector systems and also new ways of doing experiments in fundamental physics. We’ve only scratched the surface of what can be achieved with entangled quantum systems.

How about in accelerator R&D?

There are two areas that I would like to highlight: making our current technologies more sustainable, and the development of high-field magnets based on high-temperature superconductivity. This connects to the question of innovation more broadly. To quote one example among many, high-temperature superconducting magnets are likely to be an important component of fusion reactors just as much as particle accelerators, making this a very exciting area where CERN can deploy its engineering expertise and really push that programme forward. That’s not just a benefit for particle physics, but a benefit for wider society.

How has CERN changed since you were a fellow back in 1994?

The biggest change is that the collider experiments are larger and more complex, and the scientific and technical skills required have become more specialised. When I first came to CERN, I worked on the OPAL experiment at LEP – a collaboration of less than 400 people. Everybody knew everybody, and it was relatively easy to understand the science of the whole experiment.

My overarching approach is built around delegating and trusting my team

But I don’t think the scientific culture of CERN and the particle-physics community has changed much. When I visit CERN and meet with the younger scientists, I see the same levels of excitement and enthusiasm. People are driven by the wonderful mission of discovery. When planning the future, we need to ensure that early-career researchers can see a clear way forward with opportunities in all periods of their career. This is essential for the long-term health of particle physics. Today we have an amazing machine that’s running beautifully: the LHC. I also don’t think it is possible to overstate the excitement of the High-Luminosity LHC. So there’s a clear and exciting future out to the early 2040s for today’s early-career researchers. The question is what happens beyond that? This is one reason to ensure that there is not a large gap between the end of the High-Luminosity LHC and the start of whatever comes next.

Should the world be aligning on a single project?

Given the increasing scale of investment, we do have to focus as a global community, but that doesn’t necessarily mean a single project. We saw something similar about 10 years ago when the global neutrino community decided to focus its efforts on two complementary long-baseline projects, DUNE and Hyper-Kamiokande. From the perspective of today’s European strategy, the Future Circular Collider (FCC) is an extremely appealing project that would map out an exciting future for CERN for many decades. I think we’ll see this come through strongly in an open and science-driven European strategy process.

How do you see the scientific case for the FCC?

For me, there are two key points. First, gaining a deep understanding of the Higgs boson is the natural next step in our field. We have discovered something truly unique, and we should now explore its properties to gain deeper insights into fundamental physics. Scientifically, the FCC provides everything you want from a Higgs factory, both in terms of luminosity and the opportunity to support multiple experiments.

Second, investment in the FCC tunnel will provide a route to hadron–hadron collisions at the 100 TeV scale. I find it difficult to foresee a future where we will not want this capability.

These two aspects make the FCC a very attractive proposition.

How successful do you believe particle physics is in communicating science and societal impacts to the public and to policymakers?

I think we communicate science well. After all, we’ve got a great story. People get the idea that we work to understand the universe at its most basic level. It’s a simple and profound message.

Going beyond the science, the way we communicate the wider industrial and societal impact is probably equally important. Here we also have a good story. In our experiments we are always pushing beyond the limits of current technology, doing things that have not been done before. The technologies we develop to do this almost always find their way back into something that will have wider applications. Of course, when we start, we don’t know what the impact will be. That’s the strength and beauty of pushing the boundaries of technology for science.

Would the FCC give a strong return on investment to the member states?

Absolutely. Part of the return is the science, part is the investment in technology, and we should not underestimate the importance of the training opportunities for young people across Europe. CERN provides such an amazing and inspiring environment for young people. The scale of the FCC will provide a huge number of opportunities for young scientists and engineers.

We need to ensure that early-career researchers can see a clear way forward with opportunities in all periods of their career. This is essential for the long-term health of particle physics

In terms of technology development, the detectors for the electron–positron collider will provide an opportunity for pushing forward and deploying new, advanced technologies to deliver the precision required for the science programme. In parallel, the development of the magnet technologies for the future hadron collider will be really exciting, particularly the potential use of high-temperature superconductors, as I said before.

It is always difficult to predict the specific “return on investment” on the technologies for big scientific research infrastructure. Part of this challenge is that some of the benefits might be 20, 30, 40 years down the line. Nevertheless, every retrospective that has tried has demonstrated that you get a huge downstream benefit.

Do we reward technical innovation well enough in high-energy physics?

There needs to be a bit of a culture shift within our community. Engineering and technology innovation are critical to the future of science and critical to the prosperity of Europe. We should be striving to reward individuals working in these areas.

Should the field make it more flexible for physicists and engineers to work in industry and return to the field having worked there?

This is an important question. I actually think things are changing. The fluidity between academia and industry is increasing in both directions. For example, an early-career researcher in particle physics with a background in deep artificial-intelligence techniques is valued incredibly highly by industry. It also works the other way around, and I experienced this myself in my career when one of my post-doctoral researchers joined from an industry background after a PhD in particle physics. The software skills they picked up from industry were incredibly impactful.

I don’t think there is much we need to do to directly increase flexibility – it’s more about culture change, to recognise that fluidity between industry and academia is important and beneficial. Career trajectories are evolving across many sectors. People move around much more than they did in the past.

Does CERN have a future as a global laboratory?

CERN already is a global laboratory. The amazing range of nationalities working here is both inspiring and a huge benefit to CERN.

How can we open up opportunities in low- and middle-income countries?

I am really passionate about the importance of diversity in all its forms and this includes national and regional inclusivity. It is an agenda that I pursued in my last two positions. At the Deep Underground Neutrino Experiment, I was really keen to engage the scientific community from Latin America, and I believe this has been mutually beneficial. At STFC, we used physics as a way to provide opportunities for people across Africa to gain high-tech skills. Going beyond the training, one of the challenges is to ensure that people use these skills in their home nations. Otherwise, you’re not really helping low- and middle-income countries to develop.

What message would you like to leave with readers?

That we have really only just started the LHC programme. With more than a factor of 10 increase in data to come, coupled with new data tools and upgraded detectors, the High-Luminosity LHC represents a major opportunity for a new discovery. Its nature could be a complete surprise. That’s the whole point of exploring the unknown: you don’t know what’s out there. This alone is incredibly exciting, and it is just a part of CERN’s amazing future.

The post A word with CERN’s next Director-General appeared first on CERN Courier.

]]>
Opinion Mark Thomson, CERN's Director General designate for 2025, talks to the Courier about the future of particle physics. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_INT_thompson_feature.jpg
The other 99% https://cerncourier.com/a/the-other-99/ Mon, 27 Jan 2025 07:44:48 +0000 https://cerncourier.com/?p=112146 Daniel Tapia Takaki describes how ultraperipheral collisions mediated by high-energy photons are shedding light on gluon saturation, gluonic hotspots and nuclear shadowing.

The post The other 99% appeared first on CERN Courier.

]]>
Quarks contribute less than 1% to the mass of protons and neutrons. This provokes an astonishing question: where does the other 99% of the mass of the visible universe come from? The answer lies in the gluon, and how it interacts with itself to bind quarks together inside hadrons.

Much remains to be understood about gluon dynamics. At present, the chief experimental challenge is to observe the onset of gluon saturation – a dynamic equilibrium between gluon splitting and recombination predicted by QCD. The experimental key looks likely to be a rare but intriguing type of LHC interaction known as an ultraperipheral collision (UPC), and the breakthrough may come as soon as the next experimental run.

Gluon saturation is expected to end the rapid growth in gluon density measured at the HERA electron–proton collider at DESY in the 1990s and 2000s. HERA observed this growth as the energy of interactions increased and as the fraction of the proton’s momentum borne by the gluons (Bjorken x) decreased.

So gluons become more numerous in hadrons as their momentum fraction decreases – but to what end?

Nonlinear effects are expected to arise due to processes like gluon recombination, wherein two gluons combine to become one. When gluon recombination becomes a significant factor in QCD dynamics, gluon saturation sets in – an emergent phenomenon whose energy scale is a critical parameter to determine experimentally. At this scale, gluons begin to act like classical fields and gluon density plateaus. A dilute partonic picture transitions to a dense, saturated state. For recombination to take precedence over splitting, gluon momenta must be very small, corresponding to low values of Bjorken x. The saturation scale should also be directly proportional to the colour-charge density, making heavy nuclei like lead ideal for studying nonlinear QCD phenomena.
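
A commonly quoted scaling makes the advantage of lead concrete: since the nuclear radius grows as A^(1/3), the colour-charge density per unit transverse area, and with it the squared saturation scale, is enhanced by roughly A^(1/3) relative to a proton. The snippet below simply evaluates that factor; it is an order-of-magnitude scaling argument, not a model fit.

```python
# Rough scaling argument only: Q_s^2 per nucleon is commonly taken to grow like the
# colour-charge density per unit transverse area, A / R_A^2 ~ A^(1/3), since R_A ~ A^(1/3).
A_PB = 208  # mass number of lead
enhancement = A_PB ** (1.0 / 3.0)
print(f"Approximate enhancement of the squared saturation scale in Pb: A^(1/3) ~ {enhancement:.1f}")
```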

But despite strong theoretical reasoning and tantalising experimental hints, direct evidence for gluon saturation remains elusive.

Since the conclusion of the HERA programme, the quest to explore gluon saturation has shifted focus to the LHC. But with no point-like electron to probe the hadronic target, LHC physicists had to find a new point-like probe: light itself. UPCs at the LHC exploit the flux of quasi-real high-energy photons generated by ultra-relativistic particles. For heavy ions like lead, this flux of photons is enhanced by the square of the nuclear charge, enabling studies of photon-proton (γp) and photon-nucleus interactions at centre-of-mass energies reaching the TeV scale.
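
The orders of magnitude behind that statement can be checked with a short back-of-envelope script. The Z² enhancement follows from coherent photon emission by the whole nucleus; the beam energies, effective lead radius and the coherence limit on the photon energy used below are illustrative assumptions rather than values quoted in the text.

```python
# Back-of-envelope estimate of why heavy-ion UPCs reach TeV-scale photon-proton energies.
import math

Z_PB = 82                  # lead charge number
HBARC_MEV_FM = 197.3       # hbar*c (MeV fm)
R_PB_FM = 7.1              # assumed effective lead radius (fm)

# Photon flux scales with the square of the nuclear charge.
print(f"Photon-flux enhancement for lead: Z^2 = {Z_PB ** 2}")

# Coherence limits the photon energy to roughly gamma * hbar*c / R.
E_PB_PER_NUCLEON_GEV = 2560.0                   # assumed Pb beam energy per nucleon (GeV)
gamma = E_PB_PER_NUCLEON_GEV / 0.931            # Lorentz factor (nucleon mass ~0.931 GeV)
e_gamma_max_gev = gamma * HBARC_MEV_FM / R_PB_FM / 1000.0
print(f"Maximum quasi-real photon energy: ~{e_gamma_max_gev:.0f} GeV")

# Photon-proton centre-of-mass energy for a head-on collision with the proton beam.
E_PROTON_GEV = 6500.0                           # assumed proton beam energy (GeV)
w_gamma_p = math.sqrt(4.0 * e_gamma_max_gev * E_PROTON_GEV)
print(f"Corresponding W_gamma_p: ~{w_gamma_p / 1000.0:.1f} TeV")
```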

Keeping it clean

What really sets UPCs apart is their clean environment. UPCs occur at large impact parameters well outside the range of the strong nuclear force, allowing the nuclei to remain intact. Unlike hadronic collisions, which can produce thousands of particles, UPCs often involve only a few final-state particles, for example a single J/ψ, providing an ideal laboratory for gluon saturation. J/ψ are produced when a cc̄ pair created by a quasi-real photon from one nucleus is brought on-shell by interacting with two or more gluons from the other nucleus (see “Sensitivity to saturation” figure).

Power-law observation

Gluon saturation models predict deviations in the γp → J/ψp cross section from the power-law behaviour observed at HERA. The LHC experiments are placing a significant focus on investigating the energy dependence of this process to identify potential signatures of saturation, with ALICE and LHCb extending studies to higher γp centre-of-mass energies (Wγp) and lower Bjorken x than HERA. The results so far reveal that the cross-section continues to increase with energy, consistent with the power-law trend (see “Approaching the plateau?” figure).
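
Concretely, the power law referred to here is usually parametrised as σ(γp → J/ψp) ∝ W^δ, with δ of order 0.7 in HERA-style fits. The sketch below shows what such a trend implies at higher energies; the normalisation and exponent are illustrative values only, and a saturation signal would show up as a downturn relative to this curve.

```python
# Illustrative power-law extrapolation, sigma(W) = sigma_0 * (W / W_0)^delta.
# The normalisation, reference energy and exponent are assumed round numbers.
SIGMA_0_NB = 80.0    # assumed cross-section at the reference energy (nb)
W_0_GEV = 90.0       # reference photon-proton energy (GeV)
DELTA = 0.7          # approximate power-law exponent from HERA-style fits

def sigma_power_law(w_gev: float) -> float:
    """Exclusive J/psi photoproduction cross-section (nb) under a pure power law."""
    return SIGMA_0_NB * (w_gev / W_0_GEV) ** DELTA

for w in (90.0, 300.0, 706.0, 2000.0):
    print(f"W = {w:6.0f} GeV  ->  sigma ~ {sigma_power_law(w):.0f} nb")
```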

The symmetric nature of pp collisions introduces significant challenges. In pp collisions, either proton can act as the photon source, leading to an intrinsic ambiguity in identifying the photon emitter. In proton–lead (pPb) collisions, the lead nucleus overwhelmingly dominates photon emission, eliminating this ambiguity. This makes pPb collisions an ideal environment for precise studies of the photoproduction of J/ψ by protons.

During LHC Run 1, the ALICE experiment probed Wγp up to 706 GeV in pPb collisions, more than doubling HERA’s maximum reach of 300 GeV. This translates to probing Bjorken-x values as low as 10⁻⁵, significantly beyond the regime explored at HERA. LHCb took a different approach. The collaboration inferred the behaviour of pp collisions at high energies (“W+ solutions”) by assuming knowledge of their energy dependence at low energies (“W− solutions”), allowing LHCb to probe Bjorken-x values as small as 10⁻⁶ and Wγp up to 2 TeV.
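
The Bjorken-x values quoted here follow from the standard kinematic estimate for exclusive J/ψ photoproduction, x ≈ (M_J/ψ / W_γp)² at small momentum transfer. The short check below reproduces the orders of magnitude; it is a back-of-envelope conversion, not the experiments’ own extraction.

```python
# Kinematic conversion from photon-proton energy W to the Bjorken x probed
# in exclusive J/psi photoproduction: x ~ (M_Jpsi / W)^2 at small momentum transfer.
M_JPSI_GEV = 3.097

def bjorken_x(w_gamma_p_gev: float) -> float:
    return (M_JPSI_GEV / w_gamma_p_gev) ** 2

# HERA maximum, ALICE pPb reach and LHCb reach quoted in the text
for w in (300.0, 706.0, 2000.0):
    print(f"W_gamma_p = {w:6.0f} GeV  ->  x ~ {bjorken_x(w):.1e}")
```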

There is not yet any theoretical consensus on whether LHC data align with gluon-saturation predictions, and the measurements remain statistically limited, leaving room for further exploration. Theoretical challenges include incomplete next-to-leading-order calculations and the reliance of some models on fits to HERA data. Progress will depend on robust and model-independent calculations and high-quality UPC data from pPb collisions in LHC Run 3 and Run 4.

Some models predict a slowing increase in the γp → J/ψp cross section with energy at small Bjorken x. If these models are correct, gluon saturation will likely be discovered in LHC Run 4, which should show clearly whether the pPb data deviate from the power law observed so far.

Gluonic hotspots

If a UPC photon interacts with the collective colour field of a nucleus – coherent scattering – it probes its overall distribution of gluons. If a UPC photon interacts with individual nucleons or smaller sub-nucleonic structures – incoherent scattering – it can probe smaller-scale gluon fluctuations.

Simulations of the transverse density of gluons in protons

These fluctuations, known as gluonic hotspots, are theorised to become more numerous and overlap in the regime of gluon saturation (see “Onset of saturation” figure). Now being probed with unprecedented precision at the LHC, they are central to understanding the high-energy regime of QCD.

Gluonic hotspots are used to model the internal transverse structure of colliding protons or nuclei (see “Hotspot snapshots” figure). The saturation scale is inherently impact-parameter dependent: the colour-charge density is highest at the core of the proton or nucleus and diminishes toward the periphery, though it is subject to fluctuations. Researchers are increasingly interested in exploring how these fluctuations depend on the impact parameter of collisions to better characterise the spatial dynamics of colour charge. Future analyses will pinpoint contributions from localised hotspots where saturation effects are most likely to be observed.

The energy dependence of incoherent or dissociative photoproduction promises a clear signature for gluon saturation, independent of the coherent power-law method described above. As saturation sets in, all gluon configurations in the target converge to similar densities, causing the variance of the gluon field to decrease, and with it the dissociative cross section. Detecting a peak and a decline in the incoherent cross-section as a function of energy would represent a clear signature of gluon saturation.
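
This rise and fall can be made concrete with a toy Monte Carlo, sketched below. It is a cartoon of the Good–Walker picture rather than any collaboration’s model: coherent production tracks the squared average amplitude, dissociative production tracks the amplitude’s variance over gluon configurations, and once every configuration is effectively black the variance collapses. All parameters are arbitrary illustrative choices.

```python
# Toy Good-Walker illustration: coherent ~ |<A>|^2, incoherent (dissociative) ~ <A^2> - <A>^2.
import math
import random

rng = random.Random(42)

def toy_amplitude(strength: float, fluctuation: float = 0.5) -> float:
    """Eikonal-like amplitude 1 - exp(-strength * d) for one random gluon configuration.

    d is a positive, fluctuating effective gluon density (lognormal); 'strength'
    stands in for the growth of the gluon field as Bjorken x decreases.
    """
    d = math.exp(rng.gauss(0.0, fluctuation))
    return 1.0 - math.exp(-strength * d)

for strength in (0.1, 0.3, 1.0, 3.0, 10.0, 30.0):
    amps = [toy_amplitude(strength) for _ in range(200_000)]
    mean = sum(amps) / len(amps)
    variance = sum((a - mean) ** 2 for a in amps) / len(amps)
    print(f"strength {strength:5.1f}:  coherent ~ {mean ** 2:.3f}   incoherent ~ {variance:.4f}")
```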

Simulations of the transverse density of gluons in lead nuclei

The ALICE collaboration has taken significant steps in exploring this quantum terrain, demonstrating the possibility of studying different geometrical configurations of quantum fluctuations in processes where protons or lead nucleons dissociate. The results highlight a striking correlation between momentum transfer, which is inversely proportional to the impact parameter, and the size of the target structure. The observation that sub-nucleonic structures impart the greatest momentum transfer is compelling evidence for gluonic quantum fluctuations at the sub-nucleon level.

Into the shadows

In 1982 the European Muon Collaboration observed an intriguing phenomenon: nuclei appeared to contain fewer gluons than expected based on the contributions from their individual protons and neutrons. This effect, known as nuclear shadowing, was observed in experiments conducted at CERN at moderate values of Bjorken x. It is now known to occur because the interaction of a probe with one gluon reduces the likelihood of the probe interacting with other gluons within the nucleus – the gluons hiding behind them, in their shadow, so to speak. At smaller values of Bjorken x, saturation further suppresses the number of gluons contributing to the interaction.

Nuclear suppression factor for lead relative to protons

The relationship between gluon saturation and nuclear shadowing is poorly understood, and separating their effects remains an open challenge. The situation is further complicated by an experimental reliance on lead–lead (PbPb) collisions, which, like pp collisions, suffer from ambiguity in identifying the interacting nucleus, unless the interaction is accompanied by an ejected neutron.

The ALICE, CMS and LHCb experiments have extensively studied nuclear shadowing via the exclusive production of vector mesons such as J/ψ in ultraperipheral PbPb collisions. Results span photon–nucleus collision energies from 10 to 1000 GeV. The onset of nuclear shadowing, or another nonlinear QCD phenomenon like saturation, is clearly visible as a function of energy and Bjorken x (see “Nuclear shadowing” figure).

Multidimensional maps

While both saturation-based and gluon shadowing models describe the data reasonably well at high energies, neither framework captures the observed trends across the entire kinematic range. Future efforts must go beyond energy dependence by being differential in momentum transfer and studying a range of vector mesons with complementary sensitivities to the saturation scale.

Soon to be constructed at Brookhaven National Laboratory, the Electron-Ion Collider (EIC) promises to transform our understanding of gluonic matter. Designed specifically for QCD research, the EIC will probe gluon saturation and shadowing in unprecedented detail, using a broad array of reactions, collision species and energy levels. By providing a multidimensional map of gluonic behaviour, the EIC will address fundamental questions such as the origin of mass and nuclear spin.

ALICE’s high-granularity forward calorimeter

Before then, a tenfold increase in PbPb statistics in LHC Runs 3 and 4 will allow a transformative leap in low Bjorken-x physics. Though the LHC was not originally designed for low Bjorken-x physics, its unparalleled energy reach and diverse range of colliding systems offer unique opportunities to explore gluon dynamics at the highest energies.

Enhanced capabilities

Surpassing the gains from increased luminosity alone, ALICE’s new triggerless detector readout mode will offer a vast improvement over previous runs, which were constrained by dedicated triggers and bandwidth limitations. Subdetector upgrades will also play an important role. The muon forward tracker has already enhanced ALICE’s capabilities, and the high-granularity forward calorimeter set to be installed in time for Run 4 is specifically designed to improve sensitivity to small Bjorken-x physics (see “Saturation specific” figure).

Ultraperipheral-collision physics at the LHC is far more than a technical exploration of QCD. Gluons govern the structure of all visible matter. Saturation, hotspots and shadowing shed light on the origin of 99% of the mass of the visible universe. 

Feature Daniel Tapia Takaki describes how ultraperipheral collisions mediated by high-energy photons are shedding light on gluon saturation, gluonic hotspots and nuclear shadowing. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_GLUON_frontis.jpg
Charm and synthesis https://cerncourier.com/a/charm-and-synthesis/ Mon, 27 Jan 2025 07:43:29 +0000 https://cerncourier.com/?p=112128 Sheldon Glashow recalls the events surrounding a remarkable decade of model building and discovery between 1964 and 1974.

In 1955, after a year of graduate study at Harvard, I joined a group of a dozen or so students committed to studying elementary particle theory. We approached Julian Schwinger, one of the founders of quantum electrodynamics, hoping to become his thesis students – and we all did.

Schwinger lined us up in his office, and spent several hours assigning thesis subjects. It was a remarkable performance. I was the last in line. Having run out of well-defined thesis problems, he explained to me that weak and electromagnetic interactions share two remarkable features: both are vectorial and both display aspects of universality. Schwinger suggested that I create a unified theory of the two interactions – an electroweak synthesis. How I was to do this he did not say, aside from slyly hinting at the Yang–Mills gauge theory.

By the summer of 1958, I had convinced myself that weak and electromagnetic interactions might be described by a badly broken gauge theory, and Schwinger that I deserved a PhD. I had hoped to spend part of a postdoctoral fellowship in Moscow at the invitation of the recent Russian Nobel laureate Igor Tamm, and sought to visit Niels Bohr’s institute in Copenhagen while awaiting my Soviet visa. With Bohr’s enthusiastic consent, I boarded the SS Île de France with my friend Jack Schnepps. Following a memorable and luxurious crossing – one of the great ship’s last – Jack drove south to work with Milla Baldo-Ceolin’s emulsion group in Padova, and I took the slow train north to Copenhagen. Thankfully, my Soviet visa never arrived. I found the SU(2) × U(1) structure of the electroweak model in the spring of 1960 at Bohr’s famous institute at Blegdamsvej 19, and wrote the paper that would earn my share of the 1979 Nobel Prize.

A year earlier, in 1959, Augusto Gamba, Bob Marshak and Susumu Okubo had proposed lepton–hadron symmetry, which regarded protons, neutrons and lambda hyperons as the building blocks of all hadrons, to match the three known leptons at the time: neutrinos, electrons and muons. The idea was falsified by the discovery of a second neutrino in 1962, and superseded in 1964 by the invention of fractionally charged hadron constituents, first by George Zweig and André Petermann, and then decisively by Murray Gell-Mann with his three flavours of quarks. Later in 1964, while on sabbatical in Copenhagen, James Bjorken and I realised that lepton–hadron symmetry could be revived simply by adding a fourth quark flavour to Gell-Mann’s three. We called the new quark flavour “charm”, completing two weak doublets of quarks to match two weak doublets of leptons, and establishing lepton–quark symmetry, which holds to this day.

Annus mirabilis

1964 was a remarkable year. In addition to the invention of quarks, Nick Samios spotted the triply strange Ω baryon, and Oscar Greenberg devised what became the critical notion of colour. Arno Penzias and Robert Wilson stumbled on the cosmic microwave background radiation. James Cronin, Val Fitch and others discovered CP violation. Robert Brout, François Englert, Peter Higgs and others invented spontaneously broken non-Abelian gauge theories. And to top off the year, Abdus Salam rediscovered and published my SU(2) × U(1) model, after I had more-or-less abandoned electroweak thoughts due to four seemingly intractable problems.

Four intractable problems of early 1964

How could the W and Z bosons acquire masses while leaving the photon massless?

Steven Weinberg, my friend from both high school and college, brilliantly solved this problem in 1967 by subjecting the electroweak gauge group to spontaneous symmetry breaking, initiating the half-century-long search for the Higgs boson. Salam published the same solution in 1968.

How could an electroweak model of leptons be extended to describe the weak interactions of hadrons?

John Iliopoulos, Luciano Maiani and I solved this problem in 1970 by introducing charm and quark-lepton symmetry to avoid unobserved strangeness-changing neutral currents.

Was the spontaneously broken electroweak gauge model mathematically consistent?

Gerard ’t Hooft announced in 1971 that he had proven Steven Weinberg’s electroweak model to be renormalisable. In 1972, Claude Bouchiat, John Iliopoulos and Philippe Meyer demonstrated the electroweak model to be free of Adler anomalies provided that lepton–quark symmetry is maintained.

Could the electroweak model describe CP violation without invoking additional spinless fields?

In 1973, Makoto Kobayashi and Toshihide Maskawa showed that the electroweak model could easily and naturally violate CP if there are more than four quark flavours.

Much to my surprise and delight, all of them would be solved within just a few years, with the last theoretical obstacle removed by Makoto Kobayashi and Toshihide Maskawa in 1973 (see “Four intractable problems” panel). A few months later, Paul Musset announced that CERN’s Gargamelle detector had won the race to detect weak neutral-current interactions, giving the electroweak model the status of a predictive theory. Remarkably, the year had begun with Gell-Mann, Harald Fritzsch and Heinrich Leutwyler proposing QCD, and David Gross, Frank Wilczek and David Politzer showing it to be asymptotically free. The Standard Model of particle physics was born.

Charmed findings

But where were the charmed quarks? Early on Monday morning, 11 November 1974, I was awakened by a phone call from Sam Ting, who asked me to come to his MIT office as soon as possible. He and Ulrich Becker were waiting for me impatiently. They showed me an amazingly sharp resonance. Could it be a vector meson like the ρ or ω and be so narrow, or was it something quite different? I hopped in my car and drove to Harvard, where my colleagues Alvaro de Rújula and Howard Georgi excitedly regaled me with the Californian side of the story. A few days later, experimenters in Frascati confirmed the BNL–SLAC discovery, and de Rújula and I submitted our paper “Is Bound Charm Found?” – one of two papers on the J/ψ discovery printed in Physical Review Letters in January 1975 that would prove to be correct. Among five false papers was one written by my beloved mentor, Julian Schwinger.

Sam Ting at CERN in 1976

The second correct paper was by Tom Appelquist and David Politzer. Well before that November, they had realised (without publishing) that bound states of a charmed quark and its antiquark lying below the charm threshold would be exceptionally narrow due to the asymptotic freedom of QCD. De Rújula suggested to them that such a system be called charmonium in analogy with positronium. His term made it into the dictionary. Shortly afterward, the 1976 Nobel Prize in Physics was jointly awarded to Burton Richter and Sam Ting for “their pioneering work in the discovery of a heavy elementary particle of a new kind” – evidence that charm was not yet a universally accepted explanation. Over the next few years, experimenters worked hard to confirm the predictions of theorists at Harvard and Cornell by detecting and measuring the masses, spins and transitions among the eight sub-threshold charmonium states. Later on, they would do the same for 14 relatively narrow states of bottomonium.

Abdus Salam, Tom Ball and Paul Musset

Other experimenters were searching for particles containing just one charmed quark or antiquark. In our 1975 paper “Hadron Masses in a Gauge Theory”, de Rújula, Georgi and I included predictions of the masses of several not-yet-discovered charmed mesons and baryons. The first claim to have detected charmed particles was made in 1975 by Robert Palmer and Nick Samios at Brookhaven, again with a bubble-chamber event. It seemed to show a cascade decay process in which one charmed baryon decays into another charmed baryon, which itself decays. The measured masses of both of the charmed baryons were in excellent agreement with our predictions. Though the claim was not widely accepted, I believe to this day that Samios and Palmer were the first to detect charmed particles.

Sheldon Glashow and Steven Weinberg

The SLAC electron–positron collider, operating well above charm threshold, was certainly producing charmed particles copiously. Why were they not being detected? I recall attending a conference in Wisconsin that was largely dedicated to this question. On the flight home, I met my old friend Gerson Goldhaber, who had been struggling unsuccessfully to find them. I think I convinced him to try a bit harder. A couple of weeks later in 1976, Goldhaber and François Pierre succeeded. My role in charm physics had come to a happy ending. 

  • This article is adapted from a presentation given at the Institute of High-Energy Physics in Beijing on 20 October 2024 to celebrate the 50th anniversary of the discovery of the J/ψ.

Feature Sheldon Glashow recalls the events surrounding a remarkable decade of model building and discovery between 1964 and 1974. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_GLASHOW_lectures.jpg
Muon cooling kickoff at Fermilab https://cerncourier.com/a/muon-cooling-kickoff-at-fermilab/ Mon, 27 Jan 2025 07:27:55 +0000 https://cerncourier.com/?p=112324 The first of a new series of workshops to discuss the future of beam-cooling technology for a muon collider.

More than 100 accelerator scientists, engineers and particle physicists gathered in person and remotely at Fermilab from 30 October to 1 November for the first of a new series of workshops to discuss the future of beam-cooling technology for a muon collider. High-energy muon colliders offer a unique combination of discovery potential and precision. Unlike protons, muons are point-like particles that can achieve comparable physics outcomes at lower centre-of-mass energies. The large mass of the muon also suppresses synchrotron radiation, making muon colliders promising candidates for exploration at the energy frontier.

The International Muon Collider Collaboration (IMCC), supported by the EU MuCol study, is working to assess the potential of a muon collider as a future facility, along with the R&D needed to make it a reality. European engagement in this effort crystallised following the 2020 update to the European Strategy for Particle Physics (ESPPU), which identified the development of bright muon beams as a high-priority initiative. Worldwide interest in a muon collider is quickly growing: the 2023 Particle Physics Project Prioritization Panel (P5) recently identified it as an important future possibility for the US particle-physics community; Japanese colleagues have proposed a muon-collider concept, muTRISTAN (CERN Courier July/August 2024 p8); and Chinese colleagues have actively contributed to IMCC efforts as collaboration members.

Lighting the way

The workshop focused on reviewing the scope and design progress of a muon-cooling demonstrator facility, identifying potential host sites and timelines, and exploring science programmes that could be developed alongside it. Diktys Stratakis (Fermilab) began by reviewing the requirements and challenges of muon cooling. Delivering a high-brightness muon beam will be essential to achieving the luminosity needed for a muon collider. The technique proposed for this is ionisation cooling, wherein the phase-space volume of the muon beam decreases as it traverses a sequence of cells, each containing an energy-absorbing material and accelerating radiofrequency (RF) cavities.

Roberto Losito (CERN) called for a careful balance between ambition and practicality – the programme must be executed in a timely way if a muon collider is to be a viable next-generation facility. The Muon Cooling Demonstrator programme was conceived to prove that this technology can be developed, built and reliably operated. This is a critical step for any muon-collider programme, as highlighted in the ESPPU–LDG Accelerator R&D Roadmap published in 2022. The plan is to pursue a staged approach, starting with the development of the magnet, RF and absorber technology, and demonstrating the robust operation of high-gradient RF cavities in high magnetic fields. The components will then be integrated into a prototype cooling cell. The programme will conclude with a demonstration of the operation of a multi-cell cooling system with a beam, building on the cooling proof of principle made by the Muon Ionisation Cooling Experiment.

Chris Rogers (STFC RAL) summarised an emerging consensus that it is critical to demonstrate the reliable operation of a cooling lattice formed of multiple cells. While the technological complexity of the cooling-cell prototype will undergo further review, the preliminary choice presents a moderately challenging performance that could be achieved within five to seven years with reasonable investment. The target cooling performance of a whole cooling lattice remains to be established and depends on future funding levels. However, delegates agreed that a timely demonstration is more important than an ambitious cooling target.

The workshop also provided an opportunity to assess progress in designing the cooling-cell prototype. Given that the muon beam originates from hadron decays and is initially the size of a watermelon, solenoid magnets were chosen as they can contain large beams in a compact lattice and provide focusing in both horizontal and vertical planes simultaneously. Marco Statera (INFN LASA) presented preliminary solutions for the solenoid coil configuration based on high-temperature superconductors operating at 20 K: the challenge is to deliver the target magnetic field profile given axial forces, coil stresses and compact integration.

In ionisation cooling, low-Z absorbers are used to reduce the transverse momenta of the muons while keeping the multiple scattering at manageable levels. Candidate materials are lithium hydride and liquid hydrogen. Chris Rogers discussed the need to test absorbers and containment windows at the highest intensities. The potential for performance tests using muons or intensity tests using another particle species such as protons was considered to verify understanding of the collective interaction between the beam and the absorber. RF cavities are required to replace longitudinal energy lost in the absorbers. Dario Giove (INFN LASA) introduced the prototype of an RF structure based on three coupled 704 MHz cavities and presented a proposal to use existing INFN capabilities to carry out a test programme for materials and cavities in magnetic fields. The use of cavity windows was also discussed, as it would enable greater accelerating gradients, though at the cost of beam degradation, increased thermal loads and possible cavity detuning. The first steps in integrating these latest hardware designs into a compact cooling cell were presented by Lucio Rossi (INFN LASA and UMIL). Future work needs to address the management of the axial forces and cryogenic heat loads, Rossi observed.
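
The competition described here, multiple-scattering heating against ionisation-energy-loss cooling, sets an equilibrium emittance that can be estimated with the standard transverse cooling relation. The sketch below evaluates it for a liquid-hydrogen absorber; the material constants and the beta function at the absorber are round illustrative numbers, not the demonstrator’s design parameters.

```python
# Equilibrium normalised transverse emittance from the standard ionisation-cooling balance:
# eps_eq ~ beta_t * E_s^2 / (2 * beta * m_mu*c^2 * X0 * dE/ds), with illustrative values.
E_S = 13.6e6         # multiple-scattering constant (eV)
M_MU_C2 = 105.66e6   # muon rest energy (eV)
X0 = 8.9             # radiation length of liquid hydrogen (m), approximate
DEDS = 29.0e6        # ionisation energy loss in liquid hydrogen (eV/m), approximate
BETA = 0.88          # muon velocity v/c for a ~200 MeV/c beam
BETA_T = 0.10        # transverse beta function at the absorber (m), assumed

eps_eq = BETA_T * E_S ** 2 / (2.0 * BETA * M_MU_C2 * X0 * DEDS)
print(f"Equilibrium normalised emittance: ~{eps_eq * 1e3:.2f} mm")
```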

Many institutes presented a strong interest in contributing to the programme, both in the hardware R&D and hosting the eventual demonstrator. The final sessions of the workshop focused on potential host laboratories.

At CERN, two potential sites were discussed, with ongoing studies focusing on the TT7 tunnel, where a moderate-power 10 kW proton beam from the Proton Synchrotron could be used for muon production. Preliminary beam physics studies of muon beam production and transport are already underway. Lukasz Krzempek (CERN) and Paul Jurj (Imperial College London) presented the first integration and beam-physics studies of the demonstrator facility in the TT7 tunnel, highlighting civil engineering and beamline design requirements, logistical challenges and safety considerations, finding no apparent showstoppers.

Jeff Eldred (Fermilab) gave an overview of Fermilab’s broad range of candidate sites and proton-beam energies. While further feasibility studies are required, Eldred highlighted that using 8 GeV protons from the Booster is an attractive option due to the favourable existing infrastructure and its alignment with Fermilab’s muon-collider scenario, which envisions a proton driver based on the same Booster proton energy.

The Fermilab workshop represented a significant milestone in advancing the Muon Cooling Demonstrator, highlighting enthusiasm from the US community to join forces with the IMCC and growing interest in Asia. As Mark Palmer (BNL) observed in his closing remarks, the event underscored the critical need for sustained innovation, timely implementation and global cooperation to make the muon collider a reality.

Meeting report The first of a new series of workshops to discuss the future of beam-cooling technology for a muon collider. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_FN_cool.jpg
CLOUD explains Amazon aerosols https://cerncourier.com/a/cloud-explains-amazon-aerosols/ Mon, 27 Jan 2025 07:26:49 +0000 https://cerncourier.com/?p=112200 The CLOUD collaboration at CERN has revealed a new source of atmospheric aerosol particles that could help scientists to refine climate models.

In a paper published in the journal Nature, the CLOUD collaboration at CERN has revealed a new source of atmospheric aerosol particles that could help scientists to refine climate models.

Aerosols are microscopic particles suspended in the atmosphere that arise from both natural sources and human activities. They play an important role in Earth’s climate system because they seed clouds and influence their reflectivity and coverage. Most aerosols arise from the spontaneous condensation of molecules that are present in the atmosphere only in minute concentrations. However, the vapours responsible for their formation are not well understood, particularly in the remote upper troposphere.

The CLOUD (Cosmics Leaving Outdoor Droplets) experiment at CERN is designed to investigate the formation and growth of atmospheric aerosol particles in a controlled laboratory environment. CLOUD comprises a 26 m³ ultra-clean chamber and a suite of advanced instruments that continuously analyse its contents. The chamber contains a precisely selected mixture of gases under atmospheric conditions, into which beams of charged pions are fired from CERN’s Proton Synchrotron to mimic the influence of galactic cosmic rays.

“Large concentrations of aerosol particles have been observed high over the Amazon rainforest for the past 20 years, but their source has remained a puzzle until now,” says CLOUD spokesperson Jasper Kirkby. “Our latest study shows that the source is isoprene emitted by the rainforest and lofted in deep convective clouds to high altitudes, where it is oxidised to form highly condensable vapours. Isoprene represents a vast source of biogenic particles in both the present-day and pre-industrial atmospheres that is currently missing in atmospheric chemistry and climate models.”

Isoprene is a hydrocarbon containing five carbon atoms and eight hydrogen atoms. It is emitted by broad-leaved trees and other vegetation and is the most abundant non-methane hydrocarbon released into the atmosphere. Until now, isoprene’s ability to form new particles has been considered negligible.

Seeding clouds

The CLOUD results change this picture. By studying the reaction of hydroxyl radicals with isoprene at upper tropospheric temperatures of –30 °C and –50 °C, the collaboration discovered that isoprene oxidation products form copious particles at ambient isoprene concentrations. This new source of aerosol particles does not require any additional vapours. However, when minute concentrations of sulphuric acid or iodine oxoacids were introduced into the CLOUD chamber, a 100-fold increase in aerosol formation rate was observed. Although sulphuric acid derives mainly from anthropogenic sulphur dioxide emissions, the acid concentrations used in CLOUD can also arise from natural sources.

In addition, the team found that isoprene oxidation products drive rapid growth of particles to sizes at which they can seed clouds and influence the climate – a behaviour that persists in the presence of nitrogen oxides produced by lightning at upper-tropospheric concentrations. After continued growth and descent to lower altitudes, these particles may provide a globally important source for seeding shallow continental and marine clouds, which influence Earth’s radiative balance – the amount of incoming solar radiation compared to outgoing longwave radiation (see “Seeding clouds” figure).

“This new source of biogenic particles in the upper troposphere may impact estimates of Earth’s climate sensitivity, since it implies that more aerosol particles were produced in the pristine pre-industrial atmosphere than previously thought,” adds Kirkby. “However, until our findings have been evaluated in global climate models, it’s not possible to quantify the effect.”

The CLOUD findings are consistent with aircraft observations over the Amazon, as reported in an accompanying paper in the same issue of Nature. Together, the two papers provide a compelling picture of the importance of isoprene-driven aerosol formation and its relevance for the atmosphere.

Since it began operation in 2009, the CLOUD experiment has unearthed several mechanisms by which aerosol particles form and grow in different regions of Earth’s atmosphere. “In addition to helping climate researchers understand the critical role of aerosols in Earth’s climate, the new CLOUD result demonstrates the rich diversity of CERN’s scientific programme and the power of accelerator-based science to address societal challenges,” says CERN Director for Research and Computing, Joachim Mnich.

News The CLOUD collaboration at CERN has revealed a new source of atmospheric aerosol particles that could help scientists to refine climate models. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_NA_cloudfrontis.jpg
Painting Higgs’ portrait in Paris https://cerncourier.com/a/painting-higgs-portrait-in-paris/ Mon, 27 Jan 2025 07:25:46 +0000 https://cerncourier.com/?p=112363 The 14th Higgs Hunting workshop deciphered the latest results from the ATLAS and CMS experiments.

The 14th Higgs Hunting workshop took place from 23 to 25 September 2024 at Orsay’s IJCLab and Paris’s Laboratoire Astroparticule et Cosmologie. More than 100 participants joined lively discussions to decipher the latest developments in theory and results from the ATLAS and CMS experiments.

The portrait of the Higgs boson painted by experimental data is becoming more and more precise. Many new Run 2 and first Run 3 results have developed the picture this year. Highlights included the latest di-Higgs combinations with cross-section upper limits reaching down to 2.5 times the Standard Model (SM) expectations. A few excesses seen in various analyses were also discussed. The CMS collaboration reported a brand new excess of top–antitop events near the top–antitop production threshold, with a local significance of more than 5σ above the background described by perturbative quantum chromodynamics (QCD) alone, which could be due to a pseudoscalar top–antitop bound state. A new W-boson mass measurement by the CMS collaboration – a subject deeply connected to electroweak symmetry breaking – was also presented, reporting a value consistent with the SM prediction, with a precision of 9.9 MeV (CERN Courier November/December 2024 p7).

Parton shower event generators were in the spotlight. Historical talks by Torbjörn Sjöstrand (Lund University) and Bryan Webber (University of Cambridge) described the evolution of the PYTHIA and HERWIG generators, the crucial role they played in the discovery of the Higgs boson, and the role they now play in the LHC’s physics programme. Differences in the modelling of the parton–shower systematics by the ATLAS and CMS collaborations led to lively discussions!

The vision talk was given by Lance Dixon (SLAC) about the reconstruction of scattering amplitudes directly from analytic properties, as a complementary approach to Lagrangians and Feynman diagrams. Oliver Brüning (CERN) conveyed the message that the HL-LHC accelerator project is well on track, and Patricia McBride (Fermilab) reached a similar conclusion regarding ATLAS and CMS’s Phase-2 upgrades, enjoining new and young people to join the effort to ensure the upgrades are ready and commissioned for the start of Run 4.

The next Higgs Hunting workshop will be held in Orsay and Paris from 15 to 17 July 2025, following EPS-HEP in Marseille from 7 to 11 July.

Meeting report The 14th Higgs Hunting workshop deciphered the latest results from the ATLAS and CMS experiments. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_FN_higgs.jpg
Trial trap on a truck https://cerncourier.com/a/trial-trap-on-a-truck/ Mon, 27 Jan 2025 07:24:01 +0000 https://cerncourier.com/?p=112206 CERN'S BASE-STEP experiment has taken the first step in testing the world's most compact antimatter trap.

Thirty years ago, physicists from Harvard University set out to build a portable antiproton trap. They tested it on electrons, transporting them 5000 km from Nebraska to Massachusetts, but it was never used to transport antimatter. Now, a spin-off project of the Baryon Antibaryon Symmetry Experiment (BASE) at CERN has tested their own antiproton trap, this time using protons. The ultimate goal is to deliver antiprotons to labs beyond CERN’s reach.

“For studying the fundamental properties of protons and antiprotons, you need to take extremely precise measurements – as precise as you can possibly make it,” explains principal investigator Christian Smorra. “This level of precision is extremely difficult to achieve in the antimatter factory, and can only be reached when the accelerator is shut down. This is why we need to relocate the measurements – so we can get rid of these problems and measure anytime.”

The team has made considerable strides to miniaturise their apparatus. BASE-STEP is far and away the most compact design for an antiproton trap yet built, measuring just 2 metres in length, 1.58 metres in height and 0.87 metres across. Even so, at 1 tonne, transporting it is a complex operation. On 24 October, 70 protons were introduced into the trap and lifted onto a truck using two overhead cranes. The protons made a round trip through CERN’s main site before returning home to the antimatter factory. All 70 protons were safely transported and the experiment with these particles continued seamlessly, successfully demonstrating the trap’s performance.

Antimatter needs to be handled carefully, to avoid it annihilating with the walls of the trap. This is hard to achieve in the controlled environment of a laboratory, let alone on a moving truck. Just like in the BASE laboratory, BASE-STEP uses a Penning trap with two electrode stacks inside a single solenoid. The magnetic field confines charged particles radially, and the electric fields trap them axially. The first electrode stack collects antiprotons from CERN’s antimatter factory and serves as an “airlock” by protecting antiprotons from annihilation with the molecules of external gases. The second is used for long-term storage. While in transit, non-destructive image-current detection monitors the particles and makes sure they have not hit the walls of the trap.
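
For a sense of the signal being monitored, the induced image currents appear at characteristic frequencies set by the trap fields; the dominant one is the cyclotron frequency. The snippet below evaluates the textbook formula for an assumed 2 T solenoid field, which is illustrative rather than the BASE-STEP design value.

```python
# Textbook cyclotron frequency of a trapped (anti)proton, nu_c = qB / (2 pi m).
import math

Q = 1.602176634e-19   # elementary charge (C)
M = 1.67262192e-27    # (anti)proton mass (kg)
B = 2.0               # assumed magnetic field (T), illustrative

nu_c = Q * B / (2.0 * math.pi * M)
print(f"Free cyclotron frequency at {B} T: ~{nu_c / 1e6:.1f} MHz")
```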

“We originally wanted a system that you can put in the back of your car,” says Smorra. “Next, we want to try using permanent magnets instead of a superconducting solenoid. This would make the trap even smaller and save CHF 300,000. With this technology, there will be so much more potential for future experiments at CERN and beyond.”

With or without a superconducting magnet, continuous cooling is essential to prevent heat from degrading the trap’s ultra-high vacuum. Penning traps conventionally require two separate cooling systems – one for the trap and one for the superconducting magnet. BASE-STEP combines the cooling systems into one, as the Harvard team proposed in 1993. Ultimately, the transport system will have a cryocooler that is attached to a mobile power generator with a liquid-helium buffer tank present as a backup. Should the power generator be interrupted, the back-up cooling system provides a grace period of four hours to fix it and save the precious cargo of antiprotons. But such a scenario carries no safety risk given the minuscule amount of antimatter being transported. “The worst that can happen is the antiprotons annihilate, and you have to go back to the antimatter factory to refill the trap,” explains Smorra.

With the proton trial-run a success, the team are confident they will be able to use this apparatus to successfully deliver antiprotons to precision laboratories in Europe. Next summer, BASE-STEP will load up the trap with 1000 antiprotons and hit the road. Their first stop is scheduled to be Heinrich Heine University in Germany.

“We can use the same apparatus for the antiproton transport,” says Smorra. “All we need to do is switch the polarity of the electrodes.”

News CERN'S BASE-STEP experiment has taken the first step in testing the world's most compact antimatter trap. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_NA_base.jpg
Emphasising the free circulation of scientists https://cerncourier.com/a/emphasising-the-free-circulation-of-scientists/ Mon, 27 Jan 2025 07:23:24 +0000 https://cerncourier.com/?p=112341 The 33rd assembly of the International Union of Pure and Applied Physics took place in Haikou, China.

Physics is a universal language that unites scientists worldwide. No event illustrates this more vividly than the general assembly of the International Union of Pure and Applied Physics (IUPAP). The 33rd assembly convened 100 delegates representing territories around the world in Haikou, China, from 10 to 14 October 2024. Amid today’s polarised global landscape, one clear commitment emerged: to uphold the universality of science and ensure the free movement of scientists.

IUPAP was established in 1922 in the aftermath of World War I to coordinate international efforts in physics. Its logo is recognisable from conferences and proceedings, but its mission is less widely understood. IUPAP is the only worldwide organisation dedicated to the advancement of all fields of physics. Its goals include promoting global development and cooperation in physics by sponsoring international meetings; strengthening physics education, especially in developing countries; increasing diversity and inclusion in physics; advancing the participation and recognition of women and of people from under-represented groups; enhancing the visibility of early-career talents; and promoting international agreements on symbols, units, nomenclature and standards. At the 33rd assembly, 300 physicists were elected to the executive council and specialised commissions for a period of three years.

Global scientific initiatives were highlighted, including the International Year of Quantum Science and Technology (IYQ2025) and the International Decade on Science for Sustainable Development (IDSSD) from 2024 to 2033, which was adopted by the United Nations General Assembly in August 2023. A key session addressed the importance of industry partnerships, with delegates exploring strategies to engage companies in IYQ2025 and IDSSD to further IUPAP’s mission of using physics to drive societal progress. Nobel laureate Giorgio Parisi discussed the role of physics in promoting a sustainable future, and public lectures by fellow laureates Barry Barish, Takaaki Kajita and Samuel Ting filled the 1820-seat Oriental Universal Theater with enthusiastic students.

A key focus of the meeting was visa-related issues affecting international conferences. Delegates reaffirmed the union’s commitment to scientists’ freedom of movement. IUPAP stands against any discrimination in physics and will continue to sponsor events only in locations that uphold this value – a stance that is orthogonal to the policy of countries imposing sanctions on scientists affiliated with specific institutions.

A joint session with the fall meeting of the Chinese Physical Society celebrated the 25th anniversary of the IUPAP working group “Women in Physics” and emphasised diversity, equity and inclusion in the field. Since 2002, IUPAP has established precise guidelines for the sponsorship of conferences to ensure that women are fairly represented among participants, speakers and committee members, and has actively monitored the data ever since. This has contributed to a significant change in the participation of women in IUPAP-sponsored conferences. IUPAP is now building on this still-necessary work on gender by focusing on discrimination on the grounds of disability and ethnicity.

The closing ceremony brought together the themes of continuity and change. Incoming president Silvina Ponce Dawson (University of Buenos Aires) and president-designate Sunil Gupta (Tata Institute) outlined their joint commitment to maintaining an open dialogue among all physicists in an increasingly fragmented world, and to promoting physics as an essential tool for development and sustainability. Outgoing leaders Michel Spiro (CNRS) and Bruce McKellar (University of Melbourne) were honoured for their contributions, and the ceremonial handover symbolised a smooth transition of leadership.

As the general assembly concluded, there was a palpable sense of momentum. From strategic modernisation to deeper engagement with global issues, IUPAP is well-positioned to make physics more relevant and accessible. The resounding message was one of unity and purpose: the physics community is dedicated to leveraging science for a brighter, more sustainable future.

Meeting report The 33rd assembly of the International Union of Pure and Applied Physics took place in Haikou, China. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_FN_IUPAP.jpg
The new hackerpreneur https://cerncourier.com/a/the-new-hackerpreneur/ Mon, 27 Jan 2025 07:22:11 +0000 https://cerncourier.com/?p=112258 Hackathons can kick-start your career, says hacker and entrepreneur Jiannan Zhang.

The World Wide Web, AI and quantum computing – what do these technologies have in common? They all started out as “hacks”, says Jiannan Zhang, founder of the open-source community platform DoraHacks. “When the Web was invented at CERN, it demonstrated that in order to fundamentally change how people live and work, you have to think of new ways to use existing technology,” says Zhang. “Progress cannot be made if you always start from scratch. That’s what hackathons are for.”

Ten years ago, Zhang helped organise the first CERN Webfest, a hackathon that explores creative uses of technology for science and society. Webfest helped Zhang develop his coding skills and knowledge of physics by applying it to something beyond his own discipline. He also made long-lasting connections with teammates, who were from different academic backgrounds and all over the world. After participating in more hackathons, Zhang’s growing “hacker spirit” inspired him to start his own company. In 2024 Zhang returned to Webfest not as a participant, but as the CEO of DoraHacks.

Hackathons are social coding events often spanning multiple days. They are inclusive and open – no academic institution or corporate backing is required – making them accessible to a diverse range of talented individuals. Participants work in teams, pooling their skills to tackle technical problems through software, hardware or a business plan for a new product. Physicists, computer scientists, engineers and entrepreneurs all bring their strengths to the table. Young scientists can pursue work that may not fit within typical research structures, develop their skills, and build portfolios and professional networks.

“If you’re really passionate about something, you should be able to jump on a project and work on it,” says Zhang. “You shouldn’t need to be associated with a university or have a PhD to pursue it.”

For early-career researchers, hackathons offer more than just technical challenges. They provide an alternative entry point into research and industry, bridging the gap between academia and real-world applications. University-run hackathons often attract corporate sponsors, giving them the budget to rent out stadiums with hundreds, sometimes thousands, of attendees.

“These large-scale hackathons really capture the attention of headhunters and mentors from industry,” explains Zhang. “They see the events as a recruitment pool. It can be a really effective way to advance careers and speak to representatives of big companies, as well as enhancing your coding skills.”

In the 2010s, weekend hackathons served as Zhang’s stepping stone into entrepreneurship. “I used to sit in the computer-science common room and work on my hacks. That’s how I met most of my friends,” recalls Zhang. “But later I realised that to build something great, I had to effectively organise people and capital. So I started to skip my computer-science classes and sneak into the business classrooms.” Zhang would hide in the back row of the business lectures, plotting his path towards entrepreneurship. He networked with peers to evaluate different business models each day. “It was fun to combine our knowledge of engineering and business theory,” he adds. “It made the journey a lot less stressful.”

But the transition from science to entrepreneurship was hard. “At the start you must learn and do everything yourself. The good thing is you’re exposed to lots of new skills and new people, but you also have to force yourself to do things you’re not usually good at.”

This is a dilemma many entrepreneurs face: whether to learn new skills from scratch, or to find business partners and delegate tasks. But finding trustworthy business partners is not always easy, and making the wrong decision can hinder the start-up’s progress. That’s why planning the company’s vision and mission from the start is so important.

“The solution is actually pretty straightforward,” says Zhang. “You need to spend more time completing the important milestones yourself, to ensure you have a feasible product. Once you make the business plan and vision clear, you get support from everywhere.”

Decentralised community governance

Rather than hackathon participants competing for a week before abandoning their code, Zhang started DoraHacks to give teams from all over the world a chance to turn their ideas into fully developed products. “I want hackathons to be more than a recruitment tool,” he explains. “They should foster open-source development and decentralised community governance. Today, a hacker from Tanzania can collaborate virtually with a team in the US, and teams gain support to develop real products. This helps make tech fields much more diverse and accessible.”

Zhang’s company enables this by reducing logistical costs for organisers and providing funding mechanisms for participants, making hackathons accessible to aspiring researchers beyond academic institutions. As the community expands, new doors open for young scientists at the start of their careers.

“The business model is changing,” says Zhang. Hackathons are becoming fundamental to emerging technologies, particularly in areas like quantum computing, blockchain and AI, which often start out open source. “There will be a major shift in the process of product creation. Instead of building products in isolation, new technologies rely on platforms and infrastructure where hackers can contribute.”

Today, hackathons aren’t just about coding or networking – they’re about pushing the boundaries of what’s possible, creating meaningful solutions and launching new career paths. They act as incubators for ideas with lasting impact. Zhang wants to help these ideas become reality. “The future of innovation is collaborative and open source,” he says. “The old world relies on corporations building moats around closed-source technology, which is inefficient and inaccessible. The new world is centred around open platform technology, where people can build on top of old projects. This collaborative spirit is what makes the hacker movement so important.”

Careers Hackathons can kick-start your career, says hacker and entrepreneur Jiannan Zhang. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_CAR_zhang.jpg
The value of being messy https://cerncourier.com/a/the-value-of-being-messy/ Mon, 27 Jan 2025 07:20:48 +0000 https://cerncourier.com/?p=112190 Claire Malone argues that science communicators should not stray too far into public-relations territory.

The line between science communication and public relations has become increasingly blurred. On one side, scientific press officers highlight institutional success, secure funding and showcase breakthrough discoveries. On the other, science communicators and journalists present scientific findings in a way that educates and entertains readers – acknowledging both the triumphs and the inherent uncertainties of the scientific process.

The core difference between these approaches lies in how they handle the inevitable messiness of science. Science isn’t a smooth, linear path of consistent triumphs; it’s an uncertain, trial-and-error journey. This uncertainty, and our willingness to discuss it openly, is what distinguishes authentic science communication from a polished public relations (PR) pitch. By necessity, PR often strives to present a neat narrative, free of controversy or doubt, but this risks creating a distorted perception of what science actually is.

Finding your voice

Take, for example, the situation in particle physics. Experiments probing the fundamental laws of physics are often critiqued in the press for their hefty price tags – particularly when people are eager to see resources directed towards solving global crises like climate change or preventing future pandemics. When researchers and science communicators are finding their voice, a pressing question is how much messiness to communicate in uncertain times.

After completing my PhD as part of the ATLAS collaboration, I became a science journalist and communicator, connecting audiences across Europe and America with the joy of learning about fundamental physics. After a recent talk at the Royal Institution in London, in which I explained how ATLAS measures fundamental particles, I received an email from a colleague. The only question the talk prompted him to ask was about the safety of colliding protons, aiming to create undiscovered particles. This reaction reflects how scientific misinformation – such as the idea that experiments at CERN could endanger the planet – can be persistent and difficult to eradicate.

In response to such criticisms and concerns, I have argued many times for the value of fundamental physics research, often highlighting the vast number of technological advancements it enables, from touch screens to healthcare advances. However, we must be wary not to only rely on this PR tactic of stressing the tangible benefits of research, as it can sometimes sidestep the uncertainties and iterative nature of scientific investigation, presenting an oversimplified version of scientific progress.

From Democritus to the Standard Model

This PR-driven approach risks undermining public understanding and trust in science in the long run. When science is framed solely as a series of grand successes without any setbacks, people may become confused or disillusioned when they inevitably encounter controversies or failures. Instead, this is where honest science communication shines – admitting that our understanding evolves, that we make mistakes and that uncertainties are an integral part of the process.

Our evolving understanding of particle physics is a perfect illustration of this. From Democritus’ concept of “indivisible atoms” to the development of the Standard Model, every new discovery has refined or even overhauled our previous understanding. This is the essence of science – always refining, never perfect – and it’s exactly what we should be communicating to the public.

Embracing this messiness doesn’t necessarily reduce public trust. When presenting scientific results to the public, it’s important to remember that uncertainty can take many forms, and how we communicate these forms can significantly affect credibility. Technical uncertainty – expressing complexity or incomplete information – often increases audience trust, as it communicates the real intricacies of scientific research. Conversely, consensus uncertainty – spotlighting disagreements or controversies among experts – can have a negative impact on credibility. When it comes to genuine disagreements among scientists, effectively communicating uncertainty to the public requires a thoughtful balance. Transparency is key: acknowledging the existence of different scientific perspectives helps the public understand that science is a dynamic process. Providing context about why disagreements exist, whether due to limited data or competing theoretical frameworks, also helps in making the uncertainty comprehensible.

Embrace errors

In other words, the next time you present your latest results on social media, don’t shy away from including the error bars. And if you must have a public argument with a colleague about what the results mean, context is essential!

Acknowledging the existence of different scientific perspectives helps the public understand that science is a dynamic process

No one knows where the next breakthrough will come from or how it might solve the challenges we face. In an information ecosystem increasingly filled with misinformation, scientists and science communicators must help people understand the iterative, uncertain and evolving nature of science. As science communicators, we should be cautious not to stray too far into PR territory. Authentic communication doesn’t mean glossing over uncertainties but rather embracing them as an essential part of the story. This way, the public can appreciate science not just as a collection of established facts, but as an ongoing, dynamic process – messy, yet ultimately satisfying.

The post The value of being messy appeared first on CERN Courier.

]]>
Opinion Claire Malone argues that science communicators should not stray too far into public-relations territory. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_VIEW_malone.jpg
Cornering compressed SUSY https://cerncourier.com/a/cornering-compressed-susy/ Mon, 27 Jan 2025 07:18:49 +0000 https://cerncourier.com/?p=112235 A new CMS analysis explores an often overlooked, difficult corner of SUSY manifestations: compressed sparticle mass spectra.

The post Cornering compressed SUSY appeared first on CERN Courier.

]]>
CMS figure 1

Since the LHC began operations in 2008, the CMS experiment has been searching for signs of supersymmetry (SUSY) – the only remaining spacetime symmetry not yet observed to have consequences for physics. It has explored higher and higher masses of supersymmetric particles (sparticles) with increasing collision energies and growing datasets. No evidence has been observed so far. A new CMS analysis using data recorded between 2016 and 2018 continues this search in an often overlooked, difficult corner of SUSY manifestations: compressed sparticle mass spectra.

The masses of SUSY sparticles have very important implications for both the physics of our universe and how they could be potentially produced and observed at experiments like CMS. The heavier the sparticle, the rarer its appearance. On the other hand, when heavy sparticles decay, their mass is converted to the masses and momenta of SM particles, like leptons and jets. These particles are detected by CMS, with large masses leaving potentially spectacular (and conspicuous) signatures. Each heavy sparticle is expected to continue to decay to lighter ones, ending with the lightest SUSY particles (LSPs). LSPs, though massive, are stable and do not decay in the detector. Instead, they appear as missing momentum. In cases of compressed sparticle mass spectra, the mass difference between the initially produced sparticles and LSPs is small. This means the low rates of production of massive sparticles are not accompanied by high-momentum decay products in the detector. Most of their mass ends up escaping in the form of invisible particles, significantly complicating observation.

This new CMS result turns this difficulty on its head, using a kinematic observable RISR, which is directly sensitive to the mass of LSPs as opposed to the mass difference between parent sparticles and LSPs. The result is even better discrimination between SUSY and SM backgrounds when sparticle spectra are more compressed.

This approach focuses on events where putative SUSY candidates receive a significant “kick” from initial-state radiation (ISR) – additional jets recoiling opposite the system of sparticles. When the sparticle masses are highly compressed, the invisible, massive LSPs receive most of the ISR momentum-kick, with this fraction telling us about the LSP masses through the RISR observable.
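
A minimal sketch of the idea, assuming the commonly used approximation that in the compressed limit RISR tends to the ratio of the LSP mass to the parent sparticle mass (the mass points below are hypothetical and purely illustrative, not CMS results):

# Illustrative only: R_ISR ~ m_LSP / m_parent in the compressed limit,
# so compressed spectra pile up near 1 while SM backgrounds sit lower.
def r_isr_approx(m_parent, m_lsp):
    return m_lsp / m_parent

for m_parent, m_lsp in [(500.0, 480.0), (500.0, 400.0), (500.0, 200.0)]:  # GeV, hypothetical
    print(f"m_parent = {m_parent:.0f} GeV, m_LSP = {m_lsp:.0f} GeV -> R_ISR ~ {r_isr_approx(m_parent, m_lsp):.2f}")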

Given the generic applicability of the approach, the analysis is able to systematically probe a large class of possible scenarios. This includes events with various numbers of leptons (0, 1, 2 or 3) and jets (including those from heavy-flavour quarks), with a focus on objects with low momentum. These multiplicities, along with RISR and other selected discriminating variables, are used to categorise recorded events and a comprehensive fit is performed to all these regions. Compressed SUSY signals would appear at larger values of RISR, while bins at lower values are used to model and constrain SM backgrounds. With more than 2000 different bins in RISR, over several hundred object-based categories, a significant fraction of the experimental phase space in which compressed SUSY could hide is scrutinised.

In the absence of significant observed deviations in data yields from SM expectations, a large collection of SUSY scenarios can be excluded at high confidence level (CL), including those with the production of stop quarks, EWKinos and sleptons. As can be seen in the results for stop quarks (figure 1), the analysis is able to achieve excellent sensitivity to compressed SUSY. Here, as for many of the SUSY scenarios considered, the analysis provides the world’s most stringent constraints on compressed SUSY, further narrowing the space it could be hiding.

The post Cornering compressed SUSY appeared first on CERN Courier.

]]>
News A new CMS analysis explores an often overlooked, difficult corner of SUSY manifestations: compressed sparticle mass spectra. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_EF_CMS_feature.jpg
Chinese space station gears up for astrophysics https://cerncourier.com/a/chinese-space-station-gears-up-for-astrophysics/ Mon, 27 Jan 2025 07:16:33 +0000 https://cerncourier.com/?p=112214 China’s Tiangong space station represents one of the biggest projects in space exploration in recent decades.

The post Chinese space station gears up for astrophysics appeared first on CERN Courier.

]]>
Completed in 2022, China’s Tiangong space station represents one of the biggest projects in space exploration in recent decades. Like the International Space Station, its ability to provide large amounts of power, support heavy payloads and access powerful communication and computing facilities gives it many advantages over typical satellite platforms. As such, both Chinese and international collaborations have been developing a number of science missions ranging from optical astronomy to the detection of cosmic rays with PeV energies.

For optical astronomy, the space station will be accompanied by the Xuntian telescope, which can be translated to “survey the heavens”. Xuntian is currently planned to be launched in mid-2025 to fly alongside Tiangong, thereby allowing for regular maintenance. Although its spatial resolution will be similar to that of the Hubble Space Telescope, Xuntian’s field of view will be about 300 times larger, allowing the observation of many objects at the same time. In addition to producing impressive images similar to those sent by Hubble, the instrument will be important for cosmological studies where large statistics for astronomical objects are typically required to study their evolution.

Another instrument that will observe large portions of the sky is LyRIC (Lyman UV Radiation from Interstellar medium and Circum-galactic medium). After being placed on the space station in the coming years, LyRIC will probe the poorly studied far-ultraviolet regime that contains emission lines from neutral hydrogen and other elements. While difficult to measure, this allows studies of baryonic matter in the universe, which can be used to answer important questions such as why only about half of the total baryons in the standard “ΛCDM” cosmological model can be accounted for.

At slightly higher energies, the Diffuse X-ray Explorer (DIXE) aims to use a novel type of X-ray detector to reach an energy resolution better than 1% in the 0.1 to 10 keV energy range. It achieves this using cryogenic transition-edge sensors (TESs), which exploit the rapid change in resistance that occurs during a superconducting phase transition. In this regime, the resistivity of the material is highly dependent on its temperature, allowing the detection of minuscule temperature increases resulting from X-rays being absorbed by the material. Positioned to scan the sky above the Tiangong space station, DIXE will be able, among other things, to measure the velocity of material that appears to have been emitted by the Milky Way during an active stage of its central black hole. Its high-energy resolution will allow Doppler shifts of the order of several eV to be measured, requiring the TES detectors to operate at 50 mK. Achieving such temperatures demands a cooling system of 640 W – a power level that is difficult to achieve on a satellite, but relatively easy to acquire on a space station. As such, DIXE will be one of the first detectors using this new technology when it launches in 2025, leading the way for missions such as the European ATHENA mission that plans to use it starting in 2037.
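
As a rough back-of-the-envelope illustration of why eV-level resolution matters (the line energy below is an assumed value within DIXE’s 0.1 to 10 keV band, not taken from the mission specification), the non-relativistic Doppler relation v ≈ c·ΔE/E gives:

C_KM_S = 299_792.458  # speed of light in km/s

def doppler_velocity(line_energy_ev, shift_ev):
    # v = c * dE / E for a small shift dE on a line of energy E
    return C_KM_S * shift_ev / line_energy_ev

print(doppler_velocity(1000.0, 3.0))  # a 3 eV shift on a 1 keV line -> ~900 km/s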

Although not as large or mature as the International Space Station, Tiangong’s capacity to host cutting-edge astrophysics missions is catching up

POLAR-2 was accepted as an international payload on the China space station through the United Nations Office for Outer Space Affairs and has since become a CERN-recognised experiment. The mission started as a Swiss, German, Polish and Chinese collaboration building on the success of POLAR, which flew on the space station’s predecessor Tiangong-2. Like its earlier incarnation, POLAR-2 measures the polarisation of high-energy X-rays or gamma rays to provide insights into, for example, the magnetic fields that produced the emission. As one of the most sensitive gamma-ray detectors in the sky, POLAR-2 can also play an important role in alerting other instruments when a bright gamma-ray transient, such as a gamma-ray burst, appears. The importance of such alerts has resulted in the expansion of POLAR-2 to include an accompanying imaging spectrometer, which will provide detailed spectral and location information on any gamma-ray transient. Also now foreseen for this second payload is an additional wide-field-of-view X-ray polarimeter. The international team developing the three instruments, which are scheduled to be launched in 2027, is led by the Institute of High Energy Physics in Beijing.

For studying the universe using even higher energy emissions, the space station will host the High Energy cosmic-Radiation Detection Facility (HERD). HERD is designed to study both cosmic rays and gamma rays at energies beyond those accessible to instruments like AMS-02, CALET (CERN Courier July/August 2024 p24) and DAMPE. It aims to achieve this, in part, by simply being larger, resulting in a mass that is currently only possible to support on a space station. The HERD calorimeter will be 55 radiation lengths long and consist of several tonnes of scintillating cubic LYSO crystals. The instrument will also use high-precision silicon trackers, which, in combination with the deep calorimeter, will provide a better angular resolution and a geometrical acceptance 30 times larger than the present AMS-02 (which is due to be upgraded next year). This will allow HERD to probe the cosmic-ray spectrum up to PeV energies, filling in the energy gap between current space missions and ground-based detectors. HERD started out as an international mission with a large European contribution; however, delays on the European side regarding participation, in combination with a launch requirement of 2027, mean that it is currently foreseen to be a fully Chinese mission.

Although not as large or mature as the International Space Station, Tiangong’s capacity to host cutting-edge astrophysics missions is catching up. As well as providing researchers with a pristine view of the electromagnetic universe, instruments such as HERD will enable vital cross-checks of data from AMS-02 and other unique experiments in space.

The post Chinese space station gears up for astrophysics appeared first on CERN Courier.

]]>
News China’s Tiangong space station represents one of the biggest projects in space exploration in recent decades. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_NA_astro.jpg
Taking the lead in the monopole hunt https://cerncourier.com/a/taking-the-lead-in-the-monopole-hunt/ Mon, 27 Jan 2025 07:15:21 +0000 https://cerncourier.com/?p=112230 Magnetic monopoles are hypothetical particles that would carry magnetic charge, a concept first proposed by Paul Dirac in 1931.

The post Taking the lead in the monopole hunt appeared first on CERN Courier.

]]>
ATLAS figure 1

Magnetic monopoles are hypothetical particles that would carry magnetic charge, a concept first proposed by Paul Dirac in 1931. He pointed out that if monopoles exist, electric charge must be quantised, meaning that particle charges must be integer multiples of a fundamental charge. Electric charge quantisation is indeed observed in nature, with no other known explanation for this striking phenomenon. The ATLAS collaboration performed a search for these elusive particles using lead–lead (PbPb) collisions at 5.36 TeV from Run 3 of the Large Hadron Collider.
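
Dirac’s argument, not spelled out above, can be summarised by the quantisation condition relating electric charge e and magnetic charge g, written here (in Gaussian units) as a standard textbook sketch:

e\,g = \tfrac{1}{2}\,n\,\hbar c, \quad n \in \mathbb{Z}
\qquad\Rightarrow\qquad
g_{\mathrm{D}} = \frac{\hbar c}{2e} = \frac{e}{2\alpha} \approx 68.5\,e ,

so the existence of even a single monopole of charge g forces electric charges to come in integer multiples of ħc/2g.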

The search targeted the production of monopole–antimonopole pairs via photon–photon interactions, a process enhanced in heavy-ion collisions due to the strong electromagnetic fields (Z²) generated by the Z = 82 lead nuclei. Ultraperipheral collisions are ideal for this search, as they feature electromagnetic interactions without direct nuclear contact, allowing rare processes like monopole production to dominate in visible signatures. The ATLAS study employed a novel detection technique exploiting the expected highly ionising nature of these particles, leaving a characteristic signal in the innermost silicon detectors of the ATLAS experiment (figure 1).

The analysis employed a non-perturbative semiclassical model to estimate monopole production. Traditional perturbative models, which rely on Feynman diagrams, are inadequate due to the large coupling constant of magnetic monopoles. Instead, the study used a model based on the Schwinger mechanism, adapted for magnetic fields, to predict monopole production in the ultraperipheral collisions’ strong magnetic fields. This approach offers a more robust theoretical framework for the search.

ATLAS figure 2

The experiment’s trigger system was critical to the search. Given the high ionisation signature of monopoles, traditional calorimeter-based triggers were unsuitable, as even high-momentum monopoles lose energy rapidly through ionisation and do not reach the calorimeter. Instead, the trigger, newly introduced for the 2023 PbPb data-taking campaign, focused on detecting the forward neutrons emitted during electromagnetic interactions. The level-1 trigger system identified neutrons using the Zero-Degree Calorimeter, while the high-level trigger required more than 100 clusters of pixel-detector hits in the inner detector – an approach sensitive to monopoles due to their high ionisation signatures.

Additionally, the analysis examined the topology of pixel clusters to further refine the search, as a more aligned azimuthal distribution in the data would indicate a signature consistent with monopoles (figure 1), while the uniform distribution typically associated with beam-induced backgrounds could be identified and suppressed.

No significant monopole signal is observed beyond the expected background, with the latter being estimated using a data-driven technique. Consequently, the analysis set new upper limits on the cross-section for magnetic monopole production (figure 2), significantly improving existing limits for low-mass monopoles in the 20–150 GeV range. Assuming a non-perturbative semiclassical model, the search excludes monopoles with a single Dirac magnetic charge and masses below 120 GeV. The techniques developed in this search will open new possibilities to study other highly ionising particles that may emerge from beyond-Standard Model physics.

The post Taking the lead in the monopole hunt appeared first on CERN Courier.

]]>
News Magnetic monopoles are hypothetical particles that would carry magnetic charge, a concept first proposed by Paul Dirac in 1931. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_EF_ATLAS_feature.jpg
Unprecedented progress in energy-efficient RF https://cerncourier.com/a/unprecedented-progress-in-energy-efficient-rf/ Mon, 27 Jan 2025 07:14:38 +0000 https://cerncourier.com/?p=112349 Forty-five experts from industry and academia met in the magnificent city of Toledo for the second workshop on efficient RF sources.

The post Unprecedented progress in energy-efficient RF appeared first on CERN Courier.

]]>
Forty-five experts from industry and academia met in the magnificent city of Toledo, Spain from 23 to 25 September 2024 for the second workshop on efficient RF sources. Part of the I.FAST initiative on sustainable concepts and technologies (CERN Courier July/August 2024 p20), the event focused on recent advances in energy-efficient technology for RF sources essential to accelerators. Progress in the last two years has been unprecedented, with new initiatives and accomplishments around the world fuelled by the ambitious goals of new, high-energy particle-physics projects.

Out of more than 30 presentations, a significant number featured pulsed, high-peak-power RF sources working at frequencies above 3 GHz in the S, C and X bands. These involve high-efficiency klystrons that are being designed, built and tested for the KEK e–/e+ Injector, the new EuPRAXIA@SPARC_LAB linac, the CLIC testing facilities, muon collider R&D, the CEPC injector linac and the C3 project. Reported increases in beam-to-RF power efficiency range from 15 percentage points for the retrofit prototype for CLIC to more than 25 points (expected) for a new greenfield klystron design that can be used across most new projects.

A very dynamic area for R&D is the search for efficient sources for the continuous wave (CW) and long-pulse RF needed for circular accelerators. Typically working in the L-band, existing devices deliver less than 3 MW in peak power. Solid-state amplifiers, inductive output tubes, klystrons, magnetrons, triodes and exotic newly rediscovered vacuum tubes called “tristrons” compete in this arena. Successful prototypes have been built for the High-Luminosity LHC and CEPC with power efficiency gains of 10 to 20 points. In the case of the LHC, this will allow 15% more power without an impact on the electricity bill; in the case of a circular Higgs factory, this will allow a 30% reduction. CERN and SLAC are also investigating very-high-efficiency vacuum tubes for the Future Circular Collider with a potential reduction of close to 50% on the final electricity bill. A collaboration between academia and industry would certainly be required to bring this exciting new technology to light.
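
As a sketch of how percentage-point efficiency gains translate into grid power (the baseline efficiencies below are hypothetical; the article does not quote them), the wall-plug power for a fixed RF output scales as P_grid = P_RF/η:

def grid_power(p_rf, efficiency):
    # wall-plug power needed to deliver p_rf of RF at a given RF-source efficiency
    return p_rf / efficiency

p_rf = 1.0  # arbitrary units of RF power
for eta_old, eta_new in [(0.60, 0.70), (0.55, 0.75)]:  # hypothetical efficiencies
    saving = 1.0 - grid_power(p_rf, eta_new) / grid_power(p_rf, eta_old)
    extra_rf = eta_new / eta_old - 1.0
    print(f"{eta_old:.0%} -> {eta_new:.0%}: grid power down {saving:.0%}, or {extra_rf:.0%} more RF at fixed grid power")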

Besides the astounding advances in vacuum-tube technology, solid-state amplifiers based on cheap transistors are undergoing a major transformation thanks to the adoption of gallium-nitride technology. Commercial amplifiers are now capable of delivering kilowatts of power at low duty cycles with a power efficiency of 80%, while Uppsala University and the European Spallation Source have demonstrated the same efficiency for combined systems working in CW.

The search for energy efficiency does not stop at designing and building more efficient RF sources. All aspects of operation, from power combination to the use of permanent magnets and efficient modulators, need to be folded in, as described by many concrete examples during the workshop. The field is thriving.

The post Unprecedented progress in energy-efficient RF appeared first on CERN Courier.

]]>
Meeting report Forty-five experts from industry and academia met in the magnificent city of Toledo for the second workshop on efficient RF sources. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_FN_WERFSII.jpg
ICFA talks strategy and sustainability in Prague https://cerncourier.com/a/icfa-talks-strategy-and-sustainability-in-prague-2/ Mon, 27 Jan 2025 07:13:18 +0000 https://preview-courier.web.cern.ch/?p=111309 The 96th ICFA meeting heard extensive reports from the leading HEP laboratories and various world regions on their recent activities and plans.

The post ICFA talks strategy and sustainability in Prague appeared first on CERN Courier.

]]>
ICFA, the International Committee for Future Accelerators, was formed in 1976 to promote international collaboration in all phases of the construction and exploitation of very-high-energy accelerators. Its 96th meeting took place on 20 and 21 July during the recent ICHEP conference in Prague. Almost all of the 16 members from across the world attended in person, making the assembly lively and constructive.

The committee heard extensive reports from the leading HEP laboratories and various world regions on their recent activities and plans, including a presentation by Paris Sphicas, the chair of the European Committee for Future Accelerators (ECFA), on the process for the update of the European strategy for particle physics (ESPP). Launched by CERN Council in March 2024, the ESPP update is charged with recommending the next collider project at CERN after HL-LHC operation.

A global task

The ESPP update is also of high interest to non-European institutions and projects. Consequently, in addition to the expected inputs to the strategy from European HEP communities, those from non-European HEP communities are also welcome. Moreover, the recent US P5 report and the Chinese plans for CEPC, with a potential positive decision in 2025/2026, and discussions about the ILC project in Japan, will be important elements of the work to be carried out in the context of the ESPP update. They also emphasise the global nature of high-energy physics.

An integral part of the work of ICFA is carried out within its panels, which have been very active. Presentations were given from the new panel on the Data Lifecycle (chair Kati Lassila-Perini, Helsinki), the Beam Dynamics panel (new chair Yuan He, IMPCAS) and the Advanced and Novel Accelerators panel (new chair Patric Muggli, Max Planck Munich, proxied at the meeting by Brigitte Cros, Paris-Saclay). The Instrumentation and Innovation Development panel (chair Ian Shipsey, Oxford) is setting an example with its numerous schools, the ICFA instrumentation awards and centrally sponsored instrumentation studentships for early-career researchers from underserved world regions. Finally, the chair of the ILC International Development Team panel (Tatsuya Nakada, EPFL) summarised the latest status of the ILC Technological Network, and the proposed ILC collider project in Japan.

ICFA noted interesting structural developments in the global organisation of HEP

A special session was devoted to the sustainability of HEP accelerator infrastructures, considering the need to invest efforts into guidelines that enable better comparison of the environmental reports of labs and infrastructures, in particular for future facilities. It was therefore natural for ICFA to also hear reports not only from the panel on Sustainable Accelerators and Colliders led by Thomas Roser (BNL), but also from the European Lab Directors Working Group on Sustainability. This group, chaired by Caterina Bloise (INFN) and Maxim Titov (CEA), is mandated to develop a set of key indicators and a methodology for the reporting on future HEP projects, to be delivered in time for the ESPP update.

Finally, ICFA noted some very interesting structural developments in the global organisation of HEP. In the Asia-Oceania region, ACFA-HEP was recently formed as a sub-panel under the Asian Committee for Future Accelerators (ACFA), aiming for a better coordination of HEP activities in this particular region of the world. Hopefully, this will encourage other world regions to organise themselves in a similar way in order to strengthen their voice in the global HEP community – for example in Latin America. Here, a meeting was organised in August by the Latin American Association for High Energy, Cosmology and Astroparticle Physics (LAA-HECAP) to bring together scientists, institutions and funding agencies from across Latin America to coordinate actions for jointly funding research projects across the continent.

The next in-person ICFA meeting will be held during the Lepton–Photon conference in Madison, Wisconsin (USA), in August 2025.

The post ICFA talks strategy and sustainability in Prague appeared first on CERN Courier.

]]>
Meeting report The 96th ICFA meeting heard extensive reports from the leading HEP laboratories and various world regions on their recent activities and plans. https://cerncourier.com/wp-content/uploads/2024/09/CCNovDec24_FN_ICFA.jpg
Isolating photons at low Bjorken x https://cerncourier.com/a/isolating-photons-at-low-bjorken-x/ Mon, 27 Jan 2025 07:11:37 +0000 https://cerncourier.com/?p=112249 A new measurement by ALICE will help to constrain the gluon PDF.

The post Isolating photons at low Bjorken x appeared first on CERN Courier.

]]>
ALICE figure 1

In high-energy collisions at the LHC, prompt photons are those that do not originate from particle decays and are instead directly produced by the hard scattering of quarks and gluons (partons). Due to their early production, they provide a clean method to probe the partons inside the colliding nucleons, and in particular the fraction of the momentum of the nucleon carried by each parton (Bjorken x). The distribution of each parton in Bjorken x is known as its parton distribution function (PDF).

Theoretical models of particle production rely on the precise knowledge of PDFs, which are derived from vast amounts of experimental data. The high centre-of-mass energies (√s) at the LHC probe very small values of the momentum fraction, Bjorken x. At “midrapidity”, when a parton scatters with a large angle with respect to the beam axis, and a prompt photon is produced in the final state, a useful approximation to Bjorken x is provided by the dimensionless variable xT = 2pT/√s, where pT is the transverse momentum of the prompt photon.

Prompt photons can also be produced by next-to-leading order processes such as parton fragmentation or bremsstrahlung. A clean separation of the different prompt photon sources is difficult experimentally, but fragmentation can be suppressed by selecting “isolated photons”. For a photon to be considered isolated, the sum of the transverse energies or transverse momenta of the particles produced in a cone around the photon must be smaller than some threshold – a selection that can be done both in the experimental measurement and theoretical calculations. An isolation requirement also helps to reduce the background of decay photons, since hadrons that can decay to photons are often produced in jet fragmentation.
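
A minimal sketch of such an isolation selection (an illustrative pseudo-analysis, not the ALICE code; the cone radius and threshold default to the values quoted in the next paragraph, and the event content is hypothetical):

import math

def is_isolated(photon, charged_particles, cone_radius=0.4, pt_threshold=1.5):
    # sum the charged-particle pT (GeV/c) inside a cone around the photon candidate
    pt_sum = 0.0
    for p in charged_particles:
        deta = p["eta"] - photon["eta"]
        dphi = abs(p["phi"] - photon["phi"])
        if dphi > math.pi:
            dphi = 2.0 * math.pi - dphi
        if math.hypot(deta, dphi) < cone_radius:
            pt_sum += p["pt"]
    return pt_sum < pt_threshold

photon = {"eta": 0.1, "phi": 0.5}
tracks = [{"eta": 0.2, "phi": 0.6, "pt": 0.8}, {"eta": 1.5, "phi": -2.0, "pt": 5.0}]
print(is_isolated(photon, tracks))  # True: only 0.8 GeV/c falls inside the cone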

The ALICE collaboration now reports the measurement of the differential cross-section for isolated photons in proton–proton collisions at √s = 13 TeV at midrapidity. The photon measurement is performed by the electromagnetic calorimeter, and the isolated photons are selected by combining with the data from the central inner tracking system and time-projection chamber, requiring that the summed pT of the charged particles in a cone of angular radius 0.4 radians centred on the photon candidate be smaller than 1.5 GeV/c. The isolated photon cross-sections are obtained within the transverse momentum range from 7 to 200 GeV/c, corresponding to 1.1 × 10⁻³ < xT < 30.8 × 10⁻³.
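
The quoted xT range follows directly from xT = 2pT/√s; a quick numerical check (reproducing the numbers above):

SQRT_S = 13000.0  # GeV
for pt in (7.0, 200.0):  # GeV/c, the measured pT range
    print(f"pT = {pt:6.1f} GeV/c -> xT = {2.0 * pt / SQRT_S:.2e}")
# prints ~1.08e-03 and ~3.08e-02, i.e. the quoted 1.1e-3 to 30.8e-3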

Figure 1 shows the new ALICE results alongside those from ATLAS, CMS and prior measurements in proton–proton and proton–antiproton collisions at lower values of √s. The figure spans more than 15 orders of magnitude on the y-axis, representing the cross-section, over a wide range of xT. The present measurement probes the smallest Bjorken x with isolated photons at midrapidity to date. The experimental data points show an agreement between all the measurements when scaled with the collision energy to the power n = 4.5. Such a scaling is designed to cancel the predicted 1/(pT)ⁿ dependence of partonic 2 → 2 scattering cross-sections in perturbative QCD and reveal insights into the gluon PDF (see “The other 99%“).

This measurement will help to constrain the gluon PDF and will play a crucial role in exploring medium-induced modifications of hard probes in nucleus–nucleus collisions.

The post Isolating photons at low Bjorken x appeared first on CERN Courier.

]]>
News A new measurement by ALICE will help to constrain the gluon PDF. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_EF_ALICE_feature.jpg
R(D) ratios in line at LHCb https://cerncourier.com/a/rd-ratios-in-line-at-lhcb/ Fri, 24 Jan 2025 16:00:50 +0000 https://cerncourier.com/?p=112240 The accidental symmetries observed between the three generations of leptons are poorly understood, with no compelling theoretical motivation.

The post R(D) ratios in line at LHCb appeared first on CERN Courier.

]]>
LHCb figure 1

The accidental symmetries observed between the three generations of leptons are poorly understood, with no compelling theoretical motivation in the framework of the Standard Model (SM). The b → cτντ transition has the potential to reveal new particles or forces that interact primarily with third-generation particles, which are subject to the less stringent experimental constraints at present. As a tree-level SM process mediated by W-boson exchange, its amplitude is large, resulting in large branching fractions and significant data samples to analyse.

The observable under scrutiny is the ratio of decay rates between the signal mode involving τ and ντ leptons from the third generation of fermions and the normalisation mode containing μ and νμ leptons from the second generation. Within the SM, this lepton flavour universality (LFU) ratio deviates from unity only due to the different mass of the charged leptons – but new contributions could change the value of the ratios. A longstanding tension exists between the SM prediction and the experimental measurements, requiring further input to clarify the source of the discrepancy.

The LHCb collaboration analysed four decay modes: B0 → D(*)–ℓ+νℓ, with ℓ representing τ or μ. Each is selected using the same visible final state of one muon and light hadrons from the decay of the charm meson. In the normalisation mode, the muon originates directly from the B-hadron decay, while in the signal mode, it arises from the decay of the τ lepton. The four contributions are analysed simultaneously, yielding two LFU ratios between taus and muons – one using the ground state of the D+ meson and one the excited state D*+.
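
Written out explicitly (a standard definition, consistent with the description above), the two LFU ratios are

R\!\left(D^{(*)+}\right) \;=\; \frac{\mathcal{B}\!\left(B^0 \to D^{(*)-}\,\tau^+\nu_\tau\right)}{\mathcal{B}\!\left(B^0 \to D^{(*)-}\,\mu^+\nu_\mu\right)},

deviating from unity in the SM only through the different charged-lepton masses, as noted above.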

The control of the background contributions is particularly complicated in this analysis as the final state is not fully reconstructible, limiting the resolution on some of the discriminating variables. Instead, a three-dimensional template fit separates the signal and the normalisation from the background versus: the momentum transferred to the lepton pair (q2); the energy of the muon in the rest frame of the B meson (Eμ*); and the invariant mass missing from the visible system. Each contribution is modelled using a template histogram derived either from simulation or from selected control samples in data.
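
As a much-simplified illustration of a binned template fit of this kind (one observable and toy shapes only, whereas the real analysis fits three observables across many categories; all numbers below are invented):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# toy normalised templates standing in for signal, normalisation and background
bins, lo, hi = 20, 0.0, 12.0
shapes = [rng.normal(7.0, 1.5, 10_000),      # "signal"-like
          rng.normal(4.0, 1.5, 10_000),      # "normalisation"-like
          rng.exponential(4.0, 10_000)]      # "background"-like
templates = np.array([np.histogram(s, bins=bins, range=(lo, hi))[0] for s in shapes], dtype=float)
templates /= templates.sum(axis=1, keepdims=True)

true_yields = np.array([300.0, 2000.0, 800.0])
data = rng.poisson(true_yields @ templates)   # pseudo-data

def nll(yields):
    mu = np.clip(yields @ templates, 1e-9, None)  # expected counts per bin
    return float(np.sum(mu - data * np.log(mu)))  # Poisson NLL up to a constant

fit = minimize(nll, x0=[100.0, 1000.0, 1000.0], bounds=[(0.0, None)] * 3, method="L-BFGS-B")
print("fitted yields:", fit.x.round(1))  # roughly recovers the true yields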

This constitutes the world’s second most precise measurement of R(D)

To prevent the simulated data sample size from becoming a limiting factor in the precision of the measurement, a fast tracker-only simulation technique was exploited for the first time in LHCb. Another novel aspect of this work is the use of the HAMMER software tool during the minimisation procedure of the likelihood fit, which enables a fast, but exact, variation of a template as a function of the decay-model parameters. This variation is important to allow the form factors of both the signal and normalisation channels to vary as the constraints derived from the predictions that use precise lattice calculations can have larger uncertainties than those obtained from the fit.

The fit projection over one of the discriminating variables is shown in figure 1, illustrating the complexity of the analysed data sample but nonetheless showcasing LHCb’s ability to distinguish the signal modes (red and orange) from the normalisation modes (two shades of blue) and background contributions.

The measured LFU ratios are in good agreement with the current world average and the predictions of the SM: R(D+) = 0.249 ± 0.043 (stat.) ± 0.047 (syst.) and R(D*+) = 0.402 ± 0.081 (stat.) ± 0.085 (syst.). Under isospin symmetry assumptions, this constitutes the world’s second most precise measurement of R(D), following a 2019 measurement by the Belle collaboration. This analysis complements other ongoing efforts at LHCb and other experiments to test LFU across different decay channels. The precision of the measurements reported here is primarily limited by the size of the signal and control samples, so more precise measurements are expected with future LHCb datasets.

The post R(D) ratios in line at LHCb appeared first on CERN Courier.

]]>
News The accidental symmetries observed between the three generations of leptons are poorly understood, with no compelling theoretical motivation. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_EF_LHCb_feature.jpg
Rapid developments in precision predictions https://cerncourier.com/a/rapid-developments-in-precision-predictions/ Fri, 24 Jan 2025 15:57:39 +0000 https://cerncourier.com/?p=112358 Achieving a theoretical uncertainty of only a few per cent in the measurement of physical observables is a vastly challenging task in the complex environment of hadronic collisions.

The post Rapid developments in precision predictions appeared first on CERN Courier.

]]>
High Precision for Hard Processes in Turin

Achieving a theoretical uncertainty of only a few per cent in the measurement of physical observables is a vastly challenging task in the complex environment of hadronic collisions. To keep pace with experimental observations at the LHC and elsewhere, precision computing has had to develop rapidly in recent years – efforts that have been monitored and driven by the biennial High Precision for Hard Processes (HP2) conference for almost two decades now. The latest edition attracted 120 participants to the University of Torino from 10 to 13 September 2024.

All speakers addressed the same basic question: how can we achieve the most precise theoretical description for a wide variety of scattering processes at colliders?

The recipe for precise prediction involves many ingredients, so the talks in Torino probed several research directions. Advanced methods for the calculation of scattering amplitudes were discussed, among others, by Stephen Jones (IPPP Durham). These methods can be applied to detailed high-order phenomenological calculations for QCD, electroweak processes and BSM physics, as illustrated by Ramona Groeber (Padua) and Eleni Vryonidou (Manchester). Progress in parton showers – a crucial tool to bridge amplitude calculations and experimental results – was presented by Silvia Ferrario Ravasio (CERN). Dedicated methods to deal with the delicate issue of infrared divergences in high-order cross-section calculations were reviewed by Chiara Signorile-Signorile (Max Planck Institute, Munich).

The Torino conference was dedicated to the memory of Stefano Catani, a towering figure in the field of high-energy physics, who suddenly passed away at the beginning of this year. Starting from the early 1980s, and for the whole of his career, Catani made groundbreaking contributions in every facet of HP2. He was an inspiration to a whole generation of physicists working in high-energy phenomenology. We remember him as a generous and kind person, and a scientist of great rigour and vision. He will be sorely missed.

The post Rapid developments in precision predictions appeared first on CERN Courier.

]]>
Meeting report Achieving a theoretical uncertainty of only a few per cent in the measurement of physical observables is a vastly challenging task in the complex environment of hadronic collisions. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_FN_HP_feature.jpg
AI treatments for stroke survivors https://cerncourier.com/a/ai-treatments-for-stroke-survivors/ Fri, 24 Jan 2025 15:52:08 +0000 https://cerncourier.com/?p=112345 Data on strokes is plentiful but fragmented, making it difficult to exploit in data-driven treatment strategies.

The post AI treatments for stroke survivors appeared first on CERN Courier.

]]>
Data on strokes is plentiful but fragmented, making it difficult to exploit in data-driven treatment strategies. The toolbox of the high-energy physicist is well adapted to the task. To amplify CERN’s societal contributions through technological innovation, the Unleashing a Comprehensive, Holistic and Patient-Centric Stroke Management for a Better, Rapid, Advanced and Personalised Stroke Diagnosis, Treatment and Outcome Prediction (UMBRELLA) project – co-led by Vall d’Hebron Research Institute and Siemens Healthineers – was officially launched on 1 October 2024. The kickoff meeting in Barcelona, Spain, convened more than 20 partners, including Philips, AstraZeneca, KU Leuven and EATRIS. Backed by nearly €27 million from the EU’s Innovative Health Initiative and industry collaborators, the project aims to transform stroke care across Europe.

The meeting highlighted the urgent need to address stroke as a pressing health challenge in Europe. Each year, more than one million acute stroke cases occur in Europe, with nearly 10 million survivors facing long-term consequences. In 2017, the economic burden of stroke treatments was estimated to be €60 billion – a figure that continues to grow. UMBRELLA’s partners outlined their collective ambition to translate a vast and fragmented stroke data set into actionable care innovations through standardisation and integration.

UMBRELLA will utilise advanced digital technologies to develop AI-powered predictive models for stroke management. By standardising real-world stroke data and leveraging tools like imaging technologies, wearable devices and virtual rehabilitation platforms, UMBRELLA aims to refine every stage of care – from diagnosis to recovery. Based on post-stroke data, AI-driven insights will empower clinicians to uncover root causes of strokes, improve treatment precision and predict patient outcomes, reshaping how stroke care is delivered.

Central to this effort is the integration of CERN’s federated-learning platform, CAFEIN – a decentralised approach to training machine-learning algorithms without exchanging data. Initiated thanks to seed funding from CERN’s knowledge-transfer budget for the benefit of medical applications, CAFEIN now promises to enhance diagnosis, treatment and prevention strategies for stroke victims, ultimately saving countless lives. A main topic of the kickoff meeting was the development of the “U-platform” – a federated data ecosystem co-designed by Siemens Healthineers and CERN. Based on CAFEIN, the infrastructure will enable the secure and privacy-preserving training of advanced AI algorithms for personalised stroke diagnostics, risk prediction and treatment decisions without sharing sensitive patient data between institutions. Building on CERN’s expertise, including its success in federated AI modelling for brain pathologies under the EU TRUSTroke project, the CAFEIN team is poised to handle the increasing complexity and scale of data sets required by UMBRELLA.
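
CAFEIN’s internals are not spelled out here; the following is a generic sketch of the federated-averaging idea it builds on, in which each site trains on its own data and only model parameters are exchanged and averaged (all data below are synthetic):

import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0, 0.5])

# toy local datasets at three institutions, never pooled centrally
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    sites.append((X, y))

w_global = np.zeros(3)
for _ in range(20):                         # federated rounds
    local_ws = []
    for X, y in sites:
        w = w_global.copy()
        for _ in range(10):                 # local gradient steps on a linear model
            grad = 2.0 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_ws.append(w)
    w_global = np.mean(local_ws, axis=0)    # only parameters leave each site

print("federated estimate:", w_global.round(2), "target:", true_w)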

Beyond technological advancements, the UMBRELLA consortium discussed a plan to establish standardised protocols for acute stroke management, with an emphasis on integrating these protocols into European healthcare guidelines. By improving data collection and facilitating outcome predictions, these standards will particularly benefit patients in remote and underserved regions. The project also aims to advance research into the causes of strokes, a quarter of which remain undetermined – a statistic UMBRELLA seeks to change.

This ambitious initiative not only showcases CERN’s role in pioneering federated-learning technologies but also underscores the broader societal benefits brought by basic science. By pushing technologies beyond the state-of-the-art, CERN and other particle-physics laboratories have fuelled innovations that have an impact on our everyday lives. As UMBRELLA begins its journey, its success holds the potential to redefine stroke care, delivering life-saving advancements to millions and paving the way for a healthier, more equitable future.

The post AI treatments for stroke survivors appeared first on CERN Courier.

]]>
Meeting report Data on strokes is plentiful but fragmented, making it difficult to exploit in data-driven treatment strategies. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_FN_UMBRELLA.jpg
Dark matter: evidence, theory and constraints https://cerncourier.com/a/dark-matter-evidence-theory-and-constraints/ Fri, 24 Jan 2025 15:49:47 +0000 https://cerncourier.com/?p=112278 Dark Matter: Evidence, Theory and Constraints will be useful to those who wish to broaden or extend their research interests, for instance to a different dark-matter candidate.

The post Dark matter: evidence, theory and constraints appeared first on CERN Courier.

]]>
Dark Matter: Evidence, Theory and Constraints

Cold non-baryonic dark matter appears to make up 85% of the matter and 25% of the energy in our universe. However, we don’t yet know what it is. As the opening lines of many research proposals state, “The nature of dark matter is one of the major open questions in physics.”

The evidence for dark matter comes from astronomical and cosmological observations. Theoretical particle physics provides us with various well motivated candidates, such as weakly interacting massive particles (WIMPs), axions and primordial black holes. Each has different experimental and observational signatures and a wide range of searches are taking place. Dark-matter research spans a very broad range of topics and methods. This makes it a challenging research field to enter and master. Dark Matter: Evidence, Theory and Constraints by David Marsh, David Ellis and Viraf Mehta, the latest addition to the Princeton Series in Astrophysics, clearly presents the relevant essentials of all of these areas.

The book starts with a brief history of dark matter and some warm-up calculations involving units. Part one outlines the evidence for dark matter, on scales ranging from individual galaxies to the entire universe. It compactly summarises the essential background material, including cosmological perturbation theory.

Part two focuses on theories of dark matter. After an overview of the Standard Model of particle physics, it covers three candidates with very different motivations, properties and phenomenology: WIMPs, axions and primordial black holes. Part three then covers both direct and indirect searches for these candidates. I particularly like the schematic illustrations of experiments; they should be helpful for theorists who want to (and should!) understand the essentials of experimental searches.

The main content finishes with a brief overview of other dark-matter candidates. Some of these arguably merit more extensive coverage, in particular sterile neutrinos. The book ends with extensive recommendations for further reading, including textbooks, review papers and key research papers.

Dark-matter research spans a broad range of topics and methods, making it a challenging field to master

The one thing I would argue with is the claim in the introduction that dark matter has already been discovered. I agree with the authors that the evidence for dark matter is strong and currently cannot all be explained by modified gravity theories. However, given that all of the evidence for dark matter comes from its gravitational effects, I’m open to the possibility that our understanding of gravity is incorrect or incomplete. The authors are also more positive than I am about the prospects for dark-matter detection in the near future, claiming that we will soon know which dark-matter candidates exist “in the real pantheon of nature”. Optimism is a good thing, but this is a promise that dark-matter researchers (myself included…) have now been making for several decades.

The conversational writing style is engaging and easy to read. The annotation of equations with explanatory text is novel and helpful, and the inclusion of numerous diagrams – simple and illustrative where possible and complex when called for – aids understanding. The attention to detail is impressive. I reviewed a draft copy for the publishers, and all of my comments and suggestions have been addressed in detail.

This book will be extremely useful to newcomers to the field, and I recommend it strongly to PhD students and undergraduate research students. It is particularly well suited as a companion to a lecture course, with numerous quizzes, problems and online materials, including numerical calculations and plots using Jupyter notebooks. It will also be useful to those who wish to broaden or extend their research interests, for instance to a different dark-matter candidate.

The post Dark matter: evidence, theory and constraints appeared first on CERN Courier.

]]>
Review Dark Matter: Evidence, Theory and Constraints will be useful to those who wish to broaden or extend their research interests, for instance to a different dark-matter candidate. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_REV-dark_feature.jpg
The B’s Ke+e–s https://cerncourier.com/a/the-bs-kee-s/ Fri, 24 Jan 2025 15:45:52 +0000 https://cerncourier.com/?p=112331 The Implications of LHCb measurements and future prospects workshop drew together more than 200 theorists and experimentalists from across the world.

The post The B’s Ke<sup>+</sup>e<sup>–</sup>s appeared first on CERN Courier.

]]>
The Implications of LHCb measurements and future prospects workshop drew together more than 200 theorists and experimentalists from across the world to CERN from 23 to 25 October 2024. Patrick Koppenburg (Nikhef) began the meeting by looking back 10 years, when three and four sigma anomalies abounded: the inclusive/exclusive puzzles; the illuminatingly named P5′ observable; and the lepton-universality ratios for rare B decays. While LHCb measurements have mostly eliminated the anomalies seen in the lepton-universality ratios, many of the other anomalies persist – most notably, the corresponding branching fractions for rare B-meson decays still appear to be suppressed significantly below Standard Model (SM) theory predictions. Sara Celani (Heidelberg) reinforced this picture with new results for Bs → φμ+μ– and Bs → φe+e–, showing the continued importance of new-physics searches in these modes.

Changing flavour

The discussion on rare B decays continued in the session on flavour-changing neutral currents. With new lattice-QCD results pinning down short-distance local hadronic contributions, the discussion focused on understanding the long-distance contributions arising from hadronic resonances and charm rescattering. Arianna Tinari (Zurich) and Martin Hoferichter (Bern) judged the latter not to be dramatic in magnitude. Lakshan Madhan (Cambridge) presented a new amplitude analysis in which the long and short-distance contributions are separated via the kinematic dependence of the decay amplitudes. New theoretical analyses of the nonlocal form factors for B → K(*)μ+μ– and B → K(*)e+e– were representative of the workshop as a whole: truly the bee’s knees.

Another challenge to accurate theory predictions for rare decays, the widths of vector final states, snuck its way into the flavour-changing charged-currents session, where Luka Leskovec (Ljubljana) presented a comprehensive overview of lattice methods for decays to resonances. Leskovec’s optimistic outlook for semileptonic decays with two mesons in the final state stood in contrast to prospects for applying lattice methods to D–D̄ mixing: such studies are currently limited to the SU(3)-flavour symmetric point of equal light-quark masses, explained Felix Erben (CERN), though he offered a glimmer of hope in the form of spectral reconstruction methods currently under development.

LHCb’s beauty and charm physics programme reported substantial progress. Novel techniques have been implemented in the most recent CP-violation studies, potentially leading to an impressive uncertainty of just 1° in future measurements of the CKM angle gamma. LHCb has recently placed a special emphasis on beauty and charm baryons, where the experiment offers unique capabilities to perform many interesting measurements ranging from CP violation to searches for very rare decays and their form factors. Going from three quarks to four and five, the spectroscopy session illustrated the rich and complex debate around tetraquark and pentaquark states, with a big open discussion on the underlying structure of the 20 or so such states discovered at LHCb: which are bound states of quarks and which are simply meson molecules? (CERN Courier November/December 2024 p26 and p33.)

LHCb’s ability to do unique physics was further highlighted in the QCD, electroweak (EW) and exotica session, where the collaboration presented the most recent publicly available measurement of the weak-mixing angle in conjunction with W/Z-boson production cross-sections and other EW observables. LHCb have put an emphasis on combined QCD + QED and effective-field-theory calculations, and the interplay between EW precision observables and new-physics effects in couplings to the third generation. A study of hypothetical dark photons decaying to electrons, probing phase space inaccessible to any other experiment, showed the LHCb experiment to be a unique environment for direct searches for long-lived and low-mass particles.

Attendees left the workshop with a fresh perspective

Parallel to Implications 2024, the inaugural LHCb Open Data and Ntuple Wizard Workshop took place on 22 October as a satellite event, providing theorists and phenomenologists with a first look at a novel software application for on-demand access to custom ntuples from the experiment’s open data. The LHCb Ntupling Service will offer a step-by-step wizard for requesting custom ntuples and a dashboard to monitor the status of requests, communicate with the LHCb open data team and retrieve data. The beta version was released at the workshop in advance of the anticipated public release of the application in 2025, which promises open access to LHCb’s Run 2 dataset for the first time.

A recurring satellite event features lectures by theorists on topics inspired by LHCb’s scientific output. This year, Simon Kuberski (CERN) and Saša Prelovšek (Ljubljana) took the audience on a guided tour through lattice QCD and spectroscopy.

With LHCb’s integrated luminosity in 2024 exceeding all previous years combined, excitement was heightened. Attendees left the workshop with a fresh perspective on how to approach the challenges faced by our community.

The post The B’s Ke<sup>+</sup>e<sup>–</sup>s appeared first on CERN Courier.

]]>
Meeting report The Implications of LHCb measurements and future prospects workshop drew together more than 200 theorists and experimentalists from across the world. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_FN_bees.jpg