Searches for new physics Archives – CERN Courier
Reporting on international high-energy physics

Fermilab’s final word on muon g-2 (8 July 2025)

In parallel, theorists have published an updated Standard Model prediction based purely on lattice QCD.

Fermilab’s Muon g-2 collaboration has given its final word on the magnetic moment of the muon. The new measurement agrees closely with a significantly revised Standard Model (SM) prediction. Though the experimental measurement will likely now remain stable for several years, theorists expect to make rapid progress to reduce uncertainties and resolve tensions underlying the SM value. One of the most intriguing anomalies in particle physics is therefore severely undermined, but not yet definitively resolved.

The muon g-2 anomaly dates back to the late 1990s and early 2000s, when measurements at Brookhaven National Laboratory (BNL) uncovered a possible discrepancy between measurements and theoretical predictions of the so-called muon anomaly, aμ = (g-2)/2. aμ expresses the magnitude of quantum loop corrections to the leading-order prediction of the Dirac equation, which multiplies the classical gyromagnetic ratio of fundamental fermions by a “g-factor” of precisely two. Loop corrections of aμ ~ 0.1% quantify the extent to which virtual particles emitted and reabsorbed by the muon increase the strength of its interaction with magnetic fields. Were measurements shown to deviate from SM predictions, this would indicate the influence of virtual fields beyond the SM.
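
For reference, these definitions can be collected in a single line (a compact restatement of the relations just described, with the roughly 0.1% figure quoted above):

```latex
% Dirac value, anomaly definition, and approximate size of the loop corrections
g_{\mathrm{Dirac}} = 2, \qquad a_\mu \equiv \frac{g-2}{2}, \qquad g = 2\,(1 + a_\mu), \qquad a_\mu \approx 1.2 \times 10^{-3}
```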

Move on up

In 2013, the BNL experiment’s magnetic storage ring was transported from Long Island, New York, to Fermilab in Batavia, Illinois. After years of upgrades and improvements, the new experiment began in 2017. It now reports a final precision of 127 parts per billion (ppb), bettering the experiment’s design precision of 140 ppb and improving on the sensitivity of the BNL result by a factor of four.

“First and foremost, an increase in the number of stored muons allowed us to reduce our statistical uncertainty to 98 ppb compared to 460 ppb for BNL,” explains co-spokesperson Peter Winter of Argonne National Laboratory, “but a lot of technical improvements to our calorimetry, tracking, detector calibration and magnetic-field mapping were also needed to improve on the systematic uncertainties from 280 ppb at BNL to 78 ppb at Fermilab.”

The final Fermilab measurement is (116592070.5 ± 11.4 (stat.) ± 9.1 (syst.) ± 2.1 (ext.)) × 10⁻¹¹, fully consistent with the previous BNL measurement. This formidable precision throws down the gauntlet to the Muon g-2 Theory Initiative (TI), which was founded to achieve an international consensus on the theoretical prediction.
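
As a quick cross-check of the quoted precision, the three uncertainty components can be combined in quadrature and compared with the central value. A minimal sketch (the variable names are ours; the numbers are those quoted above):

```python
import math

# Final Fermilab result, in units of 1e-11 (values quoted above)
a_mu = 116592070.5
stat, syst, ext = 11.4, 9.1, 2.1

total = math.sqrt(stat**2 + syst**2 + ext**2)  # quadrature combination
ppb = total / a_mu * 1e9                       # relative precision in parts per billion

print(f"total uncertainty ~ {total:.1f} x 1e-11")  # ~14.7
print(f"relative precision ~ {ppb:.0f} ppb")       # ~126, i.e. the quoted 127 ppb up to rounding
```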

The calculation is difficult, featuring contributions from all sectors of the SM (CERN Courier March/April 2025 p21). The TI published its first whitepaper in 2020, reporting aμ = (116591810 ± 43) × 10⁻¹¹, based exclusively on a data-driven analysis of cross-section measurements at electron–positron colliders (WP20). In May, the TI updated its prediction, publishing a value aμ = (116592033 ± 62) × 10⁻¹¹, statistically incompatible with the previous prediction at the level of three standard deviations, and with an increased uncertainty of 530 ppb (WP25). The new prediction is based exclusively on numerical SM calculations. This was made possible by rapid progress in the use of lattice QCD to control the dominant source of uncertainty, which arises due to the contribution of so-called hadronic vacuum polarisation (HVP). In HVP, the photon representing the magnetic field interacts with the muon during a brief moment when a virtual photon erupts into a difficult-to-model cloud of quarks and gluons.
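
Treating the two predictions as independent (a naive sketch that ignores correlations between WP20 and WP25) reproduces the three-standard-deviation incompatibility:

```python
import math

# SM predictions for a_mu, in units of 1e-11 (WP20 and WP25 values quoted above)
wp20, err20 = 116591810, 43
wp25, err25 = 116592033, 62

shift = wp25 - wp20                        # 223 x 1e-11
combined = math.sqrt(err20**2 + err25**2)  # naive quadrature, correlations ignored

print(f"shift = {shift} x 1e-11, about {shift / combined:.1f} sigma")  # ~3.0 sigma
```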

Significant shift

“The switch from using the data-driven method for HVP in WP20 to lattice QCD in WP25 results in a significant shift in the SM prediction,” confirms Aida El-Khadra of the University of Illinois, chair of the TI, who believes that it is not unreasonable to expect significant error reductions in the next couple of years. “There still are puzzles to resolve, particularly around the experimental measurements that are used in the data-driven method for HVP, which prevent us, at this point in time, from obtaining a new prediction for HVP in the data-driven method. This means that we also don’t yet know if the data-driven HVP evaluation will agree or disagree with lattice–QCD calculations. However, given the ongoing dedicated efforts to resolve the puzzles, we are confident we will soon know what the data-driven method has to say about HVP. Regardless of the outcome of the comparison with lattice QCD, this will yield profound insights.”

On the experimental side, attention now turns to the Muon g-2/EDM experiment at J-PARC in Tokai, Japan. While the Fermilab experiment used the “magic gamma” method first employed at CERN in the 1970s to cancel the effect of electric fields on spin precession in a magnetic field (CERN Courier September/October 2024 p53), the J-PARC experiment seeks to control systematic uncertainties by exercising particularly tight control of its muon beam. In the Japanese experiment, antimatter muons will be captured by atomic electrons to form muonium, ionised using a laser, and reaccelerated for a traditional precession measurement with sensitivity to both the muon’s magnetic moment and its electric dipole moment (CERN Courier July/August 2024 p8).

“We are making plans to improve experimental precision beyond the Fermilab experiment, though their precision is quite tough to beat,” says spokesperson Tsutomu Mibe of KEK. “We also plan to search for the electric dipole moment of the muon with an unprecedented precision of roughly 10⁻²¹ e cm, improving the sensitivity of the last results from BNL by a factor of 70.”

With theoretical predictions from high-order loop processes expected to be of the order 10⁻³⁸ e cm, any observation of an electric dipole moment would be a clear indication of new physics.

“Construction of the experimental facility is currently ongoing,” says Mibe. “We plan to start data taking in 2030.”

Planning for precision at Moriond (16 May 2025)

Since 1966 the Rencontres de Moriond has been one of the most important conferences for theoretical and experimental particle physicists. The Electroweak Interactions and Unified Theories session of the 59th edition attracted about 150 participants to La Thuile, Italy, from 23 to 30 March, to discuss electroweak, Higgs-boson, top-quark, flavour, neutrino and dark-matter physics, and the field’s links to astrophysics and cosmology.

Particle physics today benefits from a wealth of high-quality data at the same time as powerful new ideas are boosting the accuracy of theoretical predictions. These are particularly important while the international community discusses future projects, basing projections on current results and technology. The conference heard how theoretical investigations of specific models and “catch all” effective field theories are being sharpened to constrain a broader spectrum of possible extensions of the Standard Model. Theoretical parametric uncertainties are being greatly reduced by collider precision measurements and lattice QCD. Perturbative calculations of short-distance amplitudes are reaching percent-level precision, while hadronic long-distance effects are being investigated in B-, D- and K-meson decays, as well as in the modelling of collider events.

Comprehensive searches

Throughout Moriond 2025 we heard how a broad spectrum of experiments at the LHC, B factories, neutrino facilities, and astrophysical and cosmological observatories are planning upgrades to search for new physics at both low- and high-energy scales. Several fields promise qualitative progress in understanding nature in the coming years. Neutrino experiments will measure the neutrino mass hierarchy and CP violation in the neutrino sector. Flavour experiments will exclude or confirm flavour anomalies. Searches for QCD axions and axion-like particles will seek hints to the solution of the strong CP problem and possible dark-matter candidates.

The Standard Model has so far been confirmed to be the theory that describes physics at the electroweak scale (up to a few hundred GeV) to a remarkable level of precision. All the particles predicted by the theory have been discovered, and the consistency of the theory has been proven with high precision, including all calculable quantum effects. No direct evidence of new physics has been found so far. Still, big open questions remain that the Standard Model cannot answer, from understanding the origin of neutrino masses and their hierarchy, to identifying the origin and nature of dark matter and dark energy, and explaining the dynamics behind the baryon asymmetry of the universe.

The discovery of the Higgs boson has been crucial to confirming the Standard Model as the theory of particle physics at the electroweak scale, but it does not explain why the scalar Brout–Englert–Higgs (BEH) potential takes the form of a Mexican hat, why the electroweak scale is set by a Higgs vacuum expectation value of 246 GeV, or what the nature of the Yukawa force is that couples the BEH field to quarks and leptons, resulting in their bizarre hierarchy of masses. Gravity is also not a component of the Standard Model, and a unified theory escapes us.

At the LHC today, the ATLAS and CMS collaborations are delivering Run 1 and 2 results with beyond-expectation accuracies on Higgs-boson properties and electroweak precision measurements. Projections for the high-luminosity phase of the LHC are being updated and Run 3 analyses are in full swing. The LHCb collaboration presented another milestone in flavour physics at Moriond 2025: the first observation of CP violation in baryon decays. Its rebuilt Run 3 detector, with triggerless readout and a full software trigger, reported its first results at this conference.

Several talks presented scenarios of new physics that could be revealed in today’s data given theoretical guidance of sufficient accuracy. These included models with light weakly interacting particles, vector-like fermions and additional scalar particles. Other talks discussed how revisiting established quantum properties such as entanglement with fresh eyes could offer unexplored avenues to new theoretical paradigms and overlooked new-physics effects.

Tau leptons from light resonances (16 May 2025)

[CMS figure 1]

Among the fundamental particles, tau leptons occupy a curious spot. They participate in the same sort of reactions as their lighter lepton cousins, electrons and muons, but their large mass means that they can also decay into a shower of pions and they interact more strongly with the Higgs boson. In many new-physics theories, Higgs-like particles – beyond that of the Standard Model – are introduced in order to explain the mass hierarchy or as possible portals to dark matter.

Because of their large mass, tau leptons are especially useful in searches for new physics. However, identifying taus is challenging, as in most cases they decay into a final state of one or more pions and an undetected neutrino. A crucial step in the identification of a tau lepton in the CMS experiment is the hadrons-plus-strips (HPS) algorithm. In the standard CMS reconstruction, a minimum momentum threshold of 20 GeV is imposed, such that the taus have enough momentum to make their decay products fall into narrow cones. However, this requirement reduces sensitivity to low-momentum taus. As a result, previous searches for a Higgs-like resonance φ decaying into two tau leptons required a φ-mass of more than 60 GeV.
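
The cone-size logic can be sketched as follows. A commonly quoted HPS-style parametrisation shrinks the signal cone as 3.0/pT, clamped between fixed bounds; the exact form and the 0.05–0.10 clamp are assumptions here for illustration, not taken from the analysis itself:

```python
def hps_signal_cone(pt_gev: float) -> float:
    """Assumed HPS-style signal cone: shrinks as 3.0/pT, clamped to [0.05, 0.10]."""
    return max(0.05, min(0.10, 3.0 / pt_gev))

for pt in (15, 20, 30, 60, 100):
    print(f"pT = {pt:3d} GeV -> signal cone dR = {hps_signal_cone(pt):.3f}")
# Below ~30 GeV the natural cone 3.0/pT exceeds the upper clamp, so the decay
# products of soft taus spread beyond the narrow cone assumed by the algorithm.
```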

[CMS figure 2]

The CMS experiment has now been able to extend the φ-mass range down to 20 GeV. To improve sensitivity to low-momentum tau decays, machine learning is used to determine a dynamic cone algorithm that expands the cone size as needed. The new algorithm, requiring one tau decaying into a muon and two neutrinos and one tau decaying into hadrons and a neutrino, is implemented in the CMS Scouting trigger system. Scouting extends CMS’s reach into previously inaccessible phase space by retaining only the most relevant information about the event, and thus facilitating much higher event rates.

The sensitivity of the new algorithm is so high that even the upsilon (Υ) meson, a bound state of the bottom quark and its antiquark, can be seen. Figure 1 shows the distribution of the mass of the visible decay products of the tau pair (Mvis), in this case a muon from one tau lepton and either one or three pions from the other. A clear resonance structure is visible at Mvis = 6 GeV, in agreement with the expectation for the Υ meson. The peak is not at the actual mass of the Υ meson (9.46 GeV) due to the presence of neutrinos in the decay. While Υ → ττ decays have been observed at electron–positron colliders, this marks the first evidence at a hadron collider and serves as an important benchmark for the analysis.

Given the high sensitivity of the new algorithm, CMS performed a search for a possible resonance in the range between 20 and 60 GeV using the data recorded in the years 2022 and 2023, and set competitive exclusion limits (see figure 2). For the 2024 and 2025 data taking, the algorithm was further improved, enhancing the sensitivity even more.

CMS observes top–antitop excess (2 April 2025)

The signal could be caused by a quasi-bound top–antitop meson commonly called "toponium".

[Figure: Threshold excess]

CERN’s Large Hadron Collider continues to deliver surprises. While searching for additional Higgs bosons, the CMS collaboration may have instead uncovered evidence for the smallest composite particle yet observed in nature – a “quasi-bound” hadron made up of the most massive and shortest-lived fundamental particle known to science and its antimatter counterpart. The findings, which do not yet constitute a discovery claim and could also be susceptible to other explanations, were reported this week at the Rencontres de Moriond conference in the Italian Alps.

Almost all of the Standard Model’s shortcomings motivate the search for additional Higgs bosons. Their properties are usually assumed to be simple. Much as the 125 GeV Higgs boson discovered in 2012 appears to interact with each fundamental fermion with a strength proportional to the fermion’s mass, theories postulating additional Higgs bosons generally expect them to couple more strongly to heavier quarks. This puts the singularly massive top quark at centre stage. If an additional Higgs boson has a mass greater than about 345 GeV and can therefore decay to a top quark–antiquark pair, this should dominate the way it decays inside detectors. Hunting for bumps in the invariant mass spectrum of top–antitop pairs is therefore often considered to be the key experimental signature of additional Higgs bosons above the top–antitop production threshold.

The CMS experiment has observed just such a bump. Intriguingly, however, it is located at the lower limit of the search, right at the top-quark pair production threshold itself, leading CMS to also consider an alternative hypothesis long considered difficult to detect: a top–antitop quasi-bound state known as toponium (see "Threshold excess" figure).

“When we started the project, toponium was not even considered as a background to this search,” explains CMS physics coordinator Andreas Meyer (DESY). “In our analysis today we are only using a simplified model for toponium – just a generic spin-0 colour-singlet state with a pseudoscalar coupling to top quarks. The toponium hypothesis is very exciting as we previously did not expect to be able to see it at the LHC.”

Though other explanations can’t be ruled out, CMS finds the toponium hypothesis to be sufficient to explain the observed excess. The size of the excess is consistent with the latest theoretical estimate of the cross section to produce pseudoscalar toponium of around 6.4 pb.

“The cross section we obtain for our simplified hypothesis is 8.8 pb with an uncertainty of about 15%,” explains Meyer. “One can infer that this is significantly above five sigma.”
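
A naive Gaussian reading of those numbers (a sketch only; the analysis itself uses a full likelihood) supports both statements:

```python
# Quoted toponium cross-section and its ~15% uncertainty (values from the text)
xsec, rel_unc = 8.8, 0.15
abs_unc = xsec * rel_unc  # ~1.3 pb

print(f"significance versus zero:      {xsec / abs_unc:.1f} sigma")         # ~6.7
print(f"tension with 6.4 pb estimate:  {(xsec - 6.4) / abs_unc:.1f} sigma") # ~1.8
```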

The smallest hadron

If confirmed, toponium would be the final example of quarkonium – a term for quark–antiquark states formed from heavy charm, bottom and perhaps top quarks. Charmonium (charm–anticharm) mesons were discovered at SLAC and Brookhaven National Laboratory in the November Revolution of 1974. Bottomonium (bottom–antibottom) mesons were discovered at Fermilab in 1977. These heavy quarks move relatively slowly compared to the speed of light, allowing the strong interaction to be modelled by a static potential as a function of the separation between them. When the quarks are far apart, the potential is proportional to their separation due to the self-interacting gluons forming an elongating flux tube, yielding a constant force of attraction. At close separations, the potential is due to the exchange of individual gluons and is Coulomb-like in form, and inversely proportional to separation, leading to an inverse-square force of attraction. This is the domain where compact quarkonium states are formed, in a near perfect QCD analogy to positronium, wherein an electron and a positron are bound by photon exchange. The Bohr radii of the ground states of charmonium and bottomonium are approximately 0.3 fm and 0.2 fm, and bottomonium is thought to be the smallest hadron yet discovered. Given its larger mass, toponium’s Bohr radius would be an order of magnitude smaller.
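
Those radii follow from a hydrogen-like estimate for a Coulombic potential V(r) = −C_F·αs/r with reduced mass m_q/2. The sketch below uses indicative quark masses and αs values (all inputs are illustrative assumptions, not the values used in precision calculations):

```python
HBARC = 0.1973   # GeV*fm (hbar times c)
CF = 4.0 / 3.0   # QCD colour factor

def bohr_radius_fm(m_quark_gev: float, alpha_s: float) -> float:
    """Hydrogen-like Bohr radius for a quark-antiquark pair, reduced mass m/2."""
    reduced_mass = m_quark_gev / 2.0
    return HBARC / (reduced_mass * CF * alpha_s)

# Indicative inputs: quark mass in GeV and alpha_s at the relevant scale
for name, mass, a_s in [("charmonium", 1.5, 0.5),
                        ("bottomonium", 4.8, 0.4),
                        ("toponium", 172.5, 0.14)]:
    print(f"{name:12s} ~ {bohr_radius_fm(mass, a_s):.3f} fm")
# ~0.4 fm, ~0.15 fm and ~0.01 fm: toponium an order of magnitude below bottomonium
```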

[Figure: Angular analysis]

For a long time it was thought that toponium bound states were unlikely to be detected in hadron–hadron collisions. The top quark is the most massive and the shortest-lived of the known fundamental particles. It decays into a bottom quark and a real W boson in the time it takes light to travel just 0.1 fm, leaving little time for a hadron to form. Toponium would be unique among quarkonia in that its decay would be triggered by the weak decay of one of its constituent quarks rather than the annihilation of its constituent quarks into photons or gluons. Toponium is expected to decay at twice the rate of the top quark itself, with a width of approximately 3 GeV.
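
Both figures follow from the uncertainty principle applied to the top-quark width (a back-of-the-envelope check, assuming Γ_t ≈ 1.4 GeV, a value not stated in the text):

```python
HBARC = 0.1973  # GeV*fm (hbar times c)

gamma_top = 1.4              # GeV: approximate top-quark width (assumed input)
ctau_fm = HBARC / gamma_top  # distance light travels in one top lifetime

print(f"c*tau(top)      ~ {ctau_fm:.2f} fm")         # ~0.14 fm, the '0.1 fm' above
print(f"Gamma(toponium) ~ {2 * gamma_top:.1f} GeV")  # ~2.8 GeV, i.e. roughly 3 GeV
```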

CMS first saw a 3.5 sigma excess in a 2019 search studying the mass range above 400 GeV, based on 35.9 fb⁻¹ of proton–proton collisions at 13 TeV from 2016. Now armed with 138 fb⁻¹ of collisions from 2016 to 2018, the collaboration extended the search down to the top–antitop production threshold at 345 GeV. Searches are complicated by the possibility that quantum interference between background and Higgs signal processes could generate an experimentally challenging peak–dip structure with a more or less pronounced bump.

“The signal reported by CMS, if confirmed, could be due either to a quasi-bound top–antitop meson, commonly called ‘toponium’, or possibly an elementary spin-zero boson such as appears in models with additional Higgs bosons, or conceivably even a combination of the two,” says theorist John Ellis of King’s College London. “The mass of the lowest-lying toponium state can be calculated quite accurately in QCD, and is expected to lie just below the nominal top–antitop threshold. However, this threshold is smeared out by the short lifetime of the top quark, as well as the mass resolution of an LHC detector, so toponium would appear spread out as a broad excess of events in the final states with leptons and jets that generally appear in top decays.”

Quantum numbers

An important task of the analysis is to investigate the quantum numbers of the signal. It could be a scalar particle, like the Higgs boson discovered in 2012, or a pseudoscalar particle – a different type of spin-0 object with odd rather than even parity. To measure its spin-parity, CMS studied the angular correlations of the top-quark-pair decay products, which retain information on the original quantum state. The decays bear all the experimental hallmarks of a pseudoscalar particle, consistent with toponium (see “Angular analysis” figure) or the pseudoscalar Higgs bosons common to many theories featuring extended Higgs sectors.

“The toponium state produced at the LHC would be a pseudoscalar boson, whose decays into these final states would have characteristic angular distributions, and the excess of events reported by CMS exhibits the angular correlations expected for such a pseudoscalar state,” explains Ellis. “Similar angular correlations would be expected in the decays of an elementary pseudoscalar boson, whereas scalar-boson decays would exhibit different angular correlations that are disfavoured by the CMS analysis.”

Two main challenges now stand in the way of definitively identifying the nature of the excess. The first is to improve the modelling of the creation of top-quark pairs at the LHC, including the creation of bound states at the threshold. The second challenge is to obtain consistency with the ATLAS experiment. “ATLAS had similar studies in the past but with a more conservative approach on the systematic uncertainties,” says ATLAS physics coordinator Fabio Cerutti (LBNL). “This included, for example, larger uncertainties related to parton showers and other top-modelling effects. To shed more light on the CMS observation, be it a new boson, a top quasi-bound state, or some limited understanding of the modelling of top–antitop production at threshold, further studies are needed on our side. We have several analysis teams working on that. We expect to have new results with improved modelling of the top-pair production at threshold and additional variables sensitive to both a new pseudoscalar boson and a top quasi-bound state very soon.”

Whatever the true cause of the excess, the analyses reflect a vibrant programme of sensitive measurements at the LHC – and the possibility of a timely discovery.

“Discovering toponium 50 years after the November Revolution would be an unanticipated and welcome golden anniversary present for its charmonium cousin that was discovered in 1974,” concludes Ellis. “The prospective observation and measurement of the vector state of toponium in e⁺e⁻ collisions around 350 GeV have been studied in considerable theoretical detail, but there have been rather fewer studies of the observability of pseudoscalar toponium at the LHC. In addition to the angular correlations observed by CMS, the effective production cross section of the observed threshold effect is consistent with non-relativistic QCD calculations. More detailed calculations will be desirable for confirmation that another quarkonium family member has made its appearance, though the omens are promising.”

Do muons wobble faster than expected? (26 March 2025)

With a new measurement imminent, the Courier explores the experimental results and theoretical calculations used to predict ‘muon g-2’ – one of particle physics’ most precisely known quantities and the subject of a fast-evolving anomaly.

[Figure: Vacuum fluctuation]

Fundamental charged particles have spins that wobble in a magnetic field. This is just one of the insights that emerged from the equation Paul Dirac wrote down in 1928. Almost 100 years later, calculating how much they wobble – their “magnetic moment” – strains the computational sinews of theoretical physicists to a level rarely matched. The challenge is to sum all the possible ways in which the quantum fluctuations of the vacuum affect their wobbling.

The particle in question here is the muon. Discovered in cosmic rays in 1936, muons are more massive but ephemeral cousins of the electron. Their greater mass is expected to amplify the effect of any undiscovered new particles shimmering in the quantum haze around them, and measurements have disagreed with theoretical predictions for nearly 20 years. This suggests a possible gap in the Standard Model (SM) of particle physics, potentially providing a glimpse of deeper truths beyond it.

In the coming weeks, Fermilab is expected to present the final results of a seven-year campaign to measure this property, reducing uncertainties to a remarkable one part in 10¹⁰ on the magnetic moment of the muon, and 0.1 parts per million on the quantum corrections. Theorists are racing to match this with an updated prediction of comparable precision. The calculation is in good shape, except for the incredibly unusual eventuality that the muon briefly emits a cloud of quarks and gluons at just the moment it absorbs a photon from the magnetic field. But in quantum mechanics all possibilities count all the time, and the experimental precision is such that the fine details of “hadronic vacuum polarisation” (HVP) could be the difference between reinforcing the SM and challenging it.
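
The two quoted precisions are consistent with one another: the quantum corrections are roughly a thousandth of the full moment, so 0.1 ppm on the corrections corresponds to about one part in 10¹⁰ on the moment itself. A one-line check (taking aμ ≈ 0.00116, the value introduced below):

```python
a_mu = 0.00116          # approximate size of the quantum corrections (see below)
g = 2 * (1 + a_mu)

# a 0.1 ppm determination of a_mu, expressed as a relative precision on g
rel_on_g = 0.1e-6 * (2 * a_mu / g)
print(f"relative precision on g ~ {rel_on_g:.1e}")  # ~1.2e-10: one part in 1e10
```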

Quantum fluctuations

The Dirac equation predicts that fundamental spin s = ½ particles have a magnetic moment given by g(eħ/2m)s, where the gyromagnetic ratio (g) is precisely equal to two. For the electron, this remarkable result was soon confirmed by atomic spectroscopy, before more precise experiments in 1947 indicated a deviation from g = 2 of a few parts per thousand. Expressed as a = (g-2)/2, the shift was a surprise and was named the magnetic anomaly or the anomalous magnetic moment.

[Figure: Quantum fluctuation]

This marked the beginning of an enduring dialogue between experiment and theory. It became clear that a relativistic field theory like the developing quantum electrodynamics (QED) could produce quantum fluctuations, shifting g from two. In 1948, Julian Schwinger calculated the first correction to be a = α/2π ≈ 0.00116, aligning beautifully with 1947 experimental results. The emission and absorption of a virtual photon creates a cloud around the electron, altering its interaction with the external magnetic field (see “Quantum fluctuation” figure). Soon, other particles would be seen to influence the calculations. The SM’s limitations suggest that undiscovered particles could also affect these calculations. Their existence might be revealed by a discrepancy between the SM prediction for a particle’s anomalous magnetic moment and its measured value.

As noted, the muon is an even more promising target than the electron, as its sensitivity to physics beyond QED is generically enhanced by the square of the ratio of their masses: a factor of around 43,000. In 1957, inspired by Tsung-Dao Lee and Chen-Ning Yang’s proposal that parity is violated in the weak interaction, Richard Garwin, Leon Lederman and Marcel Weinrich studied the decay of muons brought to rest in a magnetic field at the Nevis cyclotron at Columbia University. As well as showing that parity is broken in both pion and muon decays, they found g to be close to two for muons by studying their “precession” in the magnetic field as their spins circled around the field lines.
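
Both of the numbers in the two preceding paragraphs are easy to reproduce from standard constants (a quick numerical check; the constants below are the usual reference values):

```python
import math

alpha = 1 / 137.035999               # fine-structure constant
m_e, m_mu = 0.51099895, 105.6583755  # electron and muon masses in MeV

print(f"Schwinger term alpha/(2*pi) ~ {alpha / (2 * math.pi):.5f}")  # ~0.00116
print(f"(m_mu/m_e)^2 ~ {(m_mu / m_e)**2:,.0f}")  # ~42,800: the 'factor of around 43,000'
```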

[Figure: Precision]

This iconic experiment was the prototype of muon-precession projects at CERN (see CERN Courier September/October 2024 p53), later at Brookhaven National Laboratory and now Fermilab (see “Precision” figure). By the end of the Brookhaven project, a disagreement between the measured value of “aμ” – the subscript indicating g-2 for the muon rather than the electron – and the SM prediction was too large to ignore, motivating the present round of measurements at Fermilab and rapidly improving theory refinements.

g-2 and the Standard Model

Today, a prediction for aμ must include the effects of all three of the SM’s interactions and all of its elementary particles. The leading contributions are from electrons, muons and tau leptons interacting electromagnetically. These QED contributions can be computed in an expansion where each successive term contributes only around 1% of the previous one. QED effects have been computed to fifth order, yielding an extraordinary precision of 0.9 parts per billion – significantly more precise than needed to match measurements of the muon’s g-2, though not the electron’s. It took over half a century to achieve this theoretical tour de force.

The weak interaction gives the smallest contribution to aμ, a million times less than QED. These contributions can also be computed in an expansion. Second order suffices. All SM particles except gluons need to be taken into account.

Gluons are responsible for the strong interaction and appear in the third and last set of contributions. These are described by QCD and are called “hadronic” because quarks and gluons form hadrons at the low energies relevant for the muon g-2 (see “Hadronic contributions” figure). HVP is the largest, though 10,000 times smaller than the corrections due to QED. “Hadronic light-by-light scattering” (HLbL) is a further 100 times smaller due to the exchange of an additional photon. The challenge is that the strong-interaction effects cannot be approximated by a perturbative expansion. QCD is highly nonlinear and different methods are needed.

Data or the lattice?

Even before QCD was formulated, theorists sought to subdue the wildness of the strong force using experimental data. In the case of HVP, this triggered experimental investigations of e⁺e⁻ annihilation into hadrons and later hadronic tau–lepton decays. Though apparently disparate, the production of hadrons in these processes can be related to the clouds of virtual quarks and gluons that are responsible for HVP.

[Figure: Hadronic contributions]

A more recent alternative makes use of massively parallel numerical simulations to directly solve the equations of QCD. To compute quantities such as HVP or HLbL, “lattice QCD” requires hundreds of millions of processor-core hours on the world’s largest supercomputers.

In preparation for Fermilab’s first measurement in 2021, the Muon g-2 Theory Initiative, spanning more than 120 collaborators from over 80 institutions, was formed to provide a reference SM prediction that was published in a 2020 white paper. The HVP contribution was obtained with a precision of a few parts per thousand using a compilation of measurements of e⁺e⁻ annihilation into hadrons. The HLbL contribution was determined from a combination of data-driven and lattice-QCD methods. Though even more complex to compute, HLbL is needed only to 10% precision, as its contribution is smaller.

After summing all contributions, the prediction of the 2020 white paper sits over five standard deviations below the most recent experimental world average (see “Landscape of muon g-2” figure). Such a deviation would usually be interpreted as a discovery of physics beyond the SM. However, in 2021 the result of the first lattice calculation of the HVP contribution with a precision comparable to that of the data-driven white paper was published by the Budapest–Marseille–Wuppertal collaboration (BMW). The result, labelled BMW 2020 as it was uploaded to the preprint archive the previous year, is much closer to the experimental average (green band on the figure), suggesting that the SM may still be in the race. The calculation relied on methods developed by dozens of physicists since the seminal work of Tom Blum (University of Connecticut) in 2002 (see CERN Courier May/June 2021 p25).

[Figure: Landscape of muon g-2]

In 2020, the uncertainties on the data-driven and lattice-QCD predictions for the HVP contribution were still large enough that both could be correct, but BMW’s 2021 paper showed them to be explicitly incompatible in an “intermediate-distance window” accounting for approximately 35% of the HVP contribution, where lattice QCD is most reliable.

This disagreement was the first sign that the 2020 consensus had to be revised. To move forward, the sources of the various disagreements – more numerous now – and the relative limitations of the different approaches must be understood better. Moreover, uncertainty on HVP already dominated the SM prediction in 2020. As well as resolving these discrepancies, its uncertainty must be reduced by a factor of three to fully leverage the coming measurement from Fermilab. Work on the HVP is therefore even more critical than before, as elsewhere the theory house is in order: Sergey Volkov (KITP) recently verified the fifth-order QED calculation of Tatsumi Aoyama, Toichiro Kinoshita and Makiko Nio, identifying an oversight not numerically relevant at current experimental sensitivities; new HLbL calculations remain consistent; and weak contributions have already been checked and are precise enough for the foreseeable future.

News from the lattice

Since BMW’s 2020 lattice results, a further eight lattice-QCD computations of the dominant up-and-down-quark (u + d) contribution to HVP’s intermediate-distance window have been performed with similar precision, with four also including all other relevant contributions. Agreement is excellent and the verdict is clear: the disagreement between the lattice and data-driven approaches is confirmed (see “Intermediate window” figure).

[Figure: Intermediate window]

Work on the short-distance window (about 10% of the HVP contribution) has also advanced rapidly. Seven computations of the u + d contribution have appeared, with four including all other relevant contributions. No significant disagreement is observed.

The long-distance window (around 55% of the total) is by far the most challenging, with the largest uncertainties. In recent weeks three calculations of the dominant u + d contribution have appeared, by the RBC–UKQCD, Mainz and FHM collaborations. Though some differences are present, none can be considered significant for the time being.

With all three windows cross-validated, the Muon g-2 Theory Initiative is combining results to obtain a robust lattice–QCD determination of the HVP contribution. The final uncertainty should be slightly below 1%, still quite far from the 0.2% ultimately needed.

The BMW–DMZ and Mainz collaborations have also presented new results for the full HVP contribution to aμ, and the RBC–UKQCD collaboration, which first proposed the multi-window approach, is also in a position to make a full calculation. (The corresponding result in the “Landscape of muon g-2” figure combines contributions reported in their publications.) Mainz obtained a result with 1% precision using the three windows described above. BMW–DMZ divided its new calculation into five windows and replaced the lattice-QCD computation of the longest distance window – “the tail”, encompassing just 5% of the total – with a data-driven result. This pragmatic approach allows a total uncertainty of just 0.46%, with the collaboration showing that all e⁺e⁻ datasets contributing to this long-distance tail are entirely consistent. This new prediction differs from the experimental measurement of aμ by only 0.9 standard deviations.

These new lattice results, which have not yet been published in refereed journals, make the disagreement with the 2020 data-driven result even more blatant. However, the analysis of e⁺e⁻ annihilation into hadrons is also evolving rapidly.

News from electron–positron annihilation

Many experiments have measured the cross-section for e⁺e⁻ annihilation to hadrons as a function of centre-of-mass energy (√s). The dominant contribution to a data-driven calculation of aμ, and over 70% of its uncertainty budget, is provided by the e⁺e⁻ → π⁺π⁻ process, in which the final-state pions are produced via the ρ resonance (see “Two-pion channel” figure).

The most recent measurement, by the CMD-3 energy-scan experiment in Novosibirsk, obtained a cross-section on the peak of the ρ resonance that is larger than all previous ones, significantly changing the picture in the π+π channel. Scrutiny by the Theory Initiative has identified no major problem.

[Figure: Two-pion channel]

CMD-3’s approach contrasts with that used by KLOE, BaBar and BESIII, which study e⁺e⁻ annihilation with a hard photon emitted from the initial state (radiative return) at facilities with fixed √s. BaBar has innovated by calibrating the luminosity of the initial-state radiation using the μ⁺μ⁻ channel and using a unique “next-to-leading-order” approach that accounts for extra radiation from either the initial or the final state – a necessary step at the required level of precision.

In 1997, Ricard Alemany, Michel Davier and Andreas Höcker proposed an alternative method that employs the τ⁻ → π⁻π⁰ντ decay while requiring some additional theoretical input. The decay rate has been precisely measured as a function of the two-pion invariant mass by the ALEPH and OPAL experiments at LEP, as well as by the Belle and CLEO experiments at B factories, under very different conditions. The measurements are in good agreement. ALEPH offers the best normalisation and Belle the best shape measurement.

KLOE and CMD-3 differ by more than five standard deviations on the ρ peak, precluding a combined analysis of e⁺e⁻ → π⁺π⁻ cross-sections. BaBar and τ data lie between them. All measurements are in good agreement at low energies, below the ρ peak. BaBar, CMD-3 and τ data are also in agreement above the ρ peak. To help clarify this unsatisfactory situation, in 2023 BaBar performed a careful study of radiative corrections to e⁺e⁻ → π⁺π⁻. That study points to the possible underestimate of systematic uncertainties in radiative-return experiments that rely on Monte Carlo simulations to describe extra radiation, as opposed to the in situ studies performed by BaBar.

The future

While most contributions to the SM prediction of the muon g-2 are under control at the level of precision required to match the forthcoming Fermilab measurement, in trying to reduce the uncertainties of the HVP contribution to a commensurate degree, theorists and experimentalists shattered a 20 year consensus. This has triggered an intense collective effort that is still in progress.

New analyses of e⁺e⁻ data are underway at BaBar, Belle II, BES III and KLOE, experiments are continuing at CMD-3, and Belle II is also studying τ decays. At CERN, the longer term “MUonE” project will extract HVP by analysing how muons scatter off electrons – a very challenging endeavour given the unusual accuracy required, both in the control of experimental systematic uncertainties and in the theoretical treatment of radiative corrections.

At the same time, lattice-QCD calculations have made enormous progress in the last five years and provide a very competitive alternative. The fact that several groups are involved with somewhat independent techniques is allowing detailed cross checks. The complementarity of the data-driven and lattice-QCD approaches should soon provide a reliable value for the g-2 theoretical prediction at unprecedented levels of precision.

There is still some way to go to reach that point, but the prospect of testing the limits of the SM through high-precision measurements generates considerable impetus. A new white paper is expected in the coming weeks. The ultimate aim is to reach a level of precision in the SM prediction that allows us to fully leverage the potential of the muon anomalous magnetic moment in the search for new fundamental physics, in concert with the final results of Fermilab’s Muon g-2 experiment and the projected Muon g-2/EDM experiment at J-PARC in Japan, which will implement a novel technique.

The triggering of tomorrow (26 March 2025)

The third TDHEP workshop explored how triggers can cope with high data rates.

The third edition of Triggering Discoveries in High Energy Physics (TDHEP) attracted 55 participants to Slovakia’s High Tatras mountains from 9 to 13 December 2024. The workshop is the only conference dedicated to triggering in high-energy physics, and follows previous editions in Jammu, India in 2013 and Puebla, Mexico in 2018. Given the upcoming High-Luminosity LHC (HL-LHC) upgrade, discussions focused on how trigger systems can be enhanced to manage high data rates while preserving physics sensitivity.

Triggering systems play a crucial role in filtering the vast amounts of data generated by modern collider experiments. A good trigger design selects features in the event sample that greatly enrich the proportion of the desired physics processes in the recorded data. The key considerations are timing and selectivity. Timing has long been at the core of experiment design – detectors must capture data at the appropriate time to record an event. Selectivity has been a feature of triggering for almost as long. Recording an event makes demands on running time and data-acquisition bandwidth, both of which are limited.

Evolving architecture

Thanks to detector upgrades and major changes in the cost and availability of fast data links and storage, the past 10 years have seen an evolution in LHC triggers away from hardware-based decisions using coarse-grain information.

Detector upgrades mean higher granularity and better time resolution, improving the precision of the trigger algorithms and the ability to resolve the problem of having multiple events in a single LHC bunch crossing (“pileup”). Such upgrades allow more precise initial-level hardware triggering, bringing the event rate down to a level where events can be reconstructed for further selection via high-level trigger (HLT) systems.

To take advantage of modern computer architecture more fully, HLTs use both graphics processing units (GPUs) and central processing units (CPUs) to process events. In ALICE and LHCb this leads to essentially triggerless access to all events, while in ATLAS and CMS hardware selections are still important. All HLTs now use machine learning (ML) algorithms, with the ATLAS and CMS experiments even considering their use at the first hardware level.

ATLAS and CMS are primarily designed to search for new physics. At the end of Run 3, upgrades to both experiments will significantly enhance granularity and time resolution to handle the high-luminosity environment of the HL-LHC, which will deliver up to 200 interactions per LHC bunch crossing. Both experiments achieved efficient triggering in Run 3, but higher luminosities, difficult-to-distinguish physics signatures, upgraded detectors and increasingly ambitious physics goals call for advanced new techniques. The step change will be significant. At HL-LHC, the first-level hardware trigger rate will increase from the current 100 kHz to 1 MHz in ATLAS and 760 kHz in CMS. The price to pay is increasing the latency – the time delay between input and output – to 10 µsec in ATLAS and 12.5 µsec in CMS.
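
Assuming the LHC’s 40 MHz bunch-crossing rate (25 ns spacing, a standard figure not stated above), the quoted accept rates and latencies translate into first-level reduction factors and front-end buffer depths as follows (a rough sketch):

```python
bx_rate = 40e6  # Hz: LHC bunch-crossing rate (assumed; 25 ns bunch spacing)

for exp, l1_rate, latency in [("ATLAS", 1.00e6, 10.0e-6),
                              ("CMS",   0.76e6, 12.5e-6)]:
    print(f"{exp}: keeps 1 in {bx_rate / l1_rate:.0f} crossings; "
          f"front-ends must buffer ~{bx_rate * latency:.0f} crossings during the latency")
# ATLAS: 1 in 40 and ~400 crossings; CMS: 1 in ~53 and ~500 crossings
```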

The proposed trigger systems for ATLAS and CMS are predominantly FPGA-based, employing highly parallelised processing to crunch huge data streams efficiently in real time. Both will be two-level triggers: a hardware trigger followed by a software-based HLT. The ATLAS hardware trigger will utilise full-granularity calorimeter and muon signals in the global-trigger-event processor, using advanced ML techniques for real-time event selection. In addition to calorimeter and muon data, CMS will introduce a global track trigger, enabling real-time tracking at the first trigger level. All information will be integrated within the global-correlator trigger, which will extensively utilise ML to enhance event selection and background suppression.

Substantial upgrades

The other two big LHC experiments already implemented substantial trigger upgrades at the beginning of Run 3. The ALICE experiment is dedicated to studying the strong interactions of the quark–gluon plasma – a state of matter in which quarks and gluons are not confined in hadrons. The detector was upgraded significantly for Run 3, including the trigger and data-acquisition systems. The ALICE continuous readout can cope with 50 kHz for lead ion–lead ion (PbPb) collisions and several MHz for proton–proton (pp) collisions. In PbPb collisions the full data is continuously recorded and stored for offline analysis, while for pp collisions the data is filtered.

Unlike in Run 2, where the hardware trigger reduced the data rate to several kHz, Run 3 uses an online software trigger that is a natural part of the common online–offline computing framework. The raw data from detectors is streamed continuously and processed in real time using high-performance FPGAs and GPUs. ML plays a crucial role in the heavy-flavour software trigger, which is one of the main physics interests. Boosted decision trees are used to identify displaced vertices from heavy quark decays. The full chain from saving raw data in a 100 PB buffer to selecting events of interest and removing the original raw data takes about three weeks and was fully employed last year.

The LHCb experiment focuses on precision measurements in heavy-flavour physics. A typical example is measuring the probability of a particle decaying into a certain decay channel. In Run 2 the hardware trigger tended to saturate in many hadronic channels when the instantaneous luminosity was increased. To solve this issue for Run 3 a high-level software trigger was developed that can handle 30 MHz event readout with 4 TB/s data flow. A GPU-based partial event reconstruction and primary selection of displaced tracks and vertices (HLT1) reduces the output data rate to 1 MHz. The calibration and detector alignment (embedded into the trigger system) are calculated during data taking just after HLT1 and feed full-event reconstruction (HLT2), which reduces the output rate to 20 kHz. This represents 10 GB/s written to disk for later analysis.
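
Dividing the quoted bandwidths by the quoted rates gives the naive average event size at each stage (a rough sketch that ignores compression, headers and selective persistence):

```python
# Average event sizes implied by the quoted LHCb Run 3 rates and bandwidths
readout_rate   = 30e6   # Hz into HLT1
raw_bandwidth  = 4e12   # bytes/s (4 TB/s) into HLT1
disk_rate      = 20e3   # Hz out of HLT2
disk_bandwidth = 10e9   # bytes/s (10 GB/s) written to disk

print(f"raw event size       ~ {raw_bandwidth / readout_rate / 1e3:.0f} kB")  # ~133 kB
print(f"persisted event size ~ {disk_bandwidth / disk_rate / 1e3:.0f} kB")    # ~500 kB
```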

Away from the LHC, trigger requirements differ considerably. Contributions from other areas covered heavy-ion physics at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider (RHIC), fixed-target physics at CERN and future experiments at the Facility for Antiproton and Ion Research at GSI Darmstadt and Brookhaven’s Electron–Ion Collider (EIC). NA62 at CERN and STAR at RHIC both use conventional trigger strategies to arrive at their final event samples. The forthcoming CBM experiment at FAIR and the ePIC experiment at the EIC deal with high intensities but aim for “triggerless” operation.

Requirements were reported to be even more diverse in astroparticle physics. The Pierre Auger Observatory combines local and global trigger decisions at three levels to manage the problem of trigger distribution and data collection over 3000 km² of fluorescence and Cherenkov detectors.

These diverse requirements will lead to new approaches being taken, and evolution as the experiments are finalised. The third edition of TDHEP suggests that innovation in this field is only set to accelerate.

Space oddities (26 March 2025)

In his new popular book, Harry Cliff tackles the thorny subject of anomalies in fundamental science.

[Image: Space Oddities book cover]

Space Oddities takes readers on a journey through the mysteries of modern physics, from the smallest subatomic particles to the vast expanse of stars and space. Harry Cliff – an experimental particle physicist at Cambridge University – unravels some of the most perplexing anomalies challenging the Standard Model (SM), with behind-the-scenes scoops from eight different experiments. The most intriguing stories concern lepton universality and the magnetic moment of the muon.

Theory predicts the value of the muon’s magnetic moment with extreme precision, and experiment has verified it to an astonishing 11 significant figures. Over the last few years, however, experimental measurements have suggested a slight discrepancy – the devil lying in the 12th digit. 2021 measurements at Fermilab disagreed with theory predictions at 4σ. Not enough to cause a “scientific earthquake”, as Cliff puts it, but enough to suggest that new physics might be at play.

Just as everything seemed to be edging towards a new discovery, Cliff introduces the “villains” of the piece. Groundbreaking lattice–QCD predictions from the Budapest–Marseille–Wuppertal collaboration were published on the same day as a new measurement from Fermilab. If correct, these would destroy the anomaly by contradicting the data-driven theory consensus. (“Yeah, bullshit,” said one experimentalist to Cliff when put to him that the timing wasn’t intended to steal the experiment’s thunder.) The situation is still unresolved, though many new theoretical predictions have been made and a new theoretical consensus is imminent (see “Do muons wobble faster than expected“). Regardless of the outcome, Cliff emphasises that this research will pave the way for future discoveries, and none of it should be taken for granted – even if the anomaly disappears.

“One of the challenging aspects of being part of a large international project is that your colleagues are both collaborators and competitors,” Cliff notes. “When it comes to analysing the data with the ultimate goal of making discoveries, each research group will fight to claim ownership of the most interesting topics.”

This spirit of spurring collaborator-competitors on to greater heights of precision is echoed throughout Cliff’s own experience of working in the LHCb collaboration, where he studies “lepton universality”. All three lepton flavours – electron, muon and tau – should interact almost identically, except for small differences due to their masses. However, over the past decade several experimental results suggested that this theory might not hold in B-meson decays, where muons seemed to be appearing less frequently than electrons. If confirmed, this would point to physics beyond the SM.

Having been involved himself in a complementary but less sensitive analysis of B-meson decay channels involving strange quarks, Cliff recalls the emotional rollercoaster experienced by some of the key protagonists: the “RK” team from Imperial College London. After a year of rigorous testing, RK unblinded a sanity check of their new computational toolkit: a reanalysis of the prior measurement that yielded a perfectly consistent R value of 0.72 with an uncertainty of about 0.08, upholding a 3σ discrepancy. Now was the time to put the data collected since then through the same pasta machine: if it agreed, the tension between the SM and their overall measurement would cross the 5σ threshold. After an anxious wait while the numbers were crunched, the team received the results for the new data: 0.93 with an uncertainty of 0.09.

“Dreams of a major discovery evaporated in an instant,” recalls Cliff. “Anyone who saw the RK team in the CERN cafeteria that day could read the result from their faces.” The lead on the RK team, Mitesh Patel, told Cliff that they felt “emotionally train wrecked”.

With both results combined, the ratio averaged out to 0.85 ± 0.06, just shy of 3σ away from unity. While the experimentalists were deflated, Cliff notes that for theorists this result may have been more exciting than the initial anomaly, as it was easier to explain using new particles or forces. “It was as if we were spying the footprints of a great, unknown beast as it crashed about in a dark jungle,” writes Cliff.

Space Oddities is a great defence of irrepressible experimentation. Even “failed” anomalies are far from useless: if they evaporate, the effort required to investigate them pushes the boundaries of experimental precision, enhances collaboration between scientists across the world, and refines theoretical frameworks. Through retellings and interviews, Cliff helps the public experience the excitement of near breakthroughs, the heartbreak of failed experiments, and the dynamic interactions between theoretical and experimental physicists. Thwarting myths that physicists are cold, calculating figures working in isolation, Cliff sheds light on a community driven by curiosity, ambition and (healthy) competition. His book is a story of hope that one day we might make the right mistake and escape the claustrophobic clutches of the SM.

“I’ve learned so much from my mistakes,” read a poster above Cliff’s undergraduate tutor’s desk. “I think I’ll make another.”

A word with CERN’s next Director-General (27 January 2025)

Mark Thomson, CERN’s Director-General designate, talks to the Courier about the future of particle physics.

[Image: Mark Thomson]

What motivates you to be CERN’s next Director-General?

CERN is an incredibly important organisation. I believe my deep passion for particle physics, coupled with the experience I have accumulated in recent years, including leading the Deep Underground Neutrino Experiment, DUNE, through a formative phase, and running the Science and Technology Facilities Council in the UK, has equipped me with the right skill set to lead CERN through a particularly important period.

How would you describe your management style?

That’s a good question. My overarching approach is built around delegating and trusting my team. This has two advantages. First, it builds an empowering culture, which in my experience provides the right environment for people to thrive. Second, it frees me up to focus on strategic planning and engagement with numerous key stakeholders. I like to focus on transparency and openness, to build trust both internally and externally.

How will you spend your familiarisation year before you take over in 2026?

First, by getting a deep understanding of CERN “from within”, to plan how I want to approach my mandate. Second, by lending my voice to the scientific discussion that will underpin the third update to the European strategy for particle physics. The European strategy process is a key opportunity for the particle-physics community to provide genuine bottom-up input and shape the future. This is going to be a really varied and exciting year.

What open question in fundamental physics would you most like to see answered in your lifetime?

I am going to have to pick two. I would really like to understand the nature of dark matter. There are a wide range of possibilities, and we are addressing this question from multiple angles; the search for dark matter is an area where the collider and non-collider experiments can both contribute enormously. The second question is the nature of the Higgs field. The Higgs boson is just so different from anything else we’ve ever seen. It’s not just unique – it’s unique and very strange. There are just so many deep questions, such as whether it is fundamental or composite. I am confident that we will make progress in the coming years. I believe the High-Luminosity LHC will be able to make meaningful measurements of the self-coupling at the heart of the Higgs potential. If you’d asked me five years ago whether this was possible, I would have been doubtful. But today I am very optimistic because of the rapid progress with advanced analysis techniques being developed by the brilliant scientists on the LHC experiments.

What areas of R&D are most in need of innovation to meet our science goals?

Artificial intelligence is changing how we look at data in all areas of science. Particle physics is the ideal testing ground for artificial intelligence, because our data is complex and there are none of the issues around the sensitive nature of the data that exist in other fields. Complex multidimensional datasets are where you’ll benefit the most from artificial intelligence. I’m also excited by the emergence of new quantum technologies, which will open up fresh opportunities for our detector systems and also new ways of doing experiments in fundamental physics. We’ve only scratched the surface of what can be achieved with entangled quantum systems.

How about in accelerator R&D?

There are two areas that I would like to highlight: making our current technologies more sustainable, and the development of high-field magnets based on high-temperature superconductivity. This connects to the question of innovation more broadly. To quote one example among many, high-temperature superconducting magnets are likely to be an important component of fusion reactors just as much as particle accelerators, making this a very exciting area where CERN can deploy its engineering expertise and really push that programme forward. That’s not just a benefit for particle physics, but a benefit for wider society.

How has CERN changed since you were a fellow back in 1994?

The biggest change is that the collider experiments are larger and more complex, and the scientific and technical skills required have become more specialised. When I first came to CERN, I worked on the OPAL experiment at LEP – a collaboration of less than 400 people. Everybody knew everybody, and it was relatively easy to understand the science of the whole experiment.

My overarching approach is built around delegating and trusting my team

But I don’t think the scientific culture of CERN and the particle-physics community has changed much. When I visit CERN and meet with the younger scientists, I see the same levels of excitement and enthusiasm. People are driven by the wonderful mission of discovery. When planning the future, we need to ensure that early-career researchers can see a clear way forward with opportunities in all periods of their career. This is essential for the long-term health of particle physics. Today we have an amazing machine that’s running beautifully: the LHC. I also don’t think it is possible to overstate the excitement of the High-Luminosity LHC. So there’s a clear and exciting future out to the early 2040s for today’s early-career researchers. The question is what happens beyond that? This is one reason to ensure that there is not a large gap between the end of the High-Luminosity LHC and the start of whatever comes next.

Should the world be aligning on a single project?

Given the increasing scale of investment, we do have to focus as a global community, but that doesn’t necessarily mean a single project. We saw something similar about 10 years ago when the global neutrino community decided to focus its efforts on two complementary long-baseline projects, DUNE and Hyper-Kamiokande. From the perspective of today’s European strategy, the Future Circular Collider (FCC) is an extremely appealing project that would map out an exciting future for CERN for many decades. I think we’ll see this come through strongly in an open and science-driven European strategy process.

How do you see the scientific case for the FCC?

For me, there are two key points. First, gaining a deep understanding of the Higgs boson is the natural next step in our field. We have discovered something truly unique, and we should now explore its properties to gain deeper insights into fundamental physics. Scientifically, the FCC provides everything you want from a Higgs factory, both in terms of luminosity and the opportunity to support multiple experiments.

Second, investment in the FCC tunnel will provide a route to hadron–hadron collisions at the 100 TeV scale. I find it difficult to foresee a future where we will not want this capability.

These two aspects make the FCC a very attractive proposition.

How successful do you believe particle physics is in communicating science and societal impacts to the public and to policymakers?

I think we communicate science well. After all, we’ve got a great story. People get the idea that we work to understand the universe at its most basic level. It’s a simple and profound message.

Going beyond the science, the way we communicate the wider industrial and societal impact is probably equally important. Here we also have a good story. In our experiments we are always pushing beyond the limits of current technology, doing things that have not been done before. The technologies we develop to do this almost always find their way back into something that will have wider applications. Of course, when we start, we don’t know what the impact will be. That’s the strength and beauty of pushing the boundaries of technology for science.

Would the FCC give a strong return on investment to the member states?

Absolutely. Part of the return is the science, part is the investment in technology, and we should not underestimate the importance of the training opportunities for young people across Europe. CERN provides such an amazing and inspiring environment for young people. The scale of the FCC will provide a huge number of opportunities for young scientists and engineers.

We need to ensure that early-career researchers can see a clear way forward with opportunities in all periods of their career. This is essential for the long-term health of particle physics

In terms of technology development, the detectors for the electron–positron collider will provide an opportunity for pushing forward and deploying new, advanced technologies to deliver the precision required for the science programme. In parallel, the development of the magnet technologies for the future hadron collider will be really exciting, particularly the potential use of high-temperature superconductors, as I said before.

It is always difficult to predict the specific “return on investment” on the technologies for big scientific research infrastructure. Part of this challenge is that some of the benefits might be 20, 30 or 40 years down the line. Nevertheless, every retrospective that has tried has demonstrated that you get a huge downstream benefit.

Do we reward technical innovation well enough in high-energy physics?

There needs to be a bit of a culture shift within our community. Engineering and technology innovation are critical to the future of science and critical to the prosperity of Europe. We should be striving to reward individuals working in these areas.

Should the field make it more flexible for physicists and engineers to work in industry and return to the field having worked there?

This is an important question. I actually think things are changing. The fluidity between academia and industry is increasing in both directions. For example, an early-career researcher in particle physics with a background in deep artificial-intelligence techniques is valued incredibly highly by industry. It also works the other way around, and I experienced this myself in my career when one of my post-doctoral researchers joined from an industry background after a PhD in particle physics. The software skills they picked up from industry were incredibly impactful.

I don’t think there is much we need to do to directly increase flexibility – it’s more about culture change, to recognise that fluidity between industry and academia is important and beneficial. Career trajectories are evolving across many sectors. People move around much more than they did in the past.

Does CERN have a future as a global laboratory?

CERN already is a global laboratory. The amazing range of nationalities working here is both inspiring and a huge benefit to CERN.

How can we open up opportunities in low- and middle-income countries?

I am really passionate about the importance of diversity in all its forms and this includes national and regional inclusivity. It is an agenda that I pursued in my last two positions. At the Deep Underground Neutrino Experiment, I was really keen to engage the scientific community from Latin America, and I believe this has been mutually beneficial. At STFC, we used physics as a way to provide opportunities for people across Africa to gain high-tech skills. Going beyond the training, one of the challenges is to ensure that people use these skills in their home nations. Otherwise, you’re not really helping low- and middle-income countries to develop.

What message would you like to leave with readers?

That we have really only just started the LHC programme. With more than a factor of 10 increase in data to come, coupled with new data tools and upgraded detectors, the High-Luminosity LHC represents a major opportunity for a new discovery. Its nature could be a complete surprise. That’s the whole point of exploring the unknown: you don’t know what’s out there. This alone is incredibly exciting, and it is just a part of CERN’s amazing future.

Cornering compressed SUSY https://cerncourier.com/a/cornering-compressed-susy/ Mon, 27 Jan 2025 07:18:49 +0000 https://cerncourier.com/?p=112235 A new CMS analysis explores an often overlooked, difficult corner of SUSY manifestations: compressed sparticle mass spectra.

CMS figure 1

Since the LHC began operations in 2008, the CMS experiment has been searching for signs of supersymmetry (SUSY) – the only remaining spacetime symmetry not yet observed to have consequences for physics. It has explored higher and higher masses of supersymmetric particles (sparticles) with increasing collision energies and growing datasets. No evidence has been observed so far. A new CMS analysis using data recorded between 2016 and 2018 continues this search in an often overlooked, difficult corner of SUSY manifestations: compressed sparticle mass spectra.

The masses of SUSY sparticles have very important implications for both the physics of our universe and how they could be potentially produced and observed at experiments like CMS. The heavier the sparticle, the rarer its appearance. On the other hand, when heavy sparticles decay, their mass is converted to the masses and momenta of SM particles, like leptons and jets. These particles are detected by CMS, with large masses leaving potentially spectacular (and conspicuous) signatures. Each heavy sparticle is expected to continue to decay to lighter ones, ending with the lightest SUSY particles (LSPs). LSPs, though massive, are stable and do not decay in the detector. Instead, they appear as missing momentum. In cases of compressed sparticle mass spectra, the mass difference between the initially produced sparticles and LSPs is small. This means the low rates of production of massive sparticles are not accompanied by high-momentum decay products in the detector. Most of their mass ends up escaping in the form of invisible particles, significantly complicating observation.

This new CMS result turns this difficulty on its head, using a kinematic observable, R_ISR, which is directly sensitive to the mass of LSPs as opposed to the mass difference between parent sparticles and LSPs. The result is even better discrimination between SUSY and SM backgrounds when sparticle spectra are more compressed.

This approach focuses on events where putative SUSY candidates receive a significant “kick” from initial-state radiation (ISR) – additional jets recoiling opposite the system of sparticles. When the sparticle masses are highly compressed, the invisible, massive LSPs receive most of the ISR momentum-kick, with this fraction telling us about the LSP masses through the R_ISR observable.
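
In simplified form, the observable compares the missing transverse momentum projected onto the ISR axis with the ISR momentum itself. The sketch below is a toy Python illustration assuming a recursive-jigsaw-style, transverse-plane definition – a simplification of the full frame-by-frame reconstruction used in the analysis:

```python
import numpy as np

def r_isr(pt_miss, pt_isr):
    """Toy R_ISR: fraction of the ISR transverse-momentum kick carried by
    the invisible system; for compressed spectra this ratio approaches
    M_LSP / M_parent. Inputs are 2D transverse-momentum vectors in GeV."""
    isr_hat = pt_isr / np.linalg.norm(pt_isr)   # unit vector along the ISR axis
    projection = abs(np.dot(pt_miss, isr_hat))  # missing pT along that axis
    return projection / np.linalg.norm(pt_isr)  # dimensionless ratio

# toy event: a 300 GeV ISR jet recoils against the sparticle system and the
# LSPs carry most of the kick back, as expected for a compressed spectrum
print(r_isr(np.array([-270.0, 10.0]), np.array([300.0, 0.0])))  # ~0.9
```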

Given the generic applicability of the approach, the analysis is able to systematically probe a large class of possible scenarios. This includes events with various numbers of leptons (0, 1, 2 or 3) and jets (including those from heavy-flavour quarks), with a focus on objects with low momentum. These multiplicities, along with R_ISR and other selected discriminating variables, are used to categorise recorded events and a comprehensive fit is performed to all these regions. Compressed SUSY signals would appear at larger values of R_ISR, while bins at lower values are used to model and constrain SM backgrounds. With more than 2000 different bins in R_ISR, over several hundred object-based categories, a significant fraction of the experimental phase space in which compressed SUSY could hide is scrutinised.

In the absence of significant observed deviations in data yields from SM expectations, a large collection of SUSY scenarios can be excluded at high confidence level (CL), including those with the production of top squarks, EWKinos and sleptons. As can be seen in the results for top squarks (figure 1), the analysis is able to achieve excellent sensitivity to compressed SUSY. Here, as for many of the SUSY scenarios considered, the analysis provides the world’s most stringent constraints on compressed SUSY, further narrowing the space it could be hiding.

Taking the lead in the monopole hunt https://cerncourier.com/a/taking-the-lead-in-the-monopole-hunt/ Mon, 27 Jan 2025 07:15:21 +0000 https://cerncourier.com/?p=112230 Magnetic monopoles are hypothetical particles that would carry magnetic charge, a concept first proposed by Paul Dirac in 1931.

ATLAS figure 1

Magnetic monopoles are hypothetical particles that would carry magnetic charge, a concept first proposed by Paul Dirac in 1931. He pointed out that if monopoles exist, electric charge must be quantised, meaning that particle charges must be integer multiples of a fundamental charge. Electric charge quantisation is indeed observed in nature, with no other known explanation for this striking phenomenon. The ATLAS collaboration performed a search for these elusive particles using lead–lead (PbPb) collisions at 5.36 TeV per nucleon pair from Run 3 of the Large Hadron Collider.

The search targeted the production of monopole–antimonopole pairs via photon–photon interactions, a process enhanced in heavy-ion collisions due to the strong electromagnetic fields (∝ Z²) generated by the Z = 82 lead nuclei. Ultraperipheral collisions are ideal for this search, as they feature electromagnetic interactions without direct nuclear contact, allowing rare processes like monopole production to dominate the visible signatures. The ATLAS study employed a novel detection technique exploiting the expected highly ionising nature of these particles, leaving a characteristic signal in the innermost silicon detectors of the ATLAS experiment (figure 1).

The analysis employed a non-perturbative semiclassical model to estimate monopole production. Traditional perturbative models, which rely on Feynman diagrams, are inadequate due to the large coupling constant of magnetic monopoles. Instead, the study used a model based on the Schwinger mechanism, adapted for magnetic fields, to predict monopole production in the ultraperipheral collisions’ strong magnetic fields. This approach offers a more robust theoretical framework for the search.

ATLAS figure 2

The experiment’s trigger system was critical to the search. Given the high ionisation signature of monopoles, traditional calorimeter-based triggers were unsuitable, as even high-momentum monopoles lose energy rapidly through ionisation and do not reach the calorimeter. Instead, the trigger, newly introduced for the 2023 PbPb data-taking campaign, focused on detecting the forward neutrons emitted during electromagnetic interactions. The level-1 trigger system identified neutrons using the Zero-Degree Calorimeter, while the high-level trigger required more than 100 clusters of pixel-detector hits in the inner detector – an approach sensitive to monopoles due to their high ionisation signatures.

Additionally, the analysis examined the topology of pixel clusters to further refine the search, as a more aligned azimuthal distribution in the data would indicate a signature consistent with monopoles (figure 1), while the uniform distribution typically associated with beam-induced backgrounds could be identified and suppressed.

No significant monopole signal is observed beyond the expected background, with the latter being estimated using a data-driven technique. Consequently, the analysis set new upper limits on the cross-section for magnetic monopole production (figure 2), significantly improving existing limits for low-mass monopoles in the 20–150 GeV range. Assuming a non-perturbative semiclassical model, the search excludes monopoles with a single Dirac magnetic charge and masses below 120 GeV. The techniques developed in this search will open new possibilities to study other highly ionising particles that may emerge from beyond-Standard Model physics.

R(D) ratios in line at LHCb https://cerncourier.com/a/rd-ratios-in-line-at-lhcb/ Fri, 24 Jan 2025 16:00:50 +0000 https://cerncourier.com/?p=112240 The accidental symmetries observed between the three generations of leptons are poorly understood, with no compelling theoretical motivation.

LHCb figure 1

The accidental symmetries observed between the three generations of leptons are poorly understood, with no compelling theoretical motivation in the framework of the Standard Model (SM). The b → cτντ transition has the potential to reveal new particles or forces that interact primarily with third-generation particles, which are subject to less stringent experimental constraints at present. As a tree-level SM process mediated by W-boson exchange, its amplitude is large, resulting in large branching fractions and significant data samples to analyse.

The observable under scrutiny is the ratio of decay rates between the signal mode involving τ and ντ leptons from the third generation of fermions and the normalisation mode containing μ and νμ leptons from the second generation. Within the SM, this lepton flavour universality (LFU) ratio deviates from unity only due to the different masses of the charged leptons – but new contributions could change the value of the ratios. A longstanding tension exists between the SM prediction and the experimental measurements, requiring further input to clarify the source of the discrepancy.

The LHCb collaboration analysed four decay modes: B0 → D(*)–ℓ+νℓ, with ℓ representing τ or μ. Each is selected using the same visible final state of one muon and light hadrons from the decay of the charm meson. In the normalisation mode, the muon originates directly from the B-hadron decay, while in the signal mode, it arises from the decay of the τ lepton. The four contributions are analysed simultaneously, yielding two LFU ratios between taus and muons – one for the ground-state D+ meson and one for the excited state D*+.

The control of the background contributions is particularly complicated in this analysis as the final state is not fully reconstructible, limiting the resolution on some of the discriminating variables. Instead, a three-dimensional template fit separates the signal and the normalisation from the background using three discriminating variables: the momentum transferred to the lepton pair (q²); the energy of the muon in the rest frame of the B meson (Eμ*); and the invariant mass missing from the visible system. Each contribution is modelled using a template histogram derived either from simulation or from selected control samples in data.
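
As an illustration of the fitting strategy – reduced to one dimension with invented template shapes, so a sketch of the method rather than the analysis’ actual three-dimensional fit – a binned Poisson template fit takes only a few lines:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
bins = np.linspace(0, 10, 21)

# invented template shapes for signal, normalisation and background
sig, _ = np.histogram(rng.normal(7.0, 1.0, 10000), bins=bins)
nrm, _ = np.histogram(rng.normal(4.0, 1.5, 10000), bins=bins)
bkg, _ = np.histogram(rng.exponential(3.0, 10000), bins=bins)
templates = np.array([t / t.sum() for t in (sig, nrm, bkg)])

truth = np.array([300.0, 3000.0, 1000.0])      # "true" yields
data = rng.poisson(truth @ templates)          # pseudo-data

def nll(yields):
    mu = np.clip(yields @ templates, 1e-9, None)  # expected counts per bin
    return np.sum(mu - data * np.log(mu))         # Poisson NLL (up to a constant)

fit = minimize(nll, x0=[100.0, 1000.0, 500.0], method="Nelder-Mead")
print(fit.x)  # fitted signal / normalisation / background yields
```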

This constitutes the world’s second most precise measurement of R(D)

To prevent the simulated data sample size from becoming a limiting factor in the precision of the measurement, a fast tracker-only simulation technique was exploited for the first time in LHCb. Another novel aspect of this work is the use of the HAMMER software tool during the minimisation procedure of the likelihood fit, which enables a fast, but exact, variation of a template as a function of the decay-model parameters. This variation is important to allow the form factors of both the signal and normalisation channels to vary as the constraints derived from the predictions that use precise lattice calculations can have larger uncertainties than those obtained from the fit.

The fit projection over one of the discriminating variables is shown in figure 1, illustrating the complexity of the analysed data sample but nonetheless showcasing LHCb’s ability to distinguish the signal modes (red and orange) from the normalisation modes (two shades of blue) and background contributions.

The measured LFU ratios are in good agreement with the current world average and the predictions of the SM: R(D+) = 0.249 ± 0.043 (stat.) ± 0.047 (syst.) and R(D*+) = 0.402 ± 0.081(stat.) ± 0.085 (syst.). Under isospin symmetry assumptions, this constitutes the world’s second most precise measurement of R(D), following a 2019 measurement by the Belle collaboration. This analysis complements other ongoing efforts at LHCb and other experiments to test LFU across different decay channels. The precision of the measurements reported here is primarily limited by the size of the signal and control samples, so more precise measurements are expected with future LHCb datasets.

The B’s Ke+e–s https://cerncourier.com/a/the-bs-kee-s/ Fri, 24 Jan 2025 15:45:52 +0000 https://cerncourier.com/?p=112331 The Implications of LHCb measurements and future prospects workshop drew together more than 200 theorists and experimentalists from across the world.

The Implications of LHCb measurements and future prospects workshop drew together more than 200 theorists and experimentalists from across the world to CERN from 23 to 25 October 2024. Patrick Koppenburg (Nikhef) began the meeting by looking back 10 years, when three and four sigma anomalies abounded: the inclusive/exclusive puzzles; the illuminatingly named P5′ observable; and the lepton-universality ratios for rare B decays. While LHCb measurements have mostly eliminated the anomalies seen in the lepton-universality ratios, many of the other anomalies persist – most notably, the corresponding branching fractions for rare B-meson decays still appear to be suppressed significantly below Standard Model (SM) theory predictions. Sara Celani (Heidelberg) reinforced this picture with new results for Bs → φμ+μ– and Bs → φe+e–, showing the continued importance of new-physics searches in these modes.

Changing flavour

The discussion on rare B decays continued in the session on flavour-changing neutral currents. With new lattice-QCD results pinning down short-distance local hadronic contributions, the discussion focused on understanding the long-distance contributions arising from hadronic resonances and charm rescattering. Arianna Tinari (Zurich) and Martin Hoferichter (Bern) judged the latter not to be dramatic in magnitude. Lakshan Madhan (Cambridge) presented a new amplitude analysis in which the long- and short-distance contributions are separated via the kinematic dependence of the decay amplitudes. New theoretical analyses of the nonlocal form factors for B → K(*)μ+μ– and B → K(*)e+e– were representative of the workshop as a whole: truly the bee’s knees.

Another challenge to accurate theory predictions for rare decays, the widths of vector final states, snuck its way into the flavour-changing charged-currents session, where Luka Leskovec (Ljubljana) presented a comprehensive overview of lattice methods for decays to resonances. Leskovec’s optimistic outlook for semileptonic decays with two mesons in the final state stood in contrast to prospects for applying lattice methods to D–D̄ mixing: such studies are currently limited to the SU(3)-flavour symmetric point of equal light-quark masses, explained Felix Erben (CERN), though he offered a glimmer of hope in the form of spectral-reconstruction methods currently under development.

LHCb’s beauty and charm physics programme reported substantial progress. Novel techniques have been implemented in the most recent CP-violation studies, potentially leading to an impressive uncertainty of just 1° in future measurements of the CKM angle gamma. LHCb has recently placed a special emphasis on beauty and charm baryons, where the experiment offers unique capabilities to perform many interesting measurements ranging from CP violation to searches for very rare decays and their form factors. Going from three quarks to four and five, the spectroscopy session illustrated the rich and complex debate around tetraquark and pentaquark states with a big open discussion on the underlying structure of the 20 or so states discovered at LHCb: which are bound states of quarks and which are simply meson molecules? (CERN Courier November/December 2024 p26 and p33.)

LHCb’s ability to do unique physics was further highlighted in the QCD, electroweak (EW) and exotica session, where the collaboration showed its most recent publicly available measurement of the weak-mixing angle in conjunction with W/Z-boson production cross-sections and other EW observables. LHCb has put an emphasis on combined QCD + QED and effective-field-theory calculations, and on the interplay between EW precision observables and new-physics effects in couplings to the third generation. A study of hypothetical dark photons decaying to electrons, probing phase space inaccessible to any other experiment, showed LHCb to be a unique environment for direct searches for long-lived and low-mass particles.

Attendees left the workshop with a fresh perspective

Parallel to Implications 2024, the inaugural LHCb Open Data and Ntuple Wizard Workshop took place on 22 October as a satellite event, providing theorists and phenomenologists with a first look at a novel software application for on-demand access to custom ntuples from the experiment’s open data. The LHCb Ntupling Service will offer a step-by-step wizard for requesting custom ntuples and a dashboard to monitor the status of requests, communicate with the LHCb open data team and retrieve data. The beta version was released at the workshop in advance of the anticipated public release of the application in 2025, which promises open access to LHCb’s Run 2 dataset for the first time.

A recurring satellite event features lectures by theorists on topics related to LHCb’s scientific output. This year, Simon Kuberski (CERN) and Saša Prelovšek (Ljubljana) took the audience on a guided tour through lattice QCD and spectroscopy.

With LHCb’s integrated luminosity in 2024 exceeding all previous years combined, excitement was heightened. Attendees left the workshop with a fresh perspective on how to approach the challenges faced by our community.

From spinors to supersymmetry https://cerncourier.com/a/from-spinors-to-supersymmetry/ Fri, 24 Jan 2025 15:34:01 +0000 https://cerncourier.com/?p=112273 In their new book, From Spinors to Supersymmetry, Herbi Dreiner, Howard Haber and Stephen Martin describe the two-component formalism of spinors and its applications to particle physics, quantum field theory and supersymmetry.

From Spinors to Supersymmetry

This text is a hefty volume of around 1000 pages describing the two-component formalism of spinors and its applications to particle physics, quantum field theory and supersymmetry. The authors of this volume, Herbi Dreiner, Howard Haber and Stephen Martin, are household names in the phenomenology of particle physics with many original contributions in the topics that are covered in the book. Haber is also well known at CERN as a co-author of the legendary Higgs Hunter’s Guide (Perseus Books, 1990), a book that most collider physicists of the pre- and early-LHC eras are very familiar with.

The book starts with a 250-page introduction (chapters one to five) to the Standard Model (SM), covering more or less the theory material that one finds in standard advanced textbooks. The emphasis is on the theoretical side, with no discussion on experimental results, providing a succinct discussion of topics ranging from how to obtain Feynman rules to anomaly-cancellation calculations. In chapter six, extensions of the SM are discussed, starting with the seesaw-extended SM, moving on to a very detailed exposition of the two-Higgs-doublet model and finishing with grand unification theories (GUTs).

The second part of the book (from chapter seven onwards) is about supersymmetry in general. It begins with an accessible introduction that is also applicable to other beyond-SM-physics scenarios. This gentle and very pedagogical pattern continues to chapter eight, before proceeding to a more demanding supersymmetry-algebra discussion in chapter nine. Superfields, supersymmetric radiative corrections and supersymmetry breaking, which are discussed in the subsequent chapters, are more advanced topics that will be of interest to specialists in these areas.

The third part (chapter 13 onwards) discusses realistic supersymmetric models starting from the minimal supersymmetric SM (MSSM). After some preliminaries, chapter 15 provides a general presentation of MSSM phenomenology, discussing signatures relevant for proton–proton and electron–positron collisions, as well as direct dark-matter searches. A short discussion on beyond-MSSM scenarios is given in chapter 16, including NMSSM, seesaw, GUTs and R-parity violating theories. Phenomenological implications, for example their impact on proton decay, are also discussed.

Part four includes basic Feynman diagram calculations in the SM and MSSM using the two-component spinor formalism. Starting from very simple tree-level SM processes, like Bhabha scattering and Z-boson decays, it proceeds with tree-level supersymmetric processes, standard one-loop calculations and their supersymmetric counterparts, and Higgs-boson mass corrections. The presentation of this material is very practical and useful for those who want to see how to perform simple calculations in the SM or MSSM using the two-component spinor formalism. The material is accessible and detailed enough to be used for teaching master’s or graduate-level students.

A valuable resource for all those who are interested in the extensions of the SM, especially if they include supersymmetry

The book finishes with almost 200 pages of appendices covering all sorts of useful topics, from notation to commonly used identity lists and group theory.

The book requires some familiarity with master’s-level particle-physics concepts, for example via Halzen and Martin’s Quarks and Leptons or Paganini’s Fundamentals of Particle Physics. Some familiarity with quantum field theory is helpful but not needed for large parts of the book. No effort is made to be brief: two-component spinor formalism is discussed in all its detail in a very pedagogic and clear way. Parts two and three are a significant enhancement to the well known A Supersymmetry Primer (arXiv:hep-ph/9709356), which is very popular among beginners to supersymmetry and written by Stephen Martin, one of authors of this volume. A rich collection of exercises is included in every chapter, and the appendix chapters are no exception to this.

Do not let the word supersymmetry in the title fool you: even if you are not interested in supersymmetric extensions, you can find a detailed exposition of the two-component formalism for spinors, SM calculations with this formalism and a detailed discussion on how to design extensions of the scalar sector of the SM. Chapter three is particularly useful, describing in 54 pages how to get from the two-component to the four-component spinor formalism that is more familiar to many of us.

This is a book for advanced graduate students and researchers in particle-physics phenomenology, which nevertheless contains much that will be of interest to advanced physics students and particle-physics researchers in both theory and experiment. This is because the size of the volume allows the authors to start from the basics and dwell on topics that most other books of this type cover in less detail, making them less accessible. I expect that Dreiner, Haber and Martin will become a valuable resource for all those who are interested in extensions of the SM, especially if they include supersymmetry.

W mass snaps back https://cerncourier.com/a/w-mass-snaps-back/ Wed, 20 Nov 2024 13:58:46 +0000 https://cern-courier.web.cern.ch/?p=111397 A new measurement from the CMS experiment at the LHC contradicts the anomaly reported by CDF.

Based on the latest data inputs, the Standard Model (SM) constrains the mass of the W boson (mW) to be 80,353 ± 6 MeV. At tree level, mW depends only on the mass of the Z boson and the weak and electromagnetic couplings. The boson’s tendency to briefly transform into a top quark and a bottom quark causes the largest quantum correction. Any departure from the SM prediction could signal the presence of additional loops containing unknown heavy particles.
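
For orientation, the tree-level relation can be made concrete in a few lines. Below is a minimal numerical sketch using the standard on-shell formula and PDG-style input values – an illustration of the starting point, not the full calculation behind the quoted prediction:

```python
import math

# Tree level: m_W^2 * (1 - m_W^2 / m_Z^2) = pi * alpha / (sqrt(2) * G_F).
# Loop corrections (dominated by the top-bottom doublet) pull the result
# from ~80.9 GeV down towards the measured ~80.35 GeV.
alpha = 1 / 137.035999   # fine-structure constant at zero momentum transfer
G_F = 1.1663787e-5       # Fermi constant in GeV^-2
m_Z = 91.1876            # Z-boson mass in GeV

A2 = math.pi * alpha / (math.sqrt(2) * G_F)   # GeV^2
m_W = math.sqrt(0.5 * m_Z**2 * (1 + math.sqrt(1 - 4 * A2 / m_Z**2)))
print(f"tree-level m_W = {m_W:.2f} GeV")      # ~80.94 GeV before corrections
```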

The CDF experiment at the Tevatron observed just such a departure in 2022, plunging the boson into a midlife crisis 39 years after it was discovered at CERN’s SppS collider (CERN Courier September/October 2023 p27). A new measurement from the CMS experiment at the LHC now contradicts the anomaly reported by CDF. While the CDF result stands seven standard deviations above the SM, CMS’s measurement aligns with the SM prediction and previous results at the LHC. The CMS and CDF results claim joint first place in precision, provoking a dilemma for phenomenologists.

New-physics puzzle

“The result by CDF remains puzzling, as it is extremely difficult to explain the discrepancy with the three LHC measurements by the presence of new physics, in particular as there is also a discrepancy with D0 at the same facility,” says Jens Erler of Johannes Gutenberg-Universität Mainz. “Together with measurements of the weak mixing angle, the CMS result confirms the validity of the SM up to new physics scales well into the TeV region.”

“I would not call this ‘case closed’,” agrees Sven Heinemeyer of the Universidad Autónoma de Madrid. “There must be a reason why CDF got such an anomalously high value, and understanding what is going on may be very beneficial for future investigations. We know that the SM is not the last word, and there are clear cases that require physics beyond the SM (BSM). The question is at which scale BSM physics appears, or how strongly it is coupled to the SM particles.”

The result confirms the validity of the SM up to new physics scales well into the TeV region

To obtain their result, CDF analysed four million W-boson decays originating from 1.96 TeV proton–antiproton collisions at Fermilab’s Tevatron collider between 1984 and 2011. In stark disagreement with the SM, the analysis yielded a mass of 80,433.5 ± 9.4 MeV. This result induced the ATLAS collaboration to revisit its 2017 analysis of W → μν and W → eν decays in 7 TeV proton–proton collisions using the latest global data on parton distribution functions, which describe the probable momenta of quarks and gluons inside the proton. A newly developed fit was also implemented. The central value remained consistent with the SM, with a reduced uncertainty of 16 MeV increasing its tension with the new CDF result. A less precise measurement by the LHCb collaboration also favoured the SM (CERN Courier May/June 2023 p10).

CMS now reports mW to be 80,360.2 ± 9.9 MeV, concluding a study of W → μν decays begun eight years ago.

“One of the main strategic choices of this analysis is to use a large dataset of Run 2 data,” says CMS spokesperson Gautier Hamel de Monchenault. “We are using 16.8 fb–1 of 13 TeV data at a relatively high pileup of on average 25 interactions per bunch crossing, leading to very large samples of about 7.5 million Z bosons and 90 million W bosons.”

With high pileup and high energies come additional challenges. The measurement uses an innovative analysis technique that benchmarks W → μν decay systematics using Z → μμ decays as independent validation wherein one muon is treated as a neutrino. The ultimate precision of the measurement relies on reconstructing the muon’s momentum in the detector’s silicon tracker to better than one part in 10,000 – a groundbreaking level of accuracy built on minutely modelling energy loss, multiple scattering, magnetic-field inhomogeneities and misalignments. “What is remarkable is that this incredible level of precision on the muon momentum measurement is obtained without using Z → μμ as a calibration candle, but only using a huge sample of J/ψ → μμ events,” says Hamel de Monchenault. “In this way, the Z → μμ sample can be used for an independent closure test, which also provides a competitive measurement of the Z mass.”

Measurement matters

Measuring mW using W → μν decays is challenging because the neutrino escapes undetected. mW must be inferred from either the distribution of the transverse mass visible in the events (mT) or the distribution of the transverse momentum of the muons (pT). The mT approach used by CDF is the most precise option at the Tevatron, but typically less precise at the LHC, where hadronic recoil is difficult to distinguish from pileup. The LHC experiments also face a greater challenge when reconstructing mW from distributions of pT. In proton–antiproton collisions at the Tevatron, W bosons could be created via the annihilation of pairs of valence quarks. In proton–proton collisions at the LHC, the antiquark in the annihilating pair must come from the less well understood sea; and at LHC energies, the partons have lower fractions of the proton’s momentum – a less well constrained domain of parton distribution functions.
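
The transverse mass itself is straightforward to compute. A minimal sketch with toy numbers, using the standard massless-particle formula:

```python
import numpy as np

def transverse_mass(pt_lep, pt_miss, dphi):
    """W transverse mass from the charged-lepton pT, the missing transverse
    momentum and their azimuthal separation (lepton masses neglected)."""
    return np.sqrt(2.0 * pt_lep * pt_miss * (1.0 - np.cos(dphi)))

# toy event: 40 GeV muon, 38 GeV missing pT, nearly back-to-back in azimuth;
# the mT distribution of many such events has a Jacobian edge near m_W
print(transverse_mass(40.0, 38.0, np.radians(170.0)))  # ~77.7 GeV
```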

“Instead of exploiting the Z → μμ sample to tune the parameters of W-boson production, CMS is using the W data themselves to constrain the theory parameters of the prediction for the pT spectrum, and using the independent Z → μμ sample to validate this procedure,” explains Hamel de Monchenault. “This validation gives us great confidence in our theory modelling.”

“The CDF collaboration doesn’t have an explanation for the incompatibility of the results,” says spokesperson David Toback of Texas A&M University. “Our focus is on the checks of our own analysis and understanding of the ATLAS and CMS methods so we can provide useful critiques that might be helpful in future dialogues. On the one hand, the consistency of the ATLAS and CMS results must be taken seriously. On the other, given the number of iterations and improvements needed over decades for our own analysis – CDF has published five times over 30 years – we still consider both LHC results ‘early days’ and look forward to more details, improved methodology and additional measurements.”

The LHC experiments each plan improvements using new data. The results will build on a legacy of electroweak precision at the LHC that was not anticipated to be possible at a hadron collider (CERN Courier September/October 2024 p29).

“The ATLAS collaboration is extremely impressed with the new measurement by CMS and the extraordinary precision achieved using high-pileup data,” says spokesperson Andreas Hoecker. “It is a tour de force, accomplished by means of a highly complex fit, for which we applaud the CMS collaboration.” ATLAS’s next measurement of mW will focus on low-pileup data, to improve sensitivity to mT relative to their previous result.

The ATLAS collaboration is extremely impressed with the new measurement by CMS

The LHCb collaboration is working on an update of their measurement using its full Run 2 data set. LHCb’s forward acceptance may prove to be powerful in a global fit. “LHCb probes parton density functions in different phase space regions, and that makes the measurements from LHCb anticorrelated with those of ATLAS and CMS, promising a significant impact on the average, even if the overall uncertainty is larger,” says spokesperson Vincenzo Vagnoni. The goal is to progress LHC measurements towards a combined precision of 5 MeV. CMS plans several improvements to their own analysis.

“There is still a significant factor to be gained on the momentum scale, with which we could reach the same precision on the Z-boson mass as LEP,” says Hamel de Monchenault. “We are confident that we can also use a future, large low-pileup run to exploit the W recoil and mT to complement the muon pT spectrum. Electrons can also be used, although in this case the Z sample could not be kept independent in the energy calibration.”

Shifting sands for muon g–2 https://cerncourier.com/a/shifting-sands-for-muon-g-2/ Wed, 20 Nov 2024 13:56:37 +0000 https://cern-courier.web.cern.ch/?p=111400 Two recent results may ease the tension between theory and experiment.

Lattice–QCD calculation

The Dirac equation predicts the magnetic moment of the muon (g) to be precisely two in units of the Bohr magneton. Virtual lines and loops add roughly 0.1% to this value, giving rise to a so-called anomalous contribution often quantified by aμ = (g–2)/2. Countless electromagnetic loops dominate the calculation, spontaneous symmetry breaking is evident in the effect of weak interactions, and contributions from the strong force are non-perturbative. Despite this formidable complexity, theoretical calculations of aμ have been experimentally verified to nine significant figures.

The devil is in the 10th digit. The experimental world average for aμ currently stands more than 5σ above the Standard Model (SM) prediction published by the Muon g-2 Theory Initiative in a 2020 white paper. But two recent results may ease this tension in advance of a new showdown with experiment next year.

The first new input is data from the CMD-3 experiment at the Budker Institute of Nuclear Physics, which yields aμ consistent with experimental data. Comparable electron–positron (e+e–) collider data from the KLOE experiment at the National Laboratory of Frascati, the BaBar experiment at SLAC, the BESIII experiment at IHEP Beijing and CMD-3’s predecessor CMD-2 were the backbone of the 2020 theory white paper. With KLOE and CMD-3 now incompatible at the level of 5σ, theorists are exploring alternative bases for the theoretical prediction, such as an ab-initio approach based on lattice QCD and a data-driven approach using tau-lepton decays.

The second new result is an updated theory calculation of aμ by the Budapest–Marseille–Wuppertal (BMW) collaboration. BMW’s ab-initio lattice–QCD calculation of 2020 was the first to challenge the data-driven consensus expressed in the 2020 white paper. The recent update now claims a superior precision, driven in part by the pragmatic implementation of a data-driven approach in the low-mass region, where experiments are in good agreement. Though only accounting for 5% of the hadronic contribution to aμ, this “long distance” region is often the largest source of error in lattice–QCD calculations, and relatively insensitive to the use of finer lattices.

The new BMW result is fully compatible with the experimental world average, and incompatible with the 2020 white paper at the level of 4σ.

“It seems to me that the 0.9σ agreement between the direct experimental measurement of the magnetic moment of the muon and the ab-initio calculation of BMW has most probably postponed the possible discovery of new physics in this process,” says BMW spokesperson Zoltán Fodor (Wuppertal). “It is important to mention that other groups have partial results, too, so-called window results, and they all agree with us and in several cases disagree with the result of the data-driven method.”

These two analyses were among the many discussed at the seventh plenary workshop of the Muon g-2 Theory Initiative, held in Tsukuba, Japan from 9 to 13 September. The theory initiative is planning to release an updated prediction in a white paper due to be published in early 2025. With multiple mature e+e– and lattice-QCD analyses underway for several years, attention now turns to tau decays – the subject of a soon-to-be-announced mini-workshop to ensure their full availability for consideration as a possible basis for the 2025 white paper. Input data would likely originate from tau decays recorded by the Belle experiment at KEK and the ALEPH experiment at CERN, both now decommissioned.

I am hopeful we will be able to establish consolidation between independent lattice calculations at the sub-percent level

“From a theoretical point of view, the challenge for including the tau data is the isospin rotation that is needed to convert the weak hadronic tau decay to the desired input for hadronic vacuum polarisation,” explains theory-initiative chair Aida X El-Khadra (University of Illinois). Hadronic vacuum polarisation (HVP) is the most challenging part of the calculation of aμ, accounting for the effect of a muon emitting a virtual photon that briefly transforms into a flurry of quarks and gluons just before it absorbs the photon representing the magnetic field (CERN Courier May/June 2021 p25).
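
Schematically, the data-driven evaluation of the leading-order HVP contribution rests on the standard dispersion integral

```latex
a_\mu^{\mathrm{HVP,\,LO}}
  = \left(\frac{\alpha\, m_\mu}{3\pi}\right)^{2}
    \int_{m_\pi^2}^{\infty} \frac{\mathrm{d}s}{s^{2}}\, \hat{K}(s)\, R(s),
\qquad
R(s) = \frac{\sigma(e^{+}e^{-}\to \mathrm{hadrons})}
            {\sigma(e^{+}e^{-}\to \mu^{+}\mu^{-})},
```

where K̂(s) is a known, smooth QED kernel of order unity. The 1/s² weighting is what makes the low-energy region – precisely where CMD-3 and KLOE disagree – so decisive for the final number.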

Lattice QCD offers the possibility of a purely theoretical calculation of HVP. While BMW remains the only group to have published a full lattice-QCD calculation, multiple groups are zeroing in on its most sensitive aspects (CERN Courier September/October 2024 p21).

“The main challenge in lattice-QCD calculations of HVP is improving the precision to the desired sub-percent level, especially at long distances,” continues El-Khadra. “With the new results for the long-distance contribution by the RBC/UKQCD and Mainz collaborations that were already reported this year, and the results that are still expected to be released this fall, I am hopeful that we will be able to establish consolidation between independent lattice calculations at the sub-percent level. In this case we will provide a lattice-only determination of HVP in the second white paper.”

Data analysis in the age of AI https://cerncourier.com/a/data-analysis-in-the-age-of-ai/ Wed, 20 Nov 2024 13:50:36 +0000 https://cern-courier.web.cern.ch/?p=111424 Experts in data analysis, statistics and machine learning for physics came together from 9 to 12 September for PHYSTAT’s Statistics meets Machine Learning workshop.

Experts in data analysis, statistics and machine learning for physics came together from 9 to 12 September at Imperial College London for PHYSTAT’s Statistics meets Machine Learning workshop. The goal of the meeting, which is part of the PHYSTAT series, was to discuss recent developments in machine learning (ML) and their impact on the statistical data-analysis techniques used in particle physics and astronomy.

Particle-physics experiments typically produce large amounts of highly complex data. Extracting information about the properties of fundamental physics interactions from these data is a non-trivial task. The general availability of simulation frameworks makes it relatively straightforward to model the forward process of data analysis: to go from an analytically formulated theory of nature to a sample of simulated events that describe the observation of that theory for a given particle collider and detector in minute detail. The inverse process – to infer from a set of observed data what is learned about a theory – is much harder as the predictions at the detector level are only available as “point clouds” of simulated events, rather than as the analytically formulated distributions that are needed by most statistical-inference methods.

Traditionally, statistical techniques have found a variety of ways to deal with this problem, mostly centered on simplifying the data via summary statistics that can be modelled empirically in an analytical form. A wide range of ML algorithms, ranging from neural networks to boosted decision trees trained to classify events as signal- or background-like, have been used in the past 25 years to construct such summary statistics.
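
A minimal sketch of that traditional pipeline, with toy Gaussian features standing in for reconstructed event observables (scikit-learn assumed available):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X_sig = rng.normal(+0.5, 1.0, size=(5000, 4))   # simulated signal events
X_bkg = rng.normal(-0.5, 1.0, size=(5000, 4))   # simulated background events
X = np.vstack([X_sig, X_bkg])
y = np.concatenate([np.ones(5000), np.zeros(5000)])

# boosted decision trees compress the 4D feature space into one number
bdt = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X, y)
summary = bdt.predict_proba(X)[:, 1]  # per-event signal probability
# 'summary' is then histogrammed and modelled analytically downstream
```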

The broader field of ML has experienced a very rapid development in recent years, moving from relatively straightforward models capable of describing a handful of observable quantities, to neural models with advanced architectures such as normalising flows, diffusion models and transformers. These boast millions to billions of parameters that are potentially capable of describing hundreds to thousands of observables – and can now extract features from the data with an order-of-magnitude better performance than traditional approaches. 

New generation

These advances are driven by newly available computation strategies that not only calculate the learned functions, but also their analytical derivatives with respect to all model parameters, greatly speeding up training times, in particular in combination with modern computing hardware with graphics processing units (GPUs) that facilitate massively parallel calculations. This new generation of ML models offers great potential for novel uses in physics data analyses, but have not yet found their way to the mainstream of published physics results on a large scale. Nevertheless, significant progress has been made in the particle-physics community in learning the technology needed, and many new developments using this technology were shown at the workshop.

This new generation of machine-learning models offers great potential for novel uses in physics data analyses

Many of these ML developments showcase the ability of modern ML architectures to learn multidimensional distributions from point-cloud training samples to a very good approximation, even when the number of dimensions is large, for example between 20 and 100. 

A prime use-case of such ML models is an emerging statistical analysis strategy known as simulation-based inference (SBI), where learned approximations of the probability density of signal and background over the full high-dimensional observables space are used, dispensing with the notion of summary statistics to simplify the data. Many examples were shown at the workshop, with applications ranging from particle physics to astronomy, pointing to significant improvements in sensitivity. Work is ongoing on procedures to model systematic uncertainties, and no published results in particle physics exist to date. Examples from astronomy showed that SBI can give results of comparable precision to the default Markov chain Monte Carlo approach for Bayesian computations, but with orders of magnitude faster computation times.
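
One common classifier-based route to SBI is neural ratio estimation: a classifier trained to separate events simulated at parameter θ from events simulated at a reference θ₀ approximates the likelihood ratio via r(x) = s(x)/(1 − s(x)). A minimal sketch with a one-dimensional toy simulator, using a simple scikit-learn network in place of the advanced architectures mentioned above:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def simulate(theta, n):          # toy stand-in for a full forward simulation
    return rng.normal(theta, 1.0, size=(n, 1))

theta, theta0 = 1.0, 0.0
X = np.vstack([simulate(theta, 20000), simulate(theta0, 20000)])
y = np.concatenate([np.ones(20000), np.zeros(20000)])

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300).fit(X, y)
s = clf.predict_proba([[0.8]])[0, 1]
print(np.log(s / (1 - s)))  # approximates log p(x|theta)/p(x|theta0); ~0.3 here
```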

Beyond binning

A commonly used alternative approach to full-fledged theory-parameter inference from observed data is known as deconvolution or unfolding. Here the goal is to publish intermediate results in a form where the detector response has been taken out, stopping short of interpreting the result in a particular theory framework. The classical approach to unfolding requires estimating a response matrix that captures the smearing effect of the detector on a particular observable, and applying its inverse to obtain an estimate of a theory-level distribution – however, this approach is challenging and limited in scope, as the inversion is numerically unstable and requires a low-dimensional binning of the data. Results on several ML-based approaches were presented, which either learn the response matrix by modelling distributions outright (the generative approach) or learn classifiers that reweight simulated samples (the discriminative approach). Both approaches show very promising results that do not suffer the limitations on binning and dimensionality of the classical response-inversion approach.
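
For contrast with the ML methods, one widely used classical technique – iterative Bayesian (d’Agostini-style) unfolding – avoids the unstable direct inversion and fits in a dozen lines. The response matrix and spectrum below are invented for illustration:

```python
import numpy as np

# response matrix: R[i, j] = P(reco bin i | true bin j), columns sum to 1
R = np.array([[0.80, 0.15, 0.00],
              [0.20, 0.70, 0.20],
              [0.00, 0.15, 0.80]])
truth = np.array([1000.0, 600.0, 300.0])
rng = np.random.default_rng(0)
data = rng.poisson(R @ truth).astype(float)   # smeared pseudo-data

unfolded = np.full(3, data.sum() / 3)         # flat starting prior
for _ in range(4):                            # iterations act as regularisation
    folded = R @ unfolded
    unfolded = unfolded * (R.T @ (data / folded))  # Bayes redistribution step
print(unfolded)  # approaches the true spectrum as iterations proceed
```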

A third domain where ML is facilitating great progress is that of anomaly searches, where an anomaly can either be a single observation that doesn’t fit the distribution (mostly in astronomy), or a collection of events that together don’t fit the distribution (mostly in particle physics). Several analyses highlighted both the power of ML models in such searches and the bounds from statistical theory: it is impossible to optimise sensitivity for single-event anomalies without knowing the outlier distribution, and unsupervised anomaly detectors require a semi-supervised statistical model to interpret ensembles of outliers.
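For the single-event case, a minimal unsupervised sketch (toy data and an off-the-shelf isolation forest; ranking events is the easy part – interpreting an ensemble of such outliers still needs the semi-supervised statistical model noted above):

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
background = rng.normal(0.0, 1.0, size=(10_000, 5))
events = np.vstack([background, rng.normal(4.0, 0.5, size=(10, 5))])

# Score how poorly each event fits the bulk; lower = more anomalous.
detector = IsolationForest(random_state=0).fit(background)
scores = detector.score_samples(events)
print("ten most anomalous event indices:", np.argsort(scores)[:10])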

A final application of machine-learned distributions that was much discussed is data augmentation – sampling a new, larger synthetic sample from a learned distribution. If the synthetic sample is significantly larger than the training sample, its statistical power will be greater, but that power derives from the smooth interpolation of the model, potentially generating so-called inductive bias. The validity of the assumed smoothness depends on its realism in a particular setting, for which there is no generic validation strategy. Using a generative model in this way amounts to a tradeoff between bias and variance.
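The tradeoff is easy to exhibit with a toy generative model (a Gaussian mixture standing in for a flow or diffusion model; all numbers are illustrative):

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
train = rng.normal(0.0, 1.0, size=(5_000, 2))  # limited training sample

# Fit a smooth density model, then oversample it tenfold.
gm = GaussianMixture(n_components=8, random_state=0).fit(train)
synthetic, _ = gm.sample(50_000)

# The synthetic sample is larger, but its extra statistical power rests
# entirely on the model's smooth interpolation: any mismodelling enters
# every downstream result as an inductive bias.
print(train.mean(axis=0), synthetic.mean(axis=0))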

Interpretable and explainable

Beyond the various novel applications of ML, there were lively discussions on the more fundamental aspects of artificial intelligence (AI), notably on the notion of, and need for, AI to be interpretable or explainable. Explainable AI aims to elucidate what input information was used, and its relative importance, but this goal has no unambiguous definition. The discussion on the need for explainability centres to a large extent on trust: would you trust a discovery if it is unclear what information the model used and how it was used? Can you convince peers of the validity of your result? The notion of interpretable AI goes beyond that. It is a quality scientists often desire, as human knowledge resulting from AI-based science is generally desired to be interpretable, for example in the form of theories based on symmetries, or structures that are simple, or “low-rank”. However, interpretability has no formal criteria, which makes it an impractical requirement. Beyond practicality, there is also a fundamental point: why should nature be simple? Why should models that describe it be restricted to being interpretable? The almost philosophical nature of these questions made the discussion on interpretability one of the liveliest of the workshop, though for now without conclusion.

Human knowledge resulting from AI-based science is generally desired to be interpretable

For the longer-term future there are several interesting developments in the pipeline. In the design and training of new neural models, two techniques were shown to have great promise. The first is the concept of foundation models: very large models pre-trained on very large datasets to learn generic features of the data. When these pre-trained generic models are retrained to perform a specific task, they are shown to outperform purpose-trained models for that same task. The second is encoding domain knowledge in the network: networks that have known symmetry principles encoded in the model can significantly outperform models that are generically trained on the same data.

The evaluation of systematic effects is still mostly taken care of in the statistical post-processing step. Future ML techniques may more fully integrate systematic uncertainties, for example by reducing the sensitivity to these uncertainties through adversarial training or pivoting methods. Beyond that, future methods may also integrate the currently separate step of propagating systematic uncertainties (“learning the profiling”) into the training of the procedure. A truly global end-to-end optimisation of the full analysis chain may ultimately become feasible and computationally tractable for models that provide analytical derivatives.

The post Data analysis in the age of AI appeared first on CERN Courier.

A rich harvest of results in Prague https://cerncourier.com/a/a-rich-harvest-of-results-in-prague/ Wed, 20 Nov 2024 13:34:58 +0000 The 42nd international conference on high-energy physics reported progress across all areas of high-energy physics.

The 42nd international conference on high-energy physics (ICHEP) attracted almost 1400 participants to Prague in July. Expectations were high, with the field on the threshold of a defining moment, and ICHEP did not disappoint. A wealth of new results showed significant progress across all areas of high-energy physics.

With the long shutdown on the horizon, the third run of the LHC is progressing in earnest. Its high-availability operation and mastery of operational risks were highly praised. Run 3 data is of immense importance as it will be the dataset that experiments will work with for the next decade. With the newly collected data at 13.6 TeV, the LHC experiments showed new measurements of Higgs and di-electroweak-boson production, though of course most of the LHC results were based on the Run 2 (2015 to 2018) dataset, which is by now impeccably well calibrated and understood. This also allowed ATLAS and CMS to bring in-depth improvements to reconstruction algorithms.

AI algorithms

A highlight of the conference was the improvements brought by state-of-the-art artificial-intelligence algorithms such as graph neural networks, both at the trigger and reconstruction level. A striking example of this is the ATLAS and CMS flavour-tagging algorithms, which have improved their rejection of light jets by a factor of up to four. This has important consequences. Two outstanding examples are: di-Higgs-boson production, which is fundamental for the measurement of the Higgs boson self-coupling (CERN Courier July/August 2024 p7); and the Higgs boson’s Yukawa coupling to charm quarks. Di-Higgs-boson production should be independently observable by both general-purpose experiments at the HL-LHC, and an observation of the Higgs boson’s coupling to charm quarks is getting closer to being within reach.

The LHC experiments continue to push the limits of precision at hadron colliders. CMS and LHCb presented new measurements of the weak mixing angle. The per-mille precision reached is close to that of LEP and SLD measurements (CERN Courier September/October 2024 p29). ATLAS presented the most precise measurement to date (0.8%) of the strong coupling constant extracted from the measurement of the transverse momentum differential cross section of Drell–Yan Z-boson production. LHCb provided a comprehensive analysis of the B0 → K*0 μ+μ– angular distributions, which had previously presented discrepancies at the level of 3σ. Taking into account long-distance contributions significantly weakens the tension down to 2.1σ.

Pioneering the highest luminosities ever reached at colliders (setting a record at 4.7 × 1034 cm–2 s–1), SuperKEKB has been facing challenging conditions with repeated sudden beam losses. This is currently an obstacle to further progress to higher luminosities. Possible causes have been identified and are currently under investigation. Meanwhile, with the already substantial dataset collected so far, the Belle II experiment has produced a host of new results. In addition to improved CKM angle measurements (alongside LHCb), in particular of the γ angle, Belle II (alongside BaBar) presented interesting new insights into the long-standing puzzle of inclusive versus exclusive |Vcb| and |Vub| measurements (CERN Courier July/August 2024 p30), with new exclusive |Vcb| measurements that significantly reduce the previous 3σ tension.


ATLAS and CMS furthered their systematic journey in the search for new phenomena to leave no stone unturned at the energy frontier, with 20 new results presented at the conference. This landmark outcome of the LHC puts further pressure on the naturalness paradigm.

A highlight of the conference was the overall progress in neutrino physics. Accelerator-based experiments NOvA and T2K presented a first combined measurement of the mass difference, neutrino mixing and CP parameters. Neutrino telescopes IceCube with DeepCore and KM3NeT with ORCA (Oscillation Research with Cosmics in the Abyss) also presented results with impressive precision. Neutrino physics is now at the dawn of a bright new era of precision with the next-generation accelerator-based long baseline experiments DUNE and Hyper Kamiokande, the upgrade of DeepCore, the completion of ORCA and the medium baseline JUNO experiment. These experiments will bring definitive conclusions on the measurement of the CP phase in the neutrino sector and the neutrino mass hierarchy – two of the outstanding goals in the field.

The KATRIN experiment presented a new upper limit on the effective electron–anti-neutrino mass of 0.45 eV, well en route towards their ultimate sensitivity of 0.2 eV. Neutrinoless double-beta-decay search experiments KamLAND-Zen and LEGEND-200 presented limits on the effective neutrino mass of approximately 100 meV; the sensitivity of the next-generation experiments LEGEND-1T, KamLAND-Zen-1T and nEXO should reach 20 meV and either fully exclude the inverted ordering hypothesis or discover this long-sought process. Progress on the reactor neutrino anomaly was reported, with recent fission data suggesting that the fluxes are overestimated, thus weakening the significance of the anti-neutrino deficits.

Neutrinos were also a highlight for direct-dark-matter experiments as the XENON collaboration announced the observation of nuclear recoil events from 8B solar-neutrino coherent elastic scattering on nuclei, thus signalling that experiments are now reaching the neutrino fog. The conference also highlighted the considerable progress across the board on the roadmap laid out by Kathryn Zurek at the conference to search for dark matter in an extraordinarily large range of possibilities, spanning 89 orders of magnitude in mass from 10–23 eV to 1057 GeV. The roadmap includes cosmological and astrophysical observations, broad searches at the energy and intensity frontier, direct searches at low masses to cover relic abundance motivated scenarios, building a suite of axion searches, and pursuing indirect-detection experiments.


Neutrinos also made the headlines in multi-messenger astrophysics experiments with the announcement by the KM3NeT ARCA (Astroparticle Research with Cosmics in the Abyss) collaboration of a muon-neutrino event that could be the most energetic ever found. The muon produced in the neutrino interaction is compatible with an energy of approximately 100 PeV, opening a fascinating window on astrophysical processes at energies well beyond the reach of colliders. The conference showed that we are now well within the era of multi-messenger astrophysics, via beautiful neutrino, gamma-ray and gravitational-wave results.

The conference saw new bridges being built across fields. The birth of collider-neutrino physics, with the beautiful results from FASERν and SND, fills the missing gap in neutrino–nucleon cross sections between accelerator neutrinos and neutrino astronomy. ALICE and LHCb presented new results on He3 production that complement the AMS results. Astrophysical He3 could signal the annihilation of dark matter. ALICE also presented a broad, comprehensive review of the progress in understanding strongly interacting matter at extreme energy densities.

The highlight in the field of observational cosmology was the recent data from DESI, the Dark Energy Spectroscopic Instrument in operation since 2021, which brings splendid new baryon-acoustic-oscillation measurements. These precious new data agree with previous indirect measurements of the Hubble constant, keeping the tension with direct measurements in excess of 2.5σ. In combination with CMB measurements, the DESI data also set an upper limit on the sum of neutrino masses of 0.072 eV, in tension with the inverted-ordering hypothesis for neutrino masses. This limit is dependent on the cosmological model.

In everyone’s mind at the conference, and indeed across the domain of high-energy physics, it is clear that the field is at a defining moment in its history: we will soon have to decide what new flagship project to build. To this end, the conference organised a thrilling panel discussion featuring the directors of all the major laboratories in the world. “We need to continue to be bold and ambitious and dream big,” said Fermilab’s Lia Merminga, summarising the spirit of the discussion.

“As we have seen at this conference, the field is extremely vibrant and exciting,” said CERN’s Fabiola Gianotti at the conclusion of the panel. In these defining times for the future of our field, ICHEP 2024 was an important success. The progress in all areas is remarkable and manifest through the outstanding number of beautiful new results shown at the conference.

The post A rich harvest of results in Prague appeared first on CERN Courier.

NA62 observes its golden decay https://cerncourier.com/a/na62-observes-its-golden-decay/ Wed, 20 Nov 2024 13:21:18 +0000 The measurement is the most precise to date and about 50% higher than the SM prediction.

In a game of snakes and ladders, players move methodically up the board, occasionally encountering opportunities to climb a ladder. The NA62 experiment at CERN is one such opportunity. Searching for ultra-rare decays at colliders and fixed-target experiments like NA62 can offer a glimpse at energy scales an order of magnitude higher than is directly accessible when creating particles in a frontier machine.

The trick is to study hadron decays that are highly suppressed by the GIM mechanism (see “Charming clues for existence“). Should massive particles beyond the Standard Model (SM) exist at the right energy scale, they could disrupt the delicate cancellations expected in the SM by making brief virtual appearances according to the limits imposed by Heisenberg’s uncertainty principle. In a recent featured article, Andrzej Buras (Technical University Munich) identified the six most promising rare decays where new physics might be discovered before the end of the decade (CERN Courier July/August 2024 p30). Among them is K+→ π+νν, the ultra-rare decay sought by NA62. In the SM, fewer than one K+ in 10 billion decays this way, requiring the team to exercise meticulous attention to detail in excluding backgrounds. The collaboration has now announced that it has observed the process with 5σ significance.

“This observation is the culmination of a project that started more than a decade ago,” says spokesperson Giuseppe Ruggiero of INFN and the University of Florence. “Looking for effects in nature that have probabilities of happening of the order of 10–11 is both fascinating and challenging. After rigorous and painstaking work, we have finally seen the process NA62 was designed and built to observe.”

In the NA62 experiment, kaons are produced by directing a high-intensity proton beam from CERN’s Super Proton Synchrotron onto a stationary beryllium target. Almost a billion secondary particles are produced each second. Of these, about 6% are positively charged kaons that are tagged and matched with positively charged pions from the decay K+→ π+νν, with the neutrinos escaping undetected. Upgrades to NA62 during Long Shutdown 2 increased the experiment’s signal efficiency while maintaining its sample purity, allowing the collaboration to double the expected signal of their previous measurement using new data collected between 2021 and 2022. A total of 51 events pass the stringent selection criteria, over an expected background of 18+3–2 events, definitively establishing the existence of this decay for the first time.

NA62 measures the branching ratio for K+→ π+νν to be (13.0+3.3–2.9) × 10–11 – the most precise measurement to date and about 50% higher than the SM prediction, though compatible with it within 1.7σ at the current level of precision. NA62’s full data set will be required to test the validity of the SM in this decay. Data taking is ongoing.

The post NA62 observes its golden decay appeared first on CERN Courier.

Look to the Higgs self-coupling https://cerncourier.com/a/look-to-the-higgs-self-coupling/ Mon, 16 Sep 2024 14:18:52 +0000 Matthew McCullough argues that beyond-the-Standard Model physics may be most strongly expressed in the Higgs self-coupling.

What are the microscopic origins of the Higgs boson? As long as we lack the short-wavelength probes needed to study its structure directly, our best tool to confront this question is to measure its interactions.

Let’s consider two with starkly contrasting experimental prospects. The coupling of the Higgs boson to two Z bosons (HZZ) has been measured with a precision of around 5%, improving to around 1.3% by the end of High-Luminosity LHC (HL-LHC) operations. The Higgs boson’s self-coupling (HHH) has so far only been measured with a precision of the order of several hundred percent, improving to around the 50% level by the end of HL-LHC operations – though it’s now rumoured that this latter estimate may be too pessimistic.

Good motives

As HZZ can be measured much more precisely than HHH, is it the more promising window beyond the Standard Model (SM)? An agnostic might say that both measurements are equally valuable, while a “top down” theorist might seek to judge which theories are well motivated, and ask how they modify the two couplings. In supersymmetry and minimal composite Higgs models, for example, modifications to HZZ and HHH are typically of a similar magnitude. But “well motivated” is a slippery notion and I don’t entirely trust it.

Fortunately there is a happy compromise between these perspectives, using the tool of choice of the informed agnostic: effective field theory. It’s really the same physical principle as trying to look within an object when your microscope operates on wavelengths greater than its physical extent. Just as the microscopic structure of an atom is imprinted, at low energies, in its multipolar (dipole, quadrupole and so forth) interactions with photons, so too would the microscopic structure of the Higgs boson leave its trace in modifications to its SM interactions.

All possible coupling modifications from microscopic new physics can be captured by effective field theory and organised into classes of “UV-completion”. UV-completions are the concrete microscopic scenarios that could exist. (Here, ultraviolet light is a metaphor for the short-wavelength probes needed to study the Higgs boson’s microscopic origins in detail.) Scenarios with similar patterns are said to live in the same universality class. Families of universality classes can be identified from the bottom up. A powerful tool for this is naïve dimensional analysis (NDA).


One particularly sharp arrow in the NDA quiver is ℏ counting, which establishes how many couplings and/or ℏs must be present in the EFT modification of an interaction. Couplings tell you the number of fundamental interactions involved. ℏs establish the need for quantum effects. For instance, NDA tells us that the coefficient of the Fermi interaction must have two couplings, which the electroweak theory duly supplies – a W boson transforms a neutron into a proton, and then decays into an electron and a neutrino.
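For orientation, the matching alluded to here can be written explicitly (a standard tree-level result, quoted as an illustration; g denotes the weak coupling):

\[ \frac{G_F}{\sqrt{2}} = \frac{g^2}{8\,m_W^2}, \]

with one power of g at each of the two vertices of the W-boson exchange – exactly the two couplings that NDA demands.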

For our purposes, NDA tells us that modifications to HZZ must necessarily involve one more ℏ or two fewer couplings than any underlying EFT interaction that modifies HHH. In the case of one more ℏ, modifications to HZZ could potentially be an entire quantum loop factor smaller than modifications to HHH. In the case of two fewer couplings, modifications to HHH could be as large as a factor g² greater than for HZZ, where g is a generic coupling. Either way, it is theoretically possible that the BSM modifications could be up to a couple of orders of magnitude greater for HHH than for HZZ. (Naively, a loop factor counts as around 1/(16π²), or about 0.01, and in the most strongly interacting scenarios g² can rise to about 16π².)
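As a schematic summary of this counting (an order-of-magnitude sketch, not a statement about any specific model):

\[ \frac{\delta_{HHH}}{\delta_{HZZ}} \sim 16\pi^2 \approx 200 \ \ \text{(one more } \hbar\text{)} \qquad \text{or} \qquad \frac{\delta_{HHH}}{\delta_{HZZ}} \sim g^2 \ \ \text{(two fewer couplings)}, \]

with g² ranging up to about 16π² in the most strongly interacting scenarios – hence the “couple of orders of magnitude” quoted above.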

Why does this contrast so strongly with supersymmetry and the minimal composite Higgs? They are simply in universality classes where modifications to HZZ and HHH are comparable in magnitude. But there are more universality classes in heaven and Earth than are dreamt of in our well-motivated scenarios.

Faced with the theoretical possibility of a large hierarchy in coupling modifications, it behoves the effective theorist to provide an existence proof of a concrete UV-completion where this happens, or we may have revealed a universality class of measure zero. But such an example exists: the custodial quadruplet model. I often say it’s a model that only a mother could love, but it could exist in nature, and gives rise to coupling modifications a full loop factor of about 200 greater for HHH than HZZ.

When confronted with theories beyond the SM, all Higgs couplings are not born equal: UV-completions matter. Though HZZ measurements are arguably the most powerful general probe, future measurements of HHH will explore new territory that is inaccessible to other coupling measurements. This territory is largely uncharted, exotic and beyond the best guesses of theorists. Not bad circumstances for the start of any adventure.

The post Look to the Higgs self-coupling appeared first on CERN Courier.

Electroweak SUSY after LHC Run 2 https://cerncourier.com/a/electroweak-susy-after-lhc-run-2/ Mon, 16 Sep 2024 14:13:36 +0000

ATLAS figure 1

Supersymmetry (SUSY) provides elegant solutions to many of the problems of the Standard Model (SM) by introducing new boson/fermion partners for each SM fermion/boson, and by extending the Higgs sector. If SUSY is realised in nature at the TeV scale, it would accommodate a light Higgs boson without excessive fine-tuning. It could furthermore provide a viable dark-matter candidate, and be a key ingredient to the unification of the electroweak and strong forces at high energy. The SUSY partners of the SM bosons can mix to form what are called charginos and neutralinos, collectively referred to as electroweakinos.

Electroweakinos would be produced only through the electroweak interaction, so their production cross sections in proton–proton collisions are orders of magnitude smaller than those of strongly produced squarks and gluinos (the supersymmetric partners of quarks and gluons). Therefore, while extensive searches using the Run 1 (7–8 TeV) and Run 2 (13 TeV) LHC datasets have turned up null results, the corresponding chargino/neutralino exclusion limits remain substantially weaker than those for strongly interacting SUSY particles.

The ATLAS collaboration has recently released a comprehensive analysis of the electroweak SUSY landscape based on its Run 2 searches. Each individual search targeted specific chargino/neutralino production mechanisms and subsequent decay modes. The analyses were originally interpreted in so-called “simplified models”, where only one production mechanism is considered, and only one possible decay. However, if SUSY is realised in nature, its particles will have many possible production and decay modes, with rates depending on the SUSY parameters. The new ATLAS analysis brings these pieces together by reinterpreting 10 searches in the phenomenological Minimal Supersymmetric Standard Model (pMSSM), which includes a range of SUSY particles, production mechanisms and decay modes governed by 19 SUSY parameters. The results provide a global picture of ATLAS’s sensitivity to electroweak SUSY and, importantly, reveal the gaps that remain to be explored.

ATLAS figure 2

The 19-dimensional pMSSM parameter space was randomly sampled to produce a set of 20,000 SUSY model points. The 10 selected ATLAS searches were then performed on each model point to determine whether it is excluded with at least 95% confidence level. This involved simulating datasets for each SUSY model, and re-running the corresponding analyses and statistical fits. An extensive suite of reinterpretation tools was employed to achieve this, including preserved likelihoods and RECAST – a framework for preserving analysis workflows and re-applying them to new signal models.
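The statistical core of such a scan can be caricatured with a toy counting experiment in Python (a deliberately simplified stand-in for the preserved likelihoods and RECAST workflows described above; all yields are invented):

import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(4)
b, n_obs = 50.0, 52  # expected background and observed event count

def excluded_95(s, b, n_obs):
    # CLs for one counting channel: exclude the signal hypothesis when
    # CLs = P(n <= n_obs | s+b) / P(n <= n_obs | b) falls below 0.05.
    return poisson.cdf(n_obs, s + b) / poisson.cdf(n_obs, b) < 0.05

# Each sampled "model point" predicts some signal yield in the channel.
for s in rng.uniform(5.0, 40.0, size=5):
    print(f"signal yield {s:5.1f}: excluded = {excluded_95(s, b, n_obs)}")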

The results show that, while electroweakino masses have been excluded up to 1 TeV in simplified models, the coverage with regard to the pMSSM is not exhaustive. Numerous scenarios remain viable, including mass regions nominally covered by previous searches (inside the dashed line in figure 1). The pMSSM models may evade detection due to smaller production cross-sections and decay probabilities compared to simplified models. Scenarios with small mass-splittings between the lightest and next-to-lightest neutralino can reproduce the dark-matter relic density, but are particularly elusive at the LHC. The decays in these models produce challenging event features with low-momentum particles that are difficult to reconstruct and separate from SM events.

Beyond ATLAS, experiments such as LZ aim at detecting relic dark-matter particles through their scattering by target nuclei. This provides a complementary probe to ATLAS searches for dark matter produced in the LHC collisions. Figure 2 shows the LZ sensitivity to the pMSSM models considered by ATLAS, compared to the sensitivity of its SUSY searches. ATLAS is particularly sensitive to the region where the dark-matter candidate is around half the Z/Higgs-boson mass, causing enhanced dark-matter annihilation that could have reduced the otherwise overabundant dark-matter relic density to the observed value.

The new ATLAS results demonstrate the breadth and depth of its search programme for supersymmetry, while uncovering its gaps. Supersymmetry may still be hiding in the data, and several scenarios have been identified that will be targeted, benefiting from the incoming Run 3 data.

The post Electroweak SUSY after LHC Run 2 appeared first on CERN Courier.

Electroweak precision at the LHC https://cerncourier.com/a/electroweak-precision-at-the-lhc/ Mon, 09 Sep 2024 12:53:50 +0000 Geared for discovery more so than delicacy, the LHC is defying expectations by rivalling lepton colliders for precision.

The Standard Model – an inconspicuous name for one of the great human inventions. It describes all known elementary particles and their interactions, except for gravity. About 19 free parameters tune its behaviour. To the best of our knowledge, they could in principle take any value, and no underlying theory yet conceived can predict their values. They include particle masses, interaction strengths, important technical numbers such as mixing angles and phases, and the vacuum strength of the Higgs field, which theorists believe has alone among fundamental fields permeated every cubic attometre of the universe, since almost the beginning of time. Measuring these parameters is the most fundamental experimental task available to modern science.

The basic constituents of matter interact through forces which are mediated by virtual particles that ping back and forth, delivering momentum and quantum numbers. The gluon mediates the strong interaction, the photon mediates the electromagnetic interaction, and the W and Z bosons mediate the weak interaction. Although the electromagnetic and weak forces operate very differently to each other in everyday life, in the Standard Model they are two manifestations of the broken electroweak interaction – an interaction that broke when the Higgs field switched on throughout the universe, giving mass to matter particles, the W and Z bosons, and the Higgs boson itself, via the Brout–Englert–Higgs (BEH) mechanism. The electroweak theory has been extraordinarily successful in describing experimental results, but it remains mysterious – and the BEH mechanism is the origin of some of those free parameters. The best way to test the electroweak model is to over-constrain its free parameters using precision measurements and try to find a breaking point.

An artist’s visualisation of a proton

Ever since the late 1960s, when Steven Weinberg, Sheldon Glashow and Abdus Salam unified the electromagnetic and weak forces using the BEH mechanism, CERN has had an intimate experimental relationship with the electroweak theory. In 1973 the Z boson was indirectly discovered by observing “neutral current” events in the Gargamelle bubble chamber, using a neutrino beam from the Proton Synchrotron. The W boson was discovered in 1983 at the Super Proton Synchrotron collider, followed by the direct observation of the Z boson in the same machine soon after. The 1990s witnessed a decade of exquisite electroweak precision measurements at the Large Electron Positron (LEP) collider at CERN and the Stanford Linear Collider (SLC) at SLAC National Accelerator Laboratory in the US, before the crown jewel of the electroweak sector, the Higgs boson, was discovered by the ATLAS and CMS collaborations at the Large Hadron Collider (LHC) in 2012 – a remarkable success that delivered the last to be observed, and arguably most mysterious, missing piece of the Standard Model.

What was not expected, was that the ATLAS, CMS and LHCb experiments at the LHC would go on to make electroweak measurements that rival in precision those made at lepton colliders.

Discovery or precision?

Studying the electroweak interaction requires a supply of W and Z bosons. For that, you need a collider. Electrons and positrons are ideally suited for the task as they interact exclusively via the electroweak interaction. By precisely tuning the energy of electron–positron collisions, experiments at LEP and the SLC tested the electroweak sector with an unprecedented 0.1% accuracy at the energy scale of the Z-boson mass (mZ).

The ATLAS detector

Hadron colliders like the LHC have different strengths and weaknesses. Equipped to copiously produce all known Standard Model particles – and perhaps also hypothetical new ones – they are the ultimate instruments for probing the high-energy frontier of our understanding of the microscopic world. The protons they collide are not elementary, but a haze of constituent quarks and gluons that bubble and fizz with quantum fluctuations. Each constituent “parton” carries an unpredictable fraction of the proton’s energy. This injects unavoidable uncertainty into studies of hadron collisions that physicists attempt to encode in probabilistic parton distribution functions. What’s more, when a pair of partons from the two opposing protons interact in an interesting way, the result is overlaid by numerous background particles originating from the remaining partons that were untouched by the original collision – a complexity that is exacerbated by the difficult-to-model strong force which governs the behaviour of quarks and gluons. As a result, hadron colliders have a reputation for being discovery machines with limited precision.

The LHCb detector

The LHC has collided protons at the energy frontier since 2010, delivering far more collisions than comparable previous machines such as the Tevatron at Fermilab in the US. This has enabled a comprehensive search and measurement programme. Following the discovery of the Higgs boson in 2012, measurements have so far verified its place in the electroweak sector of the Standard Model, although the relative precisions of many measurements are currently far lower than those achieved for the W and Z bosons at LEP. But in defiance of expectations, the capabilities of the LHC experiments and the ingenuity of analysts have also enabled many of the world’s most precise measurements of the electroweak interaction. Here, we highlight five.

1. Producing W and Z bosons

When two streams of objects meet, how many strike each other depends on their cross-sectional area. Though quarks and other partons are thought to be fundamental objects with zero extent, particle physicists borrow this logic for particle beams, and extend it by subdividing the metaphorical cross section according to the resulting interactions. The range of processes used to study W and Z bosons at the LHC spans a remarkable eight orders of magnitude in cross section.

WW, WZ and ZZ cross sections as a function of centre-of-mass energy

The most common interaction is the production of single W and Z bosons through the annihilation of a quark and an antiquark in the colliding protons. Measurements with single W and Z boson events have now reached a precision well below 1% thanks to the excellent calibration of the detector performance. They are a prodigious tool for testing and improving the modelling of the underlying process, for example using parton distribution functions.

The second most common interaction is the simultaneous production of two bosons. Measurements of “diboson” processes now routinely reach a precision better than 5%. Since the start of the LHC operation, the accelerator has operated at several collision energies, allowing the experiments to map diboson cross sections as a function of energy. Measurements of the cross sections for creating WW, WZ and ZZ pairs exhibit remarkable agreement with state-of-the-art Standard Model predictions (see “Diboson production” figure).

The large amount of collected data at the LHC has recently allowed us to move the frontier to the observation of extremely infrequent “triboson” processes with three W or Z bosons, or photons, produced simultaneously – the first step towards confirming the existence of the quartic self-interaction between the electroweak bosons.

2. The weak mixing angle

The Higgs potential is famously thought to resemble a Mexican hat. The Higgs field that permeates space could in principle exist with a strength corresponding to any point on its surface. Theorists believe it settled somewhere in the brim a picosecond or so after the Big Bang, breaking the perfect symmetry of the hat’s apex, where its value was zero. This switched the Higgs field on throughout the universe – and the massless gauge bosons of the unified electroweak theory mixed to form the photon and W and Z boson mass eigenstates that mediate the broken electroweak interaction today. The weak mixing angle θW is the free parameter of the Standard Model which defines that mixing.

Measurements of the effective weak mixing angle

The θW angle can be studied using a beautifully simple interaction: the annihilation of a quark and its antiquark to create an electron and a positron or a muon and an antimuon. When the pair has an invariant mass in the vicinity of mZ, there is a small preference for the negatively charged lepton to be produced in the same direction as the initial quark. This arises due to quantum interference between the Z boson’s vector and axial-vector couplings, whose relative strengths depend on θW.

The unique challenge at a proton–proton collider like the LHC is that the initial directions of the quark and the antiquark can only be inferred using our limited knowledge of parton distribution functions. These systematic uncertainties currently dominate the total uncertainty, although they can be reduced somewhat by using information on lepton pairs produced away from the Z resonance. The CMS and LHCb collaborations have recently released new measurements consistent with the Standard Model prediction with a precision comparable to that of the LEP and SLC experiments (see “Weak mixing angle” figure).

Quantum physics effects play an interesting role here. In practice, it is not possible to experimentally isolate “tree level” properties like θW, which describe the simplest interactions that can be drawn on a Feynman diagram. Measurements are in fact sensitive to the effective weak mixing angle, which includes the effect of quantum interference from higher-order diagrams.

A crucial prediction of electroweak theory is that the masses of the W and Z bosons are, at leading order, related by the electroweak mixing angle: sin²θW = 1 – m²W/m²Z, where mW and mZ are the masses of the W and Z bosons. This relationship is modified by quantum loops involving the Higgs boson, the top quark and possibly new particles. Measuring the parameters of the electroweak theory precisely, therefore, allows us to test for any gaps in our understanding of nature.

Surprisingly, combining this relationship with the mZ measurement from LEP and the CMS measurement of θW also allows a competitive measurement of mW. A measurement of sin2θW with a precision of 0.0003 translates into a prediction of mW with 15 MeV precision, which is comparable to the best direct measurements.
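The 15 MeV figure follows from propagating the uncertainty through the leading-order relation quoted above (a back-of-the-envelope check, using mW ≈ 80.4 GeV and mZ ≈ 91.2 GeV):

\[ \delta m_W = \frac{m_Z^2}{2\,m_W}\,\delta(\sin^2\theta_W) \approx \frac{(91.2\ \text{GeV})^2}{2 \times 80.4\ \text{GeV}} \times 0.0003 \approx 15\ \text{MeV}. \]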

3. The mass and width of the W boson

Precisely measuring the mass of the W boson is of paramount importance to efforts to further constrain the relationships between the parameters of the electroweak theory, and probe possible beyond-the-Standard Model contributions. Particle lifetimes also offer a sensitive test of the electroweak theory. Because of their large masses and numerous decay channels, the W and Z bosons have mean lifetimes of less than 10–24 s. Though this is an impossibly brief time interval to measure directly, Heisenberg’s uncertainty principle smudges a particle’s observed mass by a certain “width” when it is produced in a collider. This width can be measured by fitting the mass distribution of many virtual particles. It is reciprocally related to the particle’s lifetime.
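The reciprocal relation is simply τ = ℏ/Γ. As a worked example, taking the W boson’s width of about 2.1 GeV and ℏ ≈ 6.58 × 10–25 GeV s:

\[ \tau_W = \frac{\hbar}{\Gamma_W} \approx \frac{6.58 \times 10^{-25}\ \text{GeV s}}{2.1\ \text{GeV}} \approx 3 \times 10^{-25}\ \text{s}. \]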

Measurement of the W boson’s mass and width

While lepton-collider measurements of the properties of the Z boson were extensive and achieved remarkable precision, the same is not quite true for the W boson. The mass of the Z boson was measured with a precision of 0.002%, but the mass of the W boson was measured with a precision of only 0.04% – a factor 20 worse. The reason is that while single Z bosons were copiously produced at LEP and SLC, W bosons could not be produced singly, due to charge conservation. W+W– pairs were produced, though only at low rates at LEP energies.

In contrast to LEP, hadron colliders produce large quantities of single W bosons through quark–antiquark annihilation. The LHC produces more single W bosons in a minute than all the W-boson pairs produced in the entire lifetime of LEP. Even when only considering decays to electrons or muons and their respective neutrinos – the most precise measurements – the LHC experiments have recorded billions of W-boson events.

But there are obstacles to overcome. The neutrino in the final state escapes undetected. Its transverse momentum with respect to the beam direction can only be measured indirectly, by measuring all other products of the collision – a major experimental challenge in an environment with not just one, but up to 60 simultaneous proton–proton collisions. Its longitudinal momentum cannot be measured at all. And as the W bosons are not produced at rest, extensive theoretical calculations and ancillary measurements are needed to model their momenta, incurring uncertainties from parton distribution functions.

Despite these challenges, the latest measurement of the W boson’s mass by the ATLAS collaboration achieved a precision of roughly 0.02% (see “Mass and width” figure, top). The LHCb collaboration also recently produced its first measurement of the W-boson mass using W bosons produced close to the beam line with a precision at the 0.04% level, dominated for now by the size of the data sample. Owing to the complementary detector coverage of the LHCb experiment with respect to the ATLAS and CMS experiments, several uncertainties are reduced when these measurements are combined.

The Tevatron experiments CDF and D0 also made precise W-boson measurements using proton–antiproton collisions at a lower centre-of-mass energy. The single most precise mass measurement, at the 0.01% level, comes from CDF. It is in stark disagreement with the Standard Model prediction and disagrees with the combination of other measurements.

A highly anticipated measurement by the CMS collaboration may soon weigh in decisively in favour either of the CDF measurement or the Standard Model. The CMS measurement will combine innovative analysis techniques using the Z boson with a larger 13 TeV data set than the 7 TeV data used by the recent ATLAS measurement, enabling more powerful validation samples and thereby greater power to reduce systematic uncertainties.

Measurements of the W boson’s width are not yet sufficiently precise to constrain the Standard Model significantly, though the strongest constraint so far comes from the ATLAS collaboration (see “Mass and width” figure, bottom). Further measurements are a promising avenue to test the Standard Model. If the W boson decays into any hitherto undiscovered particles, its lifetime should be shorter than predicted, and its width greater, potentially indicating the presence of new physics.

4. Couplings of the W boson to leptons

Within the Standard Model, the W and Z bosons have equal couplings to leptons of each of the three generations – a property known as lepton flavour universality (LFU). Any experimental deviation from LFU would indicate new physics.

Ratios of branching fractions for the W boson

As with mass and width, lepton colliders achieved better precision for the Z boson than for the W boson. LEP confirmed LFU in leptonic Z-boson decays to about 0.3%. Comparing the three branching fractions of the W boson in the electron, muon and tau–lepton decay channels, the combination of the four LEP experiments reached a precision of only about 2%.

At the LHC, the large cross section for producing top quark–antiquark pairs that both decay into a W boson and a bottom quark offers a unique sample of W-boson pairs for high-precision studies of their decays. The resulting measurements are the most precise tests of LFU for all three possible comparisons of the coupling of the lepton flavours to the W boson (see “Couplings to leptons” figure).

Regarding the tau lepton to muon ratio, the ATLAS collaboration observed 0.992 ± 0.013 decays to a tau for every one decay to a muon. This result favours LFU and is twice as precise as the corresponding LEP result of 1.066 ± 0.025, which exhibits a deviation of 2.6 standard deviations from unity. Because of the relatively long tau lifetime, ATLAS was able to separate muons produced in the decay of tau leptons from those produced promptly by observing tau decay lengths of the order of 2 mm.

The best tau to electron measurement is provided by a simultaneous CMS measurement of all the leptonic and hadronic decay branching fractions of the W boson. The analysis splits the top quark–antiquark pair events based on the multiplicity and flavour of reconstructed leptons, the number of jets, and the number of jets identified as originating from the hadronisation of b quarks. All CMS ratios are consistent with the LFU hypothesis and reduce tension with the Standard Model prediction.

Regarding the muon to electron ratio, measurements have been performed by several LHC and Tevatron experiments. The observed results are consistent with LFU, with the most precise measurement from the ATLAS experiment boasting a precision better than 0.5%.

5. The invisible width of the Z boson

A groundbreaking measurement at LEP deduced how often a particle that cannot be directly observed decays to particles that cannot be detected. The particle in question is the Z boson. By scanning the energy of electron–positron collisions and measuring the broadness of the “lineshape” of the smudged bump in interactions around the mass of the Z, LEP physicists precisely measured its width. As previously noted, a particle’s width is reciprocal to its lifetime and therefore proportional to its decay rate – something that can also be measured by directly accounting for the observed rate of decays to visible particles of all types. The difference between the two numbers is due to Z-boson decays to so-called invisible particles that cannot be reconstructed in the detector. A seminal measurement concluded that exactly three species of light neutrino couple to the Z boson.
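Numerically, the accounting works out as follows (LEP-era values in MeV, rounded; quoted here only to illustrate the logic of the subtraction):

\[ N_\nu = \frac{\Gamma_{\text{inv}}}{\Gamma_{\nu\bar{\nu}}^{\text{SM}}} = \frac{\Gamma_Z - \Gamma_{\text{had}} - 3\,\Gamma_{\ell\ell}}{\Gamma_{\nu\bar{\nu}}^{\text{SM}}} \approx \frac{2495 - 1744 - 3 \times 84}{167} \approx 3.0. \]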

Invisible width measurements

The LEP experiments also measured the invisible width of the Z boson using an ingenious method that searched for solitary “recoils”. Here, the trick was to look for the rare occasion when the colliding electron or positron emitted a photon just before creating a virtual Z boson that decayed invisibly. Such events would yield nothing more than a single photon recoiling from an otherwise invisible Z-boson decay.

The ATLAS and CMS collaborations recently performed similar measurements, requiring the invisibly decaying Z boson to be produced alongside a highly energetic jet in place of a recoil photon. By taking the ratio with equivalent recoil decays to electrons and muons, they achieved remarkable uncertainties of around 2%, equivalent to LEP, despite the much more challenging environment (see “Invisible width” figure). The results are consistent with the Standard Model’s three generations of light neutrinos.

Future outlook

Building on these achievements, the LHC experiments are now readying themselves for a more than comparable experimental programme, which is yet to begin. Following the ongoing run of the LHC, a high-luminosity upgrade (HL-LHC) is scheduled to operate throughout the 2030s, delivering a total integrated luminosity of 3 ab–1 to both ATLAS and CMS. The LHCb experiment also foresees a major upgrade to collect an integrated luminosity of more than 300 fb–1 by the end of the LHC operations. A tenfold data set, upgraded detectors and experimental methods, and improvements to theoretical modelling will greatly extend both experimental precision and the reach of direct and indirect searches for new physics. Unprecedented energy scales will be probed and anomalies with respect to the Standard Model may become apparent.

The Large Hadron Collider

Despite the significant challenges posed by systematic uncertainties, there are good prospects to further improve uncertainties in precision electroweak observables such as the mass of the W boson and the effective weak mixing angle, thanks to the larger angular acceptances of the new inner tracking devices currently under production by ATLAS and CMS. A possible programme of high-precision measurements in electron–proton collisions, the LHeC, could deliver crucial input to reduce uncertainties such as from parton distribution functions. The LHeC has been proposed to run concurrently with the HL-LHC by adding an electron beam to the LHC.

Beyond the HL-LHC programme, several proposals for future particle colliders have captured the imagination of the global particle-physics community – and not least the two phases of the Future Circular Collider (FCC) being studied at CERN. With a circumference three to four times greater than that of the LEP/LHC tunnel, electron–positron collisions could be delivered with very high luminosity and centre-of-mass energies from 90 to 365 GeV in the initial FCC-ee phase. The FCC-ee would facilitate an impressive leap in the precision of most electroweak observables. Projections estimate a factor of 10 improvement for Z-boson measurements and up to 100 for W-boson measurements. For the first time, the top quark could be produced in an environment where it is not colour-connected to initial hadrons, in some cases reducing uncertainties by a factor of 10 or more.

The LHC collaborations have made remarkable strides forward in probing the electroweak theory – a theory of great beauty and consequence for the universe. But its most fundamental workings are subtle and elusive. Our exploration is only just beginning.

The post Electroweak precision at the LHC appeared first on CERN Courier.

Homing in on the Higgs self-interaction https://cerncourier.com/a/homing-in-on-the-higgs-self-interaction/ Fri, 05 Jul 2024 09:37:21 +0000 The simplest possible interaction in nature is when three identical particle lines meet at a single vertex.

Non-resonant and resonant processes driving di-Higgs production at the LHC

The simplest possible interaction in nature is when three identical particle lines, with the same quantum numbers, meet at a single vertex. The Higgs boson is the only known elementary particle that can exhibit such behaviour. More importantly, the strength of the coupling between three or even four Higgs bosons will reveal the first picture of the shape of the Brout–Englert–Higgs potential, responsible for the evolution of the universe in its first moments as well as possibly its fate.
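The link between the self-couplings and the shape of the potential is explicit in the SM, where expanding the potential about the vacuum expectation value v ≈ 246 GeV gives (a textbook sketch, quoted for orientation):

\[ V = \lambda \left( \Phi^\dagger \Phi - \frac{v^2}{2} \right)^{\!2} \ \longrightarrow\ \lambda v^2 H^2 + \lambda v H^3 + \frac{\lambda}{4} H^4, \qquad m_H^2 = 2\lambda v^2 . \]

The measured Higgs mass then fixes λ = m²H/(2v²) ≈ 0.13, so the SM predicts both the trilinear (H³) and quartic (H⁴) couplings with no freedom; any measured deviation would directly signal a distortion of the potential.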

Since the discovery of the Higgs boson at the LHC in 2012, the ATLAS and CMS collaborations have measured its properties and interactions with increasing precision. This includes its couplings to the gauge bosons and to third-generation fermions, its production cross sections, mass and width. So far, the boson appears as the Standard Model (SM) says it should. But the picture is still fuzzy, and many more measurements are needed. After all, the Higgs boson may interact with new particles suggested by theories beyond the SM to shed light on mysteries including the nature of the electroweak phase transition.

Line of attack

“The Higgs self-coupling is the next big thing since the Higgs discovery, and di-Higgs production is our main line of attack,” says Jana Schaarschmidt of ATLAS. “The experiments are making tremendous progress towards measuring Higgs-boson pair production at the LHC – far more than was imagined would be possible 12 years ago – thanks to improvements in analysis techniques and machine learning in particular.”

The dominant process for di-Higgs production at the LHC, gluon–gluon fusion, proceeds via a box or triangle diagram, the latter offering access to the trilinear Higgs coupling constant λ (see figure). Destructive interference between the two processes makes di-Higgs production extremely rare, with a cross section at the LHC about 1000 times smaller than that for single-Higgs production. Many different decay channels are available to ATLAS and CMS. Channels with a high probability of occurring are chosen if they can also be cleanly distinguished from backgrounds. The most sensitive channels are those with one Higgs boson decaying to a b-quark pair and the other decaying either to a pair of photons, τ leptons or b quarks.

During this year’s Rencontres de Moriond, ATLAS presented new results in the HH → bbbb and HH → multileptons channels and CMS in the HH → γγττ channel. In May, ATLAS released a combination of searches for HH production in five channels using the complete LHC Run 2 dataset. The combination provides the best expected sensitivities to HH production (excluding values more than 2.4 times the SM prediction) and to the Higgs boson self-coupling. A combination of HH searches published by CMS in 2022 obtains a similar sensitivity to the di-Higgs cross-section limits. “In late 2023 we put out a preliminary result combining single-Higgs and di-Higgs analyses to constrain the Higgs self-coupling, and further work on combining all the latest analyses is ongoing,” explains Nadjieh Jafari of CMS.

The Higgs self-coupling is the next big thing since the Higgs discovery

Considerable improvements are expected with the LHC Run 3 and much larger High-Luminosity LHC (HL-LHC) datasets. Based on extrapolations of early subsets of its Run 2 analyses, ATLAS expects to detect SM di-Higgs production with a significance of 3.2σ (4.6σ) with (without) systematic uncertainties by the end of the HL-LHC era. With similar progress at CMS, a di-Higgs observation is expected to be possible at the HL-LHC even with current analysis techniques, along with improved knowledge of λ. ATLAS, for example, expects to be able to constrain λ to be between 0.5 and 1.6 times the SM expectation at the level of 1σ.

Testing the foundations

Physicists are also starting to place limits on possible new-physics contributions to HH production, which can originate either from loop corrections involving new particles or from non-standard couplings between the Higgs boson and other SM particles. Several theories beyond the SM, including two-Higgs-doublet and composite-Higgs models, also predict the existence of heavy scalar particles that can decay resonantly into a pair of Higgs bosons. “Large anomalous values of λ are already excluded, and the window of possible values continues to shrink towards the SM as the sensitivity grows,” says Schaarschmidt. “Furthermore, in recent di-Higgs analyses ATLAS and CMS have been able to establish a strong constraint on the coupling between two Higgs bosons and two vector bosons.”

For Christophe Grojean of the DESY theory group, the principal interest in di-Higgs production is to test the foundations of quantum field theory: “The basic principles of the SM are telling us that the way the Higgs boson interacts with itself is mostly dictated by its expectation value (linked to the Fermi constant, i.e. the muon and neutron lifetimes) and its mass. Verifying this prediction experimentally is therefore of prime importance.”

The post Homing in on the Higgs self-interaction appeared first on CERN Courier.

Six rare decays at the energy frontier https://cerncourier.com/a/six-rare-decays-at-the-energy-frontier/ Fri, 05 Jul 2024 09:31:53 +0000 Andrzej Buras explains how two rare kaon decays and four rare B-meson decays will soon probe for new physics beyond the reach of direct searches at colliders.

Thanks to its 13.6 TeV collisions, the LHC directly explores distance scales as short as 5 × 10–20 m. But the energy frontier can also be probed indirectly. By studying rare decays, distance scales as small as a zeptometre (10–21 m) can be resolved, probing the existence of new particles with masses as high as 100 TeV. Such particles are out of the reach of any high-energy collider that could be built in this century.
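The correspondence between energy and distance is just the uncertainty principle, d ∼ ℏc/E. For a virtual particle of mass 100 TeV (a one-line check of the numbers above):

\[ d \sim \frac{\hbar c}{E} = \frac{1.97 \times 10^{-16}\ \text{GeV m}}{10^{5}\ \text{GeV}} \approx 2 \times 10^{-21}\ \text{m}, \]

i.e. a couple of zeptometres.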

The key concept is the quantum fluctuation. Just because a collision doesn’t have enough energy to bring a new particle into existence does not mean that a very heavy new particle cannot inform us about its existence. Thanks to Heisenberg’s uncertainty principle, new particles could be virtually exchanged between the other particles involved in the collisions, modifying the probabilities for the processes we observe in our detectors. The effect of massive new particles could be unmistakable, giving physicists a powerful tool for exploring more deeply into the unknown than accelerator technology and economic considerations allow direct searches to go.

The search for new particles and forces beyond those of the Standard Model is strongly motivated by the need to explain dark matter, the huge range of particle masses from the tiny neutrino to the massive top quark, and the asymmetry between matter and antimatter that is responsible for our very existence. As direct searches at the LHC have not yet provided any clue as to what these new particles and forces might be, indirect searches are growing in importance. Studying very rare processes could allow us to see imprints of new particles and forces acting at much shorter distance scales than it is possible to explore at current and future colliders.

Anticipating the November Revolution

The charm quark is a good example. The story of its direct discovery unfolded 50 years ago, in November 1974, when teams at SLAC and MIT simultaneously discovered a charm–anticharm meson in particle collisions. But four years earlier, Sheldon Glashow, John Iliopoulos and Luciano Maiani had already predicted the existence of the charm quark thanks to the surprising suppression of the neutral kaon’s decay into two muons.

Neutral kaons are made up of a strange quark and a down antiquark, or vice versa. In the Standard Model, their decay to two muons can proceed most simply through the virtual exchange of two W bosons, one virtual up quark and a virtual neutrino. The trouble was that the rate for the neutral kaon decay to two muons predicted in this manner turned out to be many orders of magnitude larger than observed experimentally.

NA62 experiment

Glashow, Iliopoulos and Maiani (GIM) proposed a simple solution. With visionary insight, they hypothesised a new quark, the charm quark, which would totally cancel the contribution of the up quark to this decay if their masses were equal to each other. As the rate was non-vanishing and the charm quark had not yet been observed experimentally, they concluded that the mass of the charm quark must be significantly larger than that of the up quark.

Their hunch was correct. In early 1974, months before its direct discovery, Mary K Gaillard and Benjamin Lee predicted the charm quark’s mass by analysing another highly suppressed quantity, the mass difference in K0 – K0 mixing.

As modifications to the GIM mechanism by new heavy particles are still a hot prospect for discovering new physics in the 2020s, the details merit a closer look. Years earlier, Nicola Cabibbo had correctly guessed that weak interactions act between up quarks and a mixture (d cos θ + s sin θ) of the down and strange quarks. We now know that charm quarks interact with the mixture (–d sin θ + s cos θ). This is just a rotation of the down and strange quarks through this Cabibbo angle. The minus sign causes the destructive interference observed in the GIM mechanism.
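
In matrix form, the rotation and the cancellation it produces can be sketched as follows (a textbook two-generation simplification, with θC the Cabibbo angle and f a loop function that depends on the mass of the internal quark):

```latex
\begin{pmatrix} d' \\ s' \end{pmatrix} =
\begin{pmatrix} \cos\theta_C & \sin\theta_C \\ -\sin\theta_C & \cos\theta_C \end{pmatrix}
\begin{pmatrix} d \\ s \end{pmatrix},
\qquad
\mathcal{A}(K^0 \to \mu^+\mu^-) \;\propto\;
\sin\theta_C\cos\theta_C \,\bigl[\, f(m_u^2) - f(m_c^2) \,\bigr].
```

The amplitude vanishes exactly for mu = mc; a rate that is small but non-zero therefore points to a charm quark appreciably heavier than the up quark.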

With the discovery of a third generation of quarks, quark mixing is now described by the Cabibbo–Kobayashi–Maskawa (CKM) matrix – a unitary three-dimensional rotation with complex phases that parameterise CP violation. Understanding its parameters may prove central to our ability to discover new physics this decade.

On to the 1980s

The story of indirect discoveries continued in the late 1980s, when the magnitude of B0d – B0d mixing implied the existence of a heavy top quark, which was confirmed in 1995, completing the third generation of quarks. The W, Z and Higgs bosons were also predicted well in advance of their discoveries. It’s only natural to expect that indirect searches for new physics will be successful at even shorter distance scales.

Belle II experiment at KEK

Rare weak decays of kaons and B mesons that are strongly suppressed by the GIM mechanism are expected to play a crucial role. Many channels of interest are predicted by the Standard Model to have branching ratios as low as 10–11, often being further suppressed by small elements of the CKM matrix. If the GIM mechanism is violated by new-physics contributions, these branching ratios – the fraction of times a particle decays that way – could be much larger.

Measuring suppressed branching ratios with respectable precision this decade is therefore an exciting prospect. Correlations between different branching ratios can be particularly sensitive to new physics and could provide the first hints of physics beyond the Standard Model. A good example is the search for the violation of lepton-flavour universality (CERN Courier May/June 2019 p33). Though hints of departures from muon–electron universality seem to be receding, hints that muon–tau universality may be violated still remain, and the measured branching ratios for B → K(K*)µ+µ− differ visibly from Standard Model predictions.

The first step in this indirect strategy is to search for discrepancies between theoretical predictions and experimental observables. The main challenge for experimentalists is the low branching ratios for the rare decays in question. However, there are very good prospects for measuring many of these highly suppressed branching ratios in the coming years.

Six channels for the 2020s

Six channels stand out today for their superb potential to observe new physics this decade. If their decay rates defy expectations, the nature of any new physics could be identified by studying the correlations between these six decays and others.

The first two channels are kaon decays: the measurement of K+ → π+νν by the NA62 collaboration at CERN (see “Needle in a haystack” image), and the measurement of KL → π0νν by the KOTO collaboration at J-PARC in Japan. The branching ratios for these decays are predicted to be in the ballpark of 8 × 10–11 and 3 × 10–11, respectively.

Independent observables

The second two are measurements of B → Kνν and B → K*νν by the Belle II collaboration at KEK in Japan. Branching ratios for these decays are expected to be much higher, in the ballpark of 10–5.

The final two channels, which are only accessible at the LHC, are measurements of the dimuon decays Bs → µ+µ− and Bd → µ+µ− by the LHCb, CMS and ATLAS collaborations. Their branching ratios are about 4 × 10–9 and 10–10 in the Standard Model. Though the decays B → K(K*)µ+µ− are also promising, they are less theoretically clean than these six.

The main challenge for theorists is to control quantum-chromodynamics (QCD) effects, both below 10–16 m, where strong interactions weaken, and in the non-perturbative region at distance scales of about 10–15 m, where quarks are confined in hadrons and calculations become particularly tricky. While satisfactory precision has been achieved at short-distance scales over the past three decades, the situation for non-perturbative computations is expected to improve significantly in the coming years, thanks to lattice QCD and analytic approaches such as dual QCD and chiral perturbation theory for kaon decays, and heavy-quark effective field theory for B decays.

Another challenge is that Standard Model predictions for the branching ratios require values for four CKM parameters that are not predicted by the Standard Model, and which must be measured using kaon and B-meson decays. These are the magnitude of the up-strange (Vus) and charm-bottom (Vcb) couplings and the CP-violating phases β and γ. The current precision on measurements of Vus and β is fully satisfactory, and the error on γ = (63.8 ± 3.5)° should be reduced to 1° by LHCb and Belle II in the coming years. The stumbling block is Vcb, where measurements currently disagree. Though experimental problems have not been excluded, the tension is thought to originate in QCD calculations. While measurements of exclusive decays to specific channels yield 39.21(62) × 10–3, inclusive measurements integrated over final states yield 41.96(50) × 10–3. This discrepancy makes the predicted branching ratios differ by 16% for the four B-meson decays, and by 25% and 35% for K+ → π+νν and KL → π0νν. These discrepancies are a disaster for the theorists who, over many years of work, had succeeded in reducing the QCD uncertainties in these decays to the level of a few percent.
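
The size of these spreads can be roughly reproduced from the inclusive-to-exclusive ratio of Vcb alone, assuming the approximate scalings BR ∝ |Vcb|^2 for the B-meson modes and BR ∝ |Vcb|^2.8 and |Vcb|^4 for K+ → π+νν and KL → π0νν respectively (indicative exponents, not exact; residual CKM dependences account for the difference from the 16%, 25% and 35% figures quoted above):

```python
v_excl, v_incl = 39.21e-3, 41.96e-3  # exclusive vs inclusive |Vcb|
ratio = v_incl / v_excl  # ~1.07

for label, power in [("B decays, BR ~ |Vcb|^2", 2.0),
                     ("K+ -> pi+ nu nu, BR ~ |Vcb|^2.8", 2.8),
                     ("KL -> pi0 nu nu, BR ~ |Vcb|^4", 4.0)]:
    print(f"{label}: {100 * (ratio**power - 1):.0f}% spread")  # ~15%, ~21%, ~31%
```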

One solution is to replace the CKM dependence of the branching ratios with observables where QCD uncertainties are under good control, for example: the mass differences in B0s – B0s and B0d – B0d mixing (∆Ms and ∆Md); a parameter that measures CP violation in K0 – K0 mixing (εK); and the CP asymmetry that yields the angle β. Fitting these observables to the experimental data avoids a forced choice between the inclusive and exclusive values of the charm-bottom coupling, and avoids the 3.5° uncertainty on γ, which in this strategy is reduced to 1.6°. Uncertainty on the predicted branching ratios is thereby reduced to 6% and 9% for B → Kνν and B → K*νν, to 5% for the two kaon decays, and to 4% for Bs → µ+µ− and Bd → µ+µ−.

So what is the current experimental situation for the six channels? The latest NA62 measurement of K+ → π+νν is 25% larger than the Standard Model prediction. Its 36% uncertainty signals full compatibility at present, and precludes any conclusions about the size of new physics contributing to this decay. Next year, when the full analysis has been completed, such conclusions could become possible. It is unfortunate that the HIKE proposal was not adopted (CERN Courier May/June 2024 p7), as NA62’s expected precision of 15% could have been reduced to 5%. This could turn out to be crucial for the discovery of new physics in this decay.

The present upper bound on KL → π0νν from KOTO is still two orders of magnitude above the Standard Model prediction. This bound should be lowered by at least one order of magnitude in the coming years. As this decay is fully governed by CP violation, one may expect that new physics will impact it significantly more than CP-conserving decays such as K+ → π+νν.

Branching out from Belle

At present, the most interesting result concerns a 2023 update from Belle II to the measured branching ratio for B+ → K+νν (see “Interesting excess” image). The resulting central value from Belle II and BaBar is currently a factor of 2.6 above the Standard Model prediction. This has sparked many theoretical analyses around the world, but the experimental error of 30% once again does not allow for firm conclusions. Measurements of other charge and spin configurations of this decay are pending.

Finally, both dimuon B-meson decays are at present consistent with Standard Model predictions, but significant improvements in experimental precision could still reveal new physics at work, especially in the case of Bd.

Hypothetical future measurements of branching ratios

It will take a few years to conclude if new physics contributions are evident in these six branching ratios, but the fact that all are now predicted accurately means that we can expect to observe or exclude new physics in them before the end of the decade. This would be much harder if measurements of the Vcb coupling were involved.

So far, so good. But what if the observables that replaced Vcb and γ are themselves affected by new physics? How can they be trusted to make predictions against which rare decay rates can be tested?

Here comes some surprisingly good news: new physics does not appear to be required to simultaneously fit them using our new basis of observables ΔMd, εK and ΔMs, as they intersect at a single point in the Vcb–γ plane (see “No new physics” figure). This analysis favours the inclusive determination of Vcb and yields a value for γ that is consistent with the experimental world average and a factor of two more accurate. It’s important to stress, though, that non-perturbative four-flavour lattice-QCD calculations of ∆Ms and ∆Md by the HPQCD lattice collaboration played a key role here. It is crucial that another lattice QCD collaboration repeat these calculations, as the three curves cross at different points in three-flavour calculations that exclude charm.

In this context, one realises the advantages of Vcb–γ plots compared to the usual unitarity-triangle plots, where Vcb is not seen and 1° improvements in the determination of γ are difficult to appreciate. In the late 2020s, determining Vcb and γ from tree-level decays will be a central issue, and a combination of Vcb-independent and Vcb-dependent approaches will be needed to identify any concrete model of new physics.

We should therefore hope that the tension between inclusive and exclusive determinations of Vcb will soon be conclusively resolved. Forthcoming measurements of our six rare decays may then reveal new physics at the energy frontier (see “New physics” figure). With a 1° precision measurement of γ on the horizon, and many Vcb-independent ratios available, interesting years are ahead in the field of indirect searches for new physics.

In 1676 Antonie van Leeuwenhoek discovered a microuniverse populated by bacteria, which he called animalcula, or little animals. Let us hope that we will, in this decade, discover new animalcula on our flavour expedition to the zeptouniverse.

The post Six rare decays at the energy frontier appeared first on CERN Courier.

]]>
Feature Andrzej Buras explains how two rare kaon decays and four rare B-meson decays will soon probe for new physics beyond the reach of direct searches at colliders. https://cerncourier.com/wp-content/uploads/2024/07/CCJulAug24_FLAVOUR_frontis.jpg
In defiance of cosmic-ray power laws https://cerncourier.com/a/in-defiance-of-cosmic-ray-power-laws/ Fri, 05 Jul 2024 08:44:45 +0000 https://preview-courier.web.cern.ch/?p=110771 From its pristine vantage point on the International Space Station, the Calorimetric Electron Telescope, CALET, has uncovered anomalies in the spectra of protons and electrons below the cosmic-ray knee.

The post In defiance of cosmic-ray power laws appeared first on CERN Courier.

]]>
The Calorimetric Electron Telescope

In a series of daring balloon flights in 1912, Victor Hess discovered radiation that intensified with altitude, implying extra-terrestrial origins. A century later, experiments with cosmic rays have reached low-Earth orbit, but physicists are still puzzled. Cosmic-ray spectra are difficult to explain using conventional models of galactic acceleration and propagation. Hypotheses for their sources range from supernova remnants, active galactic nuclei and pulsars to physics beyond the Standard Model. The study of cosmic rays in the 1940s and 1950s gave rise to particle physics as we know it. Could these cosmic messengers be about to unlock new secrets, potentially clarifying the nature of dark matter?

The cosmic-ray spectrum extends well into the EeV regime, far beyond what can be reached by particle colliders. For many decades, the spectrum was assumed to be broken into intervals, each following a power law, as Enrico Fermi had historically predicted. The junctures between intervals include: a steepening decline at about 3 × 10⁶ GeV known as the knee; a flattening at about 4 × 10⁹ GeV known as the ankle; and a further steepening at the supposed end of the spectrum somewhere above 10¹⁰ GeV (10 EeV).
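
Schematically, each interval follows dN/dE ∝ E^–γ, with the spectral index γ changing at the breaks; the index values below are indicative averages from the literature, not fits to any specific dataset:

```latex
\frac{\mathrm{d}N}{\mathrm{d}E} \;\propto\;
\begin{cases}
E^{-2.7} & E \lesssim 3\times10^{6}\ \text{GeV (below the knee)} \\[2pt]
E^{-3.1} & 3\times10^{6}\ \text{GeV} \lesssim E \lesssim 4\times10^{9}\ \text{GeV (knee to ankle)} \\[2pt]
E^{-2.6} & E \gtrsim 4\times10^{9}\ \text{GeV (above the ankle)}
\end{cases}
```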

The Calorimetric Electron Telescope detector

While the cosmic-ray population at EeV energies may include contributions from extra-galactic cosmic rays, and the end of the spectrum may be determined by collisions with relic cosmic-microwave-background photons – the Greisen–Zatsepin–Kuzmin cutoff – the knee is still controversial as the relative abundance of protons and other nuclei is largely unknown. What’s more, recent direct measurements by space-borne instruments have discovered “spectral curvatures” below the knee. These significant deviations from a pure power law range from a few hundred GeV to a few tens of TeV. Intriguing anomalies in the spectra of cosmic-ray electrons and positrons have also been observed below the knee.

Electron origins

The Calorimetric Electron Telescope (CALET; see “Calorimetric telescope” figure) on board the International Space Station (ISS) provides the highest-energy direct measurements of the spectrum of cosmic-ray electrons and positrons. Its goal is to observe discrete sources of high-energy particle acceleration in the local region of our galaxy. Led by the Japan Aerospace Exploration Agency, with the participation of the Italian Space Agency and NASA, CALET was launched from the Tanegashima Space Center in August 2015, becoming the second high-energy experiment operating on the ISS following the deployment of AMS-02 in 2011. During 2017 a third experiment, ISS-CREAM, joined AMS-02 and CALET, but its observation time ended prematurely.

A candidate electron event in CALET

As a result of radiative losses in space, high-energy cosmic-ray electrons are expected to originate just a few thousand light-years away, relatively close to Earth. CALET’s homogeneous calorimeter (fully active, with no absorbers) is optimised to reconstruct such particles (see “Energetic electron” figure). With the exception of the highest energies, anisotropies in their arrival direction are typically small due to deflections by turbulent interstellar magnetic fields.

Energy spectra also contain crucial information as to where and how cosmic-ray electrons are accelerated. And they could provide possible signatures of dark matter. For example, the presence of a peak in the spectrum could be a sign of dark-matter decay, or dark-matter annihilation into an electron–positron pair, with a detected electron or positron in the final state.

Direct measurements of the energy spectra of charged cosmic rays have recently achieved unprecedented precision thanks to long-term observations of electrons and positrons of cosmic origin, as well as of individual elements from hydrogen to nickel, and even beyond. Space-borne instruments such as CALET directly identify cosmic nuclei by measuring their electric charge. Ground-based experiments must do so indirectly by observing the showers they generate in the atmosphere, incurring large systematic uncertainties. Either way, hadronic cosmic rays can be assumed to be fully stripped of atomic electrons in their high-temperature regions of origin.

A rich phenomenology

The past decade has seen the discovery of unexpected features in the differential energy spectra of both leptonic and hadronic cosmic rays. The observation by PAMELA and AMS of an excess of positrons above 10 GeV has generated widespread interest and still calls for an unambiguous explanation (CERN Courier December 2016 p26). Possibilities include pair production in pulsars, in addition to the well known interactions with the interstellar gas, and the annihilation of dark matter into electron–positron pairs.

Combined electron and positron flux measurements as a function of kinetic energy

Regarding cosmic-ray nuclei, significant deviations of the fluxes from pure power-law spectra have been observed by several instruments in flight, including by CREAM on balloon launches from Antarctica, by PAMELA and DAMPE aboard satellites in low-Earth orbit, and by AMS-02 and CALET on the ISS. Direct measurements have also shown that the energy spectra of “primary” cosmic rays is different from those of “secondary” cosmic rays created by collisions of primaries with the interstellar medium. This rich phenomenology, which encodes information on cosmic-ray acceleration processes and the history of their propagation in the galaxy, is the subject of multiple theoretical models.

An unexpected discovery by PAMELA, which had been anticipated by CREAM and was later measured with greater precision by AMS-02, DAMPE and CALET, was the observation of a flattening of the differential energy spectra of protons and helium. Starting from energies of a few hundred GeV, the proton flux shows a smooth and progressive hardening (a flattening of the steeply falling spectrum) that continues up to around 10 TeV, above which a completely different regime is established. A turning point was the subsequent discovery by CALET and DAMPE of an unexpected softening of proton and helium fluxes above about 10 TeV/Z, where the atomic number Z is one for protons and two for helium. The presence of a second break challenges the conventional “standard model” of cosmic-ray spectra and calls for a further extension of the observed energy range, currently limited to a few hundred TeV.

At present, only two experiments in low-Earth orbit have an energy reach beyond 100 TeV: CALET and DAMPE. They rely on a purely calorimetric measurement of the energy, while space-borne magnetic spectrometers are limited to a maximum magnetic “rigidity” – a particle’s momentum divided by its charge – of a few teravolts. Since the end of PAMELA’s operations in 2016, AMS-02 is now the only instrument in orbit with the ability to discriminate the sign of the charge. This allows separate measurements of the high-energy spectra of positrons and antiprotons – an important input to the observation of final states containing antiparticles for dark-matter searches. AMS-02 is also now preparing for an upgrade: an additional silicon tracker layer will be deployed at the top of the instrument to enable a significant increase in its acceptance and energy reach (CERN Courier March/April 2024 p7).
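
Underlying both points is the rigidity just defined. As a worked definition (the second relation holds for relativistic particles, whose total energy approximately equals their momentum times c):

```latex
R \;=\; \frac{pc}{Ze}, \qquad E \,\approx\, pc \;\;\Rightarrow\;\; E \,\approx\, Z e R,
```

so a spectrometer limited to a rigidity of a few teravolts cannot directly measure protons much beyond a few TeV, and nuclei whose spectra break at a common rigidity break at energies that scale with their charge Z.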

Pioneering observations

CALET was designed to extend the energy reach beyond the rigidity limit of present space-borne spectrometers, enabling measurements of electrons up to 20 TeV and measurements of hadrons up to 1 PeV. As an all-calorimetric instrument with no magnetic field, its main science goal is to perform precision measurements of the detailed shape of the inclusive spectra of electrons and positrons.

The Vela Pulsar

Thanks to its advanced imaging calorimeter, CALET can measure the kinetic energy of incident particles well into TeV energies, maintaining excellent proton–electron discrimination throughout. CALET’s homogeneous calorimeter has a total thickness of 30 radiation lengths, allowing for a full containment of electron showers. It is preceded by a high-granularity pre-shower detector with imaging capabilities that provide a redundant measurement of charge via multiple energy-loss measurements. The calibration of the two instruments is the key to controlling the energy scale, motivating beam tests at CERN before launch.
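
The 30-radiation-length figure can be motivated by the textbook approximation for the depth of an electromagnetic shower maximum, in units of the radiation length X0 (Ec is the critical energy of the medium; the numbers here are indicative):

```latex
t_{\max} \;\approx\; \ln\!\left(\frac{E}{E_c}\right) \,-\, 0.5,
```

so for Ec of order 10 MeV a 1 TeV electron shower peaks near 11 X0, and even a 20 TeV shower near 14 X0, leaving ample depth in a 30 X0 stack for the cascade to be essentially fully contained.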

A first important deviation from a scale-invariant power-law spectrum was found for electrons near 1 TeV. Here, CALET and DAMPE observed a significant flux reduction, as expected from the large radiative losses of electrons during their travel in space. CALET has now published a high-statistics update up to 7.5 TeV, reporting the presence of candidate electrons above the 1 TeV spectral break (see “Electron break” figure).

This unexplored region may hold some surprises. For example, the detection of even higher energy electrons, such as the 12 TeV candidate recently found by CALET, may indicate the contribution of young and nearby sources such as the Vela supernova remnant, which is known to host a pulsar (see “Pulsar home” image).

A second unexpected finding is the observation of a significant reduction in the proton flux around 10 TeV. This bump and dip were also observed by DAMPE and anticipated by CREAM, albeit with low statistics (see “Proton bump” figure). A precise measurement of the flux has allowed CALET to fit the spectrum with a double-broken power law: after a spectral hardening starting at a few hundred GeV, which is also observed by AMS-02 and PAMELA, and which progressively increases above 500 GeV, a steep softening takes place above 10 TeV.
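
Functionally, a fit of this kind can be written as a smoothly broken power law. The sketch below shows the generic shape with illustrative parameter values, not the published CALET fit:

```python
import numpy as np

def broken_power_law(E, norm, gamma0, breaks):
    """dN/dE ~ E**(-gamma0), with the index changing by d_gamma at each break
    energy E_b; the parameter s controls the smoothness of each transition."""
    flux = norm * E**(-gamma0)
    for E_b, d_gamma, s in breaks:
        flux *= (1.0 + (E / E_b)**s)**(-d_gamma / s)
    return flux

E = np.logspace(2, 5, 300)  # kinetic energy from 100 GeV to 100 TeV
# Illustrative values only: hardening (index 2.8 -> 2.6) near 500 GeV,
# then softening (back towards 2.9) above 10 TeV.
flux = broken_power_law(E, 1.0, 2.8, [(500.0, -0.2, 5.0), (1.0e4, +0.3, 5.0)])
```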

Proton flux measurements as a function of the kinetic energy

A similar bump and dip have been observed in the helium flux. These spectral features may result from a single physical process that generates a bump in the cosmic-ray spectrum. Theoretical models include an anomalous diffusive regime near the acceleration sources, the dominance of one or more nearby supernova remnants, the gradual release of cosmic rays from the source, and the presence of additional sources.

CALET is also a powerful hunter of heavier cosmic rays. Measurements of the spectra of boron, carbon and oxygen ions have been extended in energy reach and precision, providing evidence of a progressive spectral hardening for most of the primary elements above a few hundred GeV per nucleon. The boron-to-carbon flux ratio is an important input for understanding cosmic-ray propagation. This is because diffusion through the interstellar medium causes an additional softening of the flux of secondary cosmic rays such as boron with respect to primary cosmic rays such as carbon (see “Break in B/C?” figure). The collaboration also recently published the first high-resolution flux measurement of nickel (Z = 28), revealing the element to have a very similar spectrum to iron, suggesting similar acceleration and propagation behaviour.

CALET is also studying the spectra of sub-iron elements, which are poorly known above 10 GeV per nucleon, and ultra-heavy galactic cosmic rays such as zinc (Z = 30), which are quite rare. CALET studies abundances up to Z = 40 using a special trigger with a large acceptance, so far revealing an excellent match with previous measurements from ACE-CRIS (a satellite-based detector), SuperTIGER (a balloon-borne detector) and HEAO-3 (a satellite-based detector decommissioned in the 1980s). Ultra-heavy galactic cosmic rays provide insights into cosmic-ray production and acceleration in some of the most energetic processes in our galaxy, such as supernovae and binary-neutron-star mergers.

Gravitational-wave counterparts

In addition to charged particles, CALET can detect gamma rays with energies between 1 GeV and 10 TeV, and study the diffuse photon background as well as individual sources. To study electromagnetic transients related to complex phenomena such as gamma-ray bursts and neutron-star mergers, CALET is equipped with a dedicated monitor that to date has detected more than 300 gamma-ray bursts, 10% of which are short bursts in the energy range 7 keV to 20 MeV. The search for electromagnetic counterparts to gravitational waves proceeds around the clock by following alerts from LIGO, VIRGO and KAGRA. No X-ray or gamma-ray counterparts to gravitational waves have been detected so far.

CALET measurements of the boron to carbon flux ratio

On the low-energy side of cosmic-ray spectra, CALET has contributed a thorough study of the effect of solar activity on galactic cosmic rays, revealing charge dependence on the polarity of the Sun’s magnetic field due to the different paths taken by electrons and protons in the heliosphere. The instrument’s large-area charge detector has also proven to be ideal for space-weather studies of relativistic electron precipitation from the Van Allen belts in Earth’s magnetosphere.

The spectacular recent experimental advances in cosmic-ray research, and the powerful theoretical efforts that they are driving, are moving us closer to a solution to the century-old puzzle of cosmic rays. With more than four billion cosmic rays observed so far, and a planned extension of the mission to the nominal end of ISS operations in 2030, CALET is expected to continue its campaign of direct measurements in space, contributing sharper and perhaps unexpected pictures of their complex phenomenology.

The post In defiance of cosmic-ray power laws appeared first on CERN Courier.

]]>
Feature From its pristine vantage point on the International Space Station, the Calorimetric Electron Telescope, CALET, has uncovered anomalies in the spectra of protons and electrons below the cosmic-ray knee. https://cerncourier.com/wp-content/uploads/2024/07/CCJulAug24_COSMIC_frontis.jpg
LHC physicists spill the beans in Boston https://cerncourier.com/a/lhc-physicists-spill-the-beans-in-boston/ Fri, 05 Jul 2024 07:49:34 +0000 https://preview-courier.web.cern.ch/?p=110910 Dedicated solely to LHC physics, the LHCP conference is a vital gathering for experts in the field. The 12th edition was no exception, attracting 450 physicists to Northeastern University in Boston from 3 to 7 June.

The post LHC physicists spill the beans in Boston appeared first on CERN Courier.

]]>
Dedicated solely to LHC physics, the LHCP conference is a vital gathering for experts in the field. The 12th edition was no exception, attracting 450 physicists to Northeastern University in Boston from 3 to 7 June. Participants discussed recent results, data taking at a significantly increased instantaneous luminosity in Run 3, and progress on detector upgrades planned for the high-luminosity LHC (HL-LHC).

The study of the Higgs boson remains central to the LHC programme. ATLAS reported a new result on Standard Model (SM) Higgs-boson production with decays to tau leptons, achieving the most precise single-channel measurement of the vector-boson-fusion production mode to date. Determining the production modes of the Higgs boson precisely may shed light on the existence of new physics that would be observed as deviations from the SM predictions.

Beyond single Higgs production, the di-Higgs production (HH) search is one of the most exciting and fundamental topics for LHC physics in the coming years as it directly probes the Higgs potential (see “Homing in on the Higgs self-interaction“). ATLAS has combined results for HH production in multiple final states, providing the best expected sensitivity to the HH production cross-section and the Higgs-boson self-coupling, and constraining κλ (the Higgs self-coupling with respect to the SM value) to the range –1.2 < κλ < 7.2.

The search for beyond-the-SM (BSM) physics to explain the many unresolved questions about our universe is being conducted with innovative ideas and methods. CMS has presented new searches involving signatures with two tau leptons, examining the hypotheses of an excited tau lepton and a heavy neutral spin-1 gauge boson (Z′) produced via Drell-Yan and, for the first time, via vector boson fusion. These results set stringent constraints on BSM models with enhanced couplings to third-generation fermions.

Other new-physics theoretical models propose additional BSM Higgs bosons. ATLAS presented a search for such particles being produced in association with top quarks, setting limits on their cross-section that significantly improve upon previous ATLAS results. Additional BSM Higgs bosons could explain puzzles such as dark matter, neutrino oscillations and the observed matter–antimatter asymmetry in the universe.

The dark side

Some BSM models imply that dark-matter particles could arise as composite mesons or baryons of a new strongly-coupled theory that is an extension of the SM. ATLAS investigated this dark sector through searches for high-multiplicity hadronic final states, providing the first direct collider constraints on this model to complement direct dark-matter-detection experimental results.

CMS has used low-pileup inelastic proton–proton collisions to measure event-shape variables related to the overall distribution of charged particles. These measurements showed the particle distribution to be more isotropic than predicted by theoretical models.

LHCP conference talk

The LHC experiments also presented multiple analyses of proton–lead (p–Pb) and pp collisions, exploring the potential production of quark–gluon plasma (QGP) – a hot and dense phase of deconfined quarks and gluons found in the early universe that is frequently studied in heavy-ion Pb–Pb collisions, among others, at the LHC. Whether it can be created in smaller collision systems is still inconclusive.

ALICE reported a high-precision measurement of the elliptic flow of anti-helium-3 in QGP using the first Run-3 Pb–Pb run. The much larger data sample compared to the previous Run 2 measurement allowed ALICE to distinguish production models for these rarely produced particles for the first time. ALICE also reported the first measurement of an impact-parameter-dependent angular anisotropy in the decay of coherently photo-produced ρ0 mesons in ultra-peripheral Pb–Pb collisions. In these collisions, quantum interference effects cause a decay asymmetry that is inversely proportional to the impact parameter.

CMS reported its first measurement of the complete set of optimised CP-averaged observables from the process B0 → K*0μ+μ−. These measurements are significant because they could reveal indirect signs of new physics or subtle effects induced by low-energy strong interactions. By matching the current best experimental precision, CMS contributes to the ongoing investigation of this process.

LHCb presented measurements of the local and non-local contributions across the full invariant-mass spectrum of B0 → K*0μ+μ−, tests of lepton flavour universality in semileptonic b decays, and mixing and CP violation in D → Kπ decays.

From a theoretical perspective, progress in precision calculations has exceeded expectations. Many processes are now known to next-to-next-to-leading order or even next-to-next-to-next-to-leading order (N3LO) accuracy. The first parton distribution functions approximating N3LO accuracy have been released and reported at LHCP, and modern parton showers have set new standards in perturbative accuracy.

In addition to these advances, several new ideas and observables are being proposed. Jet substructure, for instance, is becoming a precision science and valuable tool due to its excellent theoretical properties. Effective field theory (EFT) methods are continuously refined and automated, serving as crucial bridges to new theories as many ultraviolet theories share the same EFT operators. Synergies between flavour physics, electroweak effects and high-transverse-momentum processes at colliders are particularly evident within this framework. The use of the LHC as a photon collider showcases the extraordinary versatility of LHC experiments and their synergy with theoretical advancements.

Discovery machine

The HL-LHC upgrade was thoroughly discussed, with several speakers highlighting the importance and uniqueness of its physics programme. This includes fundamental insights into the Higgs potential, vector-boson scattering, and precise measurements of the Higgs boson and other SM parameters. Thanks to the endless efforts by the four collaborations to improve their performances, the LHC already rivals historic lepton colliders for electroweak precision in many channels, despite the cleaner signatures of lepton collisions. The HL-LHC will be capable of providing extraordinarily precise measurements while also serving as a discovery machine for many years to come.

The future of the field was discussed in a well-attended panel session, which emphasised exploring the full potential of the HL-LHC and engaging younger generations. Preserving the unique expertise and knowledge cultivated within the CERN community is imperative. Next year’s LHCP conference will be held at National Taiwan University in Taipei from 5 to 10 June.

The post LHC physicists spill the beans in Boston appeared first on CERN Courier.

]]>
Meeting report Dedicated solely to LHC physics, the LHCP conference is a vital gathering for experts in the field. The 12th edition was no exception, attracting 450 physicists to Northeastern University in Boston from 3 to 7 June. https://cerncourier.com/wp-content/uploads/2024/07/CCJulAug24_FN_LHCP1.jpg
CMS studies single-top production https://cerncourier.com/a/cms-studies-single-top-production/ Fri, 05 Jul 2024 07:41:46 +0000 https://preview-courier.web.cern.ch/?p=110825 Being the most massive known elementary particle, top quarks are a focus for precision measurements and searches for new phenomena.

The post CMS studies single-top production appeared first on CERN Courier.

]]>
CMS figure 1

Being the most massive known elementary particle, top quarks are a focus for precision measurements and searches for new phenomena. At the LHC, they are copiously produced in pairs via quantum chromodynamic (QCD) interactions, and, to a much lesser extent, in single modes through the electroweak force. Precisely measuring the single-top cross section provides a stringent test for the electroweak sector of the Standard Model (SM) of particle physics.

In September 2022, only four months after the start of Run 3, the CMS collaboration released the first measurement using data at the new collision energy of 13.6 TeV: the production cross section of a top quark together with its antiparticle (tt). The collaboration can now also report a measurement of the production of a single top quark in association with a W boson (tW) based on the full dataset recorded in 2022. As well as testing the electroweak sector, constraining tW allows it to be better disentangled from the dominant tt process – a channel where precision measurements improve our knowledge of higher-order corrections in perturbative QCD.

CMS figure 2

tW is a challenging measurement as it is 10 times less likely than tt production but has almost the same detection signature. This analysis selects events where both the top quark and the W boson ultimately decay to leptons. The signal therefore consists of two leptons (electrons or muons), a jet initiated from a bottom quark, and possibly extra jets coming from additional radiation. No single observable can discriminate the signal from the background, so a random forest (RF) is employed in events that contain either one or two jets, one of which comes from a bottom quark. The RF is a collection of decision trees collaborating to distinguish the tW signal from the tt background. The output of the RF, for events with one jet identified as coming from a bottom quark, is shown in figure 1. The higher the RF discriminant, the higher the relative proportion of signal events.
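
Conceptually, such a classifier is compact to express in code. The sketch below uses scikit-learn with randomly generated placeholder data and hypothetical feature names; the actual CMS analysis relies on its own framework, simulated events and a tuned set of kinematic variables:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder inputs: six kinematic variables per event (e.g. lepton pTs, jet pT,
# b-tag score, dilepton mass, missing transverse energy) -- illustrative only.
rng = np.random.default_rng(seed=0)
X_train = rng.normal(size=(10_000, 6))     # stand-in for simulated tW and tt events
y_train = rng.integers(0, 2, size=10_000)  # 1 = tW signal, 0 = tt background

forest = RandomForestClassifier(n_estimators=500, max_depth=8)
forest.fit(X_train, y_train)

# The "RF discriminant": per-event signal probability, higher = more signal-like.
discriminant = forest.predict_proba(X_train)[:, 1]
```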

To achieve a higher precision, an extra handle is used to control the tt background: information from events with two b-quark jets. Such events are more likely to come from the decay of a tt pair. The measurement yields a precise value for the tW cross section. Figure 2 shows tW cross-section measurements by CMS at different centre-of-mass energies, including the new measurement in proton–proton collisions of 13.6 TeV. All measurements are consistent with state-of-the-art theory calculations. The first tW measurement at the new LHC energy frontier uses only part of the data but is already as precise as the earlier measurement, which used the entire Run 2 sample at 13 TeV. Exploiting the full Run 3 data sample will push the precision frontier forward and provide an even more stringent SM probe in the top quark sector.

The post CMS studies single-top production appeared first on CERN Courier.

]]>
News Being the most massive known elementary particle, top quarks are a focus for precision measurements and searches for new phenomena. https://cerncourier.com/wp-content/uploads/2024/07/CCJulAug24_EF_CMS_feature.jpg
LHCb targets rare radiative decay https://cerncourier.com/a/lhcb-targets-rare-radiative-decay/ Mon, 13 May 2024 08:11:13 +0000 https://preview-courier.web.cern.ch/?p=110561 The LHCb collaboration has reported the first search for the decay of the neutral strange B-meson to a pair of muons and a reconstructed photon.

The post LHCb targets rare radiative decay appeared first on CERN Courier.

]]>
LHCb figure 1

Rare radiative b-hadron decays are powerful probes of the Standard Model (SM), sensitive to small deviations caused by potential new physics in virtual loops. One such process is the decay B0s → μ+μ−γ. The dimuon decay of the B0s meson is known to be extremely rare and has been measured with unprecedented precision by LHCb and CMS. While performing this measurement, LHCb also studied the B0s → μ+μ−γ decay, partially reconstructed due to the missing photon, as a background component of the B0s → μ+μ− process, and set a first upper limit on its branching fraction of 2.0 × 10–9 at 95% CL (red arrow in figure 1). However, this search was limited to the high-dimuon-mass region, whereas several theoretical extensions of the SM could manifest themselves in lower regions of the dimuon-mass spectrum. Reconstructing the photon is therefore essential to explore the spectrum thoroughly and probe a wide range of physics scenarios.

The LHCb collaboration now reports the first search for the B0s → μ+μ−γ decay with a reconstructed photon, exploring the full dimuon mass spectrum. Photon reconstruction poses additional experimental challenges, such as degrading the mass resolution of the B0s candidate and introducing additional background contributions. To cope with this ambitious search, machine-learning algorithms and new variables have been specifically designed with the aim of discriminating the signal among background processes with similar signatures. The analysis is performed separately for three dimuon mass ranges to exploit any differences along the spectrum, such as the ϕ(1020) meson contribution in the low invariant-mass region. The μ+μ−γ invariant mass distributions of the selected candidates are fitted, including all background contributions and the B0s → μ+μ−γ signal component. Figure 2 shows the fit for the lowest dimuon mass region.
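
The fitted observable is the μ+μ−γ invariant mass, built from the measured four-momenta of the two muons and the photon. A minimal sketch of the kinematics (PDG muon mass, hypothetical candidate values):

```python
import math

MUON_MASS = 0.1057  # GeV

def four_vector(pt, eta, phi, mass):
    """Return (E, px, py, pz) in GeV from pT, pseudorapidity, azimuth and mass."""
    px, py, pz = pt * math.cos(phi), pt * math.sin(phi), pt * math.sinh(eta)
    return math.sqrt(px**2 + py**2 + pz**2 + mass**2), px, py, pz

def invariant_mass(*particles):
    """Invariant mass of the summed four-momenta."""
    e, px, py, pz = (sum(c) for c in zip(*particles))
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

# Hypothetical candidate: (pT [GeV], eta, phi, mass)
mu_plus = four_vector(25.0, 0.5, 0.1, MUON_MASS)
mu_minus = four_vector(20.0, -0.3, 2.9, MUON_MASS)
photon = four_vector(12.0, 1.1, -1.5, 0.0)
print(f"m(mu+ mu- gamma) = {invariant_mass(mu_plus, mu_minus, photon):.2f} GeV")
```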

LHCb figure 2

No significant signal of B0s → μ+μ−γ is found in any of the three dimuon mass regions, consistent with the background-only hypothesis. Upper bounds on the branching fraction are set and can be seen as the black arrows in figure 1. The mass fit is also performed for the combined candidates of the three dimuon mass regions, setting a combined upper limit on the branching fraction of 2.8 × 10–8 at 95% CL.

SM theoretical predictions for b decays become particularly difficult to calculate when a photon is involved, and they carry large uncertainties due to the B0s → γ local form factors. The B0s → μ+μ−γ decay provides a unique opportunity to validate the different theoretical approaches, which do not agree with each other, as shown by the coloured bands in figure 1. Theoretical calculations of the branching fractions are currently below the experimental limits. The upgraded LHCb detector and the increased luminosity of the LHC’s Run 3 are currently providing conditions for studying rare radiative b-hadron decays with greater precision and, eventually, for finding evidence for the B0s → μ+μ−γ decay.

The post LHCb targets rare radiative decay appeared first on CERN Courier.

]]>
News The LHCb collaboration has reported the first search for the decay of the neutral strange B-meson to a pair of muons and a reconstructed photon. https://cerncourier.com/wp-content/uploads/2024/04/CCMayJun24_EF_LHCb_feature.jpg
Boosting physics with precision and intensity https://cerncourier.com/a/boosting-physics-with-precision-and-intensity/ Sat, 04 May 2024 15:26:31 +0000 https://preview-courier.web.cern.ch/?p=110676 Physics Beyond Colliders' annual workshop convened 175 physicists at CERN to provide updates on ongoing projects and explore new proposals.

The post Boosting physics with precision and intensity appeared first on CERN Courier.

]]>
The Physics Beyond Colliders (PBC) initiative has diversified the landscape of experiments at CERN by supporting smaller experiments and showcasing their capabilities. Its fifth annual workshop convened around 175 physicists from 25 to 27 March to provide updates on the ongoing projects and to explore new proposals to tackle the open questions of the Standard Model and beyond.

This year, the PBC initiative has significantly strengthened CERN’s dark-sector searches, explained Mike Lamont and Joachim Mnich, directors for accelerators and technology, and research and computing, respectively. In particular, the newly approved SHiP proton beam-dump experiment (see SHiP to chart hidden sector) will complement the searches for light dark-sector particles that are presently conducted with NA64’s versatile setup, which is suitable for electron, positron, muon and hadron beams.

First-phase success

The FASER and SND experiments, now taking data in the LHC tunnel, are two of the successes of the PBC initiative’s first phase. Both search for new physics and study high-energy neutrinos along the LHC collision axis. FASER’s successor, FASER2, promises a 10,000-fold increase in sensitivity to beyond-the-Standard Model physics, said Jonathan Feng (UC Irvine). With the potential to detect thousands of TeV-scale neutrinos a day, it could also measure parton distribution functions and thereby enhance the physics reach of the high-luminosity LHC (HL-LHC). FASER2 may form part of the proposed Forward Physics Facility, set to be located 620 m away, along a tangent from the HL-LHC’s interaction point 1. A report on the facility’s technical infrastructure is scheduled for mid-2024, with a letter of intent foreseen in early 2025. By contrast, the CODEX-b and ANUBIS experiments are being designed to search for feebly interacting particles transverse to LHCb and ATLAS, respectively. In all these endeavours, the Feebly Interacting Particle Physics Centre will act as a hub for exchanges between experiment and theory.

Francesco Terranova (Milano-Bicocca) and Marc Andre Jebramcik (CERN) explained how ENUBET and NuTAG have been combined to optimise a “tagged” neutrino beam for cross-section measurements, where the neutrino flavour is known by studying the decay process of its parent hadron. In the realm of quantum chromodynamics, SPS experiments with lead ions (the new NA60+ experiment) and light ions (NA61/SHINE) are aiming to decode the phases of nuclear matter in the non-perturbative regime. Meanwhile, AMBER is proposing to determine the charge radii of kaons and pions, and to perform meson spectroscopy, in particular with kaons.

The LHCspin collaboration presented a plan to open a new frontier of spin physics at the LHC building upon the successful operation of the SMOG2 gas cell that is upstream of the LHCb detector. Studying collective phenomena at the LHC in this way could probe the structure of the nucleon in a so-far little-explored kinematic domain and make use of new probes such as charm mesons, said Pasquale Di Nezza (INFN Frascati).

Measuring moments

The TWOCRYST collaboration aims to demonstrate the feasibility and the performance of a possible fixed-target experiment in the LHC to measure the electric and magnetic dipole moments (EDMs and MDMs) of charmed baryons, offering a complementary probe of searches for CP violation in the Standard Model. The technique would use two bent crystals: the first to deflect protons from the beam halo onto a target, with the resulting charm baryons then deflected by the second (precession) crystal onto a detector such as LHCb, while at the same time causing their spins to precess in the strong electric and magnetic fields of the deformed crystal lattice, explained Pascal Hermes (CERN).

Several projects to detect axion-like particles were discussed, including a dedicated superconducting cavity for heterodyne detection being jointly developed by PBC and CERN’s Quantum Technology Initiative. Atom interferometry is another subject of common interest, with PBC demonstrating the technical feasibility of installing an atom interferometer with a baseline of 100 m in one of the LHC’s access shafts. Other new ideas ranged from the measurement of molecular EDMs at ISOLDE to measuring the gravitational field of the LHC beam.

With the continued determination to fully exploit the scientific potential of the CERN accelerator complex and infrastructure for projects that are complementary to high-energy-frontier colliders testified by many fruitful discussions, the annual meeting concluded as a resounding success. The PBC community ended the workshop by thanking co-founder Claude Vallée (CPPM Marseille), who retired as a PBC convener after almost a decade of integral work, and welcomed Gunar Schnell (Ikerbasque and UPV/EHU Bilbao), who will take over as convener.

The post Boosting physics with precision and intensity appeared first on CERN Courier.

]]>
Meeting report Physics Beyond Colliders' annual workshop convened 175 physicists at CERN to provide updates on ongoing projects and explore new proposals. https://cerncourier.com/wp-content/uploads/2024/05/CCMayJun24_FN_PBC.jpg
SHiP to chart hidden sector https://cerncourier.com/a/ship-to-chart-hidden-sector/ Fri, 03 May 2024 12:58:02 +0000 https://preview-courier.web.cern.ch/?p=110615 In March, CERN selected a new experiment called SHiP to search for hidden particles using high-intensity proton beams from the SPS.

The post SHiP to chart hidden sector appeared first on CERN Courier.

]]>
Layout of the SHiP experiment

In March, CERN selected a new experiment called SHiP to search for hidden particles using high-intensity proton beams from the SPS. First proposed in 2013, SHiP is scheduled to operate in the North Area’s ECN3 hall from 2031, where it will enable searches for new physics at the “coupling frontier” complementary to those at high-energy and precision-flavour experiments.

Interest in hidden sectors has grown in recent years, given the absence of evidence for non-Standard Model particles at the LHC, yet the existence of several phenomena (such as dark matter, neutrino masses and the cosmic baryon asymmetry) that require new particles or interactions. It is possible that the reason why such particles have not been seen is not that they are too heavy but that they are light and extremely feebly interacting. With such small couplings and mixings, and thus long lifetimes, hidden particles are extremely difficult to constrain. Operating in a beam-dump configuration that will produce copious quantities of photons and charm and beauty hadrons, SHiP will generically explore hidden-sector particles in the MeV to multiple-GeV mass range.

Optimised searching

SHiP is designed to search for signatures of models with hidden-sector particles, which include heavy neutral leptons, dark photons and dark scalars, by full reconstruction and particle identification of Standard Model final states. It will also search for light–dark-matter scattering signatures via the direct detection of atomic–electron or nuclear recoils in a high-density medium, and is optimised to make measurements of tau neutrinos and of neutrino-induced charm production by all three neutrinos species.

The experiment will be built in the existing TCC8/ECN3 experimental facility in the North Area. The beam-dump setup consists of a high-density proton target located in the target bunker, followed by a hadron stopper and a muon shield. Sharing the SPS beam time with other fixed-target experiments and the LHC should allow around 6 × 1020 protons on target to be produced during 15 years of nominal operation. The detector itself consists of two parts that are designed to be sensitive to as many physics models and final states as possible. The scattering and neutrino detector will search for light dark matter and perform neutrino measurements. Further downstream is the much larger hidden-sector decay spectrometer, which is designed to reconstruct the decay vertex of a hidden-sector particle, measure its mass and provide particle identification of the decay products in an extremely low-background environment.

One of the most critical and challenging components of the facility is the proton target, which has to sustain an energy of 2.6 MJ impinging on it every 7.2 s. Another is the muon shield. To control the beam-induced background from muons, the flux in the detector acceptance must be reduced by some six orders of magnitude over the shortest possible distance, for which an active muon shield entirely based on magnetic deflection has been developed.

The focus of the SHiP collaboration now is to produce technical design reports. “Given adequate funding, we believe that the TDR phase for BDF/SHiP will take us about three years, followed by production and construction, with the aim to commission the facility towards the end of 2030 and the detector in 2031,” says SHiP spokesperson Andrey Golutvin of Imperial College London. “This will allow up to two years of data-taking during Run 4, before the start of Long Shutdown 4, which would be the obvious opportunity to improve or consolidate, if necessary, following the experience of the first years of data taking.”

The decision to proceed with SHiP concluded a process that took more than a year, involving the Physics Beyond Colliders study group and the SPS and PS experiments committee. Two other experiments, HIKE and SHADOWS, were proposed to exploit the high-intensity beam from the SPS. Continuing the successful tradition of kaon experiments in the ECN3 hall, which currently hosts the NA62 experiment, HIKE (high-intensity kaon experiment) proposed to search for new physics in rare charged and neutral kaon decays while also allowing on-axis searches for hidden particles. For SHADOWS (search for hidden and dark objects with the SPS), which would have taken data concurrently with HIKE when the beamline is operated in beam-dump mode, the focus was low-background searches for off-axis hidden-sector particles in the MeV-GeV region.

“In terms of their science, SHiP and HIKE/SHADOWS were ranked equally by the relevant scientific committees,” explains CERN director for research and computing Joachim Mnich. “But a decision had to be made, and SHiP was a strategic choice for CERN.”

The post SHiP to chart hidden sector appeared first on CERN Courier.

]]>
News In March, CERN selected a new experiment called SHiP to search for hidden particles using high-intensity proton beams from the SPS. https://cerncourier.com/wp-content/uploads/2024/05/CCMayJun24_NA_SHiP_feature.jpg
Probing resonant production of Higgs bosons https://cerncourier.com/a/probing-resonant-production-of-higgs-bosons/ Fri, 19 Apr 2024 06:11:22 +0000 https://preview-courier.web.cern.ch/?p=110451 No known particle is heavy enough to decay into two Higgs bosons. The resonant production of Higgs pairs would therefore be clear evidence for new physics.

The post Probing resonant production of Higgs bosons appeared first on CERN Courier.

]]>
CMS figure 1

Besides being a cornerstone of the Standard Model (SM), the Higgs boson (H) opens a very powerful path to search for physics beyond the SM. In particular, in the SM there are no particles that are sufficiently heavy to decay into two Higgs bosons. Therefore, if we observe the resonant production of HH pairs, for example, we have clear evidence for the existence of new physics, as predicted by models with an extended Higgs sector.

The CMS collaboration recently conducted a search for the resonant production of Higgs-boson pairs. The analysis combines six different analyses and five HH final states, targeting H decays into b quarks, photons, τ leptons and W bosons. As figure 1 shows for a spin-0 resonance (denoted X), the combination of the decay modes covers a wide mass range, from 280 GeV to 4 TeV. While no resonant signal is observed, stringent upper limits on the pp → X → HH cross section are obtained, which reach values of about 0.2 fb at the highest masses. These are the strongest observed limits to date for a scalar mass below 320 GeV or above 800 GeV.

CMS figure 2

One possible candidate for such a resonance is a heavy scalar from an extended Higgs sector, as predicted in the Minimal Supersymmetric Standard Model (MSSM), which features three neutral and two charged Higgs bosons. Figure 2 shows the excluded region of the model parameter tanβ (the ratio of vacuum expectation values of the two underlying Higgs doublets) as a function of the mass of the CP-odd Higgs boson, mA. The HH combination is sensitive well beyond tanβ = 6 just above the HH threshold, and its exclusion extends beyond mA = 600 GeV, outperforming the lower limits from the searches for single heavy-Higgs-boson production (also shown) in this mass range. Compared to other direct searches, there is unique sensitivity for mA > 450 GeV and tanβ < 5.

This result is part of a recent comprehensive review article on resonant Higgs-boson production searches by the CMS collaboration, covering the VH, HH and YH final states, with V denoting a W or Z boson and Y representing an additional new boson.

The post Probing resonant production of Higgs bosons appeared first on CERN Courier.

]]>
News No known particle is heavy enough to decay into two Higgs bosons. The resonant production of Higgs pairs would therefore be clear evidence for new physics. https://cerncourier.com/wp-content/uploads/2019/06/CMS-2.jpg
FCC: the physics case https://cerncourier.com/a/fcc-the-physics-case/ Wed, 27 Mar 2024 18:15:06 +0000 https://preview-courier.web.cern.ch/?p=110303 By providing considerable advances in sensitivity, precision and energy reach, the Future Circular Collider is the perfect vehicle with which to navigate the new physics landscape.

Results from the LHC so far have transformed the particle-physics landscape. The discovery of the Higgs boson with a mass of 125 GeV – in agreement with the prediction from earlier precision measurements at LEP and other colliders – has completed the long-predicted matrix of particles and interactions of the Standard Model (SM) and cleared the decks for a new phase of exploration. On the other hand, the lack of evidence for an anticipated supporting cast of particles beyond the SM (BSM) gives no clear guidance as to what form this exploration may take. For the first time since the Fermi theory almost a century ago, particle physicists are voyaging into completely uncharted territory, where our only compass is the certitude that the SM in isolation cannot account for all observations. This absence of theoretical guidance calls for a powerful experimental programme to push the frontiers of the unknown as far as possible.

The absence of LHC signals for new phenomena in the TeV range requires physicists to think differently about the open questions in fundamental physics. These include the abundance of matter over antimatter, the nature of dark matter, the quark and lepton flavour puzzle in general, and the non-zero nature of neutrino masses in particular. Solutions could lie at even higher energies, at the price of either an unnatural value of the electroweak scale or an ingenious but still elusive structure. Radically new physics scenarios have been devised, often involving light and very weakly coupled states. Neither the mass scale of this new physics (from meV to ZeV) nor the strength of its couplings to the SM (from 1 down to 10⁻¹² or less) is known, calling for a versatile exploration tool.

An illustration of a detector for FCC-hh

By providing considerable advances in sensitivity, precision and, eventually, energy far above the TeV scale, the integrated Future Circular Collider (FCC) programme is the perfect vehicle with which to navigate this new landscape. Its first stage FCC-ee, an e+e− collider operating at centre-of-mass energies ranging from below the Z pole (90 GeV) to beyond the top-quark pair-production threshold (365 GeV), would map the properties of the Higgs and electroweak gauge bosons and the top quark with precisions that are orders of magnitude better than today, acquiring sensitivity to the processes that led to the formation of the Brout–Englert–Higgs field a fraction of a nanosecond after the Big Bang. A comprehensive campaign of precision electroweak, QCD, flavour, tau, Higgs and top-quark measurements sensitive to tiny deviations from the predicted SM behaviour would probe energy scales far beyond the direct kinematic reach, while a subsequent pp collider (FCC-hh) would improve – by about an order of magnitude – the direct discovery reach for new particles. Both machines are strongly motivated in their own rights. Together, they offer the furthest physics reach of all proposed future colliders, and put the fundamental scalar sector of the universe centre-stage.

A scalar odyssey

The power of FCC-ee to probe the Higgs boson and other SM particles at much higher resolution would allow physicists to peer further into the cloud of quantum fluctuations surrounding them. The combination of results from previous lepton and hadron colliders at CERN and elsewhere has shown that electroweak symmetry breaking is consistent with its SM parameterisation, but its origin (and the origin of the Higgs boson itself) demands a deeper explanation. The FCC is uniquely placed to address this mystery via a combination of per-mil-level Higgs-boson and parts-per-million gauge-boson measurements, along with direct high-energy exploration, to comprehensively probe symmetry-based explanations for an electroweak hierarchy. In particular, measurements of the Higgs boson’s self-coupling at the FCC would test whether the electroweak phase transition was first- or second-order, revealing whether it could have played a role in setting the out-of-equilibrium condition necessary for creating the matter–antimatter asymmetry.

FCC-ee baseline design luminosity

While the Brout–Englert–Higgs mechanism nicely explains the pattern of gauge-boson masses, the peculiar structure of quark and lepton masses (as well as the quark mixing angles) is ad hoc within the SM and could be the low-energy imprint of some new dynamics. The FCC will probe such potential new symmetries and forces, in particular via detailed studies of b and τ decays and of b → τ transitions, and significantly extend knowledge of flavour physics. A deeper understanding of approximate conservation laws such as baryon- and lepton-number conservation (or the absence thereof in the case of Majorana neutrinos) would test the limits of lepton-flavour universality and violation, for example, and could reveal new selection rules governing the fundamental laws. Measuring the first- and second-generation Yukawa couplings will also be crucial to complete our understanding, with a potential FCC-ee run at the s-channel Higgs resonance offering the best sensitivity to the electron Yukawa coupling. Stepping back, the FCC would sharpen understanding of the SM as a low-energy effective field theory approximation of a deeper, richer theory by extending the reach of direct and indirect exploration by about one order of magnitude.

The unprecedented statistics from FCC-ee also make it uniquely sensitive to weakly coupled dark sectors and other candidates for new physics beyond the SM (such as heavy axions, dark photons and long-lived particles). Decades of searches across different experiments have pushed the mass of the initially favoured dark-matter candidate (weakly interacting massive particles, WIMPs) progressively beyond the reach of the highest-energy e+e− colliders. As a consequence, hidden sectors consisting of new particles that interact almost imperceptibly with the SM are rapidly gaining popularity as an alternative that could hold the answer not only to this problem but to a variety of others, such as the origin of neutrino masses. If dark matter is a doublet or a triplet WIMP, FCC-hh would cover the entire parameter space up to the upper mass limit for a thermal relic. The FCC could also host a range of complementary detector facilities to extend its capabilities for neutrino physics, long-lived particles and forward physics.

For the first time since the Fermi theory almost a century ago, particle physicists are voyaging into completely uncharted territory

Completing this brief, high-level summary of the FCC physics reach is its ability to probe the origins of exotic astrophysical and cosmological signals, such as stochastic gravitational waves from cosmological phase transitions or high-energy gamma rays. The underlying phenomena – a modified electroweak phase transition, confining new physics in a dark sector, or annihilating TeV-scale WIMPs – could arise from new physics that is directly accessible only to an energy-frontier facility.

Precision rules

Back in 2011, the original incarnation of a circular e+e− collider to follow the LHC (dubbed LEP3) was to create a high-luminosity Higgs factory operating at 240 GeV in the LEP/LHC tunnel, providing similar precision to that at a linear collider running at the same centre-of-mass energy for a much smaller price tag. Choosing to build a larger 80–100 km version not only allows the tunnel and infrastructure to be reused for a 100 TeV hadron collider, but extends the FCC-ee scientific reach significantly beyond the study of the Higgs boson alone. The unparalleled control of the centre-of-mass energy via the use of resonant depolarisation and the unrivalled luminosity of an FCC-ee with four interaction points would produce around 6 × 10¹² Z bosons, 2.4 × 10⁸ W pairs (offering ppm precision on the Z and W masses and widths), 2 × 10⁶ Higgs bosons and 2 × 10⁶ top-quark pairs (impossible to produce with e+e− collisions in the LEP/LHC tunnel) in as little as 16 years.
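A naive statistical estimate (a rough plausibility check, not an official FCC projection) already suggests why samples of this size put ppm-level precision within reach; the sketch below simply evaluates 1/√N for the yields quoted above.

```python
from math import sqrt

# Naive 1/sqrt(N) statistical scaling for the FCC-ee samples quoted above.
# This ignores all systematic effects and analysis details.
samples = {
    "Z bosons": 6e12,
    "W pairs": 2.4e8,
    "Higgs bosons": 2e6,
    "top-quark pairs": 2e6,
}

for name, n in samples.items():
    print(f"{name:>16}: N = {n:.1e}, 1/sqrt(N) = {1 / sqrt(n):.1e}")

# Z bosons: 1/sqrt(6e12) ~ 4e-7, i.e. sub-ppm statistical precision,
# consistent with the ppm-level targets quoted for the Z and W masses.
```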

FCC-hh discovery reach

From the Fermi interaction to the discovery of the W and Z, and from electroweak measurements to the discovery of the top quark and the Higgs boson, greater precision has operated as a route to discoveries. Any deviation from the SM predictions, interpreted as the manifestation of new contact interactions, will point to a new energy scale that will be explored directly in a later stage. One of the findings of the FCC feasibility study is the richness of the FCC-ee Z-pole run, which promises comprehensive measurements of the Z lineshape and many electroweak observables with a 50-fold increase in precision, as well as direct and uniquely precise determinations of the electromagnetic and strong coupling constants. The comparison between these data and commensurately precise SM predictions would severely constrain the existence of new physics via virtual loops or mixing, corresponding to a factor-of-seven increase in energy scale – a jump similar to that from the LHC to FCC-hh. The Z-pole run also enables otherwise unreachable flavour (b, τ) physics, studies of QCD and hadronisation, searches for rare or forbidden decays, and exploration of the dark sector.

After the Z-pole run, the W boson provides a further precision tool at FCC-ee. Its mass is both precisely measured and precisely calculable in the SM, making their comparison a test of utmost importance. In the planned WW-threshold run, current knowledge can be improved by more than an order of magnitude, testing the SM as well as a plethora of new-physics models at the quantum-loop level. Together, the very-high-luminosity Z and W runs will determine the gauge-boson sector with the sharpest precision ever.

Going to its highest energy, FCC-ee would explore physics associated with the heaviest known particle, the top quark, whose mass plays a fundamental role in predictions of SM processes and in the cosmological fate of the vacuum. An improvement in the precision of the top-quark mass by more than an order of magnitude will go hand in hand with a significantly improved determination of the strong coupling constant, and is crucial for precision exploration beyond the SM.

High-energy synergies

A later FCC-hh stage would complement and substantially extend the FCC-ee physics reach in nearly all areas. Compared to the LHC, it would increase the energy for direct exploration by a factor of seven, with the potential to observe new particles with masses up to 40 TeV (see “Direct exploration” figure). Should FCC-hh directly find a signal of beyond-SM physics, the precision measurements from FCC-ee will be essential to pinpoint its microscopic origin. Indirectly, FCC-hh will be sensitive to energies of around 100 TeV, for example in the tails of Drell–Yan distributions. The copious production of SM particles, including the Higgs boson, at large transverse momentum allows measurements to be performed in kinematic regions with optimal signal-to-background ratios and reduced experimental systematic uncertainties, testing the existence of effective contact interactions in ways complementary to what is accessible at lepton colliders. Dedicated FCC-hh experiments, for instance with forward detectors, would further enrich the new-physics opportunities and hunt for long-lived and millicharged particles.

Minimal potential physics programme for FCC-ee

Further increasing the synergies between FCC-ee and FCC-hh is the importance of operating four detectors (instead of two as in the conceptual design study), which has led to an optimised ring layout with a new four-fold periodicity. With four interaction points, FCC-ee provides a net gain in integrated luminosity for a given physics outcome. It also allows for a range of detector solutions to cover all physics opportunities, strengthens the robustness of systematic-uncertainty estimates and discovery claims, and opens several key physics targets that are tantalisingly close (but missed) with only two detectors. The latter include the first 5σ observation of the Higgs-boson self-coupling, and the opportunity to access the Higgs-boson coupling to electrons – one of FCC-ee’s toughest physics challenges.

No physics case for FCC would be complete without a thorough assessment of the corresponding detector challenges. A key deliverable of the feasibility study is a complete set of specifications ensuring that calorimeters, tracking and vertex detectors, muon detectors, luminometers and particle-identification devices meet the physics requirements. In the context of a Higgs factory operating at the ZH production threshold and above, these requirements have already been studied extensively for proposed linear colliders. However, the different experimental environment and the huge statistics of FCC-ee demand that they are revisited. The exquisite statistical uncertainties anticipated on key electroweak measurements at the Z peak and at the WW threshold call for a superb control of the systematic uncertainties, which will put considerable demands on the acceptance, construction quality and stability of the detectors. In addition, the specific discovery potential for very weakly coupled particles must be kept in mind.

The software and computing demands of FCC are an integral element of the feasibility study. From the outset, the driving consideration has been to develop a single software “ecosystem” adaptable to any future collider and usable by any future experiment, based on the best software available. Some tools, such as flavour tagging, significantly exceed the performance of algorithms previously used for linear-collider studies, but there is still much work needed to bring the software to the level required by the FCC-ee. This includes the need for more accurate simulations of beam-related quantities, the machine-detector interface and the detectors themselves. In addition, various reconstruction and analysis tools for use by all collaborators need to be developed and implemented, reaping the benefits from the LHC experience and past linear-collider studies, and computing resources for regular simulated data production need to be evaluated.

Powerful plan

A fortunate alignment of stars – from the initial concept in 2011/2012 of a 100 km-class electron–positron collider in the same tunnel as a future 100 TeV proton–proton collider, to the 2020 update of the European strategy for particle physics endorsing the FCC feasibility study as a top priority for CERN and its international partners – provides the global high-energy physics community with its most powerful exploration tool. FCC-ee offers ideal conditions (luminosity, centre-of-mass energy calibration, multiple experiments and possibly monochromatisation) for the study of the four heaviest particles of the SM, with a flurry of opportunities for precision measurements, searches for rare or forbidden processes, and the possible discovery of feebly coupled particles. It is also the perfect springboard for a 100 TeV hadron collider, for which it provides a great part of the infrastructure. Strongly motivated in their own rights, together these two machines offer a uniquely powerful long-term plan for 21st-century particle physics.

Magnetic monopoles where art thou? https://cerncourier.com/a/magnetic-monopoles-where-art-thou/ Wed, 17 Jan 2024 09:56:40 +0000 https://preview-courier.web.cern.ch/?p=110058 ATLAS researchers have set new upper cross-section limits and lower mass limits on magnetic monopoles and high-electric-charge objects.

ATLAS figure 1

Magnetic monopoles are hypothetical particles that possess a magnetic charge. In 1864 James Clerk Maxwell assumed that magnetic monopoles didn’t exist because no one had ever observed one. Hence, he did not incorporate the concept of magnetic charges in his unified theory of electricity and magnetism, despite their being fully consistent with classical electrodynamics. Interest in magnetic monopoles intensified in 1931 when Dirac showed that quantum mechanics can accommodate magnetic charges, g, allowed by the quantisation condition g = Ne/(2α) = NgD, where e is the elementary electric charge, α is the fine structure constant, gD is the fundamental magnetic charge and N is an integer. Grand unified theories predict very massive magnetic monopoles, but several recent extensions of the Standard Model feature monopoles in a mass range accessible at the LHC. Scientists have explored cosmic rays, particle collisions, polar volcanic rocks and lunar materials in their quest for magnetic monopoles, yet no experiment has found conclusive evidence thus far.
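As a quick numerical illustration of the quantisation condition (a back-of-the-envelope check, not part of the ATLAS analysis), the fundamental magnetic charge works out to roughly 68.5 times the elementary electric charge – the equivalence used below when comparing monopoles to high-electric-charge objects:

```python
# Evaluate the Dirac charge g_D = e/(2*alpha) in units of the elementary
# charge e, using an approximate value of the fine-structure constant.
ALPHA = 1 / 137.036  # fine-structure constant (approximate)

g_D = 1 / (2 * ALPHA)  # fundamental magnetic charge, in units of e
print(f"g_D ~ {g_D:.1f} e")        # -> g_D ~ 68.5 e
print(f"2*g_D ~ {2 * g_D:.0f} e")  # -> ~137 e, the 2gD charge also probed
```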

Signature strategy

The ATLAS collaboration recently reported the results of its search for magnetic monopoles using the full LHC Run 2 dataset recorded in 2015–2018. Magnetic-charge conservation dictates that magnetic monopoles are stable and would be created in pairs of oppositely charged particles. Point-like magnetic monopoles could be produced in proton–proton collisions via two mechanisms: Drell–Yan, in which a virtual photon from the collision creates a magnetic monopole pair; or photon fusion, whereby two virtual photons, one emitted by each proton, interact to create a magnetic monopole pair. Dirac’s quantisation condition implies that a 1gD monopole would ionise matter in a similar way to a high-electric-charge object (HECO) of charge 68.5e. Hence, magnetic monopoles and HECOs are expected to be highly ionising. In contrast to the behaviour of electrically charged particles, however, the Lorentz force on a monopole in the solenoidal magnetic field encompassing the ATLAS inner tracking detector would cause it to be accelerated in the direction of the field rather than in the orthogonal plane – a trajectory that precludes the application of usual track-reconstruction methods. The ATLAS detection strategy therefore relies on characterising the highly ionising signature of magnetic monopoles and HECOs in the electromagnetic calorimeter and in the transition radiation tracker.
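The unusual trajectory follows from the duality of the Lorentz force. Schematically (a textbook relation, quoted here only to make the geometry explicit):

$$ \mathbf{F}_{\mathrm{electric}} = q\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right) \quad\longrightarrow\quad \mathbf{F}_{\mathrm{magnetic}} = g\left(\mathbf{B} - \frac{\mathbf{v}}{c^{2}}\times\mathbf{E}\right), $$

so in the axial field of the solenoid a monopole is accelerated along the beam direction, following a parabola in the r–z plane rather than the usual helix in the transverse plane.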

This is the first ATLAS analysis to consider the photon-fusion production mechanism

The ATLAS search considered magnetic monopoles of magnetic charge 1gD and 2gD, and HECOs of 20e, 40e, 60e, 80e and 100e of both spin-0 and spin-½ in the mass range 0.2–4 TeV. ATLAS is not sensitive to higher charge monopoles or HECOs because they stop before the calorimeter due to their higher ionisation. Since particles in the considered mass range are too heavy to produce significant electromagnetic showers in the calorimeter, their narrow high-energy deposits are readily distinguished from the broader lower-energy ones of electrons and photons. Events with multiple high-energy deposits in the transition radiation tracker aligned with a narrow high-energy deposit in the calorimeter are therefore characteristic of magnetic monopoles and HECOs.

Random combinations of rare processes, such as superpositions of high-energy electrons, could potentially mimic such a signature. Since such rare processes cannot be easily simulated, the background in the signal region is estimated to be 0.15 ± 0.04 (stat) ± 0.05 (syst) events through extrapolation from the lower ionisation event yields in the data.

With no magnetic monopole or HECO candidate observed in the analysed ATLAS data, upper cross-section limits and lower mass limits on these particles were set at 95% confidence level. The Drell–Yan cross-section limits are approximately a factor of three better than those from the previous search using the 2015–2016 Run 2 data.

This is the first ATLAS analysis to consider the photon-fusion production mechanism, the results of which are shown in figure 1 (left) for spin-½ monopoles. ATLAS is also currently the most sensitive experiment to magnetic monopoles in the charge range 1–2gD, as shown in figure 1 (right), and to HECOs in the charge range of 20–100e. The collaboration is further refining search techniques and developing new strategies to search for magnetic monopoles and HECOs in both Run 2 and Run 3 data.

Golden anniversaries in Spain https://cerncourier.com/a/golden-anniversaries-in-spain/ Wed, 17 Jan 2024 09:40:27 +0000 https://preview-courier.web.cern.ch/?p=110082 Celebrating 50 years of the International Meeting on Fundamental Physics and the National Centre for Particle Physics, Astroparticles and Nuclear Physics.

The golden jubilees of the International Meeting on Fundamental Physics (IMFP23) and the National Centre for Particle Physics, Astroparticles and Nuclear Physics (CPAN) Days were celebrated from 2 to 6 October 2023 at Palacio de la Magdalena in Santander, Spain, organised by the Institute of Physics of Cantabria (IFCA). More than 180 participants representing the entire Spanish community in these disciplines, together with several international researchers, convened to foster cooperation between Spanish research groups and identify key priorities.

The congress started with parallel meetings on LHC physics, astroparticle physics, nuclear physics and theoretical physics. Two extra sessions were held, one covering technology transfer and the other discussing instrumentation R&D aimed at supporting the HL-LHC, future Higgs factories, and other developments in line with the European strategy for particle physics. The opening ceremony was followed by a lecture by Manuel Aguilar (CIEMAT), who gave an overview of the past 50 years of research in high-energy physics in Spain and the IMFP series. The first edition, held in Formigal (Spanish Pyrenees) in February 1973, was of great significance given the withdrawal of Spain from CERN in 1969, which put high-energy physics in Spain in a precarious position. The participation of prestigious foreign scientists in the first and subsequent editions undoubtedly contributed to the return of Spain to CERN in 1983.

LHC physics was one of the central themes of the event, in particular the first results from Run 3 as well as improvements in theoretical precision and Spain’s contribution to the HL-LHC upgrades. Other discussions and presentations focused on the search for new physics and especially dark-matter candidates, as well as new technologies such as quantum sensors. The conference also reviewed the status of studies related to neutrino oscillations and mass measurements, as well as searches for neutrinoless double beta decay and high-energy neutrinos in astrophysics. Results from gamma-ray and gravitational-wave observatories were discussed, as well as prospects for future experiments.

The programme included plenary sessions devoted to nuclear physics (such as the use of quantum computing to study the formation of nuclei), QCD studies in collisions of very high-energy heavy ions and in neutron stars, and nuclear reactions in storage rings. New technologies applied in nuclear and high-energy physics and their most relevant applications, especially in medical physics, complemented the programme alongside an overview of observational cosmology.

Roundtable discussions focused on grants offered by the European Research Council, R&D strategies and, following a clear presentation of the perspectives of future accelerators by ECFA chair Karl Jakobs (University of Freiburg), possible Spanish strategies for future projects, with the participation of industry representatives. The congress also covered science policy, with the participation of the national programme manager Pilar Hernández (University of Valencia).

Prior to the opening of the conference, 170 students from various schools in Cantabria were welcomed to take part in an outreach activity “A morning among scientists” organised by IFCA and CPAN, while Álvaro de Rújula (Boston University) gave a public talk on artificial intelligence. Finally, an excellent presentation by Antonio Pich (University of Valencia) on open questions in high-energy physics brought the conference to a close.

Scrutinising g-2 from all angles https://cerncourier.com/a/scrutinising-g-2-from-all-angles/ Thu, 11 Jan 2024 16:41:44 +0000 https://preview-courier.web.cern.ch/?p=109882 The proposed MUonE experiment at CERN aims to provide an independent determination of hadronic contributions to the anomalous magnetic moment of the muon.

The anomalous magnetic moment of the muon has long exhibited an intriguing tension between experiment and theory. The latest measurement from Fermilab is around 5σ higher than the official Standard Model prediction, but newer calculations based on lattice QCD reduce the gap significantly. Confusion surrounds how best to determine the leading hadronic correction to the muon’s magnetic moment: a process called hadronic vacuum polarisation (HVP), whereby a virtual photon briefly transforms into a hadronic blob before being reabsorbed.

While theorists are working hard to resolve this tension, the MUonE project aims to provide an independent determination of HVP using an intense muon beam from the CERN Super Proton Synchrotron. Whereas HVP is traditionally determined via hadron-production cross sections in e+e− data, or via theory-based estimates from recent lattice calculations, MUonE would make a very precise measurement of the shape of the differential cross section of μ+e− → μ+e− elastic scattering. This will enable a direct measurement of the hadronic contribution to the running of the electromagnetic coupling constant α – the quantity from which the HVP contribution can be extracted.
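The connection exploited by MUonE can be stated compactly. In the space-like master-integral form used in the MUonE literature (quoted here for orientation), the leading-order HVP contribution reads

$$ a_\mu^{\mathrm{HVP,\,LO}} = \frac{\alpha}{\pi}\int_0^1 \mathrm{d}x\,(1-x)\,\Delta\alpha_{\mathrm{had}}\!\big[t(x)\big], \qquad t(x) = -\frac{x^{2} m_\mu^{2}}{1-x} < 0, $$

so measuring the hadronic running Δα_had over the accessible range of space-like momentum transfer t translates directly into a determination of the HVP contribution.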

MUonE was first proposed in 2017 as part of the Physics Beyond Colliders initiative, and a test run was performed in 2018 to validate the basic detector concept. Following a decision by CERN in 2019 to carry out a three-week-long pilot run to validate the experimental idea, the MUonE team collected data at the M2 beamline from 21 August to 10 September 2023, using a 160 GeV/c muon beam fired at atomic electrons in a fixed target located in CERN’s North Area. The main purpose of the run was to verify the system’s engineering and to attempt a measurement of the leptonic corrections to the running of α, for which an analysis is in progress.

The full experiment would have 40 stations, each comprising a 1.5 cm-thick beryllium target followed by a tracking system that measures the scattering angles with high precision; further downstream lie an electromagnetic calorimeter and a muon detector. During the 2023 run, two MUonE stations followed by a calorimeter were installed, and a further tracking station without a target was placed upstream of the apparatus, dedicated to tracking the incoming muons. The next step is to install further detector stations in stages.

“The original schedule has been delayed, partly due to the COVID pandemic, and the final measurement is expected to be performed after Long Shutdown 3,” explains MUonE collaboration board chair Clara Matteuzzi (INFN Milano Bicocca). “A first stage with a scaled detector, comprising a few stations followed by a calorimeter and a muon identifier, which could provide a very first measurement of HVP with low accuracy and a demonstration of the whole concept before the full final run, is under consideration.”

The overall goal of the experiment is to gather around 3.5 × 10¹² elastic scattering events with an electron energy larger than 1 GeV, during three years of data-taking at the M2 beam. This would allow the team to achieve a statistical error of 0.3% and thus make MUonE competitive with the latest HVP results computed by other means. The challenge, however, is to keep the systematic error at the level of the statistical one.

“This successful test run gives MUonE confidence that the final goal can be reached, and we are very much looking forward to submitting the proposal for the full run,” adds Matteuzzi.

Kaon physics at a turning point https://cerncourier.com/a/kaon-physics-at-a-turning-point/ Tue, 21 Nov 2023 11:07:56 +0000 https://preview-courier.web.cern.ch/?p=109752 More than 100 kaon experts met at CERN in September for a hybrid workshop to take stock of the experimental and theoretical opportunities in kaon physics in the coming decades.

Only two experiments worldwide are dedicated to the study of rare kaon decays: NA62 at CERN and KOTO at J-PARC in Japan. NA62 plans to conclude its efforts in 2025, and both experiments are aiming to reach important milestones on this timescale. The future experimental landscape for kaon physics beyond this date is by no means clear, however. With proposals for next-generation facilities such as HIKE at CERN and KOTO-II at J-PARC currently under scrutiny, more than 100 kaon experts met at CERN from 11 to 14 September for a hybrid workshop to take stock of the experimental and theoretical opportunities in kaon physics in the coming decades.

Kaons, bound states of a strange quark or antiquark and a lighter up or down quark, have played a central role in the development of the Standard Model (SM). Augusto Ceccucci (CERN) pointed out that many of the SM’s salient features – including flavour mixing, parity violation, the charm quark and CP violation – were discovered through the study of kaons, leading to the Cabibbo–Kobayashi–Maskawa (CKM) quark-mixing matrix. The full particle content of the SM was finally established experimentally at CERN with the Higgs-boson discovery in 2012, but many open questions remain.

The kaon’s special role in this context was the central topic of the workshop. The study of rare kaon decays provides a unique sensitivity to new physics, up to scales higher than those directly accessible at collider experiments. In the SM, the rare decay of a charged or neutral kaon into a pion plus a pair of charged or neutral leptons is strongly suppressed, even more so than the similar rare B-meson decays. This is due to the absence at tree level of flavour-changing neutral-current interactions (e.g. s → d) in the SM. Such a transition can only proceed at loop level, involving the creation of at least one very heavy (virtual) electroweak gauge boson (figure “Decayed”, left). While this suppression constitutes a formidable experimental challenge in identifying the decay products amongst a variety of background signals, new-physics contributions could leave a measurable imprint through tree-level or loop-level effects. In contrast to rare B decays, the “gold-plated” rare kaon decay channels K+→π+νν and KL→π0νν do not suffer from large hadronic uncertainties, and are experimentally clean due to the limited number of possible decay channels.
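Schematically, the loop amplitude carries the familiar CKM and GIM structure (normalisation omitted; shown only to make the suppression explicit):

$$ \mathcal{A}(s \to d\,\nu\bar{\nu}) \;\propto\; \sum_{q\,=\,u,c,t} V_{qs}^{*}\,V_{qd}\;X\!\left(m_q^{2}/M_W^{2}\right), $$

where the loop function X grows with the internal quark mass, so the top quark dominates and the amplitude is suppressed by the tiny CKM factor V*tsVtd. This is what makes the gold-plated modes both extremely rare and theoretically clean.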

Figure “Decayed”

The charged-kaon decay is currently being studied at NA62, and a measurement of its branching ratio with a precision of 15% is expected by 2025. However, as highlighted by NA62 physics coordinator Karim Massri (Lancaster University), to improve on this measurement and thus significantly increase the likelihood of a discovery, the experimental uncertainty must be reduced to the level of the theoretical prediction, i.e. 5%. This can only be achieved with a next-generation experiment. HIKE, a proposed high-intensity kaon factory at CERN currently under approval, would reach the 5% precision goal on the measurement of K+→π+νν during its first phase of operation. Afterwards, a second phase with a neutral KL beam, aiming at the first observation of the very rare decays KL→π0ℓ+ℓ−, is foreseen. With a setup and detectors optimised for the measurement of the most challenging processes, the HIKE programme would be able to achieve unprecedented precision on most K+ and KL decays.

For KOTO, Koji Shiomi and Hajime Nanjo reported on the experimental progress on KL→π0νν and presented a new bound on its branching ratio. A planned phase two of KOTO, if funded, aims to measure the branching ratio with a precision of 20%. Although principally designed for the study of (rare) bottom-quark decays, LHCb can also provide information about the rare decays of the shorter-lived KS. Radoslav Marchevski (EPFL Lausanne) presented the status of, and prospects for, a proposed LHCb Phase-II upgrade.

From the theory perspective, underpinned by impressive new perturbative, lattice-QCD and effective-field-theory calculations presented at the workshop, the planned measurement of K+→π+νν at HIKE clearly has discovery potential, remarked Gino Isidori (University of Zurich). Together with other rare decay channels that would be measured by HIKE, such as KL→μ+μ−, KL→π0ℓ+ℓ− and K+→π+ℓ+ℓ−, added Giancarlo D’Ambrosio (INFN), combined global theory analyses of the experimental data will allow new physics to be discovered if it lies within the experiment’s reach, or solid constraints to be placed upon it.

A decision on HIKE and other proposed experiments in CERN’s North Area will take place in early December.

Setting sail for HEP in Hamburg https://cerncourier.com/a/setting-sail-for-hep-in-hamburg/ Thu, 09 Nov 2023 14:36:36 +0000 https://preview-courier.web.cern.ch/?p=109689 The intense programme of EPS-HEP 2023 underlined the vibrancy and diversity of the field.

The European Physical Society Conference on High Energy Physics (EPS-HEP), which took place in Hamburg from 21 to 25 August, attracted around 900 physicists in person and online to discuss a plethora of topics and results. An intense programme underlined both the vibrancy and diversity of the field, including the first evidence for a stochastic gravitational-wave background as well as the latest measurement of the anomalous magnetic moment of the muon – the latter sparking many discussions that continued during the breaks.

The participants were treated to many LHC Run 2 legacy results, as well as brand-new ones using freshly analysed Run 3 data. A large chunk of these results comprised precision measurements of the Higgs boson in view of gaining a deeper understanding of the origin of electroweak symmetry breaking. As the Higgs boson is deeply connected to many open questions potentially linked to physics beyond the Standard Model (SM), such as the origin of particle masses and flavour, studying it in the context of effective field theory is a particularly hot topic. A rich potential programme of “simplified” models for Higgs physics that can better quantify the reach of the LHC and offer new observables is also under development.

New frontiers

The ATLAS and CMS collaborations presented no fewer than 37 and 27 new preliminary results, respectively. Besides Higgs-sector physics, the experiments revealed their latest results of searches for physics beyond the SM, including new limits on the existence of supersymmetric and dark-matter particles. At the intensity frontier, the latest search for the ultra-rare decay K+ → π+e+e−e+e− from the NA62 experiment placed upper limits on dark-boson candidate masses, underlining the powerful complementarity between CERN’s fixed-target and LHC programmes. The Belle II collaboration presented first evidence of the decay B+ → K+νν, as well as the result of their R(X) = Br(B → Xτντ)/Br(B → Xℓν) measurement – the first at a B factory. The LHCb collaboration also presented an update of its recent R(D*) = Br(B → D*τντ)/Br(B → D*ℓν) measurement. Another highlight was LHCb’s observation of the hypernuclei antihypertriton and hypertriton.

Intense discussions took place on novel and potentially game-changing accelerator concepts

The state of the art in neutrino physics was presented, covering the vast landscape of experiments seeking to shed light on the three-flavour paradigm as well as the origin of the neutrino masses and mixings. So far, analyses by T2K and NOvA show a weak preference for a normal mass ordering, while the inverted mass ordering is not yet ruled out. With a joint analysis between T2K and NOvA in progress, updates are expected next year. At CERN the FASER experiment, which made the first observation of muon neutrinos at a collider earlier this year, presented the first observation of collider electron neutrinos. Looking outwards, a long-awaited discovery of galactic neutrinos was presented by IceCube.

The status of the FCC feasibility study was presented, along with that of other proposed colliders that could serve as Higgs factories. The overarching need for the circular- and linear-collider communities to join forces, and to use all the knowledge gained to get at least one accelerator approved, was reflected in the discussions and many talks, as were the sustainability and energy consumption of detector and accelerator concepts. Intense discussions took place on novel and potentially game-changing accelerator concepts, such as energy-recovery technologies and plasma acceleration. While not yet ready to be used on a large scale, they promise to have a big impact on the way accelerators are built in the future. Beyond colliders, the community also looked ahead to the DUNE and Hyper-Kamiokande experiments, and to proposed experiments such as the Einstein Telescope and those searching for axions.

A rich social programme included a public lecture by Andreas Hoecker (CERN) about particle physics at the highest energies, a concert with an introduction to the physics of the organ by Wolfgang Hillert (University of Hamburg), as well as an art exhibition called “High Energy” and a Ukrainian photo exhibition depicting science during times of war.

The next EPS-HEP conference will take place in 2025 in Marseille.

We need to talk about CERN’s future https://cerncourier.com/a/we-need-to-talk-about-cerns-future/ Fri, 03 Nov 2023 12:29:52 +0000 https://preview-courier.web.cern.ch/?p=109636 Fighting for the most adequate words and pictures that give meaning to what we are doing is crucial to keep the community focused and motivated for the long march ahead, says Urs Wiedemann.

In big science, long-term planning for future colliders is a careful process of consensus building. Particle physics has successfully institutionalised this discourse in the many working groups and R&D projects that contribute, for example, to the European strategy updates and the US Snowmass exercise. But long timescales and political dimensions can render these processes impersonal and uninspiring. Ultimately, a powerful vision that captures the imagination of current and future generations must go beyond consensus building; it should provide a crisp, common intellectual denominator of how we talk about what we are doing and why we are doing it.

A lack of uniqueness

For several decades, the hunt for the Higgs boson was central to such a captivating narrative. Today, 11 years after its discovery, all the other fundamental questions remain open, and questions about the precise nature of the Higgs mechanism have become newly accessible to experimentation. What the field faces today is not a lack of long-term challenges and opportunities, but the lack of a single scientific hypothesis behind which a broad and intrinsically heterogeneous international research community could most easily be assembled.

We need to learn how to communicate this reality more effectively. Particle physics, even if no longer driven by the hypothesis of a particular particle within guaranteed experimental reach, continues to have a well-defined aim in understanding the fundamental composition of the universe. From discussions, however, I sense that many of my colleagues find it harder to develop long-term motivation in this more versatile situation. As a theorist I know that nature does not care about the words I attach to its equations. And yet, our research community is not immune to the motivational power of snappy formulations.

Urs Wiedemann

The exploration of the Higgs sector provides a two-decade-long perspective for future experimentation at the LHC and its high-luminosity upgrade (HL-LHC). However, any thorough exploration of the Brout–Englert–Higgs mechanism exceeds the capabilities of the HL-LHC and motivates a new machine. Why is it then challenging to communicate to the greater public that collecting 3 ab⁻¹ of data by the end of the HL-LHC is more than filling in details on a discovery made in 2012? How can our narrative better reflect the evolving emphasis of our research? Should we talk, for example, about the Higgs’ self-interaction as a “fifth force”? Or would this be misleading cheerleader language, given that the Higgs self-coupling, unlike the other forces in the Standard Model Lagrangian, is not gauged? Whatever the best pitch is, it deserves to be sharpened within our community and more homogeneously disseminated.

Another compelling narrative for a future collider is the growing synergy with other fields. In recent decades, space-based astrophysical observatories have started to reach a complexity and cost comparable to the LHC. In addition, there is a multitude of smaller astrophysical observatories. We should welcome the important complementarities between lab-based experimental and space-based observational approaches. In the case of dark matter, for example, there are strong generic reasons to expect that collider experiments can constrain (and finally establish) the microscopic nature of dark matter and that the solution lies in experimentally uncharted territory, such as either very massive or very feebly interacting particles.

What makes the physics of the infinitesimally small exciting for the public is also what makes it difficult to communicate

What makes the physics of the infinitesimally small exciting for the public is also what makes it difficult to communicate, starting with subtle differences in the use of everyday language. For a lay audience, for instance, a “search for something” is easy to picture, and not finding the something is a failure. In physics, however, particles can reveal themselves in quantum fluctuations even if the energy needed to produce them can’t be reached. Far from being a failure, not-finding with increased precision becomes an intrinsic mark of progress. When talking to non-scientists, should we try to bring to the forefront such unique and subtle features of our search logic? Could this be a safeguard against the foes of our science who misrepresent the perspectives and consequences of our research by naively equating any unconfirmed hypothesis with failure? Or is this simply too subtle and intellectual to be heard?

Clearly, in our everyday work at CERN, getting the numbers out is the focus. But going beyond this operational attitude and fighting for the most adequate words and pictures that give meaning to what we are doing is crucial to keep the community focused and motivated for the long march ahead.

• Adapted from text originally published in the CERN Staff Association newsletter.

Getting to the bottom of muon g-2 https://cerncourier.com/a/getting-to-the-bottom-of-muon-g-2/ Fri, 03 Nov 2023 12:10:42 +0000 https://preview-courier.web.cern.ch/?p=109639 The sixth plenary workshop of the Muon g-2 Theory Initiative covered the status and strategies for future improvements of the Standard Model prediction for the anomalous magnetic moment of the muon.

Muon g-2 Theory Initiative

About 90 physicists attended the sixth plenary workshop of the Muon g-2 Theory Initiative, held in Bern from 4 to 8 September, to discuss the status and strategies for future improvements of the Standard Model (SM) prediction for the anomalous magnetic moment of the muon. The meeting was particularly timely given the recent announcement of the results from runs two and three of the Fermilab g-2 experiment (Muon g-2 update sets up showdown with theory), which reduced the uncertainty of the world average to 0.19 ppm – a result in dire need of an SM prediction of commensurate precision. The main topics of the workshop were the two hadronic contributions to g-2, hadronic vacuum polarisation (HVP) and hadronic light-by-light scattering (HLbL), evaluated either with a lattice-QCD or a data-driven approach.

Hadronic vacuum polarisation

The first one-and-a-half days were devoted to the evaluation of HVP – the largest QCD contribution to g-2, whereby a virtual photon briefly transforms into a hadronic “blob” before being reabsorbed – from e+e− data. The session started with a talk from the CMD-3 collaboration at the VEPP-2000 collider, whose recent measurement of the e+e− → π+π− cross section generated shock waves earlier this year by disagreeing (at the level of 2.5–5σ) with all previous measurements used in the Theory Initiative’s 2020 white paper. The programme also featured a comparison with results from the earlier CMD-2 experiment, and a report from seminars and panel discussions organised by the Theory Initiative in March and July on the details of the CMD-3 result. While concerns remain regarding the estimate of certain systematic effects, no major shortcomings could be identified.

Further presentations from BaBar, Belle II, BESIII, KLOE and SND detailed their plans for new measurements of the 2π channel, which in the case of BaBar and KLOE involve large data samples never analysed before for this measurement. Emphasis was put on the role of radiative corrections, including a recent paper by BaBar on additional radiation in initial-state-radiation events and, in general, the development of higher-order Monte Carlo generators. Intensive discussions reflected a broad programme to clarify the extent to which tensions among the experiments can be due to higher-order radiative effects and structure-dependent corrections. Finally, updated combined fits were presented for the 2π and 3π channels, for the former assessing the level of discrepancy among datasets, and for the latter showing improved determinations of isospin-breaking contributions.

CMD-3 generated shock waves by disagreeing with all previous measurements at the level of 2.5-5σ

Six lattice collaborations (BMW, ETMC, Fermilab/HPQCD/MILC, Mainz, RBC/UKQCD, RC*) presented updates on the status of their respective HVP programmes. For the intermediate-window quantity (the contribution of the region of Euclidean time between about 0.4–1.0 fm, making up about one third of the total), a consensus has emerged that differs from e+e−-based evaluations (prior to CMD-3) by about 4σ, while the short-distance window comes out in agreement. Plans for improved evaluations of the long-distance window and isospin-breaking corrections were presented, leading to the expectation of new, full computations for the total HVP contribution in addition to the BMW result in 2024. Several talks addressed detailed comparisons between lattice-QCD and data-driven evaluations, which will allow physicists to better isolate the origin of the differences once more results from each method become available. A presentation on possible beyond-SM effects in the context of the HVP contribution showed that it seems quite unlikely that new physics can be invoked to solve the puzzles.
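For orientation, in the time-momentum representation used on the lattice, the HVP contribution is an integral of the vector-current correlator C(t) weighted by a known QED kernel w(t), and the intermediate window simply inserts a smeared step function. In the widely used RBC/UKQCD convention (with t0 = 0.4 fm, t1 = 1.0 fm and smearing width Δ = 0.15 fm):

$$ a_\mu^{\mathrm{W}} = \int_0^{\infty} \mathrm{d}t\; w(t)\,C(t)\,\big[\Theta(t,t_0,\Delta) - \Theta(t,t_1,\Delta)\big], \qquad \Theta(t,t',\Delta) = \tfrac{1}{2}\Big[1 + \tanh\frac{t-t'}{\Delta}\Big]. $$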

Light-by-light scattering

The fourth day of the workshop was devoted to the HLbL contribution, whereby the interaction of the muon with the magnetic field is mediated by a hadronic blob connected to three virtual photons. In contrast to HVP, here the data-driven and lattice-QCD evaluations agree. However, reducing the uncertainty by a further factor of two is required in view of the final precision expected from the Fermilab experiment. A number of talks discussed the various contributions that feed into improved phenomenological evaluations, including sub-leading contributions such as axial-vector intermediate states as well as short-distance constraints and their implementation. Updates on HLbL from lattice QCD were presented by the Mainz and RBC/UKQCD groups, as were results on the pseudoscalar transition form factor by ETMC and BMW. The latter in particular allow cross checks of the numerically dominant pseudoscalar-pole contributions between lattice QCD and data-driven evaluations.

It is critical that the Theory Initiative work continues beyond the lifespan of the Fermilab experiment

On the final day, the status of alternative methods to determine the HVP contribution was discussed, first from the MUonE experiment at CERN, then from τ data (from Belle, CLEO-c, ALEPH and other experiments). First MUonE results could become available at few-percent precision with data taken in 2025, while a competitive measurement would follow after Long Shutdown 3. For the τ data, new input is expected from the Belle II experiment, but the critical concern continues to be control over isospin-breaking corrections. Progress in this direction from lattice QCD was presented by the RBC/UKQCD collaboration, together with a roadmap showing how, potentially in combination with data-driven methods, τ data could lead to a robust, complementary determination of the HVP contribution.

The workshop concluded with a discussion on how to converge on a recommendation for the SM prediction in time for the final Fermilab result, expected in 2025, including new information expected from lattice QCD, the BaBar 2π analysis and radiative corrections. A final decision on the procedure for updating the 2020 white paper is planned for the next plenary meeting in Japan in September 2024. In view of the long-term developments discussed at the workshop – not least the J-PARC Muon g-2/EDM experiment, due to start taking data in 2028 – it is critical that the work of the Theory Initiative continues beyond the lifespan of the Fermilab experiment, to maximise the amount of information on physics beyond the SM that can be inferred from precision measurements of the anomalous magnetic moment of the muon.

Looking forward at the LHC https://cerncourier.com/a/looking-forward-at-the-lhc/ Fri, 01 Sep 2023 12:55:49 +0000 https://preview-courier.web.cern.ch/?p=109206 The proposed Forward Physics Facility at CERN offers a broad programme ranging from neutrino, QCD and hadron-structure studies to beyond-the-Standard Model searches.

Proposed Forward Physics Facility

The Forward Physics Facility (FPF) is a proposed new facility to operate concurrently with the High-Luminosity LHC, housing several new experiments on the ATLAS collision axis. The FPF offers a broad, far-reaching physics programme ranging from neutrino, QCD and hadron-structure studies to beyond-the-Standard Model (BSM) searches. The project, which is being studied within the Physics Beyond Colliders initiative, would exploit the pre-existing HL-LHC beams and thus have minimal energy-consumption requirements.

On 8 and 9 June, the 6th workshop on the Forward Physics Facility was held at CERN and online. Attracting about 160 participants, the workshop was organised in sessions focusing on the facility design, the proposed experiments and physics studies, leaving plenty of time for discussion about the next steps.

Groundbreaking

Regarding the facility itself, CERN civil-engineering experts presented its overall design: a 65 m-long, 10 m-high-and-wide cavern connected to the surface via an 88 m-deep shaft, located 600 m from the ATLAS collision point in the SM18 area of CERN. A workshop highlight was the first results from a site-investigation study, whereby a 20 cm-diameter core was drilled at the proposed location of the FPF shaft to a depth of 100 m. The initial analysis of the core showed that the geological conditions are favourable for work in this area. Other encouraging studies towards confirming the FPF’s feasibility were FLUKA simulations of the expected muon flux in the cavern (the main background for the experiments), of the expected radiation level (shown to allow people to enter the cavern during LHC operations, with various restrictions), and of the possible effect of the excavation works on beam operations. One area where more work is required concerns the possible need to install a sweeper magnet in the LHC tunnel between ATLAS and the FPF to reduce the muon backgrounds.

Currently there are five proposed experiments to be installed in the FPF: FASER2 (to search for decaying long-lived particles); FASERν2 and AdvSND (dedicated neutrino detectors covering complementary rapidity regions); FLArE (a liquid-argon time projection chamber for neutrino physics and light dark-matter searches); and FORMOSA (a scintillator-based detector to search for millicharged particles). The three neutrino detectors offer complementary designs to exploit the huge number of TeV-energy neutrinos of all flavours that would be produced in such a forward-physics configuration. Four of these have smaller pathfinder detectors – FASER(ν), SND@LHC and milliQan – that are already operating during LHC Run 3. First results from these pathfinder experiments were presented at the CERN workshop, including the first-ever direct observation of collider neutrinos by FASER and SND@LHC, which provide a key proof of principle for the FPF. The latest conceptual design and expected performance of the FPF experiments were presented. Furthermore, first ideas on models to fund these experiments are in place and were discussed at the workshop.

In the past year, much progress has been made in quantifying the physics case of the FPF. It effectively extends the LHC with a “neutrino–ion collider” with complementary reach to the Electron–Ion Collider under construction in the US. The large number of high-energy neutrino interactions that will be observed at the FPF allows detailed studies of deep inelastic scattering to constrain proton and nuclear parton distribution functions (PDFs). Dedicated projections of the FPF reveal that uncertainties in light-quark PDFs could be reduced by up to a factor of two or even more compared to current models, leading to improved HL-LHC predictions for key measurements such as the W-boson mass.

High-energy electron and tau neutrinos at the FPF predominantly arise from forward charm production. This is initiated by gluon–gluon scattering involving one very low and one very high momentum fraction, with the former reaching down to Bjorken-x values of 10–7 – beyond the range of any other experiment. The same FPF measurements of forward charm production are relevant for testing different models of QCD at small x, which would be instrumental for predicting Higgs production at the proposed Future Circular Collider (FCC-hh). Improved modelling of forward charm production is also essential for understanding the backgrounds to diffuse astrophysical neutrinos at telescopes such as IceCube and KM3NeT. In addition, measurements of the ratio of electron to muon neutrinos at the FPF probe forward kaon-to-pion production ratios that could explain the so-called muon puzzle (a deficit of muons in simulations compared to measurements) affecting cosmic-ray experiments.
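
As a back-of-the-envelope illustration of that number (a rough 2 → 2 kinematic estimate with illustrative values, not an FPF calculation), a charm pair of invariant mass around 2mc ≈ 3 GeV produced at rapidity y ≈ 7.5 at √s = 14 TeV probes a momentum fraction of roughly

```latex
x_2 \;\sim\; \frac{2m_c}{\sqrt{s}}\,e^{-y}
\;\approx\; \frac{3\ \mathrm{GeV}}{14\ \mathrm{TeV}}\,e^{-7.5}
\;\approx\; 1\times10^{-7}
```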

The FPF experiments would also be able to probe a host of BSM scenarios in uncharted regions of parameter space, such as dark-matter portals, dark Higgs bosons and heavy neutral leptons. Furthermore, experiments at the FPF will be sensitive to the scattering of light dark-matter particles produced in LHC collisions, and the large centre-of-mass energy enables probes of models, such as quirks (long-lived particles that are charged under a hidden-sector gauge interaction), and some inelastic dark-matter candidates, which are inaccessible at fixed-target experiments. On top of that, the FPF experiments will significantly improve the sensitivity of the LHC to probe millicharged particles.

The June workshop confirmed both the unique physics motivation for the FPF and the excellent progress in technical and feasibility studies towards realising it. Motivated by these exciting prospects, the FPF community is now working on a Letter of Intent to submit to the LHC experiments committee as the next step.

Using top quarks to probe nature’s secrets https://cerncourier.com/a/using-top-quarks-to-probe-natures-secrets/ Fri, 01 Sep 2023 12:51:28 +0000 https://preview-courier.web.cern.ch/?p=109178 CMS recently performed a search for new physics using effective field theory, analysing data containing top quarks with additional final-state leptons.

CMS figure 1

Despite its exceptional success, we know that the Standard Model (SM) is incomplete. To date, the LHC has not found clear indications of physics beyond the SM (BSM), which might mean that the BSM energy scale lies above what can be directly probed at the LHC. An alternative way to probe BSM physics is through searches for off-shell effects, which can be done using the effective field theory (EFT) framework. By treating the SM Lagrangian as the lowest-order term in a perturbative expansion, EFT allows higher-dimension operators to be included in the Lagrangian while respecting the experimentally verified SM symmetries.

Operators

The CMS collaboration recently performed a search for BSM physics using EFT, analysing data containing top quarks with additional final-state leptons. The top quark is of particular interest because of its large mass, which results in a Higgs–Yukawa coupling of order unity; many BSM models connect the top-quark mass to large couplings to new physics. In the context of top-quark EFT, there are 59 operators in total at dimension six, controlled by so-called Wilson coefficients, 26 of which produce final-state leptons. These coefficients enter the model as corrections to the SM matrix element, with a first term corresponding to the interference between the SM and BSM contributions and a second term reflecting pure BSM effects.
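
Schematically, these two terms arise from squaring the corrected amplitude (a generic dimension-six expansion with Wilson coefficients ci and new-physics scale Λ; normalisation conventions vary between analyses):

```latex
|\mathcal{M}|^{2} \;=\; |\mathcal{M}_{\rm SM}|^{2}
\;+\; \sum_i \frac{c_i}{\Lambda^{2}}\,2\,\mathrm{Re}\!\left(\mathcal{M}_{\rm SM}^{*}\,\mathcal{M}_i\right)
\;+\; \sum_{i,j} \frac{c_i c_j}{\Lambda^{4}}\,\mathcal{M}_i^{*}\,\mathcal{M}_j
```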

The analysis was performed on the Run 2 proton–proton collision sample, corresponding to an integrated luminosity of 138 fb–1. It obtained limits on those 26 dimension-six coefficients, simulated at detector level with leading-order precision (plus an additional parton when possible), exploiting six final-state signatures with different numbers of top quarks and leptons: ttH, ttℓν, ttℓℓ, tℓℓq, tHq and tttt. The analysis splits the data into 43 discrete categories, based primarily on lepton multiplicity, total lepton charge, and total jet or b-quark jet multiplicities. The events are analysed as differential distributions in the kinematics of the final-state leptons and jets.

CMS figure 2

A statistical analysis is performed using a profiled likelihood to extract the 68% and 95% confidence intervals for all 26 Wilson coefficients, varying one coefficient while profiling the other 25. All the coefficients are compatible with zero (i.e. in agreement with the SM) at the 95% confidence level. For many of them, these results are the most competitive to date, even when compared to analyses that fit only one or two coefficients. Figure 1 shows how the 95% confidence intervals (2σ limits) translate into upper limits on the energy scale of the probed BSM interaction.
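
To make the statistical procedure concrete, the following is a minimal single-bin, single-coefficient sketch of such a likelihood scan (all yields and the quadratic dependence on the coefficient are invented for illustration; the real analysis fits 43 categories and profiles the remaining coefficients and nuisance parameters):

```python
import numpy as np
from scipy.stats import poisson

# Toy counting experiment: the expected yield depends quadratically on one
# Wilson coefficient c, mirroring the interference (linear) and pure-BSM
# (quadratic) terms of the EFT expansion. All numbers are illustrative.
def expected_yield(c, b=100.0, s_int=8.0, s_quad=3.0):
    return b + s_int * c + s_quad * c**2

observed = 105  # hypothetical observed event count

def nll(c):
    """Negative log-likelihood of a single Poisson-distributed bin."""
    return -poisson.logpmf(observed, expected_yield(c))

# Scan the test statistic q(c) = 2*(NLL(c) - NLL_min); for one parameter the
# 95% CL interval is where q crosses 3.84 (chi-squared, 1 degree of freedom).
grid = np.linspace(-4.0, 3.0, 701)
nll_values = np.array([nll(c) for c in grid])
q = 2.0 * (nll_values - nll_values.min())
allowed = grid[q < 3.84]
print(f"95% CL interval for c: [{allowed.min():.2f}, {allowed.max():.2f}]")
```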

The CMS collaboration will continue to refine these measurements by expanding upon the final-state observables and leveraging the Run 3 data sample. With the HL-LHC quickly approaching, the future of BSM physics searches is full of potential.

Muon g-2 update sets up showdown with theory https://cerncourier.com/a/new-muon-g-2-result-bolsters-earlier-measurement-2/ Thu, 10 Aug 2023 12:38:03 +0000 https://preview-courier.web.cern.ch/?p=109607 Combining data from Run 1 to Run 3, the Fermilab Muon g-2 collaboration has determined the anomalous magnetic moment of the muon with twice the precision of its initial result.

Muon g-2 measurement

On 10 August, the Muon g-2 collaboration at Fermilab presented its latest measurement of the anomalous magnetic moment of the muon aμ. Combining data from Run 1 to Run 3, the collaboration found aμ = 116 592 055 (24) × 10–11, representing a factor-of-two improvement on the precision of its initial 2021 result. The experimental world average for aμ now stands more than 5σ above the Standard Model (SM) prediction published by the Muon g-2 Theory Initiative in 2020. However, calculations based on a different theoretical approach (lattice QCD) and a recent analysis of e+e– data that feeds into the prediction are in tension with the 2020 calculation, and more work is needed before the discrepancy is understood.

The anomalous magnetic moment of the muon, aμ = (g-2)/2 (where g is the muon’s g-factor), quantifies the deviation of the muon’s magnetic moment from the Dirac prediction (g = 2) due to contributions of virtual particles. This makes aμ, one of the most precisely calculated and measured quantities in physics, an ideal testbed for physics beyond the SM. To measure it, a muon beam is sent into a superconducting storage ring reused from the former g-2 experiment at Brookhaven National Laboratory. Initially aligned, the muon spin axes precess as the muons interact with the magnetic field. Detectors located along the ring’s inner circumference allow the precession rate, and thus aμ, to be determined. Many improvements to the setup have been made since the first run, including better running conditions, more stable beams and improved knowledge of the magnetic field.
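
In slightly more detail, the measurement rests on a textbook relation: at the “magic” muon momentum of about 3.1 GeV/c, where the electric-field term drops out, the anomalous precession frequency, i.e. the difference between the spin-precession and cyclotron frequencies, is directly proportional to aμ (up to sign conventions):

```latex
\omega_a \;=\; \omega_s - \omega_c \;=\; a_\mu\,\frac{eB}{m_\mu}
```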

The new result is based on data taken in 2019 and 2020, and has four times the statistics of the 2021 result. The collaboration also decreased the systematic uncertainty to a level beyond its initial goal. Currently, about 25% of the total data (Run 1–Run 6) has been analysed. The collaboration plans to publish its final results in 2025, targeting a precision of 0.14 ppm compared to the current 0.2 ppm. “We have moved the accuracy bar of this experiment one step further and now we are waiting for the theory to complete the calculations and cross-checks necessary to match the experimental accuracy,” explains collaboration co-spokesperson Graziano Venanzoni of INFN Pisa and the University of Liverpool. “A huge experimental and theoretical effort is going on, which makes us confident that theory prediction will be in time for the final experimental result from FNAL in a few years from now.”

The theoretical picture is foggy. The SM prediction for the anomalous magnetic moment receives contributions from the electromagnetic, electroweak and strong interactions. While the former two can be computed to high precision in perturbation theory, it is only possible to compute the latter analytically in certain kinematic regimes. Contributions from hadronic vacuum polarisation and hadronic light-by-light scattering dominate the overall theoretical uncertainty on aμ at 83% and 17%, respectively.
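
For reference, the data-driven approach discussed below determines the leading-order HVP contribution from the measured ratio R(s) of hadronic to muon-pair production cross sections via a dispersion integral (the standard form; the kernel function K̂ and the threshold convention differ slightly between references):

```latex
a_\mu^{\mathrm{HVP,\,LO}} \;=\; \left(\frac{\alpha\,m_\mu}{3\pi}\right)^{2}
\int_{s_{\rm thr}}^{\infty}\mathrm{d}s\;\frac{\hat K(s)\,R(s)}{s^{2}},
\qquad
R(s)=\frac{\sigma(e^{+}e^{-}\to\mathrm{hadrons})}{\sigma(e^{+}e^{-}\to\mu^{+}\mu^{-})}
```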

To date, the experimental results are confronted with two theory predictions: one by the Muon g-2 Theory Initiative based on the data-driven “R-ratio” method, which relies on hadronic cross-section measurements, and one by the Budapest–Marseille–Wuppertal (BMW) collaboration based on simulations of lattice QCD and QED. The latter significantly reduces the discrepancy between the theoretical and measured values. Adding a further puzzle, a recently published hadronic cross-section measurement by the CMD-3 collaboration, which contrasts with all other experiments, narrows the gap between the Muon g-2 Theory Initiative and BMW predictions (see p19).

“This new result by the Fermilab Muon g-2 experiment is a true milestone in the precision study of the Standard Model,” says lattice gauge theorist Andreas Jüttner of CERN and the University of Southampton. “This is really exciting – we are now faced with getting to the roots of various tensions between experimental and theoretical findings.”

Probing for periodic signals https://cerncourier.com/a/probing-for-periodic-signals/ Wed, 05 Jul 2023 10:03:42 +0000 https://preview-courier.web.cern.ch/?p=108815 ATLAS sees no significant deviation from the background-only hypothesis in its recent search for heavy gravitons predicted by "clockwork gravity".

ATLAS figure 1

New physics may come at us in unexpected ways, completely hidden from conventional search methods. One unique example of this is the narrowly spaced, semi-periodic spectrum of heavy gravitons predicted by the clockwork gravity model. Similar to models with extra dimensions, the clockwork model addresses the hierarchy problem between the weak and Planck scales not by stabilising the weak scale (as in supersymmetry, for example), but by bringing the fundamental higher-dimensional Planck scale down to accessible energies. The mass spectrum of the resulting graviton tower in the clockwork model is described by two parameters: k, a mass parameter that determines the onset of the tower, and M5, the five-dimensional reduced Planck mass that controls the overall cross-section of the tower’s spectrum.
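
In the continuum (“linear dilaton”) realisation of clockwork gravity, the tower masses are often parametrised as below (a commonly quoted form, with R an effective compactification radius; conventions differ between papers), which makes explicit why the tower turns on at m ≈ k and is semi-periodic:

```latex
m_n^{2} \;=\; k^{2} + \frac{n^{2}}{R^{2}}, \qquad n = 1, 2, 3, \ldots
```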

At the LHC, these gravitons would be observed via their decay into two light Standard Model particles. However, conventional bump/tail hunts are largely insensitive to this type of signal, particularly when its cross section is small. A recent ATLAS analysis approaches the problem from a completely new angle by exploiting the underlying approximate periodicity of the two-particle invariant-mass spectrum.

Graviton decays with dielectron or diphoton final states are an ideal testbed for this search due to the excellent energy resolution of the ATLAS detector. After convolution with the ATLAS detector resolution corresponding to these final states, the mass spectrum of the graviton tower resembles a wave packet (like the representation of a free particle propagating in space as a superposition of plane waves with a finite range of momenta). This implies that a transformation exploiting the periodic nature of the signal may be helpful.

ATLAS figure 2

Figure 1 shows how a particularly faint clockwork signal would emerge in ATLAS in the diphoton final state. It is compared with the data and the background-only fit obtained from an earlier (full Run 2) ATLAS search for resonances with the same final state. As an illustration, the signal shape is given without realistic statistical fluctuations. Neither the tiny “bumps” nor their integrated excess over the falling background can be detected with conventional bump/tail-hunting methods. Instead, for the first time, a continuous wavelet transformation is applied to the mass distribution. The problem is thereby transformed into the “scalogram” space, i.e. the mass versus scale (or inverse frequency) space, as shown in figure 2 (left). The large red area at high scales (low frequencies) represents the falling shape of the background, while the signal from figure 1 now appears as a clear, distinct local “blob” above mγγ = k and at low scales (high frequencies).
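
The essence of the scalogram can be reproduced in a few lines of code. The sketch below applies a continuous wavelet transform (here via the PyWavelets package with a Morlet wavelet, both chosen purely for illustration; the toy spectrum and all numbers are invented, not ATLAS inputs) to a falling background plus a faint semi-periodic wave packet:

```python
import numpy as np
import pywt  # PyWavelets

# Toy diphoton invariant-mass spectrum: a smoothly falling background plus a
# faint semi-periodic "wave packet" switching on at m = k (numbers invented).
m = np.linspace(1000.0, 3000.0, 400)          # mass bins in GeV
background = 1.0e4 * (m / 1000.0) ** -5.0     # falling continuum
k = 1500.0                                    # onset of the graviton tower
signal = np.where(
    m > k,
    5.0 * np.exp(-(m - k) / 400.0) * np.cos(2.0 * np.pi * (m - k) / 60.0),
    0.0,
)
counts = background + signal

# Continuous wavelet transform: |coefficients| over the (scale, mass) plane is
# the scalogram. The falling background populates large scales, while the
# semi-periodic signal shows up as a localised blob at small scales above k.
scales = np.arange(1, 65)
coeffs, _ = pywt.cwt(counts, scales, "morl")
scalogram = np.abs(coeffs)                    # shape: (64, 400)
print(scalogram.shape, float(scalogram.max()))
```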

With realistic statistical fluctuations and uncertainties, these distinct “blobs” may partially wash out, as shown in figure 2 (right). To counteract this effect, the analysis uses multiple background-only and background-plus-signal scalograms to train a binary convolutional neural-network classifier. This network is very powerful in distinguishing between scalograms belonging to the two classes, but it is also model-specific. Therefore, another search for possible periodic signals is performed independently of the clockwork-model hypothesis. This is done in an “anomaly detection” mode using an autoencoder neural network. Since the autoencoder is trained on multiple background-only scalograms (unlabelled data) to learn the features of the background (unsupervised learning), it can predict the compatibility of a given scalogram with the background-only hypothesis. A statistical test based on the two networks’ scores is derived to check the data’s compatibility with the background-only or background-plus-signal hypotheses.

Applying these novel procedures to the dielectron and diphoton full Run 2 data, ATLAS sees no significant deviation from the background-only hypothesis in either the clockwork-model search or in the model-independent one. The strongest exclusion contours to date are placed in the clockwork parameter space, pushing the sensitivity to beyond 11 TeV in M5. Despite the large systematic uncertainties in the background model, these do not exhibit any periodic structure in the mass space and their impact is naturally reduced when transforming to the scalogram space. The sensitivity of this analysis is therefore mostly limited by statistics and is expected to improve with the full Run 3 dataset.

Extreme detector design for a future circular collider https://cerncourier.com/a/extreme-detector-design-for-a-future-circular-collider/ Mon, 03 Jul 2023 13:31:46 +0000 https://preview-courier.web.cern.ch/?p=108725 A pileup of 1000 proton–proton collisions per bunch-crossing is just one of the challenges in extracting physics from a next-generation hadron collider to follow the LHC.

FCC-hh reference detector

The Future Circular Collider (FCC) is the most powerful post-LHC experimental infrastructure proposed to address key open questions in particle physics. Under study for almost a decade, it envisions an electron–positron collider phase, FCC-ee, followed by a proton–proton collider in the same 91 km-circumference tunnel at CERN. The hadron collider, FCC-hh, would operate at a centre-of-mass energy of 100 TeV, extending the energy frontier by almost an order of magnitude compared to the LHC, and provide an integrated luminosity a factor of 5–10 larger. The mass reach for direct discovery at FCC-hh extends to several tens of TeV and would allow, for example, the production of new particles whose existence could be indirectly exposed by precision measurements at FCC-ee.

At the time of the kickoff meeting for the FCC study in 2014, the physics potential and the requirements for detectors at a 100 TeV collider were already heavily debated. These discussions were eventually channelled into a working group that provided the input to the 2020 update of the European strategy for particle physics and recently concluded with a detailed writeup in a 300-page CERN Yellow Report. To focus the effort, it was decided to study one reference detector that is capable of fully exploiting the FCC-hh physics potential. At first glance it resembles a super CMS detector with two LHCb detectors attached (see “Grand designs” image). A detailed detector performance study followed, allowing a very efficient study of the key physics capabilities. 

The first detector challenge at FCC-hh is related to the luminosity, which is expected to reach 3 × 1035 cm–2s–1. This is six times larger than the HL-LHC luminosity and 30 times larger than the nominal LHC luminosity. Because the FCC will operate beams with a 25 ns bunch spacing, the so-called pile-up (the number of pp collisions per bunch crossing) scales by approximately the same factor. This results in almost 1000 simultaneous pp collisions, requiring a highly granular detector. Evidently, the assignment of tracks to their respective vertices in this environment is a formidable task. 
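
That headline number follows from a one-line estimate (illustrative inputs: an inelastic cross-section of roughly 100 mb at 100 TeV and the nominal 40 MHz bunch-crossing rate; with realistic filling schemes, fewer crossings collide and the per-crossing pile-up approaches 1000):

```latex
\langle\mu\rangle \;\approx\; \frac{\mathcal{L}\,\sigma_{\rm inel}}{f_{\rm BC}}
\;=\; \frac{(3\times10^{35}\,\mathrm{cm^{-2}\,s^{-1}})\,(1\times10^{-25}\,\mathrm{cm^{2}})}{4\times10^{7}\,\mathrm{s^{-1}}}
\;\approx\; 750
```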

Longitudinal cross-section of the FCC-hh reference detector

The plan to collect an integrated pp luminosity of 30 ab–1 brings the radiation hardness requirements for the first layers of the tracking detector close to 1018 hadrons/cm2, which is around 100 times more than the requirement for the HL-LHC. Still, the tracker volume with such high radiation load is not excessively large. From a radial distance of around 30 cm outwards, radiation levels are already close to those expected for the HL-LHC, thus the silicon technology for these detector regions is already available.

The high radiation levels also need very radiation-hard calorimetry, making a liquid-argon calorimeter the first choice for the electromagnetic calorimeter and forward regions of the hadron calorimeter. The energy deposit in the very forward regions will be 4 kW per unit of rapidity and it will be an interesting task to keep cryogenic liquids cold in such an environment. Thanks to the large shielding effect of the calorimeters, which have to be quite thick to contain the highest energy particles, the radiation levels in the muon system are not too different from those at the HL-LHC. So the technology needed for this system is available. 

Looking forward 

At an energy of 100 TeV, important SM particles such as the Higgs boson are abundantly produced in the very forward region. The forward acceptance of FCC-hh detectors therefore has to be much larger than at the LHC detectors. ATLAS and CMS enable momentum measurements up to pseudorapidities (a measure of the angle between the track and beamline) of around η = 2.5, whereas at FCC-hh this will have to be extended to η = 4 (see “Far reaching” figure). Since this is not achievable with a central solenoid alone, a forward magnet system is assumed on either side of the detector. Whether the optimum forward magnets are solenoids or dipoles still has to be studied and will depend on the requirements for momentum resolution in the very forward region. Forward solenoids have been considered that extend the precision of momentum measurements by one additional unit of rapidity. 

Momentum resolution versus pseudorapidity

A silicon tracking system with a radius of 1.6 m and a total length of 30 m provides a momentum resolution of around 0.6% for low-momentum particles, 2% at 1 TeV and 20% at 10 TeV (see “Forward momentum” figure). To detect at least 90% of the very forward jets that accompany a Higgs boson in vector-boson-fusion production, the tracker acceptance has to be extended up to η = 6. At the LHC such an acceptance is already achieved up to η = 4. The total tracker surface of around 400 m2 at FCC-hh is “just” a factor of two larger than the HL-LHC trackers, and the total number of channels (16.5 billion) is around eight times larger.
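
For orientation, the high-momentum part of such a resolution curve follows the textbook Gluckstern scaling for N equally spaced measurements of single-point resolution σx over a lever arm L in a field B (a rule-of-thumb estimate, not the full FCC-hh simulation; pT in GeV, B in tesla, L and σx in metres):

```latex
\frac{\sigma_{p_T}}{p_T} \;\approx\; \frac{\sigma_x\,p_T}{0.3\,B\,L^{2}}\,\sqrt{\frac{720}{N+4}}
```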

It is evident that the FCC-hh reference detector is more challenging than the LHC detectors, but not at all out of reach. The diameter and length are similar to those of the ATLAS detector. The tracker and calorimeters are housed inside a large superconducting solenoid 10 m in diameter, providing a magnetic field of 4 T. For comparison, CMS uses a solenoid with the same field and an inner diameter of 6 m. This difference does not seem large at first sight, but the stored energy (13 GJ) is about five times larger than that of the CMS coil, which requires very careful design of the quench-protection system.

For the FCC-hh calorimeters, the major challenge, besides the high radiation dose, is the required energy resolution and particle identification in the high pile-up environment. The key to achieve the required performance is therefore a highly segmented calorimeter. The need for longitudinal segmentation calls for a solution different from the “accordion” geometry employed by ATLAS. Flat lead/steel absorbers that are inclined by 50 degrees with respect to the radial direction are interleaved with liquid-argon gaps and straight electrodes with high-voltage and signal pads (see “Liquid argon” figure). The readout of these pads on the back of the calorimeter is then possible thanks to the use of multi-layer electrodes fabricated as straight printed circuit boards. This idea has already been successfully prototyped within the CERN EP detector R&D programme.

The considerations for a muon system for the reference detector are quite different compared to the LHC experiments. When the detectors for the LHC were originally conceived in the late 1980s, it was not clear whether precise tracking in the vicinity of the collision point was possible in this unprecedented radiation environment. Silicon detectors were excessively expensive and gas detectors were at the limit of applicability. For the LHC detectors, a very large emphasis was therefore put on muon systems with good stand-alone performance, specifically for the ATLAS detector, which is able to provide a robust measurement of, for example, the decay of a Higgs particle into four muons, with the muon system alone. 

Liquid argon

Thanks to the formidable advancement of silicon-sensor technology, which has led to full silicon trackers capable of dealing with around 140 simultaneous pp collisions every 25 ns at the HL-LHC, standalone performance is no longer a stringent requirement. The muon systems for FCC-hh can therefore fully rely on the silicon trackers, assuming just two muon stations outside the coil that measure the exit point and the angle of the muons. The muon track provides muon identification, the muon angle provides a coarse momentum measurement for triggering and the track position provides improved muon momentum measurement when combined with the inner tracker. 

The major difference between an FCC-hh detector and CMS is that there is no yoke for the return flux of the solenoid, as the cost would be excessive and its only purpose would be to shield the cavern from the magnetic field. The baseline design assumes the cavern infrastructure can be built to be compatible with this stray field. Infrastructure that is sensitive to the magnetic field will be placed in the service cavern 50 m from the solenoid, where the stray field is sufficiently low.

Higgs self-coupling

The high granularity and acceptance of the FCC-hh reference detector will result in about 250 TB/s of data for calorimetry and the muon system, about 10 times more than the ATLAS and CMS HL-LHC scenarios. There is no doubt that it will be possible to digitise and read this data volume at the full bunch-crossing rate for these detector systems. The question remains whether the data rate of almost 2500 TB/s from the tracker can also be read out at the full bunch-crossing rate or whether calorimeter, muon and possible coarse tracker information need to be used for a first-level trigger decision, reducing the tracker readout rate to the few MHz level, without the loss of important physics. Even if the optical link technology for full tracker readout were available and affordable, sufficient radiation hardness of devices and infrastructure constraints from power and cooling services are prohibitive with current technology, calling for R&D on low-power radiation-hard optical links. 
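
Dividing the quoted throughputs by the 40 MHz bunch-crossing rate gives a feel for the event sizes involved (simple arithmetic on the numbers above):

```latex
\frac{250\ \mathrm{TB/s}}{4\times10^{7}\,\mathrm{s^{-1}}} \approx 6\ \mathrm{MB\ per\ crossing},
\qquad
\frac{2500\ \mathrm{TB/s}}{4\times10^{7}\,\mathrm{s^{-1}}} \approx 60\ \mathrm{MB\ per\ crossing}
```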

Benchmark physics

The potential of FCC-hh in the realms of precision Higgs and electroweak physics, high mass reach and dark-matter searches offers an unprecedented opportunity to address fundamental unknowns about our universe. The performance requirements for the FCC-hh baseline detector have been defined through a set of benchmark physics processes, selected among the key ingredients of the physics programme. The detector’s increased acceptance compared to the LHC detectors, and the higher energy of FCC-hh collisions, will allow physicists to uniquely improve the precision of measurements of Higgs-boson properties for a whole spectrum of production and decay processes complementary to those accessible at the FCC-ee. This includes measurements of rare processes such as Higgs pair-production, which provides a direct measure of the Higgs self-coupling – a crucial parameter for understanding the stability of the vacuum and the nature of the electroweak phase transition in the early universe – with a precision of 3 to 7% (see “Higgs self-coupling” figure).

Dark matters

Moreover, thanks to the extremely large Higgs-production rates, FCC-hh offers the potential to measure rare decay modes in a novel boosted kinematic regime well beyond what is currently studied at the LHC. These include the decay to second-generation fermions (muons), which can be measured to a precision of 1%. The Higgs branching fraction to invisible states can be probed down to a value of 10–4, allowing the parameter space for dark matter to be further constrained. The much higher centre-of-mass energy of FCC-hh, meanwhile, significantly extends the mass reach for discovering new particles: the potential for detecting heavy resonances decaying into di-muons and di-electrons extends to 40 TeV, while for coloured resonances such as excited quarks the reach is 45 TeV, almost an order of magnitude beyond the current limits. In the context of supersymmetry, FCC-hh will be capable of probing stop squarks with masses up to 10 TeV, also well beyond the reach of the LHC.

In terms of dark-matter searches, FCC-hh has immense potential – particularly for probing scenarios of weakly interacting massive particles such as higgsinos and winos (see “Dark matters” figure). Electroweak multiplets are typically elusive, especially in hadron collisions, due to their weak interactions and large masses (needed to explain the relic abundance of dark matter in our universe). Their nearly degenerate mass spectrum produces an elusive final state in the form of so-called “disappearing tracks”. Thanks to the dense coverage of the FCC-hh detector tracking system, a general-purpose FCC-hh experiment could detect these particle decays directly, covering the full mass range expected for this type of dark matter. 

A detector at a 100 TeV hadron collider is clearly a challenging project. But detailed studies have shown that it should be possible to build a detector that can fully exploit the physics potential of such a machine, provided we invest in the necessary detector R&D. Experience with the Phase-II upgrades of the LHC detectors for the HL-LHC, developments for further exploitation of the LHC and detector R&D for future Higgs factories will be important stepping stones in this endeavour.

Cold atoms for new physics https://cerncourier.com/a/cold-atoms-for-new-physics/ Wed, 03 May 2023 09:24:17 +0000 https://preview-courier.web.cern.ch/?p=108496 Atom interferometry holds great promise for making ultra-sensitive measurements in fundamental physics.

On 13 and 14 March CERN hosted an international workshop on atom interferometry and the prospects for future large-scale experiments employing this quantum-sensing technique. The workshop had some 300 registered participants, of whom about half participated in person. As outlined in a keynote introductory colloquium by Mark Kasevich (Stanford), one of the pioneers of the field, this quantum sensing technology holds great promise for making ultra-sensitive measurements in fundamental physics. Like light interferometry, atom interferometry involves measuring interference patterns, but between atomic wave packets rather than light waves. Interactions between coherent waves of ultralight bosonic dark matter and Standard Model particles could induce an observable shift in the interference phase, as could the passage of gravitational waves.

Atom interferometry is a well-established concept that can provide exceptionally high sensitivity, e.g. to inertial/gravitational effects. Experimental designs take advantage of features used by state-of-the-art atomic clocks in combination with established techniques for building inertial sensors. This makes atom interferometry an ideal candidate to hunt for physics beyond the Standard Model, such as waves of ultralight bosonic dark matter, or to measure gravitational waves in a frequency range around 1 Hz that is inaccessible to laser interferometers on Earth, such as LIGO, Virgo and KAGRA, or the upcoming space-borne experiment LISA. As discussed during the workshop, measurements of gravitational waves in this frequency range could reveal mergers of black holes with masses intermediate between those accessible to laser interferometers, casting light on the formation of the supermassive black holes known to inhabit the centres of galaxies. Atom-interferometer experiments can also explore the limits of quantum mechanics and its interface with gravity, for example by measuring a gravitational analogue of the Aharonov–Bohm effect.
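
The workhorse configuration behind these sensitivities is the three-pulse (π/2–π–π/2) Mach–Zehnder sequence, whose leading-order phase shift under a uniform acceleration g is the textbook result below, where keff is the effective two-photon wavevector and T the time between pulses; the T² scaling is what motivates ever longer baselines:

```latex
\Delta\phi \;=\; k_{\rm eff}\,g\,T^{2}
```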

Although the potential of atom interferometers for fundamental scientific measurements was the principal focus of the meeting, it was also emphasised that technologies based on the same principles have wide-ranging practical applications. These include gravimetry, geodesy, navigation, time-keeping and Earth observation from space, providing, for example, a novel and sensitive technique for monitoring the effects of climate change through measurements of the Earth’s gravitational field.

Several large atom interferometers with a length of 10 m already exist, for example at Stanford University, or are planned, for example in Hanover (VLBAI), Wuhan and at Oxford University (AION). However, many of the proposed physics measurements require next-generation setups with a length of 100 m, and such experiments are under construction at Fermilab (MAGIS), in France (MIGA) and in China (ZAIGA). The Atom Interferometer Observatory and Network (AION) collaboration is evaluating possible sites in the UK and at CERN. In this context, a recent conceptual feasibility study supported by the CERN Physics Beyond Colliders study group concluded that a deep shaft at Point 4 of the LHC is a promising location for an atom interferometer with a vertical baseline of over 100 m. The March workshop provided a forum for discussing such projects, their current status, future plans and prospective sensitivities.

Looking further ahead, participants discussed the prospects for one or more km-scale atom interferometers, which would provide the maximal sensitivity possible with a terrestrial experiment to search for ultralight dark matter and gravitational waves. It was agreed that the global community interested in such experiments would work together towards establishing an informal proto-collaboration that could develop the science case for such facilities, provide a forum for exchanging ideas on how to achieve the necessary technological advances, and develop a roadmap for their realisation.

A highlight of the workshop was a poster session that provided an opportunity for 30 early-career researchers to present their ideas and current work on projects exploiting the quantum properties of cold atoms and related topics. The liveliness of this session showed how this interdisciplinary field at the boundaries between atomic physics, particle physics, astrophysics and cosmology is inspiring the next generation of researchers. These researchers may form the core of the team that will lead atom interferometers to their full potential.

Searching for dark photons in beam-dump mode https://cerncourier.com/a/searching-for-dark-photons-in-beam-dump-mode/ Mon, 24 Apr 2023 14:15:32 +0000 https://preview-courier.web.cern.ch/?p=108238 A search for dark photons decaying to a di-muon final state proves the capability of NA62 for studying physics in beam-dump mode.

NA62 detector

Faced with the no-show of phenomena beyond the Standard Model at the high mass and energy scales explored so far by the LHC, it has recently become a much-considered possibility that new physics hides “in plain sight”, namely at mass scales that can be easily accessed but with very small coupling strengths. If this were the case, then high-intensity experiments have an advantage: thanks to the large number of events that can be generated, even the most feeble couplings, corresponding to the rarest processes, can be accessible.

Such a high-intensity experiment is NA62 at CERN’s North Area. Designed to measure the ultra-rare kaon decay K → πνν, it has also released several results probing the existence of weakly coupled processes that could become visible in its apparatus, a prominent example being the decay of a kaon into a pion and an axion. But there is also an unusual way in which NA62 can probe this kind of physics using a configuration that was not foreseen when the experiment was planned, for which the first result was recently reported. 

During normal NA62 operations, bunches of 400 GeV protons from the SPS are fired onto a beryllium target to generate secondary mesons, from which, using an achromat, only particles with a fixed momentum and charge are selected. These particles (among them kaons) are then propagated along a series of magnets and finally arrive at the detector 100 m downstream. In a series of studies starting in 2015, however, NA62 collaborators, with the help of phenomenologists, began to explore physics models that could be tested if the target were removed and protons were fired directly into a “dump”, which can be arranged by moving the achromat collimators. They concluded that various processes exist in which new MeV-scale particles such as dark photons could be produced and detected via their decays into di-lepton final states. The challenge is to keep the muon-induced background under control, as it cannot be easily understood from simulations alone.

A breakthrough came in 2018 when beam physicists in the North Area understood how the beamline magnets could be operated in such a way as to vastly reduce the background of both muons and hadrons. Instead of using the two pairs of dipoles as a beam achromat for momentum selection, the currents in the second pair are set to induce additional muon sweeping. The scheme was verified during a 2021 run lasting 10 days, during which 1.4 × 1017 protons were collected on the beam dump. The first analysis of this rapidly collected dataset – a search for dark photons decaying to a di-muon final state – has now been performed.

Hypothesised to mediate a new gauge force, dark photons, A′, could couple to the Standard Model via mixing with ordinary photons. In the modified NA62 set-up, dark photons could be produced either via bremsstrahlung or decays of secondary mesons, the mechanisms differing in their cross-sections and distributions of the momenta and angles of the A′. No sign of A′ → μ+μ– was found, excluding a region of parameter space for dark-photon masses between 215 and 550 MeV at 90% confidence. A preliminary result for a search for A′ → e+e– was also presented at the Rencontres de Moriond in March.
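
The “mixing with ordinary photons” invoked here is conventionally written as a kinetic-mixing term in the Lagrangian (a generic parametrisation; sign and normalisation conventions differ between papers), with the mixing parameter ε and the dark-photon mass mA′ spanning the two axes of the resulting exclusion plots:

```latex
\mathcal{L} \;\supset\; -\frac{\varepsilon}{2}\,F_{\mu\nu}F'^{\mu\nu}
\;+\; \frac{1}{2}\,m_{A'}^{2}\,A'_{\mu}A'^{\mu}
```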

“This result is a milestone,” explains analysis leader Tommaso Spadaro of LNF Frascati. “It proves the capability of NA62 for studying physics in the beam-dump configuration and paves the way for upcoming analyses checking other final states.” 

Searching for electroweak SUSY: a combined effort https://cerncourier.com/a/searching-for-electroweak-susy-a-combined-effort/ Mon, 24 Apr 2023 13:02:47 +0000 https://preview-courier.web.cern.ch/?p=108203 CMS reports new results from searches for the electroweak production of sleptons, charginos and neutralinos.

CMS figure 1.

The CMS collaboration has been relentlessly searching for physics beyond the Standard Model (SM) since the start of the LHC. One of the most appealing new theories is supersymmetry or SUSY – a novel fermion-boson symmetry that gives rise to new particles, “naturally” leads to a Higgs boson almost as light as the W and Z bosons, and provides candidate particles for dark matter (DM).

By the end of LHC Run 2, in 2018, CMS had accumulated a high-quality data sample of proton–proton (pp) collisions at an energy of 13 TeV, corresponding to an integrated luminosity of 137 fb–1. With such a large data set, it was possible to search for the production of strongly interacting SUSY particles, i.e. the partners of gluons (gluinos) and quarks (squarks), as well as for SUSY partners of the W and Z bosons (electroweakinos: winos and binos), of the Higgs boson (higgsinos), and of the leptons (sleptons). The cross sections for the direct production of SUSY electroweak particles are several orders of magnitude lower than those for gluino and squark pair production. However, if the partners of gluons and quarks are heavier than a few TeV, it could be that the SUSY electro­weak sector is the only one accessible at the LHC. In the minimal SUSY extension of the SM, electroweakinos and higgsinos mix to form six mass eigenstates: two charged (charginos) and four neutral (neutralinos). The lightest neutralino is often considered to be the lightest SUSY particle (LSP) and a DM candidate.
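
For reference, the four neutralinos mentioned here are the eigenstates of the tree-level mass matrix in the (bino, wino, down- and up-type higgsino) basis, in standard minimal-SUSY conventions (sW = sin θW, cβ = cos β, and so on; the lightest eigenstate is the LSP candidate):

```latex
M_{\tilde\chi^{0}} \;=\;
\begin{pmatrix}
M_1 & 0 & -m_Z c_\beta s_W & \phantom{-}m_Z s_\beta s_W \\
0 & M_2 & \phantom{-}m_Z c_\beta c_W & -m_Z s_\beta c_W \\
-m_Z c_\beta s_W & \phantom{-}m_Z c_\beta c_W & 0 & -\mu \\
\phantom{-}m_Z s_\beta s_W & -m_Z s_\beta c_W & -\mu & 0
\end{pmatrix}
```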

CMS has recently reported results, based on the full Run 2 dataset, from searches for the electroweak production of sleptons, charginos and neutralinos. Decays of these particles to the LSP are expected to produce leptons, or Z, W and Higgs bosons. The Z and W bosons subsequently decay to leptons or quarks, while the Higgs boson primarily decays to b quarks. All final states have been explored with complementary channels to enhance the sensitivity to a wide range of electroweak SUSY mass hypotheses. These cover very compressed mass spectra, where the mass difference between the LSP and its parent particles is small (leading to low-momentum particles in the final state), as well as uncompressed scenarios that would instead produce highly boosted Z, W and Higgs bosons. None of the searches showed event counts that significantly deviate from the SM predictions.

The next step was to statistically combine the results of mutually exclusive search channels to set the strongest possible constraints with the Run 2 dataset and interpret the results of searches in different final states under unique SUSY-model hypotheses. For the first time, fully leptonic, semi-leptonic and fully hadronic final states from six different CMS searches were combined to explore models that differ depending on whether the next-to-lightest supersymmetric partner (NLSP) is “wino-like” or “higgsino-like”, as shown in the left and right panels of figure 1, respectively. The former are now excluded up to NLSP masses of 875 GeV, extending the constraints obtained from individual searches by up to 100 GeV, while the latter are excluded up to NLSP masses of 810 GeV.

With this effort, CMS maximised the output of the Run 2 dataset, providing its legacy reference on electroweak SUSY searches. While the same data are still being used to search for new physics in yet uncovered corners of the accessible phase-space, CMS is planning to extend its reach in the upcoming years, profiting from the extension of the data set collected during LHC Run 3 at an unprecedented centre-of-mass energy of 13.6 TeV.

Physics is about principles, not particles https://cerncourier.com/a/physics-is-about-principles-not-particles/ Wed, 01 Mar 2023 13:35:42 +0000 https://preview-courier.web.cern.ch/?p=107923 As long as the aim is to answer nature’s outstanding mysteries, the path is worth following, says Veronica Sanz.

Last year marked the 10th anniversary of the discovery of the Higgs particle. Ten years is a short lapse of time when we consider the profound implications of this discovery. Breakthroughs in science mark a leap in understanding, and their ripples may extend for decades and even centuries. Take Kirchhoff’s blackbody proposal more than 150 years ago: a theoretical construction, an academic exercise that opened the path towards a quantum revolution, the implications of which we are still trying to understand today.

Imagine now the vast network of paths opened by ideas, such as emission theory, that came to no fruition despite their originality. Was pursuing these useful, or a waste of resources? Scientists would answer that the spirit of basic research is precisely to follow those paths with unknown destinations; it’s how humanity reached the level of knowledge that sustains modern life. For particle physicists, as long as the aim is to answer nature’s outstanding mysteries, the path is worth following. The Higgs-boson discovery is the latest triumph of this approach and, as with the quantum revolution, we are still working hard to make sense of it.

Particle discoveries are milestones in the history of our field, but they signify something more profound: the realisation of a new principle in nature. Naively, it may seem that the Higgs discovery marked the end of our quest to understand the TeV scale. The opposite is true. The behaviour of the Higgs boson, in the form it was initially proposed, does not make sense at the quantum level. As a fundamental scalar, it experiences quantum effects that grow with energy, doggedly pushing its mass towards the Planck scale. The Higgs discovery solidified the idea that gauge symmetries could be hidden, spontaneously broken by the vacuum. But it did not provide an explanation of how this mechanism makes sense with a fundamental scalar sensitive to mysterious phenomena such as quantum gravity.

Veronica Sanz

Now comes the hard part. From the plethora of ideas proposed during the past decades to make sense of the Higgs boson – supersymmetry being the most prominent – most physicists predicted that it would have an entourage of companion particles with electroweak or even strong couplings. Arguments of naturalness, namely that these companions should be close by to prevent troublesome fine-tunings of nature, led to the expectation that discoveries would follow or even precede that of the Higgs. Ten years on, this wish has not been fulfilled. Instead, we are faced with a cold reality that can lead us to sway between attitudes of nihilism and hubris, especially when it comes to the question of whether particle physics has a future beyond the Higgs. Although these extremes do not apply to everyone, they are understandable reactions to viewing our field next to those with more immediate applications, or to the personal disappointment of a lifelong career devoted to ideas that were not chosen by nature.

Such despondence is not useful. Remember that the no-lose theorem we enjoyed when planning the LHC, i.e. the certainty that we would find something new, Higgs boson or not, at the TeV scale, was an exception to the rules of basic research. Currently, there is no no-lose theorem for the LHC, or for any future collider. But this is precisely the inherent premise of any exploration worth doing. After the incredible success we have had, we need to refocus and unify our discourse. We face the uncertainty of searching in the dark, with the hope that we will initiate the path to a breakthrough, still aware of the small likelihood that this actually happens. 

Those hopes are shared by wider society, which understands the importance of exploring big questions. From searching for exoplanets that may support life to understanding the human mind, few people assume these paths will lead to immediate results. The challenge for our field is to work out a coherent message that can enthuse people. Without straying far from collider physics, we could notice that there is a different type of conversation going on in the search for dark matter. Here, there is no no-lose theorem either, and despite the current exclusion of most vanilla scenarios, there is excitement and cohesion, which are effectively communicated. As for our critics, they should be openly confronted and viewed as an opportunity to build stronger arguments.

We have powerful arguments to keep delving into the smallest scales, with the unknown nature of dark matter, neutrinos and the matter–antimatter asymmetry the most well-known examples. As a field, we need to renew the excitement that led us where we are, from the shock of watching alpha particles bounce back from a thin gold sheet, to building a colossus like the LHC. We should be outspoken about our ambition to know the true face of nature and the profound ideas we explore, and embrace the new path that the Higgs discovery has opened. 

Fundamental symmetries and interactions at PSI https://cerncourier.com/a/fundamental-symmetries-and-interactions-at-psi/ Thu, 19 Jan 2023 11:07:52 +0000 https://preview-courier.web.cern.ch/?p=107777 The 2022 workshop brought together more than 190 participants to deepen relations between disciplines and scientists.

The triennial workshop “Physics of fundamental Symmetries and Interactions – PSI2022” took place for the sixth time at the Paul Scherrer Institut (PSI) in Switzerland from 17 to 22 October, bringing the worldwide fundamental symmetries community together. More than 190 participants including some 70 young scientists welcomed the close communication of an in-person meeting built around 35 invited and 25 contributed talks.

A central goal of the meeting series is to deepen relations between disciplines and scientists. This year, exceptionally, participants connected with the FIPs workshop at CERN on the second day of the conference, due to the common topics discussed.

With PSI’s leading high-intensity muon and pion beams, many topics in muon physics and lepton-flavour violation were highlighted. These covered rare muon decays (μ → e + γ, μ → 3e) and muon conversion (μ → e), muonic atoms and proton structure, and muon capture. Presentations covered complementary experimental efforts at J-PARC, Fermilab and PSI. The status of the muon g-2 measurement was reviewed from an experimental and theoretical perspective, where lattice-QCD calculations from 2021 and 2022 have intensified discussions around the tension with Standard Model expectations.

Fundamental physics using cold and ultracold neutrons was a second cornerstone of the programme. Searches for a neutron electric dipole moment (EDM) were discussed in contributions by collaborations from TRIUMF, LANL, SNS, ILL and PSI, complemented by presentations on searches for EDMs in atomic and molecular systems. Along with new results from neutron-beta-decay measurements, the puzzle of the neutron lifetime keeps the community busy, with improving “bottle” and “beam” measurements presently differing by more than 5 standard deviations. Several talks highlighted possible explanations via neutron oscillations into sterile or mirror states.

The current status of direct neutrino-mass measurements and future outlook down into the meV range was covered together with updates on searches for neutrinoless double-beta decay. An overview of the hunt for the unknown at the dark-matter frontier was presented together with new limits and plans from various searches. Ultraprecise atomic clocks were discussed allowing checks of general relativity and the Standard Model, and for searches beyond established theories. The final session covered the latest results from antiproton and antihydrogen experiments at CERN, demonstrating the outstanding precision achieved in CPT tests with these probes. The workshop was a great success and participants look forward to reconvening at PSI2025.

Chasing feebly interacting particles at CERN https://cerncourier.com/a/chasing-feebly-interacting-particles-at-cern/ Wed, 11 Jan 2023 16:48:52 +0000 https://preview-courier.web.cern.ch/?p=107710 The FIPs 2022 workshop held at CERN from 17 to 21 October aimed at shaping the FIPs programme in Europe.

What is the origin of neutrino masses and oscillations? What is the nature of dark matter? What mechanism generated the matter–antimatter asymmetry? What drove the inflation of our universe, and what provides an explanation for dark energy? What is the origin of the hierarchy of scales? These are outstanding questions in particle physics that still require an answer.

So far, the experimental effort has been driven by theoretical arguments that favoured the existence of new particles with relatively large couplings to the Standard Model (SM) and masses commensurate with that of the Higgs boson. Searching for these particles has been one of the main goals of the physics programme of the LHC. However, several beyond-the-SM theories predict the existence of light (sub-GeV) particles that interact very weakly with the SM fields. Such feebly interacting particles (FIPs) could provide elegant explanations for several unresolved problems in modern physics. Furthermore, searching for them requires specific and distinct techniques, creating new experimental challenges along with innovative theoretical efforts.

FIPs are currently one of the most debated topics in fundamental physics and were recommended by the 2020 update of the European strategy for particle physics as a compelling field to explore in the next decade. The FIPs 2022 workshop, held at CERN from 17 to 21 October, was the second in a series dedicated to the physics of FIPs. It attracted 320 experts from collider, beam-dump and fixed-target experiments, as well as from the astroparticle, cosmology, axion and dark-matter communities, who gathered to discuss progress in experimental searches and new developments in the underlying theoretical models.

The main goal of the workshop was to create a basis for a multi-disciplinary and interconnected approach. The breadth of open questions in particle physics and their deep interconnection requires a diversified research programme with different experimental approaches and techniques, together with a strong and focused theoretical involvement. In particular, FIPs 2022, which is strongly linked with the Physics Beyond Colliders initiative at CERN, aimed at shaping the FIPs programme in Europe. Topics under discussion included the impact that FIPs might have on stellar evolution, ΛCDM cosmological-model parameters, indirect dark-matter detection, neutrino physics, gravitational-wave physics and AMO (atomic, molecular and optical) physics, in addition to searches currently performed at colliders and extracted beam lines worldwide.

The main sessions were organised around three main themes: light dark matter in particle and astroparticle physics and cosmology; ultra-light FIPs and their connection with cosmology and astrophysics; and heavy neutral leptons and their connection with neutrino physics. In addition, young researchers in the field presented and discussed their work in the “new ideas” sessions.

FIPs 2022 aimed not only to explore new answers to the unresolved questions in fundamental physics, but also to analyse the technical challenges, infrastructure and collaborative networks required to answer them. Indeed, no single experiment or laboratory would be able by itself to cover the large parameter space, in terms of masses and couplings, that FIPs models suggest. Synergy and complementarity among a great variety of experimental facilities are therefore paramount, calling for deep collaboration across many laboratories and cross-fertilisation among different communities and experimental techniques. We believe that a network of interconnected laboratories can become a sustainable, flexible and efficient way of addressing the particle-physics questions of the next millennium.

The next appointment for the community is the retreat/school “FIPs in the ALPs” to be held in Les Houches from 15 to 19 May 2023, to be followed by the next edition of the FIPs workshop at CERN in autumn 2024.

The post Chasing feebly interacting particles at CERN appeared first on CERN Courier.

]]>
Meeting report The FIPs 2022 workshop held at CERN from 17 to 21 October aimed at shaping the FIPs programme in Europe. https://cerncourier.com/wp-content/uploads/2023/02/CCMarApr23_FN_FIPS.jpg
Preparing for post-LS3 scenarios https://cerncourier.com/a/preparing-for-post-ls3-scenarios/ Tue, 10 Jan 2023 12:06:22 +0000 https://preview-courier.web.cern.ch/?p=107534 The potential physics programme in CERN's ECN3 hall, which can host unique high-energy/high-intensity proton beams, was a focus of the fourth Physics Beyond Colliders annual workshop.

The post Preparing for post-LS3 scenarios appeared first on CERN Courier.

]]>
Proposed experimental programmes

The Physics Beyond Colliders (PBC) study was launched in 2016 to explore the opportunities offered by CERN’s unique accelerator and experimental-area complex to address some of the outstanding questions in particle physics through experiments that are complementary to the high-energy frontier. Following the recommendations of the 2020 update of the European strategy for particle physics, the CERN directorate renewed the mandate of the PBC study, continuing it as a long-term activity.

The fourth PBC annual workshop took place at CERN from 7 to 9 November 2022. The aim was to review the status of the studies, with a focus on the programmes under consideration for the start of operations after Long Shutdown 3 (LS3), scheduled for 2026–2029.

The North Area (NA) at CERN, where experiments are driven by beams from the Super Proton Synchrotron (SPS), is at the heart of many present and proposed explorations for physics beyond the Standard Model. The NA includes an underground cavern (ECN3), which can host unique high-energy/high-intensity proton beams. Several proposals for experiments have been made, all of which require higher-intensity proton beams than are currently available. It is therefore timely to identify the synergies and implications of a future ECN3 high-intensity programme for the ongoing NA technical consolidation programme.

The following proposals are being considered within the PBC study group:

• HIKE (High Intensity Kaon Experiment) is a proposed expansion of the current NA62 programme to study extremely rare decays of charged kaons and, in a second phase, those of neutral kaons. This would be complemented by searches for visible decays of feebly interacting particles (FIPs) that could emerge on-axis from the dump of an intense proton beam within a thick absorber that would contain all other known particles, except muons and neutrinos;

• SHADOWS (Search for Hidden And Dark Objects With the SPS) would search for visible FIP decays off-axis and could run in parallel to HIKE when operated in beam-dump mode. The proposed detector is compact and employs existing technologies to meet the challenges of reducing the muon background;

• SHiP (Search for Hidden Particles) would allow a full investigation of hidden sectors in the GeV mass range. Comprehensive design studies for SHiP and the Beam Dump Facility (BDF) in a dedicated experimental area were published in preparation for the European strategy update. During 2021, an analysis of alternative locations using existing infrastructure at CERN revealed ECN3 to be the most promising option;

• Finally, TauFV (Tau Flavour Violation) would conduct searches for lepton-flavour violating tau-lepton decays.

The HIKE, SHADOWS and BDF/SHiP collaborations have recently submitted letters of intent describing their proposals for experiments in ECN3. The technical feasibility of the experiments, their physics potential and implications for the NA consolidation are being evaluated in view of a possible decision by the beginning of 2023. A review of the experimental programme in the proposed high-intensity facility will take place during 2023, in parallel with a detailed comparison of the sensitivity to FIPs in a worldwide context.

A vibrant programme

The NA could also host a vibrant ion-physics programme after LS3, with NA60++ aiming to measure the caloric curve of the strong-force phase transition with lead–ion beams, and NA61++ proposing to explore the onset of the deconfined nuclear medium, extending the scan in the momentum/ion space with collisions of lighter ion beams. The conceptual implementation of such schemes in the accelerators and experimental area is being studied and the results, together with the analysis of the physics potential, are expected during 2023.

The search for long-lived particles with dedicated experiments and the exploration of fixed-target physics is also open at the LHC. The proposed forward-physics facility, located in a cavern that could be built at a distance of 600 m along the beam direction from LHC Interaction Point 1, would take advantage of the large flux of high-energy particles produced in the very forward direction in LHC collisions. It is proposed to host a comprehensive set of detectors (FASER2, FASERν2, AdvSND, FORMOSA, FLArE) to explore a broad range of new physics and to study the highest energy neutrinos produced by accelerators. A conceptual design report of the facility, including detector design, background analysis and mitigation measures, civil engineering and integration studies is in preparation. Small prototypes of the MATHUSLA, ANUBIS and CODEX-b detectors aiming at the search for long-lived particles at large angles from LHC collisions are also being built for installation during the current LHC run.

The North Area at CERN is at the heart of many present and proposed explorations for physics beyond the Standard Model

A gas-storage cell (SMOG2) was installed in front of the LHCb experiment during the last LHC long shutdown, opening the way to high-precision fixed-target measurements at the LHC. The storage cell enhances the density of the gas and therefore the rate of the collisions by up to two orders of magnitude as compared to the previous internal gas target. SMOG2 has been successfully commissioned with neon gas, demonstrating that it can be operated in parallel to LHCb. Future developments include the injection of different types of gases and a polarised gas target to explore nucleon spin-physics at the LHC.

Crystal clear

Fixed-target experiments are also being developed that would extract protons from LHC beams by channelling the beam halo with a bent crystal. The extracted protons would impinge on a target and be used for measurements of proton structure functions (“single-crystal setup”) or estimation of the magnetic and electric dipole moments of short-lived heavy baryons (“double-crystal setup”). In the latter case, the measurement would be based on the baryon spin precession in the strong electric field of a second bent crystal installed immediately downstream of the baryon-production target. A proof-of-principle experiment of the double-crystal setup is being designed for installation in the LHC to determine the channelling efficiency for long crystals at TeV energies, as well as to demonstrate the control and management of the secondary halo and validate the estimate of the achievable luminosity.

The technology know-how at CERN can also benefit non-accelerator experiments

The technology know-how and experience available at CERN can also benefit non-accelerator experiments such as the Atom Interferometer Observatory and Network (AION), proposed to be installed in one of the shafts at Point 4 of the LHC for mid-frequency gravitational-wave detection and ultra-light dark-matter searches, as well as the development of superconducting cavities for the Relic Axion Detector Experimental Setup (RADES) and for the heterodyne detection of axion-like particles.

During the workshop, progress on the possible applications of a gamma factory at CERN, as well as the status of the design of a Charged-Particle EDM Prototype Ring and of the R&D for novel monitored or tagged neutrino beamlines, were also presented.

The post Preparing for post-LS3 scenarios appeared first on CERN Courier.

]]>
Meeting report The potential physics programme in CERN's ECN3 hall, which can host unique high-energy/high-intensity proton beams, was a focus of the fourth Physics Beyond Colliders annual workshop. https://cerncourier.com/wp-content/uploads/2023/01/CCJanFeb23_FN_future_feature.jpg
Hunting dark matter with invisible Higgs decays https://cerncourier.com/a/hunting-dark-matter-with-invisible-higgs-decays/ Tue, 10 Jan 2023 11:57:18 +0000 https://preview-courier.web.cern.ch/?p=107616 Using data collected at 7, 8 and 13 TeV, the CMS collaboration has set a new upper limit on the probability that the Higgs boson decays to invisible particles.

The post Hunting dark matter with invisible Higgs decays appeared first on CERN Courier.

]]>
CMS figure 1

In the Standard Model (SM) of particle physics, the only way the Higgs boson can decay without leaving any traces in the LHC detectors is through the four-neutrino decay, H → ZZ → 4ν, which has an expected branching fraction of only 0.1%. This very small value can be seen as a difficulty but is also an exciting opportunity. Indeed, several theories of physics beyond the SM predict considerably enhanced values for the branching fraction of invisible Higgs-boson decays. In one of the most interesting scenarios, the Higgs boson acts as a portal to the dark sector by decaying to a pair of dark matter (DM) particles. Measurements of the “Higgs to invisible” branching fraction are clearly among the most important tools available to the LHC experiments in their searches for direct evidence of DM particles.
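As a quick back-of-the-envelope check of that 0.1% figure, the rate follows from the branching fractions BR(H → ZZ*) ≈ 2.6% and BR(Z → invisible) ≈ 20%, since both Z bosons must decay to neutrinos. A minimal sketch, assuming these approximate PDG values:

```python
# Back-of-the-envelope check of the ~0.1% SM invisible branching fraction.
# Inputs are approximate PDG values, assumed here for illustration.
br_h_zz = 0.026     # BR(H -> ZZ*) for m_H = 125 GeV (approximate)
br_z_inv = 0.20     # BR(Z -> invisible), i.e. Z -> nu nubar (approximate)

br_h_4nu = br_h_zz * br_z_inv**2   # both Z bosons must decay to neutrinos
print(f"BR(H -> ZZ -> 4nu) ~ {br_h_4nu:.2%}")   # ~0.10%
```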

The CMS collaboration recently reported the combined results of different searches for invisible Higgs-boson decays, using data collected at 7, 8 and 13 TeV centre-of-mass energies. To find such a rare signal among the overwhelming background produced by SM processes, the study considers events in most Higgs-boson production modes: via vector boson (W or Z) fusion, via gluon fusion and in association with a top quark–antiquark pair or a vector boson. In particular, the analysis looked at hadronically decaying vector bosons or top quark–antiquark pairs. A typical signature of invisible Higgs-boson decays is large missing energy in the detector, so the missing transverse energy plays a crucial role in the analysis. No significant signal has been seen, so a new and stricter upper limit is set on the probability that the Higgs boson decays to invisible particles: 15% at 95% confidence level.

This result has been interpreted in the context of Higgs-portal models, which introduce a dark Higgs sector and consider several dark Higgs-boson masses. The extracted upper limits on the spin-independent DM–nucleon scattering cross section, shown in figure 1 for a range of DM mass points, have better sensitivities than those of direct searches over the 1–100 GeV range of DM masses. Once the Run 3 data are added to the analysis, much stricter limits will be reached or, if we are lucky, evidence for DM production at the LHC will be seen.
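For readers curious how such an upper limit arises mechanically, the toy below extracts a 95% confidence-level limit on a signal yield from a single counting experiment. It is only an illustrative sketch: the event counts and the sensitivity factor are invented, and the actual CMS combination relies on a full profile-likelihood fit across many channels and data-taking periods.

```python
# Toy 95% CL upper limit from a single counting experiment (CLs+b style).
# All numbers are invented for illustration; this is not the CMS procedure.
from scipy.optimize import brentq
from scipy.stats import poisson

def upper_limit(n_obs, bkg, cl=0.95):
    """Smallest signal yield s with P(N <= n_obs | bkg + s) = 1 - cl."""
    f = lambda s: poisson.cdf(n_obs, bkg + s) - (1.0 - cl)
    return brentq(f, 0.0, 10.0 * (n_obs + bkg) + 25.0)

n_obs, bkg = 100, 95.0   # hypothetical observed events and expected background
s_up = upper_limit(n_obs, bkg)
sens = 200.0             # hypothetical (efficiency x cross-section x lumi) per unit BR
print(f"s_up ~ {s_up:.1f} events  ->  BR(H -> inv) < {s_up / sens:.2f}")
```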

The post Hunting dark matter with invisible Higgs decays appeared first on CERN Courier.

]]>
News Using data collected at 7, 8 and 13 TeV, the CMS collaboration has set a new upper limit on the probability that the Higgs boson decays to invisible particles. https://cerncourier.com/wp-content/uploads/2023/01/CCJanFeb23_EF_CMS_feature.jpg
Discussing all things symmetry https://cerncourier.com/a/discussing-all-things-symmetry/ Fri, 16 Dec 2022 15:27:57 +0000 https://preview-courier.web.cern.ch/?p=107419 Muon-decay experiments at PSI were a highlight of the SSP2022 conference.

The post Discussing all things symmetry appeared first on CERN Courier.

]]>
After a one-year delay due to the COVID-19 pandemic, the 8th edition of the International Symposium on Subatomic Physics (SSP2022) took place in Vienna from 29 August to 2 September. Organised by the Stefan Meyer Institute for Subatomic Physics (SMI) of the Austrian Academy of Sciences and hosted at the University of Applied Arts, the in-person conference attracted 74 participants.

The conference programme began with a warm welcome from Eberhard Widmann (Austrian Academy of Sciences), who was delighted to resume the SSP series, the last edition having been held in Aachen in spring 2018. As proposed by the International Advisory Committee, the scientific programme this year focused more on fundamental symmetries and interactions in theory and laboratory experiments than previous editions, and included topics such as dark matter and cosmology. In total, 51 invited and contributed talks and 17 posters were presented, highlighting scientific achievements worldwide.

These included searches for lepton-flavour violation and symmetry tests in heavy-quark decays at Belle in Japan and BESIII in Beijing, muon-decay experiments at the Paul Scherrer Institute (PSI), and the first direct test of T and CPT symmetries in Φ decays at DAΦNE in Frascati. Prospects for discovering physics beyond the Standard Model, for example via the g-2 measurement at Fermilab or at high-energy colliders, were also presented, as were searches for the electric dipole moments (EDMs) of the neutron, deuteron and muon, and in atoms and molecules. Double β-decay experiments, sterile-neutrino searches and flavour oscillations were also discussed. Results and upper limits on CPT tests with antihydrogen, muonium and positronium were reported.

The meeting ended with presentations on advanced instrumentation and on upcoming future facilities at PSI, DESY, Mainz university and J-PARC. Many participants from regions such as China attended the conference online. Discussions on various subjects followed during the poster session, where master and PhD students presented their work and results. Stefan Paul (TU Munich) gave a public lecture in the picturesque Festsaal of the Austrian Academy of Sciences about the shortest length scales that humankind has explored so far and how laboratory experiments test theoretical models describing the beginning of the universe.

SSP2022 was a successful and enjoyable conference, which created many fruitful and at times lively discussions in the field of symmetries in subatomic physics. The many contributions, together with the social events around the conference programme, provided an inspiring environment for animated discussions. SSP2022 benefited from being a relatively small-scale conference, and from the natural ease this brings to meeting new colleagues and holding in-depth conversations on the physics topics we are passionate about.

The post Discussing all things symmetry appeared first on CERN Courier.

]]>
Meeting report Muon-decay experiments at PSI were a highlight of the SSP2022 conference. https://cerncourier.com/wp-content/uploads/2022/12/discussing_all_things_symmetry.jpg
High-energy interactions in Bologna https://cerncourier.com/a/high-energy-interactions-in-bologna/ Mon, 05 Sep 2022 12:08:01 +0000 https://preview-courier.web.cern.ch/?p=106165 ICHEP2022 involved 1500 participants, 17 parallel sessions, 900 talks and 250 posters.

The post High-energy interactions in Bologna appeared first on CERN Courier.

]]>
Discussions at ICHEP

Involving around 1500 participants, 17 parallel sessions, 900 talks and 250 posters, ICHEP2022 (which took place in Bologna from 6 to 13 July) was a remarkable week of physics, technology and praxis. The energy and enthusiasm among the more than 1200 delegates who were able to attend in person was palpable. As the largest gathering of the community since the beginning of the pandemic – buoyed by the start of LHC Run 3 and the 10th anniversary of the Higgs-boson discovery – ICHEP2022 served as a powerful reminder of the importance of non-digital interactions.

Roberto Tenchini’s (INFN Pisa) heroic conference summary began with a reminder: it is 10 years since ICHEP included a session titled “Standard Model”, the theory being so successful that it now permeates most sessions. As an example, he highlighted cross-section predictions tested over 14 orders of magnitude at the LHC. Building on the Higgs@10 symposium at CERN on 4 July, the immense progress in understanding the properties and interactions of the Higgs boson (including legacy results with full Run 2 statistics in two papers by ATLAS and CMS published in Nature on 4 July) was centre stage. CERN Director-General Fabiola Gianotti gave a sweeping tour of the path to discovery and emphasised the connections between the Higgs boson and profound structural problems in the SM. Many speakers highlighted the concomitant role of the Higgs boson in exploring new physics, dashing notions that future precision measurements are “business as usual”. Chiara Mariotti (INFN Torino) pointed out that only 3% of the total Higgs data expected at the LHC has been analysed so far.

Hot topics

Another hot electroweak topic was CDF’s recent measurement of the mass of the W boson, as physicists try to understand what could cause it to lie so far from its prediction and from previous measurements. Andrea Rizzi (Pisa) confirmed that CMS is working hard on a W-mass analysis that will bring crucial information, on a time-scale to be decided. Patience is king with such a complex analysis, he said: “we are really trying to do the measurement the way we want to do it.”

CMS presented a total of 85 parallel talks and 28 posters, including new searches related to b-anomalies with taus, and the most precise measurement of Bs → μ+μ–. Among new results presented by ATLAS in 71 parallel talks and 59 posters were the observation of a four-charm-quark state consistent with one seen by LHCb, joint-polarisation measurements of the W and Z bosons, and measurements of the total proton–proton cross section and the ratio of the real to imaginary parts of the elastic-scattering amplitude. ATLAS and CMS also updated participants on many searches for new particles, in particular leptoquarks. Among the highlights were searches by ATLAS for events with displaced vertices, which could be caused by long-lived particles, and by CMS for resonances decaying to Higgs bosons and pairs of either photons or b quarks, which show interesting excesses. “Se son rose fioriranno!” (“if they are roses, they will bloom”) said Tenchini.

The sigmas are rather higher for exotic hadrons. LHCb presented the discovery of a new strange pentaquark (with a minimum quark content of cc̄uds) and two tetraquarks (one corresponding to the first doubly charged open-charm tetraquark, with content cs̄ud̄), taking the number of hadrons discovered at the LHC so far to well over 60, and introducing a new exotic-hadron naming scheme for “particle zoo 2.0” (Exotic hadrons brought into order by LHCb). LHCb also reported the first evidence for direct CP violation in the charm system (LHCb digs deeper in CP-violating charm decays) and a new precise measurement of the CKM angle γ. Vladimir Gligorov (LPNHE) described how, in addition to the flavour factories LHCb and Belle II, experiments including ATLAS, CMS, BESIII, NA62 and KOTO will be crucial to enable the next level of understanding in quark mixing. Despite no significant new results having been presented, the status of tests of lepton flavour universality (LFU) in B decays by LHCb generated lively discussions, while Toshinori Mori (Tokyo) described exciting prospects for LFU tests in charged-lepton flavour experiments, in particular MEG-II, which has just started operations at PSI, and the upcoming Mu2e and MUonE experiments.

ICHEP2022 served as a powerful reminder of the importance of non-digital interactions

Moving to leptons that are known to mix, neutrinos continue to play very important roles in understanding the smallest and largest scales, said Takaaki Kajita (Tokyo) via a link from the IUPAP Centennial Symposium taking place in parallel at ICTP Trieste. Status reports on DUNE, Hyper-K, JUNO, KM3NeT and SBN showed how these detectors will help constrain the still poorly known PMNS matrix that describes leptonic mixing, while new results from NOvA and STEREO further reveal anomalous behaviour. Among the major open questions in neutrino physics summed up by theorist Joachim Kopp (Mainz and CERN) were: how do neutrinos interact? What explains the oscillation anomalies? And how do supernova neutrinos oscillate?

Several plenary presentations showcased the increasing complementarity with astroparticle physics and cosmology, with the release of the first science images from the James Webb Space Telescope on 12 July adding spice (Webb opens new era in observational astrophysics). Multiband gravitational-wave astronomy across 12 or more orders of magnitude in frequency will bloom in the next decade, predicted Giovanni Andrea Prodi (Trento), while larger datasets and synchronisation of experiments offer a bright future in all messengers, said Gwenhael De Wasseige (Louvain): “We are just at the beginning of the story.” The first results from the LUX-ZEPLIN experiment were presented, setting the tightest limits on spin-independent WIMP–nucleon cross-sections for WIMP masses above 9 GeV (CERN Courier September/October 2022 p13), while the increasingly crowded plot showing limits from direct searches for axions illustrates the vibrancy and shifting focus of dark-matter research. Indeed, among several sessions devoted to the exploration of high-energy QCD in heavy-ion, proton–lead and proton–proton collisions, Andrea Dainese (INFN Padova) described how the LHC is not only a collider of nuclei but an (anti-)nuclei factory relevant for dark-matter searches.

The unique ability of theorists to put numerous results and experiments in perspective was on full display. We should all renew the enthusiasm that built the LHC, and be a lot more outspoken about the profound ideas we explore, urged Veronica Sanz (Sussex); after all, she said, “we are searching for something that we know should be somewhere.” A timely talk by Gavin Salam (Oxford) summarised the latest understanding of QCD effects relevant to the muon g-2 and W-mass anomalies and also to future Higgs-boson measurements, concluding that, as we approach high precision, we should expect to be confronted by conceptual problems that we could, so far, ignore.

The unique ability of theorists to put numerous results and experiments in perspective was on full display

Accelerators (including a fast-paced summary of the HL-LHC niobium-tin magnet programme from Lucio Rossi), detectors (68 talks and posters revealing an increasingly holistic approach to detector design), computing (highlighting a period of rapid evolution thanks to optimisation, modernisation, machine-learning algorithms and increasing hardware diversity), industry, diversity and outreach were addressed in detail. A highly acclaimed outreach event in Bologna’s Piazza Maggiore on the evening of 12 July saw thousands of people listen to Fabiola Gianotti, Guido Tonelli, Gian Giudice and Antonio Zoccoli discuss the implications of the Higgs-boson discovery.

Only the narrowest snapshot of proceedings is possible in such a short report. What was abundantly clear from ICHEP2022 is that, following the discovery of the Higgs boson and with no new particles beyond the SM found as yet, the field is in a fascinating and challenging period where confusion is more than matched by confidence that new physics must exist. The strategic direction of the field was addressed in two wide-ranging round-table discussions in which laboratory directors and senior physicists answered questions submitted by participants. Much discussion concerned future colliders, and addressed a perceived worry in some quarters that the field is entering a period of decline. For anyone following the presentations at ICHEP2022, nothing could be further from the truth.

The post High-energy interactions in Bologna appeared first on CERN Courier.

]]>
Meeting report ICHEP2022 involved 1500 participants, 17 parallel sessions, 900 talks and 250 posters. https://cerncourier.com/wp-content/uploads/2022/09/CCSepOct22_FN_ICHEP.jpg
LHCb digs deeper in CP-violating charm decays https://cerncourier.com/a/lhcb-digs-deeper-in-cp-violating-charm-decays/ Thu, 07 Jul 2022 14:23:12 +0000 https://preview-courier.web.cern.ch/?p=102384 The LHCb collaboration announced a new measurement of the individual time-integrated CP asymmetry in the D0 → KK+ decay.

The post LHCb digs deeper in CP-violating charm decays appeared first on CERN Courier.

]]>
LHCb figure 1

To explain the large matter–antimatter asymmetry in the universe, the laws of nature need to be asymmetric under a combination of charge-conjugation (C) and parity (P) transformations. The Standard Model (SM) provides a mechanism for CP violation, but it is insufficient to explain the observed baryon asymmetry in the universe. Thus, searching for new sources of CP violation is important.

The non-invariance of the fundamental forces under CP can lead to different rates for a particle decay and its antiparticle decay. CP violation in the decay of a particle is quantified through the parameter ACP, equal to the relative difference between the decay rate of the process and the decay rate of the CP-conjugated process. Three years ago, the LHCb collaboration reported the first observation of CP violation in the decay of charmed hadrons by measuring the difference between the time-integrated ACP in D0 → K–K+ and D0 → π–π+ decays, ΔACP. This difference was found to lie at the upper end of the SM expectation, prompting renewed interest in the charm-physics community. There is now an ongoing effort to understand whether this signal is consistent with the SM or a sign of new physics.

At the 41st ICHEP conference in Bologna on 7 July, the LHCb collaboration announced a new measurement of the individual time-integrated CP asymmetry in the D0 → K–K+ decay using the data sample collected during LHC Run 2. The measured value, ACP(K–K+) = [6.8 ± 5.4 (stat) ± 1.6 (syst)] × 10–4, is almost three times more precise than the previous LHCb determination obtained with Run 1 data. This was thanks not only to a larger data sample but also to the inclusion of the additional control channels Ds+ → K–K+π+ and Ds+ → KsK+. Together with the previous control channels, D+ → K–π+π+ and D+ → Ksπ+, these decays allow the tiny CP-asymmetry signals to be separated from the much larger biases due to asymmetric meson production and instrumental effects.

The combination of the measured values with those previously obtained for ACP(K–K+) and ΔACP by LHCb allowed the determination of the direct CP asymmetries in the D0 → π–π+ and D0 → K–K+ decays: [23.2 ± 6.1] × 10–4 and [7.7 ± 5.7] × 10–4, respectively, with correlated uncertainties (ρ = 0.88). This is the first evidence of direct CP violation in an individual charm–hadron decay (D0 → π–π+), with a significance of 3.8σ.

The sum of the two direct asymmetries, which is expected to be equal to zero in the limit of s–d quark symmetry (called U-spin symmetry), is equal to [30.8 ± 11.4] × 10–4. This corresponds to a departure from U-spin symmetry of 2.7σ. In addition, this result is essential to the theory community in the quest to clarify the theoretical picture of CP violation in the charm system. Since the measurement is statistically limited, its precision will improve with the larger dataset collected during Run 3.
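The quoted sum and its significance follow from the numbers above by standard propagation of correlated uncertainties; a minimal sketch (differences at the last digit are rounding):

```python
# Reproducing the U-spin test from the quoted LHCb numbers:
# ACP(pipi) = (23.2 +/- 6.1) x 10^-4, ACP(KK) = (7.7 +/- 5.7) x 10^-4, rho = 0.88.
import math

a_pipi, s_pipi = 23.2e-4, 6.1e-4
a_kk, s_kk = 7.7e-4, 5.7e-4
rho = 0.88                        # correlation between the two uncertainties

total = a_pipi + a_kk             # expected ~0 in the U-spin symmetry limit
sigma = math.sqrt(s_pipi**2 + s_kk**2 + 2.0 * rho * s_pipi * s_kk)
print(f"sum = ({1e4*total:.1f} +/- {1e4*sigma:.1f}) x 10^-4")      # ~(30.9 +/- 11.4)
print(f"departure from U-spin symmetry: {total/sigma:.1f} sigma")  # ~2.7
```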

The post LHCb digs deeper in CP-violating charm decays appeared first on CERN Courier.

]]>
News The LHCb collaboration announced a new measurement of the individual time-integrated CP asymmetry in the D0 → KK+ decay. https://cerncourier.com/wp-content/uploads/2022/07/CCSepOct22_EF_LHCb_feature.jpg
The search for new physics: take three https://cerncourier.com/a/the-search-for-new-physics-take-three/ Mon, 02 May 2022 09:02:13 +0000 https://preview-courier.web.cern.ch/?p=99127 Searches for new physics at Run 3 will bring significant gains in sensitivity beyond the benefit provided by the increased amount of data.

The post The search for new physics: take three appeared first on CERN Courier.

]]>
An ATLAS mono-jet event

Aside from the discovery of the Higgs boson, the absence of additional elementary-particle discoveries is the LHC’s main result so far. For many physicists, it is also the more surprising one. Such further discoveries are suggested by the properties of the Higgs boson, which are now established experimentally to a large extent. The Higgs boson’s low mass, despite its susceptibility to quantum corrections from heavy particles that should push it orders-of-magnitude higher, and its hierarchy of coupling strengths to fermions present extreme, “unnatural” values that so far lack an explanation. Therefore, searches for new physics at the TeV energy scale remain strongly motivated, irrespective of the no-show so far. 

Naturalness has triggered the development of many new-physics models, but the large extent of their parameter space allows them to evade exclusion again and again. Whereas the discoveries of the past decades, including that of the Higgs boson, were driven by precise quantitative predictions, the search for physics beyond the Standard Model (SM) simply requires more perseverance.

LHC Run 3 will bring long-awaited new insights to the question of naturalness with respect to Higgs physics, as well as to many other SM puzzles such as the nature of dark matter or the cosmological matter–antimatter asymmetry. With considerably more data and a slightly higher centre-of-mass energy than at Run 2, in addition to new triggers and improved event reconstruction and physics-analysis techniques, a significant increase in sensitivity compared to the current results will be achieved. Searches for new phenomena with Run 3 data will also benefit from a much improved definition of the physics targets, thanks to information gathered during Run 2 and the various anomalies observed at lower energies.

The story so far

During the past 12 years, a broad search programme has emerged at the LHC in parallel with precision measurements (see “Pushing the precision frontier”). Initially, the most favoured new-physics scenario was supersymmetry (SUSY), a new fermion–boson symmetry that gives rise to supersymmetric partners of SM particles and naturally leads to a light Higgs boson close to the masses of the W and Z bosons. SUSY is expected to produce events containing jets and missing transverse energy (MET), the study of which at Run 2 placed exclusion limits on gluino masses as high as 2.3 TeV. More challenging searches for stop squarks, the superpartners of top quarks, with background processes up to a million times more frequent than the predicted signal, were also performed thanks to the excellent performance of the ATLAS and CMS detectors. Yet no signs of stops have been found up to a mass of 1.3 TeV, excluding a sizeable fraction of the SUSY parameter space suggested by naturalness arguments. Further SUSY searches were performed, including those for only weakly interacting SUSY particles (“electroweakinos”), where the Run 2 data allowed the experiments to surpass the sensitivity achieved by LEP in some scenarios. Half a century since SUSY was first proposed, ATLAS and CMS have demonstrated that the simplest models containing TeV-scale sparticle masses are not realised in nature (see “Stop quarks and electroweakinos” figure).

Stop quarks and electroweakinos

In fact, a large number of new-physics searches during LHC Run 1 and Run 2 targeted models other than SUSY, many of which also address the question of naturalness. Signs of extra spatial dimensions have been searched for in “mono-jet” events containing a single energetic jet and large MET, which could be caused by excited gravitons propagating in a higher dimensional space. Searches for vector-like quarks, as suggested by models with a composite Higgs boson, covered numerous complex final states with decays into all of the heavier known elementary particles. In these and other searches, the Higgs boson has entered the experimental toolkit, for example via the identification of high-momentum Higgs-boson decays reconstructed as large-radius jets.

The Higgs sector itself has been the subject of new-physics searches. These target additional Higgs bosons that would arise from an extended Higgs sector and exotic decays of the known Higgs boson, for instance into weakly interacting massive particles (WIMPs), which are candidates for dark matter. Improvements in both theoretical and data-driven background determinations have also allowed searches for Higgs-boson decays into invisible particles, with the Run 2 dataset setting an upper limit of 10% on their rate.

Searches for dark matter also continued to be performed in traditional channels, for example via the mono-jet signature. To increase the accuracy of this search using the full Run 2 statistics, theorists contributed differential background predictions that go beyond the next-to-leading order in perturbation theory to achieve an unprecedented background uncertainty of only 3% at MET values above 1 TeV. The resulting constraints on WIMP dark matter are complementary to those achieved with ultrasensitive detectors deep underground as well as astroparticle experiments. The absence of dark-matter signals in such established search channels led to the development of new models that predict a number of relevant but previously unexplored signatures.
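The value of pinning the background down to 3% can be seen with a simple significance estimate: once the absolute systematic term dominates the statistical fluctuation of the background, better precision on the prediction translates directly into sensitivity. A sketch with invented event counts:

```python
# Effect of the fractional background uncertainty delta on a counting search:
# significance ~ s / sqrt(b + (delta*b)^2). Event counts are invented.
import math

def significance(s, b, delta):
    return s / math.sqrt(b + (delta * b) ** 2)

b, s = 1000.0, 100.0   # hypothetical background and signal above 1 TeV MET
for delta in (0.10, 0.03):
    print(f"delta = {delta:.0%}: ~{significance(s, b, delta):.1f} sigma")
```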

LHC Run 3 will allow searches to go significantly beyond the sensitivity achieved with the Run 2 data 

In several respects, searches for new physics at the LHC experiments have gone well beyond what was foreseen at the time of their design. “Scouting” data streams were introduced to store small-size event records suitable for di-jet and di-muon resonance searches such that recording rates could be increased by up to two orders of magnitude within the available bandwidth. Consequently, the mass reach of these searches was extended to lower values whereas previously this was impossible due to the high background rates at low masses. Long-lived particle searches also opened a new frontier, motivating proposals for new LHC detectors.

Overall, LHC Run 1 and Run 2 led to an enormous diversification of new-physics searches at the energy frontier by ATLAS and CMS, with complementary searches conducted by LHCb targeting lower invariant masses. The absence of new-physics signals despite the exploration of a multitude of signatures with unforeseen precision is a strong experimental result that feeds back to the phenomenology community to shape this programme further. While the analysis of Run 2 data is still ongoing, the experience gained so far in terms of experimental techniques and investigated signatures puts the experimental collaborations in a better position to search for new physics at Run 3.

Experimental improvements

LHC Run 3 will allow searches to go significantly beyond the sensitivity achieved with the Run 2 data. ATLAS and CMS are expected to collect datasets with an integrated luminosity of up to 300 fb–1, adding to the 140 fb–1 collected in Run 2. Taking into account the additional, smaller benefit provided by the increase in the centre-of-mass energy from 13 to 13.6 TeV, new-physics search sensitivities will generally increase by a factor of two in terms of cross sections. Additional gains in sensitivity will result from the exploration of new territory in several respects.
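A rough decomposition of that factor of two, under the assumption that searches are background-limited (cross-section reach then scales with the square root of the integrated luminosity) and that the 13 → 13.6 TeV step buys a modest parton-luminosity gain at high mass; the 15% energy factor below is an illustrative assumption, not a number from the text:

```python
# Rough origin of the quoted factor-of-two gain in cross-section sensitivity.
import math

lumi_run2 = 140.0                 # fb^-1, collected in Run 2
lumi_total = 140.0 + 300.0        # fb^-1, expected after Run 3
stat_gain = math.sqrt(lumi_total / lumi_run2)   # ~1.8 for background-limited searches
energy_gain = 1.15                # assumed high-mass parton-luminosity gain (illustrative)
print(f"combined gain ~ {stat_gain * energy_gain:.1f}x")   # ~2x
```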

Already at the level of data acquisition, significant improvements will increase the sensitivity of searches. The CMS high-level trigger system has been reinforced with graphics processing units to increase the recording rate of the data-scouting stream from 9 to 30 kHz. ATLAS has extended this technique to encompass more final states, including photons and b-jets. These techniques extend the sensitivity to hadronic resonances with low masses and weak coupling strengths into a domain that has never been probed before.

Mass exclusions for spin-1 leptoquarks

The particularly challenging searches for new long-lived particles will also benefit from experimental advances. ATLAS has improved the reconstruction of displaced tracks, reducing the number of fake tracks by a factor of 20 at similar efficiency compared to the current data analysis. New, dedicated triggers have been developed by ATLAS and CMS to identify electrons, muons and tau-leptons displaced from the primary interaction vertex. These trigger developments will allow the collection of signal-candidate events at unprecedented rates, for example to test exotic Higgs-boson decays into long-lived particles with branching ratios far below the current experimental limits.

Likewise, ongoing developments in machine learning will contribute to the Run 3 search programme. While Run 1 physics analyses used generic, simple algorithms to distinguish between hypotheses, in Run 2 more powerful approaches of deep learning were introduced. For Run 3 their development continues, using a multitude of different algorithms tailored to the needs of event reconstruction and physics analysis to increase the reach of new-physics searches further.

New signatures

The Run 3 data will also be scrutinised in view of final states that either have been proposed more recently or that require a particularly large dataset. Examples of the latter are searches for electroweakinos, which have a production cross-section at the LHC at least two orders of magnitude smaller than strongly interacting SUSY particles. First results based on Run 2 data surpassed the sensitivity of the LEP experiments, including tests of unconventional “R-parity violating” scenarios in which electroweakinos can decay into only SM particles. This results in complicated final states containing electrons, muons and many jets but relatively low MET. Here, the challenging background determination could only be achieved thanks to machine-learning techniques, which lay the ground for further searches for particularly rare and challenging SUSY signals at Run 3.

If R-parity is not a symmetry, SUSY does not provide a WIMP dark-matter candidate. Among alternative explanations of the nature of this substance, models with bound-state dark matter are gaining increasing attention. In this new approach, strong interactions similar to quantum chromodynamics determine the particle spectrum in a dark sector that includes stable dark-matter candidate particles such as dark pions. At the LHC, coupling between such dark-sector particles and known ones would result in “semi-visible” jets comprising both types of particle (traditional dark-matter searches at the LHC have avoided such events to reduce background contributions). With the Run 2 data, CMS has already provided the very first collider constraints on these dark sectors, and more results from both ATLAS and CMS will follow in this and other proposed dark-sector scenarios.

Multiple deviations from the SM observed at lower energies are starting to shape the search programme at the energy frontier. The long-standing anomaly in the magnetic moment of the muon has recently reached a significance of 4.2σ, motivating increased efforts in searching for possible causes. One is the pair-production of a supersymmetric partner of the muon, for which models fit the low-energy data if the mass of this “smuon” is below 1 TeV and hence within the reach of the LHC. Another is to look for vector-like leptons, which are suggested by consistent extensions of the SM apart from SUSY, using final states containing a large number of leptons.

Multiple deviations from the SM observed at lower energies are starting to shape the search programme at the energy frontier

Moreover, the anomalies in B-meson decays consistently reported by BaBar, Belle and LHCb (see “A flavour of Run 3 physics”) have a strong and growing impact on the Run 3 search programme. Explanations for these anomalies require new particles with TeV-scale masses to fit the size of the observed effects and a hierarchy of fermion couplings to fit the deviations from lepton-flavour universality. Intriguingly these two requirements happen to coincide with the two peculiarities of the Higgs boson. Particular attention is now given to leptoquark searches investigating several production and decay modes. ATLAS and CMS have already started to probe leptoquark models suggested by the B-meson anomalies using Run 2 data (see “Leptoquarks” figure). While the analysis of key channels is ongoing, Run 3 will allow the experiments to probe a large fraction of the relevant parameter space. Furthermore, consistent models of leptoquarks include more new particles, namely colour-charged and colour-neutral bosons, vector-like quarks and vector-like leptons. These predict a variety of new-physics signatures that will further shape the Run 3 search programme.

In summary, searches for new physics at Run 3 will bring significant gains in sensitivity beyond the benefit provided by the increased amount of data. In particular, potential explanations of the anomalies observed at lower energies will be tested. Assuming that these anomalies point to new physics, the relevant searches with Run 3 data have a good chance of finding the first deviations from the SM at the TeV energy scale. Such an outcome would be of the utmost importance for particle physics, strengthening the case for the proposed Future Circular Collider at CERN.

The post The search for new physics: take three appeared first on CERN Courier.

]]>
Feature Searches for new physics at Run 3 will bring significant gains in sensitivity beyond the benefit provided by the increased amount of data. https://cerncourier.com/wp-content/uploads/2022/04/CCMayJun22_RUN3_searches_feature.jpg
Pushing the precision frontier https://cerncourier.com/a/pushing-the-precision-frontier/ Mon, 02 May 2022 08:56:22 +0000 https://preview-courier.web.cern.ch/?p=99176 Precision measurements in Run 3 can act as a gateway to new discoveries explains Abideh Jafari.

The post Pushing the precision frontier appeared first on CERN Courier.

]]>
A candidate vector-boson-scattering event at CMS

Confronted with multiple questions about how nature works at the smallest scales, we exploit precise measurements of the Standard Model (SM) to seek possible answers. Those answers could further confirm the SM or give hints of new phenomena. As a hadron collider, the LHC was primarily built as a discovery machine. After more than a decade of operation, however, it has surpassed expectations. Alongside the discovery of the Higgs boson and a broad programme of direct searches for new phenomena, ultra-precise measurements on a wide range of parameters have been carried out. These include particle masses, the width of the Z boson and the production cross-sections of various SM processes ranging over 10 orders of magnitude (see “Cross sections” figure); the latter are connected to a multitude of measurements including differential distributions and particle properties.

An example that is unique to the LHC is the measurement of the Higgs-boson mass, which was determined to a precision of 0.12% by CMS in 2019. Also of vital importance are the strengths of the Higgs-boson couplings to other known particles (see “Coupling strengths” figure). According to the SM, these couplings must be proportional to a particle’s mass. Nicely following the SM expectation, every coupling in this plot is extracted using various measurements of the Higgs-boson production and decay channels. Besides the remarkable agreement with the SM, the plot shows the result of the Higgs-boson decay to muons, which is challenging to measure because of the muon’s small mass. 
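The SM expectation underlying this plot is that the Yukawa coupling of each fermion scales linearly with its mass, y_f = √2 m_f/v, with v ≈ 246 GeV the vacuum expectation value of the Higgs field. A minimal numerical sketch, assuming approximate PDG masses:

```python
# SM Yukawa couplings: y_f = sqrt(2) * m_f / v, with v ~ 246 GeV.
# Fermion masses are approximate PDG values in GeV.
import math

v = 246.0
masses = {"muon": 0.106, "tau": 1.777, "bottom": 4.18, "top": 172.5}
for name, m in masses.items():
    print(f"y_{name:<6} = {math.sqrt(2.0) * m / v:.2e}")
```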

Production cross section of various SM processes

The LHC-experiment collaborations are currently concluding their Run 2 measurements using proton-collision data recorded at 13 TeV while getting ready for the Run 3 startup. Among several notable achievements with the Run 2 data is the measurement of a fundamental parameter of the SM, the mass of the W boson, with a precision of 0.02% by ATLAS and of 0.04% in the forward region by LHCb (see “W mass” figure). Precision measurements of the W-boson mass are crucial for testing the consistency of the SM, as radiative corrections connect it with the masses of the top quark and the Higgs boson. A future combination of the LHCb result with similar measurements from ATLAS and CMS can reduce the significant uncertainty that parton distribution functions induce on this parameter. Although particle masses are crucial elements of the SM, it is not always possible to determine them directly. In the case of quarks, except for the heaviest top quark, their immediate hadronisation makes the properties of a bare quark inaccessible. Observed for the first time by ALICE, the QCD “dead cone” (an angular region of suppressed gluon emission surrounding a heavy quark, whose size is set by the ratio of the quark’s mass to its energy) in charm jets may ultimately offer a way to access the heavy-quark mass directly.
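Parametrically, the dead-cone suppression operates within an opening angle set by the ratio of the heavy quark's mass to its energy, which is what links the measured radiation pattern back to the mass:

```latex
% Angular size of the QCD dead cone around a heavy quark Q of mass m_Q
% and energy E_Q: gluon radiation is suppressed at angles below this.
\theta_{\mathrm{dead\,cone}} \simeq \frac{m_Q}{E_Q}
```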

The coupling structure of the SM, especially between heavy particles, is another key aspect that is being pinned down by ATLAS and CMS. In 2017 the experiments marked an important milestone in this regard with the observation of WW scattering – a first step in a diverse programme of measurements of vector boson scattering (VBS), in which vector bosons emitted from each of the incoming quarks interact with one another (see “Critical physics” image). As VBS processes are sensitive to the self-interaction of four gauge bosons as well as to the exchange of a virtual Higgs boson, they remain a central part of the LHC physics programme during Run 3 and beyond, where the additional data will become a decisive factor.

Run 3 preparations 

The LHC is about to start a new endeavour at an unprecedented energy (13.6 TeV as opposed to 13 TeV) and with an instantaneous luminosity on average 1.5 times higher than in Run 2. In addition to higher statistics, the larger energy reach of Run 3 provides a unique opportunity to study unexplored territories in the kinematic phase space of particles. Prime targets are regions where the discovery of possible new phenomena is mainly awaiting additional data, and those where the insufficient size of the data sample is the main limiting factor on the precision.

Measurements of the mass of the W boson compared to the SM prediction

A major challenge ahead is the increased number of additional interactions within the same or nearby bunch crossings, called pileup. The large rate of interactions puts strain on different parts of the detectors as well as their trigger systems. Relying on cutting-edge technologies, experiments at the LHC have performed extensive upgrades in several subsystems, hardware and software to cope with the associated complexities and exploit the full potential of the data. In some cases, this has involved the installation of new detectors or an entire renewal, or extension, of existing subdetectors. Examples are the New Small Wheel (NSW) muon detector in ATLAS and the muon gas electron multiplier (GEM) detectors in CMS. These gas-based detectors, which are designed in view of the High-Luminosity LHC (HL-LHC) and will be partially operational during Run 3, are installed in the endcap area of the experiments where a significant increase is expected in the particle flux. The improved muon momentum resolution they bring also plays a critical role in the trigger systems by keeping the rate low. 

It is now proven that with advanced analysis strategies we can surpass the expectations from projection studies

In the ALICE experiment, among other important upgrades, the inner tracker system has faced a complete renewal of the silicon-based detectors for enhanced low-momentum vertexing and tracking capabilities. At LHCb, in addition to new front-end electronics for higher-rate triggering and readout, the ring-imaging Cherenkov detector has been upgraded to deal with the large-pileup environment, while a brand new vertex locator and tracking system will allow the reconstruction of charged particles. In parallel to the hardware, the LHC experiments have accomplished a substantial upgrade in software and computing, including the implementation of fast readout systems and the use of state-of-the-art graphics processing units.

Physics ahead 

The series of upgrades undertaken during Long Shutdown 2 will enable the experiments to pursue a rich physics programme during Run 3 and to get ready for Run 4 at the HL-LHC. The preparation also involves Monte Carlo event generation at the new centre-of-mass energy, full simulation of collision events in the new detectors, and designing new methods with modern tools to identify particles and analyse the data. The additional data of Run 3, together with innovative analysis techniques, will result in reduced uncertainties and therefore push the precision frontier forward. The experience from Run 2 is of great value in this regard. 

Predicted vs measured values of the coupling strengths between the Higgs boson and other SM particles

It is now proven that with advanced analysis strategies which make maximal use of the available data, we can surpass the expectations from projection studies. An example is the Higgs-boson decay to muons. Whereas early Run 3 projections suggested an uncertainty of about 20% with 300 fb–1 of LHC data, in 2020 the CMS experiment achieved such precision using Run 2 data alone. In the latest projections, a further improvement of 30–35% is expected thanks to the advanced analysis strategies developed during Run 2. The projected uncertainties in Higgs-boson couplings to other SM particles, including vector bosons and third-generation leptons, are also expected to be reduced. The Higgs-boson interaction with the heaviest known particle, the top quark, is of particular interest as it may give insights into the existence and energy scale of new physics above 100 GeV. Besides the famous ttH process, simultaneous production of four top quarks is also very sensitive to the top quark’s Yukawa interaction with the Higgs boson. Exhibiting the heaviest SM final state, “four-top” is one of the rarest but most important processes. Following evidence reported by ATLAS in 2021, Run 3 data may fully establish its observation. 

Among rare processes that may shed light on electroweak symmetry breaking, one can point to the VBS production of longitudinally polarised W bosons. The longitudinal polarisation is a result of electroweak symmetry breaking through which vector bosons acquire mass from their interaction with the Brout–Englert–Higgs field. Given that the analysis of Run 2 data has reached the expected significance (about 1σ) of the HL-LHC with the same luminosity, we look forward to Run 3 to test the SM with more data and further channels.

Run 3 excitement

The excitement about LHC Run 3 is not restricted to rare phenomena and new discoveries. Well-established processes such as top-quark, W- and Z-boson production are pivotal for a firm understanding of the SM. The upcoming data will provide us with gigantic statistics that translate to significantly higher precision on the measured properties of these particles, in addition to various fundamental parameters of the SM. The latter include the mass of the top quark, the precise determination of which is a critical factor in assessing the stability of the electroweak vacuum. Early Run 3 projection studies predicted an uncertainty of 1.5 GeV on the top-quark mass. This has already been achieved in Run 2 using tt differential cross-section measurements, and will be further reduced with the upcoming Run 3 data.

The upcoming data will provide us with gigantic statistics that translate to significantly higher precision

Such levels of precision also provide invaluable feedback to the theory community, whose tremendous efforts in modelling and state-of-the-art calculations and simulations are the basis of our measurements. Thanks to the increasing sophistication and precision of SM calculations, any statistically significant deviation from theory can be an unambiguous sign of new physics. Therefore, precision measurements in Run 3 can act as a gateway to new discoveries. These include measurements of properties such as vector-boson polarisation, which are sensitive to new physics by construction, inclusive cross sections of VBS and other rare processes, and differential distributions where new phenomena can appear in the tails.

In October 2021, stable proton beams were circulated and collided at a centre-of-mass energy of 900 GeV in the LHC for the first time since 2018. While preparing for the start up in May this year, the experiments made use of these data for a special period of commissioning to ensure their readiness to collect data in Run 3. The successful outcome of the commissioning brought further enthusiasm and motivation to the LHC-experiment collaborations, who very much look forward to executing their far-reaching Run 3 physics plans.


The post Pushing the precision frontier appeared first on CERN Courier.

]]>
Feature Precision measurements in Run 3 can act as a gateway to new discoveries explains Abideh Jafari. https://cerncourier.com/wp-content/uploads/2022/04/CCMayJun22_RUN3_precision_event.jpg
Dijet excess intrigues at CMS https://cerncourier.com/a/dijet-excess-intrigues-at-cms/ Tue, 15 Mar 2022 10:43:15 +0000 https://preview-courier.web.cern.ch/?p=98003 Events containing high-mass dijet resonances can originate from Standard Model processes, but those are expected to be extremely rare says the collaboration.

The post Dijet excess intrigues at CMS appeared first on CERN Courier.

]]>
The Standard Model (SM) has been extremely successful in describing the behaviour of elementary particles. Nevertheless, conundrums such as the nature of dark matter and the cosmological matter–antimatter asymmetry strongly suggest that the theory is incomplete. Hence, the SM is widely viewed as an effective low-energy limit of a more fundamental underlying theory, which must be modified to describe particles and their interactions at higher energies.

A powerful way to discover new particles predicted by physics beyond the SM is to search for high-mass dijet or multi-jet resonances, as these are expected to have large production cross-sections at hadron colliders. These searches look for a pair of jets originating from a pair of quarks or gluons coming from the decay of a new particle “X”, appearing as a narrow bump in the dijet invariant-mass distribution. Since the energy scale of new physics is most likely high, it is natural to expect these new particles to be massive.

CMS figure 1

CMS and ATLAS have performed a suite of single-dijet-resonance searches. The next step is to look for new identical-mass particles “X” that are produced in pairs, with (resonant mode) or without (non-resonant mode) a new, heavier intermediate particle “Y” being produced and decaying to pairs of X. Such processes would yield two dijet resonances and four jets in the final state: the dijet mass would correspond to particle X and the four-jet mass to particle Y.

The CMS experiment was further motivated to search for Y → XX → 4 jets by a candidate event recorded in 2017, which was reported in a previous CMS search for dijet resonances (figure 1). This spectacular event has four high-transverse-momentum jets, forming two dijet pairs, each with an invariant mass of 1.9 TeV, and a four-jet invariant mass of 8 TeV.

CMS figure 2

In results presented on 14 March at the Rencontres de Moriond, the CMS collaboration reported another very similar event, found in a new search optimised for this specific Y → XX → 4-jet topology. Such events could originate from quantum-chromodynamics processes, but those are expected to be extremely rare (figure 2). The two candidate events are clearly visible at high masses, distinct from all the rest. Also shown (magenta) is a simulation of a possible new-physics signal – a diquark decaying to vector-like quarks – with a four-jet mass of 8.4 TeV and a dijet mass of 2.1 TeV, which describes these two candidates very nicely.

The hypothesis that these events originate from the SM at the observed X and Y masses is disfavoured with a local significance of 3.9σ. Taking into account the full range of possible X and Y mass values, the compatibility of the observation with the SM expectation leads to a global significance of 1.6σ.
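As a back-of-the-envelope reading of those two numbers, local and global significances can be related through an effective number of independent mass hypotheses tested, N_eff, via p_global = 1 − (1 − p_local)^N_eff. The sketch below simply inverts that relation; the collaboration's actual look-elsewhere correction is more sophisticated:

```python
# Relating the quoted local (3.9 sigma) and global (1.6 sigma) significances
# through an effective trials factor. Back-of-the-envelope only.
import math
from scipy.stats import norm

p_local = norm.sf(3.9)     # one-sided p-value of the local significance
p_global = norm.sf(1.6)    # one-sided p-value of the global significance
n_eff = math.log1p(-p_global) / math.log1p(-p_local)
print(f"p_local = {p_local:.1e}, p_global = {p_global:.3f}, N_eff ~ {n_eff:.0f}")
```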

The upcoming LHC Run 3 and future High-Luminosity LHC runs will be crucial in telling us whether these events are statistical fluctuations of the SM expectation, or the first signs of yet another groundbreaking discovery at LHC.

Turning the screw on right-handed neutrinos https://cerncourier.com/a/turning-the-screw-on-right-handed-neutrinos/ Wed, 09 Mar 2022 08:31:25 +0000 https://preview-courier.web.cern.ch/?p=97866 The existence of heavy neutral leptons could solve the key observational shortcomings of the Standard Model, and such particles might be within reach of current and proposed experiments.

[Image: The KATRIN experiment]

In the 1960s, the creators of the Standard Model made a smart choice: while all charged fermions came in pairs, with left-handed and right-handed components, neutrinos were only left-handed. This “handicap” of neutrinos allowed physicists to accommodate in the most economical way important features of the experimental data at that time. First, such left-handed-only neutrinos are naturally massless, and second, individual leptonic flavours (electron, muon and tau) are automatically conserved.

It is now well established that neutrinos have masses and that the neutrino flavours mix with each other, much as quarks do. If this were known 55 years ago, Weinberg’s seminal 1967 work “A Model of Leptons” would be different: in addition to the left-handed neutrinos, it would very likely also contain their right-handed counterparts. The structure of the Standard Model (SM) dictates that these new states, if they exist, are the only singlets with respect to the weak-isospin and hypercharge gauge symmetries and thus do not participate directly in electroweak interactions (see “On the other hand” figure). This makes right-handed neutrinos (also referred to as sterile neutrinos, singlet fermions or heavy neutral leptons) very special: unlike charged quarks and leptons, which get their masses from the Yukawa interaction with the Brout–Englert–Higgs field, the masses of right-handed neutrinos depend on an additional parameter – the Majorana mass – which is not related to the vacuum expectation value and which results in the violation of lepton-number conservation. As such, right-handed neutrinos are also sometimes referred to as Majorana leptons or Majorana fermions.

Leaving aside the possible signals of eV-scale neutrino states reported in recent years, all established experimental signatures of neutrino oscillations can be explained by the SM with the addition of two heavy neutral leptons (HNLs). If there were only one HNL, then two out of three SM neutrinos would be massless; with two HNLs, only one of the SM neutrinos is massless – this is not excluded experimentally. Any larger number of HNLs is also possible.

[Figure: Fermion content]

The simplest way to extend the SM in the neutrino sector is to add several HNLs and no other new particles. Already this class of theories is very rich (different numbers of HNLs and different values of their masses and couplings imply very different phenomenology), and contains several different scenarios explaining not only the observed masses and flavour oscillations of the SM neutrinos but also other phenomena that are not accommodated by the SM. The scenario in which the Majorana masses of right-handed neutrinos are much higher than the electroweak scale is known as the “type I see-saw model”, first put forward in the late 1970s. The theory with three right-handed neutrinos (the same as the number of generations in the SM) with their masses below the electroweak scale is called the neutrino minimal standard model (νMSM), and was proposed in the mid-2000s.

Would these new particles be useful for anything else besides neutrino physics? The answer is yes. The lightest HNL, N1, may serve as a dark-matter particle, whereas the other two HNLs N2,3 not only “give” masses to active neutrinos but can also lead to the matter–antimatter asymmetry of the universe. In other words, the SM extended by just three HNLs could solve the key outstanding observational problems of the SM, provided the masses and couplings of the HNLs are chosen in a specific domain.

[Figure: The masses of heavy neutral leptons]

The leptonic extension of the SM by right-handed neutrinos is quite similar to the gradual adaptation of electroweak theory to experimental data during the past 50 years. While the bosonic sector of the electroweak model has remained intact since 1967 – confirmed by the discoveries of the W and Z bosons in 1983 and the Higgs boson in 2012 – the fermionic sector evolved from one to two to three generations, revealing the remarkable symmetry between quarks and leptons. It took about 20 years to find all the quarks and leptons of the third generation. How much time it will take to discover HNLs, if they indeed exist, depends crucially on their masses.

The value of the Majorana mass, and therefore the physical mass of an HNL, is arbitrary from a theoretical point of view and cannot be found from neutrino-oscillation experiments. The famous see-saw formula that relates the observed masses of the active neutrinos to the Majorana masses of HNLs has a degeneracy: change the Yukawa couplings of HNLs to neutrinos by a factor x and the HNL masses by a factor x², and the active neutrino masses and the physics of their oscillations remain intact. The scale of HNL masses thus can be any number from a fraction of an eV to 10¹⁵ GeV (see “Options abound” figure). Moreover, there could be several HNLs with very different masses. Indeed, even in the SM the masses of charged fermions, though they share a similar origin, differ by almost six orders of magnitude.
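
The degeneracy is explicit in the schematic see-saw relation below (written without the model-dependent factors of order one; v denotes the Higgs vacuum expectation value):

```latex
% Schematic type-I see-saw: rescaling the Yukawa coupling y by a factor x
% and the Majorana mass M by x^2 leaves the active-neutrino mass unchanged.
m_\nu \simeq \frac{y^2 v^2}{M}
\;\longrightarrow\;
\frac{(x y)^2\, v^2}{x^2 M} = \frac{y^2 v^2}{M}
```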

Motivated by the value of the active neutrino masses, the HNLs could be light, with masses of the order of 1 eV. Alternatively, similar to the known quarks and charged leptons, they could be somewhere around the GeV or Fermi scale. Or they could be close to the grand unification scale, 10¹⁵ GeV, where the strong and electroweak interactions are thought to be unified. These possibilities have different theoretical and experimental consequences.

The case of the light sterile neutrino

The see-saw formula tells us that if the mass of HNLs is around 1 eV, their Yukawa couplings should be of the order of 10⁻¹². Such light sterile neutrinos can potentially be observed in neutrino experiments, as they can be involved in the oscillations together with the three active neutrino species. Several experiments – including LSND, GALLEX, SAGE, MiniBooNE and BEST – have reported anomalies in neutrino-oscillation data (the so-called short-baseline, gallium and reactor anomalies) that could be interpreted as a signal for the existence of light sterile neutrinos. However, it looks difficult, if not impossible, to reconcile the existence of these states with the recent negative results of other experiments such as MINOS+, MicroBooNE and IceCube, accounting for additional constraints coming from β-decay, neutrinoless double-β decay and cosmology.

[Figure: Cosmological bounds]

The parameters of light sterile neutrinos required to explain the experimental anomalies are in strong tension with the cosmological bounds (see “Cosmological bounds” figure). For example, their mixing angle with the ordinary neutrinos should be sufficiently large that these states would have been produced abundantly in the early universe, affecting its expansion rate during Big Bang nucleosynthesis and thus changing the abundances of the light elements. In addition, light sterile neutrinos would affect the formation of structure. Having been created in the hot early universe with relativistic velocities, they would have escaped from forming structures until they cooled down in much later epochs. This so-called “hot dark matter” scenario would mean that the smallest structures, which form first, and the larger ones, which require much more time to develop, would experience different amounts of dark matter. Moreover, the presence of such particles would affect baryon acoustic oscillations and therefore impact the value of the Hubble constant deduced from them.

Besides tensions between the experiments and cosmological bounds, light sterile neutrinos do not provide any solution to the outstanding problems of the SM. They cannot be dark-matter particles because they are too light, nor can they produce the baryon asymmetry of the universe as their Yukawa couplings are too small to give any substantial contribution to lepton-number violation at the temperatures (> 160 GeV) at which the anomalous electroweak processes with baryon non-conservation have a chance to convert a lepton asymmetry into a baryon asymmetry. 

Three Fermi-scale heavy neutral leptons

Another possible scale for HNL masses is around a GeV, plus or minus a few orders of magnitude. Right-handed neutrinos with such masses do not interfere with active-neutrino oscillations because the corresponding length over which these oscillations may occur is far too small. As only two active-neutrino mass differences are fixed by neutrino-oscillation experiments, it is sufficient to have two HNLs N2,3 with appropriate Yukawa couplings to active neutrinos: to get the correct neutrino masses, they should not be smaller than ~10⁻⁸ (compared to the electron Yukawa coupling of ~10⁻⁶). These two HNLs may produce the baryon asymmetry of the universe, as we explain later, whereas the lightest singlet fermion, N1, may interact with neutrinos much more weakly and thus can be a dark-matter particle (although unstable, its lifetime can greatly exceed the age of the universe).

Three main considerations determine the possible range of masses and couplings of the dark-matter sterile neutrino (see “Dark-matter constraints” figure). The first is cosmological production. If N1 interacts too strongly, it would be overproduced in ℓ⁺ℓ⁻ → N1ν reactions, making the abundance of dark matter larger than what is inferred from observations and providing an upper limit on its interaction strength. Conversely, the requirement to produce enough dark matter results in a lower bound on the mixing angle that depends on the conditions in the early universe during the epoch of N1 production. Moreover, the lower bound completely disappears if N1 can also be produced at very high temperatures by interactions related to gravity or at the end of cosmological inflation. The second consideration is X-ray data. Radiative N1 → γν decays produce a narrow line that can be detected by X-ray telescopes such as XMM–Newton or Chandra, resulting in an upper limit on the mixing angle between sterile and active neutrinos. While this upper limit depends on the uncertainties in the distribution of dark matter in the Milky Way and other nearby galaxies and clusters, as well as on the modelling of the diffuse X-ray background, it is possible to marginalise over these to obtain very robust constraints.

[Figure: Dark-matter constraints]

The third consideration for the sterile neutrino’s properties is structure formation. If N1 is too light, a very large number-density of such particles is required to make an observed halo of a small galaxy. As HNLs are fermions, however, their number density cannot exceed that of a completely degenerate Fermi gas, placing a very robust lower bound on the N1 mass. This bound can be further improved by taking into account that light dark-matter particles remain relativistic until late epochs and therefore suppress or erase density perturbations on small scales. As a result, they would affect the inner structure of the halos of the Milky Way and other galaxies, as well as the matter distribution in the intergalactic medium, in ways that can be observed via gravitationally lensed galaxies, gaps in the stellar streams in galaxies and the spectra of distant quasars.

The upper limits on the interaction strength of sterile neutrinos fix the overall scale of active neutrino masses in the νMSM. The dark-matter sterile neutrino effectively decouples from the see-saw formula, making the mass of one of the active neutrinos much smaller than the observed solar and atmospheric neutrino-mass differences and fixing the masses of the two other active neutrinos to approximately 0.009 eV and 0.05 eV (for the normal ordering) and to the near-degenerate value 0.05 eV for the inverted ordering.

HNLs at the GeV scale and beyond 

Our universe is baryon-asymmetric – it does not contain antimatter in amounts comparable with the matter. Though the SM satisfies all three “Sakharov conditions” necessary for baryon-asymmetry generation (baryon number non-conservation, C and CP-violation, and departure from thermal equilibrium), it cannot explain the observed baryon asymmetry. The Kobayashi–Maskawa CP-violation is too small to produce any substantial effects, and departures from thermal equilibrium are tiny at the temperatures at which the anomalous fermion-number non-conserving processes are active. This is not the case with two GeV-scale HNLs: these particles are not in thermal equilibrium for temperatures above a few tens of GeV, and CP violation in their interactions with leptons can be large. As a result, a lepton asymmetry is produced, which is converted into baryon asymmetry by the baryon-number violating reactions of the SM.

The requirement to generate the baryon asymmetry in the νMSM puts stringent constraints on the masses and couplings of HNLs (see “Baryon-asymmetry constraints” figure). The mixing angle of these particles cannot be too large, otherwise they equilibrate and erase the baryon asymmetry, and it cannot be below a certain value because the active neutrino masses would then come out too small. We know that their mass should be larger than that of the pion, otherwise their decays in the early universe would spoil the success of Big Bang nucleosynthesis. In addition, the masses of the two HNLs should be close to each other so as to enhance CP-violating effects. Interestingly, HNLs with these properties are within the experimental reach of existing and future accelerators, as we shall see.

[Figure: Baryon-asymmetry constraints]

The final possible choice of HNL masses is associated with the grand unification scale, ~10¹⁵ GeV. To get the correct neutrino masses, the Yukawa couplings of a pair of these superheavy particles should be of the order of one, in which case the baryon asymmetry of the universe can be produced via thermal leptogenesis and anomalous baryon- and lepton-number non-conservation at high temperatures. The third HNL, if interacting extremely weakly, may play the role of a dark-matter particle, as described previously. Another possibility is that there are three superheavy HNLs and one light one, to play the role of dark matter. This model, as well as that with HNL masses of the order of the electroweak scale, may therefore solve the most pressing problems of the SM. The only trouble is that we will never be able to test it experimentally, since the masses of N2,3 are beyond the reach of any current or future experiment.

Experimental opportunities

It is very difficult to detect HNLs experimentally. Indeed, if the masses of these particles are within the reach of current and planned accelerators, they must interact orders of magnitude more weakly than the ordinary weak interactions. As for the dark-matter sterile neutrino, the most promising route is indirect detection with X-ray space telescopes. The new X-ray spectrometer XRISM, which is planned to be launched this year, has great potential to unambiguously detect a signal from dark-matter decay. Like many astrophysical observatories, however, it will not be able to determine the particle origin of this signal. Thus, complementary laboratory searches are needed. One experimental proposal that claims sufficient sensitivity to enter the cosmologically relevant region is HUNTER, based on radioactive-atom trapping and high-resolution decay-product spectrometry. Sterile neutrinos with masses of around a keV can also show up as a kink in the β-decay spectrum of radioactive nuclei, as discussed in the ambitious PTOLEMY proposal. The current generation of experiments that study β-decay spectra – KATRIN and Troitsk nu-mass – also perform searches for keV HNLs, but they are sensitive to significantly larger mixing angles than required for a dark-matter particle. Extending the KATRIN experiment with a multi-pixel silicon drift detector, TRISTAN, will significantly improve the sensitivity here.

The most promising prospects for finding the N2,3 responsible for neutrino masses and baryogenesis are experiments at the intensity frontier. For HNL masses below 5 GeV (the beauty threshold) the best strategy is to direct proton beams at a target to create K, D or B mesons that decay producing HNLs, and then to search for HNL decays through “nothing → leptons and hadrons” processes in a near detector. This strategy was used by the earlier PS191 experiment at CERN’s Proton Synchrotron (PS), NOMAD, BEBC and CHARM at the Super Proton Synchrotron (SPS), and NuTeV at Fermilab. There are several proposals for future experiments along these lines. The proposed SHiP experiment at the SPS Beam Dump Facility has the best potential, as it could cover almost all of the parameter space down to the lowest bound on coupling constants coming from neutrino masses. The SHiP collaboration has already performed detailed studies and beam tests, and the experiment is under consideration by the SPS and PS Experiments Committee. A smaller-scale proposal, SHADOWS, covers part of the interesting parameter space.

[Figure: Electron coupling]

The search for HNLs can be carried out at the near detectors of DUNE at Fermilab and T2K/T2HK in Japan, which are due to come online later this decade. The LHC experiments ATLAS, CMS, LHCb, FASER and SND, as well as the proposed CODEX-b facility, can also be used, albeit with fewer chances to enter deeply into the cosmologically interesting part of the HNL parameter space. The decays of HNLs can also be searched for at huge future detectors such as MATHUSLA. And, going to larger HNL masses, breakthroughs can be made at the proposed Future Circular Collider FCC-ee, studying the process Z → νN with a displaced vertex (DV) corresponding to the subsequent decay of N to available channels (see “Electron coupling” figure).
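
A rough Python estimate shows why displaced vertices are the natural signature at FCC-ee. It models the HNL width muon-decay style with an effective channel count – an order-of-magnitude assumption, not a precise calculation – and boosts the proper lifetime with the kinematics of Z → νN:

```python
import math

GF   = 1.166e-5   # Fermi constant [GeV^-2]
HBAR = 6.582e-25  # hbar [GeV s]
C    = 3.0e8      # speed of light [m/s]
MZ   = 91.19      # Z-boson mass [GeV]

def decay_length_m(mN, U2, n_ch=10):
    """Rough lab-frame decay length of an HNL from Z -> nu N.
    Width ~ n_ch * GF^2 * mN^5 * |U|^2 / (96 pi^3): an order-of-magnitude,
    muon-decay-style estimate with an assumed channel count n_ch."""
    width = n_ch * GF**2 * mN**5 * U2 / (96 * math.pi**3)  # [GeV]
    tau = HBAR / width                                     # proper lifetime [s]
    eN = (MZ**2 + mN**2) / (2 * MZ)    # HNL energy in the Z rest frame
    beta_gamma = math.sqrt((eN / mN)**2 - 1)
    return beta_gamma * C * tau

for U2 in (1e-8, 1e-9, 1e-10):
    print(f"m_N = 10 GeV, |U|^2 = {U2:.0e}: L ~ {decay_length_m(10, U2):.1f} m")
```

With these assumptions, mixings around |U|² ≈ 10⁻⁸ give metre-scale decay lengths – squarely in displaced-vertex territory – while smaller mixings favour large or distant decay volumes of the MATHUSLA type.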

Conclusions

Neutrino experiments and robust conclusions from observational cosmology call for extensions of the SM. But the situation is very different from that in the period preceding the discovery of the Higgs boson, when the consistency of the SM together with other experimental results allowed us to firmly conclude that either the Higgs boson had to be discovered at the LHC, or new physics beyond the SM must show up. Although we know for sure that the SM is incomplete, we do not have a firm prediction about where to search for new particles, nor what their masses, spins, interaction types and strengths are.

Experimental guidance and historical experience suggest that the SM should be extended in the fermion sector, and the completion of the SM with three Majorana fermions solves the main observational problems of the SM at once. If this extension of the SM is correct, the only new particles to be discovered in the future are three Majorana fermions. They have remained undetected so far because of their extremely weak interactions with the rest of the world.

LHC Run 3: the final countdown https://cerncourier.com/a/lhc-run-3-the-final-countdown/ Fri, 18 Feb 2022 16:16:38 +0000 https://preview-courier.web.cern.ch/?p=97471 The successful restart of Linac4 on 9 February marked the start of the final countdown to LHC Run 3.

The successful restart of Linac4 on 9 February marked the start of the final countdown to LHC Run 3. Inaugurated in May 2017 after two decades of design and construction, Linac4 was connected to the next link in the accelerator chain, the Proton Synchrotron Booster (PSB), in 2019 at the beginning of Long Shutdown 2 and operated for physics last year. The 86 m-long accelerator now replaces the long-serving Linac2 as the source of all proton beams for CERN experiments.

On 14 February, H⁻ ions accelerated to 160 MeV in Linac4 were sent to the PSB, with beam commissioning and physics at ISOLDE to start on 7 and 28 March, respectively. Beams will be sent to the PS on 28 February to serve, after set-up, experiments in the East Area, the Antiproton Decelerator and n_TOF. The SPS will be commissioned with beam during the week beginning 7 March, after which beams will be supplied to the AWAKE facility and to the North Area experiments, where physics operations are due to begin on 25 April.

Meanwhile, preparations for some of the protons’ final destination, the LHC, are under way. Powering tests and magnet training in the last of the LHC’s eight sectors are scheduled to start in the week of 28 February and to last for four weeks, after which the TI12 and TI18 transfer tunnels and the LHC experiments will be closed and machine checkout will begin. LHC beam commissioning with 450 GeV protons is scheduled to start on 11 April, with collisions at 450 GeV per beam expected around 10 May. Stable beams with collisions at 6.8 TeV per beam and nominal bunch population are scheduled for 15 June. An intensity ramp-up will follow, producing collisions with 1200 bunches per beam in the week beginning 18 July on the way to over double this number of bunches. High-energy proton-proton operations will continue for 3–4 months, before the start of a month-long run with heavy ions on 14 November. All dates are subject to change as the teams grapple with LHC operations at higher luminosities and energies than those during Run 2, following significant upgrade and consolidation work completed during LS2.

Among the highlights of Run 3 are the first runs of the neutrino experiments FASERν and SND@LHC, as well as the greater integrated luminosities and physics capabilities resulting from upgrades of the four main LHC experiments. A special request was made by LHCb for a SMOG2 proton–helium run in 2023 to measure the antiproton production rate and thus improve understanding of the cosmic antiproton excess reported by AMS-02. Ion runs with oxygen, including proton–oxygen and oxygen–oxygen, will commence in 2023 or 2024. The former is also long-awaited by the cosmic-ray community, to help improve models of high-energy air showers, while high-energy oxygen–oxygen collisions allow studies of the emergence of collective effects in small systems. High-β* runs, in which the beams are de-squeezed at the interaction points for forward-physics measurements, will be available for the TOTEM and LHCf experiments in late 2022 and early 2023.

On 28 January, CERN announced a change to the LHC schedule to allow necessary work for the High-Luminosity LHC (HL-LHC) both in the machine and in the ATLAS and CMS experiments. The new schedule foresees Long Shutdown 3 starting in 2026, one year later than previously planned, and lasting three years instead of 2.5. “Although the HL-LHC upgrade is not yet completed, a gradual intensity increase from 1.2 × 10¹¹ to 1.8 × 10¹¹ protons per bunch is foreseen for 2023,” says Rende Steerenberg, head of the operations group. “This promises exciting times and a huge amount of data for the experiments.”

Shining light on the precision frontier https://cerncourier.com/a/shining-light-on-the-precision-frontier/ Wed, 02 Feb 2022 09:44:39 +0000 https://preview-courier.web.cern.ch/?p=97229 The 30th International Symposium on Lepton Photon Interactions at High Energies covered major topics at the precision frontier and gave a glimpse of a vibrant future.

The 30th International Symposium on Lepton Photon Interactions at High Energies, hosted online by the University of Manchester from 10 to 14 January, saw more than 500 physicists from around the world engage in a broad science programme. The Lepton Photon series dates to the 1960s and takes place every two years. This edition had been due to bring the conference back to the UK for the first time in over 50 years, but Covid-19 restrictions forced it online and shifted its original August slot to January. The agenda was stretched to improve accessibility across time zones. Posters were presented via pre-recorded videos and three prizes were awarded following a public vote.

With 2022 marking the ten-year anniversary of the Higgs-boson discovery, it was appropriate that the conference kicked off with an experimental Higgs-summary talk. Both the ATLAS and CMS collaborations showcased their latest high-precision measurements of Higgs-boson properties and searches for physics beyond the Standard Model using the Higgs boson as a portal. ATLAS presented a new combination of the Higgs total and differential cross-section measurements in the two-photon and four-lepton channels, while CMS shared the first full Run-2 search for resonant di-Higgs production in several multi-lepton final states.

The LHC experiments continue to demonstrate the power of hadron colliders to test the electroweak sector. Notable new results included the first observation of opposite-charge WWjj production at CMS, the first tri-boson (WWW) observation at ATLAS, and LHCb entering the game of W-boson mass measurements. A highlight of the talks covering QCD topics was a combined fit of the parton distribution functions of the proton to differential cross-section measurements from ATLAS and HERA. A wide range of new-physics searches were presented, including a dark-photon search from ATLAS with the full Run-2 data, and a CMS search for new scalars decaying into final states with Higgs bosons.

In flavour physics, the pattern of anomalies in rare leptonic and semi-leptonic processes continues to intrigue. Highlights in this area included new tests of lepton universality from LHCb in Λb⁰ → Λc⁺ℓ⁻ν̄ℓ decays (ℓ = e, μ, τ), where the decay involving a τ lepton was observed for the first time, and from Belle in Ωc⁰ → Ω⁻ℓ⁺νℓ decays, where the ratio of the e–μ final-state branching ratios was found to be in agreement with the expectation of unity and where the μ decay had been measured for the first time. Similar studies of rare leptonic decays are now also taking place in the charm sector, where the BESIII collaboration tested e–μ universality in a second decay mode and confirmed its agreement with the Standard Model. Participants also heard about the latest searches for the ultra-rare decays K → πνν̄: from KOTO, which searches for the neutral-kaon decay mode, and from NA62, which now has 3.4σ evidence for the charged-kaon decay mode.

With the 2021 update on muon g-2 from Fermilab, and with the MEG-II, DeeMe and Mu3e experiments getting ready to search for muon-to-electron transitions, there is much excitement about charged-lepton physics. CP violation in beauty and charm remains a hot topic, with updates from LHCb, Belle and BESIII on D⁰ and Bs oscillations and the CKM angle γ. In all these areas, the theoretical community continues to push the boundaries to make improved predictions. Among other things, theorists presented the latest global fits of Wilson coefficients, and several welcome developments in lattice QCD.

The highlights from the neutrino sector included the low-energy excess search by MicroBooNE and the observation of the CNO cycle of solar neutrinos by Borexino. The latest results from the long-baseline experiments – T2K and, recently, NOvA – are starting to hint at large CP-violating effects in neutrino oscillations.

A series of talks on dark-matter searches spanned collider experiments, direct detection and astrophysical signatures. Some interesting anomalies persist, such as the DAMA annual modulation and the XENON1T low-energy excess. These will be challenged by a suite of next-generation detectors, such as PandaX-4T, XENONnT, LZ and DarkSide-20k.

The conference also included a rich programme of talks covering astrophysics, with an emphasis on gravitational waves and multi-messenger astronomy. Hot off the press was a combined search for spatial correlations between neutrinos and ultra-high-energy cosmic rays, using data from the ANTARES, IceCube, Auger and TA collaborations, with no sign yet of a connection.

As well as many new results from experiments in operation, the conference included sessions devoted to R&D in accelerators, detectors, software and computing, covering both collider and non-collider experiments. With many new facilities proposed in the medium and long terms, technological challenges, which include power consumption, data rates and radiation tolerance, are immense and demand significant efforts in harnessing promising avenues such as high-temperature superconductors, quantum sensors or specialised computer accelerators. Common to all areas is the need to train and retain highly skilled people to lead these efforts in future.

Discussions around diversity, inclusion and outreach are a firm part of the Lepton Photon plenary programme. A lively panel discussion covered many aspects of the first two topics and ended with a key message to the whole community: be an ally and take an active stance in support of minorities. The conference ended with traditional reports from the IUPAP commission on particles and fields and from ICFA, followed by strategy updates from Snowmass and the African Strategy for Fundamental and Applied Physics. While Snowmass is an established process for regular updates of the US strategy for the field, based on widespread community input both from the US and internationally, the African strategy is the first of its kind and is testament to the continent’s ambition and growing importance in physics research. The next conference will take place in Melbourne in July 2023.

Exotic flavours at the FCC https://cerncourier.com/a/exotic-flavours-at-the-fcc/ Thu, 06 Jan 2022 12:22:32 +0000 https://preview-courier.web.cern.ch/?p=96637 Intriguing hints of deviations from the Standard Model in the flavour sector point towards new physics accessible at a Future Circular Collider. 

Half a century after its construction, the Standard Model of particle physics (SM) still reigns supreme as the most accurate mathematical description of the visible matter in the universe and its interactions. It was placed upon its throne by the many precise measurements made at the Large Electron Positron collider (LEP), in particular, and its rule was confirmed by the discovery of the Higgs boson at the Large Hadron Collider (LHC). CERN’s LEP/LHC success story, in which a hadron collider provided direct evidence for a new particle (the Higgs boson) whose properties were already partially established at a lepton collider, can serve as a blueprint for physics discoveries at a proposed Future Circular Collider (FCC) operating at CERN after the end of the LHC. 

Back in the late 1970s and early 1980s when the LEP/LHC programme was first proposed, the W and Z bosons mediating the weak interactions had not yet been observed, the top quark was considered a possible discovery, and the Higgs boson was regarded as a distant speculation. Precise studies of the W and Z, which were discovered in 1983 at the SPS proton–antiproton collider at CERN, were key items in LEP’s physics programme along with direct searches for the top quark, the Higgs boson and possible unknown particles. Even though the LEP experiments did not reveal any new particles beyond the W and Z, the unprecedented precision of its measurements revealed indirect effects (via quantum fluctuations) of the top and the Higgs, thereby providing indirect evidence for the SM mechanism of electroweak symmetry breaking. When the top quark was discovered at the Tevatron proton–antiproton collider at Fermilab in 1995, and the Higgs boson at the LHC in 2012, their masses were within the ranges indicated by precision measurements made at lepton colliders. 

[Figure: Layout of the Future Circular Collider at CERN]

Nowadays, the hope is that the proposed FCC programme – comprising an electron–positron collider followed by a high-energy proton–proton collider in the same ~100 km tunnel – will repeat the LEP/LHC success story at an even higher level of precision and energy. The e⁺e⁻ FCC stage would reproduce the entire LEP sample of Z bosons within a couple of minutes, yielding around 5 × 10¹² Z bosons after four years of operation. In addition to allowing an incredibly accurate determination of the Z-boson’s properties, Z decays would also provide unprecedented samples of bottom quarks (1.5 × 10¹²) and tau leptons (3 × 10¹¹). Potential increases in the FCC-ee centre-of-mass energy would also produce unparalleled numbers of W⁺W⁻ and top–antitop pairs, which are important for the global electroweak fit, close to their respective thresholds, as well as more Higgs bosons than promised by other proposed e⁺e⁻ Higgs factories.
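
The “couple of minutes” claim is simple arithmetic, as the sketch below shows; the effective running time per year and the ~1.7 × 10⁷ LEP Z sample are assumed round numbers:

```python
# Back-of-the-envelope check of the "LEP sample in minutes" claim.
N_FCC      = 5e12   # Z bosons over the four-year FCC-ee Z-pole run
years      = 4
sec_per_yr = 1e7    # assumed effective seconds of physics per year
N_LEP      = 1.7e7  # approximate LEP Z sample

rate = N_FCC / (years * sec_per_yr)   # average Z production rate
print(f"average rate ~ {rate:.1e} Z/s")
print(f"time to match LEP: ~{N_LEP / rate / 60:.1f} minutes")
```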

Probing beyond the Standard Model

Analyses of FCC-ee data, combined with results from previous experiments at the LHC and elsewhere, would not only push our understanding of the SM to the next level but would also provide powerful indirect probes of possible physics beyond the SM, with sensitivities to masses an order of magnitude greater than those of the LHC. A possible subsequent proton–proton FCC stage (FCC-hh) operating at a centre-of-mass energy of at least 100 TeV would then provide unequalled opportunities to discover this new physics directly, just as the LHC made possible the discovery of the Higgs boson following the indirect hints from high-precision LEP data. Whereas the combination of LEP and the LHC explores the TeV scale both indirectly and directly, the combination of FCC-ee and FCC-hh will carry the search for new physics to 30 TeV and beyond.

However, for this dream scenario to play out, at least one beyond-the-SM particle must exist within FCC’s discovery reach. While the existence of dark matter and neutrino masses already prove that the SM cannot be complete (and there is no shortage of theoretical ideas as to what extensions of the SM could account for them), these observations can be explained by new particles within a very wide mass range – possibly well beyond the reach of FCC-hh. Fortunately, intriguing hints for new physics in the flavour sector have accumulated in recent years that point towards beyond-the-SM physics that should be accessible to FCC.

B-decay anomalies

Within the SM, the charged leptons – electrons, muons and taus – all have very similar properties. They interact with the photon as well as the W and Z bosons in the same way, and differ only in their masses, which in the SM are represented as Yukawa couplings to the Higgs boson. It is therefore said that the SM (approximately) respects lepton-flavour universality (LFU), despite the seemingly large differences in charged-lepton lifetimes originating from phase-space effects. 

Flavour observables (i.e. processes resulting from rare transitions among the different generations of quarks and leptons), and observables measuring LFU in particular, are especially promising tests of the SM because they are strongly suppressed in the SM and thus very sensitive to new physics. In recent years, a coherent pattern of anomalies, all pointing towards the violation of LFU, has emerged. Two classes of fundamental processes giving rise to decays of B mesons – b → sℓ⁺ℓ⁻ and b → cτν̄ – show deviations from the SM predictions.

In the flavour-changing neutral-current process b → sℓ⁺ℓ⁻, a heavy bottom quark undergoes a transition to a strange quark and a pair of oppositely charged leptons, which could be either electrons or muons. The ratios RK = Br(B → Kμ⁺μ⁻)/Br(B → Ke⁺e⁻) and RK* = Br(B → K*μ⁺μ⁻)/Br(B → K*e⁺e⁻), measured most precisely by the LHCb collaboration, are particularly interesting because their SM predictions are very clean. Since the muon and electron masses are negligible compared to the B-meson mass, the ratio of muon to electron decays should be close to unity according to the SM. However, intriguingly, LHCb has observed values significantly lower than one, and recently reported first evidence for LFU violation in RK. These hints of new physics are supported by measurements of the angular observable P5′ in B⁰ → K*⁰μ⁺μ⁻ decays and the rate of Bs → φμ⁺μ⁻ decays. Importantly, all these observations can potentially be explained by the same new-physics interactions and are consistent with all other available measurements of processes involving b → sℓ⁺ℓ⁻ transitions. In fact, global fits of all available b → sℓ⁺ℓ⁻ data find a preference for new physics over the SM hypothesis – a tantalising hint of a possible discovery.

[Figure: Anomalous correlations]

The second class of anomalies involves the charged-current process b → cτν̄, which is already mediated at tree level in the SM. The corresponding B-meson decays therefore have much higher probabilities to occur and thus larger branching ratios. However, the non-negligible tau mass leads to imperfect cancellations of the form factors in the ratio to electron or muon final states, and thus the resulting SM prediction is not as precise as those for RK and RK*. The most prominent examples of observables involving b → cτν̄ transitions are the ratios RD = Br(B → Dτν̄τ)/Br(B → Dℓν̄ℓ) and RD* = Br(B → D*τν̄τ)/Br(B → D*ℓν̄ℓ). Here, the measurements of Belle, BaBar and LHCb consistently point above the SM predictions, resulting in a combined tension of 3σ. Importantly, as these processes happen quite frequently in the SM, a significant new-physics effect would be required to account for the corresponding anomaly.

With the FCC-ee capable of producing 1.5 × 10¹² b quarks, clearly the b anomalies could be further verified within a short period of running, assuming that LHCb, Belle II and possibly other experiments do confirm them. The large data sample would also allow physicists to study complementary modes that bear upon LFU but are more difficult for LHCb to measure, such as other “R” measurements involving neutral kaons. These measurements would be invaluable for pinning down the mechanism responsible for any violation of lepton universality.

Other possible anomalies

The B anomalies are just one exciting avenue that a “Tera-Z factory” like FCC-ee could explore further. The anomalous magnetic moment of the muon, aμ, can also be viewed as an exciting hint for new physics in the lepton sector. The Dirac equation predicts the muon’s g-factor to be exactly two; fluctuations at the quantum level push the physical value of the magnetic moment slightly higher. The very high precision of both the calculation and the measurement therefore makes it a powerful observable with which to search for new physics. A tension between the measured and predicted values of aμ has persisted since Brookhaven published its final result in 2006, and was recently strengthened by the muon g-2 experiment at Fermilab, yielding an overall significance of 4.2σ when combined with the earlier Brookhaven data.

[Figure: Effects of new physics on precision electroweak measurements]

Various models have been proposed to explain the g-2 anomaly. They include leptoquarks (scalar or vector particles that carry colour, couple directly to a quark and a lepton, and arise in models with extended gauge groups) and supersymmetry. Such leptoquarks could have masses anywhere between the lower LHC limit of 1.5 TeV and about 10 TeV, thus being within the reach of FCC-hh, whereas a supersymmetric explanation would require a couple of new particles with masses of a few hundred GeV, possibly even within reach of FCC-ee. Importantly, any explanation involving heavy new particles would also lead to effects in Z → μ⁺μ⁻, as both observables are sensitive to interactions with sizeable coupling strength to muons. FCC-ee’s large Z-boson sample could therefore reveal deviations from the SM predictions at the suggested level. Leptoquarks could also modify the SM prediction for the H → μ⁺μ⁻ decay, which will be measured very accurately at FCC-hh (see “Anomalous correlations” figure).

CKM under scrutiny

As the Cabibbo–Kobayashi–Maskawa (CKM) matrix, which describes flavour violation in the quark sector, is unitary, the sum of the squares of the elements in each row and in each column must add up to unity. This unitarity relation can be used to check the consistency of different determinations of CKM elements (within the SM) and thus also to search for new physics. Interestingly, a deficit in the first-row unitarity relation exists at the 3σ level. This can be traced back to the fact that the value of the element Vud, extracted from super-allowed beta decays, is not compatible with the value of Vus, determined from kaon and tau decays, given CKM unitarity. Notably, this deviation can also be interpreted as a sign of LFU violation, since beta decays involve electrons while the most precise determination of Vus comes from decays with final-state muons.
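
The first-row check is easy to reproduce numerically with illustrative inputs close to current world averages; note that the exact significance depends on which determinations of Vud and Vus are adopted, and with the values assumed below the deficit comes out nearer 2σ than 3σ:

```python
import math

# Illustrative first-row CKM inputs (values and errors are approximate)
Vud, dVud = 0.97373, 0.00031  # super-allowed beta decays
Vus, dVus = 0.2243,  0.0008   # kaon and tau decays
Vub, dVub = 0.0038,  0.0002   # negligible in the sum

S  = Vud**2 + Vus**2 + Vub**2
dS = math.sqrt((2*Vud*dVud)**2 + (2*Vus*dVus)**2 + (2*Vub*dVub)**2)

print(f"|Vud|^2 + |Vus|^2 + |Vub|^2 = {S:.5f} +/- {dS:.5f}")
print(f"deficit from unitarity: {(1 - S)/dS:.1f} sigma")
```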

Here, a new-physics effect at a relative sub-per-mille level compared to the SM would suffice to explain the anomaly. This could be achieved by a heavy new lepton or a massive gauge boson affecting the determination of the Fermi constant that parametrises the strength of the weak interactions. As the Fermi constant can also be determined from the global electroweak fit, for which Z decays are crucial inputs, FCC-ee would again be the perfect machine to investigate this anomaly, as it could improve the precision by a large factor (see “High precision” figure). Indeed, the Fermi constant may be determined directly to one part in 10⁵ from the enormous sample (> 10¹¹) of Z decays to tau leptons.

FCC-ee’s extraordinarily large dataset will also enable scrutiny of a long-standing anomaly in the forward–backward asymmetry of Z → bb̄ decays. The asymmetry, AFB, arises because the Z boson couples with different strengths to left- and right-handed chiral states; the LEP measurement lies 2–3σ below the SM prediction. Although not significant, this anomaly may also be linked to new physics entering in b → s transitions.

Finally, a possible difference in the decay asymmetries of B → D*μν̄ versus B → D*eν̄ was recently reported by an analysis of Belle data. As in the case of RK, the SM prediction that the difference between the muon and the electron asymmetries should be zero is very clean and, like RD and RD*, this observable points towards new physics in b → c transitions and could be related via leptoquarks to g-2 of the muon. Once more, the great number of b quarks to be produced at FCC-ee, together with the clean environment of a lepton collider, would allow this observable to be determined with unprecedented accuracy.

Since all these anomalies point, to varying degrees, towards the existence of LFU-violating new physics, the question arises of whether a common explanation exists. There are several particularly interesting possibilities, including leptoquarks, new scalars and fermions (as arise in supersymmetric extensions of the SM), new vector bosons (W′ and Z′) and new heavy fermions. In the overwhelming majority of such scenarios, a direct discovery of a new particle is possible at FCC-hh. For example, it could discover leptoquarks with masses up to 10 TeV and Z′ bosons with masses up to 40 TeV, covering most of the mass ranges expected in such models.

[Figure: Anomalies point to possible violations of lepton-flavour universality]

A return to the Z pole and beyond

The LEP programme was extremely successful in determining the mechanism of electroweak symmetry breaking, in particular by measuring the properties and decays of the Z boson very precisely from a 17 million-strong sample. This allowed the Higgs mass to be predicted to lie in a range within which it was later discovered at the LHC. The flavour anomalies could lead to a similar situation in the near future. In this case, the roughly 5 × 10¹² Z bosons that the FCC-ee is designed to collect would not only be able to test the effects of new particles in precision electroweak observables, but also, via Z decays into bottom quarks and tau leptons, provide a unique testing ground for flavour physics. As noted earlier, FCC-ee’s Z-pole run is also envisaged to be the first step in a broader electroweak programme encompassing large statistics at the WW and tt̄ thresholds, in addition to its key role as a precision Higgs factory.

Looking much further ahead to the energy frontier, FCC-hh would be able, in the overwhelming majority of scenarios motivated by the flavour anomalies, to directly discover a new particle. Furthermore, FCC-hh would allow for a precise determination of rare Higgs decays and the Higgs potential, probing new-physics effects related to this sector, such as leptoquark explanations of the anomalous magnetic moment of the muon.

Pending the outcome of the FCC feasibility study recommended by the 2020 update of the European strategy for particle physics, the hope that the LEP/LHC success story could be repeated by FCC-ee/FCC-hh is well justified. While FCC-ee could be used to indirectly pin down the parameters of the model(s) of new physics explaining the flavour anomalies via precision electroweak and flavour measurements, FCC-hh would be capable of searching for the predicted particles directly. 

Space-based data probe neutron lifetime https://cerncourier.com/a/space-based-data-probe-neutron-lifetime/ Tue, 21 Dec 2021 12:27:33 +0000 https://preview-courier.web.cern.ch/?p=96654 Results show that a dedicated instrument on a future lunar mission would bring a crucial third independent tool to tackle the neutron lifetime puzzle.

[Figure: Recent measurements of the neutron lifetime]

The neutron lifetime is key to a range of fields, not least astrophysics and cosmology, where it is used in the modeling of the synthesis of helium and heavier elements in the early universe. Its value, however, is uncertain. In recent years, discrepancies of up to 4σ between measurements of the neutron lifetime using different methods present a puzzle that particle physicists, nuclear physicists and cosmologists are increasingly eager to solve. 

A recent measurement by the UCNτ experiment at the Los Alamos Neutron Science Center, the most constraining determination of the lifetime to date, further strengthens the discrepancy. The latest result, achieved using the so-called “bottle” method, gives a neutron lifetime of 877.75 ± 0.28 (stat) +0.22 −0.16 (syst) s, whereas measurements using the “beam” method have consistently resulted in longer lifetimes (see figure). While the beam method determines the lifetime by measuring the decay products of the neutron, the bottle method instead stores cold, or thermalised, neutrons for a certain time before counting the remaining ones by direct detection. If not the result of some unknown systematic error, the discrepancy could be a sign of exotic physics whereby the longer lifetime in the beam method stems from an unmeasured second decay channel.
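
The size of the tension is easy to check, symmetrising the asymmetric systematic uncertainty and adopting a representative beam-method average of about 887.7 ± 2.2 s (an illustrative number, not an official average):

```python
import math

tau_bottle = 877.75
err_bottle = math.hypot(0.28, 0.22)  # stat and (symmetrised) syst in quadrature
tau_beam, err_beam = 887.7, 2.2      # illustrative beam-method average

diff  = tau_beam - tau_bottle
sigma = diff / math.hypot(err_bottle, err_beam)
print(f"beam - bottle = {diff:.1f} s  ->  ~{sigma:.1f} sigma tension")
```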

Escape detection

Astrophysics brings a third, independent measurement into play, based on the bombardment of planetary surfaces by galactic cosmic rays. This continual process liberates large numbers of high-energy neutrons, some of which escape into space directly while others approach thermal equilibrium with surface and atmospheric material; a proportion of these subsequently escape into space, where at some point they decay. The neutron lifetime can therefore be inferred by counting the neutrons remaining at different distances from their production location, using detectors positioned hundreds to thousands of kilometres above the surface. As the escaped neutron flux depends on a planet’s particular elemental composition at depths corresponding to the neutron mean-free path (typically around 10 cm), neutron spectrometers have already been installed on several missions to explore planetary surface compositions.

In 2020, using neutrons produced through interactions of cosmic rays with Venus and Mercury, a team from the Johns Hopkins Applied Physics Laboratory and Durham University demonstrated the feasibility of such a neutron-lifetime measurement. Now, using data from a lunar mission, the same team has provided the first results with uncertainties approaching those coming from lab-based experiments. Importantly, since it also relies on direct detection, the result from space should produce the same lifetime as the bottle experiments.

For this latest study, the researchers used data from NASA’s Lunar Prospector taken during several elliptical orbits around the Moon in 1998. The orbiter contained two neutron detectors: one with a cadmium shield, making it insensitive to slow or thermal neutrons, and one with a tin shield, allowing it to measure thermal as well as higher-energy neutrons. The difference between the two count rates then provides the thermal neutron flux. Combining this with the spacecraft position, the group deduced the thermal neutron flux at different positions and distances from the Moon and fitted the data against a model that includes the production and propagation of thermal neutrons originating from interactions of cosmic rays with the lunar surface.

Surface studies

The highly detailed models account for neutron production from cosmic-ray interactions with the different elements of the lunar surface, and also for the varying composition of the surface in different regions. For the lifetime measurement, thermal neutrons were used because of their lower velocities (a few km/s), which make their flux as a function of the distance to the surface (typically several hundred km) more sensitive to the lifetime. The higher sensitivity comes at the cost of greater model complexity, however. For example, thermal neutrons cannot simply be modeled as traveling in a straight line, but are affected by lunar gravity: they not only come directly from the surface but can also enter the detector from behind, as they follow elliptical orbits.
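
A straight-line estimate, ignoring lunar gravity altogether, already shows how the surviving flux encodes the lifetime – and how precisely that flux must be measured (the speed and altitude below are representative values only):

```python
import math

v = 2.2e3   # representative thermal-neutron speed [m/s]
d = 500e3   # representative spacecraft altitude [m]
t = d / v   # flight time [s], assuming straight-line travel

for tau in (878.0, 888.0):   # bottle-like vs beam-like lifetime
    frac = math.exp(-t / tau)
    print(f"tau = {tau:.0f} s: {100*frac:.2f}% of neutrons survive the trip")
```

A 10 s shift in the lifetime changes the surviving flux by only a few tenths of a per cent, illustrating the counting precision a dedicated mission must reach.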

The study found a lifetime of 887 ± 14 (stat) +7 −3 (syst) s. The systematic error stems mainly from uncertainties in the surface composition and its variations, from the unmodeled temperature variation of the Moon’s surface, which affects the thermalisation process, and from uncertainties in the ephemerides (location) of the spacecraft. In future dedicated missions, the latter two issues can be mitigated, while knowledge of the surface composition can be improved with additional studies. Indeed, the large statistical error arises from this being a non-dedicated mission, in which the small data sample used was not even part of the science data of the original mission. The results are therefore highly promising, as they show that a dedicated instrument on a future lunar mission would bring a crucial third independent tool to tackle the neutron-lifetime puzzle.

Witten reflects https://cerncourier.com/a/witten-reflects/ Tue, 21 Dec 2021 12:05:42 +0000 https://preview-courier.web.cern.ch/?p=96758 The veteran theorist explains how the LHC and other recent results have impacted his view on nature. 

[Image: Edward Witten]

How has the discovery of a Standard Model-like Higgs boson changed your view of nature? 

The discovery of a Standard Model-like Higgs boson was a great triumph for renormalisable field theory, and really for simplicity. By the time the LHC was operating, attempts to make the Standard Model (SM) work without an elementary Higgs field – using a dynamical mechanism instead – had become rather convoluted. It turned out that, as far as one can judge from what we have learned so far, the original idea of an elementary Higgs particle was correct. This also means that nature takes advantage of all the possible building blocks of renormalisable field theory – fields of spin 0, 1/2 and 1 – and the flexibility that that allows. 

The other key fact is that the Higgs particle has appeared by itself, and without any sign of a mechanism that would account for the smallness of the energy scale of weak interactions compared to the much larger presumed energy scales of gravity, grand unification and cosmic inflation. From the perspective that my generation of particle physicists grew up with (and not only my generation, I would say), this is quite a shock. Of course, we lived through a somewhat similar shock a little over 20 years ago with the discovery that the expansion of the universe is accelerating – something that is most simply interpreted in terms of a very small but positive cosmological constant, the energy density of the vacuum. It seems that the ideas of naturalness that we grew up with are failing us in at least these two cases.

What about new approaches to the fine-tuning problem such as the relaxion or “Nnaturalness”?

Unfortunately, it has been very hard to find a conventional natural explanation of the dark energy and hierarchy problems. Reluctantly, I think we have to take seriously the anthropic alternative, according to which we live in a universe that has a “landscape” of possibilities, which are realised in different regions of space or maybe in different portions of the quantum mechanical wavefunction, and we inevitably live where we can. I have no idea if this interpretation is correct, but it provides a yardstick against which to measure other proposals. Twenty years ago, I used to find the anthropic interpretation of the universe upsetting, in part because of the difficulty it might present in understanding physics. Over the years I have mellowed. I suppose I reluctantly came to accept that the universe was not created for our convenience in understanding it.

Which experimental paths should physicists prioritise at this time?

It is extremely important to probe the twin mysteries of the cosmic acceleration and the smallness of the electroweak scale as thoroughly as possible, in order to determine whether we are interpreting the facts correctly and possibly to discover a new layer of structure. In the case of the cosmic acceleration, this means measuring as precisely as we can the parameter w (the ratio of pressure and energy), which equals –1 if the acceleration of the expansion is governed by a simple cosmological constant, but would be greater than –1 in most alternative models. In particle physics, we would like to probe for further structure as precisely as we can both indirectly, for example with precision studies of the Higgs particle, and hopefully directly by going to higher energies than are available at the LHC.

What might be lurking at energies beyond the LHC?

If it is eventually possible to go to higher energies, I can imagine several possible outcomes. It might become rather clear that the traditional idea of naturalness is not the whole story and that we have on our hands a “bare” Higgs particle, without a mechanism that would account for its mass scale. Alternatively, we might find out that the apparent failure of naturalness was an illusion and that additional particles and forces that provide an explanation for the electroweak scale are just beyond our current experimental reach. There is also an intermediate possibility that I find fascinating. This is that the electroweak scale is not natural in the customary sense, but additional particles and forces that would help us understand what is going on exist at an energy not too much above LHC energies. A fascinating theory of this type is the “split supersymmetry” that has been proposed by Nima Arkani-Hamed and others.  

It seems that the ideas of naturalness that we grew up with are now failing us 

There is an obvious catch, however. It is easy enough to say “such-and-such will happen at an energy not too much above LHC energies”. But for practical purposes, it makes a world of difference whether this means three times LHC energies, six times LHC energies, 25 times LHC energies, or more. In theories such as split supersymmetry, the clues that we have are not sufficient to enable a real answer. A dream would be to get a concrete clue from experiment about what is the energy scale for new physics beyond the Higgs particle. 

Could the flavour anomalies be one such clue?

There are multiple places that new clues could come from. The possible anomalies in b physics observed at CERN are extremely significant if they hold up. The search for an electric dipole moment of the electron or neutron is also very important and could possibly give a signal of something new happening at energies close to those that we have already probed. Another possibility is the slight reported discrepancy between the magnetic moment of the muon and the SM prediction. Here, I think it is very important to improve the lattice gauge theory estimates of the hadronic contribution to the muon moment, in order to clarify whether the fantastically precise measurements that are now available are really in disagreement with the SM. Of course, there are multiple other places that experiment could pinpoint the next energy scale at which the SM needs to be revised, ranging from precision studies of the Higgs particle to searches for muon decay modes that are absent in the SM. 

Which current developments in theory are you most excited about?

The new ideas about gravity and quantum mechanics that go under the rough title “It from qubit” are really exciting. Black-hole thermodynamics was discovered in the 1970s through the work of Jacob Bekenstein, Stephen Hawking and others. These results were fascinating, but for several decades it seemed to me – rightly or wrongly – that this field was evolving only slowly compared to other areas of theoretical physics. In the past decade or so, that is clearly no longer the case. In large part the change has come from thinking about “entropy” as microscopic or fine-grained von Neumann entropy, as opposed to the thermodynamic entropy that Bekenstein and others considered. A formulation in terms of fine-grained entropy has made possible new statements and more general statements which reduce to the traditional ones when thermodynamics is valid. All this has been accelerated by the insights that come from holographic duality between gravity and gauge theory.

How different does the field look today compared to when you entered it?

It is really hard to exaggerate how the field has changed. I started graduate school at Princeton in September 1973. Asymptotic freedom of non-abelian gauge theory had just been discovered a few months earlier by David Gross, Frank Wilczek and David Politzer. This was the last key ingredient that was needed to make possible the SM as we know it today. Since then there has been a revolution in our experimental knowledge of the SM. Several key ingredients (new quarks, leptons and the Higgs particle) were unknown in 1973. Jets in hadronic processes were still in the future, even as an idea, let alone an experimental reality, and almost nothing was known about CP violation or about scaling violations in high-energy hadronic processes, just to mention two areas that developed later in an impressive way.

6D Calabi–Yau manifolds

Not only is our experimental knowledge of the SM so much richer than it was in 1973, but the same is really true of our theoretical understanding as well. Quantum field theory is understood much better today than was the case in 1973. There really is no comparison.

Perhaps equally dramatic has been the change in our understanding of cosmology. In 1973, the state of cosmological knowledge could be summarised fairly well in a couple of numbers – notably the cosmic-microwave temperature and the Hubble constant – and of these only the first was measured with any reasonable precision. In the intervening years, cosmology became a precision science and also a much more ambitious science, as cosmologists have learned to grapple with the complex processes of the formation of structure in the universe. In the inhomogeneities of the microwave background, we have observed what appear to be the seeds of structure formation. And the theory of cosmic inflation, which developed starting around 1980, seems to be a real advance over the framework in which cosmology was understood in 1973, though it is certainly still incomplete.

Exploring the string-theory framework has led to a remarkable series of discoveries

Finally, 50 years ago the gulf between particle physics and gravity seemed unbridgeably wide. There is still a wide gap today. But the emergence in string theory of a sensible framework to study gravity unified with particle forces has changed the picture. This framework has turned out to be very powerful, even if one is not motivated by gravity and one is just searching for new understanding of ordinary quantum field theory. We do not understand today in detail how to unify the forces and obtain the particles and interactions that we see in the real world. But we certainly do have a general idea of how it can work, and this is quite a change from where we were in 1973. Exploring the string-theory framework has led to a remarkable series of discoveries. This well has not run dry, and that is one of the reasons that I am optimistic about the future.

Which of the numerous contributions you have made to particle and mathematical physics are you most proud of?

I am most satisfied with the work that I did in 1994 with Nathan Seiberg on electric-magnetic duality in quantum field theory, and also the work that I did the following year in helping to develop an analogous picture for string theory.

Who knows, maybe I will have the good fortune to do something equally significant again in the future.


The post Witten reflects appeared first on CERN Courier.

]]>
Opinion The veteran theorist explains how the LHC and other recent results have impacted his view on nature.  https://cerncourier.com/wp-content/uploads/2021/12/CCJanFeb22_Interview-main_feature.jpg
Scrutinising the Higgs sector https://cerncourier.com/a/scrutinising-the-higgs-sector/ Fri, 17 Dec 2021 15:57:43 +0000 https://preview-courier.web.cern.ch/?p=96441 The 11th Higgs Hunting workshop saw more than 300 participants discuss the most recent results in the Higgs sector.

The post Scrutinising the Higgs sector appeared first on CERN Courier.

]]>
The 11th Higgs Hunting workshop took place remotely between 20 and 22 September 2021, with more than 300 registered participants engaging in lively discussions about the most recent results in the Higgs sector. ATLAS and CMS presented results based on the full LHC Run-2 dataset (up to 140 fb–1) recorded at 13 TeV. While all results remain compatible with Standard Model expectations, the precision of the measurements benefited from statistical uncertainties more than three times smaller than in previous LHC results at 7 and 8 TeV. This also brought into sharp relief the role of systematic uncertainties, which in some cases are becoming dominant.

The status of theory improvements and phenomenological interpretations, such as those from effective field theory, were also presented. Highlights included the Higgs pair-production process, which is particularly challenging at the LHC due to its low rate. ATLAS and CMS showed greatly improved sensitivity in various final states, thanks to improvements in analysis techniques. Also shown were results on the scattering of weak vector bosons, a process that is strongly related to the Higgs sector, highlighting large improvements from both the larger datasets and the higher collision energy available in Run 2.

Several searches for phenomena beyond the Standard Model – in particular for additional Higgs bosons – were presented. No significant excesses have yet been found.

The historical talk “The LHC timeline: a personal recollection (1980–2012)” was given by Luciano Maiani, former CERN Director-General, and concluding talks were given by Laura Reina (Florida) and Paolo Meridiani (Rome). A further highlight was the theory talk from Nathaniel Craig, who discussed the progress being made in addressing six open questions. Does the Higgs boson have a size? Does it interact with itself? Does it mediate a Yukawa force? Does it fulfil the naturalness strategy? Does it preserve causality? And does it realise electroweak symmetry?

The next Higgs Hunting workshop will be held in Orsay and Paris from 12 to 14 September 2022.

The post Scrutinising the Higgs sector appeared first on CERN Courier.

]]>
Meeting report The 11th Higgs Hunting workshop saw more than 300 participants discuss the most recent results in the Higgs sector. https://cerncourier.com/wp-content/uploads/2021/12/Higgs-hunting-featured.jpg
The quantum frontier: cold atoms in space https://cerncourier.com/a/the-quantum-frontier-cold-atoms-in-space/ Thu, 16 Dec 2021 11:22:35 +0000 https://preview-courier.web.cern.ch/?p=96425 September workshop targeted a roadmap for extraterrestrial cold-atom experiments to probe the foundations of physics.

The post The quantum frontier: cold atoms in space appeared first on CERN Courier.

]]>
The quantum frontier

Cold atoms offer exciting prospects for high-precision measurements based on emerging quantum technologies. Terrestrial cold-atom experiments are already widespread, exploring both fundamental phenomena such as quantum phase transitions and applications such as ultra-precise timekeeping. The final quantum frontier is to deploy such systems in space, where the lack of environmental disturbances enables high levels of precision.

This was the subject of a workshop supported by the CERN Quantum Technology Initiative, which attracted more than 300 participants online from 23 to 24 September. Following a 2019 workshop triggered by the European Space Agency’s (ESA) Voyage 2050 call for ideas for future experiments in space, the main goal of this workshop was to begin drafting a roadmap for cold atoms in space.

The workshop opened with a presentation by Mike Cruise (University of Birmingham) on ESA’s vision for cold atom R&D for space: considerable efforts will be required to achieve the technical readiness level needed for space missions, but they hold great promise for both fundamental science and practical applications. Several of the cold-atom teams that contributed white papers to the Voyage 2050 call also presented their proposals.

Atomic clocks

Next came a session on atomic clocks, covering their potential for refining the definitions of SI units such as the second, for distributing this new time standard worldwide, and for applications of atomic clocks to geodesy. Next-generation space-based atomic-clock projects for these and other applications are ongoing in China, the US (Deep Space Atomic Clock) and Europe.

This was followed by a session on Earth observation, featuring the prospects for improved gravimetry using atom interferometry and talks on the programmes of ESA and the European Union. Quantum space gravimetry could contribute to studies of climate change, for example, by measuring the densities of water and ice very accurately and with improved geographical precision.

Cold-atom experiments in space offer great opportunities to probe the foundations of physics

For fundamental physics, prospects for space-borne cold-atom experiments include studies of wavefunction collapse and Bell correlations in quantum mechanics, probes of the equivalence principle by experiments like STE-QUEST, and searches for dark matter.

The proposed AEDGE atom interferometer will search for ultralight dark matter and gravitational waves in the deci-Hertz range, where LIGO/Virgo/KAGRA and the future LISA space observatory are relatively insensitive, and will probe models of dark energy. AEDGE gravitational-wave measurements could be sensitive to first-order phase transitions in the early universe, as occur in many extensions of the Standard Model, as well as to cosmic strings, which could be relics of symmetries broken at higher energies than those accessible to colliders.

These examples show that cold-atom experiments in space offer great opportunities to probe the foundations of physics as well as make frontier measurements in astrophysics and cosmology.

Several pathfinder experiments are underway. These include projects for terrestrial atom interferometers on scales from 10 m to 1 km, such as the MAGIS project at Fermilab and the AION project in the UK, which both use strontium, and the MIGA project in France and proposed European infrastructure ELGAR, which both use rubidium. Meanwhile, a future stage of AION could be situated in an access shaft at CERN – a possibility that is currently under study, and which could help pave the way towards AEDGE. Pioneering experiments using Bose–Einstein condensates on research rockets and the International Space Station were also presented.

A strong feature of the workshop was a series of breakout sessions to enable discussions among members of the various participating communities (atomic clocks, Earth observation and fundamental science), as well as a group considering general perspectives, which were summarised in a final session. Reports from the breakout sessions will be integrated into a draft roadmap for the development and deployment of cold atoms in space. This will be set out in a white paper to appear by the end of the year and presented to ESA and other European space and funding agencies.

Space readiness

Achieving space readiness for cold-atom experiments will require significant research and development. Nevertheless, the scale of participation in the workshop and the high level of engagement testify to the enthusiasm in the cold-atom community and prospective user communities for deploying cold atoms in space. The readiness of the different communities to collaborate in drafting a joint roadmap for the pursuit of common technological and scientific goals was striking.

The post The quantum frontier: cold atoms in space appeared first on CERN Courier.

]]>
Meeting report September workshop targeted a roadmap for extraterrestrial cold-atom experiments to probe the foundations of physics. https://cerncourier.com/wp-content/uploads/2021/12/blue-galaxy-nasa-space-wallpaper-preview.jpg
Muon detector probes long-lived particles https://cerncourier.com/a/muon-detector-probes-long-lived-particles/ Fri, 05 Nov 2021 10:33:41 +0000 https://preview-courier.web.cern.ch/?p=96256 The CMS collaboration has set constraints on a simplified model mediated by the Higgs boson.

The post Muon detector probes long-lived particles appeared first on CERN Courier.

]]>
New ways to detect long-lived particles (LLPs) are opening up avenues for searching for physics beyond the Standard Model (SM). LLPs could provide evidence for a hidden dark sector of particles that includes dark-matter candidates and could be studied via “portal interactions” with the visible universe. By employing the CMS experiment’s muon spectrometer in a novel way, the collaboration has recently deployed a powerful new technique for detecting LLPs that decay between 6 and 10 metres from the primary interaction point.

An LLP decaying in the endcap muon spectrometer volume should produce a particle shower when its decay products interact with the return yoke of the CMS solenoid. The secondary particles produced by the shower would traverse the gaseous regions of the cathode-strip chamber (CSC) detector and produce a large multiplicity of signals on the wire anodes and strip cathodes. Localised hits are reconstructed by combining these signals using a density-based clustering algorithm. This is the first time the CSC detectors have been used as a sampling calorimeter to try to detect and identify LLP decays. 

Figure 1

Searching for CSC clusters with a sufficiently large number of hits suppresses background processes while maintaining a high efficiency for detecting potential LLP decays. The large amount of steel in the CMS return yoke nearly eliminates “punch-through” hadrons that are not fully stopped by the calorimeter, potentially mimicking the signature of an LLP. The largest remaining source of background is long-lived SM particles such as the neutral kaon, KL. These particles are copiously produced in LHC collisions and, on rare occasions, traverse the material without being stopped. Kaons are predominantly produced with much lower energies than the signal LLPs and therefore result in clusters with a smaller number of hits. Requiring clusters with more than 130 CSC hits suppresses these dominant background events to a negligible level (see figure 1).
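The flavour of this clustering-plus-multiplicity selection can be conveyed with a short toy sketch (not the CMS reconstruction code: the hit coordinates, event composition and DBSCAN parameters below are invented for illustration; only the >130-hit threshold comes from the analysis):

```python
# Toy sketch of a density-based clustering selection for muon-detector
# hits, in the spirit of the CMS strategy described above.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(seed=1)

# Simulated hit positions (x, y in cm) in one endcap station: a dense
# "shower" of 200 hits from an LLP decay plus 50 scattered noise hits.
shower = rng.normal(loc=[120.0, 80.0], scale=15.0, size=(200, 2))
noise = rng.uniform(low=0.0, high=600.0, size=(50, 2))
hits = np.vstack([shower, noise])

# Group hits that lie close together in space (eps and min_samples
# are illustrative values, not those used by CMS).
labels = DBSCAN(eps=25.0, min_samples=10).fit_predict(hits)

# Apply the large-multiplicity cut; label -1 marks unclustered hits.
for cluster_id in sorted(set(labels) - {-1}):
    n_hits = int(np.sum(labels == cluster_id))
    if n_hits > 130:
        print(f"cluster {cluster_id}: {n_hits} hits -> LLP candidate")
```

Density-based clustering suits this signature because a genuine LLP decay yields a compact, high-multiplicity blob of hits, whereas residual background hits are sparse and fail the multiplicity cut.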

This search improves on the previous best results by more than a factor of six

Using the full Run-2 dataset, the CMS collaboration detected no excess of particle-shower events above the expected backgrounds, setting constraints on a benchmark simplified model of scalar LLP production mediated by the Higgs boson (a so-called Higgs-portal model). This search improves on the previous best results by more than a factor of six (two) for an LLP mass of 7 GeV (≥ 15 GeV) for a proper decay length (cτ) of the scalar larger than 100 m. It is the first to be sensitive to LLP decays with cτ up to 1000 m and masses between 40 and 55 GeV at branching ratios of the Higgs to a pair of LLPs below 20%.

This novel approach to identifying showers in muon detectors opens up an exciting new programme of searches for LLPs in a wide variety of theoretical models. Potential frameworks range from Higgs-portal models to other portals to a dark sector, including neutrinos, axions and dark photons. The ongoing development of a dedicated Level-1 and High-Level Trigger focusing on particle showers detected in the CMS muon spectrometer promises an order of magnitude improvement in the discovery sensitivity for LLPs in the forthcoming run of the LHC.

The post Muon detector probes long-lived particles appeared first on CERN Courier.

]]>
News The CMS collaboration has set constraints on a simplified model mediated by the Higgs boson. https://cerncourier.com/wp-content/uploads/2021/11/LLP-CSC-CMS.png
Protons back with a splash https://cerncourier.com/a/protons-back-with-a-splash/ Tue, 19 Oct 2021 15:42:00 +0000 https://preview-courier.web.cern.ch/?p=95787 Proton beams are once again circulating in the LHC in preparation for Run 3.

The post Protons back with a splash appeared first on CERN Courier.

]]>
Upstream splash muons

After a three-year hiatus, protons are once again circulating in the LHC, as physicists make final preparations for the start of Run 3. At the beginning of October, a beam of 450 GeV protons made its way from the Super Proton Synchrotron (SPS) down the TI2 beamline towards Point 2, where it struck a dump block and sprayed secondary particles into the ALICE experiment (see image). Beam was also successfully sent down the TI8 transfer line, which meets the LHC near to where the LHCb experiment is located.

Today, counter-rotating protons were finally injected into the LHC, marking the latest milestone in the reawakening of CERN’s accelerator complex, which closed down at the end of 2018 for Long Shutdown 2 (LS2). Two weeks of beam tests are planned, along with first low-energy collisions in the experiments, before the machine is shut down for a 3–4 month maintenance period. Meanwhile, the experiments are continuing to ready themselves for more luminous Run-3 operations.

Final countdown

Beams have been back at CERN since the spring. After a comprehensive two-year overhaul, the Proton Synchrotron (PS) accelerated its first beams on 4 March and has recently started supplying experiments in the newly refurbished East Area and at the new ELENA ring at the Antimatter Factory. Connecting the brand-new Linac4 to the upgraded PS Booster (which also serves ISOLDE) was a major step in the upgrade programme. Together, they now provide the PS with a 2 GeV beam, up 0.6 GeV from before, for which the 60-year-old machine had to be fitted out with refurbished magnets, new beam-dump systems, instrumentation, and upgraded RF and cooling systems.

When the LHC comes back online for physics in May 2022, it will not only be more luminous, but it will also operate at higher energies

LS2 saw an even greater overhaul of the SPS, including the addition of a new beam-dump system, a refurbished RF system that now includes the use of solid-state amplifier technology, and a major overhaul of the control system. Combined with the LHC Injectors Upgrade project (the main focus of LS2), the accelerator complex is now primed for more intense beams, in particular for the High-Luminosity LHC (HL-LHC) later this decade.

The first bunch was injected from the PS into the SPS on 12 April, building up to “LHC-like” beams of up to 288 bunches a few weeks later. The SPS delivers beams to all of CERN’s North Area experiments, which include a new facility, NA65, approved in 2019 to investigate fast-neutron production for better understanding of the background in underground neutrino experiments. It also drives the AWAKE experiment, which performs R&D for plasma-wakefield acceleration and entered its second run in July with the goal of demonstrating acceleration gradients of 1 GV/m while preserving the beam quality. The restart of North Area experiments will also see pilot runs for new experiments such as AMBER (the successor of COMPASS) and NA64μ (NA64 running with muon beams).

Brighter and more powerful

When the LHC comes back online for physics in May 2022, it will not only be more luminous (with up to 1.8 × 10¹¹ protons per bunch compared to 1.3–1.4 × 10¹¹ during Run 2), but it will also operate at higher energies. This year, the majority of the LHC’s 1232 dipole magnets were trained to carry 6.8 TeV proton beams, compared to 6.5 TeV before, which involves operating with a current of 11.5 kA (with a margin of 0.1 kA). Following the beam tests this autumn, magnet training for the final two of the machine’s eight sectors will take place during a scheduled maintenance period from 1 November to 21 February. After that, the LHC tunnel and experiment areas will be closed for a two-week-long “cold checkout”, with beam commissioning commencing on 7 March and first stable beams expected during the first week of May.

Meanwhile, the LHC experiments are continuing to ready their detectors for the bumper Run-3 data harvest ahead: at least 160 fb–1 (as for Run 2) to ATLAS and CMS; 25 fb–1 to LHCb (compared to 6 fb–1 in Run 2); and 7.5 nb–1 of Pb–Pb collisions to ALICE (compared to 1.3 nb–1 in Run 2). The higher integrated luminosities expected for ALICE and LHCb are largely possible thanks to the ability of their upgraded detectors to handle the Run-3 data rate, with LHCb teams currently working around the clock to ensure their brand-new sub-detectors are in place. New forward-experiments, FASER, FASERν and SND@LHC, which aim to make the first observations of collider neutrinos and open new searches for feebly interacting particles, are also gearing up to take first data when the LHC comes back to life.

“The injector performance reached in 2021 is just the start of squeezing out the potential they have been given during LS2, paving the way for the HL-LHC, but also benefiting the LHC’s performance during Run 3,” says Rende Steerenberg, head of the operations group. “Having beam back in the entire complex and routinely providing the experimental facilities with physics is testimony to the excellent and hard work of many people at CERN.”

The post Protons back with a splash appeared first on CERN Courier.

]]>
News Proton beams are once again circulating in the LHC in preparation for Run 3. https://cerncourier.com/wp-content/uploads/2021/10/CCNovDec21_NA_protons_feature.jpg
LHCb tests lepton universality in new channels https://cerncourier.com/a/lhcb-tests-lepton-universality-in-new-channels/ Tue, 19 Oct 2021 11:59:57 +0000 https://preview-courier.web.cern.ch/?p=95763 New measurements of the rates of rare B-meson decays to electrons and muons open a further avenue through which to explore the flavour anomalies.

The post LHCb tests lepton universality in new channels appeared first on CERN Courier.

]]>
Measurements of the ratios of muon to electron decays

At a seminar at CERN today, the LHCb collaboration presented new tests of lepton universality in rare B-meson decays. While limited in statistical sensitivity, the measurements fit an intriguing pattern of recent results in the flavour sector, says the collaboration.

Since 2013, several measurements have hinted at deviations from lepton-flavour universality (LFU), a tenet of the Standard Model (SM) which treats charged leptons, ℓ, as identical apart from their masses. The measurements concern decay processes involving the transition between a bottom and a strange quark, b→sℓ+ℓ–, which are strongly suppressed by the SM because they involve quantum corrections at the one-loop level (leading to branching fractions of one part in 10⁶ or less). A powerful way to probe LFU is therefore to measure the ratio of B-meson decays to muons and electrons, for which the SM prediction, close to unity, is theoretically very clean.

In March this year, an LHCb measurement of RK = BR(B+→K+μ+μ–)/BR(B+→K+e+e–) based on the full LHC Run 1 and 2 dataset showed a 3.1σ difference from the SM prediction. This followed departures at the level of 2.2–2.5σ in the ratio RK*0 (which probes B0→K*0ℓ+ℓ– decays) reported by LHCb in 2017. The collaboration has also seen slight deficits in the ratio RpK, and departures from theory in measurements of the angular distribution of final-state particles and of branching fractions in neutral B-meson decays. None of the results is individually significant enough to constitute evidence of new physics. But taken together, say theorists, they point to a coherent pattern.

We are seeing a similar deficit of rare muon decays to rare electron decays that we have seen in other LFU tests

Harry Cliff

The latest LHCb analysis clocked the ratio of muons to electrons in the isospin-partner B-decays: B0→KS0ℓ+ℓ– and B+→K*+ℓ+ℓ–. As well as being a first at the LHC, it’s the first single-experiment observation of these decays, and the most precise measurement yet of their branching ratios. Being difficult to reconstruct due to the presence of a long-lived KS0 in the final state, however, the sensitivity of the results is lower than for previous “RK” analyses. LHCb found R(KS0) = 0.66 +0.20/–0.15 (stat.) +0.02/–0.04 (syst.) and R(K*+) = 0.70 +0.18/–0.13 (stat.) +0.03/–0.04 (syst.), which are consistent with the SM at the level of 1.5 and 1.4σ, respectively.
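As a rough cross-check of the quoted compatibility, one can form a naive Gaussian pull of R(KS0) against the SM expectation of unity, adding the statistical and systematic uncertainties in quadrature. This back-of-the-envelope estimate ignores the asymmetric, non-Gaussian likelihood used in the published analysis, which is why it lands slightly above the quoted 1.5σ:

```python
# Naive pull of R(KS0) with respect to the SM expectation of ~1, adding
# errors in quadrature (indicative only; the published 1.5 sigma comes
# from the full asymmetric likelihood).
r_measured = 0.66
stat_up, syst_up = 0.20, 0.02  # upper errors, relevant for a deficit
sigma = (stat_up**2 + syst_up**2) ** 0.5
print(f"naive pull: {(1.0 - r_measured) / sigma:.1f} sigma")  # ~1.7 sigma
```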

“What is interesting is that we are seeing a similar deficit of rare muon decays to rare electron decays that we have seen in other LFU tests,” said Harry Cliff of the University of Cambridge, who presented the result on behalf of LHCb (in parallel with a presentation at Rencontres de Blois by Cambridge PhD student John Smeaton). “With many other LFU tests in progress using Run 1 and 2 data, there will be more to come on this puzzle soon. Then we have Run 3, where we expect to really zoom in on the measurements and obtain a detailed understanding.”

The experimental and theoretical status of the flavour anomalies in b→sℓ+ℓ– and semi-leptonic B-decays will be the focus of the Flavour Anomaly Workshop at CERN on Wednesday 20 October, at which ATLAS and CMS activities will also be discussed, along with perspectives from theorists.

The post LHCb tests lepton universality in new channels appeared first on CERN Courier.

]]>
News New measurements of the rates of rare B-meson decays to electrons and muons open a further avenue through which to explore the flavour anomalies. https://cerncourier.com/wp-content/uploads/2021/04/CCMayJun21_FN_frontis.jpg
Breaking records at EPS-HEP https://cerncourier.com/a/breaking-records-at-eps-hep/ Tue, 05 Oct 2021 14:27:14 +0000 https://preview-courier.web.cern.ch/?p=95346 EPS-HEP 2021 saw breathtaking results from LHC Run 2, writes Christophe Grojean.

The post Breaking records at EPS-HEP appeared first on CERN Courier.

]]>

In this year’s unusual Olympic summer, high-energy physicists pushed back the frontiers of knowledge and broke many records. The first one is surely the number of registrants to the EPS-HEP conference, hosted online from 26 to 30 July by the University of Hamburg and DESY: nearly 2000 participants scrutinised more than 600 talks and 280 posters. After 18 months of the COVID pandemic, the community showed a strong desire to meet and discuss physics with international colleagues. 

200 trillion b-quarks, 40 billion electroweak bosons, 300 million top quarks and 10 million Higgs bosons

The conference offered the opportunity to hear about analyses using the full LHC Run-2 data set, which is the richest hadron-collision data sample ever recorded. The results are breathtaking. As my CERN colleague Michelangelo Mangano explained recently to summer students, “The LHC works and is more powerful than expected, the experiments work and are more precise than expected, and the Standard Model works beautifully and is more reliable than expected.” About 3000 papers have been published by the LHC collaborations in the past decade. They have established the LHC as a truly multi-messenger endeavour, not so much because of the multitude of elementary particles produced – 200 trillion b-quarks, 40 billion electroweak bosons, 300 million top quarks and 10 million Higgs bosons – but because of the diversity of scientifically independent experiments that historically would have required different detectors and facilities, built and operated by different communities. “Data first” should always remain the leitmotif of the natural sciences. 

Paula Alvarez Cartelle (Cambridge) reminded us that the LHC has revealed new states of matter, with LHCb confirming that four or even five quarks can assemble themselves into new long-lived bound states, stabilised by the presence of two charm quarks. For theorists, these new quark-molecules provide valuable input data to tune their lattice simulations and to refine their understanding of the non-perturbative dynamics of strong interactions.

Theoretical tours de force

While Run 1 was a time for inclusive measurements, a multitude of differential measurements were performed during Run 2. Paolo Azzurri (INFN Pisa) reviewed the transverse momentum distribution of the jets produced in association with electroweak gauge bosons. These offer a way to test quantum chromodynamics and electroweak predictions at the highest achievable precision through higher-order computations, resummation and matching to parton showers. The work is fuelled by remarkable theoretical tours de force reported by Jonas Lindert (Sussex) and Lorenzo Tancredi (Oxford), which build on advanced mathematical techniques, including inspiring new mathematical developments in algebraic geometry and finite-field arithmetic. We experienced a historic moment: the LHC definitively became a precision machine, achieving measurements reaching and even surpassing LEP’s precision. This new situation also induced a shift towards precision measurements, model-independent interpretations and Standard Model (SM) compatibility checks, and away from model-dependent searches for new physics. Effective-field-theory analyses are therefore gaining popularity, explained Veronica Sanz (Valencia and Sussex).

We know for certain that the SM is not the ultimate theory of nature. How and when the first cracks will be revealed is the big question that motivates future collider design studies. The enduring and compelling “B anomalies” reported by LHCb could well be the revolutionary surprise that challenges our current understanding of the structure of matter. The ratios of the decay widths of B mesons, either through charged or neutral currents, b→cℓν and b→sℓ+ℓ–, could finally reveal that the electron, muon and tau lepton differ by more than just their masses.

The statistical significance of the lepton flavour anomalies is growing, reported Franz Muheim (Edinburgh and CERN), creating “cautious” excitement and stimulating the creativity of theorists like Ana Teixeira (Clermont-Ferrand), who builds new physics models with leptoquarks and heavy vectors with different couplings to the three families of leptons, to accommodate the apparent lepton-flavour-universality violations. Belle II should soon bring new additional input to the debate, said Carsten Niebuhr (DESY).

Long-awaited results

The other excitement of the year came from the long-awaited results from the muon g-2 experiment at Fermilab, presented by Alex Keshavarzi (Manchester). The spin-precession frequency of a sample of 10 billion muons was measured with a precision of a few hundred parts per billion, confirming the deviation from the SM prediction observed nearly 20 years ago by the E821 experiment at Brookhaven. With the current statistics, the deviation now amounts to 4.2σ. With a factor-of-20 increase in the dataset foreseen in the next run, the measurement will soon become systematics limited. Gilberto Colangelo (Bern) also discussed new and improved lattice computations of the hadronic vacuum polarisation, significantly reducing the discrepancy between the theoretical prediction and the experimental measurement. The jury is still out – and the final word might come from the g-2/EDM experiment at J-PARC.
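For concreteness, the 4.2σ figure follows from a straightforward combination of the 2021 experimental world average with the 2020 Theory Initiative prediction, assuming uncorrelated Gaussian uncertainties (a back-of-the-envelope sketch using the published central values, in units of 10⁻¹¹):

```python
# Significance of the muon g-2 anomaly after the 2021 Fermilab result,
# assuming uncorrelated Gaussian uncertainties (all values in 1e-11).
a_mu_exp, err_exp = 116_592_061, 41  # BNL + Fermilab average (2021)
a_mu_sm, err_sm = 116_591_810, 43    # Theory Initiative prediction (2020)

delta = a_mu_exp - a_mu_sm
sigma = (err_exp**2 + err_sm**2) ** 0.5
print(f"delta a_mu = {delta}e-11 -> {delta / sigma:.1f} sigma")  # 4.2 sigma
```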

Accelerator-based experiments might not be the place to prove the SM wrong. Astrophysical and cosmological observations have already taught us that SM matter only constitutes around 5% of the stuff that the universe is made of. The traditional idea that the gap in the energy budget of the universe is filled by new TeV-scale particles that stabilise the electroweak scale under radiative corrections is fading away. And a huge range of possible dark-matter scales opens up a rich and reinvigorated experimental programme that can profit from original techniques exploiting electron and nuclear recoils caused by the scattering of dark-matter particles. A front-runner in the new dark-matter landscape is the QCD axion originally introduced to explain why strong interactions do not distinguish matter from antimatter. Babette Döbrich (CERN) discussed the challenges inherent in capturing an axion, and described the many new experiments around the globe designed to overcome them.

Progress could also come directly from theory

Progress could also come directly from theory. Juan Maldacena (IAS Princeton) recalled the remarkable breakthroughs on the black-hole information problem. The Higgs discovery in 2012 established the non-trivial vacuum structure of space–time. We are now on our way to understanding the quantum mechanics of this space–time.

Like at the Olympics, where breaking records requires a lot of work and effort by the athletes, their teams and society, the quest to understand nature relies on the enthusiasm and the determination of physicists and their funding agencies. What we have learnt so far has allowed us to formulate precise and profound questions. We now need to create opportunities to answer them and to move ahead.

One should not underestimate how quickly the landscape of physics can change, whether through confirmation of the B anomalies or the discovery of a dark-matter particle. Let’s see what will be awaiting us at the next EPS-HEP conference in 2023 in Hamburg – in person this time!

The post Breaking records at EPS-HEP appeared first on CERN Courier.

]]>
Meeting report EPS-HEP 2021 saw breathtaking results from LHC Run 2, writes Christophe Grojean. https://cerncourier.com/wp-content/uploads/2021/10/EPS.png
Learning to detect new top-quark interactions https://cerncourier.com/a/learning-to-detect-new-top-quark-interactions/ Mon, 04 Oct 2021 07:29:50 +0000 https://preview-courier.web.cern.ch/?p=93687 A new CMS analysis searches for anomalies in top-quark interactions with the Z boson using an effective-field-theory framework.

The post Learning to detect new top-quark interactions appeared first on CERN Courier.

]]>
Figure 1

Ever since its discovery in 1995 at the Tevatron, the top quark has been considered to be a highly effective probe of new physics. A key reason is that the last fundamental fermion predicted by the Standard Model (SM) has a remarkably high mass, just a sliver under the Higgs vacuum expectation value divided by the square root of two, implying a Yukawa coupling close to unity. This has far-reaching implications: the top quark impacts the electroweak sector significantly through loop corrections, and may couple preferentially to new massive states. But while the top quark may represent a window into new physics, we cannot know a priori whether new massive particles could ever be produced at the LHC, and direct searches have so far been inconclusive. Model-independent measurements carried out within the framework of effective field theory (EFT) are therefore becoming increasingly important as a means to make the most of the wealth of precision measurements at the LHC. This approach makes it possible to systematically correlate sparse deviations observed in different measurements, in order to pinpoint any anomalies in top-quark couplings that might arise from unknown massive particles.

The top quark impacts the electroweak sector significantly through loop corrections

A new CMS analysis searches for anomalies in top-quark interactions with the Z boson using an EFT framework. The cross-section measurements of the rare associated production of either one (tZ) or two (ttZ) top quarks with a Z boson were statistically limited until recently. These interactions are among the least constrained by the available data in the top-quark sector, despite being modified in numerous beyond-SM models, such as composite Higgs models and minimal supersymmetry. Using the full LHC Run-2 data set, this study targets high-purity final states with multiple electrons and muons. It sets some of the tightest constraints to date on five generic types of EFT interactions that could substantially modify the characteristics of associated top-Z production, while having negligible or no effect on background processes.

Machine learning

In contrast to the more usual reinterpretations of SM measurements that require assumptions on the nature of new physics, this analysis considers EFT effects on observables at the detector level and constrains them directly from the data using a strategy that combines observables specifically selected for their sensitivity to EFT. The key feature of this work is its heavy use of multivariate-analysis techniques based on machine learning, which improve its sensitivity to new interactions. First, to define regions enriched in the processes of interest, a multiclass neural network is trained to discriminate between different SM processes. Subsequently, several binary neural networks learn to separate events generated according to the SM from events that include EFT effects arising from one or more types of anomalous interactions. For the first time in an analysis using LHC data, these classifiers were trained on the full physical amplitudes, including the interference between SM and EFT components.

The binary classifiers are used to construct powerful discriminant variables out of high-dimensional input data. Their distributions are fitted to data to constrain up to five types of EFT couplings simultaneously. The widths of the corresponding confidence intervals are significantly reduced thanks to the combination of the available kinematic information that was specifically chosen to be sensitive to EFT in the top-quark sector. All results are consistent with the SM, which indicates either the absence of new effects in the targeted interactions or that the mass scale of new physics is too high to be probed with the current sensitivity. This result is an important step towards the more widespread use of machine learning to target EFT effects, to efficiently explore the enormous volume of LHC data more globally and comprehensively.
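A minimal sketch of the second step, i.e. training a binary classifier to separate SM-like from EFT-like events, is shown below. Everything in it is a toy: the two kinematic features and their distributions are invented for illustration, whereas the actual analysis trains on full physical amplitudes, including the SM–EFT interference:

```python
# Toy binary classifier separating "SM" from "SM + EFT" events,
# schematically mirroring the analysis strategy (all inputs invented).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(seed=7)
n = 10_000

# Two toy kinematic features, e.g. Z-boson pT (GeV) and a lepton
# angle; EFT effects typically harden the pT spectrum.
pt_sm = rng.exponential(scale=100.0, size=n)
pt_eft = rng.exponential(scale=140.0, size=n)       # harder tail
cos_sm = rng.uniform(-1.0, 1.0, size=n)
cos_eft = rng.beta(2.0, 1.5, size=n) * 2.0 - 1.0    # mildly sculpted

X = np.column_stack([np.concatenate([pt_sm, pt_eft]),
                     np.concatenate([cos_sm, cos_eft])])
y = np.concatenate([np.zeros(n), np.ones(n)])       # 0 = SM, 1 = EFT

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=300,
                    random_state=0).fit(X_train, y_train)

# In the real analysis, the classifier output is binned and fitted to
# data to constrain the EFT couplings; here we just quote accuracy.
print(f"toy separation accuracy: {clf.score(X_test, y_test):.2f}")
```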

The post Learning to detect new top-quark interactions appeared first on CERN Courier.

]]>
News A new CMS analysis searches for anomalies in top-quark interactions with the Z boson using an effective-field-theory framework. https://cerncourier.com/wp-content/uploads/2021/05/CMS-muon-191.jpg
Emergence https://cerncourier.com/a/emergence/ Thu, 02 Sep 2021 09:54:39 +0000 https://preview-courier.web.cern.ch/?p=93632 Erik Verlinde sizes up the Standard Model, gravity and intelligence as candidates for future explanation as emergent phenomena.

The post Emergence appeared first on CERN Courier.

]]>
A murmuration of starlings

Particle physics is at its heart a reductionistic endeavour that tries to reduce reality to its most basic building blocks. This view of nature is most evident in the search for a theory of everything – an idea that is nowadays more common in popularisations of physics than among physicists themselves. If discovered, all physical phenomena would follow from the application of its fundamental laws.

A complementary perspective to reductionism is that of emergence. Emergence says that new and different kinds of phenomena arise in large and complex systems, and that these phenomena may be impossible, or at least very hard, to derive from the laws that govern their basic constituents. It deals with properties of a macroscopic system that have no meaning at the level of its microscopic building blocks. Good examples are the wetness of water and the superconductivity of an alloy. These concepts don’t exist at the level of individual atoms or molecules, and are very difficult to derive from the microscopic laws. 

As physicists continue to search for cracks in the Standard Model (SM) and Einstein’s general theory of relativity, could these natural laws in fact be emergent from a deeper reality? And emergence is not limited to the world of the very small, but by its very nature skips across orders of magnitude in scale. It is even evident, often mesmerisingly so, at scales much larger than atoms or elementary particles, for example in the murmurations of a flock of birds – a phenomenon that is impossible to describe by following the motion of an individual bird. Another striking example may be intelligence. The mechanism by which artificial intelligence is beginning to emerge from the complexity of underlying computing codes shows similarities with emergent phenomena in physics. One can argue that intelligence, whether it occurs naturally, as in humans, or artificially, should also be viewed as an emergent phenomenon. 

Data compression

Renormalisable quantum field theory, the foundation of the SM, works extraordinarily well. The same is true of general relativity. How can our best theories of nature be so successful, while at the same time being merely emergent? Perhaps these theories are so successful precisely because they are emergent. 

As a warm up, let’s consider the laws of thermodynamics, which emerge from the microscopic motion of many molecules. These laws are not fundamental but are derived by statistical averaging – a huge data compression in which the individual motions of the microscopic particles are compressed into just a few macroscopic quantities such as temperature. As a result, the laws of thermodynamics are universal and independent of the details of the microscopic theory. This is true of all the most successful emergent theories; they describe universal macroscopic phenomena whose underlying microscopic descriptions may be very different. For instance, two physical systems that undergo a second-order phase transition, while being very different microscopically, often obey exactly the same scaling laws, and are at the critical point described by the same emergent theory. In other words, an emergent theory can often be derived from a large universality class of many underlying microscopic theories. 

Successful emergent theories describe universal macroscopic phenomena whose underlying microscopic descriptions may be very different

Entropy is a key concept here. Suppose that you try to store the microscopic data associated with the motion of some particles on a computer. If we need N bits to store all that information, we have 2N possible microscopic states. The entropy equals the logarithm of this number, and essentially counts the number of bits of information. Entropy is therefore a measure of the total amount of data that has been compressed. In deriving the laws of thermodynamics, you throw away a large amount of microscopic data, but you at least keep count of how much information has been removed in the data-compression procedure.
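In equation form (a standard identity, spelled out here for concreteness): if the microscopic data occupy N bits, then

\[ \Omega = 2^N, \qquad S = k_B \ln \Omega = N\, k_B \ln 2 , \]

so the entropy is simply proportional to the number of bits of information removed in the compression.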

Emergent quantum field theory

One of the great theoretical-physics paradigm shifts of the 20th century occurred when Kenneth Wilson explained the emergence of quantum field theory through the application of the renormalisation group. As with thermodynamics, renormalisation compresses microscopic data into a few relevant parameters – in this case, the fields and interactions of the emergent quantum field theory. Wilson demonstrated that quantum field theories appear naturally as an effective long-distance and low-energy description of systems whose microscopic definition is given in terms of a quantum system living on a discretised spacetime. As a concrete example, consider quantum spins on a lattice. Here, renormalisation amounts to replacing the lattice by a coarser lattice with fewer points, and redefining the spins to be the average of the original spins. One then rescales the coarser lattice so that the distance between lattice points takes the old value, and repeats this step many times. A key insight was that, for quantum statistical systems that are close to a phase transition, you can take a continuum limit in which the expectation values of the spins turn into the local quantum fields on the continuum spacetime.
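A toy version of one such block-spin step is easy to write down (a sketch only: majority-rule blocking of 2×2 blocks of ±1 spins with random tie-breaking; Wilson's analysis of course involves far more than this single ingredient):

```python
# One block-spin renormalisation step on a 2D lattice of +/-1 spins:
# average 2x2 blocks, then project back to +/-1 (majority rule).
import numpy as np

rng = np.random.default_rng(seed=3)
L = 64
spins = rng.choice([-1, 1], size=(L, L))

def block_spin(lattice):
    """Coarse-grain by averaging 2x2 blocks and re-binarising."""
    n = lattice.shape[0] // 2
    blocks = lattice.reshape(n, 2, n, 2).mean(axis=(1, 3))
    coarse = np.sign(blocks)
    ties = coarse == 0                 # blocks with zero average
    coarse[ties] = rng.choice([-1, 1], size=int(ties.sum()))
    return coarse.astype(int)

# Iterating the map generates the renormalisation-group flow; each
# step halves the linear size of the (rescaled) lattice.
for step in range(3):
    spins = block_spin(spins)
    print(f"step {step + 1}: lattice {spins.shape}, "
          f"mean spin {spins.mean():+.3f}")
```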

This procedure is analogous to the compression algorithms used in machine learning. Each renormalisation step creates a new layer, and the algorithm that is applied between two layers amounts to a form of data compression. The goal is similar: you only keep the information that is required to describe the long-distance and low-energy behaviour of the system in the most efficient way.

A neural network

So quantum field theory can be seen as an effective emergent description of one of a large universality class of many possible underlying microscopic theories. But what about the SM specifically, and its possible supersymmetric extensions? Gauge fields are central ingredients of the SM and its extensions. Could gauge symmetries and their associated forces emerge from a microscopic description in which there are no gauge fields? Similar questions can also be asked about the gravitational force. Could the curvature of spacetime be explained from an emergent perspective?

String theory seems to indicate that this is indeed possible, at least theoretically. While initially formulated in terms of vibrating strings moving in space and time, it became clear in the 1990s that string theory also contains many more extended objects, known as “branes”. By studying the interplay between branes and strings, an even more microscopic theoretical description was found in which the coordinates of space and time themselves start to dissolve: instead of being described by real numbers, our familiar (x, y, z) coordinates are replaced by non-commuting matrices. At low energies, these matrices begin to commute, and give rise to the normal spacetime with which we are familiar. In these theoretical models it was found that both gauge forces and gravitational forces appear at low energies, while not existing at the microscopic level.

While these models show that it is theoretically possible for gauge forces to emerge, there is at present no emergent theory of the SM. Such a theory seems to be well beyond us. Gravity, however, being universal, has been more amenable to emergence.

Emergent gravity

In the early 1970s, a group of physicists became interested in the question: what happens to the entropy of a thermodynamic system that is dropped into a black hole? The surprising conclusion was that black holes have a temperature and an entropy, and behave exactly like thermodynamic systems. In particular, they obey the first law of thermodynamics: when the mass of a black hole increases, its (Bekenstein–Hawking) entropy also increases.

The correspondence between the gravitational laws and the laws of thermodynamics does not only hold near black holes. You can artificially create a gravitational field by accelerating. For an observer who continues to accelerate, even empty space develops a horizon, from behind which light rays can never catch up with the observer. These horizons also carry a temperature and entropy, and obey the same thermodynamic laws as black-hole horizons.

It was shown by Stephen Hawking that the thermal radiation emitted from a black hole originates from pair creation near the black-hole horizon. The properties of the pair of particles, such as spin and charge, are undetermined due to quantum uncertainty, but if one particle has spin up (or positive charge), then the other particle must have spin down (or negative charge). This means that the particles are quantum entangled. Quantum entangled pairs can also be found in flat space by considering accelerated observers. 

Crucially, even the vacuum can be entangled. By separating spacetime into two parts, you can ask how much entanglement there is between the two sides. The answer to this was found in the last decade, through the work of many theorists, and turns out to be rather surprising. If you consider two regions of space that are separated by a two-dimensional surface, the amount of quantum entanglement between the two sides turns out to be precisely given by the Bekenstein–Hawking entropy formula: it is equal to a quarter of the area of the surface measured in Planck units. 
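Written out, this is the Bekenstein–Hawking formula with the constants restored (here ℓP is the Planck length):

\[ S = \frac{k_B c^3 A}{4 G \hbar} = k_B \frac{A}{4 \ell_P^2}, \qquad \ell_P = \sqrt{\frac{\hbar G}{c^3}} . \]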

Holographic renormalisation

The area of the event horizon

The AdS/CFT correspondence incorporates a principle called “holography”: the gravitational physics inside a region of space emerges from a microscopic description that, just like a hologram, lives on a space with one less dimension and thus can be viewed as living on the boundary of the spacetime region. The extra dimension of space emerges together with the gravitational force through a process called “holographic renormalisation”. One successively adds new layers of spacetime. Each layer is obtained from the previous layer through “coarse-graining”, in a similar way to both renormalisation in quantum field theory and data-compression algorithms in machine learning.

Unfortunately, our universe is not described by a negatively curved spacetime. It is much closer to a so-called de Sitter spacetime, which has a positive curvature. The main difference between de Sitter space and the negatively curved anti-de Sitter space is that de Sitter space does not have a boundary. Instead, it has a cosmological horizon whose size is determined by the rate of the Hubble expansion. One proposed explanation for this qualitative difference is that, unlike for negatively curved spacetimes, the microscopic quantum state of our universe is not unique, but secretly carries a lot of quantum information. The amount of this quantum information can once again be counted by an entropy: the Bekenstein–Hawking entropy associated with the cosmological horizon. 

This raises an interesting prospect: if the microscopic quantum data of our universe may be thought of as many entangled qubits, could our current theories of spacetime, particles and forces emerge via data compression? Space, for example, could emerge by forgetting the precise way in which all the individual qubits are entangled, but only preserving the information about the amount of quantum entanglement present in the microscopic quantum state. This compressed information would then be stored in the form of the areas of certain surfaces inside the emergent curved spacetime. 

In this description, gravity would follow for free, expressed in the curvature of this emergent spacetime. What is not immediately clear is why the curved spacetime would obey the Einstein equations. As Einstein showed, the amount of curvature in spacetime is determined by the amount of energy (or mass) that is present. It can be shown that his equations are precisely equivalent to an application of the first law of thermodynamics. The presence of mass or energy changes the amount of entanglement, and hence the area of the surfaces in spacetime. This change in area can be computed and precisely leads to the same spacetime curvature that follows from the Einstein equations. 
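Schematically (with kB = c = 1), the argument runs the Clausius relation together with the entropy–area law, applied to all local horizons, and recovers the Einstein equations:

\[ \delta Q = T\, \delta S, \qquad \delta S = \frac{\delta A}{4 G \hbar} \quad \Longrightarrow \quad R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} + \Lambda\, g_{\mu\nu} = 8 \pi G\, T_{\mu\nu} , \]

a compressed summary of the derivation described in the text rather than a full calculation.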

The idea that gravity emerges from quantum entanglement goes back to the 1990s, and was first proposed by Ted Jacobson. Not long afterwards, Juan Maldacena discovered that general relativity can be derived from an underlying microscopic quantum theory without a gravitational force. His description only works for infinite spacetimes with negative curvature called anti-de Sitter (or AdS) space, as opposed to the positive curvature we measure. The microscopic description then takes the form of a scale-invariant quantum field theory – a so-called conformal field theory (CFT) – that lives on the boundary of the AdS space (see “Holographic renormalisation” panel). It is in this context that the connection between vacuum entanglement and the Bekenstein–Hawking entropy, and the derivation of the Einstein equations from entanglement, are best understood. I have also contributed to these developments in a paper in 2010 that emphasised the role of entropy and information for the emergence of the gravitational force. Over the last decade a lot of progress has been made in our understanding of these connections, in particular the deep connection between gravity and quantum entanglement. Quantum information has taken centre stage in the most recent theoretical developments.

Emergent intelligence

But what about viewing the even more complex problem of human intelligence as an emergent phenomenon? Since scientific knowledge is condensed and stored in our current theories of nature, the process of theory formation can itself be viewed as a very efficient form of data compression: it only keeps the information needed to make predictions about reproducible events. Our theories provide us with a way to make predictions with the fewest possible number of free parameters. 

The same principles apply in machine learning. The way an artificial-intelligence machine is able to predict whether an image represents a dog or a cat is by compressing the microscopic data stored in individual pixels in the most efficient way. This decision cannot be made at the level of individual pixels. Only after the data has been compressed and reduced to its essence does it become clear what the picture represents. In this sense, the dog/cat-ness of a picture is an emergent property. This is even true for the way humans process the data collected by our senses. It seems easy to tell whether we are seeing or hearing a dog or a cat, but underneath, and hidden from our conscious mind, our brains perform a very complicated task that turns all the neural data that come from our eyes and ears into a signal that is compressed into a single outcome: it is a dog or a cat.

Emergence is often summarised with the slogan “the whole is more than the sum of its parts”

Can intelligence, whether artificial or human, be explained from a reductionist point of view? Or is it an emergent concept that only appears when we consider a complex system built out of many basic constituents? There are arguments in favour of both sides. As human beings, our brains are hard-wired to observe, learn, analyse and solve problems. To achieve these goals the brain takes the large amount of complex data received via our senses and reduces it to a very small set of information that is most relevant for our purposes. This capacity for efficient data compression may indeed be a good definition for intelligence, when it is linked to making decisions towards reaching a certain goal. Intelligence defined in this way is exhibited in humans, but can also be achieved artificially.

Artificially intelligent computers beat us at problem solving, pattern recognition and sometimes even in what appears to be “generating new ideas”. A striking example is DeepMind’s AlphaZero, whose chess rating far exceeds that of any human player. Just four hours after learning the rules of chess, AlphaZero was able to beat the strongest conventional “brute force” chess program by coming up with smarter ideas and showing a deeper understanding of the game. Top grandmasters use its ideas in their own games at the highest level. 

In its basic material design, an artificial-intelligence machine looks like an ordinary computer. On the other hand, it is practically impossible to explain all aspects of human intelligence by starting at the microscopic level of the neurons in our brain, let alone in terms of the elementary particles that make up those neurons. Furthermore, the intellectual capability of humans is closely connected to the sense of consciousness, which most scientists would agree does not allow for a simple reductionist explanation.

Emergence is often summarised with the slogan “the whole is more than the sum of its parts” – or as condensed-matter theorist Phil Anderson put it, “more is different”. It counters the reductionist point of view, reminding us that the laws that we think to be fundamental today may in fact emerge from a deeper underlying reality. While this deeper layer may remain inaccessible to experiment, it is an essential tool for theorists of the mind and the laws of physics alike.

The post Emergence appeared first on CERN Courier.

]]>
Feature Erik Verlinde sizes up the Standard Model, gravity and intelligence as candidates for future explanation as emergent phenomena. https://cerncourier.com/wp-content/uploads/2021/08/CCSepOct21_EMERGE_frontis.jpg
Surveyors eye up a future collider https://cerncourier.com/a/surveyors-eye-up-a-future-collider/ Thu, 02 Sep 2021 09:49:55 +0000 https://preview-courier.web.cern.ch/?p=94058 This summer, surveyors performed the first geodetic measurements for the proposed Future Circular Collider at CERN.

The post Surveyors eye up a future collider appeared first on CERN Courier.

]]>
Levelling measurements

CERN surveyors have performed the first geodetic measurements for a possible Future Circular Collider (FCC), a prerequisite for high-precision alignment of the accelerator’s components. The millimetre-precision measurements are one of the first activities undertaken by the FCC feasibility study, which was launched last year following the recommendation of the 2020 update of the European strategy for particle physics. During the next three years, the study will explore the technical and financial viability of a 100 km collider at CERN, for which the tunnel is a top priority. Geology, topography and surface infrastructure are the key constraints on the FCC tunnel’s position, around which civil engineers will design the optimal route, should the project be approved.

The FCC would cover an area about 10 times larger than the LHC, in which every geographical reference must be pinpointed with unprecedented precision. To provide a reference coordinate system, in May the CERN surveyors, in conjunction with ETH Zürich, the Federal Office of Topography Swisstopo, and the School of Engineering and Management Vaud, performed geodetic levelling measurements along an 8 km profile across the Swiss–French border south of Geneva.

Such measurements have two main purposes. The first is to determine a high-precision surface model, or “geoid”, to map the height above sea level in the FCC region. The second purpose is to improve the present reference system, whose measurements date back to the 1980s when the tunnel housing the LHC was built.

“The results will help to evaluate if an extrapolation of the current LHC geodetic reference systems and infrastructure is precise enough, or if a new design is needed over the whole FCC area,” says Hélène Mainaud Durand, group leader of CERN’s geodetic metrology group.

The FCC feasibility study, which involves more than 140 universities and research institutions from 34 countries, also comprises technological, environmental, engineering, political and economic considerations. It is due to be completed by the time the next strategy update gets under way in the middle of the decade. Should the outcome be positive, and the project receive the approval of CERN’s member states, civil-engineering works could start as early as the 2030s.

The post Surveyors eye up a future collider appeared first on CERN Courier.

]]>
News This summer, surveyors performed the first geodetic measurements for the proposed Future Circular Collider at CERN. https://cerncourier.com/wp-content/uploads/2021/08/CCSepOct21_NA_survey.jpg
Designing an AI physicist https://cerncourier.com/a/designing-an-ai-physicist/ Thu, 02 Sep 2021 09:46:08 +0000 https://preview-courier.web.cern.ch/?p=94092 Jesse Thaler argues that particle physicists must go beyond deep learning and design AI capable of deep thinking. 

The post Designing an AI physicist appeared first on CERN Courier.

]]>
Merging the insights from AI and physics intelligence

Can we trust physics decisions made by machines? In recent applications of artificial intelligence (AI) to particle physics, we have partially sidestepped this question by using machine learning to augment analyses, rather than replace them. We have gained trust in AI decisions through careful studies of “control regions” and painstaking numerical simulations. As our physics ambitions grow, however, we are using “deeper” networks with more layers and more complicated architectures, which are difficult to validate in the traditional way. And to mitigate 10- to 100-fold increases in computing costs, we are planning to fully integrate AI into data collection, simulation and analysis at the high-luminosity LHC.

To build trust in AI, I believe we need to teach it to think like a physicist.

I am the director of the US National Science Foundation’s new Institute for Artificial Intelligence and Fundamental Interactions, which was founded last year. Our goal is to fuse advances in deep learning with time-tested strategies for “deep thinking” in the physical sciences. Many promising opportunities are open to us. Core principles of fundamental physics such as causality and spacetime symmetries can be directly incorporated into the structure of neural networks. Symbolic regression can often translate solutions learned by AI into compact, human-interpretable equations. In experimental physics, it is becoming possible to estimate and mitigate systematic uncertainties using AI, even when there are a large number of nuisance parameters. In theoretical physics, we are finding ways to merge AI with traditional numerical tools to satisfy stringent requirements that calculations be exact and reproducible. High-energy physicists are well positioned to develop trustworthy AI that can be scrutinised, verified and interpreted, since the five-sigma standard of discovery in our field necessitates it.
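
One concrete way a spacetime symmetry can be wired into a network’s structure (a sketch assuming PyTorch, not the institute’s actual code): summing per-particle embeddings makes the output exactly invariant under reordering of the particles in an event.

```python
# A permutation-invariant ("deep sets"-style) network over particle
# four-momenta: the symmetry is guaranteed by construction, not learned.
import torch
import torch.nn as nn

class PermutationInvariantNet(nn.Module):
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, particles):        # shape: (batch, n_particles, 4)
        return self.rho(self.phi(particles).sum(dim=1))  # sum => invariance

event = torch.randn(2, 30, 4)            # 30 particles, (E, px, py, pz)
net = PermutationInvariantNet()
shuffled = event[:, torch.randperm(30)]  # same event, particles reordered
assert torch.allclose(net(event), net(shuffled), atol=1e-5)
```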

It is equally important, however, that we physicists teach ourselves how to think like a machine.

Jesse Thaler

Modern AI tools yield results that are often surprisingly accurate and insightful, but sometimes unstable or biased. This can happen if the problem to be solved is “underspecified”, meaning that we have not provided the machine with a complete list of desired behaviours, such as insensitivity to noise, sensible ways to extrapolate and awareness of uncertainties. An even more challenging situation arises when the machine can identify multiple solutions to a problem, but lacks a guiding principle to decide which is most robust. By thinking like a machine, and recognising that modern AI solves problems through numerical optimisation, we can better understand the intrinsic limitations of training neural networks with finite and imperfect datasets, and develop improved optimisation strategies. By thinking like a machine, we can better translate first principles, best practices and domain knowledge from fundamental physics into the computational language of AI. 

Beyond these innovations, which echo the logical and algorithmic AI that preceded the deep-learning revolution of the past decade, we are also finding surprising connections between thinking like a machine and thinking like a physicist. Recently, computer scientists and physicists have begun to discover that the apparent complexity of deep learning may mask an emergent simplicity. This idea is familiar from statistical physics, where the interactions of many atoms or molecules can often be summarised in terms of simpler emergent properties of materials. In the case of deep learning, as the width and depth of a neural network grows, its behaviour seems to be describable in terms of a small number of emergent parameters, sometimes just a handful. This suggests that tools from statistical physics and quantum field theory can be used to understand AI dynamics, and yield deeper insights into their power and limitations.

If we don’t exploit the full power of AI, we will not maximise the discovery potential of the LHC and other experiments

Ultimately, we need to merge the insights gained from artificial intelligence and physics intelligence. If we don’t exploit the full power of AI, we will not maximise the discovery potential of the LHC and other experiments. But if we don’t build trustable AI, we will lack scientific rigour. Machines may never think like human physicists, and human physicists will certainly never match the computational ability of AI, but together we have enormous potential to learn about the fundamental structure of the universe.

The post Designing an AI physicist appeared first on CERN Courier.

]]>
Opinion Jesse Thaler argues that particle physicists must go beyond deep learning and design AI capable of deep thinking.  https://cerncourier.com/wp-content/uploads/2021/08/CCSepOct21_VIEW_frontis.jpg
What’s in the box? https://cerncourier.com/a/whats-in-the-box/ Tue, 31 Aug 2021 21:50:42 +0000 https://preview-courier.web.cern.ch/?p=93625 The LHC Olympics and Dark Machines data challenges stimulated innovation in the use of machine learning to search for new physics, write Benjamin Nachman and Melissa van Beekveld.

The post What’s in the box? appeared first on CERN Courier.

]]>
A neural network probing a black box of complex final states

The need for innovation in machine learning (ML) transcends any single experimental collaboration, and requires more in-depth work than can take place at a workshop. Data challenges, wherein simulated “black box” datasets are made public, and contestants design algorithms to analyse them, have become essential tools to spark interdisciplinary collaboration and innovation. Two have recently concluded. In both cases, contestants were challenged to use ML to figure out “what’s in the box?”

LHC Olympics

The LHC Olympics (LHCO) data challenge was launched in autumn 2019, and the results were presented at the ML4Jets and Anomaly Detection workshops in spring and summer 2020. A final report summarising the challenge was posted to arXiv earlier this year, written by around 50 authors from a variety of backgrounds in theory, the ATLAS and CMS experiments, and beyond. The name of this community effort was inspired by the first LHC Olympics that took place more than a decade ago, before the start of the LHC. In those olympics, researchers were worried about being able to categorise all of the new particles that would be discovered when the machine turned on. Since then, we have learned a great deal about nature at TeV energy scales, with no evidence yet for new particles or forces of nature. The latest LHC Olympics focused on a different challenge – being able to find new physics in the first place. We now know that new physics must be rare and not exactly like what we expected.

In order to prepare for rare and unexpected new physics, organisers Gregor Kasieczka (University of Hamburg), Benjamin Nachman (Lawrence Berkeley National Laboratory) and David Shih (Rutgers University) provided a set of black-box datasets composed mostly of Standard Model (SM) background events. Contestants were charged with identifying any anomalous events that would be a sign of new physics. These datasets focused on resonant anomaly detection, whereby the anomaly is assumed to be localised – a “bump hunt”, in effect. This is a generic feature of new physics produced from massive new particles: the reconstructed parent mass is the resonant feature. By assuming that the signal is localised, one can use regions away from the signal to estimate the background. The LHCO provided one R&D dataset with labels and three black boxes to play with: one with an anomaly decaying into two two-pronged resonances, one without an anomaly, and one with an anomaly featuring two different decay modes (a dijet decay X → qq and a trijet decay X → gY, Y → qq).  There are currently no dedicated searches for these signals in LHC data.
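
The sideband idea can be captured in a few lines (a toy with invented numbers, not any contestant’s entry): fit the smooth spectrum away from a mass window, extrapolate it into the window, and compare.

```python
# Toy resonant "bump hunt": estimate the background under a localised
# signal from the sidebands of the invariant-mass spectrum.
import numpy as np

rng = np.random.default_rng(1)
mass = rng.exponential(500.0, 100_000) + 1000.0         # smooth background
mass = np.append(mass, rng.normal(3500.0, 50.0, 300))   # injected anomaly

edges = np.linspace(1000.0, 6000.0, 101)
counts, _ = np.histogram(mass, edges)
centres = 0.5 * (edges[:-1] + edges[1:])

window = (centres > 3350.0) & (centres < 3650.0)        # signal region
fit_bins = ~window & (counts > 10)                      # populated sidebands

# Fit a falling exponential to the sidebands in log space, extrapolate in:
slope, intercept = np.polyfit(centres[fit_bins], np.log(counts[fit_bins]), 1)
expected = np.exp(intercept + slope * centres[window]).sum()
observed = counts[window].sum()
print(f"observed {observed}, expected {expected:.0f}, "
      f"excess ~ {(observed - expected) / np.sqrt(expected):.1f} sigma")
```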

No labels

About 20 algorithms were deployed on the LHCO datasets, including supervised learning, unsupervised learning, weakly supervised learning and semi-supervised learning. Supervised learning is the most widely used method across science and industry, whereby each training example has a label: “background” or “signal”. For this challenge, the data do not have labels as we do not know exactly what we are looking for, and so strategies trained with labels from a different dataset often did not work well. By contrast, unsupervised learning generally tries to identify events that are rarely or never produced by the background; weakly supervised methods use some context from data to provide noisy labels; and semi-supervised methods use some simulation information in order to have a partial set of labels. Each method has its strengths and weaknesses, and multiple approaches are usually needed to achieve a broad coverage of possible signals.
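
The weakly supervised idea in particular admits a compact sketch (illustrative only, in the spirit of the “classification without labels” strategy, using scikit-learn): train a classifier to separate two mixed samples whose signal fractions differ, and it learns to rank signal against background without ever seeing a true label.

```python
# Weak supervision with noisy labels: the labels say only which mixed
# sample an event came from, not whether it is signal or background.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
def background(n): return rng.normal(0.0, 1.0, (n, 2))
def signal(n):     return rng.normal(1.5, 0.5, (n, 2))

sample1 = np.vstack([background(9000), signal(1000)])     # 10% signal
sample2 = np.vstack([background(9900), signal(100)])      # 1% signal
X = np.vstack([sample1, sample2])
y = np.r_[np.ones(len(sample1)), np.zeros(len(sample2))]  # noisy labels

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
# The score nonetheless ranks true signal above true background:
print("signal scores:    ", clf.predict_proba(signal(5))[:, 1])
print("background scores:", clf.predict_proba(background(5))[:, 1])
```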

The Dark Machines data challenge focused on developing algorithms broadly sensitive to non-resonant anomalies

The best performance on the first black box in the LHCO challenge, as measured by finding and correctly characterising the anomalous signals, was by a team of cosmologists at Berkeley (George Stein, Uros Seljak and Biwei Dai) who compared the phase-space density between a sliding signal region and sidebands (see “Olympian algorithm” figure). Overall, the algorithms did well on the R&D dataset, and some also did well on the first black box, with methods that made use of likelihood ratios proving particularly effective. But no method was able to detect the anomalies in the third black box, and many teams reported a false signal for the second black box. This “placebo effect” illustrates the need for ML approaches to have an accurate estimation of the background and not just a procedure for identifying signals. The challenge for the third black box, however, required algorithms to identify multiple clusters of anomalous events rather than a single cluster. Future innovation is needed in this department.

Dark Machines

A second data challenge was launched in June 2020 within the Dark Machines initiative. Dark Machines is a research collective of physicists and data scientists who apply ML techniques to understand the nature of dark matter – as we don’t know the nature of dark matter, it is critical to search broadly for its anomalous signatures. The challenge was organised by Sascha Caron (Radboud University), Caterina Doglioni (University of Lund) and Maurizio Pierini (CERN), with notable contributions from Bryan Ostdiek (Harvard University) in the development of a common software infrastructure, and Melissa van Beekveld (University of Oxford) for dataset generation. In total, 39 participants arranged in 13 teams explored various unsupervised techniques, with each team submitting multiple algorithms.

The anomaly score

By contrast with LHCO, the Dark Machines data challenge focused on developing algorithms broadly sensitive to non-resonant anomalies. Good examples of non-resonant new physics include many supersymmetric models and models of dark matter – anything where “invisible” particles don’t interact with the detector. In such a situation, resonant peaks become excesses in the tails of the missing-transverse-energy distribution. Two datasets were provided: R&D datasets including a concoction of SM processes and many signal samples for contestants to develop their approaches on; and a black-box dataset mixing SM events with events from unspecified signal processes. The challenge has now formally concluded, and its outcome was posted on arXiv in May, but the black box has been left unopened so that the community can continue to test ideas on it.

A wide variety of unsupervised methods have been deployed so far. The algorithms use diverse representations of the collider events (for example, lists of particle four-momenta, or physics quantities computed from them), and both implicit and explicit approaches for estimating the probability density of the background (for example, autoencoders and “normalising flows”). While no single method universally achieved the highest sensitivity to new-physics events, methods that mapped the background to a fixed point and looked for events that were not described well by this mapping generally did better than techniques that had a so-called dynamic embedding. A key question exposed by this challenge that will inspire future innovation is how best to tune and combine unsupervised machine-learning algorithms in a way that is model independent with respect to the new physics describing the signal.
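
As an example of the first, fixed-point strategy, an autoencoder-style anomaly score can be sketched in a few lines (assuming PyTorch; the data here are stand-ins): train the model to reconstruct background events, then score events by how badly they reconstruct.

```python
# Minimal autoencoder anomaly score: background events pass through a
# bottleneck and come back largely intact; outliers do not.
import torch
import torch.nn as nn

torch.manual_seed(0)
background = torch.randn(5000, 8)            # stand-in for SM events

model = nn.Sequential(nn.Linear(8, 3), nn.ReLU(), nn.Linear(3, 8))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                         # learn the background's shape
    loss = ((model(background) - background) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

@torch.no_grad()
def anomaly_score(events):                   # reconstruction error per event
    return ((model(events) - events) ** 2).mean(dim=1)

anomalous = 5.0 + 3.0 * torch.randn(5, 8)    # stand-in for new physics
print(anomaly_score(background[:5]))         # low scores
print(anomaly_score(anomalous))              # high scores
```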

The enthusiastic response to the LHCO and Dark Machines data challenges highlights the important future role of unsupervised ML at the LHC and elsewhere in fundamental physics. So far, just one analysis has been published – a dijet-resonance search by the ATLAS collaboration using weakly-supervised ML – but many more are underway, and these techniques are even being considered for use in the level-one triggers of LHC experiments (see Hunting anomalies with an AI trigger). And as the detection of outliers also has a large number of real-world applications, from fraud detection to industrial maintenance, fruitful cross-talk between fundamental research and industry is possible. 

The LHCO and Dark Machines data challenges are a stepping stone to an exciting experimental programme that is just beginning. 

The post What’s in the box? appeared first on CERN Courier.

]]>
Feature The LHC Olympics and Dark Machines data challenges stimulated innovation in the use of machine learning to search for new physics, write Benjamin Nachman and Melissa van Beekveld. https://cerncourier.com/wp-content/uploads/2021/08/Data-Challenges.jpg
Stealing theorists’ lunch https://cerncourier.com/a/stealing-theorists-lunch/ Tue, 31 Aug 2021 21:49:46 +0000 https://preview-courier.web.cern.ch/?p=94049 Artificial-intelligence techniques have been used in experimental particle physics for 30 years, and are becoming increasingly widespread in theoretical physics. Anima Anandkumar and John Ellis explore the possibilities.

The post Stealing theorists’ lunch appeared first on CERN Courier.

]]>
John Ellis and Anima Anandkumar

How might artificial intelligence make an impact on theoretical physics?

John Ellis (JE): To phrase it simply: where do we go next? We have the Standard Model, which describes all the visible matter in the universe successfully, but we know dark matter must be out there. There are also puzzles, such as what is the origin of the matter in the universe? During my lifetime we’ve been playing around with a bunch of ideas for tackling those problems, but haven’t come up with solutions. We have been able to solve some but not others. Could artificial intelligence (AI) help us find new paths towards attacking these questions? This would be truly stealing theoretical physicists’ lunch.

Anima Anandkumar (AA): I think the first step is to see whether AI can understand more basic physics and come up with predictions as well. For example, could AI rediscover the Standard Model? One day we can hope to look at what the discrepancies are for the current model, and hopefully come up with better suggestions.

 JE: An interesting exercise might be to take some of the puzzles we have at the moment and somehow equip an AI system with a theoretical framework that we physicists are trying to work with, let the AI loose and see whether it comes up with anything. Even over the last few weeks, a couple of experimental puzzles have been reinforced by new results on B-meson decays and the anomalous magnetic moment of the muon. There are many theoretical ideas for solving these puzzles but none of them strike me as being particularly satisfactory in the sense of indicating a clear path towards the next synthesis beyond the Standard Model. Is it imaginable that one could devise an AI system that, if you gave it a set of concepts that we have, and the experimental anomalies that we have, then the AI could point the way?

 AA: The devil is in the details. How do we give the right kind of data and knowledge about physics? How do we express those anomalies while at the same time making sure that we don’t bias the model? There are anomalies suggesting that the current model is not complete – if you are giving that prior knowledge then you could be biasing the models away from discovering new aspects. So, I think that delicate balance is the main challenge.

 JE: I think that theoretical physicists could propose a framework with boundaries that AI could explore. We could tell you what sort of particles are allowed, what sort of interactions those could have and what would still be a well-behaved theory from the point of view of relativity and quantum mechanics. Then, let’s just release the AI to see whether it can come up with a combination of particles and interactions that could solve our problems. I think that in this sort of problem space, the creativity would come in the testing of the theory. The AI might find a particle and a set of interactions that would deal with the anomalies that I was talking about, but how do we know what’s the right theory? We have to propose some other experiments that might test it – and that’s one place where the creativity of theoretical physicists will come into play.

 AA: Absolutely. And many theories are not directly testable. That’s where the deeper knowledge and intuition that theoretical physicists have is so critical.

Is human creativity driven by our consciousness, or can contemporary AI be creative? 

AA: Humans are creative in so many ways. We can dream, we can hallucinate, we can create – so how do we build those capabilities into AI? Richard Feynman famously said “What I cannot create, I do not understand.” It appears that our creativity gives us the ability to understand the complex inner workings of the universe. With the current AI paradigm this is very difficult. Current AI is geared towards scenarios where the training and testing distributions are similar, however, creativity requires extrapolation – being able to imagine entirely new scenarios. So extrapolation is an essential aspect. Can you go from what you have learned and extrapolate new scenarios? For that we need some form of invariance or understanding of the underlying laws. That’s where physics is front and centre. Humans have intuitive notions of physics from early childhood. We slowly pick them up from physical interactions with the world. That understanding is at the heart of getting AI to be creative.

 JE: It is often said that a child learns more laws of physics than an adult ever will! As a human being, I think that I think. I think that I understand. How can we introduce those things into AI?

Could AI rediscover the Standard Model?

 AA: We need to get AI to create images, and other kinds of data it experiences, and then reason about the likelihood of the samples. Is this data point unlikely versus another one? Similarly to what we see in the brain, we recently built feedback mechanisms into AI systems. When you are watching me, it’s not just a free-flowing system going from the retina into the brain; there’s also a feedback system going from the inferior temporal cortex back into the visual cortex. This kind of feedback is fundamental to us being conscious. Building these kinds of mechanisms into AI is the first step to creating conscious AI.

 JE: A lot of the things that you just mentioned sound like they’re going to be incredibly useful going forward in our systems for analysing data. But how is AI going to devise an experiment that we should do? Or how is AI going to devise a theory that we should test?

AA: Those are the challenging aspects for an AI. A data-driven method using a standard neural network would perform really poorly. It will only think of the data that it can see and not about data that it hasn’t seen – it lacks what we call “zero-shot generalisation”. To me, the past decade’s impressive progress is due to a trinity of data, neural networks and computing infrastructure, mainly powered by GPUs [graphics processing units], coming together: the next step for AI is a wider generalisation to the ability to extrapolate and predict hitherto unseen scenarios.

Across the many tens of orders of magnitude described by modern physics, new laws and behaviours “emerge” non-trivially in complexity (see Emergence). Could intelligence also be an emergent phenomenon?

JE: As a theoretical physicist, my main field of interest is the fundamental building blocks of matter, and the roles that they play very early in the history of the universe. Emergence is the word that we use when we try to capture what happens when you put many of these fundamental constituents together, and they behave in a way that you could often not anticipate if you just looked at the fundamental laws of physics. One of the interesting developments in physics over the past generation is to recognise that there are some universal patterns that emerge. I’m thinking, for example, of phase transitions that look universal, even though the underlying systems are extremely different. So, I wonder, is there something similar in the field of intelligence? For example, the brain structure of the octopus is very different from that of a human, so to what extent does the octopus think in the same way that we do?

 AA: There’s a lot of interest now in studying the octopus. From what I learned, its intelligence is spread out so that it’s not just in its brain but also in its tentacles. Consequently, you have this distributed notion of intelligence that still works very well. It can be extremely camouflaged – imagine being in a wild ocean without a shell to protect yourself. That pressure created the need for intelligence such that it can be extremely aware of its surroundings and able to quickly camouflage itself or manipulate different tools.

 JE: If intelligence is the way that a living thing deals with threats and feeds itself, should we apply the same evolutionary pressure to AI systems? We threaten them and only the fittest will survive. We tell them they have to go and find their own electricity or silicon or something like that – I understand that there are some first steps in this direction, computer programs competing with each other at chess, for example, or robots that have to find wall sockets and plug themselves in. Is this something that one could generalise? And then intelligence could emerge in a way that we hadn’t imagined?

Similarly to what we see in the brain, we recently built feedback mechanisms into AI systems

 AA: That’s an excellent point. Because what you mentioned broadly is competition – different kinds of pressures that drive towards good, robust objectives. An example is generative adversarial models, which can generate very realistic looking images. Here you have a discriminator that challenges the generator to generate images that look real. These kinds of competitions or games are getting a lot of traction and we have now passed the Turing test when it comes to generating human faces – you can no longer tell very easily whether it is generated by AI or if it is a real person. So, I think those kinds of mechanisms that have competition built into the objective they optimise are fundamental to creating more robust and more intelligent systems.

 JE: All this is very impressive – but there are still some elements that I am missing, which seem very important to theoretical physics. Take chess: a very big system but finite nevertheless. In some sense, what I try to do as a theoretical physicist has no boundaries. In some sense, it is infinite. So, is there any hope that AI would eventually be able to deal with problems that have no boundaries?

 AA: That’s the difficulty. These are infinite-dimensional spaces… so how do we decide how to move around there? What distinguishes an expert like you from an average human is that you build your knowledge and develop intuition – you can quickly make judgments and find which narrow part of the space you want to work on compared to all the possibilities. That’s the aspect that is so difficult for AI to figure out. The space is enormous. On the other hand, AI does have a lot more memory, a lot more computational capacity. So can we create a hybrid system, with physicists and machine learning in tandem, to help us harness the capabilities of both AI and humans together? We’re currently exploring theorem provers: can we use the theorems that humans have proven, and then add reinforcement learning on top to create very fast theorem solvers? If we can create such fast theorem provers in pure mathematics, I can see them being very useful for understanding the Standard Model and the gaps and discrepancies in it. It is much harder than chess, for example, but there are exciting programming frameworks and data sets available, with efforts to bring together different branches of mathematics. But I don’t think humans will be out of the loop, at least for now.

The post Stealing theorists’ lunch appeared first on CERN Courier.

]]>
Opinion Artificial-intelligence techniques have been used in experimental particle physics for 30 years, and are becoming increasingly widespread in theoretical physics. Anima Anandkumar and John Ellis explore the possibilities. https://cerncourier.com/wp-content/uploads/2021/08/CCSepOct21_INT_EllisAnan.jpg
Long-lived particles gather interest https://cerncourier.com/a/long-lived-particles-gather-interest/ Wed, 21 Jul 2021 08:48:46 +0000 https://preview-courier.web.cern.ch/?p=93435 The long-lived particle community marked five years of stretching the limits of searches for new physics with its ninth and best-attended workshop yet.

The post Long-lived particles gather interest appeared first on CERN Courier.

]]>
From 25 to 28 May, the long-lived particle (LLP) community marked five years of stretching the limits of searches for new physics with its ninth and best-attended workshop yet, with more than 300 registered participants.

LLP9 played host to six new results, three each from ATLAS and CMS. These included a remarkable new ATLAS paper searching for stopped particles – beyond-the-Standard Model (BSM) LLPs that can be produced in a proton–proton collision and then get stuck in the detector before decaying minutes, days or weeks later. Good hypothetical examples are the so-called gluino R-hadrons that occur in supersymmetric models. Also featured was a new CMS search for displaced di-muon resonances using “data scouting” – a unique method of increasing the number of potential signal events kept at the trigger level by reducing the event information that is retained. Both experiments presented new results searching for the Higgs boson decaying to LLPs (see “LLP candidate” figure).

Long-lived particles can also be produced in a collision inside ATLAS, CMS or LHCb and live long enough to drift entirely outside of the detector volume. To ensure that this discovery avenue is also covered for the future of the LHC’s operation, there is a rich set of dedicated LLP detectors either approved or proposed, and LLP9 featured updates from MoEDAL, FASER, MATHUSLA, CODEX-b, MilliQan, FACET and SND@LHC, as well as a presentation about the proposed forward physics facility for the High-Luminosity LHC (HL-LHC).

Reinterpreting machine learning

The liveliest parts of any LLP community workshop are the brainstorming and hands-on working-group sessions. LLP9 included multiple vibrant discussions and working sessions, including on heavy neutral leptons and the ability of physicists who are not members of experimental collaborations to be able to re-interpret LLP searches – a key issue for the LLP community. At LLP9, participants examined the challenges inherent in re-interpreting LLP results that use machine learning techniques, by now a common feature of particle-physics analyses. For example, boosted decision trees (BDTs) and neural networks (NNs) can be quite powerful for either object identification or event-level discrimination in LLP searches, but it’s not entirely clear how best to give theorists access to the full original BDT or NN used internally by the experiments.

LLP searches at the LHC often must also grapple with background sources that are negligible for the majority of searches for prompt objects. These backgrounds – such as cosmic muons, beam-induced backgrounds, beam-halo effects and cavern backgrounds – are reasonably well-understood for Run 2 and Run 3, but little study has been performed for the upcoming HL-LHC, and LLP9 featured a brainstorming session about what such non-standard backgrounds might look like in the future.

Also looking to the future, two very forward-thinking working-group sessions were held on LLPs at a potential future muon collider and at the proposed Future Circular Collider (FCC). Hadron collisions at ~100 TeV in FCC-hh would open up completely unprecedented discovery potential, including for LLPs, but it’s unclear how to optimise detector designs for both LLPs and the full slate of prompt searches.

Simulating dark showers is a longstanding challenge

Finally, LLP9 hosted an in-depth working-group session dedicated to the simulation of “dark showers”, in collaboration with the organisers of the dark-showers study group connected to the Snowmass process, which is currently shaping the future of US particle physics. Dark showers are a generic and poorly understood feature of a potential BSM dark sector with similarities to QCD, which could have its own “dark hadronisation” rules. Simulating dark showers is a longstanding challenge. More than 50 participants joined for a hands-on demonstration of simulation tools and a discussion of the dark-showers Pythia module, highlighting the growing interest in this subject in the LLP community.

LLP9 was raucous and stimulating, and identified multiple new avenues of research. LLPX, the tenth workshop in the series, will be held in November this year.

The post Long-lived particles gather interest appeared first on CERN Courier.

]]>
Meeting report The long-lived particle community marked five years of stretching the limits of searches for new physics with its ninth and best-attended workshop yet. https://cerncourier.com/wp-content/uploads/2021/07/CMS-LLPs-1000.jpg
LHCP sees a host of new results https://cerncourier.com/a/lhcp-sees-a-host-of-new-results/ Sat, 17 Jul 2021 15:03:41 +0000 https://preview-courier.web.cern.ch/?p=93356 Over 1000 physicists took part in the ninth Large Hadron Collider Physics conference.

The post LHCP sees a host of new results appeared first on CERN Courier.

]]>
More than 1000 physicists took part in the ninth Large Hadron Collider Physics (LHCP) conference from 7 to 12 June. The in-person conference was to have been held in Paris: for the second year in a row, however, the organisers efficiently moved the meeting online, without a registration fee, thanks to the support of CERN and IUPAP. While the conference experience cannot be the same over a video link, the increased accessibility for people from all parts of the international community was evident, with LHCP21 participants hailing from institutes across 54 countries.

LHCP21 poster

The LHCP format traditionally has plenary sessions in the mornings and late afternoons, with parallel sessions in the middle of the day. This “shape” was kept for the online meeting, with a shorter day to improve the practicality of joining from distant time zones. This resulted in a dense format with seven-fold parallel sessions, allowing all parts of the LHC programme, both experimental and theoretical, to be explored in detail. The overall vitality of the programme is illustrated by the raw statistics: a grand total of 238 talks and 122 posters were presented.

Last year saw a strong focus on the couplings to the second generation

Nine years on from the discovery of the 125 GeV Higgs boson, measurements have progressed to a new level of precision with the full Run-2 data. Both ATLAS and CMS presented new results on Higgs production, helping constrain the dynamics of the production mechanisms via differential and “simplified template” cross-section measurements. While the couplings of the Higgs to third-generation fermions are now established, last year saw a strong focus on the couplings to the second generation. After first evidence for Higgs decays to muons was reported from CMS and ATLAS results earlier in the year, ATLAS presented a new search with the full Run-2 data for Higgs decays to charm quarks using powerful new charm-tagging techniques. Both CMS and ATLAS showed updated searches for Higgs-pair production, with ATLAS being able to exclude a production rate more than 4.1 times the Standard Model (SM) prediction at 95% confidence. This is a process that should be observable with High-Luminosity LHC statistics, if it is as predicted in the SM. A host of searches were also reported, some using the Higgs as a tool to probe for new physics.

Puzzling hints

The most puzzling hints from the LHC Run 1 seem to strengthen in Run 2. LHCb presented analyses relating to the “flavour anomalies” found most notably in b → sµ⁺µ⁻ decays, updated to the full data statistics, in multiple channels. While no result yet passes a 5σ difference from SM expectations, the significances continue to creep upwards. Searches by ATLAS and CMS for potential new particles or effects at high masses that could indicate an associated new-physics mechanism continue to draw a blank, however. This remains a dilemma to be studied with more precision and data in Run 3. Other results in the flavour sector from LHCb included a new measurement of the lifetime of the Ωc, four times longer than previous measurements (CERN Courier July/August 2021 p17) and the first observation of a mass difference between the mass eigenstates of the mixed neutral D meson, D⁰–D̄⁰ (CERN Courier July/August 2021 p8).

A wealth of results was presented from heavy-ion collisions. Measurements with heavy quarks were prominent here as well. ALICE reported various studies of the differences in heavy-flavour hadron production in proton–proton and heavy-ion collisions, for example using D mesons. CMS reported the first observation of Bc meson production in heavy-ion collisions, and also first evidence for top-quark pair production in lead–lead collisions. ATLAS used heavy-flavour decays to muons to compare suppression of b- and c-hadron production in lead–lead and proton–proton collisions. Beyond the ions, ALICE also showed intriguing new results demonstrating that the relative rates of different types of c-hadron production differ in proton–proton collisions compared to earlier experiments using e⁺e⁻ and ep collisions at LEP and HERA.

Looking forward, the experiments reported on their preparations for the coming LHC Run 3, including substantial upgrades. While some work has been slowed by the pandemic, recommissioning of the detectors has begun in preparation for physics data taking in spring 2022, with the brighter beams expected from the upgraded CERN accelerator chain. One constant to rely on, however, is that LHCP will continue to showcase the fantastic panoply of physics at the LHC.

The post LHCP sees a host of new results appeared first on CERN Courier.

]]>
Meeting report Over 1000 physicists took part in the ninth Large Hadron Collider Physics conference. https://cerncourier.com/wp-content/uploads/2021/07/eventdisplay_2L.jpg
KEK tackles neutron-lifetime puzzle https://cerncourier.com/a/kek-tackles-neutron-lifetime-puzzle/ Fri, 02 Jul 2021 07:54:05 +0000 https://preview-courier.web.cern.ch/?p=92831 In an attempt to shed light on the neutron-lifetime puzzle, a team at Japan’s KEK laboratory in collaboration with Japanese universities has developed a new experimental setup.

The post KEK tackles neutron-lifetime puzzle appeared first on CERN Courier.

]]>
The apparatus in which neutrons from J-PARC were clocked

More than a century after its discovery, the proton remains a source of intrigue, its charge radius and spin posing puzzles that are the focus of intense study. But what of its mortal sibling, the neutron? In recent years, discrepancies between measurements of the neutron lifetime using different methods constitute a puzzle with potential implications for cosmology and particle physics. The neutron lifetime determines the ratio of protons to neutrons at the beginning of big-bang nucleosynthesis and thus affects the yields of light elements, and it is also used to determine the CKM matrix-element Vud in the Standard Model.

The neutron-lifetime puzzle stems from measurements using two techniques. The “bottle” method counts the number of surviving ultra-cold neutrons contained in a trap after a certain period, while the “beam” method uses the decay probability of the neutron obtained from the ratio of the decay rate to an incident neutron flux. Back in the 1990s, the methods were too imprecise to worry about differences between the results. Today, however, the average neutron lifetimes measured using the bottle and beam methods, 879.4 ± 0.4 s and 888.0 ± 2.0 s respectively, stand 8.6 s (or about 4σ) apart.
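
The quoted significance follows from combining the two uncertainties in quadrature, which is quick to verify:

```python
# Cross-check of the discrepancy quoted above (values from the text):
import math

bottle, sigma_bottle = 879.4, 0.4   # seconds
beam, sigma_beam = 888.0, 2.0       # seconds
diff = beam - bottle                               # 8.6 s
sigma = math.hypot(sigma_bottle, sigma_beam)       # combined uncertainty
print(f"{diff:.1f} s apart = {diff / sigma:.1f} sigma")   # ~4.2 sigma
```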

We think it will take two years to obtain a competitive result from our experiment

Kenji Mishima

In an attempt to shed light on the issue, a team at Japan’s KEK laboratory, in collaboration with Japanese universities, has developed a new experimental setup. Similar to the beam method, it compares the decay rate to the reaction rate of neutrons in a pulsed beam from the Japan Proton Accelerator Research Complex (J-PARC). The decay rate and the reaction rate are determined by simultaneously detecting electrons from neutron decay and protons from the capture reaction n + ³He → p + ³H in a 1 m-long time-projection chamber containing dilute ³He, removing some of the systematic uncertainties that affect previous beam methods. The experiment is still in its early stages, and while the first result has been released – τₙ = 898 ± 10 (stat) +15/−18 (syst) s – the uncertainty is currently too large to draw conclusions.

“In the current situation, it is important to verify the puzzle by experiments in which different systematic errors dominate,” says Kenji Mishima of KEK, adding that further improvements in the statistical and systematic uncertainties are underway. “We think it will take two years to obtain a competitive result from our experiment.”

Several new-physics scenarios have been proposed as solutions of the neutron lifetime puzzle. These include exotic decay modes involving undetectable particles with a branching ratio of about 1%, such as “mirror neutrons” or dark-sector particles. 

The post KEK tackles neutron-lifetime puzzle appeared first on CERN Courier.

]]>
News In an attempt to shed light on the neutron-lifetime puzzle, a team at Japan’s KEK laboratory in collaboration with Japanese universities has developed a new experimental setup. https://cerncourier.com/wp-content/uploads/2021/06/CCJulAug21_NA_Neutron-lifetime.jpg
‘X’ boson feels the squeeze at NA64 https://cerncourier.com/a/x-boson-feels-the-squeeze-at-na64/ Fri, 25 Jun 2021 14:25:23 +0000 https://preview-courier.web.cern.ch/?p=92739 Data from CERN's NA64 experiment place constraints on new particles that might account for the electron g-2 and ATOMKI anomalies.

The post ‘X’ boson feels the squeeze at NA64 appeared first on CERN Courier.

]]>
NA64

Recent measurements bolstering the longstanding tension between the experimental and theoretical values of the muon’s anomalous magnetic moment generated a buzz in the community. Though with a much lower significance, a similar puzzle may also be emerging for the anomalous magnetic moment of the electron, ae.

Depending on which of two recent independent measurements of the fine-structure constant is used in the theoretical calculation of ae – one obtained at Berkeley in 2018 or the other at Kastler–Brossel Laboratory in Paris in 2020 – the Standard Model prediction stands 2.4σ higher or 1.6σ lower than the best experimental value, respectively. Motivated by this inconsistency, the NA64 collaboration at CERN set out to investigate whether new physics – in the form of a lightweight “X boson” – might be influencing the electron’s behaviour.

The generic X boson could be a sub-GeV scalar, pseudoscalar, vector or axial-vector particle. Given experimental constraints on its decay modes involving Standard Model particles, it is presumed to decay predominantly invisibly, for example into dark-sector particles. NA64 searches for X bosons by directing 100 GeV electrons generated by the SPS onto a target, and looking for missing energy in the detector via electron–nucleus scattering e⁻Z → e⁻ZX.

The result sets new bounds on the eX interaction strength

Analysing data collected in 2016, 2017 and 2018, corresponding to about 3 × 10¹¹ electrons-on-target, the NA64 team found no evidence for such events. The result sets new bounds on the eX interaction strength and, as a result, on the contributions of X bosons to ae: X bosons with a mass below 1 GeV could contribute at most between one part in 10¹⁵ and one part in 10¹³, depending on the X-boson type and mass. These contributions are too small to explain the current anomaly in the electron’s anomalous magnetic moment, says NA64 spokesperson Sergei Gninenko. “But the fact that NA64 reached an experimental sensitivity that is better than the current accuracy of the direct measurements of ae, and of recent high-precision measurements of the fine-structure constant, is amazing.”

In a separate analysis, the NA64 team carried out a model-independent search for a particular pseudoscalar X boson with a mass of around 17 MeV. Coupling to electrons and decaying into e⁺e⁻ pairs, the so-called “X17” has been proposed to explain an excess of e⁺e⁻ pairs created during nuclear transitions of excited ⁸Be and ⁴He nuclei reported by the “ATOMKI” experiment in Hungary since 2015.

The e–X17 coupling strength is constrained by data: too large and the X17 would contribute too much to ae; too small and the X17 would decay too rarely and too far away from the ATOMKI target. In 2019, the NA64 team excluded a large range of couplings, albeit at large values, for a vector-like X17. More recently, they searched for a pseudoscalar X17, which has a lifetime about half that of the vector version for the same coupling strength. Re-analysing a sample of approximately 8.4 × 10¹⁰ electrons-on-target collected in 2017 and 2018 with 100 and 150 GeV electrons, respectively, the collaboration has now excluded couplings in the range (2.1–3.2) × 10⁻⁴ for a 17 MeV X boson.

“We plan to further improve the sensitivity to vector and pseudoscalar X17’s after long shutdown 2, and also try to reconstruct the mass of X17, to be sure that if we see the signal it is the ATOMKI boson,” says Gninenko.

The post ‘X’ boson feels the squeeze at NA64 appeared first on CERN Courier.

]]>
News Data from CERN's NA64 experiment place constraints on new particles that might account for the electron g-2 and ATOMKI anomalies. https://cerncourier.com/wp-content/uploads/2021/06/Screenshot-2021-06-11-at-13.20.28.jpg
Higgsinos under the microscope https://cerncourier.com/a/higgsinos-under-the-microscope/ Fri, 14 May 2021 10:52:46 +0000 https://preview-courier.web.cern.ch/?p=92086 The ATLAS collaboration recently released a set of results based on the full LHC Run 2 dataset that explore some of the most challenging experimental scenarios involving higgsinos.

The post Higgsinos under the microscope appeared first on CERN Courier.

]]>
Figure 1

The Higgs boson was hypothesised to explain electroweak symmetry breaking nearly 50 years before its discovery. Its eventual discovery at the LHC took half a century of innovative accelerator and detector development, and extensive data analysis. Today, several outstanding questions in particle physics could be answered by higgsinos – theorised supersymmetric partners of an extended Higgs field. The higgsinos are a triplet of electroweak states, two neutral and one charged. If the lightest neutral state is stable, it can provide an explanation of astronomically observed dark matter. Furthermore, an intimate connection between higgsinos and the Higgs boson could explain why the mass of the Higgs boson is so much lighter than suggested by theoretical arguments. While higgsinos may not be much heavier than the Higgs boson, they would be produced more rarely and are significantly more challenging to find, especially if they are the only supersymmetric particles near the electroweak scale.

Higgsinos mix with other supersymmetric electroweak states, the wino and the bino, to form the physical particles that would be observed

The ATLAS collaboration recently released a set of results based on the full LHC Run 2 dataset that explore some of the most challenging experimental scenarios involving higgsinos. Each result tests different assumptions. Owing to quantum degeneracy, the higgsinos mix with other supersymmetric electroweak states, the wino and the bino, to form the physical particles that would be observed by the experiment. The mass difference between the lightest neutral and charged states, ∆m, depends on this mixing. Depending on the model assumptions, the phenomenology varies dramatically, requiring different analysis techniques and stimulating the development of new tools.
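
For orientation (a standard textbook form, not taken from the ATLAS papers), this mixing is encoded in the MSSM neutralino mass matrix, whose diagonalisation yields the physical states; nearly pure higgsinos arise when μ is much smaller than the bino and wino masses M1 and M2.

```latex
% Neutralino mass matrix in the (\tilde B, \tilde W^0, \tilde H_d^0,
% \tilde H_u^0) basis, with s_W = \sin\theta_W, c_\beta = \cos\beta, etc.
M_{\tilde\chi^0} =
\begin{pmatrix}
M_1 & 0 & -m_Z c_\beta s_W & m_Z s_\beta s_W \\
0 & M_2 & m_Z c_\beta c_W & -m_Z s_\beta c_W \\
-m_Z c_\beta s_W & m_Z c_\beta c_W & 0 & -\mu \\
m_Z s_\beta s_W & -m_Z s_\beta c_W & -\mu & 0
\end{pmatrix}
```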

If ∆m is only a few hundred MeV, the small phase space suppresses the decay from the heavier states to the lightest one. The long-lived charged state flies partway through the inner tracker before decaying, and its short track can be measured. A search targeting this anomalous “disappearing track” signature was performed by exploiting novel requirements on the quality of the signal candidate and the ability of the ATLAS inner detectors to reconstruct short tracks. Finding that the number of short tracks is as expected from background processes alone, this search rules out higgsinos with lifetimes of a fraction of a nanosecond for masses up to 210 GeV.

If higgsinos mix somewhat with other supersymmetric electroweak states, they will decay promptly to the lightest stable higgsino and low-energy Standard Model particles. These soft decay products are extremely challenging to detect at the LHC, and ATLAS has performed several searches for events with two or three leptons to maximise the sensitivity to different values of ∆m. Each search features innovative optimisation and powerful discriminants to reject background. For the first time, ATLAS has performed a statistical combination of these searches, constraining higgsino masses to be larger than 150 GeV for ∆m above 2 GeV.

A final result targets higgsinos in models in which the lightest supersymmetric particle is not stable. In these scenarios, higgsinos may decay to triplets of quarks. A search designed around an adversarial neural network and employing a completely data-driven background estimation technique was developed to distinguish these rare decays from the overwhelming multi-jet background. This search is the first at the LHC to obtain sensitivity to this higgsino model, and rules out scenarios of the pair production of higgsinos with masses between 200 and 320 GeV (figure 1). 

Together, these searches set significant constraints on higgsino masses, and for certain parameters provide the first extension of sensitivity since LEP. With the development of new techniques and more data to come, ATLAS will continue to seek higgsinos at higher masses, and to test other theoretical and experimental assumptions.

The post Higgsinos under the microscope appeared first on CERN Courier.

]]>
News The ATLAS collaboration recently released a set of results based on the full LHC Run 2 dataset that explore some of the most challenging experimental scenarios involving higgsinos. https://cerncourier.com/wp-content/uploads/2019/06/Atlas-7.jpg
LHC reinterpreters think long-term https://cerncourier.com/a/lhc-reinterpreters-think-long-term/ Wed, 28 Apr 2021 08:36:24 +0000 https://preview-courier.web.cern.ch/?p=92151 The question of making research data findable, accessible, interoperable and reusable is a burning one throughout modern science.

The post LHC reinterpreters think long-term appeared first on CERN Courier.

]]>
A Map of the Invisible

The ATLAS, CMS and LHCb collaborations perform precise measurements of Standard Model (SM) processes and direct searches for physics beyond the Standard Model (BSM) in a vast variety of channels. Despite the multitude of BSM scenarios tested this way by the experiments, it still constitutes only a small subset of the possible theories and parameter combinations to which the experiments are sensitive. The (re)interpretation of the LHC results in order to fully understand their implications for new physics has become a very active field, with close theory–experiment interaction and with new computational tools and related infrastructure being developed. 

From 15 to 19 February, almost 300 theorists and experimental physicists gathered for a week-long online workshop to discuss the latest developments. The topics covered ranged from advances in public software packages for reinterpretation to the provision of detailed analysis information by the experiments, from phenomenological studies to global fits, and from long-term preservation to public data.

Open likelihoods

One of the leading questions throughout the workshop was that of public likelihoods. The statistical model of an experimental analysis provides its complete mathematical description; it is essential information for determining the compatibility of the observations with theoretical predictions. In his keynote talk “Open science needs open likelihoods’’, Harrison Prosper (Florida State University) explained why it is in our scientific interest to make the publication of full likelihoods routine and straightforward. The ATLAS collaboration has recently made an important step in this direction by releasing full likelihoods in a JSON format, which provides background estimates, changes under systematic variations, and observed data counts at the same fidelity as used in the experiment, as presented by Eric Schanet (LMU Munich). Matthew Feickert (University of Illinois) and colleagues gave a detailed tutorial on how to use these likelihoods with the pyhf python package. Two public reinterpretation tools, MadAnalysis5 presented by Jack Araz (IPPP Durham) and SModelS presented by Andre Lessa (UFABC Santo Andre) can already make use of pyhf and JSON likelihoods, and others are to follow. An alternative approach to the plain-text JSON serialisation is to encode the experimental likelihood functions in deep neural networks, as discussed by Andrea Coccaro (INFN Genova) who presented the DNNLikelihood framework. Several more contributions from CMS, LHCb and from theorists addressed the question of how to present and use likelihood information, and this will certainly stay an active topic at future workshops.  
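
In practice, consuming such a likelihood takes only a few lines (a sketch along the lines of pyhf’s documented workflow; “workspace.json” is a placeholder for a file downloaded from HEPData):

```python
# Reinterpreting a published JSON likelihood with pyhf: rebuild the full
# statistical model and compute an observed CLs value for mu = 1.
import json
import pyhf

with open("workspace.json") as f:
    spec = json.load(f)

workspace = pyhf.Workspace(spec)
model = workspace.model()        # full probability model, all systematics
data = workspace.data(model)     # observed counts plus auxiliary data

cls_obs = pyhf.infer.hypotest(1.0, data, model, test_stat="qtilde")
print(f"observed CLs at signal strength mu = 1: {float(cls_obs):.3f}")
```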

The question of making research data findable, accessible, interoperable and reusable is a burning one throughout modern science

A novelty for the Reinterpretation workshop was that the discussion was extended to experiences and best practices beyond the LHC, to see how experiments in other fields address the need for publicly released data and reusable results. This included presentations on dark-matter direct detection, the high-intensity frontier, and neutrino oscillation experiments. Supporting Prosper’s call for data reusability 40 years into the future – “for science 2061” – Eligio Lisi (INFN Bari) pointed out the challenges met in reinterpreting the 1998 Super-Kamiokande data, initially published in terms of the then-sufficient two-flavour neutrino-oscillation paradigm, in terms of contemporary three-neutrino descriptions, and beyond. On the astrophysics side, the LIGO and Virgo collaborations actively pursue an open-science programme. Here, Agata Trovato (APC Paris) presented the Gravitational Wave Open Science Center, giving details on the available data, on their format and on the tools to access them. An open-data policy also exists at the LHC, spearheaded by the CMS collaboration, and Edgar Carrera Jarrin (USF Quito) shared experiences from the first CMS open-data workshop. 

The question of making research data findable, accessible, interoperable and reusable (“FAIR” for short) is a burning one throughout modern science. In a keynote talk, the head of the GO FAIR Foundation, Barend Mons, explained the FAIR Guiding Principles together with the technical and social aspects of FAIR data management and data reuse, using the example of COVID-19 disease modelling. There is much to be learned here for our field. 

The wrap-up session revolved around the question of how to implement the recommendations of the Reinterpretation workshop in a more systematic way. An important aspect here is the proper recognition, within the collaborations as well as the community at large, of the additional work required to this end. More rigorous citation of HEPData entries by theorists may help in this regard. Moreover, a “Reinterpretation: Auxiliary Material Presentation” (RAMP) seminar series will be launched to give more visibility and explicit recognition to the efforts of preparing and providing extensive material for reinterpretation. The first RAMP meetings took place on 9 and 23 April.

The post LHC reinterpreters think long-term appeared first on CERN Courier.

]]>
Meeting report The question of making research data findable, accessible, interoperable and reusable is a burning one throughout modern science. https://cerncourier.com/wp-content/uploads/2021/04/CCMayJun21_FN_map_feature.jpg
Muon g–2: the promise of a generation https://cerncourier.com/a/muon-g-2-the-promise-of-a-generation/ Thu, 15 Apr 2021 13:41:18 +0000 https://preview-courier.web.cern.ch/?p=92055 The recent Fermilab result offers a moment to reflect on perseverance and collaboration.

The post Muon g–2: the promise of a generation appeared first on CERN Courier.

]]>
CERN g-2 storage ring

It has been almost a century since Dirac formulated his famous equation, and 75 years since the first QED calculations by Schwinger, Tomonaga and Feynman were used to explain the small deviations in hydrogen’s hyperfine structure. These calculations also predicted that the deviation from Dirac’s prediction, a = (g–2)/2 – where g is the gyromagnetic ratio, the magnetic moment in units of the Bohr magneton e/2me – should be non-zero and thus “anomalous”. The value is famously engraved on Schwinger’s tombstone, standing as a monument to the importance of the result and a marker of things to come.

In January 1957 Garwin and collaborators at Columbia published the first measurements of g for the recently discovered muon, accurate to 5%, followed two months later by Cassels and collaborators at Liverpool with uncertainties of less than 1%. Leon Lederman is credited with initiating the CERN campaign of g–2 experiments from 1959 to 1979, starting with a borrowed 83 × 52 × 10 cm magnet from Liverpool and ending with a dedicated storage ring and a precision of better than 10 ppm.

Why was CERN so interested in the muon? In a 1981 review, Combley, Farley and Picasso commented that the CERN results for aμ had a sensitivity to new physics – “a modification to the photon propagator or new couplings” – enhanced by a factor (mμ/me)² relative to the electron. Revealing a deeper interest, they also admitted “… this activity has brought us no nearer to the understanding of the muon mass [200 times that of the electron].”

With the end of the CERN muon programme, focus turned to Brookhaven and the E821 experiment, which took up the challenge of measuring aμ 20 times more precisely, providing sensitivity to virtual particles with masses beyond the reach of the colliders at the time. In 2004 the E821 collaboration delivered on its promise, reporting results accurate to about 0.6 ppm. At the time this showed a 2–3σ discrepancy with respect to the Standard Model (SM) – tantalising, but far from conclusive.

Spectacular progress
The theoretical calculation of g–2 made spectacular progress in step with experiment. Huge advances have been made in calculating the hadronic vacuum-polarisation contributions to aμ, almost eclipsed by the epic 2012 achievement of computing the QED contributions to five loops, from 12,672 Feynman diagrams. A reappraisal of the E821 data using this information suggested at least a 3.5σ discrepancy with the SM. It was this that provided the impetus for Lee Roberts and colleagues to build the improved muon g–2 experiments at Fermilab, the first results from which are described in this issue, and at J-PARC. Full results from the Fermilab experiment alone should reduce the aμ uncertainties by at least another factor of three – down to a level that really challenges what we know about the SM.

Muon g–2 is a clear demonstration that theory and experiment must progress hand in hand

Of course, the interpretation of the new results relies on the choice of theory baseline. For example, one could choose, as the Fermilab experiment has, to use the consensus “International Theory Initiative” expectation for aμ. One could also take into account the new results provided by LHCb’s recent RK measurement, which hint that muons might behave differently than electrons. There will inevitably be speculation over the coming months about the right approach. Whatever one’s choice, muon g–2 is a clear demonstration that theory and experiment must progress hand in hand.

Perhaps the most important lesson is the continued cross-fertilisation and impetus to the physics delivered both at CERN and at Fermilab by recent results. The g–2 experiment, an international collaboration between dozens of labs and universities in seven countries, has benefited from students who cut their teeth on LHC experiments. Likewise, students who have worked at the precision frontier at Fermilab are now armed with the expertise of making blinded ppm measurements and are keen to see how they can make new measurements at CERN, for example at the proposed MUonE experiment, or at other muon experiments due to come online this decade.

“It remains to be seen whether or not future refinement of the [SM] will call for the discerning scrutiny of further measurements of even greater precision,” concluded Combley, Farley and Picasso in their 1981 review – a wise comment that is now being addressed.

The post Muon g–2: the promise of a generation appeared first on CERN Courier.

]]>
Opinion The recent Fermilab result offers a moment to reflect on perseverance and collaboration. https://cerncourier.com/wp-content/uploads/2021/04/Screenshot-2021-04-15-at-12.14.53-1.png
An anomalous moment for the muon https://cerncourier.com/a/an-anomalous-moment-for-the-muon/ Wed, 14 Apr 2021 12:58:58 +0000 https://preview-courier.web.cern.ch/?p=92019 To confidently discover new physics in the muon g−2 anomaly requires that data-driven and lattice-QCD calculations of the Standard-Model value agree, write Thomas Blum, Luchang Jin and Christoph Lehner.

The post An anomalous moment for the muon appeared first on CERN Courier.

]]>
Hadronic light-by-light computation

A fermion’s spin tends to twist to align with a magnetic field – an effect that becomes dramatically macroscopic when electron spins twist together in a ferromagnet. Microscopically, the tiny magnetic moment of a fermion interacts with the external magnetic field through absorption of photons that comprise the field. Quantifying this picture, the Dirac equation predicts fermion magnetic moments to be precisely two in units of Bohr magnetons, e/2m. But virtual lines and loops add an additional 0.1% or so to this value, giving rise to an “anomalous” contribution known as “g–2” to the particle’s magnetic moment, caused by quantum fluctuations. Calculated to tenth order in quantum electrodynamics (QED), and verified experimentally to about two parts in 10¹⁰, the electron’s magnetic moment is one of the most precisely known numbers in the physical sciences. The magnetic moment of the muon, though also measured precisely, is in tension with the Standard Model.

Tricky comparison

The anomalous magnetic moment of the muon was first measured at CERN in 1959, and prior to 2021, was most recently measured by the E821 experiment at Brookhaven National Laboratory (BNL) 16 years ago. The comparison between theory and data is much trickier than for electrons. Being short-lived, muons are less suited to experiments with Penning traps, whereby stable charged particles are confined using static electric and magnetic fields, and the trapped particles are then cooled to allow precise measurements of their properties. Instead, experiments infer how quickly muon spins precess in a storage ring – a situation similar to the wobbling of a spinning top, where information on the muon’s advancing spin is encoded in the direction of the electron that is emitted when it decays. Theoretical calculations are also more challenging, as hadronic contributions are no longer so heavily suppressed when they emerge as virtual particles from the more massive muon.

All told, our knowledge of the anomalous magnetic moment of the muon is currently three orders of magnitude less precise than for electrons. And while everything tallies up, more or less, for the electron, BNL’s longstanding measurement of the magnetic moment of the muon is 3.7σ greater than the Standard Model prediction (see panel “Rising to the moment”). The possibility that the discrepancy could be due to virtual contributions from as-yet-undiscovered particles demands ever more precise theoretical calculations. This need is now more pressing than ever, given the increased precision of the experimental value expected in the next few years from the Muon g–2 collaboration at Fermilab in the US and other experiments such as the Muon g–2/EDM collaboration at J-PARC in Japan. Hotly anticipated results from the first data run at Fermilab’s E989 experiment were released on 7 April. The new result is completely consistent with the BNL value but with a slightly smaller error, leading to a slightly larger discrepancy of 4.2σ with the Standard Model when the measurements are combined (see Fermilab strengthens muon g-2 anomaly).

Hadronic vacuum polarisation

The value of the muon anomaly, aμ, is an important test of the Standard Model because currently it is known very precisely – to roughly 0.5 parts per million (ppm) – in both experiment and theory. QED dominates the value of aμ, but due to the non-perturbative nature of QCD it is strong interactions that contribute most to the error. The theoretical uncertainty on the anomalous magnetic moment of the muon is currently dominated by so-called hadronic vacuum polarisation (HVP) diagrams. In HVP, a virtual photon briefly explodes into a “hadronic blob”, before being reabsorbed, while the magnetic-field photon is simultaneously absorbed by the muon. While of order α² in QED, it involves all orders in QCD, making for very difficult calculations.

Rising to the moment


In the Standard Model, the magnetic moment of the muon is computed order-by-order in powers of α for QED (each virtual photon represents a factor of α), and to all orders in αs for QCD.

At the lowest order in QED, the Dirac term (pictured left) accounts for precisely two Bohr magnetons and arises purely from the muon (μ) and the real external photon (γ) representing the magnetic field.

 

At higher orders in QED, virtual Standard Model particles, depicted by lines forming loops, contribute to a fractional increase of aμ with respect to that value: the so-called anomalous magnetic moment of the muon. It is defined to be aμ = (g–2)/2, where g is the gyromagnetic ratio of the muon – the number of Bohr magnetons, e/2m, which make up the muon’s magnetic moment. According to the Dirac equation, g = 2, but radiative corrections increase its value.

The biggest contribution is from the Schwinger term (pictured left, O(α)) and higher-order QED diagrams.

 

aμQED = (116 584 718.931 ± 0.104) × 10–11
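For orientation, a two-line numerical check (a minimal sketch, using the CODATA value of the fine-structure constant) shows that the lowest-order Schwinger term alone accounts for about 99.6% of this QED total:

import math

alpha = 1 / 137.035999084          # fine-structure constant (CODATA)
schwinger = alpha / (2 * math.pi)  # O(alpha) Schwinger term
print(schwinger)                   # ~1.1614e-3, vs aμQED ~1.1658e-3 above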

Electroweak lines (pictured left) also make a well-defined contribution. These diagrams are suppressed by the heavy masses of the Higgs, W and Z bosons.

aμEW = (153.6 ± 1.0) × 10–11

The biggest QCD contribution is due to hadronic vacuum polarisation (HVP) diagrams. These are computed from leading order (pictured left, O(α²)), with one “hadronic blob” at all orders in αs (shaded) up to next-to-next-to-leading order (NNLO, O(α⁴), with three hadronic blobs) in the HVP.

 

 

Hadronic light-by-light scattering (HLbL, pictured left at O(α³) and all orders in αs (shaded)) makes a smaller contribution but with a larger fractional uncertainty.

 

 

 

Neglecting lattice-QCD calculations for the HVP in favour of those based on e+e– data and phenomenology, the total anomalous magnetic moment is given by

aμSM = aμQED + aμEW + aμHVP + aμHLbL = (116 591 810 ± 43) × 10–11.

This is somewhat below the combined value from the E821 experiment at BNL in 2004 and the E989 experiment at Fermilab in 2021.

aμexp = (116 592 061 ± 41) × 10–11

The discrepancy has roughly 4.2σ significance:

aμexp – aμSM = (251 ± 59) × 10–11.
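The quoted significance is simple arithmetic, assuming the experimental and theoretical uncertainties are uncorrelated and can be added in quadrature (all values in units of 10–11):

import math

diff = 116592061 - 116591810                        # experiment minus Standard Model
sigma = math.sqrt(41**2 + 43**2)                    # uncertainties added in quadrature
print(diff, round(sigma), round(diff / sigma, 1))   # 251 59 4.2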

Historically, and into the present, HVP is calculated using a dispersion relation and experimental data for the cross section for e+e– → hadrons. This idea was born of necessity almost 60 years ago, before QCD was even on the scene, let alone calculable. The key realisation is that the imaginary part of the vacuum polarisation is directly related to the hadronic cross section via the optical theorem of wave-scattering theory; a dispersion relation then relates the imaginary part to the real part. The cross section is determined over a relatively wide range of energies, in both exclusive and inclusive channels. The dominant contribution – about three quarters – comes from the e+e– → π+π– channel, which peaks at the rho meson mass, 775 MeV. Though the integral converges rapidly with increasing energy, data are needed over a relatively broad region to obtain the necessary precision. Above the τ mass, QCD perturbation theory hones the calculation.
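In the normalisation most commonly used in the g–2 literature (conventions differ between reviews), the leading-order dispersive master formula reads

a_\mu^{\mathrm{HVP,LO}} = \frac{1}{4\pi^{3}} \int_{s_{\mathrm{th}}}^{\infty} \mathrm{d}s \, K(s)\, \sigma^{0}_{e^{+}e^{-} \to \mathrm{hadrons}}(s), \qquad K(s) \to \frac{m_\mu^{2}}{3s} \ \ (s \gg m_\mu^{2}),

where σ0 is the bare cross section and the kernel K(s) weights low energies most heavily – which is why the π+π– channel dominates.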

Several groups have computed the HVP contribution in this way, and recently a consensus value has been produced as part of the worldwide Muon g–2 Theory Initiative. The uncertainty stands at about 0.58% and dominates the total theory error. It is worth noting that a significant part of the error arises from a tension between the most precise measurements, by the BaBar and KLOE experiments, around the rho-meson peak. New measurements, including those from experiments at Novosibirsk, Russia, and Japan’s Belle II experiment, may help resolve the inconsistency in the current data and reduce the error by a factor of two or so. 

The alternative approach, of calculating the HVP contribution from first principles using lattice QCD, is not yet at the same level of precision, but is getting there. Consistency between the two approaches will be crucial for any claim of new physics.

Lattice QCD

Kenneth Wilson formulated lattice gauge theory in 1974 as a means to rid quantum field theories of their notorious infinities – a process known as regulating the theory – while maintaining exact gauge invariance, but without using perturbation theory. Lattice QCD calculations involve the very large dimensional integration of path integrals in QCD. Because of confinement, a perturbative treatment including physical hadronic states is not possible, so the complete integral, regulated properly in a discrete, finite volume, is done numerically by Monte Carlo integration.
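As a toy illustration of the Monte Carlo idea – a one-dimensional Euclidean path integral rather than four-dimensional QCD, so purely pedagogical – a Metropolis simulation of a discretised harmonic oscillator fits in a few lines:

import math
import random

# Euclidean action of a lattice harmonic oscillator (spacing a, unit mass):
# S = sum_t [ (x[t+1]-x[t])^2/(2a) + a*x[t]^2/2 ]
N, a, step = 100, 0.5, 1.0
x = [0.0] * N

def dS(t, new):
    """Change in action when site t is moved from x[t] to new."""
    tp, tm = (t + 1) % N, (t - 1) % N
    old = x[t]
    kin = ((x[tp] - new)**2 + (new - x[tm])**2
           - (x[tp] - old)**2 - (old - x[tm])**2) / (2 * a)
    pot = a * (new**2 - old**2) / 2
    return kin + pot

acc, nmeas = 0.0, 0
for sweep in range(20000):
    for t in range(N):
        trial = x[t] + random.uniform(-step, step)
        d = dS(t, trial)
        if d < 0 or random.random() < math.exp(-d):  # Metropolis accept/reject
            x[t] = trial
    if sweep > 1000:                                 # discard thermalisation
        acc += sum(xi * xi for xi in x) / N
        nmeas += 1

print(acc / nmeas)  # estimates <x^2>; lattice QCD does the same in 4D with gauge fields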

Lattice QCD has made significant improvements over the last several years, both in methodology and invested computing time. Recently developed methods (which rely on low-lying eigenmodes of the Dirac operator to speed up calculations) have been especially important for muon–anomaly calculations. By allowing state-of-the-art calculations using physical masses, they remove a significant systematic: the so-called chiral extrapolation for the light quarks. The remaining systematic errors arise from the finite volume and non-zero lattice spacing employed in the simulations. These are handled by doing multiple simulations and extrapolating to the infinite-volume and zero-lattice-spacing limits. 

The HVP contribution can readily be computed using lattice QCD in Euclidean space with space-like four-momenta in the photon loop, thus yielding the real part of the HVP directly. The dispersive result is currently more precise (see “Off the mark” figure), but further improvements will depend on consistent new e+e– scattering datasets.

Hadronic vacuum-polarisation contribution

Rapid progress in the last few years has resulted in first lattice results with sub-percent uncertainty, closing in on the precision of the dispersive approach. Since these lattice calculations are very involved and still maturing, it will be crucial to monitor the emerging picture once several precise results with different systematic approaches are available. It will be particularly important to aim for statistics-dominated errors to make it more straightforward to quantitatively interpret the resulting agreement with the no-new-physics scenario or the dispersive results. In the shorter term, it will also be crucial to cross-check between different lattice and dispersive results using additional observables, for example based on the vector–vector correlators.

With improved lattice calculations in the pipeline from a number of groups, the tension between lattice QCD and phenomenological calculations may well be resolved before the Fermilab and J-PARC experiments announce their final results. Interestingly, there is a new lattice result with sub-percent precision (BMW 2020) that is in agreement both with the no-new-physics point within 1.3σ, and with the dispersive-data-driven result within 2.1σ. Barring a significant re-evaluation of the phenomenological calculation, however, HVP does not appear to be the source of the discrepancy with experiments. 

The next most likely Standard Model process to explain the muon anomaly is hadronic light-by-light scattering. Though it occurs less frequently since it includes an extra virtual photon compared to the HVP contribution, it is much less well known, with comparable uncertainties to HVP.

Hadronic light-by-light scattering

In hadronic light-by-light scattering (HLbL), the magnetic field interacts not with the muon, but with a hadronic “blob”, which is connected to the muon by three virtual photons. (The interaction of the four photons via the hadronic blob gives HLbL its name.) A miscalculation of the HLbL contribution has often been proposed as the source of the apparently anomalous measurement of the muon anomaly by BNL’s E821 collaboration.

Since the so-called Glasgow consensus (the fruit of a 2009 workshop) first established a value more than 10 years ago, significant progress has been made on the analytic computation of the HLbL scattering contribution. In particular, a dispersive analysis of the most important hadronic channels has been carried out, covering the leading pion-pole, sub-leading pion-loop and rescattering diagrams, including heavier pseudoscalars. These calculations are analogous in spirit to the dispersive HVP calculations, but are more complicated, and the experimental measurements are more difficult because form factors with one or two virtual photons are required. 

The project to calculate the HLbL contribution using lattice QCD began more than 10 years ago, and many improvements to the method have been made to reduce both statistical and systematic errors since then. Last year we published, with colleagues Norman Christ, Taku Izubuchi and Masashi Hayakawa, the first ever lattice-QCD calculation of the HLbL contribution with all errors controlled, finding aμHLbL, lattice = (78.7 ± 30.6 (stat) ± 17.7 (sys)) × 10–11. The calculation was not easy: it took four years and a billion core-hours on the Mira supercomputer at the Argonne Leadership Computing Facility. 

Our lattice HLbL calculations are quite consistent with the analytic and data-driven result, which is approximately a factor of two more precise. Combining the results leads to aμHLbL = (90 ± 17) × 10–11, which means the very difficult HLbL contribution cannot explain the Standard Model discrepancy with experiment. To make such a strong conclusion, however, it is necessary to have consistent results from at least two completely different methods of calculating this challenging non-perturbative quantity. 

New physics?

If current theory calculations of the muon anomaly hold up, and the new experiments reduce its uncertainty by the hoped-for factor of four, then a new-physics explanation will become impossible to ignore. The idea would be to add particles and interactions that have not yet been observed but may soon be discovered at the LHC or in future experiments. New particles would be expected to contribute to the anomaly through Feynman diagrams similar to the Standard Model topologies (see “Rising to the moment” panel).

Calculations of the anomalous magnetic moment of the muon are not finished

The most commonly considered new-physics explanation is supersymmetry, but the ever more stringent lower limits placed on the masses of superpartners by the LHC experiments make it increasingly difficult for supersymmetry to explain the muon anomaly. Other theories could do the job too. One popular idea that could also explain persistent anomalies in the b-quark sector is heavy scalar leptoquarks, which mediate a new interaction allowing leptons and quarks to change into each other. Another option involves scenarios whereby the Standard Model Higgs boson is accompanied by a heavier Higgs-like boson.

The calculations of the anomalous magnetic moment of the muon are not finished. As a systematically improvable method, we expect more precise lattice determinations of the hadronic contributions in the near future. Increasingly powerful algorithms and hardware resources will further improve precision on the lattice side, and new experimental measurements and analysis methods will do the same for dispersive studies of the HVP and HLbL contributions.

To confidently discover new physics requires that these two independent approaches to the Standard Model value agree. With the first new results on the experimental value of the muon anomaly in almost two decades showing perfect agreement with the old value, we anxiously await more precise measurements in the near future. Our hope is that the clash of theory and experiment will be the beginning of an exciting new chapter of particle physics, heralding new discoveries at current and future particle colliders. 

The post An anomalous moment for the muon appeared first on CERN Courier.

]]>
Feature To confidently discover new physics in the muon g−2 anomaly requires that data-driven and lattice-QCD calculations of the Standard-Model value agree, write Thomas Blum, Luchang Jin and Christoph Lehner. https://cerncourier.com/wp-content/uploads/2021/04/Muon-g-2_feature.jpg
Fermilab strengthens muon g-2 anomaly https://cerncourier.com/a/fermilab-strengthens-muon-g-2-anomaly/ Wed, 07 Apr 2021 15:21:48 +0000 https://preview-courier.web.cern.ch/?p=91977 The first run of the muon g-2 experiment at Fermilab increases the tension between measurements and theoretical calculations to 4.2 standard deviations.

The post Fermilab strengthens muon g-2 anomaly appeared first on CERN Courier.

]]>
Hotly anticipated results from the first run of the muon g-2 experiment at Fermilab were announced today, increasing the tension between measurements and theoretical calculations. The last time this ultra-precise measurement was performed, in a sequence of results at Brookhaven National Laboratory in the late 1990s and early 2000s, it disagreed with the Standard Model (SM) by 3.7σ. After almost eight years of work rebuilding the Brookhaven experiment at Fermilab and analysing its first data, the muon’s anomalous magnetic moment has been measured to be 116 592 040(54) × 10–11. The result is in agreement with the Brookhaven measurement and is 3.3σ greater than the SM prediction: 116 591 810(43) × 10–11. Combined with the Brookhaven result, the world-average value for the anomalous magnetic moment of the muon is 116 592 061(41) × 10–11, representing a 4.2σ departure from the SM.
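A naive inverse-variance combination reproduces the world average. The sketch below ignores correlations between the two experiments and takes the published BNL value of 116 592 089(63) × 10–11 as input – an assumption, since that number is not quoted in this article:

import math

# (value, uncertainty) in units of 1e-11
results = {"BNL": (116592089.0, 63.0), "FNAL": (116592040.0, 54.0)}
weights = {k: 1.0 / err**2 for k, (val, err) in results.items()}
mean = sum(weights[k] * results[k][0] for k in results) / sum(weights.values())
err = math.sqrt(1.0 / sum(weights.values()))
print(round(mean), round(err))  # 116592061 41, matching the quoted world average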

“Today is an extraordinary day, long awaited not only by us but by the whole international physics community,” says Graziano Venanzoni of the INFN, who is co-spokesperson of the Fermilab muon g-2 collaboration. “A large amount of credit goes to our young researchers who, with their talent, ideas and enthusiasm, have allowed us to achieve this incredible result.”

Today is an extraordinary day, long awaited not only by us but by the whole international physics community

Graziano Venanzoni

The Fermilab result was unblinded during a Zoom meeting on 25 February in the presence of around 200 collaborators from around the world. “We were all very excited to finally know our result and the meeting was very emotional,” says Venanzoni. The analysis took almost three years from data taking to the release of the result and the collaboration decided to unblind only when all the steps of the analysis were completed and there were no outstanding questions. Venanzoni adds that no further analysis was completed after the unblinding and the results are unchanged.

The previous Brookhaven measurement left physicists pondering whether the presence of unknown particles in loops could be affecting the muon’s behaviour. It was clear that further measurements were needed, but it turned out to be much cheaper to move the apparatus to Fermilab than to build a new, more precise experiment at Brookhaven. So in the summer of 2013, the experiment’s 14-m diameter, 1.45 T superconducting magnet was transported from Long Island to the suburbs of Chicago. The Fermilab team reassembled the magnet and spent a year “shimming” its field, making it three times more uniform than the one it created at Brookhaven. Along with a new beamline to deliver a purer muon beam, Fermilab’s muon g-2 reincarnation required entirely new instrumentation, along with new detectors and a control room.

When a muon travels through the strong magnetic field of a storage ring, the direction of its magnetic moment precesses at a rate set by the field strength and the muon’s g-factor. The Dirac equation predicts that all fermions have a g-factor equal to two. But higher-order loops add an “anomalous” moment, aμ = (g-2)/2, which can be calculated extremely precisely. At Fermilab, muons with an energy of about 3.1 GeV are vertically focused in the storage ring via quadrupoles, and their precession frequency is determined from decays to electrons using 24 electromagnetic calorimeters located along the ring’s inner circumference. The intense polarised muon beam suppresses the pion contamination that challenged the Brookhaven measurement, while new calibration systems and simulations allow better control of systematic uncertainties.
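The precession rate itself is easy to estimate. A minimal sketch in SI units (ignoring the beam-dynamics corrections the experiment actually applies) recovers the ~229 kHz “wiggle” frequency fitted from the decay-electron rate:

import math

a_mu = 1.16592040e-3    # anomalous magnetic moment (this measurement)
e = 1.602176634e-19     # elementary charge, C
m_mu = 1.883531627e-28  # muon mass, kg
B = 1.45                # storage-ring field, T

f_a = a_mu * e * B / (m_mu * 2 * math.pi)  # anomalous precession frequency
print(f"{f_a / 1e3:.0f} kHz")              # ~229 kHz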

It is so gratifying to finally be resolving this mystery

Chris Polly

The Fermilab muon g-2 collaboration took its first dataset in 2018, with over eight billion muon decays resulting in an overall uncertainty approximately 15% better than Brookhaven’s. Data analysis on the second and third runs is already under way, while a fourth run is ongoing and a fifth is planned. The collaboration is targeting a final precision of around 0.14 ppm – four times greater than the previous measurement.

“After the 20 years that have passed since the Brookhaven experiment ended, it is so gratifying to finally be resolving this mystery,” said Fermilab’s Chris Polly, a co-spokesperson for the current experiment and a graduate student on the Brookhaven experiment. “So far we have analysed less than 6% of the data that the experiment will eventually collect. Although these first results are telling us that there is an intriguing difference with the Standard Model, we will learn much more in the next couple of years.”

Theory baseline
Developments in the theory community are equally vital. The Fermilab muon g-2 collaboration takes as its theory baseline the value for aμ obtained last year by the Muon g-2 Theory Initiative. Uncertainties in the calculation are dominated by hadronic contributions, in particular a term called the hadronic vacuum polarisation (HVP). The Theory Initiative incorporates the HVP value obtained by well-established “dispersive methods”, which combine fundamental properties of quantum field theory with experimental measurements of low-energy hadronic processes. An alternative approach gaining traction is to calculate the HVP contribution using lattice QCD. In a paper published in Nature today, one group reports lattice calculations of HVP which, if included in the theory result, would significantly reduce the discrepancy between the experimental and theoretical values for aμ. The result is in 2σ tension with the value obtained from the dispersive approach, and is currently dominated by systematic uncertainties stemming from approximations used in the lattice calculations, say Muon g-2 Theory Initiative members.

“This being the first lattice result at sub-percent precision, it is premature to draw firm conclusions from this comparison,” reads a statement from the Muon g-2 Theory Initiative steering committee. “Indeed, given the complexity of the computations, independent results from different lattice groups with commensurate uncertainties are needed to test and check the lattice calculations against each other. Being entirely based on Standard Model theory, once the lattice results are well tested and precise enough, they will play an important role in understanding how new physics enters into the discrepancy.”

The post Fermilab strengthens muon g-2 anomaly appeared first on CERN Courier.

]]>
News The first run of the muon g-2 experiment at Fermilab increases the tension between measurements and theoretical calculations to 4.2 standard deviations. https://cerncourier.com/wp-content/uploads/2021/04/FNAL-muon-g-2.hr_.jpg
Tooling up to hunt dark matter https://cerncourier.com/a/tooling-up-to-hunt-dark-matter/ Thu, 04 Mar 2021 13:33:55 +0000 https://preview-courier.web.cern.ch/?p=91450 The TOOLS 2020 conference attracted around 200 phenomenologists and experimental physicists to work on numerical tools for dark-matter models, and more.

The post Tooling up to hunt dark matter appeared first on CERN Courier.

]]>
Bullet Cluster

The past century has seen ever stronger links forged between the physics of elementary particles and the universe at large. But the picture is mostly incomplete. For example, numerous observations indicate that 87% of the matter of the universe is dark, suggesting the existence of a new matter constituent. Given a plethora of dark-matter candidates, numerical tools are essential to advance our understanding. Fostering cooperation in the development of such software, the TOOLS 2020 conference attracted around 200 phenomenologists and experimental physicists for a week-long online workshop in November.

The viable mass range for dark matter spans 90 orders of magnitude, while the uncertainty about its interaction cross section with ordinary matter is even larger (see “Theoretical landscape” figure). Dark matter may be new particles belonging to theories beyond the Standard Model (BSM), an aggregate of new or SM particles, or very heavy objects such as primordial black holes (PBHs). On the latter subject, Jérémy Auffinger (IP2I Lyon) updated TOOLS 2020 delegates on codes for very light PBHs, noting that “BlackHawk” is the first open-source code for Hawking-radiation calculations.
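The physics input behind such codes is Hawking’s temperature formula, which a few lines of Python can evaluate. This is a minimal sketch only – BlackHawk itself computes full emission spectra, greybody factors included:

import math

hbar, c, G, kB = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23

def hawking_temperature_GeV(mass_grams):
    """T = hbar c^3 / (8 pi G M kB), converted from kelvin to GeV."""
    T_kelvin = hbar * c**3 / (8 * math.pi * G * kB * mass_grams * 1e-3)
    return T_kelvin * 8.617333262e-5 * 1e-9  # K -> eV -> GeV

print(hawking_temperature_GeV(1e13))  # ~1 GeV for a 1e13 g primordial black hole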

Flourishing models

Weakly interacting massive particles (WIMPs) have enduring popularity as dark-matter candidates, and are amenable to search strategies ranging from colliders to astrophysical observations. In the absence of any clear detection of WIMPs at the electroweak scale, the number of models has flourished. Above the TeV scale, these include general hidden-sector models, FIMPs (feebly interacting massive particles), SIMPs (strongly interacting massive particles), super-heavy and/or composite candidates and PBHs. Below the GeV scale, besides FIMPs, candidates include the QCD axion, more generic ALPs (axion-like particles) and ultra-light bosonic candidates. ALPs are a class of models that received particular attention at TOOLS 2020, and are now being sought in fixed-target experiments across the globe.

For each dark-matter model, astroparticle physicists must compute the theoretical predictions and characteristic signatures of the model and confront those predictions with the experimental bounds to select the model parameter space that is consistent with observations. To this end, the past decade has seen the development of a huge variety of software – a trend mapped and encouraged by the TOOLS conference series, initiated by Fawzi Boudjema (LAPTh Annecy) in 1999, which has brought the community together every couple of years since.

Models connecting dark matter with collider experiments are becoming ever more optimised to the needs of users

Three continuously tested codes currently dominate generic BSM dark-matter model computations. Each allows for the computation of the relic density from freeze-out and predictions for direct and indirect detection, often up to next-to-leading-order corrections. Agreement between them is kept below the per-cent level. “micrOMEGAs” is by far the most used code, and is capable of predicting observables for any generic model of WIMPs, including those with multiple dark-matter candidates. “DarkSUSY” is more oriented towards supersymmetric theories, but it can be used for generic models as the code has a very convenient modular structure. Finally, “MadDM” can compute WIMP observables for any BSM model from MeV to hundreds of TeV. As MadDM is a plugin of MadGraph, it inherits unique features such as its automatic computation of new dark-matter observables, including indirect-detection processes with an arbitrary number of final-state particles and loop-induced processes. This is essential for analysing sharp spectral features in indirect-detection gamma-ray measurements that cannot be mimicked by any known astrophysical background.
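Behind all three packages sits the same physics. The textbook freeze-out estimate that they refine numerically fits in two lines (order-of-magnitude only, with the canonical “WIMP-miracle” cross section as an assumed input):

# thermal-relic estimate: Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>
sigma_v = 3e-26         # cm^3/s, canonical thermal annihilation cross section
print(3e-27 / sigma_v)  # ~0.1, close to the observed dark-matter abundance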

Interaction cross sections versus mass

Both micrOMEGAs and MadDM permit the user to confront theories with recast experimental likelihoods for several direct and indirect detection experiments. Jan Heisig (UCLouvain) reported that this is a work in progress, with many more experimental data sets to be included shortly. Torsten Bringmann (University of Oslo) noted that a strength of DarkSUSY is the modelling of qualitatively different production mechanisms in the early universe. Alongside the standard freeze-out mechanism, several new scenarios can arise, such as freeze-in (FIMP models, as chemical and kinetic equilibrium cannot be achieved), dark freeze-out, reannihilation and “cannibalism”, to name just a few. Freeze-in is now supported by micrOMEGAs.

Models connecting dark matter with collider experiments are becoming ever more optimised to the needs of users. For example, micrOMEGAs interfaces with SModelS, which is capable of quickly applying all possible LHC-relevant supersymmetric searches. The software also includes long-lived particles, as commonly found in FIMP models. As MadDM is embedded in MadGraph, noted Benjamin Fuks (LPTHE Paris), tools such as MadAnalysis may be used to recast CMS and ATLAS searches. Celine Degrande (UCLouvain) described another nice tool, FeynRules, which produces model files in both the MadDM and micrOMEGAs formats given the Lagrangian for the BSM model, providing a very useful automatised chain from the model directly to the dark-matter observables, high-energy predictions and comparisons with experimental results. Meanwhile, MadDump expands MadGraph’s predictions and detector simulations from the high-energy collider limits to fixed-target experiments such as NA62. To complete a vibrant landscape of development efforts, Tomas Gonzalo (Monash) presented the GAMBIT collaboration’s work to provide tools for global fits to generic dark-matter models.

A phenomenologist’s dream

Huge efforts are underway to develop a computational platform to study new directions in experimental searches for dark matter, and TOOLS 2020 showed that we are already very close to the phenomenologist’s dream for WIMPs. TOOLS 2020 wasn’t just about dark matter either – it also covered developments in Higgs and flavour physics, precision tests and general fitting, and other tools. Interested parties are welcome to join in the next TOOLS conference due to take place in Annecy in 2022.

The post Tooling up to hunt dark matter appeared first on CERN Courier.

]]>
Meeting report The TOOLS 2020 conference attracted around 200 phenomenologists and experimental physicists to work on numerical tools for dark-matter models, and more. https://cerncourier.com/wp-content/uploads/2021/02/CCMarApr21_FN_bulletcluster.jpg
In search of WISPs https://cerncourier.com/a/in-search-of-wisps/ Thu, 04 Mar 2021 13:17:30 +0000 https://preview-courier.web.cern.ch/?p=91468 Experiments such as MADMAX, IAXO and ALPS II are expanding the search for axions and other weakly interacting ‘slim’ particles that could hail from far above the TeV scale.

The post In search of WISPs appeared first on CERN Courier.

]]>
The ALPS II experiment at DESY

The Standard Model (SM) cannot be the complete theory of particle physics. Neutrino masses evade it. No viable dark-matter candidate is contained within it. And under its auspices the electric dipole moment of the neutron, experimentally compatible with zero, requires the cancellation of two non-vanishing SM parameters that are seemingly unrelated – the strong-CP problem. The physics explaining these mysteries may well originate from new phenomena at energy scales inaccessible to any collider in the foreseeable future. Fortunately, models involving such scales can be probed today and in the next decade by a series of experiments dedicated to searching for very weakly interacting slim particles (WISPs).

WISPs are pseudo Nambu–Goldstone bosons (pNGBs) that arise automatically in extensions of the SM from global symmetries which are broken both spontaneously and explicitly. NGBs are best known for being “eaten” by the longitudinal degrees of freedom of the W and Z bosons in electroweak gauge-symmetry breaking, which underpins the Higgs mechanism, but theorists have also postulated a bevy of pNGBs that get their tiny masses by explicit symmetry breaking and are potentially discoverable as physical particles. Typical examples arising in theoretically well-motivated grand-unified theories are axions, flavons and majorons. Axions arise from a broken “Peccei–Quinn” symmetry and could potentially explain the strong-CP problem, while flavons and majorons arise from broken flavour and lepton symmetries.

The Morpurgo magnet

Being light and very weakly interacting, WISPs would be non-thermally produced in the early universe and thus remain non-relativistic during structure formation. Such particles would inevitably contribute to the dark matter of the universe. WISPs are now the target of a growing number and type of experimental searches that are complementary to new-physics searches at colliders.

Among theorists and experimentalists alike, the axion is probably the most popular WISP. Recently, massive efforts have been undertaken to improve the calculations of model-dependent relic-axion production in the early universe. This has led to a considerable broadening of the mass range compatible with the explanation of dark matter by axions. The axion could make up all of the dark matter in the universe for a symmetry-breaking scale fa between roughly 10⁸ and 10¹⁹ GeV (the lower limit being imposed by astrophysical arguments, the upper one by the Planck scale), corresponding to axion masses from 10–13 eV to 10 meV. For other light pNGBs, generically dubbed axion-like particles (ALPs), the parameter range is even broader. With many plausible relic-ALP-production mechanisms proposed by theorists, experimentalists need to cover as much of the unexplored parameter range as possible.

Although the strengths of the interactions between axions or ALPs and SM particles are very weak, being inversely proportional to fa, several strategies for observing them are available. Limits and projected sensitivities span several orders of magnitude in the mass-coupling plane (see “The field of play” figure).

IAXO’s design profited greatly from experience with the ATLAS toroid

Since axions or ALPs can usually decay to two photons, an external static magnetic field can substitute one of the two photons and induce axion-to-photon conversion. Originally proposed by Pierre Sikivie, this inverse Primakoff effect can classically be described by adding source terms proportional to B and E to Maxwell’s equations. Practically, this means that inside a static homogeneous magnetic field the presence of an axion or ALP field induces electric-field oscillations – an effect readily exploited by many experiments searching for WISPs. Other processes exploited in some experimental searches and suspected to lead to axion production are their interactions with electrons, leading to axion bremsstrahlung, and their interactions with nucleons or nuclei, leading to nucleon-axion bremsstrahlung or oscillations of the electric dipole moment of the nuclei or nucleons.

The potential to make fundamental discoveries with small-scale experiments is a significant appeal of experimental WISP physics. However, the most solidly theoretically motivated WISP parameter regions and physics questions require setups that go well beyond “table-top” dimensions. They target WISPs that flow through the galactic halo, shine from the Sun, or spring into existence when lasers pass through strong magnetic fields in the laboratory.

Dark-matter halo

Haloscopes target the detection of dark-matter WISPs in the halo of our galaxy, where non-relativistic cold-dark-matter axions or ALPs induce electric field oscillations as they pass through a magnetic field. The frequency of the oscillations corresponds to the axion mass, and the amplitude to B/fa. When limits or projections are given for these kinds of experiments, it is assumed that the particle under scrutiny homogeneously makes up all of the dark matter in the universe, introducing significant cosmological model dependence.

Axion–photon coupling versus axion mass plane

The furthest developed currently operating haloscopes are based on resonant enhancement of the axion-induced electric-field oscillations in tunable resonant cavities. Using this method, the presently running ADMX project at the University of Washington has the sensitivity to discover dark-matter axions with masses of a few µeV. Nuclear resonance methods could be sensitive to halo dark-matter axions with mass below 1 neV and “fuzzy” dark-matter ALPs down to 10–22 eV within the next decade, for example at the CASPEr experiments being developed at the University of Mainz and Boston University. Meanwhile, experiments based on classical LC circuits, such as ABRACADABRA at MIT, are being designed to measure ALP- or axion-induced magnetic field oscillations in the centre of a toroidal magnet. These could be sensitive in a mass range between 10 neV and 1 µeV.

ALPS II is the first laser-based setup to fully exploit resonance techniques

For dark-matter axions with masses up to approximately 50 µeV, promising developments in cavity technologies such as multiple matched cavities and superconducting or dielectric cavities are ongoing at several locations, including at CAPP in South Korea, the University of Western Australia, INFN Legnaro and the RADES detector, which has taken data as part of the CAST experiment at CERN. Above ~40 µeV, however, the cavity concept becomes more and more challenging, as sensitivity scales with the volume of the resonant cavity, which decreases dramatically with increasing mass (as roughly 1/ma³). To reach sensitivity at higher masses, in the region of a few hundred µeV, a novel “dielectric haloscope” is being developed by the MADMAX (Magnetized Disk and Mirror Axion experiment) collaboration for potential installation at DESY. It exploits the fact that static magnetic-field boundaries between media with different dielectric constants lead to tiny power emissions that compensate the discontinuity in the axion-induced electric fields in neighbouring media. If multiple surfaces are stacked in front of each other, this should lead to constructive interference, boosting the emitted power from the expected axion dark matter in the desired mass range to detectable levels. Other novel haloscope concepts, based on meta-materials (“plasma haloscopes”, for example) and topological insulators, are also currently being developed. These could have sensitivity to even higher axion masses, up to a few meV.

Staying in tune

In principle, axion-dark-matter detection should be relatively simple, given the very high number density of particles – approximately 3 × 10¹³ axions/cm³ for an axion mass of 10 µeV – and the well-established technique of resonant axion-to-photon conversion. But, as the axion mass is unknown, the experiments must be painstakingly tuned to each possible mass value in turn. After about 15 years of steady progress, the ADMX experiment has reached QCD-axion dark-matter sensitivity in the mass regime of a few µeV.
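That number density is simply the assumed halo energy density divided by the axion mass. A two-line check, taking a local dark-matter density of 0.3 GeV/cm³ (a standard but assumed value):

rho = 0.3e9       # local dark-matter density in eV/cm^3 (assumption: 0.3 GeV/cm^3)
m_a = 10e-6       # axion mass in eV
print(rho / m_a)  # ~3e13 axions per cm^3, as quoted above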

ADMX uses tunable microwave resonators inside a strong solenoidal magnetic field, and modern quantum sensors for readout. Unfortunately, this technology is not scalable to the higher axion masses preferred, for example, by cosmological models in which Peccei–Quinn symmetry breaking happened after an inflationary phase of the universe. That’s where MADMAX comes in. The collaboration is working on the dielectric-haloscope concept – initiated and led by scientists at the Max Planck Institute for Physics in Munich – to investigate the mass region around 100 µeV.

Astrophysical hints

Globular clusters

Weakly interacting slim particles (WISPs) could be produced in hot astrophysical plasmas and transport energy out of stars, including the Sun, stellar remnants and other dense sources. Observed lifetimes and energy-loss rates can therefore probe their existence. For the axion, or an axion-like particle (ALP) with sub-MeV mass that couples to nucleons, the most stringent limit, fa > ~10⁸ GeV, stems from the duration of the neutrino signal from the progenitor neutron star of Supernova 1987A.

Tantalisingly, there are stellar hints from observations of red giants, helium-burning stars, white dwarfs and pulsars that seem to indicate energy losses with slight excesses with respect to those expected from standard energy emission by neutrinos. These hints may be explained by axions with masses below 100 meV or sub-keV-mass ALPs with a coupling to both electrons and photons.

Other observations suggest that TeV photons from distant blazars are less absorbed than expected by standard interactions with extragalactic background light – the so-called transparency hint. This could be explained by the conversion of photons into ALPs in the magnetic field of the source, and back to photons in astrophysical magnetic fields. Interestingly, these would have about the same ALP–photon coupling strength as indicated by the observed stellar anomalies, though with a mass that is incompatible with both ALPs which can explain dark matter and with QCD axions (see “The field of play” figure).

MADMAX will use a huge ~9 T superconducting dipole magnet with a bore of about 1.35 m and a stored energy of roughly 480 MJ. Such a magnet has never been built before. The MADMAX collaboration teamed up with CEA-IRFU and Bilfinger-Noell and successfully worked out a conceptual design. First steps towards qualifying the conductor are under way. The plan is for the magnet to be installed at DESY inside the old iron yoke of the former HERA experiment H1. DESY is already preparing the required infrastructure, including the liquid-helium supply necessary to cool the magnet. R&D for the dielectric booster, with up to 80 adjustable 1.25 m2 disks, is in full swing.

A first prototype, containing a more modest 20 discs of 30 cm diameter, will be tested in the “Morpurgo” magnet at CERN during future accelerator shutdowns (see “Haloscope home” figure). With a peak field strength of 1.6 T, its dipole field will allow new ALP-dark-matter parameter regions to be probed, though the main purpose of the prototype is to demonstrate the operation of the booster system in cryogenic surroundings inside a magnetic field. The MADMAX collaboration is extremely happy to have found a suitable magnet at CERN for such tests. If sufficient funds can be acquired within the next two to three years for magnet construction, and provided that the prototype efforts at CERN are successful, MADMAX could start data taking at DESY in 2028.

While direct dark-matter search experiments like ADMX and MADMAX offer by far the highest sensitivity for axion searches, this is based on the assumption that the dark matter problem is solved by axions, and if no signal is discovered any claim of an exclusion limit must rely on specific cosmological assumptions. Therefore, other less model-dependent experiments, such as helioscopes or light shining through a wall (LSW) experiments, are extremely beneficial in addition to direct dark-matter searches.

Solar axions

In contrast to dark-matter axions or ALPs, those produced in the Sun or in the laboratory should have considerable momentum. Indeed, solar axions or ALPs should have energies of a few keV, corresponding to the temperature at which they are produced. These could be detected by helioscopes, which seek to use the inverse Primakoff effect to convert solar axions or ALPs into X-rays in a magnet pointed towards the Sun, as at the CERN Axion Solar Telescope (CAST) experiment. Helioscopes could cover the mass range compatible with the simplest axion models, in the vicinity of 10 meV, and could be sensitive to ALPs with masses below 1 eV without any tuning at all.
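The helioscope figure of merit is the coherent axion–photon conversion probability, P ≈ (gaγBL/2)² for a sufficiently light ALP. A minimal sketch with CAST-like numbers (the coupling value is an illustrative assumption):

# conversion probability in natural units: P = (g*B*L/2)^2 in the massless limit
T_TO_EV2 = 195.35       # 1 tesla expressed in eV^2 (natural units)
M_TO_INV_EV = 5.0677e6  # 1 metre expressed in 1/eV

g = 1e-19               # axion-photon coupling in 1/eV (1e-10 /GeV, illustrative)
B = 9.0 * T_TO_EV2      # 9 T field
L = 9.26 * M_TO_INV_EV  # 9.26 m magnet length

print((g * B * L / 2) ** 2)  # ~2e-17: hence the need for large B and L and ultra-low backgrounds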

The CAST helioscope, which reused an LHC prototype dipole magnet, has driven this field in the past decade, and provides the most sensitive exclusion limits to date. Going beyond CAST calls for a much larger magnet. For the next-generation International Axion Observatory (IAXO) helioscope, CERN members of the international collaboration worked out a conceptual design for a 20 m-long toroidal magnet with eight 60 cm-diameter bores. IAXO’s design profited greatly from experience with the ATLAS toroid.

BabyIAXO helioscope

In the past three years, the collaboration, led by the University of Zaragoza, has been concentrating its activities on the BabyIAXO prototype in order to finesse the magnet concept, the X-ray telescopes necessary to focus photons from solar axion conversion and the low-background detectors. BabyIAXO will increase the signal-to-noise ratio of CAST by two orders of magnitude; IAXO by a further two orders of magnitude.

In December 2020 the directorates of CERN and DESY signed a collaboration agreement regarding BabyIAXO: CERN will provide the detailed design of the prototype magnet including its cryostat, while DESY will design and prepare the movable platform and infrastructure (see “Prototype” figure). BabyIAXO will be located at DESY in Hamburg. The collaboration hopes to attract the remaining funds for BabyIAXO so construction can begin in 2021 and first science runs could take place in 2025. The timeline for IAXO will depend strongly on experiences during the construction and operation of BabyIAXO, with first light potentially possible in 2028.

Light shining through a wall

In contrast to haloscopes, helioscopes do not rely on the assumption that all dark matter is made up by axions. But light-shining-through-wall (LSW) experiments are even less model dependent with respect to ALP production. Here, intense laser light could be converted to axions or ALPs inside a strong magnetic field by the Primakoff effect. Behind a light-impenetrable wall they would be re-converted to photons and detected at the same wavelength as the laser light. The disadvantage of LSW experiments is that they only reach sensitivity to ALPs with a mass up to a few hundred µeV with comparably high coupling to photons. However, this is sensitive enough to test the parameter range consistent with the transparency hint and parts of the mass range consistent with the stellar hints (see “Astrophysical hints” panel).

The Any Light Particle Search (ALPS II) at DESY follows this approach. Since any ALPs observed via light shining through a wall would be generated in the experiment itself, no assumptions about their production are needed. ALPS II is based on 24 modified superconducting dipole magnets, straightened by brute-force deformation after their former service in the proton ring of the HERA complex. With the help of two 124 m-long high-finesse optical resonators, encompassed by the magnets on both sides of the wall, ALPS II is also the first laser-based setup to fully exploit resonance techniques. Two readout systems capable of measuring a 1064 nm photon flux down to a rate of 2 × 10–5 s–1 have been developed by the collaboration. Compared to the present best LSW limits provided by OSQAR at CERN, the signal-to-noise ratio will rise by no less than 12 orders of magnitude at ALPS II. Nevertheless, MADMAX would surpass ALPS II in the sensitivity for the axion-photon coupling strength by more than three orders of magnitude. This is the price to pay for a model-independent experiment – however, ALPS II principally targets not dark-matter candidates but ALPs indicated by astrophysical phenomena.

Tunnelling ahead

The installation of the 24 dipole magnets in a straight section of the HERA tunnel was completed in 2020. Three clean rooms at both ends and in the centre of the experiment were also installed, and optics commissioning is under way. A first science run is expected for autumn 2021.

ALPS II

In the overlapping mass region up to 0.1 meV, the sensitivities of ALPS II and BabyIAXO are roughly equal. In the event of a discovery, this would provide a unique opportunity to study the new WISP. Excitingly, a similar case might be realised for IAXO: combining the optics and detectors of ALPS II with simplified versions of the dipole magnets being studied for FCC-hh would provide an LSW experiment with “IAXO sensitivity” regarding the axion-photon coupling, albeit in a reduced mass range. This has been outlined as the putative JURA (Joint Undertaking on Research for Axions) experiment in the context of the CERN-led Physics Beyond Colliders study.

The past decade has delivered significant developments in axion and ALP theory and phenomenology. This has been complemented by progress in experimental methods to cover a large fraction of the interesting axion and ALP parameter range. In close collaboration with universities and institutes across the globe, CERN, DESY and the Max Planck society will together pave the road to the exciting results that are expected this decade.

The post In search of WISPs appeared first on CERN Courier.

]]>
Feature Experiments such as MADMAX, IAXO and ALPS II are expanding the search for axions and other weakly interacting ‘slim’ particles that could hail from far above the TeV scale. https://cerncourier.com/wp-content/uploads/2021/02/CCMarApr21_WISPs_ALPS.jpg
Deep learning tailors supersymmetry searches https://cerncourier.com/a/deep-learning-tailors-supersymmetry-searches/ Tue, 23 Feb 2021 10:13:42 +0000 https://preview-courier.web.cern.ch/?p=91432 CMS has released a new search for charginos and neutralinos, with sensitivity boosted by the use of parametric machine learning.

The post Deep learning tailors supersymmetry searches appeared first on CERN Courier.

]]>
CMS charginos neutralinos

Supersymmetry is a popular extension of the Standard Model (SM) that has the potential to resolve several open questions in particle physics. As a result of a postulated new symmetry between fermions and bosons, the theory predicts a “superpartner” for each SM particle. The lightest of these new particles could be what makes up dark matter, while additional new superpartners could resolve the question of why the Higgs boson has a relatively low mass. Many searches for supersymmetry have already been performed by the ATLAS and CMS collaborations, but most have focused on strongly interacting superpartners that could be very heavy. It is possible, however, that electroweak production of supersymmetric particles is the dominant or only source of superpartners accessible at the LHC.

Supersymmetric events are expected to have an imbalance in transverse momentum

The unprecedented data volume of LHC Run 2 provides a unique opportunity to search for rare processes such as the electroweak production of supersymmetric particles. A recent result from the CMS collaboration uses the Run-2 dataset to search for the superpartners of the electroweak bosons, called charginos and neutralinos. Events with three or more charged leptons, or two leptons of the same charge, were analysed. Such events are relatively rare in the SM, and, if charginos and neutralinos exist, they are predicted to create an excess of events with these topologies. Supersymmetric events are also expected to have an apparent imbalance in transverse momentum, because the lightest supersymmetric particle should evade detection. Correlations between the multiple leptons in the events, and between the leptons and the momentum imbalance, can be used to define a set of discriminating variables sensitive to chargino and neutralino production. These variables are used to assign the selected events to several search regions that address different possible signals of the production and decay of supersymmetric particles. Making such a multivariate binning optimal in every corner of phase space, and for any possible manifestation of supersymmetry, is a challenging task.

Parametric machine learning

Events with three electrons and/or muons provide the bulk of the sensitivity by striking the best balance between signal purity and yields. A novel search approach is used that aims at better capturing the complexity of the events than is possible using predetermined search regions: parametric machine learning. The aim is to achieve the maximum sensitivity for any parameter choice nature might have made, as supersymmetry is not one model, but a class of models. Variations in the masses of the superpartners can substantially modify the observable signatures. Parametric neural networks were trained to find charginos and neutralinos with the unknown mass parameters added as input variables to the training. The network can evaluate the data at fixed values of the mass parameters, effectively performing a dedicated search for a signal with given masses in the data (figure 1).
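In code, the idea is simply to append the model parameters to each event’s feature vector during training and to fix them to a chosen hypothesis at evaluation time. The minimal PyTorch sketch below uses made-up feature counts and synthetic data: it illustrates the general parametric-network pattern (in the spirit of Baldi et al. 2016), not the CMS implementation.

import torch
import torch.nn as nn

class ParametricNet(nn.Module):
    def __init__(self, n_features, n_params):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features + n_params, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),              # one logit: signal vs background
        )

    def forward(self, x, params):
        # The mass hypotheses enter as ordinary input features.
        return self.net(torch.cat([x, params], dim=-1))

# Synthetic stand-ins: 10 kinematic features, 2 mass parameters per event.
n = 4096
x = torch.randn(n, 10)
masses = torch.rand(n, 2) * 1000.0         # e.g. chargino/neutralino masses in GeV
y = torch.randint(0, 2, (n, 1)).float()    # 1 = signal, 0 = background

# A key detail: background events are assigned masses drawn from the same
# grid as the signal, so the network cannot classify on the parameters alone.
model = ParametricNet(10, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(5):                         # placeholder training loop
    opt.zero_grad()
    loss_fn(model(x, masses), y).backward()
    opt.step()

# Evaluation: score events under one fixed mass hypothesis, effectively
# performing a dedicated search for that signal point.
hypothesis = torch.tensor([[1200.0, 100.0]]).expand(n, 2)
scores = torch.sigmoid(model(x, hypothesis))

Scanning the hypothesis over the mass grid then yields a tailored discriminant at every point of the plane shown in figure 1.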

The parametric neural network, together with a new optimised event binning of the other event categories, makes this analysis the most powerful search for charginos and neutralinos carried out by the CMS collaboration so far. The neural network alone results in a sensitivity boost that ranges from 30% to more than 100%. Substantial improvements occur for models where the decays of the charginos and neutralinos are mediated by the superpartners of leptons. The improvements become even larger when the mass splitting between the sleptons and the chargino is relatively small. The data show no evidence for electroweak superpartner production, and chargino masses up to 1450 GeV are excluded at 95% confidence, compared to 1150 GeV in earlier CMS searches for this scenario.

The post Deep learning tailors supersymmetry searches appeared first on CERN Courier.

]]>
News CMS has released a new search for charginos and neutralinos, with sensitivity boosted by the use of parametric machine learning. https://cerncourier.com/wp-content/uploads/2021/02/1311292_01.jpg
Implementing a vision for CERN’s future https://cerncourier.com/a/implementing-a-vision-for-cerns-future/ Wed, 27 Jan 2021 08:49:14 +0000 https://preview-courier.web.cern.ch/?p=90738 The 2020 update of the European strategy for particle physics forms the basis of CERN’s objectives for the next five years, explains CERN Director-General Fabiola Gianotti.

The post Implementing a vision for CERN’s future appeared first on CERN Courier.

]]>
Wandering the immeasurable

The European strategy for particle physics (ESPP), updated by the CERN Council in June 2020, lays the foundations for a bright future for accelerator-based particle physics. Its 20 recommendations – covering the components of a compelling scientific programme for the short, medium and long terms, as well as the societal and environmental impact of the field, public engagement and support for early-career scientists – set out an ambitious but prudent approach to realise the post-LHC future in Europe within the worldwide context.

Full exploitation of the LHC and its high-luminosity upgrade is a major priority, both in terms of its physics potential and its role as a springboard to a future energy-frontier machine. The ESPP identified an electron–positron Higgs factory as the highest priority next collider. It also recommended that Europe, together with its international partners, investigate the technical and financial feasibility of a future hadron collider at CERN with a centre-of-mass energy of at least 100 TeV, with an electron–positron Higgs and electroweak factory as a possible first stage. Reinforced R&D on a range of accelerator technologies is another ESPP priority, as is continued support for a diverse scientific programme.

Implementation starts now

It is CERN’s role, in strong collaboration with other laboratories and institutions in Europe and beyond, to help translate the visionary scientific objectives of the ESPP update into reality. CERN’s recently approved medium-term plan (MTP), which covers the period 2021–2025, provides a first implementation of the ESPP vision.

Fabiola Gianotti

Starting this year, CERN will deploy efforts on the feasibility study for a Future Circular Collider (FCC) as recommended by the ESPP update. One of the first goals is to verify that there are no showstoppers to building a 100 km tunnel in the Geneva region, and to gather pledges for the necessary funds to build it. The estimated FCC cost cannot be met only from CERN’s budget, and special contributions from non-Member States as well as new funding mechanisms will be required. Concerning the enabling technologies, the first priority is to demonstrate that the superconducting high-field magnets needed for 100 TeV (or more) proton–proton collisions in a 100 km tunnel can be made available on the mid-century time scale. To this end CERN is implementing a reinforced magnet R&D programme in partnership with industry and other institutions in Europe and beyond. Fresh resources will be used to explore low- and high-temperature superconducting materials, to develop magnet models towards industrialisation and cost reduction, and to build the needed test infrastructure. These studies will also have vast applications outside the field. Minimising the environmental impact of the tunnel, the colliders and detectors will be another major focus, as well as maximising the benefits to society from the transfer of FCC-related technologies.

The 2020 MTP includes resources to continue R&D on key technologies for the Compact Linear Collider and for the establishment of an international design study for a muon collider. Further advanced accelerator technologies will be pursued, as well as detector R&D and a new initiative on quantum technologies.

Continued progress requires a courageous, global experimental venture involving all the tools at our disposal

Scientific diversity is an important pillar of CERN’s programme and will continue to be supported. Resources for the CERN-hosted Physics Beyond Colliders study have been increased in the 2020 MTP and developments for long-baseline neutrino experiments in the US and Japan will continue at an intense pace via the CERN Neutrino Platform.

Immense impact

The discovery of the Higgs boson, a particle with unprecedented characteristics, has contributed to turning the focus of particle physics towards deep structural questions. Furthermore, many of the open questions in the microscopic world are increasingly intertwined with the universe at large. Continued progress on this rich and ambitious path of fundamental exploration requires a courageous, global experimental venture involving all the tools at our disposal: high-energy colliders, low-energy precision tests, observational cosmology, cosmic rays, dark-matter searches, gravitational waves, neutrinos, and many more. High-energy colliders, in particular, will continue to be an indispensable and irreplaceable tool to scrutinise nature at the smallest scales. If the FCC can be realised, its impact will be immense, not only on CERN’s future, but also on humanity’s knowledge.


The post Implementing a vision for CERN’s future appeared first on CERN Courier.

]]>
Opinion The 2020 update of the European strategy for particle physics forms the basis of CERN’s objectives for the next five years, explains CERN Director-General Fabiola Gianotti. https://cerncourier.com/wp-content/uploads/2021/01/CCJanFeb21_VIEW-sculpture.jpg
A long-lived paradigm shift https://cerncourier.com/a/a-long-lived-paradigm-shift/ Fri, 27 Nov 2020 12:50:36 +0000 https://preview-courier.web.cern.ch/?p=90136 Experimentalists and theorists met from 16 to 19 November for the eighth workshop of the LHC's long-lived particles community.

The post A long-lived paradigm shift appeared first on CERN Courier.

]]>
Searches for new physics at high-energy colliders traditionally target heavy new particles with short lifetimes, and these searches shape detector design, data acquisition and analysis methods. However, there could be new long-lived particles (LLPs) that travel macroscopic distances through the detectors before decaying – or escape them entirely – either because they are light or because they have small couplings. Searches for LLPs have been going on at the LHC since the start of data taking, and at previous colliders, but they have attracted increasing interest in recent years, particularly in light of the lack of new particles discovered in more mainstream searches.

Detecting LLPs at the LHC experiments requires a paradigm shift with respect to the usual data-analysis and trigger strategies. To that end, more than 200 experimentalists and theorists met online from 16 to 19 November for the eighth workshop of the LHC LLP community.

Dark quarks would undergo fragmentation and hadronisation, resulting in “dark showers”

Strong theoretical motivations underpin searches for LLPs. For example, dark matter could be part of a larger dark sector, parallel to the Standard Model (SM), with new particles and interactions. If dark quarks could be produced at the LHC, they would undergo fragmentation and hadronisation in the dark sector resulting in characteristic “dark showers” — one of the focuses of the workshop. Collider signatures for dark showers depend on the fraction of unstable particles they contain and their lifetime, with a range of categories presenting their own analysis challenges: QCD-like jets, semi-visible jets, emerging jets, and displaced vertices with missing transverse energy. Delegates agreed on the importance of connecting collider-level searches for dark showers with astrophysical and cosmological scales. In a similar spirit of collaboration across communities, a joint session with the HEP Software Foundation focused on triggering and reconstruction software for dedicated LLP detectors.

Heavy neutral leptons

The discovery of heavy neutral leptons (HNLs) could address several open questions of the SM. For example, neutrinos are massless and purely left-handed in the SM, yet they oscillate between flavours as their wavefunction evolves, providing evidence for small, as-yet unmeasured masses. One way to fix this problem is to complete the field pattern of the SM with right-handed HNLs. The number and other characteristics of HNLs depend on the model considered, but in many cases HNLs are long-lived and connect to other important questions of the SM, such as dark matter and the baryon asymmetry of the universe. There are many ongoing searches for HNLs at the LHC and many more proposed elsewhere. During the November workshop the discussion touched on different models and simulations, reviewing what is available and what is needed for the different signal benchmarks.

Another focus was the reinterpretation of previous LLP searches. Recasting public results is common practice at the LHC and a good way to increase physics impact, but reinterpreting LLP searches is more difficult than reinterpreting prompt searches, owing to the use of non-standard selections and analysis-specific objects.


The latest results from CERN experiments were presented. ATLAS reported the first LHC search for sleptons using displaced-lepton final states, greatly improving sensitivity compared to LEP. CMS presented a search for strongly interacting massive particles with trackless jets, and a search for long-lived particles decaying to jets with displaced vertices. LHCb reported searches for low-mass di-muon resonances and a search for heavy neutrinos in the decay of a W boson into two muons and a jet, and the NA62 experiment at CERN’s SPS presented a search for π0 decays to invisible particles. These results bring important new constraints on the properties and parameters of LLP models.

Dedicated detectors

A series of dedicated LLP detectors at CERN – including the Forward Physics Facility for the HL-LHC, the CMS forward detector, FASER, CODEX-b and CODEX-β, MilliQan, MoEDAL-MAPP, MATHUSLA, ANUBIS, SND@LHC, and FORMOSA – are in different stages between proposal and operation. These additional detectors, located at various distances from the LHC experiments, have diverse strengths: some, like MilliQan, look for specific particles (milli-charged particles, in that case), whereas others, like MATHUSLA, offer a very low background environment in which to search for neutral LLPs. These complementary efforts will, in the near future, provide all the different pieces needed to build the most complete picture possible of a variety of LLP searches, from axion-like particles to exotic Higgs decays, potentially opening the door to a dark sector.

ATLAS reported the first LHC search for sleptons using displaced-lepton final states

The workshop featured a dedicated session on future colliders for the first time. Designing these experiments with LLPs in mind would radically boost discovery chances. Key considerations will be tracking and the tracking volume, timing information, trigger and DAQ, as well as potential additional instrumentation in tunnels or using the experimental caverns.

Together with the range of new results presented and many more in the pipeline, the 2020 LLP workshop was representative of a vibrant research community, constantly pushing the “lifetime frontier”.

The post A long-lived paradigm shift appeared first on CERN Courier.

]]>
Meeting report Experimentalists and theorists met from 16 to 19 November for the eighth workshop of the LHC's long-lived particles community. https://cerncourier.com/wp-content/uploads/2020/11/EXO-19-011_zoom2.png
Strong interest in feeble interactions https://cerncourier.com/a/strong-interest-in-feeble-interactions/ Thu, 12 Nov 2020 10:12:05 +0000 https://preview-courier.web.cern.ch/?p=89959 The FIPs 2020 workshop was structured around portals that may link the Standard Model to a rich dark sector: axions, dark photons, dark scalars and heavy neutral leptons.

The post Strong interest in feeble interactions appeared first on CERN Courier.

]]>
Searches for axion-like particles

Since the discovery of the Higgs boson in 2012, great progress has been made in our understanding of the Standard Model (SM) and the prospects for the discovery of new physics beyond it. Despite excellent advances in Higgs-sector measurements, searches for WIMP dark matter and the exploration of very rare processes in the flavour realm, no unambiguous signals of new fundamental physics have been seen. This is the reason behind the explosion of interest in feebly interacting particles (FIPs) over the past decade or so.

The inaugural FIPs 2020 workshop, hosted online by CERN from 31 August to 4 September, convened almost 200 physicists from around the world. Structured around the four “portals” that may link SM particles and fields to a rich dark sector – axions, dark photons, dark scalars and heavy neutral leptons – the workshop highlighted the synergies and complementarities among a great variety of experimental facilities, and called for close collaboration across different physics communities.

Today, conventional experimental efforts are driven by arguments based on the naturalness of the electroweak scale. They result in searches for new particles with sizeable couplings to the SM, and masses near the electroweak scale. FIPs represent an alternative paradigm to the traditional beyond-the-SM physics explored at the LHC. With masses below the electroweak scale, FIPs could belong to a rich dark sector and answer many open questions in particle physics (see “Four portals” figure). Diverse searches using proton beams (CERN and Fermilab), kaon beams (CERN and J-PARC), neutrino beams (J-PARC and Fermilab) and muon beams (PSI) today join more idiosyncratic experiments across the globe in a worldwide search for FIPs.

FIPs can arise from the presence of feeble couplings in the interactions of new physics with SM particles and fields. These couplings may be suppressed by a small dimensionless coupling constant, or by a “dimensionful” scale – larger than that of the process being studied – set by the higher-dimension operator that mediates the interaction. The smallness of these couplings can be due to the presence of an approximate symmetry that is only slightly broken, or to the presence of a large mass hierarchy between particles, as the absence of new-physics signals from direct and indirect searches seems to suggest.

A selection of open questions

Take the axion, for example. This is the particle that may be responsible for the conservation of charge–parity symmetry in strong interactions. It may also constitute a fraction or the entirety of dark matter, or explain the hierarchical masses and mixings of the SM fermions – the flavour puzzle.

Or take dark photons or dark Z′ bosons, both examples of new vector gauge bosons. Such particles are associated with extensions of the SM gauge group, and, in addition to indicating new forces beyond the four we know, could lead to evidence of dark-matter candidates with thermal origins and masses in the MeV to GeV range.

Exotic Higgs bosons could also have been responsible for cosmological inflation

Then there are exotic Higgs bosons. Light dark scalar or pseudoscalar particles related to the SM Higgs may provide novel ways of addressing the hierarchy problem, in which the Higgs mass can be stabilised dynamically via the time evolution of a so-called “relaxion” field. They could also have been responsible for cosmological inflation.

Finally, consider right-handed neutrinos, often referred to as sterile neutrinos or heavy neutral leptons, which could account for the origin of the tiny, nearly-degenerate masses of the neutrinos of the SM and their oscillations, as well as providing a mechanism for our universe’s matter–antimatter asymmetry.

Scientific diversity

No single experimental approach can cover the large parameter space of masses and couplings that FIPs models allow. The interconnections between open questions require that we construct a diverse research programme incorporating accelerator physics, dark-matter direct detection, cosmology, astrophysics, and precision atomic experiments, with a strong theoretical involvement. The breadth of searches for axions or axion-like particles (ALPs) is a good indication of the growing interest in FIPs (see “Scaling the ALPs” figure). Experimental efforts here span particle and astroparticle physics. In the coming years, helioscopes, which aim to detect solar axions by their conversion into photons (X-rays) in a strong magnetic field, will improve the sensitivity across more than 10 orders of magnitude in mass in the sub-eV range. Haloscopes, which work by converting axions into visible photons inside a resonant microwave cavity placed inside a strong magnetic field, will complement this quest by increasing the sensitivity for small couplings by six orders of magnitude (down to the theoretically motivated gold band in a mass region where the axions can be a dark-matter candidate). Accelerator-based experiments, meanwhile, can probe the strongly motivated QCD scale (MeV–GeV) and beyond for larger couplings. All these results will be complemented by lively theoretical activity aimed at interpreting astrophysical signals within axion and ALP models.

FIPs 2020 triggered lively discussions that will continue in the coming months via topical meetings on different subjects. Topics that motivated particular interest between communities included possible ways of comparing results from direct-detection dark-matter experiments in the MeV–GeV range against those obtained at extracted-beam-line and collider experiments; the connection between right-handed neutrino properties and active neutrino parameters; and the interpretation of astrophysical and cosmological bounds, which often dominate the constraints on each of the four portals.

The next FIPs workshop will take place at CERN next year.

The post Strong interest in feeble interactions appeared first on CERN Courier.

]]>
Meeting report The FIPs 2020 workshop was structured around portals that may link the Standard Model to a rich dark sector: axions, dark photons, dark scalars and heavy neutral leptons. https://cerncourier.com/wp-content/uploads/2020/11/FIPS2020.jpg
In pursuit of right-handed photons https://cerncourier.com/a/in-pursuit-of-right-handed-photons/ Tue, 10 Nov 2020 16:46:47 +0000 https://preview-courier.web.cern.ch/?p=89902 The presence of new virtual particles could be clearly signalled by a right-handed contribution to the photon polarisation.

The post In pursuit of right-handed photons appeared first on CERN Courier.

]]>
Figure 1

On 17 January 1957, a few months after Chien-Shiung Wu’s discovery of parity violation, Wolfgang Pauli wrote to Victor Weisskopf: “Ich glaube aber nicht, daß der Herrgott ein schwacher Linkshänder ist” (I cannot believe that God is a weak left-hander). But maximal parity violation is now well established within the Standard Model (SM). The weak interaction only couples to left-handed particles, as dramatically seen in the continuing absence of experimental evidence for right-handed neutrinos. In the same way, the polarisation of photons originating from transitions that involve the weak interaction is expected to be completely left-handed.

The LHCb collaboration recently tested the handedness of photons emitted in rare flavour-changing transitions from a b-quark to an s-quark. These are mediated by the bosons of the weak interaction according to the SM – but what if new virtual particles contribute too? Their presence could be clearly signalled by a right-handed contribution to the photon polarisation.

New virtual particles could be clearly signalled by a right-handed contribution to the photon polarisation

The b → sγ transition is rare. Fewer than one in a thousand b-quarks transform into an s-quark and a photon. This process has been studied for almost 30 years at particle colliders around the world. By precise measurements of its properties, physicists hope to detect hints of new heavy particles that current colliders are not powerful enough to produce.

The probability of this b-quark decay has been measured in previous experiments with a precision of about 5%, and found to agree with the SM prediction, which bears a similar theoretical uncertainty. A promising way to go further is to study the polarisation of the emitted photon. Measuring the b → sγ polarisation is not easy though. The emitted photons are too energetic to be analysed by a polarimeter and physicists must find innovative ways to probe them indirectly. For example, a right-handed polarisation contribution could induce a charge-parity asymmetry in the B0→ KSπ0γ or Bs0→ φγ decays. It could also contribute to the total rate of radiative b → sγ decays, containing any strange meson, B → Xsγ.

The LHCb collaboration has pioneered a new method to perform this measurement using virtual photons and the largest sample of the very rare B0→ K*0e+e– decay ever collected. First, the sub-sample of decays that come from B0→ K*0γ with a virtual photon that materialises in an electron–positron pair is isolated. The angular distributions of the B0→ K*0e+e– decay products are then used as a polarimeter to measure the handedness of the photon. The number of decays with a virtual photon is small compared to the decays with a real photon, but the latter decays cannot be used as the information on the polarisation is lost.

The size of the right-handed contribution to b → sγ is encoded in the magnitude of the complex parameter C′7/C7. This is the ratio of the right- and left-handed Wilson coefficients that are used in the effective description of b → s transitions. The new B0→ K*0e+e– analysis by the LHCb collaboration constrains the value of C′7/C7, and thus the photon polarisation, with unprecedented precision (figure 1). The measurement is compatible with the SM prediction.
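For readers who want the notation pinned down, these coefficients enter the standard b → s effective Hamiltonian through the electromagnetic dipole operators; in commonly used conventions (normalisations vary between papers):

\mathcal{H}_{\rm eff} \supset -\frac{4 G_F}{\sqrt{2}}\, V_{tb} V_{ts}^{*} \left( C_7\, \mathcal{O}_7 + C_7'\, \mathcal{O}_7' \right),
\qquad
\mathcal{O}_7^{(\prime)} = \frac{e}{16\pi^{2}}\, m_b \left( \bar{s}\, \sigma^{\mu\nu} P_{R(L)}\, b \right) F_{\mu\nu}\,.

O7 dominantly yields left-handed photons and O′7 right-handed ones; in the SM the ratio C′7/C7 is of order ms/mb, so any appreciably larger right-handed fraction would point to new physics.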

This result showcases the exceptional capability of the LHCb experiment to study b → sγ transitions. The uncertainty is currently dominated by the data sample size, and thus more accurate studies are foreseen with the large data sample expected in Run 3 of the LHC. More precise measurements may yet unravel a small right-handed polarisation.

The post In pursuit of right-handed photons appeared first on CERN Courier.

]]>
News The presence of new virtual particles could be clearly signalled by a right-handed contribution to the photon polarisation. https://cerncourier.com/wp-content/uploads/2020/11/201811-329_01.jpg
Leptoquarks and the third generation https://cerncourier.com/a/leptoquarks-and-the-third-generation/ Tue, 10 Nov 2020 15:56:17 +0000 https://preview-courier.web.cern.ch/?p=89913 In a recent CMS analysis researchers have challenged the Standard Model by investigating a previously unexplored leptoquark signature.

The post Leptoquarks and the third generation appeared first on CERN Courier.

]]>
Figure 1

The Standard Model (SM) groups quarks and leptons separately to account for their rather different observed properties, but might they be unified through a new particle that couples to both and turns one into the other? Such “leptoquarks” emerge quite naturally in several theories that extend the SM. Searches for leptoquarks have been an important part of the LHC’s research programme since the beginning, and have received additional attention recently in the light of hints of deviations from the principle of lepton universality – the so-called flavour anomalies.

In a recent CMS analysis of the full Run-2 dataset of pp collisions (137 fb–1), researchers have challenged the SM by investigating a previously unexplored leptoquark signature involving the third generation of fermions. The motivation for considering the third generation is to confront the principle of lepton universality, which asserts that the couplings of leptons with gauge bosons are flavour independent. This principle is built into the SM, but has recently been put under stress by a series of anomalies observed in precision measurements of certain B-meson decays by the LHCb, Belle and BaBar collaborations. A possible explanation for these anomalies, which are still under investigation and not yet confirmed, lies in the existence of leptoquarks that preferentially couple to the heaviest fermions.

These results are the most stringent limits to date on the presence of leptoquarks that couple preferentially to the third generation

The new CMS search looks for both single and pair production of leptoquarks. It considers leptoquarks that decay to a quark (top or bottom) and a lepton (tau or neutrino), targeting the signature with a top quark, a tau lepton, missing transverse momentum due to a neutrino, and, in the case of double production, an additional bottom-quark jet. This is the first search to simultaneously consider both production mechanisms by categorising events with one or two jets originating from a bottom quark. The analysis also includes a dedicated selection for the case of a large mass splitting between the leptoquark and the top quark, which would boost the top quark and could cause its decay products to be inseparable given the spatial resolution of jets.

The observations are found to be in agreement with the SM prediction, and exclusion limits are derived in the plane of the leptoquark–lepton–quark vertex coupling λ and the leptoquark mass. The results are derived separately for hypothetical spin-0 and spin-1 (figure 1) leptoquarks, reflecting the two types allowed by theoretical models. The analysis assumes that the leptoquark decays half the time to each of the possible quark–lepton flavour pairs – for example, in the case of a spin-1 leptoquark, to a top quark and a neutrino, or to a bottom quark and a tau lepton. CMS finds a range of lower limits on the leptoquark mass between 0.98 and 1.73 TeV, at 95% confidence, depending on λ and the spin.

These results are the most stringent limits to date on the presence of leptoquarks that couple preferentially to the third generation of fermions. They also probe the parameter space preferred by the B-physics anomalies in several models, excluding relevant portions. As theories predict leptoquark masses as high as many tens of TeV, the pursuit of this promising solution for the unification of quarks and leptons must continue. The CMS collaboration has a broad programme for further investigations that will exploit the larger data samples from Run 3 and the high-luminosity LHC under different hypotheses. If leptoquarks exist, they may well be revealed in the coming data.

The post Leptoquarks and the third generation appeared first on CERN Courier.

]]>
News In a recent CMS analysis researchers have challenged the Standard Model by investigating a previously unexplored leptoquark signature. https://cerncourier.com/wp-content/uploads/2020/11/CMS.jpg
Sensing a passage through the unknown https://cerncourier.com/a/sensing-a-passage-through-the-unknown/ Tue, 07 Jul 2020 11:27:06 +0000 https://preview-courier.web.cern.ch/?p=87702 A global network of ultra-sensitive optical atomic magnetometers – GNOME – has begun its search for exotic fields beyond the Standard Model.

The post Sensing a passage through the unknown appeared first on CERN Courier.

]]>
Since the inception of the Standard Model (SM) of particle physics half a century ago, experiments of all shapes and sizes have put it to increasingly stringent tests. The largest and most well-known are collider experiments, which in particular have enabled the direct discovery of various SM particles. Another approach utilises the tools of atomic physics. The relentless improvement in the precision of tools and techniques of atomic physics, both experimental and theoretical, has led to the verification of the SM’s predictions with ever greater accuracy. Examples include measurements of atomic parity violation that reveal the effects of the Z boson on atomic states, and measurements of atomic energy levels that verify the predictions of quantum electrodynamics (QED). Precision atomic physics experiments also include a vast array of searches for effects predicted by theories beyond-the-SM (BSM), such as fifth forces and permanent electric dipole moments that violate parity- and time-reversal symmetry. These tests probe potentially subtle yet constant (or controllable) changes of atomic properties that can be revealed by averaging away noise and controlling systematic errors.

GNOME

But what if the glimpses of BSM physics that atomic spectroscopists have so painstakingly searched for over the past decades are not effects that persist over the many weeks or months of a typical measurement campaign, but rather transient events that occur only sporadically? For example, might not cataclysmic astrophysical events such as black-hole mergers or supernova explosions produce hypothetical ultralight bosonic fields impossible to generate in the laboratory? Or might not Earth occasionally pass through some invisible “cloud” of a substance (such as dark matter) produced in the early universe? Such transient phenomena could easily be missed by experimenters when data are averaged over long times to increase the signal-to-noise ratio.

Transient phenomena

Detecting such unconventional events represents several challenges. If a transient signal heralding new physics was observed with a single detector, it would be exceedingly difficult to confidently distinguish the exotic-physics signal from the many sources of noise that plague precision atomic physics measurements. However, if transient interactions occur over a global scale, a network of such detectors geographically distributed over Earth could search for specific patterns in the timing and amplitude of such signals that would be unlikely to occur randomly. By correlating the readouts of many detectors, local effects can be filtered away and exotic physics could be distinguished from mundane physics.

This idea forms the basis for the Global Network of Optical Magnetometers to search for Exotic physics (GNOME), an international collaboration involving 14 institutions from all over the world (see “Correlated” figure). Such an idea, like so many others in physics, is not entirely new. The same concept is at the heart of the worldwide network of interferometers used to observe gravitational waves (LIGO, Virgo, GEO, KAGRA, TAMA, CLIO), and the global network of proton-precession magnetometers used to monitor geomagnetic and solar activity. What distinguishes GNOME from other global sensor networks is that it is specifically dedicated to searching for signals from BSM physics that have evaded detection in earlier experiments.

Optical atomic magnetometer

GNOME is a growing network of more than a dozen optical atomic magnetometers, with stations in Europe, North America, Asia and Australia. The project was proposed in 2012 by a team of physicists from the University of California at Berkeley, Jagiellonian University, California State University – East Bay, and the Perimeter Institute. The network started taking preliminary data in 2013, with the first dedicated science-run beginning in 2017. With more data on the way, the GNOME collaboration, consisting of more than 50 scientists from around the world, is presently combing the data for signs of the unexpected, with its first results expected later this year.

Exotic-physics detectors

Optical atomic magnetometers (OAMs) are among the most sensitive devices for measuring magnetic fields. However, the atomic vapours that are the heart of GNOME’s OAMs are placed inside multi-layer shielding systems, reducing the effects of external magnetic fields by a factor of more than a million. Thus, in spite of using extremely sensitive magnetometers, GNOME sensors are largely insensitive to magnetic signals. The reasoning is that many BSM theories predict the existence of exotic fields that couple to atomic spins and would penetrate through magnetic shields largely unaffected. Since the OAM signal is proportional to the spin-dependent energy shift regardless of whether or not a magnetic field causes the energy shift, OAMs – even enclosed within magnetic shields – are sensitive to a broad class of exotic fields.

The OAM setup

The basic principle behind OAM operation (see “Optical rotation” figure) involves optically measuring spin-dependent energy shifts by controlling and monitoring an ensemble of atomic spins via angular momentum exchange between the atoms and light. The high efficiency of optical pumping and probing of atomic spin ensembles, along with a wide array of clever techniques to minimise atomic spin relaxation (even at high atomic vapour densities), have enabled OAMs to achieve sensitivities to spin-dependent energy shifts at levels well below 10–20 eV after only one second of integration. One of the 14 OAM installations, at California State University – East Bay, is shown in the “Benchtop physics” image.
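To translate that figure into laboratory terms: a spin-dependent energy shift ΔE corresponds to a shift of the spin-precession (Larmor) frequency of ν = ΔE/h, so 10–20 eV sits at the microhertz level. A two-line Python check:

H_EV_S = 4.135667696e-15                 # Planck constant in eV s
delta_E_eV = 1e-20                       # energy-shift sensitivity quoted above
print(f"{delta_E_eV / H_EV_S:.1e} Hz")   # ~2.4e-6 Hz, i.e. microhertz

Tracking precession frequencies at the few-microhertz level after a second of integration is what makes OAMs competitive probes of exotic spin couplings.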

However, one might wonder: do any of the theoretical scenarios suggesting the existence of exotic fields predict signals detectable by a magnetometer network while also evading all existing astrophysical and laboratory constraints? This is not a trivial requirement, since previous high-precision atomic spectroscopy experiments have established stringent limits on BSM physics. In fact, OAM techniques have been used by a number of research groups (including our own) over the past several decades to search for spin-dependent energy shifts caused by exotic fields sourced by nearby masses or polarised spins. Closely related work has ruled out vast areas of BSM parameter space by comparing measurements of hyperfine structure in simple hydrogen-like atoms to QED calculations. Furthermore, if exotic fields do exist and couple strongly enough to atomic spins, they could cause noticeable cooling of stars and affect the dynamics of supernovae. So far, all laboratory experiments have produced null results and all astrophysical observations are consistent with the SM. Thus if such exotic fields exist, their coupling to atomic spins must be extremely feeble.

Despite these constraints and requirements, theoretical scenarios both consistent with existing constraints and that predict effects measurable with GNOME do exist. Prime examples, and the present targets of the GNOME collaboration’s search efforts, are ultralight bosonic fields. A canonical example of an ultralight boson is the axion. The axion emerged from an elegant solution, proposed by Roberto Peccei and Helen Quinn in the late 1970s, to the strong-CP problem. The Peccei–Quinn mechanism explains the mystery of why the strong interaction, to the highest precision we can measure, respects the combined CP symmetry, whereas quantum chromodynamics naturally accommodates CP violation at a level ten orders of magnitude larger than present constraints. If CP violation in the strong interaction is described not by a constant term but rather by a dynamical (axion) field, it can be significantly suppressed by spontaneous symmetry breaking at a high energy scale. If the symmetry-breaking scale is at the grand-unification-theory (GUT) scale (~1016 GeV), the axion mass is around 10–10 eV, and at the Planck scale (1019 GeV) around 10–13 eV – both many orders of magnitude less massive than even neutrinos. Searching for ultralight axions therefore offers the exciting possibility of probing physics at the GUT and Planck scales, far beyond the direct reach of any existing collider.
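The inverse scaling between the symmetry-breaking scale fa and the axion mass quoted here follows the standard QCD-axion relation, roughly ma ≈ 5.7 µeV × (1012 GeV/fa); the prefactor is the commonly quoted lattice-QCD value and should be treated here as an assumption. A one-liner reproduces the numbers above:

def axion_mass_eV(f_a_GeV):
    """QCD-axion mass from the decay constant: m_a ~ 5.7 ueV x (1e12 GeV / f_a)."""
    return 5.7e-6 * (1e12 / f_a_GeV)

print(axion_mass_eV(1e16))   # GUT scale: ~6e-10 eV
print(axion_mass_eV(1e19))   # Planck scale: ~6e-13 eV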

Beyond the Standard Model

In addition to the axion, there are a wide range of other hypothetical ultralight bosons that couple to atomic spins and could generate signals potentially detectable with GNOME. Many theories predict the existence of spin-0 bosons with properties similar to the axion (so-called axion-like particles, ALPs). A prominent example is the relaxion, proposed by Peter Graham, David Kaplan and Surjeet Rajendran to explain the hierarchy problem: the mystery of why the electroweak force is about 24 orders of magnitude stronger than the gravitational force. In 2010, Asimina Arvanitaki and colleagues found that string theory suggests the existence of many ALPs of widely varying masses, from 10–33 eV to 10–10 eV. From the perspective of BSM theories, ultralight bosons are ubiquitous. Some predict ALPs such as “familons”, “majorons” and “arions”. Others predict new ultralight spin-1 bosons such as dark and hidden photons. There is even a possibility of exotic spin-0 or spin-1 gravitons: while the graviton for a quantum theory of gravity matching that described by general relativity must be spin-2, alternative gravity theories (for example torsion gravity and scalar–vector–tensor gravity) predict additional spin-0 and/or spin-1 gravitons.

Earth passing through a topological defect

It also turns out that such ultralight bosons could explain dark matter. Most searches for ultralight bosonic dark matter assume the bosons to be approximately uniformly distributed throughout the dark matter halo that envelopes the Milky Way. However, in some theoretical scenarios, the ultralight bosons can clump together into bosonic “stars” due to self-interactions. In other scenarios, due to a non-trivial vacuum energy landscape, the ultralight bosons could take the form of “topological” defects, such as domain walls that separate regions of space with different vacuum states of the bosonic field (see “New domains” figure). In either of these cases, the mass-energy associated with ultralight bosonic dark matter would be concentrated in large composite structures that Earth might only occasionally encounter, leading to the sort of transient signals that GNOME is designed to search for.

Magnetic field deviation

Yet another possibility is that intense bursts of ultralight bosonic fields might be generated by cataclysmic astrophysical events such as black-hole mergers. Much of the underlying physics of coalescing singularities is unknown, possibly involving quantum-gravity effects far beyond the reach of high-energy experiments on Earth, and it turns out that quantum gravity theories generically predict the existence of ultralight bosons. Furthermore, if ultralight bosons exist, they may tend to condense in gravitationally bound halos around black holes. In these scenarios, a sizable fraction of the energy released when black holes merge could plausibly be emitted in the form of ultralight bosonic fields. If the energy density of the ultralight bosonic field is large enough, networks of atomic sensors like GNOME might be able to detect a signal.

In order to use OAMs to search for exotic fields, the effects of environmental magnetic noise must be reduced, controlled, or cancelled. Even though the GNOME magnetometers are enclosed in multi-layer magnetic shields so that signals from external electromagnetic fields are significantly suppressed, there is a wide variety of phenomena that can mimic the sorts of signals one would expect from ultralight bosonic fields. These include vibrations, laser instabilities, and noise in the circuitry used for data acquisition. To combat these spurious signals, each GNOME station uses auxiliary sensors to monitor electromagnetic fields outside the shields (which could leak inside the shields at a far-reduced level), accelerations and rotations of the apparatus, and overall magnetometer performance. If the auxiliary sensors indicate data may be suspect, the data are flagged and ignored in the analysis (see “Spurious signals” figure).

GNOME data that have passed this initial quality check can then be scanned to see if there are signals matching the patterns expected based on various exotic physics hypotheses. For example, to test the hypothesis that dark matter takes the form of ALP domain walls, one searches for a signal pattern resulting from the passage of Earth through an astronomical-sized plane having a finite thickness given by the ALP’s Compton wavelength. The relative velocity between the domain wall and Earth is unknown, but can be assumed to be randomly drawn from the velocity distribution of virialised dark matter, having an average speed of about one thousandth the speed of light. The relative timing of signals appearing in different GNOME magnetometers should be consistent with a single velocity v: i.e. nearby stations (in the direction of the wall propagation) should detect signals with smaller delays and stations that are far apart should detect signals with larger delays, and furthermore the time delays should occur in a sensible sequence. The energy shift that could lead to a detectable signal in GNOME magnetometers is caused by an interaction of the domain-wall field φ with the atomic spin S whose strength is proportional to the scalar product of the spin with the gradient of the field, S∙∇φ. The gradient of the domain-wall field ∇φ is proportional to its momentum relative to S, and hence the signals appearing in different GNOME magnetometers are proportional to S∙v. Both the signal-timing pattern and the signal-amplitude pattern should be consistent with a single value of v; signals inconsistent with such a pattern can be rejected as noise.
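Concretely, the timing test amounts to a linear fit: a planar wall with unit normal n crossing the network at speed |v| reaches station i, at position ri, at time ti = t0 + n·ri/|v|, which is linear in t0 and the “slowness” vector n/|v|. The Python sketch below fits these quantities by least squares for made-up station coordinates; it illustrates the consistency check, not the collaboration’s analysis code.

import numpy as np

def fit_wall_crossing(positions, times):
    """Least-squares fit of t_i = t0 + p . r_i, with p = n/|v| the slowness
    vector of the wall; returns the offset, propagation direction and speed."""
    A = np.hstack([np.ones((len(times), 1)), positions])   # design matrix
    coeffs, *_ = np.linalg.lstsq(A, times, rcond=None)
    t0, p = coeffs[0], coeffs[1:]
    speed = 1.0 / np.linalg.norm(p)        # in position units per second
    return t0, p * speed, speed            # offset, unit normal, |v|

# Toy network: four stations (coordinates in km), wall at ~300 km/s (~1e-3 c).
positions = np.array([[0.0, 0, 0], [6000, 0, 0], [0, 6000, 0], [3000, 3000, 4000]])
normal = np.array([1.0, 0.5, 0.2]); normal /= np.linalg.norm(normal)
times = 10.0 + positions @ (normal / 300.0)
print(fit_wall_crossing(positions, times))

A real search scans candidate events for such a common solution and additionally demands that the signal amplitudes follow the S∙v pattern described above; candidates failing either test are rejected as noise.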

If such exotic fields exist, their coupling to atomic spins must be extremely feeble

To claim discovery of a signal heralding BSM physics, detections must be compared to the background rate of spurious false-positive events – those consistent with the expected signal pattern but not generated by exotic physics. The false-positive rate can be estimated by analysing time-shifted data: the data stream from each GNOME magnetometer is shifted in time relative to the others by an amount much larger than any delay resulting from the propagation of ultralight bosonic fields through Earth. Such time-shifted data can be assumed to be free of exotic-physics signals, so any detections are necessarily false positives: merely random coincidences due to noise. When the GNOME data are analysed without time shifts, a signal must surpass the 5σ threshold relative to the background determined from the time-shifted data to be regarded as an indication of BSM physics. This means that, for a year-long data set, an event due to noise coincidentally matching the assumed signal pattern throughout the network would occur only once every 3.5 million years.
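The time-shift procedure itself is easy to sketch: offset each station’s event stream by an unphysically large lag, count how often the network still produces a coincidence, and take that as the accidental background. The toy Python version below uses invented event lists and a simple coincidence window; the real analysis matches full signal templates rather than time windows.

import numpy as np

def network_coincidences(streams, window):
    """Count events in the first stream matched within `window` seconds by at
    least one event in every other stream."""
    return sum(
        all(np.any(np.abs(s - t) < window) for s in streams[1:])
        for t in streams[0]
    )

def accidental_rate(streams, window, lag, n_shifts, span=86400.0):
    """Mean coincidence count over n_shifts sets of unphysical relative offsets,
    applied circularly within the observation span."""
    counts = []
    for k in range(1, n_shifts + 1):
        shifted = [np.sort((s + i * k * lag) % span) for i, s in enumerate(streams)]
        counts.append(network_coincidences(shifted, window))
    return np.mean(counts)

# Toy data: three stations, 50 random "glitch" times each over one day.
rng = np.random.default_rng(1)
streams = [np.sort(rng.uniform(0, 86400, size=50)) for _ in range(3)]
print(network_coincidences(streams, window=60.0))                      # zero-lag count
print(accidental_rate(streams, window=60.0, lag=3600.0, n_shifts=20))  # background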

Inspiring efforts

Having already collected over a year of data, and with more on the way, the GNOME collaboration is presently combing the data for signs of BSM physics. New results based on recent GNOME science runs are expected in 2020; these would represent the first ever search for such transient exotic spin-dependent effects. Improvements in magnetometer sensitivity, signal characterisation and data-analysis techniques are expected to sharpen these initial results over the next several years. Significantly, GNOME has inspired similar efforts using other networks of precision quantum sensors: atomic clocks, interferometers, cavities, superconducting gravimeters and more. In fact, the results of searches for exotic transient signals using clock networks have already been reported in the literature, constraining significant parameter space for various BSM scenarios. We would suggest that all experimentalists seriously consider accurately time-stamping, storing and sharing their data so that searches for correlated signals due to exotic physics can be conducted a posteriori. One never knows what nature might be hiding just beyond the frontier of the precision of past measurements.

The post Sensing a passage through the unknown appeared first on CERN Courier.

]]>
Feature A global network of ultra-sensitive optical atomic magnetometers – GNOME – has begun its search for exotic fields beyond the Standard Model. https://cerncourier.com/wp-content/uploads/2020/07/CCJulAug20_GNOME_frontis.jpg
Researchers grapple with XENON1T excess https://cerncourier.com/a/researchers-grapple-with-xenon1t-excess/ Thu, 02 Jul 2020 15:11:38 +0000 https://preview-courier.web.cern.ch/?p=87667 The excess could be due to a difficult-to-constrain tritium background, solar axions or solar neutrinos with a Majorana nature, says the collaboration.

The post Researchers grapple with XENON1T excess appeared first on CERN Courier.

]]>
An intriguing low-energy excess of background events recorded by the world’s most sensitive WIMP dark-matter experiment has sparked a series of preprints speculating on its underlying cause. On 17 June, the XENON collaboration, which searches for excess nuclear recoils in the XENON1T detector, a one-tonne liquid-xenon time-projection chamber (TPC) located underground at Gran Sasso National Laboratory in Italy, reported an unexpected excess in electronic recoils at energies of a few keV, just above its detection threshold. Though acknowledging that the excess could be due to a difficult-to-constrain tritium background, the collaboration says solar axions and solar neutrinos with a Majorana nature, both of which would signal physics beyond the Standard Model, are credible explanations for the approximately 3σ effect.

Who needs the WIMP if we can have the axion?

Elena Aprile

“Thanks to our unprecedented low event rate in electronic recoils background, and thanks to our large exposure, both in detector mass and time, we could afford to look for signatures of rare and new phenomena expected at the lowest energies where one usually finds lots of background,” says XENON spokesperson Elena Aprile, of Columbia University in New York. “I am especially intrigued by the possibility to detect axions produced in the Sun,” she says. “Who needs the WIMP if we can have the axion?”

The XENON collaboration has been in pursuit of WIMPs, a leading cold-dark-matter candidate, since 2005 with a programme of 10 kg, 100 kg and now 1 tonne liquid-xenon TPCs. Particles scattering in the liquid xenon create both scintillation light and ionisation electrons; the latter drift upwards in an electric field towards a gaseous phase where electroluminescence amplifies the charge signal into a light signal. Photomultiplier tubes record both the initial scintillation light and the later electroluminescence, revealing the 3D positions of interactions, and the relative magnitudes of the two signals allow nuclear and electronic recoils to be differentiated. XENON1T derives its world-leading limit on WIMPs – the strictest 90% confidence limit being a cross-section of 4.1×10−47 cm2 for WIMPs with a mass of 30 GeV – from the very low rate of nuclear recoils observed by XENON1T from February 2017 to February 2018.

XENON1T low-energy electronic recoils

A surprise was in store, however, in the same data set, which also revealed 285 electronic recoils at the lower end of XENON1T’s energy acceptance, from 1 to 7 keV, over an expected background of 232±15. The sole background-modelling explanation for the excess that the collaboration has not been able to rule out is a minute concentration of tritium in the liquid xenon. With a half-life of 12.3 years and a low decay energy of 18.6 keV, an unexpected contribution of tritium decays is favoured over XENON1T’s baseline background model at approximately 3σ. “We can measure extremely tiny amounts of various potential background sources, but unfortunately, we are not sensitive to a handful of tritium atoms per kilogram,” explains deputy XENON1T spokesperson Manfred Lindner, of the Max Planck Institute for Nuclear Physics in Heidelberg. Cryogenic distillation plus running the liquid xenon through a getter is expected to remove any tritium below the level that would be relevant, he says, but this needs to be cross-checked. The question is whether a minute amount of tritium could somehow remain in the liquid xenon, or whether some makes it from the detector materials into the liquefied xenon in the detector. “I personally think that the observed excess could equally well be a new background or new physics. About 3σ implies of course a certain statistical chance for a fluctuation, but I find it intriguing to have this excess not at some random place, but towards the lower end of the spectrum. This is interesting since many new-physics scenarios generically lead to a 1/E or 1/E2 enhancement which would be cut off by our detection threshold.”
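A naive counting estimate shows why the excess sits near 3σ: with 285 events observed against 232 ± 15 expected, the Gaussian approximation gives (285 − 232)/√(232 + 15²) ≈ 2.5σ, and the collaboration’s full spectral fit, which also uses the shape of the recoil spectrum, raises this to roughly 3σ. A quick check in Python (this simple formula is an approximation, not the collaboration’s likelihood analysis):

import math

obs, bkg, bkg_err = 285, 232, 15     # events in 1-7 keV, quoted above
excess = obs - bkg
sigma = math.sqrt(bkg + bkg_err**2)  # Poisson variance + background uncertainty
print(f"excess = {excess} events, significance ~ {excess / sigma:.1f} sigma")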

Solar axions

One solution proposed by the collaboration is solar axions. Axions are a consequence of a new U(1) symmetry proposed in 1977 to explain the immeasurably small degree of CP violation in quantum chromodynamics – the so-called strong-CP problem – and are also a dark-matter candidate. Though XENON1T is not expected to be sensitive to dark-matter axions, should they exist they would be produced by the Sun at energies consistent with the XENON1T excess. According to this hypothesis, the axions would be detected via the “axioelectric” effect, an axion analogue of the photoelectric effect. Though a good fit phenomenologically, and, like tritium, favoured over the background-only hypothesis at approximately 3σ, the solar-axion explanation is disfavoured by astrophysical constraints. For example, it would lead to a significant extra energy loss in stars.

Axion helioscopes such as the CERN Axion Solar Telescope (CAST) experiment, which directs a prototype LHC dipole magnet at the Sun and could convert solar axions into X-ray photons, will help in testing the hypothesis. “It is not impossible to have an axion model that shows up in XENON but not in CAST,” says deputy spokesperson Igor Garcia Irastorza of the University of Zaragoza, “but CAST already constrains part of the axion interpretation of the XENON signal.” Its successor, the International Axion Observatory (IAXO), which is set to begin data taking in 2024, will have improved sensitivity. “If the XENON1T signal is indeed an axion, IAXO will find it within the first hours of running,” says Garcia Irastorza.

A second new-physics explanation cited for XENON1T’s low-energy excess is an enhanced rate of solar neutrinos interacting in the detector. In the Standard Model, neutrinos have a negligibly small magnetic moment. However, should they be Majorana rather than Dirac fermions, and thus identical to their antiparticles, their magnetic moment would be larger, and proportional to their mass, though still not detectable. New physics beyond the Standard Model could, however, enhance the magnetic moment further. This leads to a larger interaction cross-section at low energies and an excess of low-energy electron recoils. XENON1T fits indicate that solar Majorana neutrinos with an enhanced magnetic moment are also favoured over the background-only hypothesis at the level of 3σ.

The absorption of dark photons could explain the observed excess.

Joachim Kopp

The community has quickly chimed in with additional ideas, with around 40 papers appearing on the arXiv preprint server since the result was released. One possibility is a heavy dark-matter particle that annihilates or decays to a second, much lighter, “boosted dark-matter” particle which could scatter on electrons via some new interaction, notes CERN theorist Joachim Kopp. Another class of dark-matter model that has been proposed, he says, is “inelastic dark matter”, where dark-matter particles down-scatter in the detector into another dark-matter state just a few keV below the original one, with the liberated energy then seen in the detector. “An explanation I like a lot is in terms of dark photons,” he says. “The Standard Model would be augmented by a new U(1) gauge symmetry whose corresponding gauge boson, the dark photon, would mix with the Standard-Model photon. Dark photons could be abundant in the Universe, possibly even making up all the dark matter. Their absorption in the XENON1T detector could explain the observed excess.”

“The strongest asset we have is our new detector, XENONnT,” says Aprile. Despite COVID-19, the collaboration is on track to take first data before the end of 2020, she says. XENONnT will boast three times the fiducial volume of XENON1T and a factor-six reduction in backgrounds, and should be able to verify or refute the signal within a few months of data taking. “An important question is if the signal has an annual modulation of about 7% correlated to the distance of the sun,” notes Lindner. Such a modulation would be expected for a solar origin: the Earth–Sun distance varies by about ±1.7% over the year, so any flux scaling as the inverse square of the distance varies by roughly 7% peak-to-peak. “This would be a strong hint that it could be connected to new physics with solar neutrinos or solar axions.”

Funky physics at KIT https://cerncourier.com/a/funky-physics-at-kit/ Fri, 05 Jun 2020 – The FUNK experiment has set an improved limit on the existence of hidden photons as candidates for dark matter with masses in the eV range.

The FUNK experimental area, where the black-painted floor can be seen with the PMT-camera pillar at the centre and the mirror on the left. A black-cotton curtain encloses the whole area during running. Credit: KIT.

A new experiment at Karlsruhe Institute of Technology (KIT) called FUNK – Finding U(1)s of a Novel Kind – has reported its first results in the search for ultralight dark matter. Using a large spherical mirror as an electromagnetic dark-matter antenna, the FUNK team has set an improved limit on the existence of hidden photons as candidates for dark matter with masses in the eV range.

Despite overwhelming astronomical evidence for the existence of dark matter, direct searches for dark-matter particles at colliders and dedicated nuclear-recoil experiments have so far come up empty-handed. With these searches mostly sensitive to heavy dark-matter particles, namely weakly interacting massive particles (WIMPs), the search for alternative light dark-matter candidates is gaining momentum. Hidden photons, a cold, ultralight dark-matter candidate, arise in extensions of the Standard Model that contain a new U(1) gauge symmetry, and are expected to couple very weakly to charged particles via kinetic mixing with regular photons. Laboratory experiments that are sensitive to such hidden or dark photons include helioscopes such as the CAST experiment at CERN, and “light-shining-through-a-wall” experiments such as ALPS at DESY.

FUNK exploits a novel “dish-antenna” method, first proposed in 2012, whereby a hidden photon crossing a metallic spherical mirror surface would cause faint electromagnetic waves to be emitted almost perpendicularly to the mirror surface and focused at the radius point. The experiment was conceived in 2013 at a workshop at DESY, when it was realised that there was a perfectly suited mirror – a prototype for the Pierre Auger Observatory with a surface area of 14 m2 – in the basement of KIT. Various photodetectors placed at the radius point allow FUNK to search for a signal in different wavelength ranges, corresponding to different hidden-photon masses. The dark-matter nature of a possible signal can then be verified by observing small daily and seasonal movements of the spot around the radius point as Earth moves through the dark-matter field. The broadband dish-antenna technique is able to scan hidden photons over a large parameter space.
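To get a feel for the numbers involved (an illustrative back-of-envelope estimate, not the collaboration’s analysis), the power collected at the radius point scales with the kinetic-mixing parameter χ, the local dark-matter density ρDM and the mirror area A roughly as P ≈ (2/3)χ²ρDM c A, following the 2012 dish-antenna proposal:

```python
# Back-of-envelope photon rate at the radius point for hidden-photon dark
# matter, assuming P ~ (2/3) * chi^2 * rho_DM * c * A from the 2012
# dish-antenna proposal; all numbers are illustrative.
CHI = 1e-12                          # kinetic mixing, around FUNK's quoted limit
RHO_DM = 0.3e9 * 1.602e-19 / 1e-6    # 0.3 GeV/cm^3 converted to J/m^3
C = 3.0e8                            # speed of light [m/s]
AREA = 14.0                          # mirror surface area [m^2]
E_PHOTON_EV = 5.0                    # emitted-photon energy ~ hidden-photon mass [eV]

power = (2.0 / 3.0) * CHI**2 * RHO_DM * C * AREA   # collected power [W]
rate = power / (E_PHOTON_EV * 1.602e-19)           # photons per second
print(f"power ~ {power:.2e} W  ->  rate ~ {rate:.2f} photons/s")
```

A rate of a fraction of a photon per second against low PMT dark counts is what makes month-long runs sensitive to couplings at the 10–12 level.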

The mass range of viable hidden-photon dark matter is huge

Joerg Jaeckel

Completed in 2018, the experiment took data last year in several month-long runs using low-noise PMTs. In the mass range 2.5–7 eV, the data exclude a hidden-photon kinetic-mixing coupling stronger than 10–12. “This is competitive with limits derived from astrophysical results and partially exceeds those from other existing direct-detection experiments,” says FUNK principal investigator Ralph Engel of KIT. So far two other experiments of this type have reported search results for hidden photons in this energy range – the dish antenna at the University of Tokyo and the SHUKET experiment at Paris-Saclay – though FUNK’s factor-of-ten larger mirror surface brings a greater experimental sensitivity, says the team. Other experiments, such as NA64 at CERN, which employs missing-energy techniques, are setting stringent bounds on the strength of dark-photon couplings for masses in the MeV range and above.

“The mass range of viable hidden-photon dark matter is huge,” says FUNK collaborator Joerg Jaeckel of Heidelberg University. “For this reason, techniques which can scan over a large parameter space are especially useful even if they cannot explore couplings as small as is possible with some other dedicated methods. A future exploitation of the setup in other wavelength ranges is possible, and FUNK therefore carries an enormous physics potential.”

ATLAS extends search for top squark https://cerncourier.com/a/atlas-extends-search-for-top-squark/ Mon, 18 May 2020 – A challenge for such searches is that the masses of the supersymmetric particles are unknown, leaving a large range of possibilities to explore.

Figure 1

Supersymmetry is an attractive extension of the Standard Model that could answer some of the most fundamental open questions in modern particle physics. For example: why is the Higgs boson so light? What is dark matter and how does it fit in with our understanding of the universe? Do the electroweak and strong forces unify at smaller distances?

Supersymmetry predicts a new partner for each elementary particle, including the heaviest particle ever observed – the top quark. If the partner of the top quark (the top squark, or “stop”) were not too heavy, the quantum corrections to the Higgs boson mass would largely cancel, thereby stabilising its small value of 125 GeV. Moreover, the lightest supersymmetric particle (LSP) may be stable and weakly interacting, providing a dark-matter candidate. Signs of the top squark, and thus supersymmetry, may yet be lurking in the enormous number of proton–proton collisions provided by the LHC.

Two new searches

The ATLAS collaboration recently released two new searches, each looking to detect pairs of top squarks by exploring the full LHC dataset corresponding to an integrated luminosity of 139 fb–1 recorded during Run 2. Each top squark decays to a top quark and an LSP that escapes the detector without interacting. Thus, our experimental signature is an event that is energetically unbalanced, with two sets of top-quark remnants and a large amount of missing energy.
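In code terms, the “unbalanced” signature is just a non-zero vector sum of visible transverse momenta – a toy sketch with illustrative numbers, not ATLAS software:

```python
import math

def missing_et(visible):
    """Missing transverse momentum [GeV] from the (px, py) components of
    the visible objects: invisible LSPs leave the vector sum unbalanced."""
    mpx = -sum(px for px, _ in visible)
    mpy = -sum(py for _, py in visible)
    return math.hypot(mpx, mpy)

# e.g. two b-jets and a soft lepton recoiling against an unseen LSP pair
print(f"ETmiss = {missing_et([(55.0, 10.0), (-20.0, 35.0), (-5.0, -12.0)]):.1f} GeV")
```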

A challenge for such searches is that the masses of the supersymmetric particles are unknown, leaving a large range of possibilities to explore. Depending on the mass difference between the top squark and the LSP, the final decay products can be (very) soft or (very) energetic, calling for different reconstruction techniques and sparking the development of new approaches. For example, novel “soft b-tagging” techniques, based on either pure secondary-vertex information or jets built from tracks, were implemented in these analyses to extend the sensitivity to lower kinematic regimes, allowing the searches to probe small top squark–LSP mass differences down to 5 GeV for the first time.

Leptoquark decays would exhibit a similar experimental signature to top-squark decays

Other sophisticated analysis strategies, including the use of machine-learning techniques, improved the discrimination between the signal and Standard-Model background and maximised the sensitivity of the analysis. Furthermore, these two searches are designed in such a way as to fully complement one another. Together they greatly extend the reach in the top squark mass versus LSP mass plane, including the challenging region where the top squark masses are very close to the top mass (figure 1). No evidence of new physics was found in any of these searches.

Beyond supersymmetry, these search results are intriguing for other new-physics scenarios. For example, the decay of a hypothetical leptoquark into a top quark and a neutrino would exhibit an experimental signature similar to a top-squark decay. The results also constrain non-supersymmetric models in which dark matter is produced together with a pair of top quarks.

Circular colliders eye Higgs self-coupling https://cerncourier.com/a/circular-colliders-eye-higgs-self-coupling/ Fri, 08 May 2020 – Alain Blondel and Panagiotis Charitos report on developments at the third FCC Physics and Experiments Workshop.

Coupling correlations

Physics beyond the Standard Model must exist, to account for dark matter, the smallness of neutrino masses and the dominance of matter over antimatter in the universe; but we have no real clue of its energy scale. It is also widely recognised that new and more precise tools will be needed to be certain that the 125 GeV boson discovered in 2012 is indeed the particle postulated by Brout, Englert, Higgs and others – the quantum of the field that, thanks to its self-coupling, modified the vacuum potential of the whole universe, liberating the energy that gives the W and Z bosons their masses.

To tackle these big questions, and others, the Future Circular Collider (FCC) study, launched in 2014, proposed the construction of a new 100 km circular tunnel to first host an intensity-frontier 90 to 365 GeV e+e collider (FCC-ee), and then an energy-frontier (> 100 TeV) hadron collider, which could potentially also allow electron–hadron collisions. Potentially following the High-Luminosity LHC in the late 2030s, FCC-ee would provide 5 × 1012 Z decays – over five orders of magnitude more than the full LEP era – followed by 108 W pairs, 106 Higgs bosons (ZH events) and 106 top-quark pairs. In addition to providing the highest parton centre-of-mass energies foreseeable today (up to 40 TeV), FCC-hh would also produce more than 1013 top quarks and W bosons, and 50 billion Higgs bosons per experiment.

Rising to the challenge

Following the publication of the four-volume conceptual design report and submissions to the European strategy discussions, the third FCC Physics and Experiments Workshop was held at CERN from 13 to 17 January, gathering more than 250 participants for 115 presentations, and establishing a considerable programme of work for the coming years. Special emphasis was placed on the feasibility of theory calculations matching the experimental precision of FCC-ee. The theory community is rising to the challenge. To reach the required precision at the Z-pole, three-loop calculations of quantum electroweak corrections must include all the heavy Standard Model particles (W±, Z, H, t).

In parallel, a significant focus of the meeting was on detector designs for FCC-ee, with the aim of forming experimental proto-collaborations by 2025. The design of the interaction region allows for a beam vacuum tube of 1 cm radius in the experiments – a very promising condition for vertexing, lifetime measurements and the separation of bottom and charm quarks from light-quark and gluon jets. Elegant solutions have been found to bring the final-focus magnets close to the interaction point, using either standard quadrupoles or a novel magnet design using a superposition of off-axis (“canted”) solenoids. Delegates discussed solutions for vertexing, tracking and calorimetry during a Z-pole run at FCC-ee, where data acquisition and trigger electronics would be confronted with visible Z decays at 70 kHz, all of which would have to be recorded in full detail. A new subject was π/K/p identification at energies from 100 MeV to 40 GeV – a consequence of the strategy process, during which considerable interest was expressed in the flavour-physics programme at FCC-ee.

Physicists cannot refrain from investigating improvements

The January meeting showed that physicists cannot refrain from investigating improvements, in spite of the impressive statistics offered by the baseline design of FCC-ee. Increasing the number of interaction points from two to four is a promising way to nearly double the total delivery of luminosity for little extra power consumption, but construction costs and compatibility with a possible subsequent hadron collider must be determined. A bolder idea discussed at the workshop aims to improve both luminosity (by a factor of 10) and energy reach (perhaps up to 600 GeV), by turning FCC-ee into a 100 km energy-recovery linac. The cost, and how well this would actually work, are yet to be established. Finally, a tantalising possibility is to produce the Higgs boson directly in the s-channel: e+e → H, sitting exactly at a centre-of-mass energy equal to that of the Higgs boson. This would allow unique access to the tiny coupling of the Higgs boson to the electron. As the Higgs width (4.2 MeV in the Standard Model) is more than 20 times smaller than the natural energy spread of the beam, this would require a beam manipulation called monochromatisation and a careful running procedure, which a task force was nominated to study.
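The dilution that the monochromatisation scheme must fight can be estimated by smearing a Breit–Wigner resonance of width 4.2 MeV with a Gaussian beam spread – a toy sketch assuming a ~100 MeV centre-of-mass spread, roughly consistent with the “more than 20 times” statement above:

```python
import numpy as np

# Toy estimate of how beam energy spread dilutes s-channel e+e- -> H
# production: convolve a Breit-Wigner of width Gamma_H with a Gaussian
# spread in sqrt(s). The 100 MeV spread is an illustrative assumption.
GAMMA_H = 4.2e-3   # SM Higgs width [GeV]
SIGMA_W = 0.100    # assumed centre-of-mass energy spread [GeV]

w = np.linspace(-1.0, 1.0, 200001)                        # offset from m_H [GeV]
bw = (GAMMA_H / (2 * np.pi)) / (w**2 + GAMMA_H**2 / 4)    # unit-area Breit-Wigner
gauss = np.exp(-w**2 / (2 * SIGMA_W**2)) / (SIGMA_W * np.sqrt(2 * np.pi))

# Peak cross section is diluted by the convolution evaluated on resonance,
# relative to the undiluted Breit-Wigner peak value 2/(pi*Gamma_H).
dilution = np.trapz(bw * gauss, w) / (2 / (np.pi * GAMMA_H))
print(f"peak dilution ~ {dilution:.3f}")   # ~0.03: only a few percent survives
```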

The ability to precisely probe the self-coupling of the Higgs boson is the keystone of the FCC physics programme. As noted above, this self-interaction is the key to the electroweak phase transition, and could have important cosmological implications. Building on the solid foundation of precise and model-independent measurements of Higgs couplings at FCC-ee, FCC-hh would be able to access the Hμμ, Hγγ, HZγ and Htt couplings at sub-percent precision. Further study of double Higgs production at FCC-hh shows that a measurement of the Higgs self-coupling could be done with a statistical precision of a couple of percent with the full statistics – which is to say that after the first few years of running the precision will already have been reduced to below 10%. This is much faster than previously realised, and definitely constituted the highlight of the workshop.

First foray into CP symmetry of top-Higgs interactions https://cerncourier.com/a/first-foray-into-cp-symmetry-of-top-higgs-interactions/ Mon, 04 May 2020 – The ATLAS and CMS collaborations have obtained new insights into the charge-parity structure of top-Higgs interactions.

One of the many doors to new physics that have been opened by the discovery of the Higgs boson concerns the possibility of finding charge-parity violation (CPV) in Higgs-boson interactions. Were CPV to be observed in the Higgs sector, it would be an unambiguous indication of physics beyond the Standard Model (SM), and could have important ramifications for understanding the baryon asymmetry of the universe. Recently, the ATLAS and CMS collaborations reported their first forays into this area by measuring the CP-structure of interactions between the Higgs boson and top quarks.

While CPV is well established in the weak interactions of quarks (most recently in the charm system by the LHCb collaboration), and is explained in the SM by the existence of a phase in the CKM matrix, the amount of CPV observed is many orders of magnitude too small to account for the observed cosmological matter-antimatter imbalance. Searching for additional sources of CPV is a major programme in particle physics, with a moderate-significance suggestion of CPV in lepton interactions recently announced by the T2K collaboration. It is likely that sources of CPV from phenomena beyond the scope of the SM are needed, and the detailed properties of the Higgs sector are one of several possible hiding places.

Based on the full LHC Run 2 dataset, ATLAS and CMS studied events where the Higgs boson is produced in association with one or two top quarks before decaying into two photons. The latter (ttH) process, which accounts for around 1% of the Higgs bosons produced at the LHC, was observed by both collaborations in 2018. But the tH production channel is predicted to be about six times rarer, owing to destructive interference between the diagrams in which the Higgs boson is radiated from the top quark and those in which it is radiated from the W boson. This makes the tH process particularly sensitive to new-physics effects.

Exploring the CP properties of these interactions is non-trivial

According to the SM, the Higgs boson is “CP-even” – that is, it is possible to rotate away any CP-odd phase from the scalar mass term. Previous probes of the interaction between the Higgs and vector bosons by CMS and ATLAS support the CP-even nature of the Higgs boson, determining its quantum numbers to be most consistent with JPC = 0++, though small CP-odd contributions from a more complex coupling structure are not excluded. The presence of a CP-odd component, together with the dominant CP-even one, would imply CPV, altering the kinematic properties of the ttH process and modifying tH production. Exploring the CP properties of these interactions is non-trivial, and requires the full capacities of the detectors and analysis techniques.
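A common way to parameterise the possibilities being tested (the notation varies between the two analyses; κt and α here are generic labels, with α = 0 the pure CP-even SM case and α = 90° pure CP-odd) is:

```latex
% CP-mixed top-quark Yukawa coupling: alpha is the CP-mixing angle,
% kappa_t an overall coupling modifier; alpha = 0 reproduces the SM.
\mathcal{L}_{t\bar{t}H} \;=\; -\,\frac{y_t}{\sqrt{2}}\,\kappa_t\,
  \bar{t}\left(\cos\alpha + i\,\gamma_5\sin\alpha\right)t\,H
```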

The collaborations employed machine-learning (boosted decision tree) algorithms to disentangle the relative fractions of the CP-even and CP-odd components of top-Higgs interactions. The CMS collaboration observed ttH production at a significance of 6.6σ, and excluded a pure CP-odd structure of the top-Higgs Yukawa coupling at 3.2σ. The ratio of the measured ttH production rate to the predicted rate was found by CMS to be 1.38, with an uncertainty of about 25%. ATLAS data also show agreement with the SM. Assuming a CP-even coupling, ATLAS observed ttH with a significance of 5.2σ. Comparing the strength of the CP-even and CP-odd components, the collaboration favours a CP-mixing angle very close to 0 (indicating no CPV) and excludes a pure CP-odd coupling at 3.9σ. ATLAS did not observe tH production, setting an upper limit on its rate of 12 times the SM expectation.

In addition to further probing the CP properties of the top–Higgs interaction with larger data samples, ATLAS and CMS are searching in other Higgs-boson interactions for signs of CPV.

First physics for Belle II https://cerncourier.com/a/first-physics-for-belle-ii/ Fri, 10 Apr 2020 – The collaboration scoured four months of electron-positron collisions at SuperKEKB for evidence of invisibly decaying Z′ bosons.


The Belle II collaboration at the SuperKEKB collider in Japan has published its first physics analysis: a search for Z′ bosons, which are hypothesised to couple the Standard Model (SM) with the dark sector. The team scoured four months of data from a pilot run in 2018 for evidence of invisibly decaying Z′ bosons in the process e+e→μ+μZ′, and for lepton-flavour-violating Z′ bosons in e+e→e±μZ′, by looking for missing energy recoiling against two clean lepton tracks. “This is the first ever search for the process e+e→μ+μZ′ where the Z′ decays invisibly,” says Belle II spokesperson Toru Iijima of Nagoya University.

The team did not find any excess of events, yielding preliminary sensitivity to the coupling g′ in the so-called Lμ−Lτ extension of the SM, wherein the Z′ couples only to muon and tau-lepton flavoured SM particles and the dark sector. This model also has the potential to explain anomalies in b → sμ+μ decays reported by LHCb and the longstanding muon g-2 anomaly, claims the team.
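In the gauged Lμ−Lτ scenario the Z′ couples to the difference of muon- and tau-family lepton number, so couplings to electrons vanish at tree level – schematically (a sketch of the standard form, with g′ the new gauge coupling):

```latex
% Z' interactions in the gauged L_mu - L_tau extension: only second- and
% third-generation leptons and their neutrinos are charged, with opposite
% signs, so couplings to electrons arise only at loop level.
\mathcal{L}_{Z'} \;=\; g'\,Z'_{\beta}\left(
  \bar{\mu}\gamma^{\beta}\mu \;-\; \bar{\tau}\gamma^{\beta}\tau
  \;+\; \bar{\nu}_{\mu}\gamma^{\beta}P_{L}\nu_{\mu}
  \;-\; \bar{\nu}_{\tau}\gamma^{\beta}P_{L}\nu_{\tau}\right)
```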


The results come a little over a year after the first collisions were recorded in the fully instrumented Belle II detector on 25 March 2019. Following in the footsteps of Belle at the KEKB facility, the new SuperKEKB b-factory plans to achieve a 40-fold increase over the luminosity of its predecessor, which ran from 1999 to 2010. First turns were achieved in February 2016, and first collisions between its asymmetric-energy electron and positron beams in April 2018. The machine has now reached a luminosity of 1.4 × 1034 cm-2 s-1 and is currently integrating around 0.7 fb-1 each day, exceeding the peak luminosity of the former PEP-II/BaBar facility at SLAC, notes Iijima.

By summer the team aims to exceed the Belle/KEKB record of 2.1 × 1034 cm-2 s-1 by implementing a nonlinear “crab waist” focusing scheme. First used at the electron-positron collider DAΦNE at INFN Frascati, and not to be confused with the crab-crossing technology used to boost the luminosity at KEKB and planned for the high-luminosity LHC, the scheme stabilises e+e beam-beam blowup using carefully tuned sextupole magnets located symmetrically on either side of the interaction point. “The 100 fb-1 sample which we plan to integrate by summer will allow us to provide our first interesting results in B physics,” says Tom Browder of the University of Hawaii, who was Belle II spokesperson until last year.

Flavour debut

Belle II will make its debut in flavour physics at a vibrant moment, complementing efforts to resolve hints of anomalies seen at the LHC, such as the recent test of lepton-flavour universality in beauty-baryon decays by the LHCb collaboration.

We will then look for the star attraction of the dark sector, the dark photon

Tom Browder

As well as updating searches for invisible decays of the Z′ with one to two orders of magnitude more data, Belle II will now conduct further dark-sector studies, including a search for axion-like particles decaying to two photons, Z′ decays to visible final states and dark-Higgstrahlung with a μ+μ pair and missing energy, explains Browder. “We will then look for the star attraction of the dark sector, the dark photon, with the difficult signature of e+e to a photon and nothing else.”

Anomalies persist in flavour-changing B decays https://cerncourier.com/a/anomalies-persist-in-flavour-changing-b-decays/ Wed, 11 Mar 2020 – Updated measurements by LHCb of the angular distributions of a rare neutral B meson decay add fresh intrigue to the flavour puzzle.

The distribution of the angular variable P5′ as a function of the mass squared of the muon pair, q2. The LHCb Run 1 results (red), those from the additional 2016 dataset only (blue), and those from both datasets (black) are shown along with the SM predictions (orange). Credit: LHCb

The LHCb collaboration has confirmed previous hints of odd behaviour in the way B mesons decay into a K* and a pair of muons, bringing fresh intrigue to the pattern of flavour anomalies that has emerged during the past few years. At a seminar at CERN on 10 March, Eluned Smith of RWTH Aachen University presented an updated analysis of the angular distributions of B0→K*0μ+μ decays, based on around twice as many events as were used for the collaboration’s previous measurement reported in 2015. The result reveals a mild increase in the overall tension with the Standard Model (SM) prediction, though, at 3.3σ, more data are needed to determine the source of the effect.

The B0→K*0μ+μ decay is a promising system with which to explore physics beyond the SM. A flavour-changing neutral-current process, it involves a quark transition (b→s) which is forbidden at the lowest perturbative order in the SM, and therefore occurs only around once for every million B decays. The decay proceeds instead via higher-order penguin and box processes, which are sensitive to the presence of new, heavy particles. Such particles would enter in competing processes and could significantly change the B0→K*0μ+μ decay rate and the angular distribution of its final-state particles. Measuring angular distributions as a function of the invariant mass squared (q2) of the muon pair is of particular interest because it is possible to construct variables that depend less on hadronic modelling uncertainties.
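In the effective-field-theory language usually applied to these decays (a sketch; normalisation conventions vary between papers), the short-distance physics is packaged into Wilson coefficients, and it is the vector and axial-vector coefficients C9 and C10 that the global fits pull on:

```latex
% Fragment of the b -> s mu+ mu- effective Hamiltonian: heavy new particles
% in the loop would shift C_9 and C_10 away from their SM values.
\mathcal{H}_{\rm eff} \;\supset\; -\,\frac{4G_F}{\sqrt{2}}\,V_{tb}V_{ts}^{*}\,
  \frac{e^{2}}{16\pi^{2}}\Big[
  C_{9}\,(\bar{s}\gamma_{\beta}P_{L}b)(\bar{\mu}\gamma^{\beta}\mu)
  \;+\; C_{10}\,(\bar{s}\gamma_{\beta}P_{L}b)(\bar{\mu}\gamma^{\beta}\gamma_{5}\mu)
  \Big] \;+\; {\rm h.c.}
```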

Potentially anomalous behaviour in an angular variable called P5′ came to light in 2013, when LHCb reported a 3.7σ local deviation with respect to the SM in one q2 bin, based on 1 fb-1 of data. In 2015, a global fit of different angular distributions of the B0→K*0μ+μ decays using the total Run 1 data sample of 3 fb-1 reaffirmed the puzzle, showing discrepancies of 3.4σ (later reduced to 3.0σ when using new theory calculations with an updated description of potentially large hadronic effects). In 2016, the Belle experiment at KEK in Japan performed its own angular analysis of B0→K*0μ+μ using data from electron–positron collisions and found a 2.1σ deviation in the same direction and in the same q2 region as the LHCb anomaly.

We as a community have been eagerly waiting for this measurement and LHCb has not disappointed

Jure Zupan

The latest LHCb result includes additional Run 2 data collected during 2016, corresponding to a total integrated luminosity of 4.7fb-1. It shows that the local tension of P5′ in two q2 bins between 4 and 8 GeV2/c4 reduces from 2.8 and 3.0σ, as observed in the previous analysis, to 2.5 and 2.9σ. However, a global fit to several angular observables shows that the overall tension with the SM increases from 3.0 to 3.3σ. The results of the fit also find a better overall agreement with predictions of new-physics models that contain additional vector or axial-vector contributions. However, the collaboration also makes it clear that the discrepancy could be explained by an unexpectedly large hadronic effect that is not accounted for in the SM predictions.

“We as a community have been eagerly waiting for this measurement and LHCb has not disappointed,” says theorist Jure Zupan of the University of Cincinnati. “The new measurements have moved closer to the SM predictions in the angular observables so that the combined significance of the excess remained essentially the same. It is thus becoming even more important to understand well and scrutinise the SM predictions and the claimed theory errors.”

Flavour puzzle
The latest result makes LHCb’s continued measurements of lepton-flavour universality even more important, he says. In recent years, LHCb has also found that the ratio of the rates of muonic and electronic B decays departs from the SM prediction, suggesting a violation of the key SM principle of lepton-flavour universality. Though not individually statistically significant, the measurements are theoretically very clean, and the most striking departure – in the variable known as RK – concerns B decays that proceed via the same b→s transition as B0→K*0μ+μ. This has led physicists to speculate that the two effects could be caused by the same new physics, with models involving leptoquarks or new gauge bosons in principle able to accommodate both sets of anomalies.

An update on RK based on additional Run 2 data is hotly anticipated, and the collaboration is also planning to add data from 2017-18 to the B0→K*0μ+μ angular analysis, as well as working on further analyses with b-quark transitions in mesons. LHCb also recently brought the decays of beauty baryons, which also depend on b→s transitions, to bear on the subject. Departures from the norm have also been spotted in B decays to D mesons, which involve tree-level b→c quark transitions. Such decays probe lepton-flavour universality via comparisons between tau leptons and muons and electrons but, as with RK, the individual measurements are not highly significant.

“We have not seen evidence of new physics, but neither were the B physics anomalies ruled out,” says Zupan of the LHCb result. “The wait for the clear evidence of new physics continues.”

LHC at 10: the physics legacy https://cerncourier.com/a/lhc-at-10-the-physics-legacy/ Mon, 09 Mar 2020 – The LHC’s physics programme has transformed our understanding of elementary particles, writes Michelangelo Mangano.

Ten years have passed since the first high-energy proton–proton collisions took place at the Large Hadron Collider (LHC). Almost 20 more are foreseen for the completion of the full LHC programme. The data collected so far, from approximately 150 fb–1 of integrated luminosity over two runs (Run 1 at a centre-of-mass energy of 7 and 8 TeV, and Run 2 at 13 TeV), represent a mere 5% of the anticipated 3000 fb–1 that will eventually be recorded. But already their impact has been monumental.


Three major conclusions can be drawn from these first 10 years. First and foremost, Run 1 has shown that the Higgs boson – the previously missing, last ingredient of the Standard Model (SM) – exists. Secondly, the exploration of energy scales as high as several TeV has further consolidated the robustness of the SM, providing no compelling evidence for phenomena beyond the SM (BSM). Nevertheless, several discoveries of new phenomena within the SM have emerged, underscoring the power of the LHC to extend and deepen our understanding of the SM dynamics, and showing the unparalleled diversity of phenomena that the LHC can probe with unprecedented precision.

Exceeding expectations

Last but not least, we note that 10 years of LHC operations, data taking and data interpretation have overwhelmingly surpassed all of our most optimistic expectations. The accelerator has delivered a larger than expected luminosity, and the experiments have been able to operate at the top of their ideal performance and efficiency. Computing, in particular via the Worldwide LHC Computing Grid, has been another crucial driver of the LHC’s success. Key ingredients of precision measurements, such as the determination of the LHC luminosity, and of detection efficiencies and backgrounds using data-driven techniques, have been obtained to a precision beyond anyone’s expectations, thanks to novel and powerful methods. The LHC has also successfully provided a variety of beam and optics configurations, matching the needs of different experiments and supporting a broad research programme. In addition to the core high-energy goals of the ATLAS and CMS experiments, this has enabled new studies of flavour physics and of hadron spectroscopy, of forward-particle production and total hadronic cross sections. The operations with beams of heavy nuclei have reached a degree of virtuosity that made it possible to collide not only the anticipated lead beams, but also beams of xenon, as well as combined proton–lead, photon–lead and photon–photon collisions, opening the way to a new generation of studies of matter at high density.

Figure 1

Theoretical calculations have evolved in parallel to the experimental progress. Calculations that were deemed of impossible complexity before the start of the LHC have matured and become reality. Next-to-leading-order (NLO) theoretical predictions are routinely used by the experiments, thanks to a new generation of automatic tools. The next frontier, next-to-next-to-leading order (NNLO), has been attained for many important processes, reaching, in a few cases, the next-to-next-to-next-to-leading order (N3LO), and more is coming.

Aside from having made these first 10 years an unconditional success, all these ingredients are the premise for confident extrapolations of the physics reach of the LHC programme to come.

To date, more than 2700 peer-reviewed physics papers have been published by the seven running LHC experiments (ALICE, ATLAS, CMS, LHCb, LHCf, MoEDAL and TOTEM). Approximately 10% of these are related to the Higgs boson, and 30% to searches for BSM phenomena. The remaining 1600 or so report measurements of SM particles and interactions, enriching our knowledge of the proton structure and of the dynamics of strong interactions, of electroweak (EW) interactions, of flavour properties, and more. In most cases, the variety, depth and precision of these measurements surpass those obtained by previous experiments using dedicated facilities. The multi-purpose nature of the LHC complex is unique, and encompasses scores of independent research directions. Here it is only possible to highlight a fraction of the milestone results from the LHC’s expedition so far.

Entering the Higgs world

The discovery by ATLAS and CMS of a new scalar boson in July 2012, just two years into LHC physics operations, was a crowning early success. Not only did it mark the end of a decades-long search, but it opened a new vista of exploration. At the time of the discovery, very little was known about the properties and interactions of the new boson. Eight years on, the picture has come into much sharper focus.

The structure of the Higgs-boson interactions revealed by the LHC experiments is still incomplete. Its couplings to the gauge bosons (W, Z, photon and gluons) and to the heavy third-generation fermions (bottom and top quarks, and tau leptons) have been detected, and the precision of these measurements is at best in the range of 5–10%. But the LHC findings so far have been key to establishing that this new particle correctly embodies the main observational properties of the Higgs boson, as specified by the Brout–Englert–Guralnik–Hagen–Higgs–Kibble EW-symmetry-breaking mechanism, referred to hereafter as “BEH”, a cornerstone of the SM. To start with, the measured couplings to the W and Z bosons reflect the Higgs’ EW charges and are proportional to the W and Z masses, consistent with the properties of a scalar field breaking the SM EW symmetry. The mass dependence of the Higgs interactions with the SM fermions is confirmed by the recent ATLAS and CMS observations of the H → bb and H → ττ decays, and of the associated production of a Higgs boson together with a tt quark pair (see figure 1).

Figure 2

These measurements, which during Run 2 of the LHC have surpassed the five-sigma confidence level, provide the second critical confirmation that the Higgs fulfills the role envisaged by the BEH mechanism. The Higgs couplings to the photon and the gluon (g), which the LHC experiments have probed via the H → γγ decay and the gg → H production, provide a third, subtler test. These couplings arise from a combination of loop-level interactions with several SM particles, whose interplay could be modified by the presence of BSM particles, or interactions. The current agreement with data provides a strong validation of the SM scenario, while leaving open the possibility that small deviations could emerge from future higher statistics.

The process of firmly establishing the identification of the particle discovered in 2012 with the Higgs boson goes hand-in-hand with two research directions pioneered by the LHC: seeking the deep origin of the Higgs field and using the Higgs boson as a probe of BSM phenomena.

The breaking of the EW symmetry is a fact of nature, requiring the existence of a mechanism like BEH. But, if we aim beyond a merely anthropic justification for this mechanism (i.e. that, without it, physicists wouldn’t be here to ask why), there is no reason to assume that nature chose its minimal implementation, namely the SM Higgs field. In other words: where does the Higgs boson detected at the LHC come from? This summarises many questions raised by the possibility that the Higgs boson is not just “put in by hand” in the SM, but emerges from a larger sector of new particles, whose dynamics induces the breaking of the EW symmetry. Is the Higgs elementary, or a composite state resulting from new confining forces? What generates its mass and self-interaction? More generally, is the existence of the Higgs boson related to other mysteries, such as the origin of dark matter (DM), of neutrino masses or of flavour phenomena?

The Higgs boson is becoming an increasingly powerful exploratory tool to probe the origin of the Higgs itself

Ever since the Higgs-boson discovery, the LHC experiments have been searching for clues to address these questions, exploring a large number of observables. All of the dominant production channels (gg fusion, associated production with vector bosons and with top quarks, and vector-boson fusion) have been discovered, and decay rates to WW, ZZ, γγ, bb and ττ were measured. A theoretical framework (effective field theory, EFT) has been developed to interpret in a global fashion all these measurements, setting strong constraints on possible deviations from the SM. With the larger data set accumulated during Run 2, the production properties of the Higgs have been studied with greater detail, simultaneously testing the accuracy of theoretical calculations, and the resilience of SM predictions.
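Schematically, the EFT framework organises possible deviations as an expansion in higher-dimensional operators suppressed by a new-physics scale Λ; the global fits constrain the coefficients ci (a sketch of the standard form):

```latex
% SMEFT expansion: the O_i are dimension-six operators built from SM fields;
% measurements bound the combinations c_i / Lambda^2.
\mathcal{L}_{\rm EFT} \;=\; \mathcal{L}_{\rm SM}
  \;+\; \sum_{i}\frac{c_i}{\Lambda^{2}}\,\mathcal{O}_{i}^{(6)}
  \;+\; \mathcal{O}\!\left(\Lambda^{-4}\right)
```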

Figure 3

To explore the nature of the Higgs boson, what has not been seen as yet can be as important as what was seen. For example, lack of evidence for Higgs decays to the fermions of the first and second generation is consistent with the SM prediction that these should be very rare. The H → μμ decay rate is expected to be about 3 × 10–3 times that of H → ττ; the current sensitivity falls short of it by a factor of two, and ATLAS and CMS hope to first observe this decay during the forthcoming Run 3, testing for the first time the couplings of the Higgs boson to second-generation fermions. The SM Higgs boson is expected to conserve flavour, making decays such as H → μτ, H → eτ or t → Hc too small to be seen. Their observation would be a major revolution in physics, but no evidence has shown up in the data so far. Decays of the Higgs to invisible particles could be a signal of DM candidates, and constraints set by the LHC experiments are complementary to those from standard DM searches. Several BSM theories predict the existence of heavy particles decaying to a Higgs boson. For example, heavy top partners, T, could decay as T → Ht, and heavy bosons X decay as X → HV (V = W, Z). Heavy scalar partners of the Higgs, such as charged Higgs states, are expected in theories such as supersymmetry. Extensive and thorough searches of all these phenomena have been carried out, setting strong constraints on SM extensions.

As the programme of characterising the Higgs properties continues, with new challenging goals such as the measurement of the Higgs self-coupling through the observation of Higgs pair production, the Higgs boson is becoming an increasingly powerful exploratory tool to probe the origin of the Higgs itself, as well as a variety of solutions to other mysteries of particle physics.

Interactions weak and strong

The vast majority of LHC processes are controlled by strong interactions, described by the quantum-chromodynamics (QCD) sector of the SM. The predictions of production rates for particles like the Higgs or gauge bosons, top quarks or BSM states, rely on our understanding of the proton structure, in particular of the energy distribution of its quark and gluon components (the parton distribution functions, PDFs). The evolution of the final states, the internal structure of the jets emerging from quark and gluons, the kinematical correlations between different objects, are all governed by QCD. LHC measurements have been critical, not only to consolidate our understanding of QCD in all its dynamical domains, but also to improve the precision of the theoretical interpretation of data, and to increase the sensitivity to new phenomena and to the production of BSM particles.
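In practice, the PDF knowledge distilled from such measurements is distributed as global-fit sets through the LHAPDF library; a minimal sketch of how a prediction begins (assuming the Python bindings and a set such as CT18NNLO are installed):

```python
# Evaluate the gluon PDF at a scale typical of Higgs production with the
# LHAPDF Python bindings; "CT18NNLO" is one example of a modern global fit.
import lhapdf

pdf = lhapdf.mkPDF("CT18NNLO")      # load member 0 of the set
x, q = 0.01, 125.0                  # momentum fraction and scale [GeV]
xg = pdf.xfxQ(21, x, q)             # returns x*g(x, Q); 21 is the gluon PDG ID
print(f"x*g(x={x}, Q={q} GeV) = {xg:.3f}")
```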

Collisions galore

Approximately 109 proton–proton (pp) collisions take place each second inside the LHC detectors. Most of them bear no obvious direct interest for the search for BSM phenomena, but even simple elastic collisions, pp → pp, which account for about 30% of this rate, still defy a full first-principles QCD description. The ATLAS ALFA spectrometer and the TOTEM detector have studied these high-rate processes, measuring the total and elastic pp cross sections at the various beam energies provided by the LHC. The energy dependence of the relation between the real and imaginary parts of the pp forward-scattering amplitude has revealed new features, possibly described by the exchange of the so-called odderon, a coherent state of three gluons predicted in the 1970s.

Figure 4

The structure of the final states in generic pp collisions, aside from defining the large background of particles that are superimposed on the rarer LHC processes, is of potential interest to understand cosmic-ray (CR) interactions in the atmosphere. The LHCf detector measured the forward production of the most energetic particles from the collision, those driving the development of the CR air showers. These data are a unique benchmark to tune the CR event generators, reducing the systematics in the determination of the nature of the highest-energy CR constituents (protons or heavy nuclei?), a step towards solving the puzzle of their origin.

On the opposite end of the spectrum, rare events with dijet pairs of mass up to 9 TeV have been observed by ATLAS and CMS. The study of their angular distribution, a Rutherford-like scattering experiment, has confirmed the point-like nature of quarks, down to 10–18 cm. The overall set of production studies, including gauge bosons, jets and top quarks, underpins countless analyses. Huge samples of top-quark pairs, produced at 15 Hz, enable the surgical scrutiny of this mysteriously heavy quark, through its production and decays. New reactions, unobservable before the LHC, were first detected. Gauge-boson scattering (e.g. W+W+ → W+W+), a key probe of electroweak symmetry breaking proposed in the 1970s, is just one example. By and large, all data show an extraordinary agreement with theoretical predictions resulting from decades of innovative work (figure 2). Global fits to these data refine the proton PDFs, improving the predictions for the production of Higgs bosons or BSM particles.

The cross sections σ of W and Z bosons provide the most precise QCD measurements, reaching a 2% systematic uncertainty, dominated by the luminosity uncertainty. Ratios such as σ(W+)/σ(W–) or σ(W)/σ(Z), and the shapes of differential distributions, are known to a few parts in 1000. These data challenge the accuracy of the theoretical calculations, and require caution to assess whether small discrepancies are due to PDF effects, new physics or still-imprecise QCD calculations.

Precision is the keystone to consolidate our description of nature

As already mentioned, the success of the LHC owes a lot to its variety of beam and experimental conditions. In this context, the data at the different centre-of-mass energies provided in the two runs are a huge bonus, since the theoretical prediction for the energy-dependence of rates can be used to improve the PDF extraction, or to assess possible BSM interpretations. The LHCb data, furthermore, cover a forward kinematical region complementary to that of ATLAS and CMS, adding precious information.

The precise determination of the W and Z production and decay kinematics has also allowed new measurements of fundamental parameters of the weak interaction: the W mass (mW) and the weak mixing angle (sinθW). The measurement of sinθW is now approaching the precision inherited from the LEP experiments and SLD, and will soon improve to shed light on the outstanding discrepancy between those two measurements. The mW precision obtained by the ATLAS experiment, ΔmW = 19 MeV, is the best worldwide, and further improvements are certain. The combination with the ATLAS and CMS measurements of the Higgs boson mass (ΔmH ≅ 200 MeV) and of the top quark mass (Δmtop ≲ 500 MeV), provides a strong validation of the SM predictions (see figure 3). For both mW and sinθW the limiting source of systematic uncertainty is the knowledge of the PDFs, which future data will improve, underscoring the profound interplay among the different components of the LHC programme.
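The reason these three masses over-constrain the SM can be seen from the on-shell relation used in electroweak fits (a standard textbook form; Δr collects the loop corrections, which depend quadratically on mtop and logarithmically on mH):

```latex
% Electroweak consistency relation: with alpha, G_F and m_Z measured
% precisely, m_W is predicted once Delta r (sensitive to m_top and m_H)
% is computed, so the measured m_W, m_top and m_H test the SM.
m_W^{2}\left(1-\frac{m_W^{2}}{m_Z^{2}}\right)
  \;=\; \frac{\pi\alpha}{\sqrt{2}\,G_F}\left(1+\Delta r\right)
```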

QCD matters

The understanding of the forms and phases that QCD matter can acquire is a fascinating, broad and theoretically challenging research topic, which has witnessed great progress in recent years. Exotic multi-quark bound states, beyond the usual mesons (qq) and baryons (qqq), were initially discovered at e+e colliders. The LHCb experiment, with its large rates of identified charm and bottom final states, is at the forefront of these studies, notably with the first discovery of heavy pentaquarks (qqqcc) and with discoveries of tetraquark candidates in the charm sector (qccq), accompanied by determinations of their quantum numbers and properties. These findings have opened a new playground for theoretical research, stimulating work in lattice QCD, and forcing a rethinking of established lore.

Figure 5

The study of QCD matter at high density is the core task of the heavy-ion programme. While initially tailored to the ALICE experiment, all active LHC experiments have since joined the effort. The creation of a quark–gluon plasma (QGP) led to astonishing visual evidence for jet quenching, with 1 TeV jets shattered into fragments as they struggle their way out of the dense QGP volume. The thermodynamics and fluctuations of the QGP have been probed in multiple ways, indicating that the QGP behaves as an almost perfect fluid, the least viscous fluid known in nature. The ability to explore the plasma interactions of charm and bottom quarks is a unique asset of the LHC, thanks to the large production rates, which unveiled new phenomena such as the recombination of charm quarks, and the sequential melting of bb bound states.

While several of the qualitative features of high-density QCD were anticipated, the quantitative accuracy, multitude and range of the LHC measurements have no match. Examples include ALICE’s precise determination of dynamical parameters such as the QGP shear-viscosity-to-entropy-density ratio, or the higher harmonics of particles’ azimuthal correlations. A revolution ensued in the sophistication of the required theoretical modelling. Unexpected surprises were also discovered, particularly in the comparison of high-density states in PbPb collisions with those occasionally generated by smaller systems such as pp and pPb. The presence in the latter of long-range correlations, various collective phenomena and an increased strange baryon abundance (figure 4), resemble behaviour typical of the QGP. Their deep origin is a mysterious property of QCD, still lacking an explanation. The number of new challenging questions raised by the LHC data is almost as large as the number of new answers obtained!

Flavour physics

Understanding the structure and the origin of flavour phenomena in the quark sector is one of the big open challenges of particle physics. The search for new sources of CP violation, beyond those present in the CKM mixing matrix, underlies the efforts to explain the baryon asymmetry of the universe. In addition to flavour studies with Higgs bosons and top quarks, more than 1014 charm and bottom quarks have been produced so far by the LHC, and the recorded subset has led to landmark discoveries and measurements. The rare Bs → μμ decay, with a minuscule rate of approximately 3 × 10–9, has been discovered by the LHCb, CMS and ATLAS experiments. The rarer Bd → μμ decay is still unobserved, but its expected ~10–10 rate is within reach. These two results alone had a big impact on constraining the parameter space of several BSM theories, notably supersymmetry, and their precision and BSM sensitivity will continue improving. LHCb has discovered DD mixing and the long-elusive CP violation in D-meson decays, a first for up-type quarks (figure 5). Large hadronic non-perturbative uncertainties make the interpretation of these results particularly challenging, leaving under debate whether the measured properties are consistent with the SM, or signal new physics. But the experimental findings are a textbook milestone in the worldwide flavour-physics programme.

Figure 6

LHCb produced hundreds more measurements of heavy-hadron properties and flavour-mixing parameters. Examples include the most precise measurement of the CKM angle γ = (74.0 +5.0 –5.8)° and, with ATLAS and CMS, the first measurement of φs, the tiny CP-violation phase of Bs → J/ψϕ, whose precisely predicted SM value is very sensitive to new physics. With a few notable exceptions, all results confirm the CKM picture of flavour phenomena. Those exceptions, however, underscore the power of LHC data to expose new unexpected phenomena: B → D(*)ℓν (ℓ = μ,τ) and B → K(*)ℓ+ℓ– (ℓ = e,μ) decays hint at possible deviations from the expected lepton-flavour universality. The community is eagerly waiting for further developments.

Beyond the Standard Model

Years of model building, stimulated before and after the LHC start-up by the conceptual and experimental shortcomings of the SM (e.g. the hierarchy problem and the existence of DM), have generated scores of BSM scenarios to be tested by the LHC. Evidence has so far escaped hundreds of dedicated searches, setting limits on new particles up to several TeV (figure 6). Nevertheless, much was learned. While none of the proposed BSM scenarios can be conclusively ruled out, for many of them survival is only guaranteed at the cost of greater fine-tuning of the parameters, reducing their appeal. In turn, this led to rethinking the principles that implicitly guided model building. Simplicity, or the ability to explain at once several open problems, have lost some drive. The simplest realisations of BSM models relying on supersymmetry, for example, were candidates to at once solve the hierarchy problem, provide DM candidates and set the stage for the grand unification of all forces. If true, the LHC should have piled up evidence by now. Supersymmetry remains a preferred candidate to achieve that, but at the price of more Byzantine constructions. Solving the hierarchy problem remains the outstanding theoretical challenge. New ideas have come to the forefront, ranging from the Higgs potential being determined by the early-universe evolution of an axion field, to dark sectors connected to the SM via a Higgs portal. These latter scenarios could also provide DM candidates alternative to the weakly-interacting massive particles, which so far have eluded searches at the LHC and elsewhere.

With such rapid evolution of theoretical ideas taking place as the LHC data runs progressed, the experimental analyses underwent a major shift, relying on “simplified models”: a novel model-independent way to represent the results of searches, allowing published results to be later reinterpreted in view of new BSM models. This amplified the impact of experimental searches, with a surge of phenomenological activity and the proliferation of new ideas. The cooperation and synergy between experiments and theorists have never been so intense.

Having explored the more obvious search channels, the LHC experiments refocused on more elusive signatures. Great efforts are now invested in searching corners of parameter space, extracting possible subtle signals from large backgrounds, thanks to data-driven techniques, and to the more reliable theoretical modelling that has emerged from new calculations and many SM measurements. The possible existence of new long-lived particles opened a new frontier of search techniques and of BSM models, triggering proposals for new dedicated detectors (Mathusla, CODEX-b and FASER, the last of which was recently approved for construction and operation in Run 3). Exotic BSM states, like the milli-charged particles present in some theories of dark sectors, could be revealed by MilliQan, a recently proposed detector. Highly ionising particles, like the esoteric magnetic monopoles, have been searched for by the MoEDAL detector, which places plastic tracking films cleverly in the LHCb detector hall.

While new physics is still eluding the LHC, the immense progress of the past 10 years has changed forever our perspective on searches and on BSM model building.

Final considerations

Most of the results only parenthetically cited, like the precision on the mass of the top quark, and others not even quoted, are the outcome of hundreds of person-years of work, and would certainly have deserved more attention here. Their intrinsic value goes well beyond what was outlined, and they will remain long-lasting textbook material, until future work at the LHC and beyond improves them.

Theoretical progress has played a key role in the LHC’s progress, enhancing the scope and reliability of the data interpretation. Further to the developments already mentioned, a deeper understanding of jet structure has spawned techniques to tag high-pT gauge and Higgs bosons, or top quarks, now indispensable in many BSM searches. Innovative machine-learning ideas have become powerful and ubiquitous. This article has concentrated only on what has already been achieved, but the LHC and its experiments have a long journey of exploration ahead.

The terms precision and discovery, applied to concrete results rather than projections, well characterise the LHC 10-year legacy. Precision is the keystone to consolidate our description of nature, increase the sensitivity to SM deviations, give credibility to discovery claims, and to constrain models when evaluating different microscopic origins of possible anomalies. The LHC has already fully succeeded in these goals. The LHC has also proven to be a discovery machine, and in a context broader than just Higgs and BSM phenomena. Altogether, it delivered results that could not have been obtained otherwise, immensely enriching our understanding of nature.

Learning to love anomalies https://cerncourier.com/a/learning-to-love-anomalies/ Fri, 10 Jan 2020 – The 2020s will sort current anomalies in fundamental physics into discoveries or phantoms, says Ben Allanach.

All surprising discoveries were anomalies at some stage

Anomalies, which I take to mean data that disagree with the scientific paradigm of the day, are the bread and butter of phenomenologists working on physics beyond the Standard Model (SM). Are they a mere blip or the first sign of new physics? A keen understanding of statistics is necessary to help decide which “bumps” to work on.
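One reason that judgement is needed: a quoted local significance ignores how many places a bump could have appeared. A toy look-elsewhere estimate (the bin count is illustrative, not tied to any experiment):

```python
# Toy look-elsewhere effect: the chance that at least one of n_bins
# independent mass bins fluctuates above a given local significance.
from scipy.stats import norm

z_local = 3.0                        # local significance of a bump [sigma]
n_bins = 100                         # independent places a bump could appear
p_local = norm.sf(z_local)           # one-sided local p-value
p_global = 1.0 - (1.0 - p_local) ** n_bins
print(f"local {z_local:.1f} sigma -> global {norm.isf(p_global):.2f} sigma")
```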

Take the excess in the rate of di-photon production at a mass of around 750 GeV spotted in 2015 by the ATLAS and CMS experiments. ATLAS had a 4σ peak with respect to background, which CMS seemed to confirm, although its signal was less clear. Theorists produced an avalanche of papers speculating on what the signal might mean but, in the end, the signal was not confirmed in new data. In fact, as is so often the case, the putative signal stimulated some very fruitful work. For example, it was realised that ultra-peripheral collisions between lead ions could produce photon-photon resonances, leading to an innovative and unexpected search programme in heavy-ion physics. Other authors proposed using such collisions to measure the anomalous magnetic moment of the tau lepton, which is expected to be especially sensitive to new physics, and in 2018 ATLAS and CMS found the first evidence for (non-anomalous) high-energy light-by-light scattering in lead-lead ultra-peripheral collisions.

Some anomalies have disappeared during the past decade not primarily because they were statistical fluctuations, but because of an improved understanding of theory. One example is the forward-backward asymmetry (AFB) of top–antitop production at the Tevatron. At large transverse momentum, AFB was measured to be much too large compared to SM predictions, which were at next-to-leading order in QCD with some partial next-to-next-to-leading-order (NNLO) corrections. The complete NNLO corrections, calculated in a Herculean effort, proved to contribute much more than was previously thought, faithfully describing top–antitop production both at the Tevatron and at the LHC.
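For reference, the asymmetry in question is built from the rapidity difference between the top and the antitop, in the convention used at the Tevatron:

$$A_{FB} = \frac{N(\Delta y > 0) - N(\Delta y < 0)}{N(\Delta y > 0) + N(\Delta y < 0)}, \qquad \Delta y = y_t - y_{\bar{t}},$$

so a positive value means that top quarks preferentially follow the proton direction in proton–antiproton collisions.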


Other anomalies are still alive and kicking. Arguably chief among them is the long-standing oddity in the measurement of the anomalous magnetic moment of the muon, which is about 4σ discrepant with the SM prediction. Since it was spotted 20 years ago, many papers have been written in an attempt to explain it, with contributions ranging from supersymmetric particles to leptoquarks. A similarly long-standing anomaly is a 3.8σ excess in the number of electron antineutrinos emerging from a muon–antineutrino beam observed by the LSND experiment and backed up more recently by MiniBooNE. Again, numerous papers attempting to explain the excess, e.g. in terms of the existence of a fourth “sterile” neutrino, have been written, but the jury is still out.

Some anomalies are more recent, and unexpected. The so-called “X17” anomaly reported at a nuclear physics experiment in Hungary, for instance, shows a significant excess in the rate of certain nuclear decays of 8Be and 4He nuclei (see Rekindled Atomki anomaly merits closer scrutiny) which has been interpreted as being due to the creation of a new particle of mass 17 MeV. Though possible theoretically, one needs to work hard to make this new particle not fall afoul of other experimental constraints; confirmation from an independent experiment is also needed. Personally, I am not pursuing this: I think that the best new-physics ideas have already been had by other authors.

When working on an anomaly, beyond-the-SM phenomenologists hypothesise a new particle and/or interaction to explain it, check to see if it works quantitatively, check to see if any other measurements rule the explanation out, then provide new ways in which the idea can be tested. After this, they usually check where the new physics might fit into a larger theoretical structure, which might explain some other mysteries. For example, there are currently many anomalies in measurements of B meson decays, each of which isn’t particularly statistically significant (typically 2–3σ away from the SM) but taken together they form a coherent picture with a higher significance. The exchange of hypothesised Z′ or leptoquark quanta provides working explanations, with the larger structure also shedding light on the pattern of masses of SM fermions, and most of my research time is currently devoted to studying them.
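The way individually modest pulls can combine into a more significant whole can be illustrated with Fisher’s method for independent p-values. Real global fits to the B-decay data use full correlated likelihoods, so the sketch below – with invented significances, not the actual measurements – is only a toy:

import math
from scipy.stats import norm, chi2

# Toy inputs: four independent measurements, each pulling 2-3 sigma
# in the same direction (invented numbers).
sigmas = [2.5, 2.2, 3.1, 2.0]
pvals = [norm.sf(z) for z in sigmas]           # one-sided p-values

# Fisher's method: -2*sum(ln p) follows a chi2 with 2n dof under the null.
stat = -2.0 * sum(math.log(p) for p in pvals)
p_combined = chi2.sf(stat, df=2 * len(sigmas))
print(norm.isf(p_combined))                    # ~4.5 sigma combined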

The coming decade will presumably sort several current anomalies into discoveries, or those that “went away”. Belle II and future LHCb measurements should settle the B anomalies, while the anomalous muon magnetic moment may even be settled this year by the g-2 experiment at Fermilab. Of course, we hope that new anomalies will appear and stick. One anomaly from the late 1990s – that type Ia supernovae appear anomalously dim at large redshifts, implying an accelerating cosmic expansion – turned out to reveal the existence of dark energy and to produce the dominant paradigm of cosmology today. This reminds us that all surprising discoveries were anomalies at some stage.

The post Learning to love anomalies appeared first on CERN Courier.

Opinion The 2020s will sort current anomalies in fundamental physics into discoveries or phantoms, says Ben Allanach. https://cerncourier.com/wp-content/uploads/2020/01/CCJanFeb20_Viewpoint_pencils.jpg
Rekindled Atomki anomaly merits closer scrutiny https://cerncourier.com/a/rekindled-atomki-anomaly-merits-closer-scrutiny/ Fri, 20 Dec 2019 15:52:39 +0000 https://preview-courier.web.cern.ch/?p=85878 A large discrepancy in nuclear decay rates has received new experimental support, generating headlines about the possible existence of a fifth force.


A large discrepancy in nuclear decay rates spotted four years ago in an experiment in Hungary has received new experimental support, generating media headlines about the possible existence of a fifth force of nature.

In 2015, researchers at the Institute of Nuclear Research (“Atomki”) in Debrecen, Hungary, reported a large excess in the angular distribution of e+e– pairs created during nuclear transitions of excited 8Be nuclei to their ground state (8Be* → 8Be γ; γ → e+e–). Significant peak-like enhancement was observed at large angles measured between the e+e– pairs, corresponding to a 6.8σ surplus over the expected e+e– pair-creation from known processes. The excess was soon interpreted by theorists as being due to the possible emission of a new boson X with a mass of 16.7 MeV decaying into e+e– pairs.

In a preprint published in October 2019, the Atomki team has now reported a similar excess of events from the electromagnetically forbidden “M0” transition in 4He nuclei. The anomaly has a statistical significance of 7.2σ and is likely, claim the authors, to be due to the same “X17” particle proposed to explain the earlier 8Be excess.

Quality control

“We were all very happy when we saw this,” says lead author Attila Krasznahorkay. “After the analysis of the data a really significant effect could be observed.” Although not a fully blinded analysis, Krasznahorkay says the team has taken several precautions against bias and carried out numerous cross-checks of its result. These include checks for the effect in the angular correlation of e+e– pairs in different regions of the energy distribution, and assuming different beam and target positions. The paper does not go into the details of systematic errors, for instance due to possible nuclear-modeling uncertainties, but Krasznahorkay says that, overall, the result is in “full agreement” with the results of the Monte Carlo simulations performed for the X17 decay.

The Atomki team with the apparatus used for the latest beryllium and helium results, which detects electron-positron pairs from the de-excitation of nuclei produced by firing protons at different targets. Credit: Atomki

While it cannot yet be ruled out, the existence of an X boson is not naively expected, say theorists. For one, such a particle would have to “know” about the distinction between up and down quarks and thus electroweak symmetry breaking. Being a vector boson, the X17 would constitute a new force. It could also be related to the dark-matter problem, write Krasznahorkay and co-workers, and could help resolve the discrepancy between measured and predicted values of the muon magnetic moment.

Last year, the NA64 collaboration at CERN reported results from a direct search for the X boson via the bremsstrahlung reaction e–Z → e–ZX, the absence of a signal placing the first exclusion limits on the X–e coupling in the range (1.3–4.2) × 10–4. “The Atomki anomaly could be an experimental effect, a nuclear-physics effect or something completely new,” comments NA64 spokesperson Sergei Gninenko. “Our results so far exclude only a fraction of the allowed parameter space for the X boson, so I’m really interested in seeing how this story, which is only just beginning, will unfold.” Last year, researchers used data from the BESIII experiment in China to search for direct X-boson production in electron–positron collisions and indirect production in J/ψ decays – finding no signal. Krasznahorkay and colleagues also point to the potential of beam-dump experiments such as PADME in Frascati, and to the upcoming DarkLight experiment at Jefferson Laboratory, which will search for 10–100 MeV dark photons.

I do not know of any inconsistencies in the experimental data that would indicate that it is an experimental effect

Jonathan Feng

Theorist Jonathan Feng of the University of California at Irvine, whose group proposed the X-boson hypothesis in 2016, says that the new 4He results from Atomki support the previous 8Be evidence of a new particle – particularly since the excess is observed at a slightly different e+e– opening angle in 4He (115°) than it is in 8Be (135°). “If it is an experimental error or some nuclear-physics effect, there is no reason for the excess to shift to different angles, but if it is a new particle, this is exactly what is expected,” says Feng. “I do not know of any inconsistencies in the experimental data that would indicate that it is an experimental effect.”
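Feng’s kinematic point is easy to reproduce. For a slow particle of mass m decaying to a roughly symmetric, ultra-relativistic e+e– pair, the opening angle θ obeys m ≈ 2E sin(θ/2), with E ≈ Q/2 the energy of each lepton and Q the energy released in the transition. In the sketch below the transition energies are approximate and purely illustrative:

import math

M_X = 17.0  # hypothesised X17 mass (MeV)

def opening_angle(q_mev):
    # e+e- opening angle (degrees) for a particle of mass M_X from a
    # transition releasing q_mev, assuming each lepton carries half
    # the energy and neglecting the electron mass.
    return 2.0 * math.degrees(math.asin(M_X / q_mev))

print(opening_angle(18.15))  # 8Be*: ~139 degrees
print(opening_angle(20.5))   # 4He:  ~112 degrees

The heavier transition gives the smaller angle, in qualitative agreement with the peaks reported near 135° and 115°.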

Data details

In 2017, theorists Gerald Miller at the University of Washington and Xilin Zhang at Ohio State concluded that, if the Atomki data are correct, the original 8Be excess cannot be explained by nuclear-physics modelling uncertainties. But they also wrote that a direct comparison to the e+e– data is not feasible due to “missing public information” about the experimental detector efficiency. “Tuning the normalisation of our results reduces the confidence level of the anomaly by at least one standard deviation,” says Miller. As for the latest Atomki result, the nuclear physics in 4He is more complicated than 8Be because two nuclear levels are involved, explains Miller, making it difficult to carry out an analysis analogous to the 8Be one. “For 4He there is also a background pair-production mechanism and interference effect that is not mentioned in the paper, much of which is devoted to the theory and other future experiments,” he says. “I think the authors would have been better served if they presented a fuller account of their data because, ultimately, this is an experimental issue. Confirming or refuting this discovery by future nuclear experiments would be extremely important. A monumental discovery could be possible.”

A monumental discovery could be possible

Gerald Miller

The Hungarian team is now planning on repeating the measurement with a new gamma-ray coincidence spectrometer at Atomki (see main image), which they say might help to distinguish between the vector and the pseudoscalar interpretation of the X17. Meanwhile, a project called New JEDI will enable an independent verification of the 8Be anomaly at the ARAMIS-SCALP facility (Orsay, France) during 2020, followed by direct searches by the same group for the existence of the X boson, in particular in other light quantum systems, at the GANIL-SPIRAL2 facility in Caen, France.

“Many people are sceptical that this is a new particle,” says Feng, who was himself doubtful at first. “But at this point, what we need are new ideas about what can cause this anomaly. The Atomki group has now found the effect in two different decays. It would be most helpful for other groups to step forward to confirm or refute their results.”

The post Rekindled Atomki anomaly merits closer scrutiny appeared first on CERN Courier.

News A large discrepancy in nuclear decay rates has received new experimental support, generating headlines about the possible existence of a fifth force. https://cerncourier.com/wp-content/uploads/2019/12/News-Atomki.jpeg
Rarest strange decay shrinks from sight https://cerncourier.com/a/rarest-strange-decay-shrinks-from-sight/ Fri, 29 Nov 2019 15:04:23 +0000 https://preview-courier.web.cern.ch/?p=85111 The decay of the K-short to two muons is sensitive to contributions from yet-to-be discovered particles that are too heavy to be observed directly.


For every trillion K0S, only five are expected to decay to two muons. Like the better-known Bs → μ+μ– decay, which was first observed jointly by LHCb and CMS in 2013, the decay rate is very sensitive to possible contributions from yet-to-be-discovered particles that are too heavy to be observed directly at the LHC, such as leptoquarks or supersymmetric partners. These particles could significantly enhance the decay rate, up to existing experimental limits, but could also suppress it via quantum interference with the Standard Model (SM) amplitude.

Despite the unprecedented K0S production rate at the LHC, searching for K0S → μ+μ– is challenging due to the low transverse momentum of the two muons, typically of a few hundred MeV/c. Though primarily designed for the study of heavy-flavour particles, LHCb’s unique ability to select low transverse-momentum muons in real time makes the search feasible. According to SM predictions, just two signal events are expected in the Run-2 data, potentially making this the rarest decay ever recorded.
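The quoted expectation is simple counting: with a SM branching fraction of about 5 × 10–12 (the “five per trillion” above), the expected yield is the number of K0S decaying within the acceptance, times that fraction, times the selection efficiency. In the sketch below only the branching fraction is a real number; the other inputs are hypothetical round values:

br_sm = 5e-12   # approximate SM branching fraction for K0S -> mu+ mu-
n_k0s = 1e13    # hypothetical number of K0S decays within acceptance
eff = 0.04      # hypothetical trigger x reconstruction efficiency

print(n_k0s * br_sm * eff)  # ~2 expected signal events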

The analysis uses two machine-learning tools: one to discriminate muons from pions, and another to discriminate signal candidates from the so-called combinatorial background that arises from coincidental decays. Additionally, a detailed and data-driven map of the detector material around the interaction point helps to reduce the “fixed-target” background caused by particles interacting with the detector material. A background of K0S → π+π– decays dominates the selection, and in the absence of a compelling signal, an upper limit to the branching fraction of 2.1 × 10–10 has been set at 90% confidence. This is approximately four times more stringent than the previous world-best limit, set by LHCb with Run-1 data. This result has implications for physics models with leptoquarks and some fine-tuned regions of the Minimal Supersymmetric SM.

The upgraded LHCb detector, scheduled to begin operating in 2021 after the present long shutdown of the LHC, will offer excellent opportunities to improve the precision of this search and eventually find a signal. In addition to the increased luminosity, the LHCb upgrade will have a full software trigger, which is expected to significantly improve the signal efficiency for K0S → μ+μ– and other decays with very soft final-state particles.

The post Rarest strange decay shrinks from sight appeared first on CERN Courier.

News The decay of the K-short to two muons is sensitive to contributions from yet-to-be discovered particles that are too heavy to be observed directly. https://cerncourier.com/wp-content/uploads/2019/11/CCNovDec19_News_LHCB_feature.jpg
CMS revisits rare and beautiful decays https://cerncourier.com/a/cms-revisits-rare-and-beautiful-decays/ Thu, 12 Sep 2019 08:15:37 +0000 https://preview-courier.web.cern.ch/?p=84312 For many years the search for its extremely rare decay to a μ+μ– pair was a holy grail of particle physics.

Two muons emerge from a Bs → μμ decay candidate

The Bs meson is a bound state of a strange quark and a beauty antiquark – as such it possesses both beauty and strangeness. For many years the search for its extremely rare decay to a μ+μ– pair was a holy grail of particle physics, because of its sensitivity to theories that extend the Standard Model (SM). The SM predicts the decay rate for Bs → μ+μ– to be only about 3.6 parts per billion (ppb). Its lighter cousin, the B0, which is made from a down quark and a beauty antiquark, has an even lower predicted branching fraction for decays to a μ+μ– pair of 0.1 ppb. If beyond-the-SM particles exist, however, the predictions could be modified by their presence, giving the decays sensitivity to new physics that rivals and might even exceed that of direct searches.

It took more than a quarter of a century of extensive effort to establish Bs → μ+μ–, and the first observation was presented in 2013, in a joint publication by the CMS and LHCb collaborations based on LHC Run 1 data. The same paper reported evidence for B0 → μ+μ– with a significance of three standard deviations; however, this signal has not subsequently been confirmed by CMS, LHCb or ATLAS analyses. A new CMS Run 2 analysis now looks set to bolster interest in these intriguing decays.

Fig. 1: diagram of probability contours.

The CMS collaboration has updated its 2013 analysis with higher centre-of-mass-energy Run 2 data from 2016, permitting an observation of Bs → μ+μ– with a significance of 5.6 standard deviations (figure 1). The results are consistent with the latest results from ATLAS and LHCb, and while no significant deviation from the SM is observed by any of the experiments, all three decay rates are found to lie slightly below the SM prediction. The slight deficit is not significant, but the trend is intriguing because it could be related to so-called flavour anomalies recently observed by the LHCb experiment in other rare decays of B mesons (CERN Courier May/June p9). This makes the new CMS measurement even more exciting. The new analysis showed no sign of B0 → μ+μ–, and a stringent 95% confidence limit of less than 0.36 ppb was set on its rate.

CMS also managed to measure the effective lifetime of the Bs meson using the several dozen Bs → μ+μ– decay events that were observed. The interest in measuring this lifetime is that, just as for the branching fraction, new physics might alter its value from the SM expectation. This measurement yielded a lifetime of about 1.7 ps, consistent with the SM. The measured CMS value is also consistent with the only other such lifetime measurement, performed by LHCb.

With three times more Run 2 data yet to be analysed by CMS, the next update – based on the full Run 1 and Run 2 datasets – may shed more light on this fascinating corner of physics, and move us closer to the ultimate goal, which is the observation of B0 → μ+μ– decays.

The post CMS revisits rare and beautiful decays appeared first on CERN Courier.

News For many years the search for its extremely rare decay to a μ+μ– pair was a holy grail of particle physics. https://cerncourier.com/wp-content/uploads/2019/09/CCSepOct19_ef_beauty2.jpg
Between desert and swampland https://cerncourier.com/a/between-desert-and-swampland/ Wed, 11 Sep 2019 13:08:05 +0000 https://preview-courier.web.cern.ch/?p=84339 Planck 2019 focused on the latest in beyond the Standard Model (SM) physics and ultraviolet completions of the SM within theories that unify the fundamental interactions.

Ferruccio Feruglio of INFN Padova

At the 22nd edition of the Planck conference series, which took place in Granada, Spain, from 3–7 June, 170 particle physicists and cosmologists discussed the latest in beyond the Standard Model (BSM) physics and ultraviolet completions of the SM within theories that unify the fundamental interactions.

Several speakers addressed the serious model-building restrictions in supersymmetry and Higgs compositeness that are imposed by the negative results of direct searches for BSM particles at ATLAS and CMS. Particular emphasis was put on the (extended) Higgs sector of the SM, where precision measurements might detect signals of BSM physics. Updates from LHCb and Belle on the flavour anomalies were also eagerly discussed, with proposed explanations including leptoquarks and additional U(1) gauge symmetries with exotic vector-like quarks. However, not all were convinced that the results signal BSM physics. On the cosmological side, delegates learned of the latest attempts to build models of WIMPs, axions, magnetic relics and dark radiation, which also include mechanisms for baryogenesis and inflation in the early universe.

Given the absence of new BSM particles so far at the LHC, theorists talk of a “desert” between the weak and Planck scales containing nothing but SM particles. Several speakers reported that phase transitions between non-trivial Higgs vacua could lead to violent phenomena in the early universe that might be tested by future gravitational-wave detectors. Within the inflationary universe these phenomena might also lead to the production of primordial black holes that could explain dark matter.

Discussions of ultraviolet (i.e. high-energy) completions of the SM encompassed the grand unification of fundamental interactions, the origin of neutrino masses, flavour symmetries and the so-called “swampland conjectures”, which characterise theories that might not be compatible with a consistent theory of quantum gravity. One might therefore hope that healthy signals of BSM physics will appear somewhere between the desert and the swampland.

Planck 2020 will be held on 8–12 June in Durham, UK.

The post Between desert and swampland appeared first on CERN Courier.

Meeting report Planck 2019 focused on the latest in beyond the Standard Model (SM) physics and ultraviolet completions of the SM within theories that unify the fundamental interactions. https://cerncourier.com/wp-content/uploads/2019/09/CCSepOct19_fn-desert.jpg
Muon g−2 collaboration prepares for first results https://cerncourier.com/a/muon-g%e2%88%922-collaboration-prepares-for-first-results/ Wed, 11 Sep 2019 08:40:35 +0000 https://preview-courier.web.cern.ch/?p=84352 Fermilab's E989 experiment aims to improve experimental errors by a factor of four.

The muon g−2 collaboration

The annual “g-2 physics week”, which took place on Elba Island in Italy from 27 May to 1 June, saw almost 100 physicists discuss the latest progress at the muon g−2 experiment at Fermilab. The muon magnetic anomaly, aμ, is one of the few cases where there is a hint of a discrepancy between a Standard Model (SM) prediction and an experimental measurement. Almost 20 years ago, in a sequence of increasingly precise measurements, the E821 collaboration at Brookhaven National Laboratory (BNL) determined aμ = (g–2)/2 with a relative precision of 0.54 parts per million (ppm), providing a rigorous test of the SM. Impressive as it was, the result was limited by statistical uncertainties.

A new muon g−2 experiment currently taking data at Fermilab, called E989, aims to improve the experimental error on aμ by a factor of four. The collaboration took its first dataset in 2018, integrating 40% more statistics than the BNL experiment, and is now coming to the end of a second run that will yield a combined dataset more than three times larger.

A thorough review of the many analysis efforts during the first data run has been conducted. The muon magnetic anomaly is determined from the ratio of the muon and proton precession frequencies in the same magnetic field. The ultimate aim of experiment E989 is to measure both of these frequencies with a precision of 0.1 ppm by employing techniques and expertise from particle-physics experimentation (straw tracking detectors and calorimetry), nuclear physics (nuclear magnetic resonance) and accelerator science. These frequencies are independently measured by several analysis groups with different methodologies and different susceptibilities to systematic effects.
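To set the scales involved: at the “magic” muon momentum of 3.09 GeV/c, where the electric-field contribution to the precession cancels, the spin turns relative to the momentum at the anomalous frequency ωa ≈ aμ eB/mμ. A back-of-the-envelope sketch, assuming the ring’s nominal 1.45 T field and approximate constants:

import math

e = 1.602176634e-19     # elementary charge (C)
m_mu = 1.883531627e-28  # muon mass (kg)
B = 1.45                # storage-ring field (T), nominal value
a_mu = 1.166e-3         # approximate muon anomaly

omega_a = a_mu * e * B / m_mu   # anomalous precession (rad/s)
print(omega_a / (2 * math.pi))  # ~2.29e5 Hz, i.e. ~229 kHz

Extracting this frequency – and the proton frequency that maps the field – to 0.1 ppm each is what the analysis effort described above is aiming at.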

A recent relative unblinding of a subset of the data with a statistical precision of 1.3 ppm showed excellent agreement across the analyses in both frequencies. The absolute values of the two frequencies are still subject to a ~25 ppm hardware blinding offset, so no physics conclusion can yet be drawn. But the exercise has shown that the collaboration is well on the way to publishing its first result with a precision better than E821 towards the end of the year.

The post Muon g−2 collaboration prepares for first results appeared first on CERN Courier.

Meeting report Fermilab's E989 experiment aims to improve experimental errors by a factor of four. https://cerncourier.com/wp-content/uploads/2019/09/Muon1-lores.jpg
Mexico hosts dynamic LHCP week https://cerncourier.com/a/mexico-hosts-dynamic-lhcp-week/ Thu, 11 Jul 2019 08:40:53 +0000 https://preview-courier.web.cern.ch?p=83623 Studies involving unusual signatures were popular at the Mexico conference.


The seventh edition of the Large Hadron Collider Physics (LHCP) conference took place in Puebla, Mexico, from 20 to 25 May, hosted by the Benemérita Universidad Autónoma de Puebla (BUAP). With almost 400 participants, the week involved dynamic discussions between experimentalists and theorists on an assortment of topics related to LHC research. These ranged from heavy-ion physics to precision measurements of the Standard Model (SM), including Higgs-sector constraints and searches for hints of physics beyond the SM such as supersymmetry and model-independent high-mass resonance searches.

Results from the wealth of LHC data collected at 13 TeV during Run 2 (from 2015–2018) are beginning to be published. The ATLAS and CMS collaborations presented new results in the search for supersymmetry, setting new limits on supersymmetric parameters. The latest CMS search for top squarks in events with two tau leptons in the final state excludes top-squark masses above 1 TeV for nearly massless neutralinos. The first ATLAS Run 2 search for tau-slepton production was also presented, excluding masses between 120 and 390 GeV for a massless neutralino. Both of these challenging analyses target events with a large amount of missing momentum, originating from the lightest supersymmetric particles and the neutrinos from the tau decays.

Studies involving unusual signatures were popular at the Mexico conference. Disappearing tracks, emerging jets, displaced vertices and out-of-time decays, which would each be indications of new processes or particles being present in the event, were all discussed. These signatures also provide a challenge for detector and algorithm designs, especially at the high-luminosity LHC (HL-LHC).

The recent observation of CP violation in charm quarks (CERN Courier May/June p7), published by the LHCb Collaboration in March, was presented. “Long awaited, finally observed!” was the statement from LHCb spokesperson Giovanni Passaleva. This result, which shows that charm quarks and charm antiquarks decay at different rates, opens up new avenues of investigation for testing the SM.

The final two days of the conference featured open discussions on recent progress in the upgrades of the LHC and the detectors for the HL-LHC, and on various proposals and design challenges for future colliders. The HL-LHC will be a very challenging environment in which to distinguish particles of interest, as the average number of proton–proton collisions will increase from around 50 to about 200 each time the bunches in the LHC beams cross. For future colliders, circular and linear, delegates agreed that the community must better communicate the motivations and goals for such future machines with governments and the public.

The next edition of the conference will take place in Paris in 2020. Though also taking place during the current long shutdown, many new results with the full LHC Run-2 statistics will be presented, as well as progress on preparing the detectors and the accelerator for Run 3.

The post Mexico hosts dynamic LHCP week appeared first on CERN Courier.

Meeting report Studies involving unusual signatures were popular at the Mexico conference. https://cerncourier.com/wp-content/uploads/2019/07/LHCP-conference.jpg
Heavy ions and hidden sectors https://cerncourier.com/a/heavy-ions-and-hidden-sectors/ Thu, 11 Jul 2019 08:34:28 +0000 https://preview-courier.web.cern.ch?p=83621 The meeting was inspired by several recent proposals to take advantage of the unique environment of heavy-ion collisions at the LHC to search for new phenomena.

The first dedicated workshop on searches for new physics in heavy-ion collisions took place at the Université Catholique de Louvain, Belgium, on 4–5 December 2018. The meeting was inspired by several recent proposals to take advantage of the unique environment of heavy-ion collisions at the LHC to search for new phenomena. A key topic was the exploration of “hidden” or “dark” sectors that couple only feebly to ordinary matter and could explain the dark-matter puzzle, neutrino masses or the matter–antimatter asymmetry of the universe. This is currently a hot topic in the search for physics beyond the Standard Model that has gained increasing interest in the heavy-ion community. The purpose of this workshop was to spark ideas and initiate exchanges between theorists, experimentalists and accelerator physicists.

A key question was how to optimise the choice of ions and the beam parameters for new-physics searches without compromising the study of the quark–gluon plasma

Discussions at the workshop first focused on particle production mechanisms unique to heavy-ion collisions. Simon Knapen of the Institute for Advanced Study in Princeton and Oliver Gould of the University of Helsinki emphasised the strongly enhanced production cross-sections for axion-like particles and magnetic monopoles in ultra-peripheral heavy-ion collisions compared to proton–proton collisions. This enhancement is due to the collective action of up to 82 charges (for lead ions), which generates the strongest electromagnetic fields ever produced in the laboratory as the heavy ions pass each other at ultra-relativistic energies. David d’Enterria of CERN discussed the experimental potential to exploit such unique opportunities in searches for new physics by using the LHC as a “photon–photon collider”. In contrast to these studies of ultra-peripheral collisions, Glennys Farrar of New York University motivated interest in head-on collisions: thermal production in the quark–gluon plasma could be used to search for non-conventional dark-matter candidates such as “sexaquarks”.
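The size of the enhancement is easy to estimate: each ion radiates quasi-real photons with a flux scaling as Z², so photon–photon cross-sections grow roughly as Z⁴ relative to the proton–proton case. A one-line sketch (neglecting form factors, which temper the gain at high photon energies):

Z = 82        # lead
print(Z**4)   # ~4.5e7: naive gamma-gamma enhancement over Z = 1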

Jan Hajer of the Université Catholique de Louvain stressed that not only the production mechanisms but also the backgrounds are qualitatively different in heavy-ion collisions. This can, for example, allow searches for long-lived particles in parameter regions that are hard to probe in proton collisions due to limitations related to the high pile-up during future LHC runs.

A key question that emerged from the workshop was how to optimise the choice of ions and the beam parameters for new-physics searches without compromising the study of the quark–gluon plasma. The discussion was extremely helpful for elucidating the hard engineering restrictions within which any novel proposals must fit, such as the capacity of the injectors and the beam lifetime.

The workshop was very successful and triggered many discussions, including the proposal to submit an input for the update of the European Strategy for Particle Physics and for a follow-up event in 2020. The topic is still young, and we are very much looking forward to input from the wider community.

The post Heavy ions and hidden sectors appeared first on CERN Courier.

Meeting report The meeting was inspired by several recent proposals to take advantage of the unique environment of heavy-ion collisions at the LHC to search for new phenomena. https://cerncourier.com/wp-content/uploads/2019/07/Heavy_Ions_and_Hidden_Sectors.jpg
Topological avatars of new physics https://cerncourier.com/a/topological-avatars-of-new-physics/ Thu, 11 Jul 2019 08:16:55 +0000 https://preview-courier.web.cern.ch?p=83603 A field configuration is topologically non-trivial if it exhibits the topology of a “mathematical knot” in some space, real or otherwise.


Topologically non-trivial solutions of quantum field theory have always been a theoretically “elegant” subject, covering all sorts of interesting and physically relevant field configurations, such as magnetic monopoles, sphalerons and black holes. These objects have played an important role in shaping quantum field theories and have provided important physical insights into cosmology, particle colliders and condensed-matter physics.

In layman’s terms, a field configuration is topologically non-trivial if it exhibits the topology of a “mathematical knot” in some space, real or otherwise. A mathematical knot (or a higher-dimensional generalisation such as a Möbius strip) is not like a regular knot in a piece of string: it has no ends and cannot be continuously deformed into a topologically trivial configuration like a circle or a sphere.

One of the most conceptually simple non-trivial configurations arises in the classification of solitons, which are finite-energy extended configurations of a scalar field behaving like the Higgs field. Among the various finite-energy classical solutions for the Higgs field, there are some that cannot be continuously deformed into the vacuum without an infinite cost in energy, and are therefore “stable”. For finite-energy configurations that are spherically symmetric, the Higgs field must map smoothly onto its vacuum solution at the boundary of space.

The ’t Hooft–Polyakov monopole, which is predicted to exist in grand unified theories, is one such finite-energy topologically non-trivial solitonic configuration. The black hole is an example from general relativity of a singular space–time configuration with a non-trivial space–time topology. The curvature of space–time blows up in the singularity at the centre, and this cannot be removed either by continuous deformations or by coordinate changes: its nature is topological.

Such configurations constituted the main theme of a recent Royal Society Hooke meeting “Topological avatars of new physics”, which took place in London from 4–5 March. The meeting focused on theoretical modelling and experimental searches for topologically important solutions of relativistic quantum field theories in particle physics, general relativity and cosmology, and quantum gravity. Of particular interest were topological objects that could potentially be detectable at the Large Hadron Collider (LHC), or at future colliders.

Gerard ’t Hooft opened the scientific proceedings with an inspiring talk on formulating a black hole in a way consistent with quantum mechanics and time-reversal symmetry, before Steven Giddings described his equally interesting proposal. Another highlight was Nicholas Manton’s talk on the inevitability of topologically non-trivial unstable configurations of the Higgs field – “sphalerons” – in the Standard Model. Henry Tye said sphalerons can in principle be produced at the (upgraded) LHC or future linear colliders. A contradictory view was taken by Sergei Demidov, who predicted that their production will be strongly suppressed at colliders.

One of the exemplars of topological physics receiving significant experimental attention is the magnetic monopole

A major part of the workshop was devoted to monopoles. The theoretical framework of light monopoles within the Standard Model, possibly producible at the LHC, was presented by Yong Min Cho. These “electroweak” monopoles have twice the magnetic charge of Dirac monopoles. Like the ’t Hooft–Polyakov monopole, but unlike the Dirac monopole, they are solitonic structures, with the Higgs field playing a crucial role. Arttu Rajantie considered relatively unsuppressed thermal production of generic monopole–antimonopole pairs in the presence of the extreme high temperatures and strong magnetic fields of heavy-ion collisions at the LHC. David Tong discussed the ambiguities in the gauge group of the Standard Model, and how these could affect monopoles that are admissible solutions of such gauge field theories. Importantly, such solutions give rise to potentially observable phenomena at the LHC and at future colliders. Ana Achúcarro and Tanmay Vachaspati reported on fascinating computer simulations of monopole scattering, as well as numerical studies of cosmic strings and other topologically non-trivial defects of relevance to cosmology.

One of the exemplars of topological physics currently receiving significant experimental attention is the magnetic monopole. The MoEDAL experiment at the LHC has reported world-leading limits on multiply magnetically charged monopoles, and Albert de Roeck gave a wide-ranging report on the search for the monopole and other highly-ionising particles, with Laura Patrizii and Adrian Bevan also reporting on these searches and the machine-learning techniques employed in them.

Supersymmetric scenarios can consistently accommodate all the aforementioned topologically non-trivial field theory configurations. Doubtless, as John Ellis described, the story of the search for this beautiful – but as yet hypothetical – new symmetry of nature is a long way from being over. Last but not least were two inspiring talks by Juan Garcia Bellido and Marc Kamionkowski on the role of primordial black holes as dark matter, and their potential detection by means of gravitational waves.

The workshop ended with a vivid round-table discussion of the importance of a new ~100 TeV collider. The aim of this machine is to explore beyond the historic watershed represented by the discovery of the Higgs boson, and to move us closer to understanding the origin of elementary particles, and indeed space–time itself. This Hooke workshop clearly demonstrated the importance of topological avatars of new physics to such a project.

The post Topological avatars of new physics appeared first on CERN Courier.

Meeting report A field configuration is topologically non-trivial if it exhibits the topology of a “mathematical knot” in some space, real or otherwise. https://cerncourier.com/wp-content/uploads/2019/07/CCJulAug19_FN-topological-1.jpg
The difficult work begins https://cerncourier.com/a/the-difficult-work-begins/ Wed, 22 May 2019 08:34:55 +0000 https://preview-courier.web.cern.ch?p=83921 The open symposium of the European Strategy for Particle Physics revealed a vibrant field in flux as it grapples with the next big questions.

The open symposium of the European Strategy for Particle Physics (ESPP) update, which drew to a close last week in Granada, Spain, was a moment for physicists to take stock of their field’s status and future. A week of high-quality presentations and focused discussions showed how far things have moved on since the previous strategy update concluded in 2013. In the past few years the LHC has established the existence of the Higgs boson and so far suggested that there are no new particles beyond the SM at the electroweak scale. Spectacular progress has been made with neutrinos, dark-matter searches, flavour and electroweak physics, and gravitational-wave astronomy is beginning to take off. The deepest puzzles of the standard models of particle physics and cosmology remain at large, however, and large colliders are one of the best tools to address them.

Recommendations from the ESPP are due early next year. Dominating discussions at the open symposium last week was which project should succeed the LHC after its operations cease in the 2030s. The decision has significant consequences for the next generation of particle physicists, not just in Europe but internationally. Perspectives from Asia and the Americas, in addition to national views and inputs from the astroparticle– and nuclear-physics communities, brought into sharp focus the global nature of modern high-energy physics and the need for greater coordination at all levels.

The 130 or so talks and discussion sessions in Granada revealed a community united in its desire for a post-LHC collider, but less so in its choice of that collider’s form. Enormous efforts have gone into weighing up the physics reach of the various projects under study, a task complicated by the complexity of future accelerator technologies, detectors and analyses. Stimulating some heated exchanges, the ESPP saw the International Linear Collider (ILC) in Japan, a Compact Linear Collider (CLIC) or future circular electron–positron collider (FCC-ee) at CERN and a Circular Electron Positron Collider in China (CEPC) pitted against each other and against expectations from the high-luminosity LHC in terms of their potential in key areas such as Higgs physics.

Summary sessions

Summing up the situation for beyond-SM (BSM) physics, Gian Giudice of CERN said that the remaining BSM-physics space is “huge”, and pointed to four big questions for colliders: to what extent can we tell whether the Higgs is fundamental or composite? Are there new interactions or new particles around or above the electroweak scale? What cases of thermal relic WIMPs are still unprobed and can be fully covered by future collider searches? And to what extent can current or future accelerators probe feebly interacting sectors?

Neutrinos, the least well known of all the SM particles, were the subject of numerous presentations. The ESPP audience was reminded that neutrino masses, as established by neutrino oscillations, are the first particle-physics evidence for BSM phenomena. A vibrant programme is under way to fully measure the neutrino mixing matrix and in particular the neutrino mass ordering and CP-violation phase. Other experiments are probing the neutrino’s absolute mass scale and testing whether neutrinos are Dirac or Majorana particles. Along with gravitational waves, neutrinos play a powerful role in multimessenger astronomy.
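For orientation, the workhorse behind oscillation measurements is the two-flavour vacuum formula P = sin²(2θ) sin²(1.27 Δm²[eV²] L[km]/E[GeV]). A minimal sketch; the atmospheric-sector parameters, 1300 km baseline and 2.5 GeV energy are illustrative values, not any particular experiment’s configuration:

import math

def p_oscillation(sin2_2theta, dm2_ev2, l_km, e_gev):
    # Two-flavour vacuum oscillation probability.
    return sin2_2theta * math.sin(1.267 * dm2_ev2 * l_km / e_gev) ** 2

print(p_oscillation(1.0, 2.5e-3, 1300.0, 2.5))  # ~0.99, near maximal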

Around a fifth of the 160 input documents to the ESPP were linked to flavour physics, covering topics such as lepton-flavour universality, electric-dipole moments and heavy-flavour studies.

Flavour physics is crucial for BSM searches since it is potentially sensitive to effects at scales as high as 10⁵ TeV, said Antonio Zoccoli of INFN in his summary. There is also much complementarity between low-energy physics, the high-energy frontier and searches for feebly interacting particles, he said. Oddities in b-decays seen by the LHCb collaboration are of particular interest. “Flavour is a major legacy of LHC,” Zoccoli concluded. “Charged hadron particle-ID should be mandatory for a full physics programme at future colliders.”

Summarising ESPP sessions on dark-matter and dark-sector physics, Shoji Asai of the University of Tokyo drew attention to a shift in sociology that is taking place. In the old view, dark-matter solutions arose as a byproduct of “top-down” approaches (such as supersymmetry) to solve the SM’s problems. The “new sociology” holds that dark matter needs an explanation of its own, and it’s to be considered a bonus if such a solution also elucidates important issues such as the strong-CP problem or baryogenesis. Among the “big questions” identified in this sector at the ESPP update were: What are the main differences between light hidden-sector dark matter and WIMPs? How broad is the parameter space for the QCD axion? How do we compare the results of different experiments in a more model-independent way? And how will direct and indirect dark-matter detection experiments inform/guide accelerator searches and vice versa? Asai said that consensus has emerged on the need for more coordination and support between accelerator-based direct detection and indirect detection dark-sector searches, as exemplified by the new European Center for AstroParticle Theory.

In summarising interests in the strong sector, Jorgen D’Hondt of Vrije Universiteit Brussel listed the many dedicated experiments in this area and the open questions identified at the ESPP symposium: “What are the experimental and theoretical prerequisites to reach an adequate precision of perturbative and non-perturbative QCD predictions at the highest energies? What can be learned from beams-on-target experiments at current and potential future accelerators? How to probe the quark–gluon plasma equation of state and to establish whether there is a first-order phase transition at high baryon density? What is known about the make-up of the proton (mass, radius, spin, etc) and how to extract it? And what is the role of strong interactions at very low and very high (up to astrophysical) energies?”

Electroweak sparks

Of all the scientific themes of the week, electroweak physics generated the most lively discussions, especially concerning how well the Higgs boson’s couplings to fermions, gauge bosons and to itself can be probed at current and future colliders. Summary speaker Beate Heinemann of DESY cautioned that such quantitative estimates should be treated with a degree of flexibility at this time, though a few things stand out: one is the impressive estimated performance from the HL-LHC in the next 15 or so years; another is that a long-term physics programme based on successive machines in a 100 km-circumference tunnel offers the largest overall physics reach on the Higgs boson and other key parameters. The long timescales required to master the technology for the next hadron collider were well noted. There is broad agreement that the next major collider after the LHC should collide electrons and positrons to fully explore the Higgs boson and make precision measurements of other electroweak parameters that are sensitive to phenomena at higher energy scales. Whether that machine is circular or linear, and built in Asia or Europe, are the billion-dollar questions facing the community now.

The closer involvement of particle physics with astroparticle physics, in particular following the discovery of gravitational waves, was the running theme of the open symposium. It was argued that, in terms of technology, next-generation gravitational-wave detectors such as the Einstein Telescope are essentially “accelerators without beams” and that CERN’s expertise in vacuum and cryogenic technologies (a result of the lab’s continual pursuit and execution of big-collider projects) would help to make such facilities a reality.

The closing discussion of the symposium offered a final hour for physicists to air their views, many of which were met with applause. Proponents of circular machines highlighted the high flexibility and exploratory potential of projects such as FCC-ee, pointing out that it would serve as an electroweak as well as a Higgs factory. Linear-minded participants cited factors such as the extendable nature of linacs, and the independence of their tunnels from a subsequent hadron collider. For others, the priority for CERN should be to enter negotiations as soon as possible for a 100 km tunnel in the Geneva region, buying time to decide which physics option should be installed. Warm applause followed a remark that CERN decides for itself what its next project should be, without relying on other labs. But there were reminders from others that high-energy physics is an international field and that, in times of scarce resources, all options should be considered.

The high-energy physics community has risen to the occasion of the ESPP update. New thinking, from basic theory to instrumentation, computing, analysis and global organisation, is clearly required to sustain the recent rate of progress. Now that the open symposium is over, the European Strategy Group (ESG) will start to prepare a briefing book. Further input can be submitted to the strategy secretariat during the next months, and at a special session organised by the European Committee for Future Accelerators on 14 July 2019 during the European Physical Society Conference on High Energy Physics in Ghent, Belgium. An ESG drafting session will take place on 20–24 January 2020 in Bad Honnef, Germany, and the update of the ESPP is due to be completed and approved by the CERN Council in May 2020.

The post The difficult work begins appeared first on CERN Courier.

News The open symposium of the European Strategy for Particle Physics revealed a vibrant field in flux as it grapples with the next big questions. https://cerncourier.com/wp-content/uploads/2019/07/Summary-talks-IMG_8921_main.jpg
Addressing the outstanding questions https://cerncourier.com/a/addressing-the-outstanding-questions/ Tue, 14 May 2019 08:08:44 +0000 https://preview-courier.web.cern.ch?p=83892 At the European Strategy for Particle Physics, talks covered key questions in the field, and the accelerator, detector and computing technologies necessary to tackle them.

The success of the Standard Model (SM) in describing elementary particles and their interactions is beyond doubt. Yet, as an all-encompassing theory of nature, it falls short. Why are the fermions arranged into three neat families? Why do neutrinos have a vanishingly small but non-zero mass? Why does the Higgs boson that was discovered fit the simplest “toy model” of itself? And what lies beneath the SM’s 26 free parameters? Similarly profound questions persist in the universe at large: the mechanism of inflation; the matter–antimatter asymmetry; and the nature of dark energy and dark matter.

Surveying outstanding questions in particle physics during the opening session of the update of the European Strategy for Particle Physics (ESPP) on Monday, theorist Pilar Hernández of the University of Valencia discussed the SM’s unique weirdness. Quoting Newton’s assertion “that truth is ever to be found in simplicity, and not in the multiplicity and confusion of things”, she argued that a deeper theory is needed to solve the model’s many puzzles. “At some energy scale the SM stops making sense, so there is a cut off,” she stated. “The question is where?”

This known unknown has occupied theorists ever since the SM came into existence. If it is assumed that the natural cut-off is the Planck scale, some 15 orders of magnitude above the energies at the LHC, where gravity becomes relevant to the quantum world, then fine tuning is necessary to explain why the Higgs boson (which generates mass via its interactions) is so light. Traditional theoretical solutions to this hierarchy problem – such as supersymmetry or large extra dimensions – imply the existence of new phenomena at scales higher than the mass of the Higgs boson. While initial results from the LHC severely constrain the most natural parameter spaces, the 10–100 TeV region is still an interesting scale to explore, says Hernández. At the same time, continues Hernández, there is a shift to more “bottom-up, rather than top-down”, approaches to beyond-SM (BSM) physics. “Particle physics could be heading to crisis or revolution. New BSM avenues focus on solving open problems such as the flavour puzzle, the origin of neutrino masses and the baryon asymmetry at lower scales.”

Introducing a “motivational toolkit” to plough the new territories ahead, Hernández named targets such as axion-like and long-lived particles, and the search for connections between the SM’s various puzzles. She noted in particular that 23 of the 26 free parameters of the SM are related in one way or another to the Higgs boson. “If we are looking for the suspect that could be hiding some secret, obviously the Higgs is the one!”

Linear versus circular

The accelerator, detector and computing technology needed for future fundamental exploration was the main focus of the scientific plenary session on day one of the ESPP update. Reviewing Higgs factory programmes, Vladimir Shiltsev, head of Fermilab’s Accelerator Physics Center, weighed up the pros and cons of linear versus circular machines. The former includes the International Linear Collider (ILC) and the Compact Linear Collider (CLIC); the latter a future circular electron–positron collider at CERN (FCC-ee) and the Circular Electron Positron Collider in China (CEPC). All need a high luminosity at the Higgs energy scale.

Linear colliders, said Shiltsev, are based on mature designs and organisation, are expandable to higher energies, and draw a wall-plug power similar to that of the LHC. On the other hand, they face potential challenges linked to their luminosity spectrum and beam current. Circular Higgs factories are also based on mature technology, with a strong global collaboration in the case of FCC. They offer a higher luminosity and more interaction points than linear options but require strategic R&D into high-efficiency RF sources and superconducting cavities, said Shiltsev. He also described a potential muon collider with a centre-of-mass energy of 126 GeV, which could be realised in a machine as short as 10 km. Although the cost would be relatively low, he said, the technology is not yet ready.

Coffee break at the open symposium of the European Strategy for Particle Physics.

For energy-frontier colliders, the three current options – CERN’s HE-LHC (27 TeV) and FCC-hh (100 TeV), and China’s SppC (75 TeV) – demand high-field superconducting dipole magnets. These machines also present challenges such as how to deal with extreme levels of synchrotron radiation, collimation, injection and the overall machine design and energy efficiency. In a talk about the state-of-the-art and challenges in accelerator technology, Akira Yamamoto of CERN/KEK argued that, while a lepton collider could begin construction in the next few years, the dipoles necessary for a hadron collider might take 10 to 15 years of R&D before construction could start. There are natural constraints in such advanced-magnet development regardless of budget and manpower, he remarked.

Concerning more futuristic acceleration technologies based on plasma wakefields, which offer accelerating gradients roughly a factor of 1000 higher than today’s RF systems, impressive results have been achieved recently at facilities such as BELLA at Berkeley and AWAKE at CERN. Responding to a question about when these technologies might supersede current ones, Shiltsev said: “Hopefully 20–30 years from now we should be able to know how many thousands of TeV will be possible by the end of the century.”
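The arithmetic behind the excitement is simple: at conventional RF gradients of order 100 MV/m, a 1 TeV linac needs kilometres of active structure, whereas plasma gradients of order 100 GV/m would in principle shrink that to metres. The numbers below are illustrative only; staging, efficiency and beam quality are the real obstacles:

target_gev = 1000.0      # 1 TeV beam energy, illustrative
rf_gradient = 0.1        # GeV/m (~100 MV/m, conventional RF)
plasma_gradient = 100.0  # GeV/m (~100 GV/m, plasma wakefield)

print(target_gev / rf_gradient)      # ~10000 m of RF structure
print(target_gev / plasma_gradient)  # ~10 m of plasma stages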

Recognising detectors and computing

An energy-frontier hadron collider would produce radiation environments that current detectors cannot deal with, said Francesco Forti of INFN and the University of Pisa in his talk about the technological challenges of particle-physics experiments. Another difficulty for detectors is how to handle non-standard physics signals, such as long-lived particles and monopoles. Like accelerators, detectors require long time scales – it was the very early 1990s when the first LHC detector CDRs were written. From colliders to fixed-target to astrophysics experiments, detectors in high-energy physics face a huge variety of operating conditions and employ technologies that are often deeply entwined with developments in industry. The environmental credentials of detectors are also increasingly in the spotlight.

The focus of detector R&D should follow a “70–20–10” model, whereby 70% of efforts go into current detectors, 20% into future detectors and 10% into blue-sky R&D, argued Forti. Given that detector expertise is distributed among many institutions, the field also needs solid coordination. Forti cited CERN’s “RD” projects in diamond detectors, silicon radiation-hard devices, micro-pattern gas detectors and pixel readout chips for ATLAS and CMS as good examples of coordination towards common goals. Finally, he argued strongly for greater consideration of the “human factor”, stating that the current career model “just doesn’t work very well”. Your average particle physicist cannot be expert and innovative simultaneously in analysis, detectors, computing, teaching, outreach and other areas, he reasoned. “Career opportunities for detector physicists must be greatly strengthened and kept open in a systematic way,” he said. “Invest in the people and in the murky future.”

Computing for high-energy physics faces similar challenges. “There is an increasing gap between early-career physicists and the profile needed to program new architectures, such as greater parallelisation,” said Simone Campana of CERN and the HEP software foundation in a presentation about future computing challenges. “We should recognise the efforts of those who specialise in software because they can really change things like the speed of analyses and simulations.”

In terms of data processing, the HL-LHC presents a particular challenge. DUNE, FAIR, BELLE II and other experiments will also create massive data samples. Then there is the generation of Monte Carlo samples. “Computing resources in HEP will be more constrained in the future,” said Campana. “We enter a regime where existing projects are entering a challenging phase, and many new projects are competing for resources – not just in HEP but in other sciences, too.” At the same time, the rate of advances in hardware performance has slowed in recent years, encouraging the community to adapt to take advantage of developments such as GPUs, high-performance computing and commercial cloud services.

The HEP Software Foundation released a community white paper in 2018 setting out the radical changes in computing and software – not just for processing but also for data storage and management – required to ensure the success of the LHC and other high-energy physics experiments into the 2020s.

Closing out

Closer examination of linear and circular colliders took place during subsequent parallel sessions on the first day of the ESPP update. Dark matter, flavour physics and electroweak and Higgs measurements were the other parallel themes. A final discussion session focusing on the capability of future machines for precision Higgs physics generated particularly lively exchanges between participants. It illuminated both the immensity of efforts to evaluate the physics reach of the high-luminosity LHC and future colliders, and the unenviable task faced by ESPP committees in deciding which post-LHC project is best for the field. It’s a point summed up well in the opening address by the chair of the ESPP strategy secretariat, Halina Abramowicz: “This is a very strange symposium. Normally we discuss results at conferences, but here we are discussing future results.”

Pushing the limits on supersymmetry


A report from the ATLAS experiment

Supersymmetry (SUSY) introduces a new fermion–boson symmetry that gives rise to supersymmetric “partners” of the Standard Model (SM) particles, and “naturally” leads to a light Higgs boson with mass close to that of the W and Z. SUSY partners that are particularly relevant in these “natural SUSY” scenarios are the top and bottom squarks, as well as the SUSY partners of the weak SM bosons, the neutralinos and charginos.

Despite the theory’s many appealing features, searches for SUSY at the LHC and elsewhere have so far yielded only exclusion limits. With LHC Run 2 completed at the end of 2018, the ATLAS experiment has recorded 139 fb–1 of physics-quality proton–proton collisions at a centre-of-mass energy of 13 TeV. Three recent ATLAS SUSY searches highlight the significant increase in sensitivity offered by this dataset.

The first search took advantage of refinements in b-tagging to search for light bottom squarks decaying into bottom quarks, Higgs bosons and the lightest SUSY partner, which is assumed to be invisible and stable (a candidate for dark matter). The data agree with the SM and lead to significantly improved constraints, with bottom squark masses now excluded up to 1.5 TeV. 

If the accessible SUSY particles can only be produced via electroweak processes, the resulting low production cross sections present a challenge. The second search focuses on such electroweak SUSY signatures with two charged leptons and a significant amount of missing momentum carried away by a pair of the lightest SUSY partners. The current search places strong constraints on SUSY models with light charginos and more than doubles the sensitivity of the previous analysis (figure 1).

A third recent analysis considered less conventional signatures. Top squarks – the bosonic SUSY partners of the top quark – may evade detection if they have a long lifetime and decay at macroscopic distances from the collision point. This search targeted SUSY particles decaying to a quark and a muon, looking primarily for long-lived top squarks that decay several millimetres into the detector volume. The observed results are consistent with the background-only expectation.

These analyses represent just the beginning of a large programme of SUSY searches using the entirety of the Run-2 dataset. With a rich signature space left to explore, there remains plenty of room for discovery in mining the riches from the LHC.

Boosting searches for fourth-generation quarks


A report from the CMS experiment

Ever since the 1970s, when the third generation of quarks and leptons began to emerge experimentally, physicists have asked if further generations await discovery. One of the first key results from the Large Electron–Positron Collider 30 years ago provided evidence to the contrary, showing that there are only three generations of light neutrinos. The discovery of the Higgs boson in 2012 added a further wrinkle to the story: many theorists believe that the mass of the Higgs boson is unnaturally small if there are additional generations of quarks heavier than the top quark. But a loophole arises if the new heavy quarks do not interact with the Higgs field in the same way as regular quarks. The search for new heavy fourth-generation quarks – denoted T – is therefore the subject of active research at the LHC today.

CMS researchers have recently completed a search for such “vector-like” quarks using a new machine-learning method that exploits special relativity in a novel way. If the new T quarks exist, they are expected to decay to a quark and a W, Z or Higgs boson. As top quarks and W/Z/H bosons decay themselves, production of a T quark–antiquark pair could lead to dozens of different final states. While most previous searches focused on a handful of channels at most, this new analysis is able to search for 126 different possibilities at once.

The key to classifying all the various final states is the ability to identify high-energy top quarks, Higgs bosons, and W and Z bosons that decay into jets of particles recorded by the detector. In the reference frame of the CMS detector, these particles produce wide jets that all look alike, but things look very different in a frame of reference in which the initial particle (a W, Z or H boson, or a top quark) is at rest. For example, in the centre-of-mass frame of a Higgs boson, it would appear as two well-collimated back-to-back jets of particles, whereas in the reference frame of the CMS detector the jets are no longer back-to-back and may indeed be difficult to identify as separate at all. This feature, based on special relativity, tells us how to distinguish “fat” jets originating from different initial particles.
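The sketch below illustrates the underlying frame transformation – a generic Lorentz boost in NumPy, with illustrative variable names and toy inputs rather than CMS analysis code. Constituent four-vectors are boosted into the rest frame of a hypothesised parent particle, where a genuine two-body decay appears as two back-to-back sprays:

```python
import numpy as np

def boost_to_rest_frame(constituents, parent):
    """Boost constituent four-vectors (E, px, py, pz) into the rest frame
    of a hypothesised parent particle (e.g. a Higgs-boson candidate)."""
    E, p = parent[0], parent[1:]
    beta = p / E                         # parent velocity in the lab frame
    b2 = beta @ beta
    gamma = 1.0 / np.sqrt(1.0 - b2)
    boosted = []
    for q in constituents:
        q0, qv = q[0], q[1:]
        bq = beta @ qv                   # component of momentum along beta
        e_new = gamma * (q0 - bq)
        p_new = qv + ((gamma - 1.0) * bq / b2 - gamma * q0) * beta
        boosted.append(np.concatenate(([e_new], p_new)))
    return np.array(boosted)

# Toy example: two constituents of a fat jet, evaluated under the
# hypothesis that their sum is the four-momentum of a single parent.
constituents = np.array([[55.0, 30.0, 20.0, 38.0],
                         [70.0, -25.0, 40.0, 50.0]])
parent = constituents.sum(axis=0)
print(boost_to_rest_frame(constituents, parent))
```

Under the correct hypothesis the boosted constituents emerge back-to-back; quantities computed in several such hypothetical frames are what feed the neural network described next.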

Modern machine-learning techniques were used to train a deep neural-network classification algorithm using simulations of the expected particle decays. Several dozen properties of the jets were calculated in different hypothetical reference frames, and fed to the network, which classifies the original fat jets as coming from either top quarks, H, W or Z bosons, b quarks, light quarks, or gluons. Each event is then classified according to how many of each jet type there are in the event. The number of observed events in each category was then compared to the predicted background yield: an excess could indicate T-quark pair production.

CMS found no evidence for T-quark pair production in the 2016 data, and has excluded T-quark masses up to 1.4 TeV (figure 1). The collaboration is working on new ideas to improve the classification method and extend the search to higher masses using the four-times-larger dataset recorded in 2017–2018.

In it for the long haul

Nima Arkani-Hamed

How do you view the status of particle physics?

There has never been a better time to be a physicist. The questions on the table today are not about this-or-that detail, but profound ones about the very structure of the laws of nature. The ancients could (and did) wonder about the nature of space and time and the vastness of the cosmos, but the job of a professional scientist isn’t to gape in awe at grand, vague questions – it is to work on the next question. Having ploughed through all the “easier” questions for four centuries, these very deep questions finally confront us: what are space and time? What is the origin and fate of our enormous universe? We are extremely fortunate to live in the era when human beings first get to meaningfully attack these questions. I just wish I could adjust when I was born so that I could be starting as a grad student today! But not everybody shares my enthusiasm. There is cognitive dissonance. Some people are walking around with their heads hanging low, complaining about being disappointed or even depressed that we’ve “only discovered the Higgs and nothing else”.

So who is right?

It boils down to what you think particle physics is really about, and what motivates you to get into this business. One view is that particle physics is the study of the building blocks of matter, in which “new physics” means “new particles”. This is certainly the picture of the 1960s leading to the development of the Standard Model, but it’s not what drew me to the subject. To me, “particle physics” is the study of the fundamental laws of nature, governed by the still mysterious union of space–time and quantum mechanics. Indeed, from the deepest theoretical perspective, the very definition of what a particle is invokes both quantum mechanics and relativity in a crucial way. So if the biggest excitement for you is a cross-section plot with a huge bump in it, possibly with a ticket to Stockholm attached, then, after the discovery of the Higgs, it makes perfect sense to take your ball and go home, since we can make no guarantees of this sort whatsoever. We’re in this business for the long haul of decades and centuries, and if you don’t have the stomach for it, you’d better do something else with your life!

Isn’t the Standard Model a perfect example of the scientific method?

Sure, but part of the reason for the rapid progress in the 1960s is that the intellectual structure of relativity and quantum mechanics was already sitting there to be explored and filled in. But these more revolutionary discoveries took much longer, involving a wide range of theoretical and experimental results far beyond “bump plots”. So “new physics” is much more deeply about “new phenomena” and “new principles”. The discovery of the Higgs particle – especially with nothing else accompanying it so far – is unlike anything we have seen in any state of nature, and is profoundly “new physics” in this sense. The same is true of the other dramatic experimental discovery in the past few decades: that of the accelerating universe. Both discoveries are easily accommodated in our equations, but theoretical attempts to compute the vacuum energy and the scale of the Higgs mass pose gigantic, and perhaps interrelated, theoretical challenges. While we continue to scratch our heads as theorists, the most important path forward for experimentalists is completely clear: measure the hell out of these crazy phenomena! From many points of view, the Higgs is the most important actor in this story amenable to experimental study, so I just can’t stand all the talk of being disappointed by seeing nothing but the Higgs; it’s completely backwards. I find that the physicists who worry about not being able to convince politicians are (more or less secretly) not able to convince themselves that it is worth building the next collider. Fortunately, we do have a critical mass of fantastic young experimentalists who believe it is worth studying the Higgs to death, while also exploring whatever might be at the energy frontier, with no preconceptions about what they might find.

What makes the Higgs boson such a rich target for a future collider?

It is the first example we’ve seen of the simplest possible type of elementary particle. It has no spin, no charge, only mass, and this extreme simplicity makes it theoretically perplexing. There is a striking difference between massive and massless particles that have spin. For instance, a photon is a massless particle of spin one; because it moves at the speed of light, we can’t “catch up” with it, and so we only see it have two “polarisations”, or ways it can spin. By contrast the Z boson, which also has spin one, is massive; since you can catch up with it, you can see it spinning in any of three directions. This “two not equal to three” business is quite profound. As we collide particles at ever increasing energies, we might think that their masses are irrelevant tiny perturbations to their energies, but this is wrong, since something must account for the extra degrees of freedom.
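The counting behind this remark is simple to state (a standard result, not part of the interview): a massless particle of spin s has only the two helicity states ±s, whereas a massive one has the full 2s + 1 spin projections:

```latex
\underbrace{\lambda = \pm 1}_{\text{massless spin-1 (photon): 2 states}}
\qquad \text{vs.} \qquad
\underbrace{\lambda = -1,\ 0,\ +1}_{\text{massive spin-1 (Z): } 2s+1 = 3 \text{ states}}
```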

The whole story of the Higgs is about accounting for this “two not equal to three” issue, to explain the extra spin states needed for massive W and Z particles mediating the weak interactions. And this also gives us a good understanding of why the masses of the elementary particles should be pegged to that of the Higgs. But the huge irony is that we don’t have any good understanding for what can explain the mass of the Higgs itself. That’s because there is no difference in the number of degrees of freedom between massive and massless spin-zero particles, and related to this, simple estimates for the Higgs mass from its interactions with virtual particles in the vacuum are wildly wrong. There are also good theoretical arguments, amply confirmed in analogous condensed-matter systems and elsewhere in particle physics, for why we shouldn’t have expected to see such a beast lonely, unaccompanied by other particles. And yet here we are. Nature clearly has other ideas for what the Higgs is about than theorists do.

Is supersymmetry still a motivation for a new collider?

Nobody who is making the case for future colliders is invoking, as a driving motivation, supersymmetry, extra dimensions or any of the other ideas that have been developed over the past 40 years for physics beyond the Standard Model. Certainly many of the versions of these ideas, which were popular in the 1980s and 1990s, are either dead or on life support given the LHC data, but others proposed in the early 2000s are alive and well. The fact that the LHC has ruled out some of the most popular pictures is a fantastic gift to us as theorists. It shows that understanding the origin of the Higgs mass must involve an even larger paradigm change than many had previously imagined. Ironically, had the LHC discovered supersymmetric particles, the case for the next circular collider would be somewhat weaker than it is now, because that would (indirectly) support a picture of a desert between the electroweak and Planck scales. In this picture of the world, most people wanted a linear electron–positron collider to measure the superpartner couplings in detail. It’s a picture people very much loved in the 1990s, and a picture that appears to be wrong. Fine. But when theorists are more confused, it’s the time for more, not less experiments.

What definitive answers will a future high-energy collider give us?

First and foremost, we go to high energies because it’s the frontier, and we look around for new things. While there is absolutely no guarantee we will produce new particles, we will definitely stress test our existing laws in the most extreme environments we have ever probed. Measuring the properties of the Higgs, however, is guaranteed to answer some burning questions. All the drama revolving around the existence of the Higgs would go away if we saw that it had substructure of any sort. But from the LHC, we have only a fuzzy picture of how point-like the Higgs is. A Higgs factory will decisively answer this question via precision measurements of the coupling of the Higgs to a slew of other particles in a very clean experimental environment. After that the ultimate question is whether or not the Higgs looks point-like even when interacting with itself. The simplest possible interaction between elementary particles is when three particles meet at a space–time point. But we have actually never seen any single elementary particle enjoy this simplest possible interaction. For good reasons going back to the basics of relativity and quantum mechanics, there is always some quantum number that must change in this interaction – either spin or charge quantum numbers change. The Higgs is the only known elementary particle allowed to have this most basic process as its dominant self-interaction. A 100 TeV collider producing billions of Higgs particles will not only detect the self-interaction, but will be able to measure it to an accuracy of a few per cent. Just thinking about the first-ever probe of this simplest possible interaction in nature gives me goosebumps.
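For concreteness (a standard textbook expansion, not a quotation from the interview): writing the Standard Model Higgs potential in terms of the physical field h around the vacuum expectation value v ≈ 246 GeV gives

```latex
V(h) = \tfrac{1}{2} m_h^2\, h^2 \;+\; \lambda_3\, v\, h^3 \;+\; \tfrac{1}{4}\lambda_4\, h^4,
\qquad
\lambda_3^{\mathrm{SM}} = \lambda_4^{\mathrm{SM}} = \frac{m_h^2}{2v^2},
```

so a measurement of the h³ term directly tests whether the self-coupling takes its Standard Model value.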

What are the prospects for future dark-matter searches?

Beyond the measurements of the Higgs properties, there are all sorts of exciting signals of new particles that can be looked for at both Higgs factories and 100 TeV colliders. One I find especially important is WIMP dark matter. There is a funny perception, somewhat paralleling the absence of supersymmetry at the LHC, that the simple paradigm of WIMP dark matter has been ruled out by direct-detection experiments. Nope! In fact, the very simplest models of WIMP dark matter are perfectly alive and well. Once the electroweak quantum numbers of the dark-matter particles are specified, you can unambiguously compute what mass an electroweak charged dark-matter particle should have so that its thermal relic abundance is correct. You get a number between 1–3 TeV, far too heavy to be produced in any sizeable numbers at the LHC. Furthermore, they happen to have minuscule interaction cross sections for direct detection. So these very simplest theories of WIMP dark matter are inaccessible to the LHC and direct-detection experiments. But a 100 TeV collider has just enough juice to either see these particles, or rule out this simplest WIMP picture.
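The computation referred to is standard thermal freeze-out, in which the relic abundance is fixed essentially by the annihilation cross section alone:

```latex
\Omega_\chi h^2 \;\approx\; 0.12 \times
\frac{3\times 10^{-26}\ \mathrm{cm^3\,s^{-1}}}{\langle \sigma v \rangle},
\qquad
\langle \sigma v \rangle \sim \frac{g^4}{m_\chi^2}.
```

Demanding that a particle whose coupling g is fixed by its electroweak quantum numbers saturate the observed abundance then pins its mass to the TeV range quoted above.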

What is the cultural value of a 100 km supercollider?

Both the depth and visceral joy of experiments in particle physics is revealed in how simple it is to explain: we smash things together with the largest machines that have ever been built, to probe the fundamental laws of nature at the tiniest distances we’ve ever seen. But it goes beyond that to something more important about our self-conception as people capable of doing great things. The world has all kinds of long-term problems, some of which might seem impossible to solve. So it’s important to have a group of people who, over centuries, give a concrete template for how to go about grappling with and ultimately conquering seemingly impossible problems, driven by a calling far larger than themselves. Furthermore, suppose it’s 200 years from now, and there are no big colliders on the planet. How can humans be sure that the Higgs or top particles exist? Because it says so in dusty old books? There is an argument to be made that as we advance we should be able to do the things we did in the past. After all, the last time that fundamental knowledge was shoved in old dusty books was in the dark ages, and that didn’t go very well for the West.

What about justifying the cost of the next collider?

There are a number of projects and costs we could be talking about, but let’s call it $5–25 billion. Sounds like a lot, right? But the global economy is growing, not shrinking, and the cost of accelerators as a fraction of GDP has barely changed over the past 40 years – even a 100 TeV collider is in this same ballpark. Meanwhile the scientific issues at stake are more profound than they have been for many decades, so we certainly have an honest science case to make that we need to keep going.

People sometimes say that if we don’t spend billions of dollars on colliders, then we can do all sorts of other experiments instead. I am a huge fan of small-scale experiments, but this argument is silly because science funding is infamously not a zero-sum game. So, it’s not a question of, “do we want to spend tens of billions on collider physics or something else instead”, it is rather “do we want to spend tens of billions on fundamental physics experiments at all”.

Another argument is that we should wait until some breakthrough in accelerator technology, rather than just building bigger machines. This is naïve. Of course miracles can always happen, but we can’t plan doing science around miracles. Similar arguments were made around the time of the cancellation of the Superconducting Super Collider (SSC) 30 years ago, with prominent condensed-matter physicists saying that the SSC should wait for the development of high-temperature superconductors that would dramatically lower the cost. Of course those dreamed-of practical superconductors never materialised, while particle physics continued from strength to strength with the best technology available.

What do you make of claims that colliders are no longer productive?

It would be all to the good to have a no-holds-barred, public discussion about the pros and cons of future colliders, led by people with a deep understanding of the relevant technical and scientific issues. It’s funny that non-experts don’t even make the best arguments for not building colliders; I could do a much better job than they do! I can point you to an awesomely fierce debate about future colliders that already took place in China two years ago (Int. J. Mod. Phys. A 31 1630053 and 1630054). C N Yang, who is one of the greatest physicists of the 20th century and enormously influential in China, came out with a strong attack on colliders, not only in China but more broadly. I was delighted. Having a serious attack meant there could be a serious response, masterfully provided by David Gross. It was the King Kong vs Godzilla of fundamental physics, played out on the pages of major newspapers in China, fantastic!

What are you working on now?

About a decade ago, after a few years of thinking about the cosmology of “eternal inflation” in connection with solutions to the cosmological constant and hierarchy problems, I concluded that these mysteries can’t be understood without reconceptualising what space–time and quantum mechanics are really about. I decided to warm up by trying to understand the dynamics of particle scattering, like collisions at the LHC, from a new starting point, seeing space-time and quantum mechanics as being derived from more primitive notions. This has turned out to be a fascinating adventure, and we are seeing more and more examples of rather magical new mathematical structures, which surprisingly appear to underlie the physics of particle scattering in a wide variety of theories, some close to the real world. I am also turning my attention back to the goal that motivated the warm-up, trying to understand cosmology, as well as possible theories for the origin of the Higgs mass and cosmological constant, from this new point of view. In all my endeavours I continue to be driven, first and foremost, by the desire to connect deep theoretical ideas to experiments and the real world.


CMS beam pipe to be mined for monopoles

The original CMS beampipe

On 18 February the CMS and MoEDAL collaborations at CERN signed an agreement that will see a 6 m-long section of the CMS beam pipe cut into pieces and fed into a SQUID in the name of fundamental research. The 4 cm-diameter beryllium tube – which was in place from 2008 until its replacement by a new beam pipe for LHC Run 2 in 2013 – is now under the proud ownership of MoEDAL spokesperson Jim Pinfold and colleagues, who will use it to search for the existence of magnetic monopoles.

Magnetic monopoles carrying multiple units of magnetic charge, if produced in high-energy particle collisions at the LHC, are so highly ionising that they could stop in the material surrounding the collision points and bind there with the beryllium nuclei of the beam pipe. To detect the trapped monopoles, Pinfold and coworkers will pass the beam-pipe material through superconducting loops and look for a non-decaying current using highly precise SQUID-based magnetometers.
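The detection principle lends itself to a very simple analysis: compare the magnetometer current before and after a sample traverses the loop, and flag any persistent offset. A minimal sketch of that logic follows (illustrative units and threshold, not the MoEDAL analysis code):

```python
import numpy as np

# Readings are assumed normalised so that one trapped Dirac charge
# produces a persistent current step of 1.0 (hypothetical calibration).
STEP_THRESHOLD = 0.25  # tolerance on the post-passage offset (assumed)

def persistent_step(trace, n_baseline=100):
    """Difference between the current baselines measured after and
    before a sample passes through the superconducting loop."""
    before = np.mean(trace[:n_baseline])
    after = np.mean(trace[-n_baseline:])
    return after - before

def is_monopole_candidate(trace):
    # Ordinary magnetised material behaves as a dipole: the induced
    # current returns to baseline once the sample has passed through.
    # A trapped monopole leaves a step that never decays.
    return abs(persistent_step(trace)) > STEP_THRESHOLD
```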

Materials from the CDF and D0 detectors at the Tevatron and from the H1 detector at HERA were subjected to such searches during the 1990s, and the first pieces of beam pipe from the LHC experiments, taken from the CMS region, were tested in 2012. But these were from regions far from the collision point, whereas the new study will use material surrounding the CMS central-interaction region. “It’s the most directly exposed piece of material of the experiment that the monopoles encounter when produced and moving away from the collision point,” says Albert De Roeck of CMS and MoEDAL, who was involved in the previous LHC and HERA studies. “Although no signs of monopoles have shown up in data so far, this new study pushes the search for monopoles with magnetic charge well beyond the five Dirac charges currently achievable with the MoEDAL detector.”

MoEDAL technical coordinator Richard Soluk and a small team of technicians will first cut the beampipe into bite-sized pieces at a special facility constructed at the Centre for Particle Physics at the University of Alberta, Canada, where they have to be especially careful because beryllium is highly toxic. The resulting pieces, carefully enshrined in plastic, will then be shipped back to Europe to the SQUID Magnetometer Laboratory at ETH Zurich, where the freshly sliced beam pipe will undergo a short measurement campaign planned for early summer. “On the analysis front we have to estimate how many monopoles would have been trapped in the beam pipe during its deployment at CMS as a function of monopole mass, spin, magnetic charge, kinetic energy and production mechanism,” says Pinfold.

The latest search is complementary to general monopole searches that have already been carried out by the ATLAS and MoEDAL collaborations. Deployed at LHC Point 8, MoEDAL contains more than 100 m² of nuclear-track detectors that are sensitive only to new physics and has a dedicated trapping detector consisting of around one tonne of aluminium.

“Most modern theories such as GUTs and string theory require the existence of monopoles,” says Pinfold. “The monopole is the most important particle not yet found.”

Colliders join the hunt for dark energy

Dark analysis

It is 20 years since the discovery that the expansion of the universe is accelerating, yet physicists still know precious little about the underlying cause. In a classical universe with no quantum effects, the cosmic acceleration can be explained by a constant that appears in Einstein’s equations of general relativity, albeit one with a vanishingly small value. But clearly our universe obeys quantum mechanics, and the ability of particles to fluctuate in and out of existence at all points in space leads to a prediction for Einstein’s cosmological constant that is 120 orders of magnitude larger than observed. “It implies that at least one, and likely both, of general relativity and quantum mechanics must be fundamentally modified,” says Clare Burrage, a theorist at the University of Nottingham in the UK.

With no clear alternative theory available, all attempts to explain the cosmic acceleration introduce a new entity called dark energy (DE) that makes up 70% of the total mass-energy content of the universe. It is not clear whether DE is due to a new scalar particle or a modification of gravity, or whether it is constant or dynamic. It’s not even clear whether it interacts with other fundamental particles or not, says Burrage. Since DE affects the expansion of space–time, however, its effects are imprinted on astronomical observables such as the cosmic microwave background and the growth rate of galaxies, and the main approach to detecting DE involves looking for possible deviations from general relativity on cosmological scales.

Unique environment

Collider experiments offer a unique environment in which to search for the direct production of DE particles, since they are sensitive to a multitude of signatures and therefore to a wider array of possible DE interactions with matter. Like other signals of new physics, DE (if accessible at small scales) could manifest itself in high-energy particle collisions either through direct production or via modifications of electroweak observables induced by virtual DE particles.

Last year, the ATLAS collaboration at the LHC carried out a first collider search for light scalar particles that could contribute to the accelerating expansion of the universe. The results demonstrate the ability of collider experiments to access new regions of parameter space and provide complementary information to cosmological probes.

Unlike dark matter, for which there exists many new-physics models to guide searches at collider experiments, few such frameworks exist that describe the interaction between DE and Standard Model (SM) particles. However, theorists have made progress by allowing the properties of the prospective DE particle and the strength of the force that it transmits to vary with the environment. This effective-field-theory approach integrates out the unknown microscopic dynamics of the DE interactions.

The new ATLAS search was motivated by a 2016 model by Philippe Brax of the Université Paris-Saclay, Burrage, Christoph Englert of the University of Glasgow, and Michael Spannowsky of Durham University. The model provides the most general framework for describing DE theories with a scalar field and contains as subsets many well-known specific DE models – such as quintessence, galileon, chameleon and symmetron. It extends the SM lagrangian with a set of higher dimensional operators encoding the different couplings between DE and SM particles. These operators are suppressed by a characteristic energy scale, and the goal of experiments is to pinpoint this energy for the different DE–SM couplings. Two representative operators predict that DE couples preferentially to either very massive particles like the top quark (“conformal” coupling) or to final states with high-momentum transfers, such as those involving high-energy jets (“disformal” coupling).
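Schematically (the normalisations below are representative of this general scalar-field effective theory, not a verbatim quotation of the model), the two operators couple the DE scalar φ to the energy–momentum tensor of matter, with M_c and M_d the suppression scales the search constrains:

```latex
\mathcal{L} \;\supset\; \frac{\varphi^2}{M_c^2}\, T^{\mu}{}_{\mu}
\;+\; \frac{\partial_\mu \varphi\, \partial_\nu \varphi}{M_d^4}\, T^{\mu\nu}.
```

The trace coupling weights heavy states such as the top quark, while the derivative (disformal) coupling is enhanced at large momentum transfer.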

Signatures

“In a big class of these operators the DE particle cannot decay inside the detector, therefore leaving a missing-energy signature,” explains Spyridon Argyropoulos of the University of Iowa, who is a member of the ATLAS team that carried out the analysis. “Two possible signatures for the detection of DE are therefore the production of a top–antitop quark pair or the production of high-energy jets, associated with large missing energy. Such signatures are similar to the ones expected from the production of supersymmetric top quarks (“stops”), where the missing energy would be due to the neutralinos from the stop decays, or from the production of SM particles in association with dark-matter particles, which also leave a missing-energy signature in the detector.”

The ATLAS analysis, which was based on 13 TeV LHC data corresponding to an integrated luminosity of 36.1 fb–1, re-interprets the result of recent ATLAS searches for stop quarks and dark matter produced in association with jets. No significant excess over the predicted background was observed, setting the most stringent constraints on the suppression scale of conformal and disformal couplings of DE to normal matter in the context of an effective field theory of DE. The results show that the characteristic energy scale must be higher than approximately 300 GeV for the conformal coupling and above 1.2 TeV for the disformal coupling.

The search for DE at colliders is only at the beginning, says Argyropoulos. “The limits on the disformal coupling are several orders of magnitudes higher than the limits obtained from other laboratory experiments and cosmological probes, proving that colliders can provide crucial information for understanding the nature of DE. More experimental signatures and more types of coupling between DE and normal matter have to be explored and more optimal search strategies could be developed.”

With this pioneering interpretation of a collider search in terms of dark-energy models, ATLAS has become the first experiment to probe all forms of matter in the observable universe, opening a new avenue of research at the interface of particle physics and cosmology. A complementary laboratory measurement is also being pursued by CERN’s CAST experiment, which studies a particular incarnation of DE (chameleon) produced via interactions of DE with photons.

But DE is not going to give up its secrets easily, cautions theoretical cosmologist Dragan Huterer at the University of Michigan in the US. “Dark energy is normally considered a very large-scale phenomenon, but you may justifiably ask how the study of small systems in a collider can say anything about DE. Perhaps it can, but in a fairly model-dependent way. If ATLAS finds a signal that departs from the SM prediction it would be very exciting. But linking it firmly to DE would require follow-up work and measurements – all of which would be very exciting to see happen.”

Search for new quarks addresses unnaturalness

Figure 1

The Standard Model (SM) is a triumph of modern physics, with unprecedented success in explaining the subatomic world. The Higgs boson, discovered in 2012, was the capstone of this amazing theory, yet this newly known particle raises many questions. For example, interactions between the Higgs boson and the top quark should lead to huge quantum corrections to the Higgs boson mass, possibly as large as the Planck mass (>10¹⁸ GeV). Why, then, is the observed mass only 125 GeV? Finding a solution to this “hierarchy problem” is one of the top motivations of many new theories of particle physics.

A common feature in several of these theories is the existence of vector-like quarks – in particular, a vector-like top quark (T) that could naturally cancel the large quantum corrections caused by the SM top quark. Like other quarks, vector-like quarks are spin-½ particles that interact via the strong force and, like all spin-½ particles, they have left-handed and right-handed versions. The unique feature of vector-like quarks is their ambidexterity: while the weak force only interacts with left-handed SM particles, it would interact the same way with both the right- and left-handed versions of vector-like quarks. This also gives vector-like quarks more options in how they can decay. Unlike the Standard Model top quark, which almost always decays to a bottom quark and W boson (t → Wb), a vector-like top quark could decay three different ways: T → Wb, T → Zt, or T → Ht.

The search for vector-like quarks in ATLAS spans a wide range of dedicated analyses, each focusing on a particular experimental signature (possibly involving leptons, boosted objects or large missing transverse energy). The breadth of the programme allows ATLAS to be sensitive to most relevant decays of vector-like top quarks, and also those of vector-like bottom quarks, thus increasing the chances of discovery. The creation of particle–antiparticle pairs is the most probable production mechanism for vector-like quarks with mass around or below 1 TeV. For higher masses, single production of vector-like quarks may have a larger rate.

ATLAS recently performed a statistical combination of all the individual searches that looked for pair-production of vector-like quarks. While the individual analyses were designed to be sensitive to particular sets of decays, the combined results provide increased sensitivity to all considered decays of vector-like top quarks with masses up to 1.3 TeV. No vector-like top or bottom quarks were found. The combination allowed ATLAS to set the most stringent exclusion bounds on the mass of a vector-like top quark for arbitrary sets of branching ratios to the three decay modes (figure, left).
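The plane over which these limits are quoted is simply the set of branching fractions allowed by the three decay modes, which must sum to unity:

```latex
\mathcal{B}(T \to Wb) \;+\; \mathcal{B}(T \to Zt) \;+\; \mathcal{B}(T \to Ht) \;=\; 1 .
```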

As the limits on vector-like quarks reach higher masses, the importance of searching for their single production rises. Such searches are also interesting from a theoretical perspective, since they allow one to constrain parameters of the production model (figure, right).

Given these new strong limits on vector-like quarks and the lack of evidence for supersymmetry, the theoretical case for a naturally light Higgs boson is not looking good! But nature probably still has a few tricks up her sleeve to get out of this conundrum.

Search for WISPs gains momentum

MADMAX

Understanding the nature of dark matter is one of the most pressing problems in physics. This strangely nonreactive material is estimated, from astronomical observations, to make up 85% of all matter in the universe. The known particles of the Standard Model (SM) of particle physics, on the other hand, account for a paltry 15%.

Physicists have proposed many dark-matter candidates. Two in particular stand out because they arise in extensions of the SM that solve other fundamental puzzles, and because there are a variety of experimental opportunities to search for them. The first is the neutralino, which is the lightest supersymmetric partner of the SM neutral bosons. The second is the axion, postulated 40 years ago to solve the strong CP problem in quantum chromodynamics (QCD). While the neutralino belongs to the category of weakly interacting massive particles (WIMPs), the axion is the prime example of a very weakly interacting sub-eV particle (WISP).

Neutralinos as WIMPs have dominated the search for cold dark matter since the mid-1980s, when it was realised that massive particles with a cross section of the order of the weak interaction would result in precisely the right density to explain dark matter. There have been tremendous efforts to hunt for WIMPs both at hadron colliders, especially now at CERN’s Large Hadron Collider (LHC), and in large underground detectors, such as CDMS, CRESST, DARKSIDE, LUX, PandaX and XENON. However, up to now, no WIMP has been observed (CERN Courier July/August 2018 p9).

Fig. 1.

Very light bosons as WISPs are a firm prediction of models that solve problems of the SM by the postulation of a new symmetry which is broken spontaneously in the vacuum. Such extensions contain an additional scalar field with a potential shaped like a Mexican hat – similar to the Higgs potential in the SM (figure 1). This leads to spontaneous breaking of symmetry at a scale corresponding to the radius of the trough of the hat: excitations in the direction along the trough correspond to a light Nambu–Goldstone (NG) boson, while the excitation in the radial direction perpendicular to the trough corresponds to a heavy particle with a mass determined by the symmetry-breaking scale. The strengths of the interactions between such light bosons and regular SM particles are inversely proportional to the symmetry-breaking energy scale and are therefore very weak. Being light, very weakly interacting and cold due to their non-thermal production history, these particles qualify as natural WISP cold dark-matter candidates.
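A minimal realisation makes this concrete: for a complex scalar φ with symmetry-breaking scale f, the angular excitation a along the trough of the hat is the light Nambu–Goldstone boson, while the radial mode ρ is heavy,

```latex
V(\phi) = \lambda \left( |\phi|^2 - \frac{f^2}{2} \right)^{\!2},
\qquad
\phi = \frac{1}{\sqrt{2}}\,(f + \rho)\, e^{\,i a / f},
\qquad
m_\rho = \sqrt{2\lambda}\, f, \quad m_a = 0
```

(with the NG boson exactly massless at this level; explicit breaking effects give it the small mass discussed below).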

Primordial production

In fact, WISP dark matter is inevitably produced in the early universe. When the temperature in the primordial plasma drops below the symmetry-breaking scale, the boson fields are frozen at a random initial value in each causally-connected region. Later, they relax towards the minimum of their potential at zero fields and oscillate around it. Since there is no significant damping of these field oscillations via decays or interactions, the bosons behave as a very cold dark-matter fluid. If symmetry breaking occurs after the likely inflationary-expansion epoch of the universe (corresponding to a post-inflationary symmetry-breaking scenario), WISP dark matter would also be produced by the decay of topological defects from the realignment of patches of the universe with random initial conditions. A huge region in parameter space spanned by WISP masses and their symmetry-breaking scales can give rise to the observed dark-matter distribution.
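In equations – the standard misalignment mechanism – each causally connected patch of the field obeys a damped-oscillator equation in the expanding universe, with the Hubble rate H acting as friction:

```latex
\ddot{\theta} + 3H\,\dot{\theta} + m^2(T)\,\theta \simeq 0,
\qquad
\text{oscillations set in when } 3H(T_{\rm osc}) \simeq m(T_{\rm osc}),
```

after which the energy density stored in the oscillations redshifts like pressureless matter, exactly as required of cold dark matter.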

The axion is a particularly well-motivated example of a WISP. It was proposed to explain the results of searches for a static electric dipole moment of the neutron, which would constitute a CP-violating effect of QCD. The size of this CP-violation, parameterised by the angle θ, is predicted to have an arbitrary value between –π and π, yet experiments show its absolute value to be less than 10–10. If θ is replaced by a dynamical field, θ(x), as proposed by Peccei and Quinn in 1977, QCD dynamics ensures that the low-energy effective potential of the axion field has an absolute minimum at θ = 0. Therefore, in vacuum, the CP violating effects due to the θ angle in QCD disappear – providing an elegant solution to the strong CP problem. The axion is the inevitable particle excitation of θ(x), and its mass is determined by the unknown breaking scale of the global symmetry.
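In formulae, the Peccei–Quinn mechanism replaces the static angle by θ + a(x)/f_a in the topological term of QCD; non-perturbative dynamics then generates a potential whose minimum sits at the CP-conserving point, with the axion mass inversely proportional to the decay constant f_a (leading-order chiral result):

```latex
\mathcal{L} \;\supset\; \left( \theta + \frac{a}{f_a} \right)
\frac{g_s^2}{32\pi^2}\, G^{b}_{\mu\nu} \widetilde{G}^{b\,\mu\nu},
\qquad
m_a \simeq \frac{m_\pi f_\pi}{f_a}\, \frac{\sqrt{m_u m_d}}{m_u + m_d}.
```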

Fig. 2.

Lattice-QCD calculations performed last year precisely determined the temperature and corresponding time after the Big Bang when axion cold dark-matter could have formed as a function of the axion mass. It was found that, in the post-inflationary symmetry breaking scenario, the axion mass has to exceed 28 μeV; otherwise, the predicted amount of dark matter overshoots the observed amount. Taking into account the additional production of axion dark-matter from the decay of topological defects, an axion with a mass between 30 μeV and 10 meV may account for all of the dark matter in the universe. In the pre-inflationary symmetry breaking scenario, smaller masses are also possible.

Axions are not the only WISP species that could account for dark matter. There could be axion-like particles (ALPs), which are very similar to axions but do not solve the CP problem of QCD, or lightweight, weakly interacting, so-called hidden photons, for example. String theory suggests a plenitude of ALPs, which could have couplings to photons, leptons or light quarks.

Due to their tiny masses, WISPs might also be produced inside stars or alter the propagation of photons in the universe. Observations of stellar evolutions hint at such signals: red giants, helium-burning stars and white dwarfs seem to be experiencing unseen energy losses exceeding those expected from neutrino emission. Intriguingly, these anomalies can be explained in a unified manner by the existence of a sub-keV-mass axion or ALP with a coupling both to electrons and photons. Additionally, observations suggest that the propagation of TeV photons in the universe suffers less than expected from interactions with the extragalactic background light. This, in turn, could be explained by the conversion of photons into ALPs and back in astrophysical magnetic fields, interestingly with about the same axion–photon coupling strength as indicated by the observed stellar anomalies. Both effects have been known for almost 10 years. They are scientifically disputed, but a WISP explanation has not yet been excluded.

Experimental landscape

Most experiments searching for WISPs exploit their possible mixing with photons. Given the small masses and feeble interactions of axions and ALPs, however, building experiments that are sensitive enough to detect them is a considerable challenge. In the 1980s, Pierre Sikivie of the University of Florida in the US suggested a way forward based on the conversion of axions to photons: in a static magnetic field, the axion can “borrow” a virtual photon from the field and turn into a real photon (figure 2). Most experiments search for axions and ALPs in this way, with three main approaches being pursued: haloscopes, which look directly for dark-matter WISPs in the galactic halo of our Milky Way; helioscopes, which search for ALPs or axions emitted by the Sun; and laboratory experiments, which aim to generate and detect ALPs in a single setup.

Fig. 3.

Direct axion dark-matter searches differ in two aspects from WIMP dark-matter searches. First, axion dark matter would convert to photons, while WIMPs are scattered off matter. Second, the particle-number density for axion dark-matter, due to its low mass, is about 15 orders of magnitude larger than it is for WIMP dark matter. In fact, cold dark-matter axions and ALPs behave like a highly degenerate Bose–Einstein condensate with a de Broglie wavelength of the order of metres or kilometres for μeV and neV masses, respectively. Dark-matter axions and ALPs are thus much better pictured as a classical-field oscillation. In a magnetic field, they induce tiny electric-field oscillations with a frequency determined by the axion mass. If the de Broglie wavelength of the dark-matter axion is larger than the experimental setup, the tiny oscillations are spatially coherent in the experiment and can, in principle, be “easily” detected using a resonant microwave cavity tuned to the correct but unknown frequency. The sensitivity of such an experiment increases with the magnetic field strength squared, the volume of the cavity and its quality factor. Unfortunately, since the range of axion mass predicted by theories is huge, methods are required to tune the cavity to the frequency range corresponding to the respective axion masses.
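The scaling quoted here is captured by the standard haloscope signal-power formula, with C an order-one form factor describing the overlap of the cavity mode with the axion-induced field, ρ_a the local dark-matter density and Q_L the loaded quality factor:

```latex
P_{\rm sig} \;\propto\; g_{a\gamma\gamma}^{2}\, \frac{\rho_a}{m_a}\, B_0^{2}\, V\, C\, Q_L ,
```

valid as long as Q_L remains below the effective quality factor (of order 10⁶) of the dark-matter axion signal itself.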

This cavity approach has been the basis of most searches for axion dark-matter in the past decades, in particular the Axion Dark Matter Experiment (ADMX) at the University of Washington, US. Using a tuning rod inside the cavity to change the resonance frequency and, recently, by reducing noise in its detector system, the ADMX team has shown that it can reach axion dark-matter sensitivity. ADMX, which has been pioneering the field for two decades, is currently taking data and could find dark-matter axions at any time, provided the axion mass lies in the range 2–10 μeV. Meanwhile, the HAYSTAC collaboration at Yale University has very recently demonstrated that the same experimental approach can be expanded up to an axion mass of around 30 μeV. Since smaller-volume cavities (usually with lower quality factors) are needed to probe higher frequencies, however, the single-cavity approach is limited to axion masses below about 40 μeV. One novel method to probe higher masses is to use multiple matched cavities, as pursued, for example, by ADMX and the South Korean Center for Axion and Precision Physics.

Transitions

A different way to exploit the tiny electric-field oscillations from dark-matter axions in a strong magnetic field is to use transitions between materials with different dielectric constants: at surfaces, the axion-induced electromagnetic oscillations have a discontinuity, which is to be balanced by radiation from the surface. For a mirror with a surface area of 1 m² in a 10 T field, this would lead to an undetectable emission of around 10⁻²⁷ W if axions make up all of the dark matter. Furthermore, the emission power does not depend on the axion mass. In principle, if a parabolic mirror with a surface area of 10,000 m² could be magnetised with a 10 T field, the predicted radiation power (10⁻²³ W) could be focused and detected using state-of-the-art amplification techniques, but such an experiment seems impractical at present.

Fig. 4.

Alternatively, many magnetised dielectric discs in parallel can be placed in front of a mirror (figure 3): since the emission from all surfaces is coherent, constructive interference can boost the signal sensitivity for a given frequency range determined by the spacing between the discs. First studies performed in the past years at the Max Planck Institute for Physics in Munich have revealed that, for axion masses around 100 μeV, the sensitivity could be good enough to cover the predicted dark-matter axion mass range. The MADMAX (Magnetized Disc and Mirror Axion Experiment) collaboration, formed in October 2017, aims to use this approach to close the sensitivity gap in the well-motivated range for dark-matter axions with masses around 100 μeV. First design studies indicate that it is feasible to build a dipole magnet with the required properties using established niobium-titanium superconductor technology. As a first step, a prototype experiment is planned consisting of a booster with a reduced number of discs installed inside a prototype magnet. The experiment will be located at DESY in Hamburg, and first measurements sensitive to new ALPs parameter ranges are planned within the next few years.

Model-independent searches

These direct searches for axion dark matter are very promising, but they are hampered at present by the unknown axion mass and rely on cosmological assumptions. Other, less-model dependent, experiments are required to further probe the existence of ALPs.

Fig. 5.

ALPs with energies of the order of a few keV could be produced in the solar centre, and could be detected on Earth by pointing a strong dipole magnet at the Sun: axions entering the magnet could be converted into photons in the same way they are in cavity experiments. The difference is that the Sun would emit relativistic axions with an energy spectrum very similar to the thermal spectrum in its core, so experiments need to detect X-ray photons and are sensitive to axion masses up to a maximum depending on the length of the apparatus (figure 4, top). This helioscope technique was brought to the fore by the CERN Axion Solar Telescope (CAST), shown in figure 5, which began operations in 2002 and has excluded axion masses above 0.02 eV. As a successor, the International Axion Observatory (IAXO) was formally founded in July 2017 and received an advanced grant from the European Research Council earlier this year. The near-term goal of the collaboration is to build a scaled-down prototype version of the experiment, called babyIAXO, which is under discussion for possible location at DESY.

Fig. 6.

The third, laboratory-based, approach to search for WISPs also aims to generate and detect ALPs without any model assumption. In the first section of such an experiment, laser light is sent through a strong magnetic field so that ALPs might be generated via interactions of optical photons with the magnetic field. The second section is separated from the first one by a light-tight wall that can only be penetrated by ALPs. These would stream through a strong magnetic field behind the wall, allowing them to be re-converted into photons and giving the impression of light shining through a wall (figure 4, bottom).
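The price of this model independence is a doubly suppressed rate: in the coherent limit (ALP mass small compared with the inverse magnet length, in natural units), the conversion probability in a field B over a length L enters twice,

```latex
P_{\gamma \to a} \simeq \left( \frac{g_{a\gamma\gamma}\, B\, L}{2} \right)^{\!2},
\qquad
P_{\gamma \to a \to \gamma} = P_{\gamma \to a}\, P_{a \to \gamma}
\simeq \left( \frac{g_{a\gamma\gamma}\, B\, L}{2} \right)^{\!4},
```

which is why the optical resonators of ALPS II, described below, are essential for recovering sensitivity.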

Such experiments have been performed since the early 1990s, but no hint for any ALP has shown up. Today, the most advanced project in this laboratory-based category is ALPS II, currently being set up at DESY (figure 6). This experiment will use two optical resonators, one on each side of the wall, to “recycle” the light before the wall and to increase the re-conversion probability of ALPs into photons behind it, allowing ALPS II to reach sensitivities beyond ALP–photon coupling limits from helioscopes. It also plans to use 20 dipoles from the former HERA collider, each of which has to be mechanically straightened, to generate the magnetic field.

Gaining momentum

Fig. 7.

Searches for very lightweight axions and ALPs, potentially explaining all of the dark matter around us, are strongly gaining momentum. CERN has been supporting such activities in the past (with solar-axion and dark-matter searches at CAST, and the OSQAR and CROWS experiments using the shining-light-through-walls approach) and is also involved in the R&D phase for next-generation experiments such as IAXO (CERN Courier September 2014 p17). With the new initiatives of MADMAX and IAXO, both of which could be located at DESY, and the ALPS II experiment under construction there, experimental axion physics in Europe is set to probe a large fraction of a well-motivated parameter space (figure 7). Together with complementary experiments worldwide, the next 10 years or so should shine a bright light on WISPs as the solution to the dark-matter riddle, with thrilling data runs expected to start in the early 2020s.

Largest WIMP survey sets new limits

XENON1T data

On 28 May, the world’s largest and most sensitive detector for direct searches of dark matter in the form of weakly interacting massive particles (WIMPs) released its latest results. XENON1T, a 3D-imaging liquid-xenon time projection chamber located at Gran Sasso National Laboratory in Italy, reported its first results last year (CERN Courier July/August 2017 p10). Now, the 165-strong international collaboration has presented the results from an unprecedentedly large exposure of approximately one tonne × year.

The results are based on 1300 kg out of the total 2000 kg active xenon target and 279 days of data-taking, improving the sensitivity by almost four orders of magnitude compared to XENON10 (the first detector of the XENON dark-matter project, which has been hosted at Gran Sasso since 2005). The data are consistent with background expectations, and place the most stringent limit yet on spin-independent interactions of WIMPs with ordinary matter for WIMP masses above 6 GeV/c².
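
The headline exposure can be checked in one line from the figures quoted above:

    # One-line check of the quoted exposure: 1300 kg of fiducial mass times
    # 279 live-days is indeed close to one tonne x year.
    print(1.3 * 279 / 365.25)  # ~ 0.99 tonne-years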

XENON1T spokesperson Elena Aprile of Columbia University in the US describes the result as a milestone in dark-matter exploration. “Showing the result after a one tonne × year exposure was important in a field that moves fast,” she explains. “It is also clear from the new result that we will win faster with a yet-larger mass and lower radon background, which is why we are now pushing the XENONnT upgrade.”

The post Largest WIMP survey sets new limits appeared first on CERN Courier.

]]>
News XENON1T is a 3D-imaging liquid-xenon time projection chamber located at Gran Sasso National Laboratory in Italy. https://cerncourier.com/wp-content/uploads/2018/07/CCJulAug_WIMP2.png
Trigger-level searches for low-mass dijet resonances https://cerncourier.com/a/trigger-level-searches-for-low-mass-dijet-resonances/ Fri, 01 Jun 2018 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/trigger-level-searches-for-low-mass-dijet-resonances/ Dijet searches look for a resonance in the two-jet invariant mass spectrum.

The post Trigger-level searches for low-mass dijet resonances appeared first on CERN Courier.

]]>

The LHC is not only the highest-energy collider ever built, it also delivers proton–proton collisions at a much higher rate than any machine before. The LHC detectors measure each of these events in unprecedented detail, generating enormous volumes of data. To cope, the experiments apply tight online filters (triggers) that identify events of interest for subsequent analysis. Despite careful trigger design, however, it is inevitable that some potentially interesting events are discarded.

The LHC-experiment collaborations have devised strategies to get around this, allowing them to record much larger event samples for certain physics channels. One such strategy is the ATLAS trigger-object level analysis (TLA), which consists of a search for new particles with masses below the TeV scale decaying to a pair of quarks or gluons. The analysis uses selective readout to reduce the event size and therefore allow more events to be recorded, increasing the sensitivity to new physics in domains where rates of Standard Model (SM) background processes are very large.

Dijet searches look for a resonance in the two-jet invariant mass spectrum. The strong-interaction multi-jet background is expected to be smoothly falling, thus a bump-like structure would be a clear sign of a deviation from the SM prediction. As the invariant mass decreases, the rate of multi-jet events increases steeply – to the point where, in the sub-TeV mass range, the data-taking system of ATLAS cannot handle the full rate due to limited data-storage resources. Instead, the ATLAS trigger system discards most of the events in this mass range, reducing the sensitivity to low-mass dijet resonances.
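
Concretely, the invariant mass is computed from the jet four-momenta as m² = (E₁ + E₂)² − |p₁ + p₂|²; a minimal sketch with made-up jets in the usual (pT, η, φ, m) coordinates:

    # Two-jet invariant mass from jet four-momenta:
    # m^2 = (E1 + E2)^2 - |p1 + p2|^2.
    # The two jets below are made-up examples, not ATLAS data.
    import math

    def four_vector(pt, eta, phi, m):
        """(E, px, py, pz) from transverse momentum, pseudorapidity, azimuth, mass."""
        px, py = pt * math.cos(phi), pt * math.sin(phi)
        pz = pt * math.sinh(eta)
        e = math.sqrt(px**2 + py**2 + pz**2 + m**2)
        return e, px, py, pz

    def dijet_mass(jet1, jet2):
        e, px, py, pz = (a + b for a, b in zip(four_vector(*jet1), four_vector(*jet2)))
        return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

    # two 300 GeV jets, back to back in phi and separated in eta: m ~ 0.68 TeV
    print(dijet_mass((300.0, 0.5, 0.0, 10.0), (300.0, -0.5, math.pi, 10.0)))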

By recording only the final-state objects used to make the trigger decision, however, this limitation can be bypassed. For a dijet-resonance search, the only necessary ATLAS detector signals are the calorimeter information used to reconstruct the jets. This compact data format records far less information for each event, about 1% of the usual amount, allowing ATLAS to record dijet events at a rate 20 times larger than what is possible with standard data-taking (figure, left).
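
The logic is a simple bandwidth trade-off, sketched below under an assumed nominal event size; the article quotes only the ~1% size ratio, and in practice the full naive factor of 100 is not spent on this single stream (ATLAS quotes a 20-fold rate increase).

    # The trade-off behind trigger-level analysis: at fixed readout
    # bandwidth, smaller events mean a higher affordable recording rate.
    # The nominal event size and bandwidth are assumed placeholders.
    def affordable_rate_hz(bandwidth_mb_per_s, event_size_mb):
        return bandwidth_mb_per_s / event_size_mb

    FULL_EVENT_MB = 1.0                   # assumed nominal event size
    TLA_EVENT_MB = 0.01 * FULL_EVENT_MB   # ~1% of the full event, as in the text

    print(affordable_rate_hz(100.0, FULL_EVENT_MB))  # 100 Hz at full size
    print(affordable_rate_hz(100.0, TLA_EVENT_MB))   # 10,000 Hz at TLA size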

While the TLA technique gives access to physics at lower thresholds, the ATLAS detector information for these events is incomplete. Dedicated reconstruction and calibration techniques had to be developed to deal with the partial event information; as a result, the invariant mass computed from TLA jets agrees to within 0.05% with that computed from jets reconstructed using the full detector readout.

The data recorded by ATLAS in 2015 and 2016 at a centre-of-mass energy of 13 TeV did not reveal any bump-like structure in the TLA dijet spectrum. The unprecedented statistical precision allowed ATLAS to set its strongest limits on resonances decaying to quarks in the mass range between 450 GeV and 1 TeV (figure, right). The analysis is sensitive to new particles that could mediate interactions between the SM particles and a dark sector, and to other new resonances at the electroweak scale. This analysis probes an important mass region that could not otherwise be explored in this final state with comparable sensitivity.

ATLAS joins CMS and LHCb with an analysis technique that requires fewer storage resources to collect more LHC data. The technique will be extended in the future, with upgraded trigger farms and detectors making tracking information available at early trigger levels. It will thus play an important role at LHC Run 3 and at the high-luminosity LHC upgrade.

The post Trigger-level searches for low-mass dijet resonances appeared first on CERN Courier.

]]>
News Dijet searches look for a resonance in the two-jet invariant mass spectrum. https://cerncourier.com/wp-content/uploads/2018/06/CCJune18_News-Atlas.jpg
CMS searches for third-generation leptoquarks https://cerncourier.com/a/cms-searches-for-third-generation-leptoquarks/ Fri, 23 Mar 2018 11:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cms-searches-for-third-generation-leptoquarks/ Several phenomenological studies have suggested that anomalies in B decays could be explained by the existence of hypothetical new particles which couple to both leptons and quarks.

The post CMS searches for third-generation leptoquarks appeared first on CERN Courier.

]]>

Anomalies in decays of B mesons, in which a bottom quark changes flavour to become a charm quark, reported by the LHCb, Belle and BaBar collaborations, have triggered considerable excitement in the particle-physics community (see “Beauty quarks test lepton universality“). The combined results of these experiments suggest that the decay rates of B → D τ ν and B → D* τ ν differ by more than four standard deviations from the Standard Model (SM) predictions.

Several phenomenological studies have suggested that these differences could be explained by the existence of hypothetical new particles called leptoquarks (LQs), which couple to both leptons and quarks. Such particles appear naturally in several scenarios of new physics, including models inspired by grand unified theories or Higgs-compositeness models. Leptoquarks that couple to the third generation of SM fermions (top and bottom quarks, and the tau lepton and its associated neutrino) are considered to be of particular interest to explain these flavour anomalies.

Leptoquarks coupling to fermions of the first and second generations of the SM have been the target of many searches at collider experiments at successive energy frontiers (SPS, LEP, HERA, Tevatron). The most sensitive searches have been performed at the LHC, resulting in the exclusion of LQs with masses below 1.1 TeV. Searches for third-generation LQs were first performed at the Tevatron, and the baton has now been passed to the LHC.

The first investigation by the CMS collaboration used events recorded at an energy of 8 TeV during LHC Run 1, and targeted LQ pair production via the strong interaction with the decay channel of the LQ to a top quark and a tau lepton. The result of this search, reported by CMS in 2015, was that third-generation LQs with masses below 0.685 TeV were excluded. These early results have now been extended using the 2016 dataset at 13 TeV, employing more sophisticated analysis methods. The new search investigates final states containing an electron or a muon, one or two tau leptons that decay to hadrons and additional jets. To achieve sensitivity to the largest possible range of LQ masses, the analysis uses several event categories in which deviations from the SM predictions are searched for. The SM backgrounds mainly consist of top-quark pair production and W+ jets events, whose contributions are derived from the data rather than from simulation.

No significant indication of the existence of third-generation LQs has yet been found in any of the categories studied (see left-hand figure). The collaboration was therefore able to place exclusion limits on the product of the production cross section and branching fraction as small as 0.01 pb, which translate into lower limits on LQ masses extending above 1 TeV.

Combining this with the result of a search for pair production of supersymmetric bottom squarks, which can be reinterpreted as a search for LQs decaying to a bottom quark and a tau neutrino, yields limits that probe the TeV mass range over all possible LQ branching ratios (see figure, right). Another recent search targets different LQs that decay into a bottom quark and a tau lepton. Using a smaller dataset at 13 TeV, this search excludes masses below 0.85 TeV for a unity branching fraction.

This is the first time that searches at the LHC have achieved sufficient sensitivity to explore the mass range favoured by phenomenological analyses of LQs and the current flavour anomalies. No hints of these states have been found, but analyses are under way using larger datasets and including additional signatures.

The post CMS searches for third-generation leptoquarks appeared first on CERN Courier.

]]>
News Several phenomenological studies have suggested that anomalies in B decays could be explained by the existence of hypothetical new particles which couple to both leptons and quarks. https://cerncourier.com/wp-content/uploads/2018/06/CCApr18_News-CMS.jpg
CMS hunts for heavy neutral leptons https://cerncourier.com/a/cms-hunts-for-heavy-neutral-leptons/ Fri, 16 Feb 2018 12:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cms-hunts-for-heavy-neutral-leptons/ Through their mixing with the Standard Model neutrinos, sterile Majorana neutrinos could be produced at the LHC in leptonic W-boson decays.

The post CMS hunts for heavy neutral leptons appeared first on CERN Courier.

]]>

The quest for new physics inspires searches in CMS for very rare processes which, if discovered, could open the door to a new understanding of particle physics.

One such process is the production and decay of heavy sterile Majorana neutrinos, a type of heavy neutral lepton (HNL) introduced to describe the very small neutrino masses via the so-called seesaw mechanism. Two further fundamental puzzles of particle physics can be solved by adding three HNLs to the Standard Model (SM) particle spectrum: the lightest (with a mass of a few keV) can serve as a dark-matter candidate; the two heavier ones (heavier than about a GeV) could, when mass-degenerate, be responsible for a sizable amount of CP violation and thus help explain the cosmological matter–antimatter asymmetry.

Through their mixing with the SM neutrinos (see figure, left), the heavier HNLs could be produced at the LHC in leptonic W-boson decays. Subsequently, the HNL can decay to another W boson and a lepton, leading to a signal containing three isolated leptons. Depending on how weakly the new particles couple to the SM neutrinos, characterised by the parameters |VeN|², |VμN|² and |VτN|², they can either decay shortly after production, or after flying some distance in the detector.

A new search performed with data collected in 2016 by CMS focuses on prompt trilepton (electron or muon) signatures of HNL production. It explores a mass range from 1 GeV to 1.2 TeV, more than doubling the reach of LHC results so far. It also probes a mass regime that had been unexplored since the days of the Large Electron–Positron collider (LEP), indicating that the LHC will eventually supersede those results with more data.

The trilepton final state does not lead to a sharp peak in an invariant mass spectrum, so the search has to exploit various kinematic properties of the events to detect a possible presence of HNLs. To be sensitive to very low HNL masses, the search uses soft muons (with pT > 5 GeV) and electrons (pT > 10 GeV). While no signs of HNLs have been found so far (see figure, right), the constraints on |VμN|² (those on |VeN|² are similar) in the high-mass region are the strongest to date. In the low-mass region, the analysis has sensitivity comparable to that of previous searches.

Using dedicated analysis techniques, CMS plans to extend this search to explore the parameter space where HNLs have longer lifetimes and travel large distances in the detector before they decay. Together with more data, this will enable CMS to significantly improve the sensitivity at low masses and eventually probe unexplored territory in this important region of HNL parameter space.

The post CMS hunts for heavy neutral leptons appeared first on CERN Courier.

]]>
News Through their mixing with the Standard Model neutrinos, sterile Majorana neutrinos could be produced at the LHC in leptonic W-boson decays. https://cerncourier.com/wp-content/uploads/2018/06/CCnew8_02_18.jpg
Searches for dark photons at LHCb https://cerncourier.com/a/searches-for-dark-photons-at-lhcb/ Mon, 15 Jan 2018 09:15:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/searches-for-dark-photons-at-lhcb/ While the dark photon does not couple directly to Standard Model particles, quantum-mechanical mixing between the photon and dark-photon fields can generate a small interaction.

The post Searches for dark photons at LHCb appeared first on CERN Courier.

]]>

The possibility that dark-matter particles may interact via an unknown force, felt only feebly by Standard Model (SM) particles, has motivated an effort to search for so-called dark forces.

The force-carrying particle for such hypothesised interactions is referred to as a dark photon, A’, in analogy with the ordinary photon that mediates the electromagnetic interaction. While the dark photon does not couple directly to SM particles, quantum-mechanical mixing between the photon and dark-photon fields can generate a small interaction. This provides a portal through which dark photons may be produced and through which they might decay into visible final states.

The minimal A’ model has two unknown parameters: the dark photon mass, m(A’), and the strength of its quantum-mechanical mixing with the photon field. Constraints have been placed on visible A’ decays by previous beam-dump, fixed-target, collider, and rare-meson-decay experiments.

However, much of the A’ parameter space that is of greatest interest (based on quantum-field-theory arguments) is currently unexplored. Using data collected in 2016, LHCb recently performed a search for the decay A’ → μ⁺μ⁻ in a mass range from the dimuon threshold up to 70 GeV. While no evidence for a signal was found, strong limits were placed on the A’–photon mixing strength. These constraints are the most stringent to date for the mass range 10.6 < m(A’) < 70 GeV and are comparable to the best existing limits on this parameter.

Furthermore, the search was the first to achieve sensitivity to long-lived dark photons using a displaced-vertex signature, providing the first constraints in an otherwise unexplored region of A’ parameter space. These results demonstrate the unique sensitivity of the LHCb experiment to dark photons, even using a data sample collected with a trigger that is inefficient for low-mass A’ decays. Looking forward to Run 3, the number of expected A’ → μ⁺μ⁻ decays in the low-mass region should increase by a factor of 100 to 1000 compared to the 2016 data sample. LHCb is now developing searches for A’ → e⁺e⁻ decays, which are sensitive to lower-mass dark photons, both in LHC Run 2 and in particular Run 3, when the luminosity will be higher. This will further expand LHCb’s dark-photon programme.

The post Searches for dark photons at LHCb appeared first on CERN Courier.

]]>
News While the dark photon does not couple directly to Standard Model particles, quantum-mechanical mixing between the photon and dark-photon fields can generate a small interaction. https://cerncourier.com/wp-content/uploads/2018/06/CCnew8_01_18.jpg
ATLAS extends searches for natural supersymmetry https://cerncourier.com/a/atlas-extends-searches-for-natural-supersymmetry/ Mon, 15 Jan 2018 09:15:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/atlas-extends-searches-for-natural-supersymmetry/ Not only can SUSY accommodate dark matter and gauge–force unification at high energy, it offers a natural explanation for why the Higgs boson is so light compared to the Planck scale.

The post ATLAS extends searches for natural supersymmetry appeared first on CERN Courier.

]]>

Despite many negative searches during the last decade and more, supersymmetry (SUSY) remains a popular extension of the Standard Model (SM). Not only can SUSY accommodate dark matter and gauge–force unification at high energy, it offers a natural explanation for why the Higgs boson is so light compared to the Planck scale. In the SM, the Higgs boson mass can be decomposed into a “bare” mass and a modification due to quantum corrections. Without SUSY, but in the presence of a high-energy new physics scale, these two numbers are extremely large and thus must almost exactly oppose one another – a peculiar coincidence called the hierarchy problem. SUSY introduces a set of new particles that each balances the mass correction of its SM partner, providing a “natural” explanation for the Higgs boson mass.

Thanks to searches at the LHC and previous colliders, we know that SUSY particles must be heavier than their SM counterparts. But if this difference in mass becomes too large, particularly for the particles that produce the largest corrections to the Higgs boson mass, SUSY would not provide a natural solution of the hierarchy problem.

New SUSY searches from ATLAS using data recorded at an energy of 13 TeV in 2015 and 2016 (some of which were shown for the first time at SUSY 2017 in Mumbai from 11–15 December) have extended existing bounds on the masses of the top squark and higgsinos, the SUSY partners of the top quark and Higgs bosons, respectively, that are critical for natural SUSY. For SUSY to remain natural, the mass of the top squark should be below around 1 TeV and that of the higgsinos below a few hundred GeV.

ATLAS has now completed a set of searches for the top squark that push the mass limits up to 1 TeV. With no sign of SUSY yet, these searches have begun to focus on scenarios that are more difficult to detect, in which SUSY could hide amongst the SM background. Sophisticated techniques, including machine learning, are employed to ensure no signal is missed.

First ATLAS results have also been released for higgsino searches. If the lightest SUSY particles are higgsino-like, their masses will often be close together, and such “compressed” scenarios lead to the production of low-momentum particles. One new search at ATLAS targets scenarios with leptons reconstructed at the lowest momenta still detectable. If the SUSY mass spectrum is extremely compressed, the lightest charged SUSY particle will have an extended lifetime, decay invisibly, and leave an unusual detector signature known as a “disappearing track”.

Such a scenario is targeted by another new ATLAS analysis. These searches extend for the first time the limits on the lightest higgsino set by the Large Electron Positron (LEP) collider 15 years ago. The search for higgsinos remains among the most challenging and important for natural SUSY. With more data and new ideas, it may well be possible to discover, or exclude, natural SUSY in the coming years.

The post ATLAS extends searches for natural supersymmetry appeared first on CERN Courier.

]]>
News Not only can SUSY accommodate dark matter and gauge–force unification at high energy, it offers a natural explanation for why the Higgs boson is so light compared to the Planck scale. https://cerncourier.com/wp-content/uploads/2018/06/CCnew9_01_18.jpg
Majorana neutrinos remain elusive https://cerncourier.com/a/majorana-neutrinos-remain-elusive/ Fri, 10 Nov 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/majorana-neutrinos-remain-elusive/ Neutrinoless double beta-decay is only possible if neutrinos and antineutrinos are identical or “Majorana” particles.

The post Majorana neutrinos remain elusive appeared first on CERN Courier.

]]>

Researchers at the Cryogenic Underground Observatory for Rare Events (CUORE), located at Gran Sasso National Laboratories (LNGS) in Italy, have reported the latest results in their search for neutrinoless double beta-decay, based on CUORE’s first full data set. This exceedingly rare process, predicted to occur less than about once every 10²⁶ years in a given nucleus, if it occurs at all, involves two neutrons in an atomic nucleus simultaneously decaying into two protons with the emission of two electrons and no neutrinos. This is only possible if neutrinos and antineutrinos are identical or “Majorana” particles, as posited by Ettore Majorana 80 years ago, such that the two neutrinos from the decay cancel each other out.

The discovery of neutrinoless double beta-decay (NDBD) would demonstrate that lepton number is not a symmetry of nature, perhaps playing a role in the observed matter–antimatter asymmetry in the universe, and constitute firm evidence for physics beyond the Standard Model. Following the discovery two decades ago that neutrinos have mass (a necessary condition for them to be Majorana particles), several experiments worldwide are competing to spot this exotic decay using a variety of techniques and different NDBD candidate nuclei.

CUORE is a tonne-scale cryogenic bolometer comprising 19 copper-framed towers that each house a matrix of 52 cube-shaped crystals of highly purified natural tellurium (containing more than 34% tellurium-130). The detector array, which has been cooled below a temperature of 10 mK and is shielded from cosmic rays by 1.4 km of rock and thick lead sheets, was designed and assembled over a 10 year period. Following initial results in 2015 from a CUORE prototype containing just one tower, the full detector with 19 towers was cooled down in the CUORE cryostat one year ago and the collaboration has now released its first publication, submitted to Physical Review Letters, with much higher statistics. The large volume of detector crystals greatly increases the likelihood of recording a NDBD event during the lifetime of the experiment.

Based on around seven weeks of data-taking, alternated with an intense programme of detector commissioning from May to September 2017, and corresponding to a total tellurium exposure of 86.3 kg yr, CUORE finds no sign of NDBD, placing a lower limit on the half-life of NDBD in tellurium-130 of 1.5 × 10²⁵ years (90% C.L.). This is the most stringent limit to date on this decay, says the team, and suggests that the effective Majorana neutrino mass is less than 140−400 meV, where the large range results from the nuclear matrix-element estimates employed. “This is the first preview of what an instrument this size is able to do,” says CUORE spokesperson Oliviero Cremonesi of INFN. “Already, the full detector array’s sensitivity has exceeded the precision of the measurements reported in April 2015 after a successful two-year test run that enlisted one detector tower.”
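
For a sense of scale, a half-life limit maps onto an expected number of decays for a given exposure via N ≈ N_nuclei × ln(2) × t / T½. The back-of-envelope sketch below (not the collaboration’s likelihood analysis) ignores detection efficiency and backgrounds, so it indicates only the order of magnitude:

    # Rough scaling only: expected decays ~ N_nuclei * ln(2) * t / T_half.
    # Efficiency and backgrounds are ignored, and whether the 86.3 kg yr
    # refers to tellurium or TeO2 mass changes the answer at the O(1) level.
    import math

    N_AVOGADRO = 6.022e23
    exposure_kg_yr = 86.3     # from the article
    te130_fraction = 0.34     # ~34% Te-130 in natural tellurium, as quoted
    molar_mass_kg = 0.130     # tellurium-130
    t_half_yr = 1.5e25        # the reported lower limit (90% C.L.)

    n_nuclei_yr = exposure_kg_yr * te130_fraction / molar_mass_kg * N_AVOGADRO
    print(math.log(2) * n_nuclei_yr / t_half_yr)  # ~6 decays at the limit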

Over the next five years CUORE will collect around 100 times more data. Combined with search results in other isotopes, the possible hiding places of Majorana neutrinos will shrink much further.

The post Majorana neutrinos remain elusive appeared first on CERN Courier.

]]>
News Neutrinoless double beta-decay is only possible if neutrinos and antineutrinos are identical or “Majorana” particles. https://cerncourier.com/wp-content/uploads/2018/06/CCnew2_10_17.jpg
ATLAS hunts for new physics with dibosons https://cerncourier.com/a/atlas-hunts-for-new-physics-with-dibosons/ Fri, 22 Sep 2017 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/atlas-hunts-for-new-physics-with-dibosons/ Many Standard Model extensions predict new resonances that can decay into a pair of bosons, for example: VV, Vh, Vγ and γγ.

The post ATLAS hunts for new physics with dibosons appeared first on CERN Courier.

]]>

Beyond the Standard Model of particle physics (SM), crucial open questions remain such as the nature of dark matter, the overabundance of matter compared to antimatter in the universe, and also the mass scale of the scalar sector (what makes the Higgs boson so light?). Theorists have extended the SM with new symmetries or forces that address these questions, and many such extensions predict new resonances that can decay into a pair of bosons (diboson), for example: VV, Vh, Vγ and γγ, where V stands for a weak boson (W and Z), h for the Higgs boson, and γ is a photon.

The ATLAS collaboration has a broad search programme for diboson resonances, and the most recent results using 36 fb⁻¹ of proton–proton collision data taken at the LHC at a centre-of-mass energy of 13 TeV in 2015 and 2016 have now been released. Six different final states, characterised by different boson decay modes, were considered in searches for a VV resonance: 4ℓ, ℓℓνν, ℓℓqq, ℓνqq, ννqq and qqqq, where ℓ, ν and q stand for charged leptons (electrons and muons), neutrinos and quarks, respectively. For the Vh resonance search, the dominant Higgs boson decay into a pair of b-quarks (branching fraction of 58%) was exploited together with four different V decays, leading to ℓℓbb, ℓνbb, ννbb and qqbb final states. A Zγ resonance was sought in final states with two leptons and a photon.

A new resonance would appear as an excess (bump) over the smoothly distributed SM background in the invariant mass distribution reconstructed from the final-state particles. The left figure shows the observed WZ mass distribution in the qqqq channel together with simulations of some example signals. An important key to probing very high-mass signals is identifying high-momentum, hadronically decaying V and h bosons. ATLAS developed a new technique to reconstruct the invariant mass of such bosons by combining information from the calorimeters and the central tracking detectors. The resulting improved mass resolution for reconstructed V and h bosons increased the sensitivity to very heavy signals.

No evidence for a new resonance was observed in these searches, allowing ATLAS to set stringent exclusion limits. For example, a graviton signal predicted in a model with extra spatial dimensions was excluded up to masses of 4 TeV, while heavy weak-boson-like resonances (as predicted in composite Higgs boson models) decaying to WZ bosons are excluded for masses up to 3.3 TeV. Heavier Higgs partners can be excluded up to masses of about 350 GeV, assuming specific model parameters.

The post ATLAS hunts for new physics with dibosons appeared first on CERN Courier.

]]>
News Many Standard Model extensions predict new resonances that can decay into a pair of bosons, for example: VV, Vh, Vγ and γγ. https://cerncourier.com/wp-content/uploads/2018/06/CCnew12_08_17.jpg
CAST experiment constrains solar axions https://cerncourier.com/a/cast-experiment-constrains-solar-axions/ Fri, 19 May 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cast-experiment-constrains-solar-axions/ The CERN Axion Solar Telescope has reported important new exclusion limits on coupling of axions to photons.

The post CAST experiment constrains solar axions appeared first on CERN Courier.

]]>

In a paper published in Nature Physics, the CERN Axion Solar Telescope (CAST) has reported important new exclusion limits on coupling of axions to photons. Axions are hypothetical particles that interact very weakly with ordinary matter and therefore are candidates to explain dark matter. They were postulated decades ago to solve the “strong CP” problem in the Standard Model (SM), which concerns an unexpected time-reversal symmetry of the nuclear forces. Axion-like particles, unrelated to the strong-CP problem but still viable dark-matter candidates, are also predicted by several theories of physics beyond the SM, notably string theory.

A variety of Earth- and space-based observatories are searching possible locations where axions could be produced, ranging from the inner Earth to the galactic centre and right back to the Big Bang. CAST looks for solar axions using a “helioscope” constructed from a test magnet originally built for the Large Hadron Collider. The 10 m-long superconducting magnet acts like a viewing tube and is pointed directly at the Sun: solar axions entering the tube would be converted by its strong magnetic field into X-ray photons, which would be detected at either end of the magnet. Starting in 2003, the CAST helioscope, mounted on a movable platform and aligned with the Sun with a precision of about 1/100th of a degree, has tracked the movement of the Sun for an hour and a half at dawn and an hour and a half at dusk, over several months each year.

In the latest work, based on data recorded between 2012 and 2015, CAST finds no evidence for solar axions. This has allowed the collaboration to set the best limits to date on the strength of the coupling between axions and photons for all possible axion masses to which CAST is sensitive. The limits concern a part of the axion parameter space that is still favoured by current theoretical predictions and is very difficult to explore experimentally, allowing CAST to encroach on more restrictive constraints set by astrophysical observations. “Even though we have not been able to observe the ubiquitous axion yet, CAST has surpassed even the sensitivity originally expected, thanks to CERN’s support and unrelenting work by CASTers,” says CAST spokesperson Konstantin Zioutas. “CAST’s results are still a point of reference in our field.”

The experience gained by CAST over the past 15 years will help physicists to define the detection technologies suitable for a proposed, much larger, next-generation axion helioscope called IAXO. Since 2015, CAST has also broadened its research at the low-energy frontier to include searches for dark-matter axions and candidates for dark energy, such as solar chameleons.

The post CAST experiment constrains solar axions appeared first on CERN Courier.

]]>
News The CERN Axion Solar Telescope has reported important new exclusion limits on coupling of axions to photons. https://cerncourier.com/wp-content/uploads/2018/06/CCnew3_05_17.jpg
ATLAS explores the energy frontier https://cerncourier.com/a/atlas-explores-the-energy-frontier/ Fri, 19 May 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/atlas-explores-the-energy-frontier/ Utilising pairs of jets (dijets), a recent ATLAS search was able to probe the highest invariant mass of any of its searches, measuring events with energies as high as 8.1 TeV.

The post ATLAS explores the energy frontier appeared first on CERN Courier.

]]>

The start of LHC Run 2 in 2015 saw the centre-of-mass energy of proton–proton collisions increase from 8 to 13 TeV, dramatically increasing the possibility to create heavy particles predicted by many models of new physics. The ATLAS collaboration has recently released the first search results from its analysis of the full 2015 and 2016 data sets, providing the largest combined LHC data set analysed so far.

New heavy particles are likely to decay immediately inside the detector into known objects such as pairs of jets, leptons or bosons. These decay products will typically have large transverse momentum, due to the high mass of the parent particle, and this raises challenges both for the detector and the algorithms used to identify the decay products.

Utilising pairs of jets (dijets), a recent ATLAS search was able to probe the highest invariant mass of any of its searches, measuring events with energies as high as 8.1 TeV and thereby pushing up the experiment’s sensitivity to hypothetical new resonances. Additionally, ATLAS has released the results of searches in events containing pairs of muons or electrons, or single muons/electrons plus a neutrino, which extend the sensitivity to new resonance masses up to 4.5 and 5.1 TeV, respectively. Heavy particles with an affinity for coupling to the Higgs boson were also examined, up to a mass of 3.7 TeV.

ATLAS has also searched for vector-like top-quark partners, which are strongly interacting particles invoked by models with new high-scale symmetries and which may be produced at the LHC. The final states sought in these analyses are a single high-transverse-momentum electron or muon, plus either several jets and a large amount of missing transverse momentum, or a large-radius jet consistent with a W or Z boson together with some missing transverse momentum and one b-tagged jet. The presence of vector-like top quarks is excluded for particle masses of up to 1.35 TeV, depending on the physics model chosen.

Finally, ATLAS has performed direct searches for dark matter by looking for single energetic photons plus missing transverse momentum and for a Higgs boson plus missing transverse momentum. These are potential signatures of the production and decay of a pair of weakly interacting massive particles (with the photon arising from initial-state radiation and the Higgs boson being produced in the decay of a Z’ dark-matter mediator).

The data are found to be consistent with Standard Model predictions for all of the searches conducted thus far. The second phase of Run 2 is about to begin and is scheduled to continue until the end of 2018, roughly tripling the integrated luminosity collected so far. This huge amount of data yet to be recorded will further extend the reach of these searches for new physics.

The post ATLAS explores the energy frontier appeared first on CERN Courier.

]]>
News Utilising pairs of jets (dijets), a recent ATLAS search was able to probe the highest invariant mass of any of its searches, measuring events with energies as high as 8.1 TeV. https://cerncourier.com/wp-content/uploads/2018/06/CCnew10_05_17.jpg
SUSY searches in the electroweak sector https://cerncourier.com/a/susy-searches-in-the-electroweak-sector/ Fri, 19 May 2017 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/susy-searches-in-the-electroweak-sector/ CMS has recently reported searches for electroweak production of neutralinos and charginos in different final states.

The post SUSY searches in the electroweak sector appeared first on CERN Courier.

]]>

The sensitivity of searches for supersymmetry (SUSY) has been boosted by the increased centre-of-mass energy of LHC Run 2. Analyses of the first Run 2 data recorded in 2015 and early 2016 focused on the production of strongly interacting SUSY particles – the partners of Standard Model (SM) gluons (“gluinos”) and quarks (“squarks”).

With the large data set accumulated during the rest of 2016, attention now turns to a more challenging but equally important part of the SUSY particle spectrum: the supersymmetric partners of SM electroweak gauge (“winos”, “binos”) and Higgs (“higgsinos”) bosons. The spectrum of the minimal supersymmetric extension of the SM contains six of these particles: two charged (“charginos”) and four neutral (“neutralinos”) ones. The cross-sections for the direct production of pairs of these particles are typically three to five orders of magnitude lower than that for gluino pair production, but such events might be the only indication of supersymmetry at the LHC if the partners of gluons, quarks and leptons are heavy.

CMS has recently reported searches for electroweak production of neutralinos and charginos in different final states. Decays of these particles to the lightest SUSY particle (LSP) – which are candidates for dark matter – are expected to produce Z, W and H bosons, or photons. If the SUSY partners of leptons (sleptons) are sufficiently light they can also be part of the decay chain. In all of these cases, since final states with two or more leptons constitute a large fraction of the signal events, CMS has searched for supersymmetry in final states with multiple leptons. These searches are complemented by analyses targeting hadronic decays of Higgs bosons in these events.

None of the searches performed by CMS show any significant deviation of the observed event counts from the estimated yields for SM processes. In benchmark models with reduced SUSY particle content, the strongest constraints on the electroweak production of pairs of the lightest chargino and the second-lightest neutralino are obtained by assuming their decay chains involve sleptons, with mass limits reaching up to 1.15 TeV, depending on the slepton’s mass and flavour. For direct decays of the chargino (neutralino) to a W (Z) boson and the lightest neutralino, the excluded regions reach up to 0.61 TeV.

A particularly interesting case, favoured by “natural” supersymmetry, are models with small mass differences between the lightest chargino and neutralino states. In these models, the transverse momenta of the leptons can be significantly lower than the typical thresholds of 10–20 GeV used in most analyses. CMS has designed a specific search to enhance the sensitivity to final states with two low-momentum leptons of opposite charge that includes a dedicated online selection for muons with transverse momenta as low as 3 GeV. The search reaches an unprecedented sensitivity: for a mass difference of 20 GeV, the exclusion reaches a mass of 230 GeV.

Based on data recorded in 2016, CMS has covered models of electroweak production of “wino”-like charginos and neutralinos with searches in different final states. More results are expected soon, and the sensitivity of the searches will profit greatly from the extension of the data set in the remaining two years of LHC Run 2.

The post SUSY searches in the electroweak sector appeared first on CERN Courier.

]]>
News CMS has recently reported searches for electroweak production of neutralinos and charginos in different final states. https://cerncourier.com/wp-content/uploads/2018/06/CCnew13_05_17.jpg
Search for sterile neutrinos triples up https://cerncourier.com/a/search-for-sterile-neutrinos-triples-up/ Fri, 19 May 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/search-for-sterile-neutrinos-triples-up/ Fermilab’s short-baseline neutrino programme targets sterile neutrinos.

The post Search for sterile neutrinos triples up appeared first on CERN Courier.

]]>

This summer, two 270 m³ steel containment vessels are making their way by land, sea and river from CERN in Europe to Fermilab in the US, a journey that will take five weeks. Each vessel houses one of the 27,000-channel precision wire chambers of the ICARUS detector, which uses advanced liquid-argon technology to detect neutrinos. Having already operated successfully in the CERN to Gran Sasso neutrino beam from 2010 to 2012, and spent the past two years being refurbished at CERN, ICARUS will team up with two similar detectors at Fermilab to deliver a new physics opportunity: the ability to resolve some intriguing experimental anomalies in neutrino physics and perform the most sensitive search to date for eV-scale sterile neutrinos. This new endeavour, comprising three large liquid-argon detectors (SBND, MicroBooNE and ICARUS) sitting in a single intense neutrino beam at Fermilab, is known as the Short-Baseline Neutrino (SBN) programme.

The sterile neutrino is a hypothetical particle, originally introduced by Bruno Pontecorvo in 1967, which doesn’t experience any of the known forces of the Standard Model. Sterile-neutrino states, if they exist, are not directly observable since they don’t interact with ordinary matter, but the phenomenon of neutrino oscillations provides us with a powerful probe of physics beyond the Standard Model. Active–sterile mixing, just like standard three-neutrino mixing, could generate additional oscillations among the standard neutrino flavours but at wavelengths that are distinct from the now well-measured “solar” and “atmospheric” oscillation effects. Anomalies exist in the data of past neutrino experiments that present intriguing hints of possible new physics. We now require precise follow-up experiments to either confirm or rule out the existence of additional, sterile-neutrino states.    

On the scent of sterile states

The discovery nearly two decades ago of neutrino-flavour oscillations led to the realisation that each of the familiar flavours (νe, νμ, ντ) is actually a linear superposition of states of distinct masses (ν₁, ν₂, ν₃). The wavelength of an oscillation is determined by the difference in the squared masses of the participating mass states, mᵢ² – mⱼ². The discoveries that were awarded the 2015 Nobel Prize in Physics correspond to the atmospheric mass-splitting Δm²ATM = |m₃² – m₂²| = 2.5 × 10⁻³ eV² and the solar mass-splitting Δm²SOLAR = m₂² – m₁² = 7.5 × 10⁻⁵ eV², so-named because of how they were first observed. Any additional and mostly sterile mass states, therefore, could generate a unique oscillation driven by a new mass scale in the neutrino sector: m²(mostly sterile) – m²(mostly active).

The most significant experimental hint of new physics comes from the LSND experiment performed at the Los Alamos National Laboratory in the 1990s, which observed a 3.8σ excess of electron antineutrinos appearing in a mostly muon antineutrino beam in a region where standard mixing would predict no significant effect. Later, in the 2000s, the MiniBooNE experiment at Fermilab found excesses of both electron neutrinos and electron antineutrinos, although there is some tension with the original LSND observation. Other hints come from the apparent anomalous disappearance of electron antineutrinos over baselines less than a few hundred metres at nuclear-power reactors (the “reactor anomaly”), and the lower than expected rate in radioactive-source calibration data from the gallium-based solar-neutrino experiments GALLEX and SAGE (the “gallium anomaly”). Numerous other searches in appearance and disappearance channels have been conducted at various neutrino experiments with null results (including ICARUS when it operated in the CERN to Gran Sasso beam), and these have thus constrained the parameter space where light sterile neutrinos could still be hiding. A global analysis of the available data now limits the possible sterile–active mass-splitting, m²(mostly sterile) – m²(mostly active), to a small region around 1–2 eV².


Long-baseline accelerator-based neutrino experiments such as NOvA at Fermilab, T2K in Japan, and the future Deep Underground Neutrino Experiment (DUNE) in the US, which will involve detectors located 1300 km from the source, are tuned to observe oscillations related to the atmospheric mass-splitting, Δm²ATM ~ 10⁻³ eV². Since the mass-squared difference between the participating states and the length scale of the oscillation they generate are inversely proportional to one another, a short-baseline accelerator experiment such as SBN, with detector distances of the order of 1 km, is most sensitive to an oscillation generated by a mass-squared difference of order 1 eV² – exactly the region we want to search.
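
To make the scaling concrete, the standard two-flavour formula (textbook material, not something quoted in this article) gives the appearance probability as P = sin²(2θ) sin²(1.27 Δm² L/E), with Δm² in eV², L in km and E in GeV. A short sketch, using an arbitrary illustrative mixing amplitude:

    # Two-flavour appearance probability in the usual engineering units:
    # P = sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV]).
    # The mixing amplitude (0.01) is an arbitrary illustrative value.
    import math

    def appearance_probability(sin2_2theta, dm2_ev2, l_km, e_gev):
        return sin2_2theta * math.sin(1.27 * dm2_ev2 * l_km / e_gev) ** 2

    # BNB-like energy of 0.7 GeV and dm2 = 1 eV^2, at the SBND, MicroBooNE
    # and ICARUS baselines quoted in the article; the near detector sees
    # almost no oscillation while the far detectors sit near the maximum.
    for l_km in (0.11, 0.47, 0.60):
        print(l_km, appearance_probability(0.01, 1.0, l_km, 0.7))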

Three detectors, one beam

The SBN programme has been designed to definitively address this question of short-baseline neutrino oscillations and test the existence of light sterile neutrinos with unprecedented sensitivity. The key to SBN’s reach is the deployment of multiple high-precision neutrino detectors, all of the same technology, at different distances along a single high-intensity neutrino beam. Use of an accelerator-based neutrino source has the bonus that both electron-neutrino appearance and muon-neutrino disappearance oscillation channels can be investigated simultaneously.

The neutrino source is Fermilab’s Booster Neutrino Beam (BNB), which has been operating at high rates since 2002 and providing beam to multiple experiments. The BNB is generated by impinging 8 GeV protons from the Booster onto a beryllium target and magnetically focusing the resulting hadrons, which decay to produce a broad-energy neutrino beam peaked around 700 MeV that is made up of roughly 99.5% muon neutrinos and 0.5% electron neutrinos.

The three SBN detectors are each liquid-argon time projection chambers (LArTPCs) located along the BNB neutrino path (see images above). MicroBooNE, an 87 tonne active-mass LArTPC, is located 470 m from the neutrino production target and has been collecting data since October 2015. The Short-Baseline Near Detector (SBND), a 112 tonne active-mass LArTPC to be sited 110 m from the target, is currently under construction and will provide the high-statistics characterisation of the un-oscillated BNB neutrino fluxes that is needed to control systematic uncertainties in searches for oscillations at the downstream locations. Finally, ICARUS, with 476 tonnes of active mass and located 600 m from the BNB target, will achieve a sufficient event rate at the downstream location where a potential oscillation signal may be present. Many of the upgrades to ICARUS implemented during its time at CERN over the past few years are in response to unique challenges presented by operating a LArTPC detector near the surface, as opposed to the underground Gran Sasso laboratory where it operated previously. The SBN programme is being realised by a large international collaboration of researchers with major detector contributions from CERN, the Italian INFN, Swiss NSF, UK STFC, and US DOE and NSF. At Fermilab, new experimental halls to house the ICARUS and SBND detectors were constructed in 2016 and are now awaiting the LArTPCs. ICARUS and SBND are expected to begin operation in 2018 and 2019, respectively, with approximately three years of ICARUS data needed to reach the programme’s design sensitivity.

A rich physics programme

In a combined analysis, the three SBN detectors allow for the cancellation of common systematics and can therefore test the νμ → νe oscillation hypothesis at a level of 5σ or better over the full range of parameter space originally allowed at 99% C.L. by the LSND data. Recent measurements, especially from the NEOS, IceCube and MINOS experiments, have constrained the possible sterile-neutrino parameters significantly, and the sensitivity of the SBN programme is highest near the most favoured values of Δm². In addition to νe appearance, SBN also has the sensitivity to νμ disappearance needed to confirm an oscillation interpretation of any observed appearance signal, thus providing a more robust result on sterile-neutrino-induced oscillations (figure 1).


SBN was conceived to unravel the physics of light sterile neutrinos, but the scientific reach of the programme is broader than just the searches for short-baseline neutrino oscillations. The SBN detectors will record millions of neutrino interactions that can be used to make precise measurements of neutrino–argon interaction cross-sections and perform detailed studies of the rather complicated physics involved when neutrinos scatter off a large nucleus such as argon. The SBND detector, for example, will see on the order of 100,000 muon-neutrino interactions and 1000 electron-neutrino interactions per month. For comparison, existing muon-neutrino measurements of these interactions are based on only a few thousand total events and there are no measurements at all with electron neutrinos. The position of the ICARUS detector also allows it to see interactions from two neutrino beams running concurrently at Fermilab (the Booster and Main Injector neutrino beams), allowing for a large-statistics measurement of muon and electron neutrinos in a higher-energy regime that is important for future experiments.

In fact, the science programme of SBN has several important connections to the future long-baseline neutrino experiment at Fermilab, DUNE. DUNE will deploy multiple 10 kt LArTPCs 1.5 km underground in South Dakota, 1300 km from Fermilab. The three detectors of SBN present an R&D platform for advancing this exciting technology and are providing direct experimental activity for the global DUNE community. In addition, the challenging multi-detector oscillation analyses at SBN will be an excellent proving ground for sophisticated event reconstruction and data-analysis techniques designed to maximally exploit the excellent tracking and calorimetric capabilities of the LArTPC. From the physics point of view, discovering or excluding sterile neutrinos plays an important role in the ability of DUNE to untangle the effects of charge-parity violation in neutrino oscillations, a primary physics goal of the experiment. Also, precise studies of neutrino–argon cross-sections at SBN will help control one of the largest sources of systematic uncertainties facing long-baseline oscillation measurements.    

Closing in on a resolution

The hunt for light sterile neutrinos has continued for several decades now, and global analyses are regularly updated with new results. The original LSND data still contain the most significant signal, but the resolution on Δm2 was poor and so the range of values allowed at 99% C.L. spans more than three orders of magnitude. Today, only a small region of mass-squared values remain compatible with all of the available data, and a new generation of improved experiments, including the SBN programme, are under way or have been proposed that can rule on sterile-neutrino oscillations in exactly this region.

There is currently a lot of activity in the sterile-neutrino area. The nuPRISM and JSNS2 proposals in Japan could also test for νμ→ νe appearance, while new proposals like the KPipe experiment, also in Japan, can contribute to the search for νμ disappearance. The MINOS+ and IceCube detectors, both of which have already set strong limits on νμ disappearance, still have additional data to analyse. A suite of experiments is already currently under way (NEOS, DANSS, Neutrino-4) or in the planning stages (PROSPECT, SoLid, STEREO) to test for electron-antineutrino disappearance over short baselines at reactors, and others are being planned that will use powerful radioactive sources (CeSOX, BEST). These electron-neutrino and -antineutrino disappearance searches are highly complementary to the search modes being explored at SBN. 

The Fermilab SBN programme offers world-leading sensitivity to oscillations in two different search modes at the most relevant mass-splitting scale as indicated by previous data. We will soon have critical new information regarding the possible existence of eV-scale sterile neutrinos, resulting in either one of the most exciting discoveries across particle physics in recent years or the welcome resolution of a long-standing unresolved puzzle in neutrino physics.

LArTPCs rule the neutrino-oscillation waves
A schematic diagram of the ICARUS liquid-argon time projection chamber (LArTPC) detector, where electrons create signals on three rotated wire planes.

The LArTPC for neutrino detection was first conceived by Carlo Rubbia in 1977, followed by many years of pioneering R&D activity and the successful operation of the ICARUS detector in the CNGS beam from 2010 to 2012, which demonstrated the effectiveness of single-phase LArTPC technology for neutrino physics. A LArTPC provides both precise calorimetric sampling and 3D tracking similar to the extraordinary imaging features of a bubble chamber, and is also fully electronic and therefore potentially scalable to large, several-kilotonne masses. Charged particles propagating in the liquid argon ionise argon atoms, and the freed electrons drift under the influence of a strong, uniform electric field applied across the detector volume. The drifting ionisation electrons induce signals on, or are collected by, planes of closely spaced sense wires located on one side of the detector boundary, with the wire signals proportional to the amount of energy deposited in a small cell. The low electron drift speed, around 1.6 mm/μs, requires a continuous read-out time of 1–2 milliseconds for a detector a few metres across. This creates a challenge when operating these detectors at the surface, as the SBN detectors will be at Fermilab, so photon-detection systems will be used to collect fast scintillation light and time each event.
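
The quoted read-out time follows directly from the drift speed; a one-line check under an assumed ~2 m drift length:

    # Consistency check of the quoted LArTPC numbers: at a drift speed of
    # 1.6 mm/us, ionisation from the far side of an assumed ~2 m drift
    # volume takes over a millisecond to reach the sense wires.
    drift_speed_mm_per_us = 1.6
    drift_length_mm = 2000.0   # assumed drift distance of ~2 m
    print(drift_length_mm / drift_speed_mm_per_us)  # 1250 microseconds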

 

The post Search for sterile neutrinos triples up appeared first on CERN Courier.

]]>
Feature Fermilab’s short-baseline neutrino programme targets sterile neutrinos. https://cerncourier.com/wp-content/uploads/2018/06/CCsbn1_05_17.jpg
ATLAS pushes SUSY beyond 2 TeV https://cerncourier.com/a/atlas-pushes-susy-beyond-2-tev/ Thu, 13 Apr 2017 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/atlas-pushes-susy-beyond-2-tev/ The strongly produced partners of the gluon and quarks, the gluino and squarks, would decay to final states containing energetic jets, possibly leptons, and two LSPs.

The post ATLAS pushes SUSY beyond 2 TeV appeared first on CERN Courier.

]]>

The ATLAS experiment has released several new results in its search for supersymmetry (SUSY) using the full 13 TeV LHC data set from 2015 and 2016, obtaining sensitivity for certain new particles with masses exceeding 2 TeV.

SUSY is one of the most studied extensions of the Standard Model (SM) and, if realised in nature, it would introduce partners for all the SM particles. Under the assumption of R-parity conservation, SUSY particles would be pair-produced and the lightest SUSY particle (LSP) would be stable. The strongly produced partners of the gluon and quarks, the gluino and squarks, would decay to final states containing energetic jets, possibly leptons, and two LSPs. If the LSP is only weakly interacting, which would make it a dark-matter candidate, it would escape the detector unseen, resulting in a signature with missing transverse momentum.

A recent ATLAS analysis [1] searched for this signature, while a second [2] targeted models where each gluino decays via the partner of the top quark (the “stop”), producing events with many jets originating from b quarks (b-jets). Both analyses find consistency with SM expectations, excluding gluinos and first- and second-generation squarks at 95% confidence level up to masses of 2 TeV (see figure). Pair-produced stops could decay to final states containing up to six jets, including two b-jets, or through the emission of a Higgs or Z boson. Two dedicated ATLAS searches [3, 4] find no evidence for these processes, excluding stop masses up to 950 GeV.

SUSY might alternatively manifest itself in more complicated ways. R-parity-violating (RPV) SUSY features an LSP that can decay and hence evade searches based on missing transverse momentum. Moreover, SUSY particles could be long-lived or metastable, leading to unconventional detector signatures. Two dedicated searches [5, 6] for the production of gluino pairs and stop pairs decaying via RPV couplings have recently been performed by ATLAS, both looking for final states with multiple jets but little missing transverse momentum. In the absence of deviations from background predictions, strong exclusion limits are extracted that complement those of R-parity-conserving scenarios.

The production of metastable SUSY particles could give rise to decay vertices that are measurably displaced from the proton–proton collision point. An ATLAS search [7] based on a dedicated tracking and vertexing algorithm has now ruled out large regions of the parameter space of such models. A second search [8] exploited the new innermost layer of the ATLAS pixel tracking detector to identify short track segments produced by particles decaying close to the LHC beam pipe, yielding sensitivity to non-prompt decays of SUSY charginos with lifetimes of the order of a nanosecond. The result constrains an important class of SUSY models in which the dark-matter candidate is the partner of the W boson.

The ATLAS SUSY search programme with the new data set is in full swing, with many more signatures being investigated to close in on models of electroweak-scale supersymmetry.

The post ATLAS pushes SUSY beyond 2 TeV appeared first on CERN Courier.

]]>
News The strongly produced partners of the gluon and quarks, the gluino and squarks, would decay to final states containing energetic jets, possibly leptons, and two LSPs. https://cerncourier.com/wp-content/uploads/2018/06/CCnew5_04_17.jpg
Physics at its limits https://cerncourier.com/a/physics-at-its-limits/ Thu, 13 Apr 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/physics-at-its-limits/ A 100 km-circumference collider would address many of the outstanding questions in modern particle physics.

Since Democritus, humans have wondered what happens as we slice matter into smaller and smaller parts. After the discovery almost 50 years ago that protons are made of quarks, further attempts to explore smaller distances have not revealed tinier substructures. Instead, we have discovered new, heavier elementary particles, which, although not necessarily present in everyday matter, are crucial components of nature’s fundamental make-up. The arrangement of the elementary particles and the interactions between them are now well described by the Standard Model (SM), but furthering our understanding of the basic laws of nature requires digging even deeper.

Quantum physics gives us two alternatives to probe nature at smaller scales: high-energy particle collisions, which induce short-range interactions or produce heavy particles, and high-precision measurements, which can be sensitive to the ephemeral influence of heavy particles enabled by the uncertainty principle. The SM was built from these two approaches, with a variety of experiments worldwide during the past 40 years pushing both the energy and the precision frontiers. The discovery of the Higgs boson at the LHC is a perfect example: precise measurements of Z-boson decays at previous lepton machines such as CERN’s Large Electron–Positron (LEP) collider pointed indirectly but unequivocally to the existence of the Higgs. But it was the LHC’s proton–proton collisions that provided the high energy necessary to produce it directly. With exploration of the Higgs fully under way at the LHC and the machine set to operate for the next 20 years, the time is ripe to consider what tool should come next to continue our journey.

Aiming at a high-energy collider with a clean collision environment, CERN has for several years been developing an e⁺e⁻ linear collider called CLIC. With an energy up to 3 TeV, CLIC would combine the precision of an e⁺e⁻ collider with the high-energy reach of a hadron collider such as the LHC. But with the lack so far of any new particles at the LHC beyond the Higgs, evidence is mounting that even higher energies may be required to fully explore the next layer of phenomena beyond the SM. Prompted by the outcome of the 2013 European Strategy for Particle Physics, CERN has therefore undertaken a five-year study for a Future Circular Collider (FCC) facility built in a new 100 km-circumference tunnel (see image below).

Such a tunnel could host an e⁺e⁻ collider (called FCC-ee) with an energy and intensity much higher than LEP’s, improving by orders of magnitude the precision of Higgs and other SM measurements. It could also house a 100 TeV proton–proton collider (FCC-hh) with a discovery potential more than five times greater than that of the 27 km-circumference LHC. An electron–proton collider (FCC-eh), furthermore, would allow the proton’s substructure to be measured with unmatchable precision. Further opportunities include the collision of heavy ions in FCC-hh and FCC-eh, and fixed-target experiments using the injector complex. The earliest that such a machine could enter operation is likely to be the mid-2030s, when the LHC comes to the end of its operational lifetime, but the long lead times for collider projects demand that we start preparing now (see timeline below). A Conceptual Design Report (CDR) for a 100 km collider is expected to be completed by the end of 2018, and hundreds of institutions have joined the international FCC study since its launch in 2014. An independent study for a similar facility is also under way in China.

The CDR will document the accelerator, infrastructures and experiments, as well as a plethora of physics studies proving FCC’s ability to match the long-term needs of global high-energy-physics programmes. The first FCC physics workshop took place at CERN in January to review the status of these studies and discuss the complementarity between the three FCC modes.

The post-LHC landscape

To chart the physics landscape of future colliders, we must first imagine what questions may or may not remain at the end of the LHC programme in the mid-2030s. At the centre of this, and perhaps the biggest guaranteed physics goal of the FCC programme, is our understanding of the Higgs boson. While there is no doubt that the Higgs was the last undiscovered piece of the SM, its discovery is not the closing chapter of the millennia-old reductionist paradigm. The Higgs is the first of its kind – an elementary scalar particle – and it therefore raises deep theoretical questions that beckon a new era of exploration (figure 1).

Consider its mass. In the SM there is no symmetry that protects the Higgs mass from large quantum corrections that drag it up to the mass scale of the particles it interacts with. You might conclude that the relatively low mass of the Higgs implies that it simply does not interact with other heavy particles. But there is good, if largely theoretical, evidence to the contrary. At energies 16 orders of magnitude above the Higgs mass, where general relativity fails to provide a consistent quantum description of matter, there must exist a full quantum theory that includes gravity. The fact that the Higgs is so much lighter than this scale is known as the hierarchy problem, and many candidate theories (such as supersymmetry) exist that require new heavy particles interacting with the Higgs. By comparing precise measurements of the Higgs boson with precision SM predictions, we are indirectly searching for evidence of these theories. The SM provides an uncompromising script for the Higgs interactions, and any deviation from it would demand its extension.
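
To make the problem concrete: the largest SM correction to the Higgs mass comes from the top-quark loop. In a standard one-loop estimate (schematic, with Λ the cut-off scale at which new physics enters and yt the top Yukawa coupling),

    \delta m_H^2 \simeq -\frac{3 y_t^2}{8\pi^2}\, \Lambda^2 ,

so if Λ is anywhere near the Planck scale the correction exceeds the observed value of mH² by more than 30 orders of magnitude, unless it is cancelled to extraordinary precision – precisely the cancellation that supersymmetric partners would enforce automatically.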

Even setting to one side grandiose theoretical ideas such as quantum gravity, there are other physical reasons why the Higgs may provide a window to undiscovered sectors. As it carries no spin and is electrically neutral, the Higgs may have so-called “relevant” interactions with new neutral scalar particles. These interactions, even if they take place only at very high energies, remain relevant at low energies – contrary to interactions between new neutral scalars and the other SM particles. The possibility of new hidden sectors already has strong experimental support: although we understand the SM very well, it does not account for roughly 80% of all the matter in the universe. We call the missing mass dark matter, and candidate theories abound. Given the importance of the puzzle, searches for dark-matter particles will continue to play a central role at the LHC and certainly at future colliders.
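
The field-theory statement behind “relevant” can be made compact. Because |H|² is the only gauge-invariant operator in the SM with mass dimension two, a new neutral scalar S can couple to it through operators of dimension three and four – a generic “Higgs portal”, sketched here with illustrative couplings μ and λp:

    \mathcal{L} \supset -\mu\, S\, |H|^2 - \lambda_p\, S^2\, |H|^2 .

Both terms are renormalisable (the first is relevant in the renormalisation-group sense), which is why even a feebly coupled hidden sector can leave an imprint on Higgs properties, for instance through invisible Higgs decays.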

Furthermore, the SM cannot explain the origin of the matter–antimatter asymmetry that created enough matter for us to exist, otherwise known as baryogenesis. Since the asymmetry was created in the early universe, when temperatures and energies were high, we must explore higher energies to uncover the new particles responsible for it. With the LHC we are only at the beginning of this search. Another outstanding question lies in the origin of the neutrino masses, which the SM alone cannot account for. As with dark matter, there are numerous theories for neutrino masses, such as those involving “sterile” neutrinos, which are within the reach of lepton and hadron colliders. These and other outstanding questions might also imply the existence of further spatial dimensions, or of larger symmetries that unify leptons and quarks or the known forces. The LHC’s findings notwithstanding, future colliders like the FCC are needed to explore these fundamental mysteries more deeply, possibly revealing the need for a paradigm shift.

Electron–positron potential

The capabilities of circular e⁺e⁻ colliders are well illustrated by LEP, which occupied what is now the LHC tunnel from 1989 to 2000. Its point-like collisions between electrons and positrons and precisely known beam energy allowed the four LEP experiments to test the SM to new levels of precision. Putting such a machine in a 100 km tunnel and taking advantage of advances in accelerator technology, such as superconducting radio-frequency cavities, would offer even greater levels of precision on a larger number of processes. The collision energy could be varied in the range 91–350 GeV, allowing data to be collected at the Z pole, at the WW production threshold, at the peak of ZH production, and at the top–antitop quark threshold. Controlling the beam energy at the 100 keV level would allow exquisite measurements of the Z- and W-boson masses, while the high luminosity of FCC-ee would lead to samples of up to 10¹³ Z and 10⁸ W bosons, not to mention several million Higgs bosons and top-quark pairs. The experimental precision would surpass any previous experiment and challenge cutting-edge theory calculations.

FCC-ee would quite literally provide a quantum leap in our understanding of the Higgs. Like the W and Z gauge bosons, the Higgs receives quantum electroweak corrections, typically a few per cent in magnitude, due to fluctuations of massive particles such as the top quark. This aspect of the gauge bosons was successfully explored at LEP, but now it is the turn of the Higgs – the keystone in the electroweak sector of the SM. The millions of Higgs bosons produced by FCC-ee, with its clinically precise environment, would push the accuracy of the measurements to the per-mille level, accessing the quantum underpinnings of the Higgs and probing deep into this hitherto unexplored frontier. In the process e⁺e⁻ → HZ, the mass recoiling against the Z has a sharp peak that allows a unique and absolute determination of the Higgs decay width and production cross-section. This will provide an absolute normalisation for all Higgs measurements performed at the FCC, enabling exotic Higgs decays to be measured in a model-independent manner.
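
The recoil technique relies only on energy–momentum conservation and the precisely known beam energies. For e⁺e⁻ → HZ at centre-of-mass energy √s, standard kinematics gives

    m_{\mathrm{recoil}}^2 = \left(\sqrt{s} - E_Z\right)^2 - \left|\vec{p}_Z\right|^2 = s - 2\sqrt{s}\, E_Z + m_Z^2 ,

so Higgs events peak in m_recoil at the Higgs mass whatever the Higgs decays into – the key to a decay-mode-independent measurement of the HZ production rate.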

The high statistics promised by the FCC-ee programme go far beyond precision Higgs measurements. Other signals of new physics could arise from the observation of flavour-changing neutral currents or lepton-flavour-violating decays, from precise measurements of the Z and H invisible decay widths, or from the direct observation of particles with extremely weak couplings, such as right-handed neutrinos and other exotic particles. Given the particular energy and luminosity of a 100 km e⁺e⁻ machine, the precision of the FCC-ee programme on electroweak measurements would allow new-physics effects to be probed at scales as high as 100 TeV. If installed before FCC-hh, it would therefore anticipate what the hadron machine must focus on.

The energy frontier

The future proton–proton collider FCC-hh would operate at seven times the LHC energy and collect about 10 times more data. The discovery reach for high-mass particles – such as Z´ or W´ gauge bosons corresponding to new fundamental forces, or gluinos and squarks in supersymmetric theories – will increase by a factor of five or more, depending on the luminosity. The production rate of particles already within the LHC reach, such as top quarks or Higgs bosons, will increase by even larger factors. During its planned 25 years of data-taking, more than 10¹⁰ Higgs bosons will be created by FCC-hh – 10,000 times more than the LHC has collected so far, and 100 times more than will be available by the end of LHC operations. These additional statistics will enable the FCC-hh experiments to improve the separation of Higgs signals from the huge backgrounds that afflict most LHC studies, overcoming some of the dominant systematic uncertainties that limit the precision attainable at the LHC.

While the ultimate precision on most Higgs properties can only be achieved with FCC-ee, several measurements demand complementary information from FCC-hh. For example, the direct measurement of the coupling between the Higgs and the top quark requires that they be produced together, demanding an energy beyond the reach of the FCC-ee. At 100 TeV, almost 10⁹ of the 10¹² produced top quarks will radiate a Higgs boson, allowing the top–Higgs interaction to be measured with a statistical precision at the 1% level – a factor of 10 improvement over what is hoped for from the LHC. Similar precision can be reached for Higgs decays that are too rare to be studied in detail at FCC-ee, such as those to muon pairs or to a Z and a photon. All of these measurements will be complementary to those obtained with FCC-ee, and will use them as reference inputs to precisely correlate the strength of the signals obtained through various production and decay modes.

One respect in which a 100 TeV proton–proton collider would come to the fore is in revealing how the Higgs behaves in private. The Higgs is the only particle in the SM that interacts with itself. As the Higgs scalar potential defines the potential energy contained in a fluctuation of the Higgs field, these self-interactions are neatly defined as the derivatives of the scalar electroweak potential. With the Higgs boson being an excitation about the minimum of this potential, we know that its first derivative is zero. The second derivative of the potential is simply the Higgs mass, which is already known to sub-per-cent accuracy. But the third and fourth derivatives are unknown, and unless we gain access to Higgs self-interactions they could remain so. The rate of Higgs pair-production events, which in part occur through Higgs self-interactions, would grow steeply at FCC-hh and enable this unique property of the Higgs to be measured with an accuracy of five per cent. Among many other uses, such a measurement would comprehensively explore classes of baryogenesis models that rely on modifying the Higgs potential, and thus help us to understand the origin of matter.
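
In the SM, those derivatives are fixed by quantities that are already measured. Expanding the potential for the physical Higgs field h about its minimum (standard SM expressions, with v ≈ 246 GeV the vacuum expectation value),

    V(h) = \tfrac{1}{2} m_H^2\, h^2 + \lambda_3\, v\, h^3 + \tfrac{1}{4} \lambda_4\, h^4 , \qquad \lambda_3^{\mathrm{SM}} = \lambda_4^{\mathrm{SM}} = \frac{m_H^2}{2v^2} \approx 0.13 ,

so a measurement of the Higgs-pair rate tests λ3, and any deviation from 0.13 would signal a modified potential of exactly the kind many baryogenesis models require.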

FCC-hh would also allow an exhaustive exploration of new TeV-scale phenomena. Indirect evidence for new physics could emerge from the scattering of W bosons at high energy, from the production of Higgs bosons at very large transverse momentum, or from testing the far “off-shell” nature of the Z boson via the measurement of lepton pairs with invariant masses in the multi-TeV region. The plethora of new particles predicted by most alternatives to the SM mechanism of symmetry breaking can be searched for directly, thanks to the immense mass reach of 100 TeV collisions. The search for dark matter, for example, will cover the possible parameter space of many theories relying on weakly interacting massive particles, guaranteeing a discovery or ruling them out. Theories that address the hierarchy problem will also be conclusively tested. For supersymmetry, the mass reach of FCC-hh pushes beyond the regions motivated by this puzzle alone. For composite-Higgs theories, the precision Higgs-coupling measurements and searches for new heavy resonances will fully cover the motivated territory. A 100 TeV proton collider will even confront exotic scenarios, such as the twin Higgs, that are nightmarishly difficult to test. These theories predict very rare or exotic Higgs decays, possibly visible at FCC-hh thanks to its enormous Higgs production rates.

Beyond these examples, a systematic effort is ongoing to categorise the models that can be conclusively tested, and to find the loopholes that might allow some models to escape detection. This work will influence the way detectors for the new collider are designed. Work is already starting in earnest to define the features of these detectors, and efforts in the FCC CDR study will focus on comprehensive simulations of the most interesting physics signals. The experimental environment of a proton–proton collider is difficult due to the large number of background sources and the additional noise caused by multiple interactions occurring among the hundreds of billions of protons crossing each other at the same time. This pile-up of events will greatly exceed that observed at the LHC, and will pose a significant challenge to the detectors’ performance and to the data-acquisition systems. The LHC experience is of immense value for projecting the scale of the difficulties that will have to be met by FCC-hh, but also for highlighting the increasing role of proton colliders in precision physics, beyond their conventional role as discovery machines.

Asymmetric collisions

Smashing protons into electrons opens up a whole different type of physics, which until now has only been explored in detail by a single machine: the HERA collider at DESY in Germany. FCC-eh would collide a 60 GeV electron beam from a linear accelerator, external and tangential to the main FCC tunnel, with a 50 TeV proton beam. It would collect thousands of times more luminosity than HERA, while pioneering the novel concept of synchronous, symbiotic operation alongside the pp collider. The facility would serve as the most powerful high-resolution microscope ever built for examining the substructure of matter, with high-energy electron–proton collisions providing precise information on the quark and gluon structure of the proton.

This unprecedented facility would enhance Higgs studies, including the study of the coupling to the charm quark, and broaden the new-physics searches also performed at FCC-hh and FCC-ee. Unexpected discoveries such as quark substructure might also arise. Uniquely, in electron–proton collisions new particles can be created in lepton–quark fusion processes or may be radiated in the exchange of a photon or other vector boson. FCC-eh could also provide access to Higgs self-interactions and extended Higgs sectors, including scenarios involving dark matter. If neutrino oscillations arise from the existence of heavy sterile neutrinos, direct searches at the FCC-eh would have great discovery prospects in kinematic regions complementary to FCC-hh and FCC-ee, giving the FCC complex a striking potential to shine light on the origin of neutrino masses.

Unknown unknowns

In principle, the LHC could have provided – and still could provide – answers to many of these outstanding questions in particle physics. That no new particles beyond the Higgs have yet been found, nor any significant deviations from theory detected, does not mean that these questions have somehow evaporated. Rather, it shows that any expectations for early discoveries beyond the SM at the LHC – often based on theoretical, and in some cases aesthetic, arguments – were misguided. In times like this, when theoretical guidance is called into question, we must pursue experimental answers as vigorously as possible. The combination of accelerators being considered for the FCC project offers, through their synergies and complementarities, an extraordinary tool for investigating these questions (figure 2).

There are numerous instances in which the answer nature has offered was not a reply to the question first posed. For example, the Michelson–Morley experiment, designed to study the properties of the ether, ended up disproving its existence and led to Einstein’s theory of special relativity. The Kamiokande experiment in Japan, originally built to observe proton decay, instead opened the way to the discovery of neutrino masses. The LHC itself could have disproven the SM by discovering that the Higgs boson is not an elementary but a composite particle – and may still do so with its future, more precise measurements.

The possibility of unknown unknowns does not diminish the importance of an experiment’s scientific goals. On the contrary, it demonstrates that the physics goals for future colliders can play the crucial role of getting a new facility off the ground, even if a completely unanticipated discovery results. This is true of all expeditions into the unknown. We should not forget that Columbus set sail to find a westerly passage to Asia. Without this goal, he would not have discovered the Americas.

The LHC’s extra dimension https://cerncourier.com/a/the-lhcs-extra-dimension/ Fri, 13 Jan 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/the-lhcs-extra-dimension/ Searches at ATLAS and CMS constrain models of extra dimensions.

At 10.00 a.m. on 9 August 2016, physicists gathered at the Sheraton hotel in Chicago for the “Beyond the Standard Model” session at the ICHEP conference. The mood was one of slight disappointment. An excess of “diphoton” events at a mass of 750 GeV reported by the LHC’s ATLAS and CMS experiments in 2015 had not shown up in the 2016 data, ending a burst of activity that saw some 540 phenomenology papers uploaded to the arXiv preprint server in a period of just eight months. Among the proposed explanations for the putative new high-mass resonance were extra space–time dimensions, an idea that has been around since Theodor Kaluza and Oskar Klein attempted to unify the electromagnetic and gravitational forces a century ago.

In the modern language of string theory, extra dimensions are required to ensure the mathematical consistency of the theory. They are typically thought to be very small, close to the Planck length (10⁻³⁵ m). In the 1990s, however, theorists trying to solve problems with supersymmetry suggested that some of these extra dimensions could be as large as 10⁻¹⁹ m, corresponding to an energy scale in the TeV range. In 1998, as proposed by Arkani-Hamed and co-workers, theories emerged with even larger extra dimensions, which predicted detectable effects in contemporary collider experiments. In such large extra-dimension (LED) scenarios, gravity can become stronger than we perceive in 3D due to the increased space available. In addition to showing us an entirely different view of the universe, extra dimensions offer an elegant solution to the so-called hierarchy problem, which arises because the Planck scale (where gravity becomes as strong as the other three forces) is 17 orders of magnitude larger than the electroweak scale.
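
The dilution of gravity follows from Gauss’s law in 4+n dimensions, which makes the familiar Planck scale a derived quantity. Up to geometric factors of order unity,

    M_{\mathrm{Pl}}^2 \sim M_D^{\,n+2}\, R^n ,

where MD is the true higher-dimensional gravity scale and R the common size of the n extra dimensions. For MD near a TeV and n = 2, R comes out around a millimetre or below – which is why laboratory tests of Newton’s law at sub-millimetre distances probe the same scenario as the LHC.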

Particle physicists normally ignore gravity because it is feeble compared with the other three forces. In theories where gravity gets stronger at small distances due to the opening of extra dimensions, however, it can catch up and lead to phenomena at colliders with rates high enough to be measured in experiments. The possibility of having extra space dimensions at the TeV scale was a game changer. Scientists from experiments at the LEP, Tevatron and HERA colliders quickly produced tailored searches for signals of this new beyond-the-Standard-Model (SM) physics scenario. No evidence was found in their accumulated data, setting lower limits on the scale of extra dimensions of around 1 TeV.

By the turn of the century, a number of possible new experimental signatures had been identified for extra-dimension searches, many of which were studied in detail while assessing the physics performance of the LHC experiments. For the case of LEDs, where gravity is the only force that can propagate in the extra dimensions, high-energy collider experiments were just one approach. Smaller, tabletop-scale experiments aiming to measure the strength of gravity at sub-millimetre distances were also in pursuit of extra dimensions, but no deviation from the Newtonian law has been observed to date. In addition, there were also significant constraints from astrophysical processes on the possible number and size of these dimensions.

Enter the LHC

Analysis strategies to search for extra dimensions have been deployed from the beginning of high-energy LHC operations in 2010, and the recent increase in the LHC’s collision energy to 13 TeV has extended the search window considerably. Although no positive signal of the presence of extra dimensions has been observed so far, a big leap forward has been taken in excluding large portions of the TeV-scale parameter space where extra dimensions could live.

A particular feature of LED-type searches is the production of a single very energetic “mono-object” whose transverse momentum is not balanced by anything else visible emerging from the collision (as momentum and energy conservation would otherwise require). Examples of such objects are particle jets, very energetic photons or heavy W and Z vector bosons. Such collisions only appear to be imbalanced, however, because the emerging jet or boson is balanced by a graviton that escapes detection. Hence SM processes such as the production of a jet plus a Z boson that decays into neutrinos can mimic a graviton-production signal. The absence of any excess in the mono-jet or mono-photon event channels at the LHC has put stringent limits on LEDs (figure 1), with 2010 data already surpassing previous collider search limits. LEDs can also manifest themselves as a new contribution to the continuum in the invariant mass spectrum of two energetic photons (figure 2) or fermions (dileptons or dijets). Here too, though, no signals have been observed, and the LHC has now excluded such contributions for extra-dimension scales up to several TeV.

In 1999, another extra-dimension scenario was proposed by Randall and Sundrum (RS), which led to a quite different phenomenology compared with that expected from LEDs. In its simplest form, the RS idea contains two fundamental 3D branes: one on which most if not all SM particles live, and one on which gravity lives. Gravity is assumed to be intrinsically strong, but the warped space between the two branes makes it appear weak on the brane where we live. The experimental signature of such scenarios is the production of so-called Kaluza–Klein (spin-2 graviton) resonances that can be observed in the invariant mass spectra of difermions or dibosons. The most accessible spectra to the LHC experiments include the diphoton and dilepton spectra, in which no new resonance signal has been found, and at present the limits on putative Kaluza–Klein gravitons are about 4 TeV, depending on RS-model parameters. Analyses of dijet final states provide even more stringent limits of up to 7 TeV. Further extensions of the RS model, in particular the production of top quark–antiquark resonances, offer a more sensitive signature, but despite intense searches, no signal has been detected.
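
The warping can be made quantitative. In the original Randall–Sundrum set-up the metric between the two branes reads (in the conventions of the original paper, with k the curvature scale, rc the brane separation and φ the coordinate along the extra dimension)

    ds^2 = e^{-2 k r_c |\phi|}\, \eta_{\mu\nu}\, dx^{\mu} dx^{\nu} - r_c^2\, d\phi^2 ,

so mass scales on our brane are red-shifted by the exponential warp factor, Λ ~ M_Pl e^{-k rc π}, and a modest product k rc ≈ 11–12 is enough to generate the full 16 orders of magnitude between the weak and Planck scales.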

Searching in the dark

Around the year 2000, it was realised that large or warped extra dimensions could lead to a new type of signature at the LHC: microscopic black holes. These can form when two colliding partons come close enough to each other – namely, to within the Schwarzschild radius, or black-hole event horizon, which can be as large as a femtometre in the presence of TeV-scale extra dimensions at the LHC. Such microscopic black holes would evaporate via Hawking radiation on time scales of around 10⁻²⁷ s, long before they could accrete any matter, and would provide an ideal opportunity to study quantum gravity in the laboratory.
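
The expected size and temperature of such objects follow from the higher-dimensional Schwarzschild solution. Up to numerical factors of order unity (a schematic scaling, not the full expression),

    r_s \sim \frac{1}{M_D} \left( \frac{M_{\mathrm{BH}}}{M_D} \right)^{\frac{1}{n+1}} , \qquad T_H \sim \frac{n+1}{4\pi\, r_s} ,

so the horizon radius grows only slowly with the black-hole mass, while lighter black holes are hotter and decay faster.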

Black holes produced with a mass significantly above the formation threshold are expected to evaporate into high-energy multi-particle final states containing plenty of particle jets, leptons, photons and even Higgs particles. Searches for such energetic multi-object final states in excess of the SM expectation have been performed since the first collisions at the LHC at 7 TeV, but none has been found. If black holes are produced closer to the formation threshold, they would be expected to decay into final states of much lower multiplicity, for instance into dijets. The CMS and ATLAS experiments have been looking for all of these final states up to and including the latest 13 TeV data (figure 3), but no signal has been observed so far, excluding black-hole masses up to about 9 TeV.

Several other possible incarnations of extra-dimension theories have been proposed and searched for at the LHC. So-called TeV-type extra dimensions allow more SM particles – for example partners of the heavy W and Z bosons – to enter the bulk, and these would show up as high-mass resonances in dilepton and other invariant mass spectra. These new resonances have spin one, and such signatures could be more difficult to detect because they can interfere with the SM Drell–Yan production background. Nevertheless, no such resonances have been discovered so far.

In so-called universal extra-dimension (UED) scenarios, all particles have states that can go into the bulk. If this scenario is correct, a completely new spectrum of partners of the SM particles should show up at the LHC at high masses. Although this looks very much like what would be expected from supersymmetry, where all known SM particles have partners, the Kaluza–Klein partners would have exactly the same spin as their SM counterparts, whereas supersymmetry transforms bosons into fermions and vice versa. Alas, no new particles – either Kaluza–Klein partners or supersymmetry candidates – have been observed, pushing the lower mass limits beyond 1 TeV for certain particle types.

Final hope

Collider data have not yet given us any sign of the existence of extra dimensions or, for that matter, any sign that gravity becomes strong at the TeV scale. It is possible that, even if they exist, the extra dimensions could be as small as predicted by string theory, in which case they would not be able to solve the hierarchy problem. The idea is still very much alive, however, and searches will continue as more data are recorded at the LHC.

Even excellent and attractive ideas always need confirmation from data, and inevitably the initial high enthusiasm for extra-dimension theories may have waned somewhat in recent years. Although such confirmation could come from the next generation of colliders, such as possible higher-energy machines, there is unfortunately no guarantee. It could be that we have to turn to even more outlandish ideas to progress further.

Particle physics under the spotlight in Chicago https://cerncourier.com/a/particle-physics-under-the-spotlight-in-chicago/ https://cerncourier.com/a/particle-physics-under-the-spotlight-in-chicago/#respond Fri, 16 Sep 2016 12:55:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/particle-physics-under-the-spotlight-in-chicago/ No result at ICHEP 2016 was more highly anticipated than the updates on the 750 GeV diphoton resonance hinted at in data from the ATLAS and CMS experiments in 2015.

This summer, the city of Chicago in Illinois was not only a vacation destination for North American tourists – it was also the preferred destination for more than 1400 scientists, students, educators and members of industry from around the world. Fifty-one countries from Africa, Asia, Australia, Europe, North America and South America were represented at the 38th International Conference on High Energy Physics (ICHEP), making it the largest such conference ever held.

Indeed, the unexpectedly large interest in the meeting caused some re-thinking of the conference agenda. A record 1600 abstracts were submitted, of which 600 were selected for parallel presentations and 500 for posters by 65 conveners. During three days of plenary sessions, 36 speakers from around the world overviewed results presented at the parallel and poster sessions.

One of the most popular parallel-session themes concerned enabling technologies, totalling around 400 abstract submissions, and rich collaborative opportunities were discussed in the new “technology applications and industrial opportunities” track. Another innovation at ICHEP 2016 concerned diversity and inclusion, which appeared as a separate parallel track. A number of new initiatives in communication, education and outreach were also piloted. These included lunchtime sessions aimed at increasing ICHEP participants’ skills in outreach and communication through news and social media, “art interventions” and a physics slam, where five scientists competed to earn audience applause through presentations of their research. The outreach programme was complemented by events at 30 public libraries in Chicago and a public lecture about gravitational waves.

While the public had an increasing number of ways to connect with the conference, however, the main attraction for attendees remained the same: new science results. And no result was more highly anticipated than the updates on the 750 GeV diphoton resonance hinted at in data from the ATLAS and CMS experiments recorded during 2015.

Exploring the unknown

The spectacular performance of the LHC during 2016, which saw about 20 fb⁻¹ of 13 TeV proton–proton collisions delivered to ATLAS and CMS by the time of the conference, gave both experiments unprecedented sensitivity to new particles and interactions. The collaborations reported on dozens of different searches for new phenomena. In a dramatic parallel session, both ATLAS and CMS revealed that their 2016 data do not confirm the previous hints of a diphoton resonance at 750 GeV (figure 1); apparently, those hints were nothing more than tantalising statistical fluctuations. Disappointed theorists were happily distracted by other new results, however. As expected, these include interesting excesses worth keeping an eye on as more data become available. Still in the running for future big discoveries are the production of heavy particles predicted by supersymmetry and exotic theories, and the direct production at the LHC of dark-matter particles. So far, no signs of such particles have been seen at ATLAS or CMS.

Many other experiments reported on their own searches for new particles and interactions, including new LHCb results on the most sensitive search to date for CP violation in the decays of neutral D mesons – which, if detected, would allow researchers to probe CP violation in the up-type quark sector. Final results from the MEG (Mu to E Gamma) experiment at the Paul Scherrer Institute in Switzerland represent the most sensitive search to date for charged-lepton-flavour violation, which would also be a clear signature of new physics. Using bottom and charm quarks to probe new physics, the Beijing Spectrometer (BES) at IHEP in China and the Belle experiment at KEK in Japan showcased a series of precision and rare-process results. While a few interesting discrepancies with Standard Model (SM) predictions remain, no signs of physics beyond the SM have yet emerged.

Meanwhile on the heavy-ion front, the ALICE experiment at the LHC joined ATLAS, CMS and LHCb in presenting new observations of the dramatic and mysterious properties of quark–gluon plasma. This was complemented by results from the STAR and PHENIX experiments at RHIC at the Brookhaven National Laboratory in the US.

Rediscovering the Higgs

Perhaps unsurprisingly, given that its discovery in 2012 was one of the biggest in particle physics for a generation, the Higgs boson was the subject of 30 parallel-session talks. New LHC measurements are a great indicator of how the Higgs boson is being used as a new tool for discovery. Run 2 of the LHC has already produced more Higgs bosons than Run 1, and the Higgs has been “rediscovered” in the new data with a significance of 10σ (figure 2). A major focus of the new analyses is to demonstrate the production of Higgs particles in association with a W or Z boson, or with a pair of top quarks, and to measure their decay patterns. These production and decay channels are important tests of Higgs properties, and so far the Higgs seems to behave just as the SM predicts.

About 20 new searches looking for heavier cousins of the Higgs were reported. These “heavy Higgs”, once produced, could decay in ways very similar to the Higgs itself, or might decay into a pair of Higgs bosons. Other searches covered the possibility that the Higgs boson itself has exotic decays: “invisible” decays into undetected particles, decays into exotic bosons or decays that violate the conservation of lepton flavour. No signals have emerged yet, but the LHC experiments are providing increasing sensitivity and coverage of the full menu of possibilities.

Neutrino mysteries

With neutrinos currently among the most promising places to look for signs of physics beyond the SM, ICHEP included reports from three powerful long-baseline neutrino experiments: T2K at J-PARC in Japan, and NOνA and MINOS at Fermilab in the US, which are addressing some of the fundamental questions about neutrinos, such as CP violation, the ordering of their masses and their mixing behaviour. While not yet conclusive, the results presented at ICHEP show that neutrino physics is entering a new era of sensitivity and maturity. Data from T2K currently favour the idea of CP violation in the lepton sector, which is one of the conditions required for the observed dominance of matter over antimatter in the universe, while data from NOνA disfavour the idea that mixing of the second and third neutrino flavours is maximal, testing a possible new symmetry that would underlie maximal mixing (figure 3).

With nearly twice the antineutrino data in 2016 compared with its 2015 result, the T2K experiment’s observed electron antineutrino appearance rate is lower than would be expected if CP symmetry were conserved (left). With data accumulated up to May 2016, representing 16% of its planned total, NOνA’s results (right) show an intriguing preference for non-maximal mixing – that is, a preference for sin²θ₂₃ ≠ 0.5.
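
What is at stake is visible in the leading-order two-flavour survival probability for muon neutrinos (the standard oscillation formula, with the baseline L in km, the neutrino energy E in GeV and Δm²₃₂ in eV²):

    P(\nu_\mu \to \nu_\mu) \approx 1 - \sin^2 2\theta_{23}\; \sin^2\!\left( \frac{1.27\, \Delta m_{32}^2\, L}{E} \right) .

Maximal mixing, sin²θ₂₃ = 0.5, gives sin²2θ₂₃ = 1 and hence complete disappearance at the oscillation maximum; NOνA’s preference for sin²θ₂₃ ≠ 0.5 is therefore a preference for slightly incomplete disappearance.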

The long-simmering issue of sterile neutrinos – hypothesised particles that do not interact via SM forces – also received new attention in Chicago. The 20-year-old signal from the LSND experiment at Los Alamos National Laboratory in the US, which indicates 4σ evidence for such a particle, was matched some years ago by anomalies from the MiniBooNE experiment at Fermilab. As reported at ICHEP, however, cosmological data and new results from IceCube in Antarctica and MINOS+ at Fermilab do not confirm the existence of sterile neutrinos. On the other hand, the Daya Bay experiment in China, RENO in South Korea and Double Chooz in France all confirm a reactor neutrino flux that is low compared with the latest modelling, which could arise from mixing with sterile neutrinos. However, all three of these experiments also confirm a “bump” in the neutrino spectrum at an energy of around 5 MeV that is not predicted, so there is certainly more work to be done in understanding the modelling.

Probing the dark sector

Dark matter dominates the universe, but its identity is still a mystery. Indeed, some theorists speculate about the existence of an entire “dark sector” made up of dark photons and multiple species of dark matter. Numerous approaches are being pursued to detect dark matter directly, and these are complemented by searches at the LHC, surveys of large-scale structure and attempts to observe high-energy particles from dark-matter annihilation or decay in or around our Galaxy. Regarding direct detection, experiments are advancing steadily in sensitivity: the latest examples reported at ICHEP came from LUX in the US and PandaX-II in China, and already they exclude a substantial fraction of the parameter space of supersymmetric dark-matter candidates (figure 4).

Dark energy – the name given to the entity thought to be driving the cosmic acceleration of today’s universe – is one of two provocative mysteries, the other concerning the primordial epoch of cosmic inflation. ICHEP sessions concerned both current and planned observations of such effects, using either optical surveys of large-scale structure or the cosmic microwave background. Both approaches together can probe the nature of dark energy by looking at the abundance of galaxy clusters as a function of redshift; as reported at the Chicago event, this is already happening via the Dark Energy Survey and the South Pole Telescope.

Progress in theory

Particle theory has been advancing rapidly along two main lines: new ideas and approaches for persistent mysteries such as dark matter and naturalness, and more precise calculations of SM processes that are relevant for ongoing experiments. As emphasised at ICHEP 2016, new ideas for the identity of dark matter have had implications for LHC searches and for attempts to observe astrophysical dark-matter annihilation, in addition to motivating a new experimental programme looking for dark photons. A balanced view of the naturalness problem, which concerns the extent to which fundamental parameters appear tuned for our existence, was presented at ICHEP. While supersymmetry is still the leading explanation, theorists are also studying alternatives such as the “relaxion”. This shifts attention to the dynamics of the early universe, with consequences that may be observable in future experiments.

There have also been tremendous developments in theoretical calculations with higher-order QCD and electroweak corrections, which are critical for understanding the SM backgrounds when searching for new physics – particularly at the LHC and, soon, at the SuperKEKB B factory in Japan. The LHC’s experimental precision on top-quark production is now reaching the point where theory requires next-to-next-to-next-to-leading-order corrections just to keep up, and this is starting to happen. In addition, recent lattice QCD calculations play a key role in extracting fundamental parameters such as the CKM mixing matrix, as well as squeezing down uncertainties to the point where effects of new phenomena may conclusively emerge.

Facilities focus

With particle physics being a global endeavour, the LHC at CERN serves as a shining example of a successful large international science project. At a session devoted to future facilities, leaders from major institutions presented the science case and current status of new projects that require international co-operation. These include the International Linear Collider (ILC) in Japan, the Circular Electron–Positron Collider (CEPC) in China, an energy upgrade of the LHC, the Compact Linear Collider (CLIC) and the Future Circular Collider (FCC) at CERN, the Long-Baseline Neutrino Facility (LBNF) and Deep Underground Neutrino Experiment (DUNE) in the US, and the Hyper-K neutrino experiment in Japan.

While the high-energy physics experiments of the future were a key focus, one of the well-attended sessions at ICHEP 2016 concerned professional issues critical to a successful future for the field of particle physics. Diversity and inclusion were the subject of four hours of parallel sessions, discussions and posters, with themes such as communication, inclusion and respect in international collaboration and how harassment and discrimination in scientific communities create barriers to access. The sessions were mostly standing-room only, with supportive but candid discussion of the deep divides, harassment, and biases – both explicit and implicit – that need to be overcome in the science community. Speakers described a number of positive initiatives, including the Early Career, Gender and Diversity office established by the LHCb collaboration, the Study Group on Diversity in the ATLAS collaboration, and the American Physical Society’s “Bridge Program” to increase the number of physics PhDs among students from under-represented backgrounds.

ICHEP 2016 clearly showed that there are a vast number of scientific opportunities on offer now and in the future with which to further explore the smallest and largest structures in the universe. The LHC is performing beyond expectations, and will soon enter a new era with its planned high-luminosity upgrade. Meanwhile, propelled by surprising discoveries from a series of pioneering experiments, neutrino physics has progressed dramatically, and its progress will continue with new and innovative experiments. Intense kaon and muon beams, and SuperKEKB, will provide excellent opportunities to search for new physics in different ways, and will help to inform future research directions. Diverse approaches to probe the nature of dark matter and dark energy are also on their way. While we cannot know what will be the headline results at the next ICHEP event – which will be held in 2018 in Seoul, South Korea – we can be certain that surprises are in store.

CMS highlights from the fourth LHCP conference https://cerncourier.com/a/cms-highlights-from-the-fourth-lhcp-conference/ https://cerncourier.com/a/cms-highlights-from-the-fourth-lhcp-conference/#respond Fri, 08 Jul 2016 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cms-highlights-from-the-fourth-lhcp-conference/ The search for new physics in 13 TeV proton collisions continues in earnest, with six new results presented at LHCP.

The CMS collaboration presented 15 new results at the fourth annual Large Hadron Collider Physics (LHCP) conference, held on 13–18 June in Lund, Sweden. The results included a mixture of searches for new physics and Standard Model measurements at a centre-of-mass energy of 13 TeV. CMS also summarised its detector and physics-object performance on recently collected 2016 data, demonstrating that the collaboration has emerged from the winter shutdown ready for discovery physics.

The search for new physics in 13 TeV proton collisions continues in earnest, with six new results presented at LHCP. A combined search for high-mass resonances decaying to the Zγ final state, with Z bosons decaying to leptons, in the 8 and 13 TeV data sets yields no significant deviation from background expectations for masses ranging from a few hundred GeV to 2 TeV (EXO-16-021). A similar search in the same channel, but with Z bosons decaying to quarks, produced a similar conclusion (EXO-16-020). CMS has also searched for heavy Z´ bosons that decay preferentially to third-generation fermions, including decays to pairs of top quarks (B2G-15-003) and τ leptons (EXO-16-008), and found no excess above the Standard Model prediction.

The top-quark-pair analysis uses special techniques to search the all-hadronic final state, where the highly boosted top quarks are reconstructed as single jets, while the search in the τ-lepton channel is carried out in four final states depending on the decay mode. No significant signals are observed in either search, resulting in the exclusion of Z´ bosons up to masses of 3.3 (3.8) TeV for widths of 10 (30)% relative to the mass in the top search, and up to 2.1 TeV in the τ-lepton search. Another search using τ leptons looks for heavy neutrinos from right-handed W bosons and for third-generation scalar leptoquarks in events containing jets and two hadronically decaying taus. This is the first such search for heavy neutrinos using τ leptons, and CMS finds the data well described by Standard Model backgrounds.

CMS continues to probe for possible dark-matter candidates, most recently in final states that contain top quarks (EXO-16-017) or photons (EXO-16-014) plus missing energy. The data are consistent with Standard Model backgrounds and limits are placed on model parameters associated with the dark matter and graviton hypotheses. A search for supersymmetric particles in the lepton-plus-jets final state was also presented for the first time (SUS-16-011). This analysis targets so-called compressed spectra in which weakly interacting supersymmetric particles can have similar masses, giving rise to muons and electrons with very low transverse momentum. No significant signals are observed and limits are placed on the masses of top squarks and gluinos under various assumptions about the mass splittings of the intermediate states.

Finally, a search for a heavy vector-like top quark T decaying to a standard top quark and a Higgs boson (B2G-16-005) was presented for the first time at LHCP. For T masses above 1 TeV, the top quark and Higgs boson are highly boosted and their decay products are reconstructed using similar techniques as in B2G-15-003. Here the data are also consistent with background expectations, allowing CMS to set limits on the product of the cross section and branching fraction for T masses in the range 1.0–1.8 TeV.

Several new Standard Model measurements were shown for the first time at LHCP, including the first measurement of the top-quark cross-section at 5 TeV (TOP-16-015), based on data collected during a special proton–proton reference run in 2015 (figure 1). A first measurement by CMS of the WW diboson cross-section at 13 TeV was also reported (SMP-16-006), where the precision has already reached better than 10%. Finally, three new results on Higgs-boson physics were presented for the first time, including the first searches at 13 TeV for vector-boson-fusion Higgs production in the bottom-quark decay channel (HIG-16-003) and a search for Higgs bosons produced in the context of the MSSM that decay via the τ-lepton channel (HIG-16-006). A first look at lepton-flavour-violating Higgs decays in the 13 TeV data (HIG-16-005), using the μτ channel, does not confirm the slight (2.4σ) excess observed in Run 1, although more data are needed for a definitive conclusion.

CMS benefits from higher boosts for improved search potential in Run 2 https://cerncourier.com/a/cms-benefits-from-higher-boosts-for-improved-search-potential-in-run-2/ https://cerncourier.com/a/cms-benefits-from-higher-boosts-for-improved-search-potential-in-run-2/#respond Fri, 20 May 2016 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cms-benefits-from-higher-boosts-for-improved-search-potential-in-run-2/ The production cross-sections of many new-physics processes are predicted to rise dramatically compared with Run 1.

With the increase in the centre-of-mass energy provided by the Run 2 LHC collisions, the production cross-sections of many new-physics processes are predicted to rise dramatically compared with Run 1, in contrast to those of the background processes. This increase in cross-section is not the only way to enhance search sensitivities in Run 2, however: the higher energy also leads to particle production that is more highly boosted. The large boosts result in the collimation of the decay products of the boosted object, which therefore overlap in the detector. For example, a Z boson that decays to a quark and an antiquark will normally produce two jets if it has a low boost. The same decay of a highly boosted Z boson will, in contrast, produce a single massive jet, because the decay products of the quark and antiquark will merge. Jet-substructure observables, such as the jet mass or the so-called N-subjettiness, can then be used to enhance the search sensitivity for boosted objects such as top (t) quarks or W, Z and Higgs bosons.
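
The onset of the boosted regime can be estimated with a standard rule of thumb: the decay products of a particle of mass m and transverse momentum pT are contained within a cone of angular size ΔR ≈ 2m/pT. A small illustrative calculation (the numbers are indicative only, not a CMS algorithm):

    # Rule of thumb: decay products of a heavy particle of mass m (GeV)
    # and transverse momentum pt (GeV) are collimated within Delta R ~ 2*m/pt.
    def delta_r(m, pt):
        return 2.0 * m / pt

    for pt in (100.0, 250.0, 500.0):
        print(f"Z boson at pT = {pt:5.0f} GeV: Delta R ~ {delta_r(91.2, pt):.2f}")
    # At pT = 100 GeV the two quark jets are well separated (Delta R ~ 1.8);
    # at pT = 500 GeV (Delta R ~ 0.4) they merge into a single large jet.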

CMS has retuned and optimised these techniques for Run 2 analyses, implementing the latest ideas and algorithms from the realm of QCD and jet-substructure phenomenology. It has been a collaboration-wide effort to commission these tools for analysis use, relying on experts in jet reconstruction and bottom-quark tagging, and on data-analysis techniques from many groups in CMS. These new algorithms significantly improve the identification efficiency of boosted objects compared with Run 1.
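
Among the retuned observables, N-subjettiness quantifies how consistent a jet’s energy flow is with N subjets. In its usual definition (due to Thaler and Van Tilburg), for candidate subjet axes 1…N,

    \tau_N = \frac{1}{d_0} \sum_{k} p_{T,k}\; \min\left( \Delta R_{1,k},\, \ldots,\, \Delta R_{N,k} \right) , \qquad d_0 = \sum_{k} p_{T,k}\, R_0 ,

where the sum runs over the jet constituents and R₀ is the jet radius. Ratios such as τ₂/τ₁ are small for genuine two-prong objects (W, Z or Higgs decays) and larger for ordinary QCD jets, making them powerful tagging discriminants.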

Several Run 2 CMS studies probing the boosted regime have already appeared, using the 2015 data set. While searches for boosted entities are pursued by many CMS analysis groups, the Beyond 2 Generations (B2G) group focuses specifically on final states composed of one or more boosted objects. Signal processes of interest in the B2G group include W´ → tb and diboson (VV/VH/HH) resonances, where W´ represents a new heavy W boson, “V” a W or Z boson, and H a Higgs boson. Other B2G studies focus on searches for pair- or singly produced vector-like quarks T and B through the decays T → Wb and B → tW. The search range for these novel particles generally lies between 700 GeV and 4 TeV, yielding many boosted objects when these particles decay.

Another study in the B2G group is the search for a more massive version (Z´) of the elementary Z boson, decaying to a top-quark pair (Z´ → tt). This search is performed in the semileptonic decay channel, for which the final state consists of a boosted top-quark candidate, a lepton, missing transverse momentum, and a tagged bottom-quark jet. Here, the boosted topology not only affects the reconstruction of the top-quark candidate, but also the lepton, whose isolation can be spoiled by the nearby bottom-quark jet. Again, special identification criteria are implemented to maintain a high signal acceptance. This analysis excludes Z´ masses up to 3.4 (4.0) TeV for signal widths equal to 10% (30%) of the Z´ mass, already eclipsing Run 1 limits. A complementary analysis, in the all-hadronic topology, is now under way – an event display showing two boosted top-quark candidates is shown in the figure. The three-subjet topology seen for each boosted top-quark candidate is as expected for such decays.

With these new boosted-object reconstruction techniques now implemented and commissioned for Run 2, CMS eagerly awaits possible discoveries with the 2016 LHC data set.

MoEDAL releases new mass limits for the production of monopoles https://cerncourier.com/a/moedal-releases-new-mass-limits-for-the-production-of-monopoles/ https://cerncourier.com/a/moedal-releases-new-mass-limits-for-the-production-of-monopoles/#respond Fri, 20 May 2016 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/moedal-releases-new-mass-limits-for-the-production-of-monopoles/ As a monopole approaches a coil its magnetic charge drives an electrical current which continues to flow after the monopole has passed if the wire is superconducting.

In April, the MoEDAL collaboration submitted its first physics-research publication, on the search for magnetic monopoles utilising a 160 kg prototype MoEDAL trapping detector exposed to 0.75 fb⁻¹ of 8 TeV pp collisions. The detector was subsequently removed and monitored with a SQUID magnetometer located at ETH Zurich. This is the first time that a dedicated scalable and reusable trapping array has been deployed at an accelerator facility.

The innovative MoEDAL detector (CERN Courier May 2010 p19) employs unconventional methodologies designed to search for highly ionising messengers of new physics, such as magnetic monopoles or massive (pseudo-)stable electrically charged particles, from a number of beyond-the-Standard-Model scenarios. The largely passive MoEDAL detector is deployed at Point 8 on the LHC ring, sharing the intersection region with LHCb. It employs three separate detector systems. The first comprises nuclear track detectors (NTDs), sensitive only to new physics. The second is uniquely able to trap particle messengers of physics beyond the Standard Model for further study in the laboratory. The third, a TimePix pixel-detector array, monitors MoEDAL’s radiation environment.

A unique property of the magnetic monopole is that it carries magnetic charge. Imagine that a monopole traverses the wire coil of a superconducting quantum interference device (SQUID). As the monopole approaches the coil, its magnetic charge drives an electrical current in the coil and, because the superconducting wire has no electrical resistance, the current continues to flow after the monopole has passed. The induced current depends only on the magnetic charge: it is independent of the monopole’s speed and mass.
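The last statement follows from flux quantisation: a monopole carrying one Dirac charge that passes through the loop changes the enclosed flux by exactly h/e – two superconducting flux quanta – whatever its velocity, and the persistent-current step is that flux divided by the loop inductance. A minimal numerical sketch (the inductance is an assumed, illustrative value, not a property of the actual magnetometer):

```python
# Flux and current step from one Dirac-charge monopole traversing a
# superconducting loop. The flux change is h/e, independent of the
# monopole's speed and mass; the loop inductance below is an assumed
# illustrative value, not the MoEDAL/ETH magnetometer specification.
h = 6.62607015e-34      # Planck constant [J s]
e = 1.602176634e-19     # elementary charge [C]

phi0_sc = h / (2 * e)    # superconducting flux quantum [Wb]
delta_phi = h / e        # flux step = 2 * phi0_sc [Wb]
L = 1e-6                 # assumed loop inductance [H]
delta_I = delta_phi / L  # persistent-current step [A]

print(f"flux step = {delta_phi:.3e} Wb = {delta_phi/phi0_sc:.1f} flux quanta")
print(f"current step for L = 1 uH: {delta_I:.3e} A")
```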

In the early 1980s, Blas Cabrera was the first to deploy a SQUID device (CERN Courier April 2001 p12) in an experiment to directly detect magnetic monopoles from the cosmos. The MoEDAL detector can also directly detect magnetic charge using SQUID technology, but in a different way. Rather than the monopole being directly detected in the SQUID coil à la Cabrera, MoEDAL captures the monopoles – in this case produced in LHC collisions – in aluminium trapping volumes that are subsequently monitored by a single SQUID magnetometer.

No evidence for trapped monopoles was seen in the data analysed for MoEDAL’s first physics publication described here. The resulting mass limit for monopole production with a single Dirac (magnetic) charge (1gD) is roughly half that of the recent ATLAS 8 TeV result. However, the mass limits for the production of monopoles with the higher charges 2gD and 3gD are the LHC’s first to date, and superior to those from previous collider experiments. Figure 1 shows the cross-section upper limits for the production of spin-1/2 monopoles by the Drell–Yan (DY) mechanism with charges up to 4gD. Additionally, a model-independent 95% CL upper limit was obtained for monopole charges up to 6gD and masses reaching 3.5 TeV, again demonstrating MoEDAL’s superior acceptance for higher charges.

Despite a relatively small solid-angle coverage and modest integrated luminosity, MoEDAL’s prototype monopole trapping detector probed ranges of charge, mass and energy inaccessible to the other LHC experiments. The full detector system containing 0.8 tonnes of aluminium trapping detector volumes and around 100 m2 of plastic NTDs was installed late in 2014 for the LHC start-up at 13 TeV in 2015. The MoEDAL collaboration is now working on the analysis of data obtained from pp and heavy-ion running in 2015, with the exciting possibility of revolutionary discoveries to come.

Searches with boosted topologies at Run 2 https://cerncourier.com/a/searches-with-boosted-topologies-at-run-2/ https://cerncourier.com/a/searches-with-boosted-topologies-at-run-2/#respond Fri, 18 Mar 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/searches-with-boosted-topologies-at-run-2/ ATLAS exploited the optimised boson tagging in the search for heavy resonances decaying into a pair of two electroweak gauge bosons (WW, WZ, ZZ) or of a gauge boson and a Higgs boson (WH, ZH).


The first LHC run was highlighted by the discovery of the long-awaited Higgs boson at a mass of about 125 GeV, but we have no clue why nature chose this mass. Supersymmetry explains this by postulating a partner particle to each of the Standard Model (SM) fermions and bosons, but these new particles have not yet been found. A complementary approach to address this issue is to widen the net and look for signatures beyond those expected from the SM.

Searches for new physics in Run 1 found no signals, and from these negative results we know that new particles may be heavy. For this reason, their decay products, such as top quarks, electroweak gauge bosons (W, Z) or Higgs bosons, may be very energetic and highly boosted. When such particles are produced with large momentum and decay into quark final states, the decay products collimate into a small region of the detector, and the collimated sprays of hadrons (jets) originating from the nearby quarks can no longer be reliably distinguished. Special techniques have been developed to reconstruct such boosted particles as single jets with a wide opening angle, and to identify the cores associated with the quarks using soft-particle-removal procedures (grooming). Before the second LHC run, ATLAS performed an extensive optimisation of top, W, Z and Higgs-boson identification, exploiting a wide range of jet clustering and grooming algorithms as well as kinematic properties of the jet substructure. For W/Z bosons with transverse momenta of around 300–500 GeV, this led to a factor-of-two improvement in background rejection at fixed tagging efficiency, compared with the technique used previously.
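One widely used grooming procedure, jet trimming, illustrates the idea: recluster the wide jet into small subjets and discard those carrying less than a fraction f of the jet pT, so that soft contamination is removed before the jet mass is computed. The sketch below is schematic rather than any experiment’s implementation – it assumes the subjets have already been formed (a real tagger would recluster the jet constituents), and the f = 5% threshold is a typical but assumed choice.

```python
import math

def trim(subjets, f_cut=0.05):
    """Keep only subjets carrying at least f_cut of the total jet pT.
    Each subjet is (pt, eta, phi, mass). Schematic: real trimming
    reclusters the jet constituents; here subjets are taken as given."""
    jet_pt = sum(sj[0] for sj in subjets)
    return [sj for sj in subjets if sj[0] >= f_cut * jet_pt]

def groomed_mass(subjets):
    """Invariant mass of the four-momentum sum of the subjets."""
    e = px = py = pz = 0.0
    for pt, eta, phi, m in subjets:
        px += pt * math.cos(phi)
        py += pt * math.sin(phi)
        pz += pt * math.sinh(eta)
        e += math.sqrt((pt * math.cosh(eta))**2 + m**2)
    return math.sqrt(max(e*e - px*px - py*py - pz*pz, 0.0))

# Two hard cores from a boosted W -> qq' decay plus soft, wide-angle
# contamination (the 8 GeV subjet):
subjets = [(200.0, 0.0, 0.0, 2.0), (150.0, 0.3, 0.4, 2.0), (8.0, -0.8, 2.0, 1.0)]
print(groomed_mass(subjets))        # ~131 GeV: ungroomed mass, inflated
print(groomed_mass(trim(subjets)))  # ~86 GeV: back near the W mass
```

Removing the soft, wide-angle subjet pulls the reconstructed mass back onto the W peak, which is what makes the groomed mass such a powerful tagging variable.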

ATLAS exploited the optimised boson tagging in the search for heavy resonances decaying into a pair of electroweak gauge bosons (WW, WZ, ZZ) or a gauge boson and a Higgs boson (WH, ZH) in 13 TeV collisions. Events are categorised by the number of charged/neutral leptons, and all possible combinations are considered except for fully leptonic and fully hadronic WH or ZH decays. For the Higgs boson, only the dominant decay into b quarks is considered. Figure 1 shows the results of the WZ searches with a 2015 data set corresponding to 3.2 fb–1, presented as upper limits on the production cross-section times branching fraction as a function of the mass of a new heavy gauge boson. No evidence for new physics has been found in these preliminary searches.

Boosted-object techniques have evolved into a fundamental tool for beyond-the-Standard-Model searches at high energy. ATLAS expects them to greatly enhance its searches, and is adapting them to explore uncharted territory in the upcoming LHC run.

CMS hunts for supersymmetry in uncharted territory https://cerncourier.com/a/cms-hunts-for-supersymmetry-in-uncharted-territory/ https://cerncourier.com/a/cms-hunts-for-supersymmetry-in-uncharted-territory/#respond Fri, 18 Mar 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cms-hunts-for-supersymmetry-in-uncharted-territory/ With the increase in the LHC centre-of-mass energy from 8 to 13 TeV, the production cross-section for hypothetical SUSY partners rises.


The CMS collaboration is continuing its hunt for signs of supersymmetry (SUSY), a popular extension to the Standard Model that could provide a weakly interacting massive-particle candidate for dark matter, if the lightest supersymmetric particle (LSP) is stable.

With the increase in the LHC centre-of-mass energy from 8 to 13 TeV, the production cross-section for hypothetical SUSY partners rises; the first searches to benefit are those looking for the strongly coupled SUSY partners of the gluon (gluino) and quarks (squarks) that had the most stringent mass limits from Run 1 of the LHC. By decaying to a stable LSP, which does not interact in the detector and instead escapes, SUSY particles can leave a characteristic experimental signature of a large imbalance in transverse momentum.

Searches for new physics based on final states with jets (collimated bundles of particles) and large transverse-momentum imbalance are sensitive to broad classes of new-physics models, including supersymmetry. CMS has searched for SUSY in this final state using a variable called the “stransverse mass”, MT2, to measure the transverse-momentum imbalance, which strongly suppresses fake contributions from mismeasured hadronic jets. This allows us to control the background from copiously produced QCD multijet events. The remaining background comes from Standard Model processes such as W, Z and top-quark pair production with decays to neutrinos, which also produce a transverse-momentum imbalance. We estimate these backgrounds from orthogonal control samples in data, each targeted at one background source. To cover a wide variety of signatures, we categorise our signal events according to the number of jets, the number of jets arising from bottom quarks, the scalar sum of the transverse momenta of hadronic jets (HT), and MT2. Some SUSY scenarios predict spectacular signatures, such as four top quarks and two LSPs, which would give large values for all of these quantities, while others with small mass splittings produce much softer signatures.
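For orientation, MT2 is defined as the minimum, over all ways of splitting the observed missing transverse momentum between the two invisible particles, of the larger of the two transverse masses. The brute-force sketch below assumes massless visible systems and a zero test mass for the invisible particles, and scans the splitting on a grid, so it is illustrative only; real analyses use dedicated minimisers.

```python
import math

def mt(ptx_v, pty_v, ptx_i, pty_i):
    """Transverse mass for a massless visible object and a massless
    (zero test mass) invisible object, given transverse momenta."""
    et_v = math.hypot(ptx_v, pty_v)
    et_i = math.hypot(ptx_i, pty_i)
    return math.sqrt(max(2.0 * (et_v * et_i - ptx_v * ptx_i - pty_v * pty_i), 0.0))

def mt2(vis1, vis2, met, n_grid=201, scale=2.0):
    """Brute-force MT2: split the missing transverse momentum
    met = q1 + q2 over a grid and minimise the larger of the two
    transverse masses. A grid scan is illustrative only."""
    lim = scale * math.hypot(met[0], met[1])
    best = float("inf")
    for i in range(n_grid):
        q1x = -lim + 2.0 * lim * i / (n_grid - 1)
        for j in range(n_grid):
            q1y = -lim + 2.0 * lim * j / (n_grid - 1)
            q2x, q2y = met[0] - q1x, met[1] - q1y
            m = max(mt(vis1[0], vis1[1], q1x, q1y),
                    mt(vis2[0], vis2[1], q2x, q2y))
            best = min(best, m)
    return best

# Two visible jets (px, py) and the missing transverse momentum:
print(mt2(vis1=(100.0, 0.0), vis2=(-80.0, 20.0), met=(-20.0, -20.0)))
```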

Unfortunately, we did not observe any evidence for SUSY in the 2015 data set. Instead, we are able to significantly extend the constraints on the masses of SUSY partners beyond those from the LHC Run 1. The gluino has the largest production cross-section and many potential decay modes. If the gluino decays to the LSP and a pair of quarks, we exclude gluino masses up to 1550–1750 GeV, depending on the quark flavour, extending our Run 1 limits by more than 300 GeV. We are also sensitive to squarks, with our constraints summarised in figure 1. We set limits on bottom-squark masses up to 880 GeV, top squarks up to 800 GeV, and light-flavour squarks up to 600–1260 GeV, depending on how many states are degenerate in mass.

Even though SUSY was not waiting for us around the corner at 13 TeV, we look forward to the 2016 run, where a large increase in luminosity gives us another chance at discovery.

ATLAS searches for strong SUSY production at Run 2 https://cerncourier.com/a/atlas-searches-for-strong-susy-production-at-run-2/ https://cerncourier.com/a/atlas-searches-for-strong-susy-production-at-run-2/#respond Fri, 12 Feb 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/atlas-searches-for-strong-susy-production-at-run-2/ Squarks and gluinos would decay to quarks and the undetectable LSP, producing an excess of events with energetic jets and missing transverse momentum.


As the LHC delivered proton–proton collisions at the record energy of 13 TeV during the summer and autumn of last year, the experiments were eagerly scrutinising the new data. They were on alert for the signatures that would be left by new heavy particles, indicating a breakdown of the Standard Model in the new energy regime. A few days before CERN closed for the Christmas break, and only six weeks after the last proton–proton collisions of 2015, the ATLAS collaboration released the results of seven new searches for supersymmetric particles.

Supersymmetry (SUSY) predicts that, for every known elementary particle, there is an as-yet-undiscovered partner whose spin quantum number differs by half a unit. In most models, the lightest SUSY particle (LSP) is stable, electrically neutral and weakly interacting, hence it is a good dark-matter candidate. SUSY also protects the Higgs boson mass from catastrophically large quantum corrections, because the contributions from normal particles and their partners cancel each other out. The cancellation is effective only if some of the SUSY particles have masses in the range probed by the LHC. There are therefore well-founded hopes that SUSY particles might be detected in the higher-energy collisions of LHC Run 2.

The data collected by the ATLAS detector in 2015 are just an appetiser. The 3.2 fb–1 of integrated luminosity available is an order of magnitude less than that collected in Run 1, and a small fraction of that expected by 2018. The first Run 2 searches for SUSY particles have focused on the partners of the quarks and gluons (called squarks and gluinos), which would be abundantly produced through strong interactions, with cross-sections up to 40 times larger than in Run 1. The sensitivity has also been boosted by detector upgrades (in particular, the new “IBL” pixel layer installed near to the beam pipe) and improvements in the data analysis.

Squarks and gluinos would decay to quarks and the undetectable LSP, producing an excess of events with energetic jets and missing transverse momentum. The seven searches looked for such a signature, with different selections depending on the number of jets, b-tagged jets and leptons, to be sensitive to different production and decay modes. Six of the searches found event rates in good agreement with the Standard Model prediction, and placed new limits on squark and gluino masses. The figure shows the new limits for a gluino decaying to two b quarks and a neutralino LSP. For a light neutralino, the Run 1 limit of 1300 GeV on the gluino mass has been extended to 1780 GeV by the new results.

The seventh search looked for events with a Z boson, jets and missing transverse momentum, a final state in which a 3σ excess was observed in the Run 1 data. Intriguingly, the new data show a modest 2σ excess over the background prediction. This excess, and a full investigation of all SUSY channels, make the upcoming 2016 data eagerly awaited.

CMS bridges the gap in jet measurements https://cerncourier.com/a/cms-bridges-the-gap-in-jet-measurements/ https://cerncourier.com/a/cms-bridges-the-gap-in-jet-measurements/#respond Fri, 12 Feb 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cms-bridges-the-gap-in-jet-measurements/ A little-known fact is that the last three days of Run 1 were reserved for relatively low-energy proton–proton collisions at 2.76 TeV.


The LHC Run 1, famed for its discovery of the Higgs boson, came to a conclusion on Valentine’s Day (14 February) 2013. A little-known fact is that the last three days of the run were reserved neither for the highest energies nor for the heaviest ions, but for relatively low-energy proton–proton collisions at 2.76 TeV centre-of-mass energy. Originally intended as a reference run for heavy-ion studies, it also provided the perfect opportunity to bridge the wide gap in jet measurements between the Tevatron’s 1.96 TeV and the LHC’s 7 and 8 TeV.

Jet measurements are often plagued by large uncertainties arising from the jet-energy scale, which itself is subject to changes in detector conditions and reconstruction software. Because the 2.76 TeV run was an almost direct continuation of the 8 TeV proton–proton programme, it provided a rare opportunity to measure jets at two widely separated collision energies with almost identical detector conditions and with the same analysis software. CMS used this Valentine’s Day gift from the LHC to measure the inclusive jet cross-section over a wide range of angles (absolute rapidity |y| < 3.0) and for jet transverse momenta pT from 74 to 592 GeV, nicely complementing the measurements performed at 8 TeV. The data are compared with the theoretical prediction at next-to-leading-order QCD (the theory of the strong force) using different sets of parameterisations for the structure of the colliding protons. This measurement tests and confirms the predictions of QCD at 2.76 TeV, and extends the kinematic range probed at this centre-of-mass energy beyond those available from previous studies.

Calculating ratios of the jet cross-sections at different energies gives particularly high sensitivity to certain aspects of the proton structure. The main theory scale uncertainties, arising from missing higher orders in the perturbative calculation, largely cancel in the ratio, exposing the nearly pure, so-called DGLAP, evolution of the proton parton density functions (PDFs). In particular, one can directly monitor the evolution of the gluon density as a function of the energy of the collisions. This lays a solid foundation for future searches for new physics, for which the PDF parameterisations are the leading uncertainty. The experimental uncertainties also cancel in the ratio if the conditions are stable enough, as indeed they were for this period of data-taking. This principle was demonstrated by ATLAS with 2.76 TeV data collected in 2011 (2013 Eur. Phys. J. C 73 2509), but with a data set 20 times smaller.
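The cancellation can be made concrete with a one-line error propagation: for a ratio R = σ1/σ2 with relative uncertainties δ1 and δ2, fully correlated uncertainties contribute |δ1 − δ2| to R, while uncorrelated ones add in quadrature. The numbers below are invented for illustration and are not the CMS values.

```python
import math

# Toy illustration of why correlated uncertainties cancel in a ratio.
# Relative uncertainties (invented numbers, not the CMS measurement):
delta_276 = 0.10   # 10% on sigma(2.76 TeV)
delta_8   = 0.08   #  8% on sigma(8 TeV)

corr = abs(delta_276 - delta_8)          # fully correlated case
uncorr = math.hypot(delta_276, delta_8)  # uncorrelated case

print(f"ratio uncertainty, fully correlated: {corr:.1%}")   #  2.0%
print(f"ratio uncertainty, uncorrelated:     {uncorr:.1%}") # 12.8%
```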

The figure demonstrates the excellent agreement of the ratio of the 2.76 and 8 TeV data with the QCD predictions. This opportunistic use of the 2.76 TeV data by CMS has again proven the versatility and power of the LHC programme – a true Valentine’s Day gift for jet aficionados.

SHiP sets a new course in intensity-frontier exploration https://cerncourier.com/a/ship-sets-a-new-course-in-intensity-frontier-exploration/ https://cerncourier.com/a/ship-sets-a-new-course-in-intensity-frontier-exploration/#respond Fri, 12 Feb 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/ship-sets-a-new-course-in-intensity-frontier-exploration/ The go-ahead to prepare a Comprehensive Design Report is received.


SHiP is an experiment aimed at exploring the domain of very weakly interacting particles and studying the properties of tau neutrinos. It is designed to be installed downstream of a new beam-dump facility at the Super Proton Synchrotron (SPS). The CERN SPS and PS Experiments Committee (SPSC) has recently completed a review of the SHiP Technical Proposal and Physics Proposal, and recommended that the SHiP collaboration proceed towards preparing a Comprehensive Design Report, which will provide input into the next update of the European Strategy for Particle Physics, in 2018/2019.

Why is the SHiP physics programme so timely and attractive? We have now observed all the particles of the Standard Model; however, it is clearly not the ultimate theory. Some as-yet-unknown particles or interactions are required to explain a number of observed phenomena in particle physics, astrophysics and cosmology – the so-called beyond-the-Standard-Model (BSM) problems – such as dark matter, neutrino masses and oscillations, the baryon asymmetry and the expansion of the universe.

While these phenomena are well established observationally, they give no indication of the energy scale of the new physics. The analysis of new LHC data collected at √s = 13 TeV will soon directly probe the TeV scale for new particles with couplings at the O(%) level. The experimental effort in flavour physics, and searches for charged-lepton flavour violation and electric dipole moments, will continue the quest for specific flavour symmetries to complement the direct exploration of the TeV scale.

However, it is possible that we have not observed some of the particles responsible for the BSM problems due to their extremely feeble interactions, rather than due to their heavy masses. Even in the scenarios in which BSM physics is related to high-mass scales, many models contain degrees of freedom with suppressed couplings that stay relevant at much lower energies.

Given the small couplings and mixings, and hence typically long lifetimes, these hidden particles have not been significantly constrained by previous experiments, and the reach of current experiments is limited by both luminosity and acceptance. Hence the search for low-mass BSM physics should also be pursued at the intensity frontier, along with expanding the energy frontier.

SHiP is designed to give access to a large class of interesting models. It has discovery potential for the major observational puzzles of modern particle physics and cosmology, and can explore some of the models down to their natural “bottom line”. SHiP also has the unique potential to test lepton flavour universality by comparing interactions of muon and tau neutrinos.

SPS: the ideal machine

SHiP is a new type of intensity-frontier experiment, motivated by the possibility of searching for any type of neutral hidden particle with mass from below a GeV up to O(10) GeV and super-weak couplings down to 10–10. The proposal locates the SHiP experiment on a new beam-extraction line that branches off from the CERN SPS transfer line to the North Area. The high intensity of the 400 GeV beam and the unique operational mode of the SPS provide ideal conditions. The current design of the experimental facility and the estimates of the physics sensitivities assume the SPS accelerator in its present state. Sharing the SPS beam time with other SPS fixed-target experiments and the LHC should allow 2 × 1020 protons on target to be produced in five years of nominal operation.

The key experimental parameters in the phenomenology of the various hidden-sector models are relatively similar. This allows a common optimisation of the design of the experimental facility and of the SHiP detector. Because the hidden particles are expected to be predominantly accessible through the decays of heavy hadrons and in photon interactions, the facility is designed to maximise their production and detector acceptance, while providing the cleanest possible environment. As a result, with 2 × 1020 protons on target, the expected yields of different hidden particles in decays of both charm and beauty hadrons greatly exceed those of any other existing or planned facility.

As shown in the figure (left), the next critical component of SHiP after the target is the muon shield, which deflects the high flux of background muons away from the detector. The detector for the hidden particles is designed to fully reconstruct their exclusive decays and to reject background to below 0.1 events in a sample of 2 × 1020 protons on target. The detector consists of a large magnetic spectrometer located downstream of a 50 m-long and 5 × 10 m-wide decay volume. To suppress the background from neutrinos interacting in the fiducial volume, the decay volume is maintained under vacuum. The spectrometer is designed to accurately reconstruct the decay vertex, the mass and the impact parameter of the decaying particle with respect to the target. A set of calorimeters followed by muon chambers provides identification of electrons, photons, muons and charged hadrons. A dedicated high-resolution timing detector measures the coincidence of the decay products, allowing the rejection of combinatorial backgrounds. The decay volume is surrounded by background taggers to detect neutrino and muon inelastic scattering in the surrounding structures, which may produce long-lived neutral SM particles such as KL. The experimental facility is also ideally suited to studying interactions of tau neutrinos, and will therefore host a tau-neutrino detector, largely based on the OPERA concept, upstream of the hidden-particle decay volume (CERN Courier November 2015 p24).
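The case for such a long decay volume rests on standard decay kinematics: a hidden particle of momentum p, mass m and proper lifetime τ decays between z1 and z2 downstream of its production point with probability exp(−z1/λ) − exp(−z2/λ), where λ = (p/m)cτ is the boosted decay length. In the sketch below, the 50 m length comes from the text, while the 60 m stand-off and the kinematic numbers are assumptions chosen only to illustrate the scaling.

```python
import math

C = 299792458.0  # speed of light [m/s]

def decay_probability(z1, z2, p_gev, m_gev, tau_s):
    """Probability that a particle produced at z = 0 decays between
    z1 and z2 [m], for momentum p, mass m and proper lifetime tau.
    Uses the relativistic decay length lambda = (p/m) * c * tau."""
    lam = (p_gev / m_gev) * C * tau_s
    return math.exp(-z1 / lam) - math.exp(-z2 / lam)

# Assumed example: a 1 GeV hidden particle with p = 25 GeV and a
# microsecond proper lifetime; decay volume from ~60 m to ~110 m
# (50 m long, as in the text; the 60 m stand-off is an assumption).
print(decay_probability(60.0, 110.0, 25.0, 1.0, 1e-6))  # ~0.0066
```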

Global milestones and next steps

The SHiP experiment aims to start data-taking in 2026, as soon as the SPS resumes operation after Long Shutdown 3 (LS3). Globally, the 10 years consist of three years for the comprehensive design phase, followed, after approval, by a little under five years of civil engineering starting in 2021, in parallel with four years of detector production and staged installation of the experimental facility, and finally two years to complete the detector installation and commissioning.

The key milestones during the upcoming comprehensive design phase are aimed at further optimising the layout of the experimental facility and the geometry of the detectors. This involves a detailed study of the muon-shield magnets and the geometry of the decay volume. It also comprises revisiting the neutrino background in the fiducial volume, together with the background detectors, to decide on the required type of technology for evacuating the decay volume. Many of the milestones related to the experimental facility are of general interest beyond SHiP, such as possible improvements to the SPS extraction, and the design of the target and the target complex. SHiP has already benefitted from seven weeks of beam time in test beams at the PS and SPS in 2015, for studies related to the Technical Proposal (TP). A similar amount of beam time has been requested for 2016, to complement the comprehensive design studies.

The SHiP collaboration currently consists of almost 250 members from 47 institutes in 15 countries. In only two years, the collaboration has formed and taken the experiment from a rough idea in the Expression of Interest to an already mature design in the TP. The CERN task force, consisting of key experts from CERN’s different departments, which was launched by the CERN management in 2014 to investigate the implementation of the experimental facility, made a fundamental contribution to the TP. The strength of the SHiP physics case was demonstrated by a collaboration of more than 80 theorists in the SHiP Physics Proposal.

The intensity frontier greatly complements the search for new physics at the LHC. In accordance with the recommendations of the last update of the European Strategy for Particle Physics, a wide-ranging experimental programme is being actively developed all over the world. Major improvements and new results are expected during the next decade in neutrino and flavour physics, proton-decay experiments and measurements of electric dipole moments. CERN will be well positioned to make a unique contribution to the exploration of the hidden-particle sector with the SHiP experiment at the SPS.

• For further reading, see cds.cern.ch/record/2007512.

Latest ATLAS results with 13 TeV proton–proton collisions at the LHC https://cerncourier.com/a/latest-atlas-results-with-13-tev-proton-proton-collisions-at-the-lhc/ https://cerncourier.com/a/latest-atlas-results-with-13-tev-proton-proton-collisions-at-the-lhc/#respond Fri, 15 Jan 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/latest-atlas-results-with-13-tev-proton-proton-collisions-at-the-lhc/ Since the first ATLAS results from LHC Run 2 were presented at this summer’s conferences (EPS-HEP 2015 and LHCP 2015) with an amount of data corresponding to an integrated luminosity of approximately 80 pb–1, the LHC has continued to ramp up in luminosity. The maximum instantaneous luminosity for 2015 was 5 × 1033 cm–2s–1, which already approaches the Run 1 record of 7 × 1033 cm–2s–1. […]


Since the first ATLAS results from LHC Run 2 were presented at this summer’s conferences (EPS-HEP 2015 and LHCP 2015) with an amount of data corresponding to an integrated luminosity of approximately 80 pb–1, the LHC has continued to ramp up in luminosity. The maximum instantaneous luminosity for 2015 was 5 × 1033 cm–2s–1, which already approaches the Run 1 record of 7 × 1033 cm–2s–1. ATLAS recorded more than 4 fb–1 in 2015, with different physics analyses using from 3.32 to 3.60 fb–1, depending on the parts of the detector required to be fully operational with good data quality.

The main goal of the early measurements presented this summer was to study in detail the performance of the detector, to characterise the main Standard Model processes at 13 TeV, and to perform the first searches for phenomena beyond the Standard Model at Run 2. These early searches focused on processes such as high-mass quantum and rotating black-hole production in dijet, multijet and lepton-jet event topologies, for which the higher centre-of-mass energy provided an immediate improvement in sensitivity beyond the reach of the Run 1 data.

The recently completed 2015 data set corresponds to more than 30 times that of this summer. With these data, the full programme of measurements and searches at Run 2 has started, and the first results were presented by the collaboration at a joint ATLAS and CMS seminar on 15 December 2015 during CERN Council week.

These new results benefitted from the first calibration of the electron, muon and jet reconstruction and trigger algorithms, performed in situ with the data. The new insertable B-layer (IBL) of pixel detectors significantly improves the precision of track measurements near the interaction region, and is therefore crucial for tagging jets containing heavy quarks.

First measurements include the ZZ cross-section and single-top-quark production, including the Wt channel, at 13 TeV. Top-quark pair production has also been investigated in measurements where the top-quark pair is produced in association with additional jets. These measurements provide crucial checks of the modelling implemented in the state-of-the-art generators used to simulate these processes at NLO QCD precision, and can subsequently be used to further constrain physics beyond the Standard Model that would alter these production modes.

The new data also allowed the first measurements of the Higgs boson production cross-section at 13 TeV, inclusively in the diphoton and ZZ decay channels.

With the increased centre-of-mass energy, and significantly more data available than in the summer, new-particle search results were awaited with much anticipation. A large number of searches for new phenomena motivated by theories beyond the Standard Model were completed in dijet, multijet, photon-jet, diphoton, dilepton, single-lepton and missing-transverse-energy channels. Searches for vector-boson pair (VV) and Higgs plus vector-boson (VH) topologies with boosted jets have also been completed. Searches for strongly produced supersymmetry (SUSY), using signatures with zero or one lepton, or with a Z boson, jets and missing transverse energy, as well as topologies with b-jets, have improved sensitivity compared with Run 1. Finally, searches for Higgs bosons from extended electroweak symmetry-breaking sectors have been performed in final states with a pair of tau leptons or a pair of vector bosons.

So far, no definitive sign of new physics has been observed in the data, although two excesses have appeared. The first, with a significance of 2.2 standard deviations, was seen in the search for SUSY via gluino production with subsequent decays into a Z boson and missing energy; a 3 standard-deviation excess was observed in this channel in Run 1. The second was observed in the search for diphoton resonances, where a peak is seen at 750 GeV with a local significance of 3.6 standard deviations, corresponding to a global significance of 2.0 standard deviations. More data will be needed to probe the nature of these excesses.

Limits on a large variety of theories beyond the Standard Model have been derived. The ATLAS experiment is completing its measurements and search programme on the data collected in 2015, and is preparing for the data to come in 2016.

• For more details on the ATLAS results presented at the seminar, see https://twiki.cern.ch/twiki/bin/view/AtlasPublic/December2015-13TeV.

CMS presents new 13 TeV results at end-of-year jamboree https://cerncourier.com/a/cms-presents-new-13-tev-results-at-end-of-year-jamboree/ https://cerncourier.com/a/cms-presents-new-13-tev-results-at-end-of-year-jamboree/#respond Fri, 15 Jan 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cms-presents-new-13-tev-results-at-end-of-year-jamboree/ The first phase of collisions after the LHC restart earlier this year provided CMS with data at the novel energy of 13 TeV, enabling CMS to explore uncharted domains of physics. At the end of this exciting year, CMS and ATLAS presented comprehensive overviews of their latest results from analyses performed on the collected data. Here […]


The first phase of collisions after the LHC restart earlier this year provided CMS with data at the novel energy of 13 TeV, enabling CMS to explore uncharted domains of physics. At the end of this exciting year, CMS and ATLAS presented comprehensive overviews of their latest results from analyses performed on the collected data. Here we highlight only a few of the key CMS results – refer to the further reading (below) for more.

Before exploring the “unknown”, CMS first strove to rediscover the “known”, as a means to validate the excellent performance of the detector after emerging from the consolidation and upgrade period of Long Shutdown 1. Convincing performance studies as well as early measurements had already been presented at this year’s summer conferences. Meanwhile, the studies and physics measurements continued as the size of the data sample increased over the course of the autumn. In total, CMS approved 33 new public results for the end-of-year jamboree, capping off a successful period of commissioning, data collection and analysis. In contrast to the studies performed for other Standard Model particles, CMS preferred to remain blinded for studies involving the LHC’s most famous particle, the Higgs boson discovered in 2012, because the collected data sample was not large enough for a Higgs boson signal to be detectable.

However, it was the anticipation of results from searches for new phenomena that filled CERN’s main auditorium beyond capacity. The CMS focus was on searches that would already be sensitive to new physics with the small data sample collected in 2015. Hadron jets play a crucial role in searches for exotic particles such as excited quarks – whose observation would demonstrate that quarks are not elementary particles but rather composite objects – and for heavier cousins of the W boson. These new particles would reveal their presence by transforming into two particle jets (a “dijet”). The highest-mass dijet event observed by CMS is shown in the figure. In carrying out this study, CMS searches for bumps in the mass distribution of the dijet system. Seeing no significant excess over the background, a new CMS publication based on the 13 TeV data sets lower limits on the masses of these hypothetical particles ranging from 2.6 to 7 TeV, depending on the new-physics model.
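Dijet bump hunts of this kind generally proceed by fitting the steeply falling mass spectrum with a smooth empirical shape and looking for a localised excess above it. A functional form widely used in dijet searches is dσ/dm ∝ (1 − x)^p1 / x^(p2 + p3 ln x) with x = m/√s; the sketch below evaluates it with invented parameter values, which are not the CMS fit results.

```python
import math

def dijet_background(m, sqrt_s, p0, p1, p2, p3):
    """Smooth empirical dijet-background shape widely used in bump
    hunts: p0 * (1 - x)**p1 / x**(p2 + p3*ln x), with x = m/sqrt(s)."""
    x = m / sqrt_s
    return p0 * (1.0 - x) ** p1 / x ** (p2 + p3 * math.log(x))

# Invented parameters, for shape illustration only (not CMS fit values):
pars = dict(p0=1e-5, p1=8.0, p2=5.0, p3=0.0)
for m in (2000.0, 3000.0, 5000.0, 7000.0):  # dijet mass in GeV
    print(f"m = {m:.0f} GeV: {dijet_background(m, 13000.0, **pars):.3e}")
```

A localised signal would then show up as a cluster of mass bins lying significantly above this smooth fit.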

CMS also searched for the presence of heavy particles such as a Z´ (Z-prime) boson in the dilepton spectrum, in which unstable exotic particles would transform into pairs of electrons or muons. While CMS observed high-mass events, with dielectrons up to a mass of 2.9 TeV and dimuons up to 2.4 TeV, the data are compatible with the Standard Model and do not provide evidence for new physics.

Finally, CMS observed a slight excess in events with two photons at a diphoton mass around 760 GeV. However, small fluctuations such as this have been observed regularly in the past, including at LHC Run 1, and often disappear as more data is collected. Therefore we are still far from the threshold associated with a new discovery, but the stage is set for great excitement and anticipation in the upcoming 2016 run of the LHC.

CMS data-scouting and a search for low-mass dijet resonances https://cerncourier.com/a/cms-data-scouting-and-a-search-for-low-mass-dijet-resonances/ https://cerncourier.com/a/cms-data-scouting-and-a-search-for-low-mass-dijet-resonances/#comments Fri, 13 Nov 2015 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cms-data-scouting-and-a-search-for-low-mass-dijet-resonances/ Proton beams crossed inside each of the CMS and ATLAS detectors 20 million times a second during the 2012 LHC proton–proton run. However, the physics programme of CMS is based on only a small subset of these crossings, corresponding to about 1000 events per second for the highest beam intensities attained that year. This restriction is due […]

Proton beams crossed inside each of the CMS and ATLAS detectors 20 million times a second during the 2012 LHC proton–proton run. However, the physics programme of CMS is based on only a small subset of these crossings, corresponding to about 1000 events per second for the highest beam intensities attained that year. This restriction is due to technological limitations on the speed at which information can be recorded. The CMS detector has around 70 million electronics channels, yielding up to about half-a-million bytes per event. This volume of data makes it impossible to record every event that occurs.

A so-called trigger system is used in real time to select which events to retain. Events are typically required to contain at least one object with a large transverse momentum relative to the proton beam axis. This restriction is effective at reducing the event rate but it also reduces sensitivity to new phenomena that might occur at a smaller transverse-momentum scale, and therefore it reduces sensitivity to the production of new particles, or “resonances”, below certain mass values. While many important studies have been performed with the standard triggers, the necessary reduction imposed by these triggers seriously limits sensitivity to resonances with masses below around 1 TeV that decay to a two-jet (“dijet”) final state, where a “jet” refers to a collimated stream of particles, such as pions and kaons, which is the signature of an underlying quark or gluon.

To recover sensitivity to events that would otherwise be lost, CMS implemented a new triggering scheme, which began in 2011, referred to as “data scouting”. A dedicated trigger algorithm was developed to retain events with a sum of jet transverse energies above the relatively low threshold of 250 GeV, at a rate of about 1000 events per second. To compensate for this large rate and to remain within the boundaries imposed by the available bandwidth and disk-writing speed, the event size was reduced by a factor of 1000 by retaining only the jet energies and momenta in an event, reconstructed at a higher-level trigger stage. Because of the minimal amount of information recorded, no subsequent offline data processing was possible, and the scouted data were appropriate for a few studies only, such as the dijet resonance search. The resonance search was implemented directly in the CMS data-quality monitoring system so that, should deviations from the Standard Model expectation be observed, the trigger could be adjusted to collect the events in the full event format.
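The bandwidth arithmetic behind this design choice is worth making explicit. The sketch below just multiplies rate by event size, using the figures quoted in the text (about half a million bytes per full event, a 1000 Hz rate, and a factor-1000 event-size reduction).

```python
# Bandwidth arithmetic behind data scouting, using figures from the text.
full_event_bytes = 5e5                           # ~half a million bytes/event
scouted_event_bytes = full_event_bytes / 1000.0  # factor-1000 size reduction

standard_rate_hz = 1000.0   # standard physics stream, events/s (from text)
scouting_rate_hz = 1000.0   # dedicated scouting stream, events/s (from text)

standard_mb_s = standard_rate_hz * full_event_bytes / 1e6
scouting_mb_s = scouting_rate_hz * scouted_event_bytes / 1e6

print(f"standard stream: {standard_mb_s:.0f} MB/s")  # ~500 MB/s
print(f"scouting stream: {scouting_mb_s:.1f} MB/s")  # ~0.5 MB/s
```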

The first results using data scouting were reported by CMS in 2012. These results were based on 0.13 fb–1 of proton–proton collisions at a centre-of-mass energy √s = 7 TeV, collected during the last 16 hours of the 2011 run. New results on dijet resonances have now been presented, employing data scouting on a much larger sample of 18.8 fb–1 collected at √s = 8 TeV in 2012. The results are summarised in the figure, which shows exclusion limits on the coupling strength (gB) of a hypothetical baryonic Z´B boson that decays to a dijet final state, as a function of the Z´B mass. The CMS results, shown in comparison with previous results, demonstrate the success of the data-scouting method: using very limited disk-writing resources, corresponding to only about 10% of what is typically allocated for a CMS analysis, the exclusion limits for low-mass resonances (below around 1 TeV) are improved by more than a factor of four. Although no evidence for a new particle is found, data scouting has established itself as a valuable tool in the search for new physics at the LHC.

Supersymmetry searches: the most comprehensive ATLAS summary to date https://cerncourier.com/a/supersymmetry-searches-the-most-comprehensive-atlas-summary-to-date/ https://cerncourier.com/a/supersymmetry-searches-the-most-comprehensive-atlas-summary-to-date/#respond Wed, 28 Oct 2015 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/supersymmetry-searches-the-most-comprehensive-atlas-summary-to-date/ ATLAS has summarised 22 Run 1 searches, using more than 310,000 models to work out where the elusive SUSY particles might be hiding. The first run of the LHC taught us at least two significant things. First, that there really is a Higgs boson, with properties broadly in line with those predicted by the Standard Model. Second, […]


ATLAS has summarised 22 Run 1 searches, using more than 310,000 models to work out where the elusive SUSY particles might be hiding.

The first run of the LHC taught us at least two significant things. First, that there really is a Higgs boson, with properties broadly in line with those predicted by the Standard Model. Second, that the hotly anticipated supersymmetric (SUSY) particles – which were believed to be needed to keep the Higgs boson mass under control – have not been found.

If, as many believe, SUSY is the solution to the Higgs-mass problem, there should be a heavy partner particle for each of the familiar Standard Model fermions and bosons. So why have we missed the super partners? Are they not present at LHC energies? Or are they just around the corner, waiting to be found?

ATLAS has recently taken stock of its progress in addressing the question of the missing SUSY particles. This herculean task examined an astonishing 500 million different models, each representing a possible combination of SUSY-particle masses. The points were drawn from the 19-parameter “phenomenological Minimal Supersymmetric Standard Model” (pMSSM), and concentrated on those models that could contribute to the cosmological dark matter.

The ambitious project involved the detailed simulation of more than 600 million high-energy proton–proton collisions, using the power of the LHC computing grid. Teams from 22 individual ATLAS SUSY searches examined whether they had sensitivity to each of the 310,000 most promising models. This told them which combinations of SUSY masses have been ruled out by the ATLAS Run 1 searches, and which would have evaded detection so far.

The results are illuminating. They show that in Run 1, ATLAS had particular sensitivity to SUSY particles with sub-TeV masses and with strong interactions. Their best constraints are on the fermionic SUSY partner of the gluon and, to a lesser extent, on the scalar partners of the quarks. Weakly interacting SUSY particles have been much harder to pin down, because those particles are produced more rarely. The conclusions are broadly consistent with those obtained using simplified models, which are being used to guide Run 2 SUSY searches.

The paper goes on to examine the knock-on effects of the ATLAS searches for other experiments. The ATLAS searches constrain the SUSY models that are being hunted by underground searches for dark-matter relics, and by indirect searches, including those measuring rare B-meson decays and the magnetic moment of the muon.

Today, the higher energy of the 13 TeV LHC is bringing increased sensitivity to rare processes and to higher-mass particles. The ATLAS physics teams are excited to be using their fresh knowledge about where SUSY might be hiding to start the hunt afresh.

Is the Standard Model about to crater? https://cerncourier.com/a/is-the-standard-model-about-to-crater/ https://cerncourier.com/a/is-the-standard-model-about-to-crater/#respond Wed, 28 Oct 2015 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/is-the-standard-model-about-to-crater/   There are now quite a few discrepancies, or “tensions”, between laboratory experiments and the predictions of the Standard Model (SM) of particle physics. All of them are of the 2–3σ variety, exactly the kind that physicists learn not to take seriously early on. But many have shown up in a series of related measurements, […]


There are now quite a few discrepancies, or “tensions”, between laboratory experiments and the predictions of the Standard Model (SM) of particle physics. All of them are of the 2–3σ variety, exactly the kind that physicists learn not to take seriously early on. But many have shown up in a series of related measurements, and this is what has attracted physicists’ attention.

In this article, I will concentrate on two sets of discrepancies, both associated with data taken at √s = 7 and 8 TeV in LHC’s Run 1:

1. Using 3 fb–1 of data, LHCb has reported discrepancies with more or less precise SM predictions, all relating to the rare semileptonic transitions b → sl+l–, particularly with l = μ. If real, they would imply the presence of new lepton non-universal (LNU) interactions at an energy scale ΛLNU ≳ 1 TeV, well above the scale of electroweak symmetry breaking. Especially enticing, such effects would suggest lepton flavour violation (LFV) at rates much larger than expected in the Standard Model.

2. Using 20 fb–1 of data, ATLAS and CMS have reported 2–3σ excesses near 2 TeV in the invariant mass of dibosons VV = WW, WZ, ZZ and VH = WH, ZH, where H is the 125 GeV Higgs boson discovered in Run 1. To complicate matters, there is also a ~3σ excess near 2 TeV in a CMS search for a right-handed-coupling WR decaying to l+l– jet jet (for l = e, but not μ), and a 2.3σ excess near Mjj = 1.9 TeV in dijet production. (Stop! I hear you say, and I can’t blame you!)

If either set of discrepancies were to be confirmed in Run 2, the Standard Model would crack wide open, with new particles and their new interactions providing high-energy experimentalists and theorists with many years of exciting exploration and discovery. If both should be confirmed, Katy bar the door!

But first, I want to tip my hat to one of the longest-standing of all such SM discrepancies: the 2004 measurement of g−2 for the muon is 2.2–2.7σ higher than calculated. For a long time, this has been downplayed by many, including me. After all, who pays attention to 2.5σ? (Answer: more than 1000 citations!) But now other things are showing up and, for LHCb, muons seem to be implicated. Maybe there’s something there. We should know in a few years. The new muon g−2 experiment, E989 at Fermilab, is expected to have first results in 2017–2018.

b → sμ+μ– at LHCb

Features of LHCb’s measurements of B-meson decays involving b → sl+l– transitions hint consistently at a departure from the SM:

1. The measured ratio, RK, of the branching ratios of B+ → K+μ+μ– and B+ → K+e+e– is 25% lower than the SM prediction, a 2.6σ departure (a back-of-the-envelope estimate of this significance is sketched after this list).

2. In an independent measurement, the branching ratio of B+ → K+μ+μ– is 30% lower than the SM prediction, a 2σ deficit. This suggests that the discrepancy lies in muons rather than electrons. LHCb’s muon measurement is more robust than its electron one; however, all indications on the electron mode, including earlier results from Belle and BaBar, are that B → K(*)e+e– is consistent with the SM.

3. The quantity P′5 in B0 → K*0μ+μ– angular distributions exhibits a 2.9σ discrepancy in each of two bins. The size of the theoretical error is being questioned, however.

4. CMS and LHCb jointly measured the branching ratio of Bs → μ+μ–. The result is consistent with the SM prediction but, interestingly, its central value is also 25% lower (at 1σ).
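As promised above, here is a back-of-the-envelope estimate of the RK significance. The inputs are the published LHCb values as commonly quoted – RK = 0.745 with uncertainties +0.090/−0.074 (stat) and ±0.036 (syst) – but treat them as assumptions of this sketch; since the measurement lies below the SM expectation of essentially unity, the upward uncertainties are the relevant ones.

```python
import math

# Back-of-the-envelope significance of the RK deficit.
rk_measured = 0.745   # assumed input (published LHCb central value)
rk_sm = 1.0           # SM expectation, essentially unity
stat_up, syst = 0.090, 0.036   # upward stat. and syst. uncertainties

sigma_up = math.hypot(stat_up, syst)        # combine in quadrature
z = (rk_sm - rk_measured) / sigma_up
print(f"deviation: {z:.1f} sigma")          # ~2.6 sigma, as quoted
```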

The RK and other measurements suggest lepton non-universality in b → sl+l– transitions, with a strength not very different from that of these rare SM processes. This prospect has inspired an avalanche of theoretical proposals of new LNU physics above the electroweak scale, all involving the exchange of multi-TeV particles such as leptoquarks or Z′ bosons.

As a very exciting consequence, LNU interactions at high energy are, in general, accompanied by lepton flavour-violating interactions, unless the leptons involved are chosen to be mass eigenstates. But, as we know from the mismatch between the gauge and mass eigenstates of quarks in the charged weak-interaction currents, there is no reason to make such a choice. Further, that choice makes no sense at ΛLNU, far above the electroweak scale where those masses are generated. Therefore, if the LHCb anomalies were to be confirmed in Run 2, LFV decays such as B → K(*)μe/μτ and Bs → μe/μτ should occur at rates much larger than expected in the SM. (Note that LNU and LFV processes do occur in the SM but, being due to neutrino-mass differences, they are tiny.)

LHCb is searching for b → sμe and sμτ in Run 1 data, and will continue in Run 2 with much more data. The μe modes are easier targets experimentally than μτ. However, the simplest hypothesis for LNU is that it occurs in the third-generation gauge eigenstates, e.g., a b’b’τ’τ’ interaction. Then, through the usual mass-matrix diagonalisation, the lighter generations get involved, with LFV processes suppressed by mixing matrix elements that are analogous to the familiar CKM elements. In this case, b → sμτ likely will be the largest source of LFV in B-meson decays.

A final note: there are slight hints of the LFV decay H → μτ. CMS and ATLAS have reported small branching-ratio excesses with significances of 2.4σ and 1.2σ, respectively. These are tantalising, and will certainly be clarified in Run 2.

Diboson excesses at ATLAS and CMS

I will limit this discussion to the diboson, VV and VH, excesses near 2 TeV, even though the WR → l+l– jet jet and dijet excesses are of similar size and should not be forgotten. ATLAS and CMS measured high-invariant-mass VV (V = W, Z) production in non-leptonic events, in which both highly boosted V bosons decay into qq′ (also called “fat” V-jets), and in semi-leptonic events, in which one V decays into l±ν or l+l–. In the ATLAS non-leptonic data, a highly boosted V-jet is called a W (Z) if its mass MV is within 13 GeV of 82.4 (92.8) GeV. In its semi-leptonic data, V = W or Z if 65 < MV < 105 GeV. In the non-leptonic events, ATLAS observed excesses in all three invariant-mass “pots”, MWW, MWZ and MZZ, although there may be as much as 30% overlap between neighbouring pots. Each of the three excesses amounts to 5–10 events. The largest is in MWZ, centred at 2 TeV, with a 3.4σ local, 2.5σ global significance. ATLAS’s WZ data and exclusion plot are shown in figure 1. The WZ excess has been estimated to correspond to a cross-section times branching ratio of about 10 fb. ATLAS observed no excesses near 2 TeV in its semi-leptonic data; given the low statistics of the non-leptonic excesses, this is not yet an inconsistency.
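The ATLAS labelling just described amounts to two overlapping mass windows, transcribed directly below; the window values come from the text, and the rest is plumbing.

```python
def label_v_jet_nonleptonic(m_jet_gev):
    """ATLAS non-leptonic labelling as described in the text: a boosted
    V-jet is a W (Z) candidate if its mass lies within 13 GeV of
    82.4 (92.8) GeV. A jet can satisfy both windows, since they overlap."""
    labels = []
    if abs(m_jet_gev - 82.4) < 13.0:
        labels.append("W")
    if abs(m_jet_gev - 92.8) < 13.0:
        labels.append("Z")
    return labels

def label_v_jet_semileptonic(m_jet_gev):
    """Semi-leptonic selection from the text: V = W or Z for
    65 < MV < 105 GeV."""
    return ["V"] if 65.0 < m_jet_gev < 105.0 else []

print(label_v_jet_nonleptonic(85.0))   # ['W', 'Z'] -- overlapping windows
print(label_v_jet_semileptonic(85.0))  # ['V']
```

A jet of mass 85 GeV satisfies both windows, which is the origin of the up-to-30% overlap between neighbouring invariant-mass “pots” noted above.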

In its non-leptonic data, CMS defined a V-jet to be a W or Z candidate if its mass is between 70 and 100 GeV. The exclusion plot for these data shows a ~1.5σ excess over the expected limit near MVV = 1.9 TeV. In the semi-leptonic data, the V-jet is called a W if 65 < MV < 105 GeV or a Z if 70 < MV < 110 GeV – a quite substantial overlap. There is a 2σ excess over the expected background near 1.8 TeV in the l+l– V-jet channel, but less than 1σ in the l±ν V-jet channel. When the semi-leptonic and non-leptonic data are combined, there is still a 1.5–2σ excess near 1.8 TeV. The CMS exclusion plots are shown in figure 2.

ATLAS and CMS also searched for resonant structure in VH production. ATLAS looked in the channels lν/l+l–/νν + bb with one and two b-tags. Exclusion plots up to 1.9 TeV show no deviation greater than 1σ from the expected background. CMS looked in non-leptonic and semi-leptonic channels. The observed non-leptonic exclusion curves look like a sine wave of amplitude 1σ on the expected falling background with, as luck would have it, a maximum at 1.7 TeV and a minimum at 2 TeV. On the other hand, a search for WH → lνbb has a 2σ excess centred at 1.9 TeV in the electron, but not the muon, data.

Many will look at these 2–3σ effects and say they are to be expected when there is so much data and so many analyses; indeed, something would be wrong if there were not. Others, including many theorists, will point to the number, proximity and variety of these fluctuations in both experiments at about the same mass, and say something is going on here. After all, physics beyond the SM and its Higgs boson has been expected for a long time and for good theoretical reasons.

It is no surprise, then, that a deluge of more than 60 papers has appeared since June, vying to explain the 2 TeV bumps. The two most popular explanations are (1) a new weakly coupled W′, Z′ triplet that mixes slightly with the familiar W, Z, and (2) a triplet of ρ-like vector bosons heralding new strong interactions associated with H being a composite Higgs boson. A typical motivation for the W′ scenario is the restoration of right–left symmetry in the weak interactions. The composite Higgs is a favourite of “naturalness theorists” trying to understand why H is so light. The new interactions of both scenarios have an “isospin” SU(2) symmetry. The new isotriplets X are produced at the femtobarn level, mainly in the Drell–Yan process of qq annihilation. Their main decay modes are X± → W±L ZL and X0 → W+L W–L, where VL is a longitudinally polarised weak boson. Generally, the W′, Z′ and the ρ (or its parity partner, an a1-like triplet) can also decay to WL or ZL plus H itself. It follows that the diboson excess attributed to ZZ would really have to be WZ and, possibly, WW. The W, Z polarisation and the absence of real ZZ are important early tests of these models. (A possibility not considered in the composite-Higgs papers is the production of an f0-like I = 0 scalar, also at 2 TeV, which decays to W+L W–L and ZL ZL.)

Although the most likely explanation of the 2 TeV bumps may well be statistics, we should have confirmation soon. The resonant cross-sections are five or more times larger at 13 TeV than at 8 TeV. Thus, the expected LHC running this year and next will produce as much or more diboson data as all of Run 1.

What if both lepton flavour violation and the VV and VH bumps were to be discovered in Run 2? Both would suggest new interactions at or above a few TeV. Surely they would have to be related, but how? New weak interactions could be flavour non-universal (but, then, not right–left symmetric). New strong interactions of Higgs compositeness could easily be flavour non-universal. The possibilities seem endless. So do the prospects for discovery. Stay tuned!

• For the B-meson anomalies, the experimental papers are arxiv.org/abs/1406.6482, arxiv.org/abs/1403.8044, arxiv.org/abs/1505.04160 and arxiv.org/abs/1411.4413. For the diboson excesses near 2 TeV, the details of the V-jet construction are in arxiv.org/abs/1506.00962, arxiv.org/abs/1503.04677 and arxiv.org/abs/1409.6190 for ATLAS, and arxiv.org/abs/1405.1994 and arxiv.org/abs/1405.3447 for CMS.

• Among the “tensions” not discussed in this article is also the B → D* τν decay, illustrated on the cover of this issue.

The post Is the Standard Model about to crater? appeared first on CERN Courier.

Searches for new phenomena with LHC Run-2 https://cerncourier.com/a/searches-for-new-phenomena-with-lhc-run-2/ https://cerncourier.com/a/searches-for-new-phenomena-with-lhc-run-2/#respond Fri, 25 Sep 2015 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/searches-for-new-phenomena-with-lhc-run-2/ After demonstrating a good understanding of the detector and observing most of the Standard Model particles using the first data of LHC Run 2 collected in July (CERN Courier September 2015 p8), the ATLAS collaboration is now stepping into the unknown, open to the possibility that dimensions beyond the familiar four could make themselves known through […]

After demonstrating a good understanding of the detector and observing most of the Standard Model particles using the first data of LHC Run 2 collected in July (CERN Courier September 2015 p8), the ATLAS collaboration is now stepping into the unknown, open to the possibility that dimensions beyond the familiar four could make themselves known through the appearance of microscopic black holes.

Relative to the other fundamental forces, gravity is weak. In particular, why is the natural energy scale of quantum gravity, the Planck mass MPl, roughly 17 orders of magnitude larger than the scales of electroweak interactions? One exciting solution to this so-called hierarchy problem exists in “brane” models, where the particles of the Standard Model are mainly confined to a three-plus-one-dimensional brane and gravity acts in the full space of the “bulk”. As gravity escapes into the hypothesized extra dimensions, it therefore “appears” weak in the known four-dimensional world.

With enough large additional dimensions, the effective Planck mass, MD, is reduced to a scale where quantum gravitational effects become important within the energy range of the LHC. Theory suggests that microscopic black holes will form more readily in this higher-dimensional universe. With the increase of the centre-of-mass energy to 13 TeV at the start of Run 2, the early collisions could already produce signs of these systems.
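
The standard back-of-envelope relation behind this statement (from the large-extra-dimensions literature, not spelled out in the text) is MPl² ~ MD^(n+2) Rⁿ for n extra dimensions of common size R, dropping factors of order one. A short sketch, assuming MD = 1 TeV:

M_PL_GEV = 1.22e19       # four-dimensional Planck mass, GeV
HBARC_GEV_M = 1.973e-16  # hbar*c in GeV*m, converts GeV^-1 to metres

def extra_dimension_size(n, m_d_gev=1.0e3):
    """Common size R of n extra dimensions for a fundamental scale M_D (GeV)."""
    r_inv_gev = (M_PL_GEV**2 / m_d_gev**(n + 2)) ** (1.0 / n)  # R in GeV^-1
    return r_inv_gev * HBARC_GEV_M

for n in (2, 4, 6):
    print(f"n = {n}: R ~ {extra_dimension_size(n):.1e} m")
# n = 2 gives R of order a millimetre; more dimensions give far smaller R.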

If produced by the LHC, a black hole with a mass near MD – a quantum black hole – will decay faster than it can thermalize, predominantly producing a pair of particles with high transverse momentum (pT). Such decays would appear as a localized excess in the dijet mass distribution (figure 1). This signature is also consistent with theories that predict parton scattering via the exchange of a black hole – so-called gravitational scattering.

A black hole with a mass well above MD will behave as a classical thermal state and decay through Hawking emission to a relatively large number of high-pT particles. The frequency at which Standard Model particles are expected to be emitted is proportional to the number of charge, spin, flavour and colour states available. ATLAS can therefore perform a robust search for a broad excess in the scalar sum of jet pT (HT) in high-multiplicity events (figure 2), or in similar final states that include a lepton. The requirement of a lepton (electron or muon) helps to reduce the large multijet background.
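
In code terms the search variable is simple to state; the function names and thresholds below are illustrative placeholders, not the ATLAS selection:

def h_t(jet_pts_gev, min_jet_pt=50.0):
    """HT: scalar sum of the pT of jets above threshold, in GeV."""
    return sum(pt for pt in jet_pts_gev if pt > min_jet_pt)

def high_multiplicity_candidate(jet_pts_gev, n_jets_min=7, ht_min=1500.0):
    """Flag events with many jets and large HT, as in a thermal black-hole search."""
    n_jets = sum(1 for pt in jet_pts_gev if pt > 50.0)
    return n_jets >= n_jets_min and h_t(jet_pts_gev) >= ht_min

print(high_multiplicity_candidate([400, 350, 300, 250, 200, 150, 100, 60]))  # True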

Even though the reach of these analyses extends beyond the previous limits, they have so far revealed no evidence for black holes or any of the other signatures to which they are potentially sensitive. Run 2 is just underway and, with more luminosity to come, this is only the beginning.

The post Searches for new phenomena with LHC Run-2 appeared first on CERN Courier.

On the trail of long-lived particles https://cerncourier.com/a/on-the-trail-of-long-lived-particles/ https://cerncourier.com/a/on-the-trail-of-long-lived-particles/#respond Wed, 22 Jul 2015 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/on-the-trail-of-long-lived-particles/ When searching for new particles in ATLAS, it is often assumed that they will either decay to observable Standard Model particles at the centre of the detector, or escape undetected, in which case their presence can be inferred by measuring an imbalance of the total transverse momentum. This assumption was a guiding principle in designing […]

When searching for new particles in ATLAS, it is often assumed that they will either decay to observable Standard Model particles at the centre of the detector, or escape undetected, in which case their presence can be inferred by measuring an imbalance of the total transverse momentum. This assumption was a guiding principle in designing the layout of the ATLAS detector.
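
The imbalance measurement mentioned above amounts to a vector sum in the transverse plane: the colliding partons carry negligible transverse momentum, so whatever escapes undetected shows up as missing transverse momentum opposite the visible objects. A minimal sketch (the inputs are illustrative pairs of pT in GeV and azimuthal angle phi in radians):

import math

def missing_pt(visible):
    """Return (magnitude, phi) of the missing transverse momentum."""
    px = -sum(pt * math.cos(phi) for pt, phi in visible)
    py = -sum(pt * math.sin(phi) for pt, phi in visible)
    return math.hypot(px, py), math.atan2(py, px)

# Two nearly balanced visible objects leave little imbalance...
print(missing_pt([(100.0, 0.0), (95.0, math.pi)]))
# ...while an undetected particle recoiling against them shows up clearly.
print(missing_pt([(100.0, 0.0), (40.0, 2.5)]))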

However, another possibility exists: what if new particles are long lived? Many models of new physics include heavy particles with lifetimes large enough to allow them to travel measurable distances before decaying. Heavy particles typically decay quickly into lighter particles, unless the decay is suppressed by some mechanism. Suppression could occur if couplings are small, if the decaying particle is only slightly heavier than the only possible decay products, or if the decay is mediated by very heavy virtual exchange particles. Looking for signatures of these models in the LHC data implies exploiting the ATLAS detector in ways it was not necessarily designed for.

These models can give rise to a broad range of possible signatures, depending on the lifetime, charge, velocity and decay channels of the long-lived particle. Decays to charged particles within the ATLAS detector volume can be detected as “displaced vertices”. Heavy charged particles that traverse the detector will move more slowly than their Standard Model counterparts, and will leave a trail of large ionization-energy deposits. Particles with very long lifetimes could even stop in the dense material of the calorimeter and decay at a later time. The ATLAS collaboration has performed dedicated searches to explore all of these spectacular – and challenging – signatures.
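
The slow-particle signature lends itself to a simple kinematic estimate: the speed β can be taken from the time of flight over a known path, and combined with the measured momentum through p = mβγ, giving m = p·sqrt(1 – β²)/β in natural units. A sketch with made-up numbers (not an ATLAS algorithm):

import math

C_M_PER_NS = 0.2998  # speed of light in metres per nanosecond

def mass_estimate_gev(p_gev, path_m, tof_ns):
    """Mass from measured momentum and time of flight (natural units, c = 1)."""
    beta = path_m / (C_M_PER_NS * tof_ns)
    return p_gev * math.sqrt(1.0 - beta**2) / beta

# A 500 GeV track crossing 10 m in 40 ns (beta ~ 0.83) points to a mass of
# a few hundred GeV, far above any Standard Model particle.
print(f"{mass_estimate_gev(500.0, 10.0, 40.0):.0f} GeV")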

Standard reconstruction algorithms are not optimal for such unconventional signatures, so the ATLAS collaboration has used detailed knowledge of the experiment’s sub-detectors to develop dedicated algorithms; for example, to reconstruct charged-particle tracks from displaced decays or to measure the ionization-charge deposited by long-lived charged particles. A class of specialized triggers for picking up these signatures has also been designed and deployed.

These searches generally have very low background, but it is nevertheless essential to estimate the level because some of the signatures could be faked by instrumental effects that are not well-modelled in the simulation. Sophisticated data-driven background estimation techniques have therefore been developed.

One postulated type of long-lived particle is the “R hadron” – a supersymmetric particle with colour-charge combined with Standard Model quarks and gluons. Several ATLAS searches are sensitive to R hadrons, and between them they cover a wide range of lifetimes, as the figure (top right) shows (ATLAS Collaboration 2013 and 2015a). Other analyses have searched for a long-lived hidden-sector pion (“v pion”) by looking for displaced vertices in different ATLAS sub-detectors (ATLAS Collaboration 2015b and 2015c). Exotic Higgs-boson decays to long-lived neutral particles that decay to jets were constrained to a branching ratio smaller than 1% at the 95% confidence level, for a range of lifetime values, as in the figure (right).
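
To see roughly how such a branching-ratio limit arises – a toy calculation with assumed numbers, not the ATLAS analysis – note that with zero events observed and negligible background, the 95% confidence-level Poisson upper limit is about three signal events, and the branching-ratio limit follows from dividing by the expected number of produced Higgs bosons times the selection efficiency:

N_UL = 3.0              # ~95% CL Poisson upper limit for 0 observed events
SIGMA_H_PB = 20.0       # assumed Higgs production cross-section, pb
LUMI_PB_INV = 20_000.0  # assumed 20 fb^-1, expressed in pb^-1
EFFICIENCY = 1.0e-3     # assumed efficiency; displaced decays are hard to select

br_limit = N_UL / (SIGMA_H_PB * LUMI_PB_INV * EFFICIENCY)
print(f"BR < {br_limit:.2%} at 95% CL")  # ~0.75%, the order quoted above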

With 13-TeV collisions under way at the LHC, the probability of producing heavy new particles has increased enormously, revitalizing the searches for new physics. ATLAS experimentalists are rising to the challenge of exploring as many new physics signatures as possible, including those related to long-lived particles.

The post On the trail of long-lived particles appeared first on CERN Courier.

RHIC smashes record for polarized-proton collisions at 200 GeV https://cerncourier.com/a/rhic-smashes-record-for-polarized-proton-collisions-at-200-gev/ https://cerncourier.com/a/rhic-smashes-record-for-polarized-proton-collisions-at-200-gev/#respond Tue, 02 Jun 2015 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/rhic-smashes-record-for-polarized-proton-collisions-at-200-gev/ The Relativistic Heavy Ion Collider at Brookhaven National Laboratory has shattered its own record for producing polarized-proton collisions at 200 GeV

The Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory has shattered its own record for producing polarized-proton collisions at 200 GeV collision energy. In the experimental run currently underway, accelerator physicists are delivering 1.2 × 10¹² collisions per week – more than double the number routinely achieved in 2012, the last run dedicated to polarized-proton experiments at this collision energy.

The achievement is, in part, the result of a method called “electron lensing”, which uses negatively charged electrons to compensate for the tendency of the positively charged protons in one circulating beam to repel the like-charged protons in the other beam when the two oppositely directed beams pass through one another in the collider. In 2012, these beam–beam interactions limited the ability to produce high collision rates, so the RHIC team commissioned electron lenses and a new lattice to mitigate the beam–beam effect. RHIC is now the first collider to use electron lenses for head-on beam–beam compensation. The team also upgraded the source that produces the polarized protons to generate and feed more particles into the circulating beams, and made other improvements in the accelerator chain to achieve higher luminosity.

With new luminosity records for collisions of gold beams, plus the first-ever head-on collisions of gold with helium-3, 2014 proved to be an exceptional year for RHIC. Now, the collider is on track towards another year of record performance, and research teams are looking forward to a wealth of new insights from the data to come.

The post RHIC smashes record for polarized-proton collisions at 200 GeV appeared first on CERN Courier.

Collaboration meets for the first FCC week https://cerncourier.com/a/collaboration-meets-for-the-first-fcc-week/ https://cerncourier.com/a/collaboration-meets-for-the-first-fcc-week/#respond Mon, 27 Apr 2015 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/collaboration-meets-for-the-first-fcc-week/ As many as 340 physicists, engineers, science managers and journalists gathered in Washington DC for the first annual meeting of the global Future Circular Collider (FCC) study. The FCC week covered all aspects of the study – designs of 100-km hadron and lepton colliders, infrastructures, technology R&D, experiments and physics. The meeting began with an exciting […]

As many as 340 physicists, engineers, science managers and journalists gathered in Washington DC for the first annual meeting of the global Future Circular Collider (FCC) study. The FCC week covered all aspects of the study – designs of 100-km hadron and lepton colliders, infrastructures, technology R&D, experiments and physics.

The meeting began with an exciting presentation by US congressman Bill Foster, who recalled the history of the LHC as well as the former design studies for a Very Large Hadron Collider. A special session on Thursday was devoted to the experience with the US LHC Accelerator Research Program (LARP), to the US particle-physics strategy, and US R&D activities in high-field magnets and superconducting RF. A well-attended industrial exhibition and a complementary “industry fast-track” session were focused on Nb3Sn and high-temperature superconductor development.

James Siegrist from the US Department of Energy (DOE) pointed the way for aligning the high-field magnet R&D efforts at the four leading US magnet laboratories (Brookhaven, Fermilab, Berkeley Lab and the National High Magnetic Field Laboratory) with the goals of the FCC study. An implementation plan for joint magnet R&D will be composed in the near future. Discussions with further US institutes and universities are ongoing, and within the coming months several other DOE laboratories should join the FCC collaboration. A first US demonstrator magnet could be ready as early as 2016.

A total of 51 institutes have joined the FCC collaboration since February 2014, and the FCC study has been recognized by the European Commission (EC). Through the EuroCirCol project within the HORIZON2020 programme, the EC will fund R&D by 16 beneficiaries – including KEK in Japan – on the core components of the hadron collider. The four key themes addressed by EuroCirCol are the FCC-hh arc design (led by CEA Saclay), the interaction-region design (John Adams Institute), the cryo-beam-vacuum system (CELLS consortium), and the high-field magnet design (CERN). On the last day of the FCC week, the first meeting of the FCC International Collaboration was held. Leonid Rivkin was confirmed as chair of the board, with a mandate consistent with the production of the Conceptual Design Report, that is, to the end of 2018.

The next FCC Week will be held in Rome on 11–15 April 2016.

• The FCC Week in Washington was jointly organized by CERN and the US DOE, with support from the IEEE Council of Superconductivity. More than a third of the participants (120) came from the US. CERN (93), Germany (20), China (16), UK (16), Italy (12), France (11), Russia (11), Japan (10), Switzerland (10) and Spain (6) were also strongly represented. For further information, visit cern.ch/fccw2015.

The post Collaboration meets for the first FCC week appeared first on CERN Courier.

CMS prepares to search for heavy top-quark partners in Run 2 https://cerncourier.com/a/cms-prepares-to-search-for-heavy-top-quark-partners-in-run-2/ https://cerncourier.com/a/cms-prepares-to-search-for-heavy-top-quark-partners-in-run-2/#respond Thu, 09 Apr 2015 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cms-prepares-to-search-for-heavy-top-quark-partners-in-run-2/ A variety of theories beyond the Standard Model attempt to address the hierarchy problem.

As the experiment collaborations get ready for Run 2 at the LHC, the situation of the searches for new physics is rather different from what it was in 2009, when Run 1 began. Many models have been constrained and many limits have been set. Yet a fundamental question remains: why is the mass of the newly discovered Higgs boson so much below the Planck energy scale? This is the so-called hierarchy problem. Quantum corrections to the mass of the Higgs boson that involve known particles such as the top quark are divergent and tend to push the mass to a very high energy scale. To account for the relatively low mass of the Higgs boson requires fine-tuning, unless some new physics enters the picture to save the situation.

A variety of theories beyond the Standard Model attempt to address the hierarchy problem. Many of these predict new particles whose quantum-mechanical contributions to the mass of the Higgs boson precisely cancel the divergences. In particular, models featuring heavy partners of the top quark with vector-like properties are compelling, because the cancellations are then achieved in a natural way. These models, which often assume an extension of the Standard Model Higgs sector, include the two-Higgs doublet model (2HDM), the composite Higgs model, and the little Higgs model. In addition, theories based on the presence of extra dimensions of space often predict the existence of vector-like quarks.

The discovery of the Higgs boson was a clear and unambiguous target for Run 1. In contrast, there could be many potential discoveries of new particles or sets of particles to hope for in Run 2, but currently no model of new physics is favoured a priori above any other.

One striking feature common to many of these new models is that the couplings with third-generation quarks are enhanced. This results in final states containing b quarks, vector bosons, Higgs bosons and top quarks that can have significant Lorentz boosts, so that their individual decay products often overlap and merge. Such “boosted topologies” can be exploited thanks to dedicated reconstruction algorithms that were developed and became well established in the context of the analyses of Run-1 data.
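
A standard rule of thumb from collider kinematics (not spelled out in the article) makes the merging quantitative: the decay products of a particle of mass m and transverse momentum pT are separated by roughly ΔR ≈ 2m/pT, so they collapse into a single large-radius jet once pT is high enough.

def min_pt_to_merge(mass_gev, jet_radius=0.8):
    """pT above which decay products fit inside one jet of radius R,
    using the Delta R ~ 2*m/pT rule of thumb."""
    return 2.0 * mass_gev / jet_radius

for name, m in (("W boson", 80.4), ("Higgs boson", 125.0), ("top quark", 172.5)):
    print(f"{name}: pT > ~{min_pt_to_merge(m):.0f} GeV")
# Top-quark decay products merge into one R = 0.8 jet above roughly 430 GeV.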

Searches for top-quark partners performed by CMS on the data from Run 1 span a large variety of different strategies and selection criteria, to push the mass-sensitivity as high as possible. These searches have now been combined to reach the best exclusion limit from the Run-1 data: heavy top-quark partners with masses below 800 GeV are now excluded at the 95% confidence level. The figure shows a simulated event with a top-quark partner decaying into a top-quark plus a Higgs boson (T → tH) in a fully hadronic final state.

CMS plans to employ these techniques to analyse boosted topologies not only in the analysis framework, but for the very first time also in the trigger system of the experiment when the LHC starts up this year. The new triggers for boosted topologies are expected to open new regions of phase space, which would be out of reach otherwise. Some of these searches are expected to already be very sensitive within the first few months of data-taking in 2015. The higher centre-of-mass energy increases the probability for pair production of these new particles, as well as of single production. The CMS collaboration is now preparing to exploit the early data from Run 2 in the search for top-quark partners produced in 13 TeV proton collisions.

The post CMS prepares to search for heavy top-quark partners in Run 2 appeared first on CERN Courier.

The TPS begins to shine https://cerncourier.com/a/the-tps-begins-to-shine/ https://cerncourier.com/a/the-tps-begins-to-shine/#respond Thu, 09 Apr 2015 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/the-tps-begins-to-shine/ The challenges in building a new light source.

On 31 December, commissioning of the Taiwan Photon Source (TPS) at the National Synchrotron Radiation Research Center (NSRRC) brought 2014 to a close on a highly successful note as a 3 GeV electron beam circulated in the new storage ring for the first time. A month later, the TPS was inaugurated in a ceremony that officially marked the end of the 10-year journey since the project was proposed in 2004, the past five years being dedicated to the design, development, construction and installation of the storage ring.

The new photon source is based on a 3 GeV electron accelerator consisting of a low-emittance synchrotron storage ring 518.4 m in circumference and a booster ring (CERN Courier June 2010 p16). The two rings are designed in a concentric fashion and housed in a doughnut-shaped building next to a smaller circular building where the Taiwan Light Source (TLS), the first NSRRC accelerator, sits (see cover). The TLS and the new TPS will together serve scientists worldwide whose experiments require photons ranging from infrared radiation to hard X-rays with energies above 10 keV.

Four-stage commissioning

The task of commissioning the TPS comprised four major stages involving: the linac system plus the transportation of the electron beam from the linac to the booster ring; the booster ring; the transportation of the electron beam from the booster ring to the storage ring; and, finally, the storage ring. Following the commissioning of the linac system in May 2011, the acceptance tests of key TPS subsystems progressed one after the other over the next three years. The 700 W liquid-helium cryogenic system, beam-position monitor electronics, power supplies for quadrupole and sextupole magnets, and two sets of 2 m-long in-vacuum undulators completed their acceptance tests in 2012. Two modules of superconducting cavities passed their 300 kW high-power tests. The welding, assembly and baking of the 14 m-long vacuum chambers designed and manufactured by in-house engineers were completed in 2013. Then, once the installation of piping and cable trays had begun, the power supply and other utilities were brought in, and set-up could start on the booster ring and subsystems in the storage ring.

The installation schedule was also determined by the availability of magnets. By April 2014, 80% of the 800 magnets had been installed in the TPS tunnel, allowing completion of the accelerator installation in July (bottom right). Following the final alignment of each component, preparation for the integration tests of the complete TPS system in the pre-commissioning phase was then fully under way by autumn.

The performance tests and system integration of the 14 subsystems in the pre-commissioning stage started in August. By 12 December, the TPS team had begun commissioning the booster ring. The electron beam was accelerated to 3 GeV on 16 December and the booster’s efficiency reached more than 60% a day later. Commissioning of the storage ring began on 29 December. On the next day, the team injected the electrons for the first time and the beam completed one cycle. The 3 GeV electron beam with a stored current of 1 mA was then achieved and the first synchrotron light was observed in the early afternoon on 31 December (far right). The stored current reached 5 mA a few hours later, just before the shutdown for the New Year holiday. As of the second week of February 2015, the TPS stored beam current had increased to 50 mA.

The US$230 million project (excluding the NSRRC staff wages) involved more than 145 full-time staff members in design and construction. Like any other multi-million-dollar, large-scale project, reaching “first light” required ingenious problem solving and use of resources. Following the groundbreaking ceremony in February 2010, the TPS project was on a fast track, after six months of preparing the land for construction. Pressures came from the worldwide financial crisis, devaluation of the domestic currency, reduction of the initial approved funding, attrition of young engineers who were recruited by high-tech industries once they had been trained with special skills, and bargaining with vendors. In addition, the stringent project requirements left little room for even small deviations from the delivery timetable or system specifications, which could have allowed budget re-adjustments.

To meet its mandate on time, the project placed reliance and pressure on experienced staff members. Indeed, more than half of the TPS team and the supporting advisers had participated in the construction of the TLS in the 1980s. During construction of the TPS, alongside the in-house team were advisers from all over the world whose expertise played an important role in problem solving. In addition, seven intensive review meetings took place, conducted by the Machine Advisory Committee.

From the land preparation in 2010 onwards, the civil-construction team faced daily challenges. For example, at the heart of the Hsinchu Science Park, the TPS site is surrounded by heavy traffic, 24 hours a day, all year round. To eliminate the impact of vibration from all possible sources, the 20 m wide concrete floor of the accelerator tunnel is 1.6 m thick. Indeed, the building overall can resist an earthquake acceleration of 0.45 g, which is higher than the Safe Shutdown Earthquake criteria for US nuclear power plants required by the US Nuclear Regulatory Commission.

The civil engineering took an unexpected turn at the very start when a deep trench of soft soil, garbage and rotting plants was uncovered 14 m under the foundations. The 100 m long trench was estimated to be 10 m wide and nearly 10 m thick. The solution was to fill the trench with a customized lightweight concrete with the hardness and geological characteristics of the neighbouring foundations. The delay in construction caused by clearing out the soft soil led to installation of the first accelerator components inside the TPS shielding walls in a dusty, unfinished building with no air conditioning. The harsh working environment in summer, with temperatures sometimes reaching 38 °C, made the technological challenges seem almost easy.

Technology transfer

The ultra-high-vacuum system was designed and manufactured by NSRRC scientists and engineers, who also trained local manufacturers in the special technique of welding, the clean-room setup, and processing in an oil-free environment. This transfer of technology is helping the factories to undertake work involving the extensive use of lightweight aluminium alloy in the aviation industry. During the integration tests, the magnetic permeability of the vacuum system in the booster ring, perfectly tailored for the TPS, proved not to meet the required standard. The elliptical chambers were removed immediately to undergo demagnetization heat-treatment in a furnace heated to 1050 °C. For the 2 m long components this annealing took place in a local factory, while shorter components were treated at the NSRRC. The whole system was back online after only three weeks – with an unexpected benefit. After the annealing process, the relative magnetic permeability of the stainless-steel vacuum chambers reached 1.002, lower than the specification of 1.01 currently adopted at light-source facilities worldwide.

The power supplies of the booster dipole magnets were produced abroad and had several problems. These included protection circuits that overheated to the extent that a fire broke out, causing the system to shut down during initial integration tests in August. As the vendor could not schedule a support engineer to arrive on site before late November, the NSRRC engineers instead quickly implemented a reliable solution themselves and resumed the integration process in about a week. The power supplies for the quadrupole and sextupole magnets of the storage ring were co-produced by the NSRRC and a domestic manufacturer, and deliver a current of 250 A, stable to less than 2.5 mA. Technology transfer from the NSRRC to the manufacturer on the design and production of this precise power supply is another byproduct of the TPS project.

24-hour shifts

Ahead of completion of the TPS booster ring, the linac was commissioned at a full-scale test site built as an addition to the original civil-construction plan (CERN Courier July/August 2011 p11). The task of disassembling and moving the linac to the TPS booster ring, re-assembling it and testing it again was not part of the initial plan in 2009. The relocation process nearly doubled the effort and work time. As a result, the four-member NSRRC linac team had to work 24-hour shifts to keep to the schedule and budget – saving US$700,000 of disassembly and re-assembly fees had this been carried out by the original manufacturer. After the linac had been relocated, the offsite test facility was transformed into a test site for the High-Brightness Injector Group.

Initially, the TPS design included four superconducting radiofrequency (SRF) modules based on the 500 MHz modules designed and manufactured at KEK in Japan for the KEKB storage ring. However, after the worldwide financial crisis in 2008 caused the cost of materials to soar nearly 30%, the number of SRF modules was reduced to three and the specification for the stored electron beam was reduced from 400 mA to 300 mA. But collaboration and technology transfer on a higher-order mode-damped SRF cavity for high-intensity storage rings from KEK has allowed the team at NSRRC to modify the TPS cavity to produce higher operational power and enable a stored electron beam of up to 500 mA – better, that is, than the original specification. (Meanwhile, the first phase of commissioning in December used three conventional five-cell cavities from the former PETRA collider at DESY – one for the booster and two for the storage ring – which had been purchased from DESY and refurbished by the NSRRC SRF team.)

The TPS accelerator uses more than 800 magnets designed by the NSRRC magnet group, which were contracted to manufacturers in New Zealand and Denmark for mass production. To control the electron beam’s orbit as defined by the specification, the magnetic pole surfaces must be machined to an accuracy of less than 0.01 mm. At the time, the New Zealand factory was also producing complicated and highly accurate magnets for the NSLS-II accelerator at Brookhaven National Laboratory. To prevent delays in delivering the TPS magnets – a possible result of limited factory resources being shared by two large accelerator projects – the NSRRC assigned staff members to stay at the overseas factory to perform on-site inspection and testing at the production line. Any product that failed to meet the specification was returned to the production line immediately. The manufacturer in New Zealand also constructed a laboratory that simulated the indoor environment of the TPS with a constant ambient temperature. Once the magnets reached an equilibrium temperature corresponding to a room temperature of 25°C in the controlled laboratory, various tests were conducted.

Like the linac, the TPS cryogenic system was commissioned at a separate, specially constructed test site. The helium cryogenic plant was disassembled and reinstalled inside the TPS storage ring in March 2014, followed by two months of function tests. With the liquid nitrogen tanks situated at the northeast corner, outside and above the TPS building, feeding the TPS cooling system – which stretches more than several hundred metres – is a complex operation. It needs to maintain a smooth transport and a long-lasting fluid momentum, without triggering any part of the system to shut down because of fluctuations in the coolant temperature or pressure. The cold test and the heat-load test of the liquid helium transfer-line is scheduled to finish by the end of March 2015 so that the liquid helium supply will be ready for the SRF cavities early in April.

Since both the civil engineering and the construction of the accelerator itself proceeded in parallel, the TPS team needed to conduct acceptance tests of most subsystems off-site, owing to the compact and limited space in the NSRRC campus. When all of the components began to arrive at the yet-to-be completed storage ring, the installation schedule was planned mainly according to the availability of magnets. This led to a two-step installation plan. In the first half of the ring, bare girders were set up first, followed by the installation of the magnets as they were delivered and then the vacuum chambers. For the second half of the ring, girders with pre-mounted magnets were installed, followed by the vacuum chambers. This allowed error-sorting with the beam-dynamics model to take place before finalizing the layout of the magnets for the minimum impact on the beam orbit. Afterwards, the final alignment of each component and tests of the integrated hardware were carried out in readiness for the commissioning phase.

Like other large-scale projects, leadership played a critical role in the success of completing the TPS construction to budget and on schedule. Given the government budget mechanism and the political atmosphere created by the worldwide economic turmoil over the past decade, leaders of the TPS project were frequently second-guessed on every major decision. Only by having the knowledge of a top physicist, the mindset of a peacemaker, the sharp sense of an investment banker and the quality of a versatile politician, were the project leaders able to guide the team to focus unwaveringly on the ultimate goal and turn each crisis into an opportunity.

The post The TPS begins to shine appeared first on CERN Courier.

In search of hidden light https://cerncourier.com/a/viewpoint-in-search-of-hidden-light/ Thu, 09 Apr 2015 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/viewpoint-in-search-of-hidden-light/ Swapan Chattopadhyay discusses the HL-LHC, and the future search for dark matter/energy.

In my journey as a migrant scientist, crossing continents and oceans to serve physics, institutions and nations wherever and whenever I am needed and called upon, CERN has always been the focal point of illumination. It has been a second home to whichever institution and country I have been functioning from, particularly at times of major personal and professional transition. Today, at the completion of yet another major transition across the seas, I am beginning to connect to the community from my current home at Fermilab and Northern Illinois University. Eight years ago, I wrote in this column on “Amazing particles and light” and, serendipitously, I am drawn by CERN’s role in shaping developments in particle physics to comment again in this International Year of Light, 2015.

“For the rest of my life I want to reflect on what light is!”, Albert Einstein exclaimed in 1916. A little later, in the early 1920s, S N Bose proposed a new behaviour for discrete quanta of light in aggregate and explained Planck’s law of “black-body radiation” transparently, leading to a major classification of particles according to quantum statistics. The “photon statistics” eventually became known as the Bose–Einstein statistics, predicting a class of particles known as “bosons”. Sixty years later, in 1983, CERN discovered the W and Z bosons at its Super Proton Synchrotron collider, at what was then the energy frontier. In another 30 years, a first glimpse of a Higgs boson appeared in 2012 at today’s high-energy frontier at the LHC, again at CERN.

Today, CERN’s highest-priority particle-physics project for the future is the High-Luminosity LHC upgrade. However, the organization has also taken the lead in exploring for the long-term future the scientific, technological and fiscal limits of the highest energy scales achievable in laboratory-based particle colliders, via the recently launched Future Circular Collider (FCC) design effort, to be completed by 2018. In this bold initiative, in line with its past tradition, CERN has again taken the progressive approach of basing such colliders on technological innovation, pushing the frontier of high-field superconducting dipole magnets beyond the 16 T range. The ambitious strategy inspires societal aspirations, and has the promise of returning commensurate value to global creativity and collaboration. It also leaves room for a luminous electron–positron collider as a Higgs factory at the energy frontier, either as an intermediate stage in the FCC itself or as a possibility elsewhere in the world, and is complementary to the development of emerging experimental opportunities with neutrino beams at the intensity frontier in North America and Asia.

What a marvellous pursuit it is to reach ever higher energies via brute-force particle colliders in an earth-based laboratory. Much of the physics at the energy frontier, however, is hidden in the so-called “dark sector” of the vacuum. Lucio Rossi wrote in this column last month how light is the most important means to see, helping us to bridge reality with the mind. Yet even light could have a dark side and be invisible – “hidden-sector photons” could have a role to play in the world of dark matter, along with the likes of axions. And dark energy – is it real, what carries it?

All general considerations for the laboratory detection of dark matter and dark energy lead to the requirement of spectacular signal sensitivities with the discrimination of one part in 10²⁵, and an audacious ability to detect possible dark-energy “zero-point” fluctuation signals at the level of 10⁻¹⁵ g. Already today, the electrodynamics of microwave superconducting cavities offers a resonant selectivity of one part in 10²² in the dual “transmitter–receiver” mode. Vacuum, laser and particle/atomic beam techniques promise gravimeters at 10⁻¹² g levels. Can we stretch our imagination to consider eavesdropping on the spontaneous disappearance of the “visible” into the “dark”, and back again? Or of sensing directly in a laboratory setting the zero-point fluctuations of the dark-energy density, complementing the increasingly precise refinement of the nonzero value of the cosmological constant via cosmological observations?

The comprehensive skills base in accelerator, detector and information technologies accumulated across decades at CERN and elsewhere could inspire non-traditional laboratory searches for the “hidden sector” of the vacuum at the cosmic frontier, complementing the traditional collider-based energy frontier.

Like the synergy between harmony and melody in music – as in the notes of the harmonic minor chord of Vivaldi’s Four Seasons played on the violin, and the same notes played melodiously in ascending and descending order in the universal Indian raga Kirwani (a favourite of Bose, played on the esraj) – the energy frontier and the cosmic frontier are tied together intimately in the echoes of the Big Bang, from the laboratory to outer space.

The post In search of hidden light appeared first on CERN Courier.

A luminous future for the LHC https://cerncourier.com/a/a-luminous-future-for-the-lhc/ https://cerncourier.com/a/a-luminous-future-for-the-lhc/#respond Mon, 23 Feb 2015 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/a-luminous-future-for-the-lhc/ The latest news on the high-luminosity upgrade scheduled for 10 years from now.

To maintain scientific progress and exploit the full capacity of the LHC, the collider will need to operate at higher luminosity. Like shining a brighter light on an object, this will allow more accurate measurements of new particles and the observation of rarer processes, and will increase the discovery reach with rare events at the high-energy frontier. The High-Luminosity LHC (HL-LHC) project began in 2011 as a conceptual study under the framework of a European Union (EU) grant, with the aim of increasing the luminosity by a factor of 5–10 beyond the original design value and providing 3000 fb⁻¹ in 10 to 12 years.
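
A quick consistency check of those targets, with assumed numbers rather than official HL-LHC parameters: collecting 3000 fb⁻¹ over roughly 11 years, with of order 1.2 × 10⁷ seconds of physics running per year, implies an average instantaneous luminosity a few times the original LHC design value of 10³⁴ cm⁻² s⁻¹.

TARGET_FB_INV = 3000.0
YEARS = 11.0
SECONDS_PER_YEAR = 1.2e7   # assumed annual physics time
CM2_PER_FB_INV = 1.0e39    # 1 fb^-1 corresponds to 1e39 cm^-2

avg_lumi = TARGET_FB_INV * CM2_PER_FB_INV / (YEARS * SECONDS_PER_YEAR)
print(f"average luminosity ~ {avg_lumi:.1e} cm^-2 s^-1")  # ~2.3e34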

Two years later, CERN Council recognized the project as the top priority for CERN and for Europe (CERN Courier July/August 2013 p9), and then confirmed its priority status in CERN’s scientific and financial programme in 2014 by approving the laboratory’s medium-term plan for 2014–2019. Since this approval, new activities have started up to deliver key technologies that are needed for the upgrade. The latest results and recommendations by the various reviews that took place in 2014 were the main topics for discussion at the 4th Joint HiLumi LHC/LARP Annual Meeting, which was hosted by KEK in Tsukuba in November.

The latest updates

The event began with plenary sessions where members of the collaboration management – from CERN, KEK, the US LHC Accelerator Research Program (LARP) and the US Department of Energy – gave invited talks. The first plenary session closed with an update on the status of HL-LHC by the project leader, CERN’s Lucio Rossi, who also officially announced the new HL-LHC timeline. The plenary was followed by expert talks on residual dose-rate studies, layout and integration, optics and operation modes and progress on cooling, quench and assembly (together known as QXF). Akira Yamamoto of KEK presented the important results and recommendations of the recent superconducting cable review.

There were invited talks on the LHC Injectors Upgrade (LIU) by project leader Malika Meddahi from CERN, and on the outcomes of the 2nd ECFA HL-LHC Experiments Workshop held in October – an indication of the close collaboration with the experimentalists. One of the highlights of the plenaries was the status update on the Preliminary Design Report – the main deliverable of the project, which is to be published soon. There were three days of parallel sessions reviewing the progress in design and R&D in the various work packages – named in terms of activities – both with and without EU funding.

Refined optics and layout of the high-luminosity insertions have been provided by the activity on accelerator physics and performance, in collaboration with the other work packages. This new baseline takes into account the updated design of the magnets (in particular those of the matching section), the results of the energy deposition and collimation studies, and the constraints resulting from the integration of the components in the tunnel. The work towards the definition of the specifications for the magnets and their field quality has progressed, with an emphasis on the matching section for which a first iteration based on the requirements resulting from studies of beam dynamics has been completed. The outcomes include an updated impedance model of the LHC and a preliminary estimate of the resulting intensity limits and beam–beam effects. The studies confirmed the need for low-impedance collimators. In addition, an updated set of beam parameters consistent through all of the injectors and the LHC has been defined in collaboration with the LIU team.

Progress with magnets

The main efforts of the activity on magnets for insertion regions (IRs) in the past 18 months focused on the exploration of different options for the layout of the interaction region. The main parameters of the magnet lattice, such as operational field/gradients, apertures, lengths and magnet technology, have been chosen as a result of the worldwide collaboration, including US LARP and KEK. A baseline for the layout of the new interaction region is one of the main results of this work. There is now a coherent layout, agreed with the beam dynamics, energy deposition, cooling and vacuum teams, covering the whole interaction region.

The engineering design of most of the IR magnets has now started and the first hardware tests are expected in 2015. There was also good news from the quench-protection side, which can meet all of the key requirements based on the results from tests performed on the magnets. In addition, there is a solution for cooling the inner triplet (IT) quadrupoles and the separation dipole, D1. It relies on two heat exchangers for the IT quadrupole/orbit correctors assembly, with a separate system for the D1 dipole and the high-order corrector magnets. Besides these results, considerable effort was devoted to selecting the technologies and the design for the other magnets required in the lattice, namely the orbit correctors, the high-order correctors and the recombination dipole, D2.

Crabs and collimators

The crab-cavities activity delivered designs for three prototype crab cavities, based on four-rod, RF-dipole (RFD) and double quarter-wave (DQW) structures. They were all tested successfully against the design gradient with higher-than-expected surface resistance. Further design improvements to the initial prototypes were made to comply with the strict requirements for higher-order-mode damping, while maintaining the deflecting field performance. There was significant progress on the engineering design of the dressed cavities and the two-cavity cryomodule conceptual design for tests at CERN’s Super Proton Synchrotron (SPS).

Full design studies, including thermal and mechanical analysis, were done for all three cavities, culminating in a major international design review where the three designs were assessed by a panel of independent leading superconducting RF experts. As an outcome of this review, the activity will focus the design effort for the SPS beam tests on the RFD and DQW cavities, with development of the 4-rod cavity continuing at a lower priority and not foreseen for the SPS tests. A key milestone – to freeze the cavity designs and interfaces – has also been met. In addition, a detailed road map to follow the fabrication and installation in the SPS has been prepared to meet the deadline of the extended year-end technical stop of 2016–2017.

The wrap-up talk on the IR-collimation activity also reviewed the work of related non-EU-funded work packages, namely machine protection (WP7), energy deposition and absorber co-ordination (WP10), and beam transfer and kickers (WP14). The activity has reached several significant milestones, following the recommendations of the collimation-project external review, which took place in spring 2013. Highlights include important progress towards the finalization of the layouts for the IR collimation. A solid baseline solution has been proposed for the two most challenging cleaning requirements: proton losses around the betatron-cleaning insertion and losses from ion collisions. The solution is based on new collimators – the target collimator long dispersion suppressor, or TCLD – to be integrated into the cold dispersion suppressors. Thanks to the use of shorter 11 T dipoles that will replace the existing 15-m-long dipoles, there will be sufficient space for the installation of warm collimators between two cold magnets. This collimation solution is elegant and modular because it can be applied, in principle, at any “old” dipole location. As one of the most challenging and urgent upgrades for the high-luminosity era, solid baselines for the collimation upgrade in the dispersion suppressors around IR7 and IR2 were also defined. In addition, simulations have continued for advanced collimation layouts in the matching sections of IR1 and IR5, improving significantly the cleaning of “debris” from collisions downstream around the high-luminosity experiments.

Cold powering

The cold-powering activity has seen the world-record current of 20 kA at 24 K in an electrical transmission line consisting of two 20-m-long MgB2 superconducting cables. Another achievement was with the novel design of the part of the cold-powering system that transfers the current from room temperature to the superconducting link. Following further elaboration, this was adopted as the baseline. The idea is that high-temperature superconducting (HTS) current-leads will be modular components that are connected via a flexible HTS cable to a compact cryostat, where the electrical joints between the HTS and MgB2 parts of the superconducting link are made. Simulation studies were also made to evaluate the electromagnetic and thermal behaviour of the MgB2 cables contained in the cold mass of the superconducting link, under static and transient conditions.

The final configuration has tens of high-current cables packed in a compact envelope to transfer a total current of about 150 kA feeding different magnet circuits. Cryogenic-flow schemes were also elaborated for the cold-powering systems at points 7, 1 and 5 on the LHC. An experimental study performed in the 20-m-long superconducting line at CERN was launched to understand quench propagation in the MgB2 superconducting cables operated in helium gas. In addition, integration studies of the cold-powering systems in the LHC were also done, with priority given to the system at point 7.

The meeting also covered updates on other topics such as machine protection, cryogenics, vacuum and beam instrumentation. Delicate arbitration took place between the needs of crab-cavity tests in the SPS at long straight section 4 and the requirements for the continuing study and tests of electron-cloud mitigation of those working on vacuum aspects (see Old machine to validate new technology below).

Summaries of the EU-funded work packages closed the meeting, showing “excellent technical progress thanks to the hard and smart work of many, including senior and junior”, as project leader Rossi concluded in his wrap-up talk.

Upcoming meetings will be the LARP/HiLumi LHC meeting on 11–13 May at Fermilab and the final FP7 HiLumi LHC/LARP collaboration meeting on 26–30 October at CERN. As a contribution to the UNESCO International Year of Light, special events celebrating this occasion will be organized by HL-LHC throughout the year – see cern.ch/go/light. (See also Viewpoint.)

Old machine to validate new technology

Crab cavities have never been tested on hadron beams. So for the recently selected HL-LHC crab cavities (RFD and DQW, see main text), tests in the SPS beam are considered to be crucial. The goals are to validate the cavities with beam in terms of, for example, electric field, ramping, RF controls and impedance, and to study other parameters such as cavity transparency, RF noise, emittance growth and nonlinearities.

Long straight section 4 (LSS4) of the SPS already has a cold section, which was set up for the cold-bore experiment (COLDEX). Originally designed to measure synchrotron-radiation-induced gas release, COLDEX has become a key tool for evaluating electron-cloud effects. It mimics the cold bore and beam screen of the LHC for electron-cloud studies. Installed in the bypass line of the beam pipe, COLDEX is assembled on a moving table so that beam can pass either through the experiment during machine development runs or through the standard SPS beam pipe during normal operation. It has been running since the SPS started up again last year after the first long shutdown, providing key information on new materials and technology to reduce or suppress severe electron-cloud effects that would otherwise be detrimental to LHC beams with 25 ns bunch spacing – as planned for Run 2.

Naturally, SPS LSS4 would be the right place to put the crab-cavity prototypes for the beam test. The goal was originally to install them during the extended year-end technical stop of 2016–2017, to validate the cavities in 2017, the year in which series construction must be launched. However, installing the cavities together with their powering and cryogenic infrastructure in an access time of 11–12 weeks is a real challenge. So at the meeting in Tsukuba, the idea of bringing forward part of the installation to 2015–2016 was discussed. However, in view of the severe electron-cloud effects that were computed in 2014 for LHC beam at high intensity, and the consequent need for a longer and deeper study to validate various solutions, COLDEX needs to run beyond 2015.

So what other options are there for testing the crab cavities? A preliminary look at possible locations for an additional cold section in the SPS led to LSS5. This would result in having two permanent “super” facilities to test equipment with proton beams. The hope is that these facilities would not only be available for testing crab cavities for the HL-LHC project, but would also provide a world facility for testing superconducting RF-accelerating structures in intense, high-energy proton beams. With the installation of adequate cryogenics and power infrastructure, a facility in LSS5 could further evolve and possibly also allow tests of beam damage and other beam effects for future superconducting magnets, for example for the Future Circular Collider study (CERN Courier April 2014 p16). This new idea raises many questions, but the experts are confident that these can be solved with suitable design and imagination.

The post A luminous future for the LHC appeared first on CERN Courier.

Narrowing down the ‘stealth stop’ gap with ATLAS https://cerncourier.com/a/narrowing-down-the-stealth-stop-gap-with-atlas/ https://cerncourier.com/a/narrowing-down-the-stealth-stop-gap-with-atlas/#respond Tue, 27 Jan 2015 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/narrowing-down-the-stealth-stop-gap-with-atlas/ In late 2011, ATLAS launched a dedicated programme targeting searches for the supersymmetric partner of the top quark – the scalar top, or "stop" – which could be pair-produced in high-energy proton–proton collisions.

In late 2011, ATLAS launched a dedicated programme targeting searches for the supersymmetric partner of the top quark – the scalar top, or “stop” – which could be pair-produced in high-energy proton–proton collisions. If not much heavier than the top quark, this new particle is expected to play a key role in explaining why the Higgs boson is light.

While earlier supersymmetry (SUSY) searches at the LHC have already set stringent exclusion limits on strongly produced SUSY particles, these generic searches were not very sensitive to the stop. If it exists, the stop could decay in a number of ways, depending on its mass and other SUSY parameters. Most of the searches at the LHC assume that the stop decays to the lightest SUSY particle (LSP) and one or more Standard Model particles. The LSP is typically assumed to be stable and only weakly interacting, making it a viable candidate for dark matter. Events with stop-pair production would therefore feature large missing transverse momentum as the two resulting LSPs escape the detector.

The first set of results from the searches by ATLAS was presented at the International Conference on High-Energy Physics (ICHEP) in 2012. A stop with mass between around 225 and 500 GeV for a nearly massless LSP was excluded for the simplest decay mode. Exclusion limits were also set for more complex stop decays.

These searches revealed a sensitivity gap when the stop is about as heavy as the top quark – a scenario that is particularly interesting and well motivated theoretically. Such a “stealth stop” hides its presence in the data, because it resembles the top quark, which is pair-produced roughly six times more abundantly.

Use of the full LHC Run-1 data set, together with the development of novel analysis techniques, has pushed the stop exclusion in all directions. The figure shows the ATLAS limits as of the ICHEP 2014 conference, in the plane of LSP mass versus stop mass for each of the following stop decays: to an on-shell top quark and the LSP (right-most area); to an off-shell top quark and the LSP (middle area); to a bottom quark, off-shell W boson, and the LSP (left-most grey area); or to a charm quark and the LSP (left-most pink area). The exclusion is achieved by the complementarity of four targeted searches (ATLAS Collaboration 2014a–2014d). The results eliminate a stop of mass between approximately 100 and 700 GeV (lower masses were excluded by data from the Large Electron–Positron collider) for a light LSP. Gaps in the excluded region for intermediate stop masses are reduced but persist, including the prominent region corresponding to the stealth stop.

Standard Model top-quark measurements can be exploited to get a different handle on the potential presence of a stealth stop. The latest ATLAS high-precision top–antitop cross-section measurement, together with a state-of-the-art theoretical prediction, has allowed ATLAS to exclude a stealth stop between the mass of the top quark and 177 GeV, for a stop decaying to a top quark and the LSP.
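
The discriminating power of this approach follows from simple counting (a back-of-the-envelope estimate on our part, not an ATLAS number). Because top pairs are produced roughly six times more abundantly than stop pairs of similar mass, a stealth stop decaying to the same final state would inflate the apparent top-pair cross-section by up to

\frac{\sigma_{\tilde{t}\tilde{t}^{*}}}{\sigma_{t\bar{t}}} \approx \frac{1}{6} \approx 17\%,

so a percent-level cross-section measurement, confronted with an equally precise prediction, can probe the stealth region.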

The measurement of the top–antitop spin correlation adds extra sensitivity because the stop and the top quark differ by half a unit in spin. The latest ATLAS measurement (ATLAS Collaboration 2014e) uses the distribution of the azimuthal angle between the two leptons from the top decays, together with cross-section information, to extend the limit for the stealth stop up to 191 GeV.
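
The observable itself is elementary; a minimal sketch (ours, not the ATLAS implementation) of the azimuthal separation between the two leptons, wrapped into [0, π], is:

import math

def delta_phi(phi1, phi2):
    """Azimuthal separation between two leptons, wrapped into [0, pi]."""
    dphi = abs(phi1 - phi2) % (2.0 * math.pi)
    return 2.0 * math.pi - dphi if dphi > math.pi else dphi

print(delta_phi(0.4, -2.9))  # ~2.98; the analysis compares the shape of
                             # this distribution with the SM expectation.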

The rigorous search programme undertaken by ATLAS has ruled out large parts of the interesting stop parameter space and closed in on the stealth stop. It leaves the door open for the discovery of a stop beyond the current mass reach, or in the remaining sensitivity gaps, at the higher-energy and higher-luminosity LHC Run 2.

ARIEL begins a new future in rare isotopes https://cerncourier.com/a/ariel-begins-a-new-future-in-rare-isotopes/ https://cerncourier.com/a/ariel-begins-a-new-future-in-rare-isotopes/#respond Tue, 27 Jan 2015 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/ariel-begins-a-new-future-in-rare-isotopes/ First beam in the superconducting linac marks a fine start for TRIUMF’s new flagship facility.

TRIUMF is Canada’s national laboratory for particle and nuclear physics, located in Vancouver. Founded in 1968, the laboratory’s particle-accelerator-driven research has grown from nuclear and particle physics to include vibrant programmes in materials science, nuclear medicine and accelerator science, while maintaining strong particle-physics activities elsewhere, for example at CERN and the Japan Proton Accelerator Research Complex. Currently, the laboratory’s flagship on-site programme uses rare-isotope beams (RIBs) for both discovery and application in the physical and health sciences.

Rare isotopes are not found in nature, yet they have properties that have shaped the evolution of the universe in fundamental ways, from powering the burning of stars to generating the chemical elements that make up life on Earth. These isotopes are foundational for modern medical-imaging techniques, such as positron-emission tomography and single-photon emission computed tomography, and are useful for therapeutic purposes, including the treatment of cancer tumours. They are also powerful tools for scientific discovery, for example in determining the structure and dynamics of atomic nuclei, understanding the processes by which heavy elements in the universe were created, enabling precision tests of fundamental symmetries that could challenge the Standard Model of particle physics, and serving as probes of the interfaces between materials.

TRIUMF’s Isotope Separator and Accelerator (ISAC) is one of the world’s premier RIB facilities. Its combination of high proton-beam power (up to 50 kW) to produce the rare isotopes, a chain of accelerators that propels them to energies of 6–18 MeV per nucleon (for heavy- and light-mass beams, respectively), and experimental equipment to measure their properties is unmatched in the world.

The Advanced Rare IsotopE Laboratory (ARIEL) was conceived to expand these capabilities in important new directions, and to establish TRIUMF as a world-leading laboratory in accelerator technology and in rare-isotope research for science, medicine and business. To expand the number and scope of RIBs feeding TRIUMF’s experimental facilities, ARIEL will add two high-power driver beams – one electron and one proton – and two new isotope production-target and transport systems.

Together with the existing ISAC station, the two additional target stations will triple the current isotope-production capacity, enable full utilization of the existing experimental facilities, and satisfy researcher demand for isotopes used in nuclear astrophysics, fundamental nuclear studies and searches for new particle physics, as well as in characterizing materials and in medical-isotope research. In addition, ARIEL will deliver important social and economic impacts, in the production of medical isotopes for targeted cancer therapy, in the characterization of novel materials, and in the continued advancement of accelerator technology in Canada, both at the laboratory and in partnership with industry.

The e-linac

ARIEL-I, the first stage of ARIEL, was funded in 2010 by the Canada Foundation for Innovation (CFI), the British Columbia Knowledge Development Fund, and the Canadian government. It comprises the ARIEL building (figure 1), completed in 2013, and a 25 MeV, 100 kW superconducting radio-frequency (SRF) electron linear accelerator (e-linac), which is the first stage of a new electron driver designed ultimately to achieve 50 MeV and 500 kW for the production of radioactive beams via photo-fission.

The ARIEL-I e-linac, which accelerated its first beam to 23 MeV in September 2014, is a state-of-the-art accelerator featuring a number of technological breakthroughs (figure 2). The 10 mA continuous wave (cw) electron beam is generated in a 300 kV DC thermionic gridded-cathode assembly modulated at 650 MHz, bunched by a room-temperature 1.3 GHz RF structure, and accelerated using up to five 1.3 GHz superconducting cavities, housed in one 10 MeV injector cryomodule (ICM) and two accelerator cryomodules, each providing 20 MeV energy gain.
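
These figures hang together by elementary arithmetic (our cross-check, not a TRIUMF specification): beam power is beam energy multiplied by average current,

P = E \times I, \qquad 50\ \text{MeV} \times 10\ \text{mA} = 500\ \text{kW},

which matches the ultimate design goal, while the ARIEL-I stage’s 100 kW at 25 MeV corresponds to an average current of 4 mA, comfortably within the 10 mA capability of the source.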

The design and layout of the e-linac are compatible with a future recirculation arc that can be tuned either for energy-recovery or energy-doubling operation. The electron source, designed and constructed at TRIUMF, exhibits reduced field emission and a novel modulation scheme: the RF power is transmitted via a ceramic waveguide between the grounded vessel and the gun, so the amplifier sits at ground potential. The source has been successfully tested to the full current specification of 10 mA cw. Specially designed short quadrupoles (figure 3) minimize electron-beam aberrations by shaping the poles to be locally spherical, with a radius of 4π times the aperture radius (Baartman 2012).

The injector and accelerator cryomodules house the SRF cavities (figure 4), which are cooled to 2 K and each driven by a 300 kW klystron. To take advantage of prior developments – and to contribute to future projects – TRIUMF chose the 1.3 GHz technology, the same as that used by other global accelerator projects including the XFEL in Hamburg, the LCLS-II at SLAC, and the proposed International Linear Collider.

Through technology transfer from TRIUMF, the Canadian company PAVAC Industries Inc. fabricated the niobium cavities and TRIUMF constructed the cryomodules, based on the ISAC top-loading design (figure 2). The TRIUMF/PAVAC collaboration, which goes back to 2005, was born from the vision of “made in Canada” superconducting accelerators. Now, 10 years later, the relationship is a glowing example of a positive partnership between industry and a research institute.

International partnerships have been essential in facilitating technical developments for the e-linac. In 2008, TRIUMF went into partnership with the Variable Energy Cyclotron Centre (VECC) in Kolkata, for joint development of the ICM and the construction of two of them: one for ARIEL and one for ANURIB, India’s next-generation RIB facility, which is being constructed in Kolkata. In 2013, the collaboration was extended to include the development of components for ARIEL’s next phase, ARIEL-II. In addition, collaborations with Fermilab, the Helmholtz Zentrum Berlin and DESY were indispensable for the project.

ARIEL’s development is continuing with ARIEL-II, which will complete the e-linac and add the new proton driver, production targets and transport systems in preparation for first science in 2017. Funding for ARIEL-II has been requested from the CFI on behalf of 19 universities, led by the University of Victoria, and matching funds are being sought from five Canadian provinces.

ARIEL will bring unprecedented capabilities:

• The multi-user RIB capability will not only triple the RIB hours delivered to users, but also increase the richness of the science by enabling long-running experiments for fundamental symmetries that are currently impractical.

• Photo-fission will allow the production of very neutron-rich isotopes at unprecedented intensities for precision studies of r-process nuclei.

• The multi-user capability will establish depth-resolved β-detected NMR as a user facility, unique in the world.

• High production rates of novel alpha-emitting heavy nuclei will accelerate the development of targeted alpha tumour therapy.

The new facility will also provide important societal benefits. In addition to the economic benefits from the commercialization of accelerator technologies (e.g. PAVAC), ARIEL will expand TRIUMF’s outstanding record in student development through participation in international collaborations and training in advanced instrumentation and accelerator technologies. The e-linac has provided the impetus to form Canada’s first graduate programme in accelerator physics. One of only a few worldwide, the programme is in high demand globally and has already produced award-winning graduates.

ARIEL is not only the future of TRIUMF; it also embodies the mission of TRIUMF at large: scientific excellence, societal impact and economic benefit. And it is off to a great start.

The promise of boosted topologies https://cerncourier.com/a/the-promise-of-boosted-topologies/ https://cerncourier.com/a/the-promise-of-boosted-topologies/#respond Mon, 27 Oct 2014 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/the-promise-of-boosted-topologies/ While analyses are progressing to ascertain the consistency of the new boson discovered at the LHC with the Standard Model Higgs boson (H), the LHC collaborations continue to develop tools in their search for new physics that could lead beyond the Standard Model, and cast light on the many fundamental open questions that remain.

While analyses are progressing to ascertain the consistency of the new boson discovered at the LHC with the Standard Model Higgs boson (H), the LHC collaborations continue to develop tools in their search for new physics that could lead beyond the Standard Model, and cast light on the many fundamental open questions that remain.

The LHC can now reach energies far above those needed to produce Standard Model particles such as W/Z/H bosons and top quarks. The extra energy results in massive final-state particles with high Lorentz boosts (γ > 2), i.e. “boosted topologies”. Searches for new physics at the LHC often involve these boosted topologies, so the particle physicists’ toolkit must be extended to handle them. This includes investigating non-isolated leptons, overlapping jets that contain “substructure” from the decay of Standard Model particles, and bottom-quark jets that merge with nearby jets. Classical techniques fail to capture these challenging topologies, so new techniques must be developed to ensure the broadest sensitivity to new physics.
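
A standard rule of thumb (textbook kinematics, not a CMS algorithm) makes “boosted” quantitative: the decay products of a particle of mass m and transverse momentum pT are collimated within a cone of angular radius roughly ΔR ≈ 2m/pT, as the short Python sketch below illustrates for the top quark:

# Rule-of-thumb sketch: cone size containing the decay products of a
# particle with mass m (GeV) and transverse momentum pt (GeV).
M_TOP = 173.2

def decay_cone_radius(m, pt):
    return 2.0 * m / pt

for pt in (200.0, 400.0, 800.0):
    print(f"pt = {pt:5.0f} GeV -> Delta-R ~ {decay_cone_radius(M_TOP, pt):.2f}")
# Once Delta-R falls below a typical jet radius of ~0.8, the three quarks
# from t -> b q q' merge into a single jet with internal substructure.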

Much theoretical and experimental progress has been made during the past few years to analyse these topologies. The CMS collaboration has now published searches involving boosted W/Z/H bosons and top quarks, using a large suite of tools to improve sensitivity by factors of around 10 over classical techniques. This suite includes identifying leptons within boosted top-quark decays, identifying W and top-mass peaks inside merged jets, and identifying bottom-quark jets embedded within merged jets.

Figure 1 shows an event display of a boosted top-quark candidate recorded by CMS in 2012. The energy deposits in the calorimeters are shown as blue and green boxes, while the tracks are indicated with coloured lines. This jet has been found to exhibit a three-prong substructure that has been resolved with dedicated algorithms.

In the first analyses using these techniques, large improvements have been observed in high-mass sensitivity. Figure 2 shows the observed limits for a tt̄ resonance search with and without these boosted techniques. The blue line highlights the sensitivity of such a search using traditional, non-boosted techniques, while the red and orange lines highlight the sensitivity using boosted techniques. At a resonance mass of 2 TeV, the boosted techniques are 10 times more sensitive than the traditional ones.

This is just one of many analyses in which these new techniques have been deployed (see further reading below), and with a firm grasp on the relevant physics gained from experience in the LHC’s Run 1, CMS is now poised to apply the techniques broadly in Run 2.

Neutrinos cast light on coherent pion production https://cerncourier.com/a/neutrinos-cast-light-on-coherent-pion-production/ https://cerncourier.com/a/neutrinos-cast-light-on-coherent-pion-production/#respond Mon, 27 Oct 2014 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/neutrinos-cast-light-on-coherent-pion-production/ Experiments at Fermilab are advancing an intriguing story that began three decades ago, with investigations of coherent neutrino interactions that produce pions yet leave the target nucleus unscathed.

Experiments at Fermilab are advancing an intriguing story that began three decades ago, with investigations of coherent neutrino interactions that produce pions yet leave the target nucleus unscathed.

When neutrinos scatter coherently off an entire nucleus, the exchange of a Z⁰ or W± boson can lead to the production of a pion with the same charge. The first observations of such interactions came in the early 1980s from the Aachen–Padova experiment at CERN’s Proton Synchrotron, followed by an analysis of earlier data from Gargamelle. A handful of other experiments at CERN, Fermilab and Serpukhov provided additional measurements before the end of the 1990s. These experiments determined interaction cross-sections for high-energy neutrinos (5–100 GeV), which were in good agreement with the model of Dieter Rein and Lalit Sehgal of Aachen. Published shortly after the first measurements were made, their model is still used in some Monte Carlo simulations.
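
The defining kinematic feature of coherence (textbook physics, not specific to these experiments) is that the squared four-momentum transfer to the nucleus must stay small compared with the inverse square of the nuclear radius,

|t| \lesssim \frac{1}{R^{2}}, \qquad R \approx 1.2\, A^{1/3}\ \text{fm},

so that the nucleons respond in phase and the nucleus recoils intact rather than breaking up.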

More recently, the SciBooNE and K2K collaborations attempted to measure the coherent production of charged pions at lower neutrino energies (less than 2 GeV). However, they found no evidence of the interaction, and published upper limits below Rein and Sehgal’s original estimate. These results, together with recent observations of coherent production of neutral pions by the MiniBooNE and NOMAD collaborations, have motivated renewed interest and new models of coherent pion production.

In the NuMI beamline at Fermilab – whose neutrino-energy spectrum peaks at 3.5 GeV and extends beyond 20 GeV – coherent charged-current pion production accounts for only about 1% of neutrino interactions. Nevertheless, both the ArgoNeuT and MINERvA collaborations have now successfully measured the cross-sections for coherent charged-current pion production by recording the interactions of neutrinos and antineutrinos.

ArgoNeuT uses a liquid-argon time-projection chamber (TPC), and has results for coherent interactions of antineutrinos and neutrinos at mean energies of 3.6 GeV and 9.6 GeV, respectively (Acciarri et al. 2014). A very limited exposure produced only 30 candidates for coherent interactions of antineutrinos and 24 for neutrinos (figure 1), but a measurement was possible thanks to the high resolution and precise calorimetry achieved by the TPC. It is the first time that this interaction has been measured in a liquid-argon detector. ArgoNeuT’s results agree with the state-of-the-art theoretical predictions (figure 2), but its small detector size (<0.5 tonnes) limits the precision of the measurements.
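
The price of such a small exposure is statistical (our estimate, ignoring backgrounds and systematics): the relative statistical uncertainty of a counting measurement scales as

\frac{\delta N}{N} \approx \frac{1}{\sqrt{N}} \approx 18\%\ (N = 30) \quad \text{and} \quad 20\%\ (N = 24),

to be compared with roughly 4% and 2.5% for the much larger MINERvA samples described below.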

MINERvA uses a fine-grained scintillator tracker to fully reconstruct and select the coherent interactions in a model-independent analysis. With 770 antineutrino and 1628 neutrino candidates, the experiment measured the cross-section as a function of incident antineutrino and neutrino energy (figure 2). The measured spectrum and angle of the coherently produced pions are not consistent with the models used by oscillation experiments (Higuera et al. 2014), and the new measurements will be used to correct those models.

The techniques developed during both the ArgoNeuT and MINERvA analyses will be used by larger liquid-argon experiments, such as MicroBooNE, that are part of the new short-baseline neutrino programme at Fermilab. While these experiments will focus on neutrino oscillations and the search for new physics, they will also provide more insight into coherent pion production.
