Theory Archives – CERN Courier
https://cerncourier.com/c/theory/
Reporting on international high-energy physics

Quantum simulators in high-energy physics
https://cerncourier.com/a/quantum-simulators-in-high-energy-physics/
Enrique Rico Ortega and Sofia Vallecorsa explain how quantum computing will allow physicists to model complex dynamics, from black-hole evaporation to neutron-star interiors.

In 1982 Richard Feynman posed a question that challenged computational limits: can a classical computer simulate a quantum system? His answer: not efficiently. The complexity of the computation grows exponentially with the size of the quantum system, rendering realistic simulations intractable. To understand why, consider the basic units of classical and quantum information.

A classical bit can exist in one of two states: |0> or |1>. A quantum bit, or qubit, exists in a superposition α|0> + β|1>, where α and β are complex amplitudes with real and imaginary parts. This superposition is the core feature that distinguishes quantum bits from classical bits. While a classical bit is either |0> or |1>, a quantum bit can be a blend of both at once. This is what gives quantum computers their immense parallelism – and also their fragility.

The difference becomes profound with scale. Two classical bits have four possible states, and are always in just one of them at a time. Two qubits simultaneously encode a complex-valued superposition of all four states.

Resources scale exponentially. N classical bits encode N boolean values, but N qubits encode 2^N complex amplitudes. Simulating 50 qubits with double-precision real numbers for each part of the complex amplitudes would require more than a petabyte of memory, beyond the reach of even the largest supercomputers.
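
The arithmetic behind that claim can be checked in a few lines of Python (a minimal sketch; the double-precision storage assumption is the one stated above, and "petabyte" is taken as 10^15 bytes):

```python
# Memory needed to store the state vector of an N-qubit system.
n_qubits = 50
amplitudes = 2 ** n_qubits              # 2^N complex amplitudes for N qubits
bytes_per_amplitude = 2 * 8             # two double-precision floats (real + imaginary parts)
memory_bytes = amplitudes * bytes_per_amplitude
print(f"{memory_bytes / 1e15:.1f} PB")  # ~18 PB, far beyond any single machine's memory
```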

Direct mimicry

Feynman proposed a different approach to quantum simulation. If a classical computer struggles, why not use one quantum system to emulate the behaviour of another? This was the conceptual birth of the quantum simulator: a device that harnesses quantum mechanics to solve quantum problems. For decades, this visionary idea remained in the realm of theory, awaiting the technological breakthroughs that are now rapidly bringing it to life. Today, progress in quantum hardware is driving two main approaches: analog and digital quantum simulation, in direct analogy to the history of classical computing.

Optical tweezers

In analog quantum simulators, the physical parameters of the simulator directly correspond to the parameters of the quantum system being studied. Think of it like a wind tunnel for aeroplanes: you are not calculating air resistance on a computer but directly observing how air flows over a model.

A striking example of an analog quantum simulator traps excited Rydberg atoms in precise configurations using highly focused laser beams known as “optical tweezers”. Rydberg atoms have one electron excited to an energy level far from the nucleus, giving them an exaggerated electric dipole moment that leads to tunable long-range dipole–dipole interactions – an ideal setup for simulating particle interactions in quantum field theories (see “Optical tweezers” figure).

The positions of the Rydberg atoms discretise the space inhabited by the quantum fields being modelled. At each point in the lattice, the local quantum degrees of freedom of the simulated fields are embodied by the internal states of the atoms. Dipole–dipole interactions simulate the dynamics of the quantum fields. This technique has been used to observe phenomena such as string breaking, where the force between particles pulls so strongly that the vacuum spontaneously creates new particle–antiparticle pairs. Such quantum simulations model processes that are notoriously difficult to calculate from first principles using classical computers (see “A philosophical dimension” panel).

Universal quantum computation

Digital quantum simulators operate much like classical digital computers, though using quantum rather than classical logic gates. While classical logic manipulates classical bits, quantum logic manipulates qubits. Because quantum logic gates obey the Schrödinger equation, they preserve information and are reversible, whereas most classical gates, such as “AND” and “OR”, are irreversible. Many quantum gates have no classical equivalent, because they manipulate phase, superposition or entanglement – a uniquely quantum phenomenon in which two or more qubits share a combined state. In an entangled system, the state of each qubit cannot be described independently of the others, even if they are far apart: the global description of the quantum state is more than the combination of the local information at every site.

A philosophical dimension

The discretisation of space by quantum simulators echoes the rise of lattice QCD in the 1970s and 1980s. Confronted with the non-perturbative nature of the strong interaction, Kenneth Wilson introduced a method to discretise spacetime, enabling numerical solutions to quantum chromodynamics beyond the reach of perturbation theory. Simulations on classical supercomputers have since deepened our understanding of quark confinement and hadron masses, catalysed advances in high-performance computing, and inspired international collaborations. Lattice QCD has become an indispensable tool in particle physics (see “Fermilab’s final word on muon g-2”).

In classical lattice QCD, the discretisation of spacetime is just a computational trick – a means to an end. But in quantum simulators this discretisation becomes physical. The simulator is a quantum system governed by the same fundamental laws as the target theory.

This raises a philosophical question: are we merely modelling the target theory or are we, in a limited but genuine sense, realising it? If an array of neutral atoms faithfully mimics the dynamical behaviour of a specific gauge theory, is it “just” a simulation, or is it another manifestation of that theory’s fundamental truth? Feynman’s original proposal was, in a sense, about using nature to compute itself. Quantum simulators bring this abstract notion into concrete laboratory reality.

By applying sequences of quantum logic gates, a digital quantum computer can model the time evolution of any target quantum system. This makes digital quantum simulators flexible and scalable in pursuit of universal quantum computation – logic able to run any algorithm allowed by the laws of quantum mechanics, given enough qubits and sufficient time. Universal quantum computing requires only a small subset of the many quantum logic gates that can be conceived, for example Hadamard, T and CNOT. The Hadamard gate creates a superposition: |0> → (|0> + |1>)/√2. The T gate applies a 45° phase rotation: |1> → e^(iπ/4)|1>. And the CNOT gate entangles qubits by flipping a target qubit if a control qubit is |1>. These three suffice to prepare any quantum state from a trivial reference state: |ψ> = U1 U2 U3 … UN |0000…000>.
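
To make those definitions concrete, here is a minimal NumPy sketch (standard textbook matrices and qubit ordering, assumed rather than taken from the article) showing the Hadamard gate creating a superposition, the T gate adding a phase, and the CNOT gate producing an entangled pair:

```python
import numpy as np

# Matrix forms of the three gates named above, acting on explicit state vectors.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard
T = np.diag([1, np.exp(1j * np.pi / 4)])            # T gate: 45-degree phase on |1>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                     # flip the target qubit if the control is |1>

ket0 = np.array([1, 0], dtype=complex)
plus = H @ ket0                                      # (|0> + |1>)/sqrt(2)
print(np.round(plus, 3))                             # [0.707 0.707]
print(np.round(T @ plus, 3))                         # phase of pi/4 applied to the |1> component

bell = CNOT @ np.kron(plus, ket0)                    # an entangled two-qubit (Bell) state
print(np.round(bell, 3))                             # [0.707 0. 0. 0.707]
```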

Trapped ions

To bring frontier physics problems within the scope of current quantum computing resources, the distinction between analog and digital quantum simulations is often blurred. The complexity of simulations can be reduced by combining digital gate sequences with analog quantum hardware that aligns with the interaction patterns relevant to the target problem. This is feasible as quantum logic gates usually rely on native interactions similar to those used in analog simulations. Rydberg atoms are a common choice. Alongside them, two other technologies are becoming increasingly dominant in digital quantum simulation: trapped ions and superconducting qubit arrays.

Trapped ions offer the greatest control. Individual charged ions can be suspended in free space using electromagnetic fields. Lasers manipulate their quantum states, inducing interactions between them. Trapped-ion systems are renowned for their high fidelity (meaning operations are accurate) and long coherence times (meaning they maintain their quantum properties for longer), making them excellent candidates for quantum simulation (see “Trapped ions” figure).

Superconducting qubit arrays promise the greatest scalability. These tiny superconducting circuits act as qubits when cooled to extremely low temperatures and manipulated with microwave pulses. This technology is at the forefront of efforts to build quantum simulators and digital quantum computers for universal quantum computation (see “Superconducting qubits” figure).

The noisy intermediate-scale quantum era

Despite rapid progress, these technologies are at an early stage of development and face three main limitations.

The first problem is that qubits are fragile. Interactions with their environment quickly compromise their superposition and entanglement, making computations unreliable. Preventing “decoherence” is one of the main engineering challenges in quantum technology today.

The second challenge is that quantum logic gates have low fidelity. Over a long sequence of operations, errors accumulate, corrupting the result.

Finally, quantum simulators currently have a very limited number of qubits – typically only a few hundred. This is far fewer than what is needed for high-energy physics (HEP) problems.

Superconducting qubits

This situation is known as the “noisy intermediate-scale quantum” (NISQ) era: we are no longer doing proof-of-principle experiments with a few tens of qubits, but neither can we control thousands of them. These limitations mean that current digital simulations are often restricted to “toy” models, such as QED simplified to have just one spatial and one time dimension. Even with these constraints, small-scale devices have successfully reproduced non-perturbative aspects of the theories in real time and have verified the preservation of fundamental physical principles such as gauge invariance, the symmetry that underpins the fundamental forces of the Standard Model.

Quantum simulators may chart a similar path to classical lattice QCD, but with even greater reach. Lattice QCD struggles with real-time evolution and finite-density physics due to the infamous “sign problem”, wherein quantum interference between classically computed amplitudes causes exponentially worsening signal-to-noise ratios. This renders some of the most interesting problems unsolvable on classical machines.
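
The generic mechanism can be illustrated with a toy average of oscillating phases (an illustration only, not a lattice-QCD calculation; the Gaussian phase distribution and its widths are invented for the example): as the phases spread out, the true mean shrinks exponentially while the statistical error per sample stays of order one, so the signal drowns in noise.

```python
import numpy as np

# Toy sign problem: estimate <exp(i*theta)> from random samples.
# Each sample has magnitude 1, so the per-sample noise is O(1), while the
# exact mean exp(-sigma^2/2) becomes exponentially small as sigma grows.
rng = np.random.default_rng(1)
n_samples = 100_000
for sigma in [0.5, 2.0, 5.0]:
    phases = rng.normal(0.0, sigma, size=n_samples)
    samples = np.exp(1j * phases)
    estimate = abs(samples.mean())
    error = samples.std() / np.sqrt(n_samples)
    exact = np.exp(-sigma**2 / 2)
    print(f"sigma={sigma}: exact {exact:.1e}, estimate {estimate:.1e} +/- {error:.1e}")
```

For the widest spread the exact answer is orders of magnitude below the statistical error, so no realistic number of samples recovers it; this is the kind of exponential signal-to-noise collapse referred to above.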

Quantum simulators do not suffer from the sign problem because they evolve naturally in real time, just like the physical systems they emulate. This promises to open new frontiers such as the simulation of early-universe dynamics, black-hole evaporation and the dense interiors of neutron stars.

Quantum simulators will powerfully augment traditional theoretical and computational methods, offering profound insights when Feynman diagrams become intractable, when dealing with real-time dynamics and when the sign problem renders classical simulations exponentially difficult. Just as the lattice revolution required decades of concerted community effort to reach its full potential, so will the quantum revolution, but the fruits will again transform the field. As the aphorism attributed to Mark Twain goes: history never repeats itself, but it often rhymes.

Quantum information

One of the most exciting and productive developments in recent years is the unexpected, yet profound, convergence between HEP and quantum information science (QIS). For a long time these fields evolved independently. HEP explored the universe’s smallest constituents and grandest structures, while QIS focused on harnessing quantum mechanics for computation and communication. One of the pioneers in studying the interface between these fields was John Bell, a theoretical physicist at CERN.


HEP and QIS are now deeply intertwined. As quantum simulators advance, there is a growing demand for theoretical tools that combine the rigour of quantum field theory with the concepts of QIS. For example, tensor networks were developed in condensed-matter physics to represent highly entangled quantum states, and have now found surprising applications in lattice gauge theories and “holographic dualities” between quantum gravity and quantum field theory. Another example is quantum error correction – a vital QIS technique to protect fragile quantum information from noise, and now a major focus for quantum simulation in HEP.

This cross-disciplinary synthesis is not just conceptual; it is becoming institutional. Initiatives like the US Department of Energy’s Quantum Information Science Enabled Discovery (QuantISED) programme, CERN’s Quantum Technology Initiative (QTI) and Europe’s Quantum Flagship are making substantial investments in collaborative research. Quantum algorithms will become indispensable for theoretical problems just as quantum sensors are becoming indispensable to experimental observation (see “Sensing at quantum limits”).

The result is the emergence of a new breed of scientist: one equally fluent in the fundamental equations of particle physics and the practicalities of quantum hardware. These “hybrid” scientists are building the theoretical and computational scaffolding for a future where quantum simulation is a standard, indispensable tool in HEP. 

Four ways to interpret quantum mechanics
https://cerncourier.com/a/four-ways-to-interpret-quantum-mechanics/
Carlo Rovelli describes the major schools of thought on how to make sense of a purely quantum world.

One hundred years after its birth, quantum mechanics is the foundation of our understanding of the physical world. Yet debates on how to interpret the theory – especially the thorny question of what happens when we make a measurement – remain as lively today as during the 1930s.

The latest recognition of the fertility of studying the interpretation of quantum mechanics was the award of the 2022 Nobel Prize in Physics to Alain Aspect, John Clauser and Anton Zeilinger. The motivation for the prize pointed out that the bubbling field of quantum information, with its numerous current and potential technological applications, largely stems from the work of John Bell at CERN in the 1960s and 1970s, which in turn was motivated by the debate on the interpretation of quantum mechanics.

The majority of scientists use a textbook formulation of the theory that distinguishes the quantum system being studied from “the rest of the world” – including the measuring apparatus and the experimenter, all described in classical terms. Used in this orthodox manner, quantum theory describes how quantum systems react when probed by the rest of the world. It works flawlessly.

Sense and sensibility

The problem is that the rest of the world is quantum mechanical as well. There are of course regimes in which the behaviour of a quantum system is well approximated by classical mechanics. One may even be tempted to think that this suffices to solve the difficulty. But this leaves us in the awkward position of having a general theory of the world that only makes sense under special approximate conditions. Can we make sense of the theory in general?

Today, variants of four main ideas stand at the forefront of efforts to make quantum mechanics more conceptually robust. They are known as physical collapse, hidden variables, many worlds and relational quantum mechanics. Each appears to me to be viable a priori, but each comes with a conceptual price to pay. The latter two may be of particular interest to the high-energy community as the first two do not appear to fit well with relativity.

Probing physical collapse

The idea of the physical collapse is simple: we are missing a piece of the dynamics. There may exist a yet-undiscovered physical interaction that causes the wavefunction to “collapse” when the quantum system interacts with the classical world in a measurement. The idea is empirically testable. So far, all laboratory attempts to find violations of the textbook Schrödinger equation have failed (see “Probing physical collapse” figure), and some models for these hypothetical new dynamics have been ruled out by measurements.

The second possibility, hidden variables, follows on from Einstein’s belief that quantum mechanics is incomplete. It posits that its predictions are exactly correct, but that there are additional variables describing what is going on, besides those in the usual formulation of the theory: the reason why quantum predictions are probabilistic is our ignorance of these other variables.

The work of John Bell shows that the dynamics of any such theory will have some degree of non-locality (see “Non-locality” image). In the non-relativistic domain, there is a good example of a theory of this sort, which goes under the name of de Broglie–Bohm, or pilot-wave, theory. This theory has non-local but deterministic dynamics capable of reproducing the predictions of non-relativistic quantum-particle dynamics. As far as I am aware, all existing theories of this kind break Lorentz invariance, and the extension of hidden-variable theories to quantum-field theoretical domains appears cumbersome.

Relativistic interpretations

Let me now come to the two ideas that are naturally closer to relativistic physics. The first is the many-worlds interpretation – a way of making sense of quantum theory without either changing its dynamics or adding extra variables. It is described in detail in this edition of CERN Courier by one of its leading contemporary proponents (see “The minimalism of many worlds“), but the main idea is the following: being a genuine quantum system, the apparatus that makes a quantum measurement does not collapse the superposition of possible measurement outcomes – it becomes a quantum superposition of the possibilities, as does any human observer.

Non-locality

If we observe a singular outcome, says the many-worlds interpretation, it is not because one of the probabilistic alternatives has actualised in a mysterious “quantum measurement”. Rather, it is because we have split into a quantum superposition of ourselves, and we just happen to be in one of the resulting copies. The world we see around us is thus only one of the branches of a forest of parallel worlds in the overall quantum state of everything. The price to pay to make sense of quantum theory in this manner is to accept the idea that the reality we see is just a branch in a vast collection of possible worlds that include innumerable copies of ourselves.

Relational interpretations are the most recent of the four kinds mentioned. They similarly avoid physical collapse or hidden variables, but do so without multiplying worlds. They stay closer to the orthodox textbook interpretation, but with no privileged status for observers. The idea is to think of quantum theory in a manner closer to the way it was initially conceived by Born, Jordan, Heisenberg and Dirac: namely in terms of transition amplitudes between observations rather than quantum states evolving continuously in time, as emphasised by Schrödinger’s wave mechanics (see “A matter of taste” image).

Observer relativity

The alternative to taking the quantum state as the fundamental entity of the theory is to focus on the information that an arbitrary system can have about another arbitrary system. This information is embodied in the physics of the apparatus: the position of its pointer variable, the trace in a bubble chamber, a person’s memory or a scientist’s logbook. After a measurement, these physical quantities “have information” about the measured system as their value is correlated with a property of the observed systems.

Quantum theory can be interpreted as describing the relative information that systems can have about one another. The quantum state is interpreted as a way of coding the information about a system available to another system. What looks like a multiplicity of worlds in the many-worlds interpretation becomes nothing more than a mathematical accounting of possibilities and probabilities.

A matter of taste

The relational interpretation reduces the content of the physical theory to be about how systems affect other systems. This is like the orthodox textbook interpretation, but made democratic. Instead of a preferred classical world, any system can play a role that is a generalisation of the Copenhagen observer. Relativity teaches us that velocity is a relative concept: an object has no velocity by itself, but only relative to another object. Similarly, quantum mechanics, interpreted in this manner, teaches us that all physical variables are relative. They are not properties of a single object, but ways in which an object affects another object.

The QBism version of the interpretation restricts its attention to observing systems that are rational agents: they can use observations and make probabilistic predictions about the future. Probability is interpreted subjectively, as the expectation of a rational agent. The relational interpretation proper does not accept this restriction: it considers the information that any system can have about any other system. Here, “information” is understood in the simple physical sense of correlation described above.

Like many worlds – to which it is not unrelated – the relational interpretation does not add new dynamics or new variables. Unlike many worlds, it does not ask us to think about parallel worlds either. The conceptual price to pay is a radical weakening of a strong form of realism: the theory does not give us a picture of a unique objective sequence of facts, but only perspectives on the reality of physical systems, and how these perspectives interact with one another. Only quantum states of a system relative to another system play a role in this interpretation. The many-worlds interpretation is very close to this. It supplements the relational interpretation with an overall quantum state, interpreted realistically, achieving a stronger version of realism at the price of multiplying worlds. In this sense, the many worlds and relational interpretations can be seen as two sides of the same coin.


I have only sketched here the most discussed alternatives, and have tried to be as neutral as possible in a field of lively debates in which I have my own strong bias (towards the fourth solution). Empirical testing, as I have mentioned, can only test the physical collapse hypothesis.

There is nothing wrong, in science, in using different pictures for the same phenomenon. Conceptual flexibility is itself a resource. Specific interpretations often turn out to be well adapted to specific problems. In quantum optics it is sometimes convenient to think that there is a wave undergoing interference, as well as a particle that follows a single trajectory guided by the wave, as in the pilot-wave hidden-variable theory. In quantum computing, it is convenient to think that different calculations are being performed in parallel in different worlds. My own field of loop quantum gravity treats spacetime regions as quantum processes: here, the relational interpretation merges very naturally with general relativity, because spacetime regions themselves become quantum processes, affecting each other.

Richard Feynman famously wrote that “every theoretical physicist who is any good knows six or seven different theoretical representations for exactly the same physics. He knows that they are all equivalent, and that nobody is ever going to be able to decide which one is right at that level, but he keeps them in his head, hoping that they will give him different ideas for guessing.” I think that this is where we are, in trying to make sense of our best physical theory. We have various ways to make sense of it. We do not yet know which of these will turn out to be the most fruitful in the future.

The minimalism of many worlds
https://cerncourier.com/a/the-minimalism-of-many-worlds/
David Wallace argues for the ‘decoherent view’ of quantum mechanics, where at the fundamental level there is neither probability nor wavefunction collapse.

Physicists have long been suspicious of the “quantum measurement problem”: the supposed puzzle of how to make sense of quantum mechanics. Everyone agrees (don’t they?) on the formalism of quantum mechanics (QM); any additional discussion of the interpretation of that formalism can seem like empty words. And Hugh Everett III’s infamous “many-worlds interpretation” looks more dubious than most: not just unneeded words but unneeded worlds. Don’t waste your time on words or worlds; shut up and calculate.

But the measurement problem has driven more than philosophy. Questions of how to understand QM have always been entangled, so to speak, with questions of how to apply and use it, and even how to formulate it; the continued controversies about the measurement problem are also continuing controversies in how to apply, teach and mathematically describe QM. The Everett interpretation emerges as the natural reading of one strategy for doing QM, which I call the “decoherent view” and which has largely supplanted the rival “lab view”, and so – I will argue – the Everett interpretation can and should be understood not as a useless adjunct to modern QM but as part of the development in our understanding of QM over the past century.

The view from the lab

The lab view has its origins in the work of Bohr and Heisenberg, and it takes the word “observable” that appears in every QM textbook seriously. In the lab view, QM is not a theory like Newton’s or Einstein’s that aims at an objective description of an external world subject to its own dynamics; rather, it is essentially, irreducibly, a theory of observation and measurement. Quantum states, in the lab view, do not represent objective features of a system in the way that (say) points in classical phase space do: they represent the experimentalist’s partial knowledge of that system. The process of measurement is not something to describe within QM: ultimately it is external to QM. And the so-called “collapse” of quantum states upon measurement represents not a mysterious stochastic process but simply the updating of our knowledge upon gaining more information.

Valued measurements

The lab view has led to important physics. In particular, the “positive operator valued measure” idea, central to many aspects of quantum information, emerges most naturally from the lab view. So do the many extensions, total and partial, to QM of concepts initially from the classical theory of probability and information. Indeed, in quantum information more generally it is arguably the dominant approach. Yet outside that context, it faces severe difficulties. Most notably: if quantum mechanics describes not physical systems in themselves but some calculus of measurement results, if a quantum system can be described only relative to an experimental context, what theory describes those measurement results and experimental contexts themselves?

Dynamical probes

One popular answer – at least in quantum information – is that measurement is primitive: no dynamical theory is required to account for what measurement is, and the idea that we should describe measurement in dynamical terms is just another Newtonian prejudice. (The “QBist” approach to QM fairly unapologetically takes this line.)

One can criticise this answer on philosophical grounds, but more pressingly: that just isn’t how measurement is actually done in the lab. Experimental kit isn’t found scattered across the desert (each device perhaps stamped by the gods with the self-adjoint operator it measures); it is built using physical principles (see “Dynamical probes” figure). The fact that the LHC measures the momentum and particle spectra of various decay processes, for instance, is something established through vast amounts of scientific analysis, not something simply posited. We need an account of experimental practice that allows us to explain how measurement devices work and how to build them.


Bohr had such an account: quantum measurements are to be described through classical mechanics. The classical is ineliminable from QM precisely because it is to classical mechanics we turn when we want to describe the experimental context of a quantum system. To Bohr, the quantum–classical transition is a conceptual and philosophical matter as much as a technical one, and classical ideas are unavoidably required to make sense of any quantum description.

Perhaps this was viable in the 1930s. But today it is not only the measured systems but the measurement devices themselves that essentially rely on quantum principles, beyond anything that classical mechanics can describe. And so, whatever the philosophical strengths and weaknesses of this approach – or of the lab view in general – we need something more to make sense of modern QM, something that lets us apply QM itself to the measurement process.

Practice makes perfect

We can look to physics practice to see how. As von Neumann glimpsed, and Everett first showed clearly, nothing prevents us from modelling a measurement device itself inside unitary quantum mechanics. When we do so, we find that the measured system becomes entangled with the device, so that (for instance) if a measured atom is in a weighted superposition of spins with respect to some axis, after measurement then the device is in a similarly-weighted superposition of readout values.

Origins

In principle, this courts infinite regress: how is that new superposition to be interpreted, save by a still-larger measurement device? In practice, we simply treat the mod-squared amplitudes of the various readout values as probabilities, and compare them with observed frequencies. This sounds a bit like the lab view, but there is a subtle difference: these probabilities are understood not with respect to some hypothetical measurement, but as the actual probabilities of the system being in a given state.

Of course, if we could always understand mod-squared amplitudes that way, there would be no measurement problem! But interference precludes this. Set up, say, a Mach–Zehnder interferometer, with a particle beam split in two and then re-interfered, and two detectors after the re-interference (see “Superpositions are not probabilities” figure). We know that if either of the two paths is blocked, so that any particle detected must have gone along the other path, then each of the two outcomes is equally likely: for each particle sent through, detector A fires with 50% probability and detector B with 50% probability. So whichever path the particle went down, we get A with 50% probability and B with 50% probability. And yet we know that if the interferometer is properly tuned and both paths are open, we can get A with 100% probability or 0% probability or anything in between. Whatever microscopic superpositions are, they are not straightforwardly probabilities of classical goings-on.
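
The arithmetic of that argument can be written out explicitly. The sketch below (an idealised, lossless 50/50 beam-splitter matrix in a standard convention, chosen for illustration rather than taken from the article) propagates a single particle through the interferometer, once with both paths open and once with path B blocked:

```python
import numpy as np

# Basis: index 0 = path/detector A, index 1 = path/detector B.
ket_A = np.array([1.0, 0.0], dtype=complex)

# Symmetric 50/50 beam splitter as a 2x2 unitary.
BS = np.array([[1, 1j],
               [1j, 1]], dtype=complex) / np.sqrt(2)

# Both paths open: two beam splitters in sequence give full interference.
psi_open = BS @ BS @ ket_A
print("both paths open:", np.abs(psi_open)**2)    # [0. 1.]: one detector fires with certainty

# Path B blocked between the splitters: project onto path A, then
# renormalise over the particles that actually reach the detectors.
P_A = np.diag([1.0, 0.0])
psi_blocked = BS @ (P_A @ (BS @ ket_A))
probs = np.abs(psi_blocked)**2
print("path B blocked:", probs / probs.sum())     # [0.5 0.5]: the interference is gone
```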

Unfeasible interference

But macroscopic superpositions are another matter. There, interference is unfeasible (good luck reinterfering the two states of Schrödinger’s cat); nothing formally prevents us from treating mod-squared amplitudes like probabilities.

And decoherence theory has given us a clear understanding of just why interference is invisible in large systems, and more generally when we can and cannot get away with treating mod-squared amplitudes as probabilities. As the work of Zeh, Zurek, Gell-Mann, Hartle and many others (drawing inspiration from Everett and from work on the quantum/classical transition as far back as Mott) has shown, decoherence – that is, the suppression of interference – is simply an aspect of non-equilibrium statistical mechanics. The large-scale, collective degrees of freedom of a quantum system, be it the needle on a measurement device or the centre-of-mass of a dust mote, are constantly interacting with a much larger number of small-scale degrees of freedom: the short-wavelength phonons inside the object itself; the ambient light; the microwave background radiation. We can still find autonomous dynamics for the collective degrees of freedom, but because of the constant transfer of information to the small scale, the coherence of any macroscopic superposition rapidly bleeds into microscopic degrees of freedom, where it is dynamically inert and in practice unmeasurable.
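
A cartoon of this mechanism fits in a few lines (a toy model with invented parameters, not a simulation of any real apparatus): each environmental degree of freedom that weakly “records” which branch the system is in multiplies the interference term by an overlap factor slightly below one, so the coherence falls off exponentially with the number of records.

```python
import numpy as np

# One system qubit starts in (|0> + |1>)/sqrt(2). If the system is |1>, each of the
# N environment qubits is rotated by an angle theta; if |0>, it is left alone.
# The two environment branches then overlap as cos(theta)**N, and the system's
# interference term (off-diagonal element of its reduced density matrix) shrinks with it.
def coherence(n_env, theta=0.3):
    env_branch0 = np.array([1.0, 0.0])                      # environment if the system is |0>
    env_branch1 = np.array([np.cos(theta), np.sin(theta)])  # environment if the system is |1>
    overlap = np.vdot(env_branch0, env_branch1) ** n_env    # <E0|E1> for N independent qubits
    return 0.5 * abs(overlap)                               # |rho_01| of the system qubit

for n in [0, 1, 5, 20, 100, 1000]:
    print(f"{n:5d} environment qubits -> interference term {coherence(n):.3e}")
```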

Emergence and scale

Decoherence can be understood in the familiar language of emergence and scale separation. Quantum states are not fundamentally probabilistic, but they are emergently probabilistic. That emergence occurs because for macroscopic systems, the timescale by which energy is transferred from macroscopic to residual degrees of freedom is very long compared to the timescale of the macroscopic system’s own dynamics, which in turn is very long compared to the timescale by which information is transferred. (To take an extreme example, information about the location of the planet Jupiter is recorded very rapidly in the particles of the solar wind, or even the photons of the cosmic background radiation, but Jupiter loses only an infinitesimal fraction of its energy to either.) So the system decoheres very rapidly, but having done so it can still be treated as autonomous.

On this decoherent view of QM, there is ultimately only the unitary dynamics of closed systems; everything else is a limiting or special case. Probability and classicality emerge through dynamical processes that can be understood through known techniques of physics: understanding that emergence may be technically challenging but poses no problem of principle. And this means that the decoherent view can address the lab view’s deficiencies: it can analyse the measurement process quantum mechanically; it can apply quantum mechanics even in cosmological contexts where the “measurement” paradigm breaks down; it can even recover the lab view within itself as a limited special case. And so it is the decoherent view, not the lab view, that – I claim – underlies the way quantum theory is for the most part used in the 21st century, including in its applications in particle physics and cosmology (see “Two views of quantum mechanics” table).

Two views of quantum mechanics

Quantum phenomenon           | Lab view                                                                       | Decoherent view
Dynamics                     | Unitary (i.e. governed by the Schrödinger equation) only between measurements | Always unitary
Quantum/classical transition | Conceptual jump between fundamentally different systems                       | Purely dynamical: classical physics is a limiting case of quantum physics
Measurements                 | Cannot be treated internal to the formalism                                   | Just one more dynamical interaction
Role of the observer         | Conceptually central                                                          | Just one more physical system

But if the decoherent view is correct, then at the fundamental level there is neither probability nor wavefunction collapse; nor is there a fundamental difference between a microscopic superposition like those in interference experiments and a macroscopic superposition like Schrödinger’s cat. The differences are differences of degree and scale: at the microscopic level, interference is manifest; as we move to larger and more complex systems it hides away more and more effectively; in practice it is invisible for macroscopic systems. But even if we cannot detect the coherence of the superposition of a live and dead cat, it does not thereby vanish. And so according to the decoherent view, the cat is simultaneously alive and dead in the same way that the superposed atom is simultaneously in two places. We don’t need a change in the dynamics of the theory, or even a reinterpretation of the theory, to explain why we don’t see the cat as alive and dead at once: decoherence has already explained it. There is a “live cat” branch of the quantum state, entangled with its surroundings to an ever-increasing degree; there is likewise a “dead cat” branch; the interference between them is rendered negligible by all that entanglement.

Many worlds

At last we come to the “many worlds” interpretation: for when we observe the cat ourselves, we too enter a superposition of seeing a live and a dead cat. But these “worlds” are not added to QM as exotic new ontology: they are discovered, as emergent features of collective degrees of freedom, simply by working out how to use QM in contexts beyond the lab view and then thinking clearly about its content. The Everett interpretation – the many-worlds theory – is just the decoherent view taken fully seriously. Interference explains why superpositions cannot be understood simply as parameterising our ignorance; unitarity explains how we end up in superpositions ourselves; decoherence explains why we have no awareness of it.

Superpositions are not probabilities

(Forty-five years ago, David Deutsch suggested testing the Everett interpretation by simulating an observer inside a quantum computer, so that we could recohere them after they made a measurement. Then, it was science fiction; in this era of rapid progress on AI and quantum computation, perhaps less so!)

Could we retain the decoherent view and yet avoid any commitment to “worlds”? Yes, but only in the same sense that we could retain general relativity and yet refuse to commit to what lies behind the cosmological event horizon: the theory gives a perfectly good account of the other Everett worlds, and the matter beyond the horizon, but perhaps epistemic caution might lead us not to overcommit. But even so, the content of QM includes the other worlds, just as the content of general relativity includes beyond-horizon physics, and we will only confuse ourselves if we avoid even talking about that content. (Thus Hawking, who famously observed that when he heard about Schrödinger’s cat he reached for his gun, was nonetheless happy to talk about Everettian branches when doing quantum cosmology.)

Alternative views

Could there be a different way to make sense of the decoherent view? Never say never; but the many-worlds perspective results almost automatically from simply taking that view as a literal description of quantum systems and how they evolve, so any alternative would have to be philosophically subtle, taking a different and less literal reading of QM. (Perhaps relationalism, discussed in this issue by Carlo Rovelli, see “Four ways to interpret quantum mechanics“, offers a way to do it, though in many ways it seems more a version of the lab view. The physical collapse and hidden variables interpretations modify the formalism, and so fall outside either category.)


Does the apparent absurdity, or the ontological extravagance, of the Everett interpretation force us, as good scientists, to abandon many-worlds, or if necessary the decoherent view itself? Only if we accept some scientific principle that throws out theories that are too strange or that postulate too large a universe. But physics accepts no such principle, as modern cosmology makes clear.

Are there philosophical problems for the Everett interpretation? Certainly: how are we to think of the emergent ontology of worlds and branches; how are we to understand probability when all outcomes occur? But problems of this kind arise across all physical theories. Probability is philosophically contested even apart from Everett, for instance: is it frequency, rational credence, symmetry or something else? In any case, these problems pose no barrier to the use of Everettian ideas in physics.

The case for the Everett interpretation is that it is the conservative, literal reading of the version of quantum mechanics we actually use in modern physics, and there is no scientific pressure for us to abandon that reading. We could, of course, look for alternatives. Who knows what we might find? Or we could shut up and calculate – within the Everett interpretation.

Do muons wobble faster than expected?
https://cerncourier.com/a/do-muons-wobble-faster-than-expected/
With a new measurement imminent, the Courier explores the experimental results and theoretical calculations used to predict ‘muon g-2’ – one of particle physics’ most precisely known quantities and the subject of a fast-evolving anomaly.

Vacuum fluctuation

Fundamental charged particles have spins that wobble in a magnetic field. This is just one of the insights that emerged from the equation Paul Dirac wrote down in 1928. Almost 100 years later, calculating how much they wobble – their “magnetic moment” – strains the computational sinews of theoretical physicists to a level rarely matched. The challenge is to sum all the possible ways in which the quantum fluctuations of the vacuum affect their wobbling.

The particle in question here is the muon. Discovered in cosmic rays in 1936, muons are more massive but ephemeral cousins of the electron. Their greater mass is expected to amplify the effect of any undiscovered new particles shimmering in the quantum haze around them, and measurements have disagreed with theoretical predictions for nearly 20 years. This suggests a possible gap in the Standard Model (SM) of particle physics, potentially providing a glimpse of deeper truths beyond it.

In the coming weeks, Fermilab is expected to present the final results of a seven-year campaign to measure this property, reducing uncertainties to a remarkable one part in 10^10 on the magnetic moment of the muon, and 0.1 parts per million on the quantum corrections. Theorists are racing to match this with an updated prediction of comparable precision. The calculation is in good shape, except for the incredibly unusual eventuality that the muon briefly emits a cloud of quarks and gluons at just the moment it absorbs a photon from the magnetic field. But in quantum mechanics all possibilities count all the time, and the experimental precision is such that the fine details of “hadronic vacuum polarisation” (HVP) could be the difference between reinforcing the SM and challenging it.

Quantum fluctuations

The Dirac equation predicts that fundamental spin s = ½ particles have a magnetic moment given by g(eħ/2m)s, where the gyromagnetic ratio (g) is precisely equal to two. For the electron, this remarkable result was soon confirmed by atomic spectroscopy, before more precise experiments in 1947 indicated a deviation from g = 2 of a few parts per thousand. Expressed as a = (g-2)/2, the shift was a surprise and was named the magnetic anomaly or the anomalous magnetic moment.

Quantum fluctuation

This marked the beginning of an enduring dialogue between experiment and theory. It became clear that a relativistic field theory like the developing quantum electrodynamics (QED) could produce quantum fluctuations, shifting g from two. In 1948, Julian Schwinger calculated the first correction to be a = α/2π ≈ 0.00116, aligning beautifully with 1947 experimental results. The emission and absorption of a virtual photon creates a cloud around the electron, altering its interaction with the external magnetic field (see “Quantum fluctuation” figure). Soon, other particles would be seen to influence the calculations. The SM’s limitations suggest that undiscovered particles could also affect these calculations. Their existence might be revealed by a discrepancy between the SM prediction for a particle’s anomalous magnetic moment and its measured value.

As noted, the muon is an even more promising target than the electron, as its sensitivity to physics beyond QED is generically enhanced by the square of the ratio of their masses: a factor of around 43,000. In 1957, inspired by Tsung-Dao Lee and Chen-Ning Yang’s proposal that parity is violated in the weak interaction, Richard Garwin, Leon Lederman and Marcel Weinrich studied the decay of muons brought to rest in a magnetic field at the Nevis cyclotron at Columbia University. As well as showing that parity is broken in both pion and muon decays, they found g to be close to two for muons by studying their “precession” in the magnetic field as their spins circled around the field lines.
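
The two numbers quoted above are easy to reproduce (a minimal sketch; the input constants are standard reference values, not taken from the article):

```python
import math

alpha = 1 / 137.035999             # fine-structure constant
m_e, m_mu = 0.51099895, 105.6584   # lepton masses in MeV

schwinger = alpha / (2 * math.pi)             # first QED correction, a = alpha/2pi
enhancement = (m_mu / m_e) ** 2               # generic sensitivity gain of the muon over the electron

print(f"a = alpha/2pi ~ {schwinger:.5f}")     # ~0.00116
print(f"(m_mu/m_e)^2  ~ {enhancement:,.0f}")  # ~43,000
```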

Precision

This iconic experiment was the prototype of muon-precession projects at CERN (see CERN Courier September/October 2024 p53), later at Brookhaven National Laboratory and now Fermilab (see “Precision” figure). By the end of the Brookhaven project, a disagreement between the measured value of “aμ” – the subscript indicating g-2 for the muon rather than the electron – and the SM prediction was too large to ignore, motivating the present round of measurements at Fermilab and rapidly improving theory refinements.

g-2 and the Standard Model

Today, a prediction for aμ must include the effects of all three of the SM’s interactions and all of its elementary particles. The leading contributions are from electrons, muons and tau leptons interacting electromagnetically. These QED contributions can be computed in an expansion where each successive term contributes only around 1% of the previous one. QED effects have been computed to fifth order, yielding an extraordinary precision of 0.9 parts per billion – significantly more precise than needed to match measurements of the muon’s g-2, though not the electron’s. It took over half a century to achieve this theoretical tour de force.

The weak interaction gives the smallest contribution to aμ, a million times less than QED. These contributions can also be computed in an expansion. Second order suffices. All SM particles except gluons need to be taken into account.

Gluons are responsible for the strong interaction and appear in the third and last set of contributions. These are described by QCD and are called “hadronic” because quarks and gluons form hadrons at the low energies relevant for the muon g-2 (see “Hadronic contributions” figure). HVP is the largest, though 10,000 times smaller than the corrections due to QED. “Hadronic light-by-light scattering” (HLbL) is a further 100 times smaller due to the exchange of an additional photon. The challenge is that the strong-interaction effects cannot be approximated by a perturbative expansion. QCD is highly nonlinear and different methods are needed.

Data or the lattice?

Even before QCD was formulated, theorists sought to subdue the wildness of the strong force using experimental data. In the case of HVP, this triggered experimental investigations of e+e− annihilation into hadrons and later hadronic tau-lepton decays. Though apparently disparate, the production of hadrons in these processes can be related to the clouds of virtual quarks and gluons that are responsible for HVP.
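
The quantitative link is a dispersion integral over the measured hadronic cross-section. In one common convention (quoted here from standard reviews of the subject rather than from this article, so the normalisation should be read as illustrative), the leading-order HVP contribution is

$$
a_\mu^{\mathrm{HVP,LO}} \;=\; \left(\frac{\alpha\, m_\mu}{3\pi}\right)^{2} \int_{s_{\mathrm{thr}}}^{\infty} \frac{\mathrm{d}s}{s^{2}}\, \hat K(s)\, R(s),
\qquad
R(s) \;=\; \frac{\sigma(e^{+}e^{-}\to \mathrm{hadrons})}{\sigma(e^{+}e^{-}\to \mu^{+}\mu^{-})},
$$

where s_thr is the hadronic production threshold and K̂(s) is a known, smooth kernel of order one; R(s) is precisely what the e+e− experiments discussed in this article measure.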

Hadronic contributions

A more recent alternative makes use of massively parallel numerical simulations to directly solve the equations of QCD. To compute quantities such as HVP or HLbL, “lattice QCD” requires hundreds of millions of processor-core hours on the world’s largest supercomputers.

In preparation for Fermilab’s first measurement in 2021, the Muon g-2 Theory Initiative, spanning more than 120 collaborators from over 80 institutions, was formed to provide a reference SM prediction that was published in a 2020 white paper. The HVP contribution was obtained with a precision of a few parts per thousand using a compilation of measurements of e+e− annihilation into hadrons. The HLbL contribution was determined from a combination of data-driven and lattice–QCD methods. Though even more complex to compute, HLbL is needed only to 10% precision, as its contribution is smaller.

After summing all contributions, the prediction of the 2020 white paper sits over five standard deviations below the most recent experimental world average (see “Landscape of muon g-2” figure). Such a deviation would usually be interpreted as a discovery of physics beyond the SM. However, in 2021 the result of the first lattice calculation of the HVP contribution with a precision comparable to that of the data-driven white paper was published by the Budapest–Marseille–Wuppertal collaboration (BMW). The result, labelled BMW 2020 as it was uploaded to the preprint archive the previous year, is much closer to the experimental average (green band on the figure), suggesting that the SM may still be in the race. The calculation relied on methods developed by dozens of physicists since the seminal work of Tom Blum (University of Connecticut) in 2002 (see CERN Courier May/June 2021 p25).

Landscape of muon g-2

In 2020, the uncertainties on the data-driven and lattice-QCD predictions for the HVP contribution were still large enough that both could be correct, but BMW’s 2021 paper showed them to be explicitly incompatible in an “intermediate-distance window” accounting for approximately 35% of the HVP contribution, where lattice QCD is most reliable.

This disagreement was the first sign that the 2020 consensus had to be revised. To move forward, the sources of the various disagreements – more numerous now – and the relative limitations of the different approaches must be understood better. Moreover, uncertainty on HVP already dominated the SM prediction in 2020. As well as resolving these discrepancies, its uncertainty must be reduced by a factor of three to fully leverage the coming measurement from Fermilab. Work on the HVP is therefore even more critical than before, as elsewhere the theory house is in order: Sergey Volkov (KITP) recently verified the fifth-order QED calculation of Tatsumi Aoyama, Toichiro Kinoshita and Makiko Nio, identifying an oversight not numerically relevant at current experimental sensitivities; new HLbL calculations remain consistent; and weak contributions have already been checked and are precise enough for the foreseeable future.

News from the lattice

Since BMW’s 2020 lattice results, a further eight lattice-QCD computations of the dominant up-and-down-quark (u + d) contribution to HVP’s intermediate-distance window have been performed with similar precision, with four also including all other relevant contributions. Agreement is excellent and the verdict is clear: the disagreement between the lattice and data-driven approaches is confirmed (see “Intermediate window” figure).

Intermediate window

Work on the short-distance window (about 10% of the HVP contribution) has also advanced rapidly. Seven computations of the u + d contribution have appeared, with four including all other relevant contributions. No significant disagreement is observed.

The long-distance window (around 55% of the total) is by far the most challenging, with the largest uncertainties. In recent weeks three calculations of the dominant u + d contribution have appeared, by the RBC–UKQCD, Mainz and FHM collaborations. Though some differences are present, none can be considered significant for the time being.

With all three windows cross-validated, the Muon g-2 Theory Initiative is combining results to obtain a robust lattice–QCD determination of the HVP contribution. The final uncertainty should be slightly below 1%, still quite far from the 0.2% ultimately needed.

The BMW–DMZ and Mainz collaborations have also presented new results for the full HVP contribution to aμ, and the RBC–UKQCD collaboration, which first proposed the multi-window approach, is also in a position to make a full calculation. (The corresponding result in the “Landscape of muon g-2” figure combines contributions reported in their publications.) Mainz obtained a result with 1% precision using the three windows described above. BMW–DMZ divided its new calculation into five windows and replaced the lattice–QCD computation of the longest distance window – “the tail”, encompassing just 5% of the total – with a data-driven result. This pragmatic approach allows a total uncertainty of just 0.46%, with the collaboration showing that all e+e datasets contributing to this long-distance tail are entirely consistent. This new prediction differs from the experimental measurement of aμ by only 0.9 standard deviations.

These new lattice results, which have not yet been published in refereed journals, make the disagreement with the 2020 data-driven result even more blatant. However, the analysis of the annihilation of e+e− into hadrons is also evolving rapidly.

News from electron–positron annihilation

Many experiments have measured the cross-section for e+e− annihilation to hadrons as a function of centre-of-mass energy (√s). The dominant contribution to a data-driven calculation of aμ, and over 70% of its uncertainty budget, is provided by the e+e− → π+π− process, in which the final-state pions are produced via the ρ resonance (see “Two-pion channel” figure).

The most recent measurement, by the CMD-3 energy-scan experiment in Novosibirsk, obtained a cross-section on the peak of the ρ resonance that is larger than all previous ones, significantly changing the picture in the π+π channel. Scrutiny by the Theory Initiative has identified no major problem.

Two-pion channel

CMD-3’s approach contrasts with that used by KLOE, BaBar and BESIII, which study e+e− annihilation with a hard photon emitted from the initial state (radiative return) at facilities with fixed √s. BaBar has innovated by calibrating the luminosity of the initial-state radiation using the μ+μ− channel and using a unique “next-to-leading-order” approach that accounts for extra radiation from either the initial or the final state – a necessary step at the required level of precision.

In 1997, Ricard Alemany, Michel Davier and Andreas Höcker proposed an alternative method that employs the τ− → π−π0ντ decay while requiring some additional theoretical input. The decay rate has been precisely measured as a function of the two-pion invariant mass by the ALEPH and OPAL experiments at LEP, as well as by the Belle and CLEO experiments at B factories, under very different conditions. The measurements are in good agreement. ALEPH offers the best normalisation and Belle the best shape measurement.

KLOE and CMD-3 differ by more than five standard deviations on the ρ peak, precluding a combined analysis of e+e− → π+π− cross-sections. BaBar and τ data lie between them. All measurements are in good agreement at low energies, below the ρ peak. BaBar, CMD-3 and τ data are also in agreement above the ρ peak. To help clarify this unsatisfactory situation, in 2023 BaBar performed a careful study of radiative corrections to e+e− → π+π−. That study points to the possible underestimate of systematic uncertainties in radiative-return experiments that rely on Monte Carlo simulations to describe extra radiation, as opposed to the in situ studies performed by BaBar.

The future

While most contributions to the SM prediction of the muon g-2 are under control at the level of precision required to match the forthcoming Fermilab measurement, in trying to reduce the uncertainties of the HVP contribution to a commensurate degree, theorists and experimentalists shattered a 20 year consensus. This has triggered an intense collective effort that is still in progress.

The prospect of testing the limits of the SM through high-precision measurements generates considerable impetus

New analyses of e+e− data are underway at BaBar, Belle II, BESIII and KLOE, measurements are continuing at CMD-3, and Belle II is also studying τ decays. At CERN, the longer-term “MUonE” project will extract HVP by analysing how muons scatter off electrons – a very challenging endeavour given the unusual accuracy required, both in the control of experimental systematic uncertainties and in the theoretical treatment of the radiative corrections.

At the same time, lattice-QCD calculations have made enormous progress in the last five years and provide a very competitive alternative. The fact that several groups are involved with somewhat independent techniques is allowing detailed cross checks. The complementarity of the data-driven and lattice-QCD approaches should soon provide a reliable value for the g-2 theoretical prediction at unprecedented levels of precision.

There is still some way to go to reach that point, but the prospect of testing the limits of the SM through high-precision measurements generates considerable impetus. A new white paper is expected in the coming weeks. The ultimate aim is to reach a level of precision in the SM prediction that allows us to fully leverage the potential of the muon anomalous magnetic moment in the search for new fundamental physics, in concert with the final results of Fermilab’s Muon g-2 experiment and the projected Muon g-2/EDM experiment at J-PARC in Japan, which will implement a novel technique.

The post Do muons wobble faster than expected? appeared first on CERN Courier.

]]>
Feature With a new measurement imminent, the Courier explores the experimental results and theoretical calculations used to predict ‘muon g-2’ – one of particle physics’ most precisely known quantities and the subject of a fast-evolving anomaly. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_MUON-top_feature.jpg
The beauty of falling https://cerncourier.com/a/the-beauty-of-falling/ Wed, 26 Mar 2025 14:34:00 +0000 https://cerncourier.com/?p=112815 Kurt Hinterbichler reviews Claudia de Rham's first-hand and personal glimpse into the life of a theoretical physicist and the process of discovery.

The post The beauty of falling appeared first on CERN Courier.

]]>
The Beauty of Falling

A theory of massive gravity is one in which the graviton, the particle that is believed to mediate the force of gravity, has a small mass. This contrasts with general relativity, our current best theory of gravity, which predicts that the graviton is exactly massless. In 2011, Claudia de Rham (Imperial College London), Gregory Gabadadze (New York University) and Andrew Tolley (Imperial College London) revitalised interest in massive gravity by uncovering the structure of the best possible (in a technical sense) theory of massive gravity, now known as the dRGT theory, after these authors.

Claudia de Rham has now written a popular book on the physics of gravity. The Beauty of Falling is an enjoyable and relatively quick read: a first-hand and personal glimpse into the life of a theoretical physicist and the process of discovery.

De Rham begins by setting the stage with the breakthroughs that led to our current paradigm of gravity. The Michelson–Morley experiment and special relativity, Einstein’s description of gravity as geometry leading to general relativity and its early experimental triumphs, black holes and cosmology are all described in accessible terms using familiar analogies. De Rham grips the reader by weaving in a deeply personal account of her own life and upbringing, illustrating what inspired her to study these ideas and pursue a career in theoretical physics. She has led an interesting life, from growing up in various parts of the world, to learning to dive and fly, to training as an astronaut and coming within a hair’s breadth of becoming one. Her account of the training and selection process for European Space Agency astronauts is fascinating, and worth the read in its own right.

Moving closer to the present day, de Rham discusses the detection of gravitational waves at gravitational-wave observatories such as LIGO, the direct imaging of black holes by the Event Horizon Telescope, and the evidence for dark matter and the accelerating expansion of the universe with its concomitant cosmological constant problem. As de Rham explains, this latter discovery underlies much of the interest in massive gravity; there remains the lingering possibility that general relativity may need to be modified to account for the observed accelerated expansion.

In the second part of the book, de Rham warns us that we are departing from the realm of well tested and established physics, and entering the world of more uncertain ideas. A pet peeve of mine is popular accounts that fail to clearly make this distinction, a temptation to which this book does not succumb. 

Here, the book offers something that is hard to find: a first-hand account of the process of thought and discovery in theoretical physics. When reading the latest outrageously overhyped clickbait headlines coming out of the world of fundamental physics, it is easy to get the wrong impression about what theoretical physicists do. This part of the book illustrates how ideas come about: by asking questions of established theories and tugging on their loose threads, we uncover new mathematical structures and, in the process, gain a deeper understanding of the structures we have.

Massive gravity, the focus of this part of the book, is a prime example: by starting with a basic question, “does the graviton have to be massless?”, a new structure was revealed. This structure may or may not have any direct relevance to gravity in the real world, but even if it does not, our study of it has significantly enhanced our understanding of the structure of general relativity. And, as has occurred countless times before with intriguing mathematical structures, it may ultimately prove useful for something completely different and unforeseen – something that its originators did not have even remotely in mind. Here, de Rham offers invaluable insights both into uncovering a new theoretical structure and into what happens next, as the results are challenged and built upon by others in the community.

The post The beauty of falling appeared first on CERN Courier.

]]>
Review Kurt Hinterbichler reviews Claudia de Rham's first-hand and personal glimpse into the life of a theoretical physicist and the process of discovery. https://cerncourier.com/wp-content/uploads/2025/03/CCMarApr25_REV_Beauty_feature.jpg
Charm and synthesis https://cerncourier.com/a/charm-and-synthesis/ Mon, 27 Jan 2025 07:43:29 +0000 https://cerncourier.com/?p=112128 Sheldon Glashow recalls the events surrounding a remarkable decade of model building and discovery between 1964 and 1974.

The post Charm and synthesis appeared first on CERN Courier.

]]>
In 1955, after a year of graduate study at Harvard, I joined a group of a dozen or so students committed to studying elementary particle theory. We approached Julian Schwinger, one of the founders of quantum electrodynamics, hoping to become his thesis students – and we all did.

Schwinger lined us up in his office, and spent several hours assigning thesis subjects. It was a remarkable performance. I was the last in line. Having run out of well-defined thesis problems, he explained to me that weak and electromagnetic interactions share two remarkable features: both are vectorial and both display aspects of universality. Schwinger suggested that I create a unified theory of the two interactions – an electroweak synthesis. How I was to do this he did not say, aside from slyly hinting at the Yang–Mills gauge theory.

By the summer of 1958, I had convinced myself that weak and electromagnetic interactions might be described by a badly broken gauge theory, and Schwinger that I deserved a PhD. I had hoped to spend part of a postdoctoral fellowship in Moscow at the invitation of the recent Russian Nobel laureate Igor Tamm, and sought to visit Niels Bohr’s institute in Copenhagen while awaiting my Soviet visa. With Bohr’s enthusiastic consent, I boarded the SS Île de France with my friend Jack Schnepps. Following a memorable and luxurious crossing – one of the great ship’s last – Jack drove south to work with Milla Baldo-Ceolin’s emulsion group in Padova, and I took the slow train north to Copenhagen. Thankfully, my Soviet visa never arrived. I found the SU(2) × U(1) structure of the electroweak model in the spring of 1960 at Bohr’s famous institute at Blegdamsvej 19, and wrote the paper that would earn my share of the 1979 Nobel Prize.

We called the new quark flavour charm, completing two weak doublets of quarks to match two weak doublets of leptons, and establishing lepton–quark symmetry, which holds to this day

A year earlier, in 1959, Augusto Gamba, Bob Marshak and Susumu Okubo had proposed lepton–hadron symmetry, which regarded protons, neutrons and lambda hyperons as the building blocks of all hadrons, to match the three known leptons at the time: neutrinos, electrons and muons. The idea was falsified by the discovery of a second neutrino in 1962, and superseded in 1964 by the invention of fractionally charged hadron constituents, first by George Zweig and André Petermann, and then decisively by Murray Gell-Mann with his three flavours of quarks. Later in 1964, while on sabbatical in Copenhagen, James Bjorken and I realised that lepton–hadron symmetry could be revived simply by adding a fourth quark flavour to Gell-Mann’s three. We called the new quark flavour “charm”, completing two weak doublets of quarks to match two weak doublets of leptons, and establishing lepton–quark symmetry, which holds to this day.

Annus mirabilis

1964 was a remarkable year. In addition to the invention of quarks, Nick Samios spotted the triply strange Ω− baryon, and Oscar Greenberg devised what became the critical notion of colour. Arno Penzias and Robert Wilson stumbled on the cosmic microwave background radiation. James Cronin, Val Fitch and others discovered CP violation. Robert Brout, François Englert, Peter Higgs and others invented spontaneously broken non-Abelian gauge theories. And to top off the year, Abdus Salam rediscovered and published my SU(2) × U(1) model, after I had more-or-less abandoned electroweak thoughts due to four seemingly intractable problems.

Four intractable problems of early 1964

How could the W and Z bosons acquire masses while leaving the photon massless?

Steven Weinberg, my friend from both high school and college, brilliantly solved this problem in 1967 by subjecting the electroweak gauge group to spontaneous symmetry breaking, initiating the half-century-long search for the Higgs boson. Salam published the same solution in 1968.

How could an electroweak model of leptons be extended to describe the weak interactions of hadrons?

John Iliopoulos, Luciano Maiani and I solved this problem in 1970 by introducing charm and quark-lepton symmetry to avoid unobserved strangeness-changing neutral currents.

Was the spontaneously broken electroweak gauge model mathematically consistent?

Gerard ’t Hooft announced in 1971 that he had proven Steven Weinberg’s electroweak model to be renormalisable. In 1972, Claude Bouchiat, John Iliopoulos and Philippe Meyer demonstrated the electroweak model to be free of Adler anomalies provided that lepton–quark symmetry is maintained.

Could the electroweak model describe CP violation without invoking additional spinless fields?

In 1973, Makoto Kobayashi and Toshihide Maskawa showed that the electroweak model could easily and naturally violate CP if there are more than four quark flavours.

Much to my surprise and delight, all of them would be solved within just a few years, with the last theoretical obstacle removed by Makoto Kobayashi and Toshihide Maskawa in 1973 (see “Four intractable problems” panel). A few months later, Paul Musset announced that CERN’s Gargamelle detector had won the race to detect weak neutral-current interactions, giving the electroweak model the status of a predictive theory. Remarkably, the year had begun with Gell-Mann, Harald Fritzsch and Heinrich Leutwyler proposing QCD, and David Gross, Frank Wilczek and David Politzer showing it to be asymptotically free. The Standard Model of particle physics was born.

Charmed findings

But where were the charmed quarks? Early on Monday morning on 11 November, 1974, I was awakened by a phone call from Sam Ting, who asked me to come to his MIT office as soon as possible. He and Ulrich Becker were waiting for me impatiently. They showed me an amazingly sharp resonance. Could it be a vector meson like the ρ or ω and be so narrow, or was it something quite different? I hopped in my car and drove to Harvard, where my colleagues Alvaro de Rújula and Howard Georgi excitedly regaled me about the Californian side of the story. A few days later, experimenters in Frascati confirmed the BNL–SLAC discovery, and de Rújula and I submitted our paper “Is Bound Charm Found?” – one of two papers on the J/ψ discovery printed in Physical Review Letters in January 1975 that would prove to be correct. Among five false papers was one written by my beloved mentor, Julian Schwinger.

Sam Ting at CERN in 1976

The second correct paper was by Tom Appelquist and David Politzer. Well before that November, they had realised (without publishing) that bound states of a charmed quark and its antiquark lying below the charm threshold would be exceptionally narrow due to the asymptotic freedom of QCD. De Rújula suggested to them that such a system be called charmonium in analogy with positronium. His term made it into the dictionary. Shortly afterward, the 1976 Nobel Prize in Physics was jointly awarded to Burton Richter and Sam Ting for “their pioneering work in the discovery of a heavy elementary particle of a new kind” – evidence that charm was not yet a universally accepted explanation. Over the next few years, experimenters worked hard to confirm the predictions of theorists at Harvard and Cornell by detecting and measuring the masses, spins and transitions among the eight sub-threshold charmonium states. Later on, they would do the same for 14 relatively narrow states of bottomonium.

Abdus Salam, Tom Ball and Paul Musset

Other experimenters were searching for particles containing just one charmed quark or antiquark. In our 1975 paper “Hadron Masses in a Gauge Theory”, de Rújula, Georgi and I included predictions of the masses of several not-yet-discovered charmed mesons and baryons. The first claim to have detected charmed particles was made in 1975 by Robert Palmer and Nick Samios at Brookhaven, again with a bubble-chamber event. It seemed to show a cascade decay process in which one charmed baryon decays into another charmed baryon, which itself decays. The measured masses of both of the charmed baryons were in excellent agreement with our predictions. Though the claim was not widely accepted, I believe to this day that Samios and Palmer were the first to detect charmed particles.

Sheldon Glashow and Steven Weinberg

The SLAC electron–positron collider, operating well above charm threshold, was certainly producing charmed particles copiously. Why were they not being detected? I recall attending a conference in Wisconsin that was largely dedicated to this question. On the flight home, I met my old friend Gerson Goldhaber, who had been struggling unsuccessfully to find them. I think I convinced him to try a bit harder. A couple of weeks later in 1976, Goldhaber and François Pierre succeeded. My role in charm physics had come to a happy ending. 

  • This article is adapted from a presentation given at the Institute of High-Energy Physics in Beijing on 20 October 2024 to celebrate the 50th anniversary of the discovery of the J/ψ.

The post Charm and synthesis appeared first on CERN Courier.

]]>
Feature Sheldon Glashow recalls the events surrounding a remarkable decade of model building and discovery between 1964 and 1974. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_GLASHOW_lectures.jpg
Rapid developments in precision predictions https://cerncourier.com/a/rapid-developments-in-precision-predictions/ Fri, 24 Jan 2025 15:57:39 +0000 https://cerncourier.com/?p=112358 Achieving a theoretical uncertainty of only a few per cent in the measurement of physical observables is a vastly challenging task in the complex environment of hadronic collisions.

The post Rapid developments in precision predictions appeared first on CERN Courier.

]]>
High Precision for Hard Processes in Turin

Achieving a theoretical uncertainty of only a few per cent in the measurement of physical observables is a vastly challenging task in the complex environment of hadronic collisions. To keep pace with experimental observations at the LHC and elsewhere, precision computing has had to develop rapidly in recent years – efforts that have been monitored and driven by the biennial High Precision for Hard Processes (HP2) conference for almost two decades now. The latest edition attracted 120 participants to the University of Torino from 10 to 13 September 2024.

All speakers addressed the same basic question: how can we achieve the most precise theoretical description for a wide variety of scattering processes at colliders?

The recipe for precise prediction involves many ingredients, so the talks in Torino probed several research directions. Advanced methods for the calculation of scattering amplitudes were discussed, among others, by Stephen Jones (IPPP Durham). These methods can be applied to detailed high-order phenomenological calculations for QCD, electroweak processes and BSM physics, as illustrated by Ramona Groeber (Padua) and Eleni Vryonidou (Manchester). Progress in parton showers – a crucial tool to bridge amplitude calculations and experimental results – was presented by Silvia Ferrario Ravasio (CERN). Dedicated methods to deal with the delicate issue of infrared divergences in high-order cross-section calculations were reviewed by Chiara Signorile-Signorile (Max Planck Institute, Munich).

The Torino conference was dedicated to the memory of Stefano Catani, a towering figure in the field of high-energy physics, who suddenly passed away at the beginning of this year. Starting from the early 1980s, and for the whole of his career, Catani made groundbreaking contributions in every facet of HP2. He was an inspiration to a whole generation of physicists working in high-energy phenomenology. We remember him as a generous and kind person, and a scientist of great rigour and vision. He will be sorely missed.

The post Rapid developments in precision predictions appeared first on CERN Courier.

]]>
Meeting report Achieving a theoretical uncertainty of only a few per cent in the measurement of physical observables is a vastly challenging task in the complex environment of hadronic collisions. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_FN_HP_feature.jpg
From spinors to supersymmetry https://cerncourier.com/a/from-spinors-to-supersymmetry/ Fri, 24 Jan 2025 15:34:01 +0000 https://cerncourier.com/?p=112273 In their new book, From Spinors to Supersymmetry, Herbi Dreiner, Howard Haber and Stephen Martin describe the two-component formalism of spinors and its applications to particle physics, quantum field theory and supersymmetry.

The post From spinors to supersymmetry appeared first on CERN Courier.

]]>
From Spinors to Supersymmetry

This text is a hefty volume of around 1000 pages describing the two-component formalism of spinors and its applications to particle physics, quantum field theory and supersymmetry. The authors of this volume, Herbi Dreiner, Howard Haber and Stephen Martin, are household names in the phenomenology of particle physics, with many original contributions to the topics covered in the book. Haber is also well known at CERN as a co-author of the legendary Higgs Hunter’s Guide (Perseus Books, 1990), a book that most collider physicists of the pre- and early-LHC eras are very familiar with.

The book starts with a 250-page introduction (chapters one to five) to the Standard Model (SM), covering more or less the theory material that one finds in standard advanced textbooks. The emphasis is on the theoretical side, with no discussion on experimental results, providing a succinct discussion of topics ranging from how to obtain Feynman rules to anomaly-cancellation calculations. In chapter six, extensions of the SM are discussed, starting with the seesaw-extended SM, moving on to a very detailed exposition of the two-Higgs-doublet model and finishing with grand unification theories (GUTs).

The second part of the book (from chapter seven onwards) is about supersymmetry in general. It begins with an accessible introduction that is also applicable to other beyond-SM-physics scenarios. This gentle and very pedagogical pattern continues to chapter eight, before proceeding to a more demanding supersymmetry-algebra discussion in chapter nine. Superfields, supersymmetric radiative corrections and supersymmetry breaking, which are discussed in the subsequent chapters, are more advanced topics that will be of interest to specialists in these areas.

The third part (chapter 13 onwards) discusses realistic supersymmetric models starting from the minimal supersymmetric SM (MSSM). After some preliminaries, chapter 15 provides a general presentation of MSSM phenomenology, discussing signatures relevant for proton–proton and electron–positron collisions, as well as direct dark-matter searches. A short discussion on beyond-MSSM scenarios is given in chapter 16, including NMSSM, seesaw, GUTs and R-parity violating theories. Phenomenological implications, for example their impact on proton decay, are also discussed.

Part four includes basic Feynman diagram calculations in the SM and MSSM using the two-component spinor formalism. Starting from very simple tree-level SM processes, like Bhabha scattering and Z-boson decays, it proceeds with tree-level supersymmetric processes, standard one-loop calculations and their supersymmetric counterparts, and Higgs-boson mass corrections. The presentation is very practical and useful for those who want to see how to perform simple calculations in the SM or MSSM using the two-component spinor formalism. The material is accessible and detailed enough to be used for teaching master’s or graduate-level students.

A valuable resource for all those who are interested in the extensions of the SM, especially if they include supersymmetry

The book finishes with almost 200 pages of appendices covering all sorts of useful topics, from notation to commonly used identity lists and group theory.

The book requires some familiarity with master’s-level particle-physics concepts, for example via Halzen and Martin’s Quarks and Leptons or Paganini’s Fundamentals of Particle Physics. Some familiarity with quantum field theory is helpful but not needed for large parts of the book. No effort is made to be brief: the two-component spinor formalism is discussed in all its detail in a very pedagogic and clear way. Parts two and three are a significant enhancement to the well known A Supersymmetry Primer (arXiv:hep-ph/9709356), which is very popular among beginners to supersymmetry and written by Stephen Martin, one of the authors of this volume. A rich collection of exercises is included in every chapter, and the appendix chapters are no exception to this.

Do not let the word supersymmetry in the title fool you: even if you are not interested in supersymmetric extensions, you can find a detailed exposition of the two-component formalism for spinors, SM calculations with this formalism and a detailed discussion of how to design extensions of the scalar sector of the SM. Chapter three is particularly useful, describing in 54 pages how to get from the two-component to the four-component spinor formalism that is more familiar to many of us.

This is a book for advanced graduate students and researchers in particle-physics phenomenology, which nevertheless contains much that will be of interest to advanced physics students and particle-physics researchers in both theory and experiment. This is because the size of the volume allows the authors to start from the basics and dwell on topics that most other books of this type cover in less detail, making them less accessible. I expect that Dreiner, Haber and Martin will become a valuable resource for all those who are interested in the extensions of the SM, especially if they include supersymmetry.

The post From spinors to supersymmetry appeared first on CERN Courier.

]]>
Review In their new book, From Spinors to Supersymmetry, Herbi Dreiner, Howard Haber and Stephen Martin describe the two-component formalism of spinors and its applications to particle physics, quantum field theory and supersymmetry. https://cerncourier.com/wp-content/uploads/2025/01/CCJanFeb25_REV-spinors_feature.jpg
Shifting sands for muon g–2 https://cerncourier.com/a/shifting-sands-for-muon-g-2/ Wed, 20 Nov 2024 13:56:37 +0000 https://cern-courier.web.cern.ch/?p=111400 Two recent results may ease the tension between theory and experiment.

The post Shifting sands for muon g–2 appeared first on CERN Courier.

]]>
Lattice–QCD calculation

The Dirac equation predicts the g-factor of the muon to be precisely two, corresponding to a magnetic moment of one muon magneton (eħ/2mμ). Virtual lines and loops add roughly 0.1% to this value, giving rise to a so-called anomalous contribution often quantified by aμ = (g–2)/2. Countless electromagnetic loops dominate the calculation, spontaneous symmetry breaking is evident in the effect of weak interactions, and contributions from the strong force are non-perturbative. Despite this formidable complexity, theoretical calculations of aμ have been experimentally verified to nine significant figures.
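
To make the “roughly 0.1%” concrete, the leading correction to the Dirac value is Schwinger’s one-loop QED term – a textbook result quoted here for orientation:

\[
a_\mu^{\rm QED,\,1\text{-}loop} \;=\; \frac{\alpha}{2\pi} \;\approx\; 1.16\times10^{-3},
\qquad
g \;=\; 2\,(1+a_\mu) \;\approx\; 2.00232,
\]

with higher-order QED, electroweak and hadronic terms responsible for the remaining digits.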

The devil is in the 10th digit. The experimental world average for aμ currently stands more than 5σ above the Standard Model (SM) prediction published by the Muon g-2 Theory Initiative in a 2020 white paper. But two recent results may ease this tension in advance of a new showdown with experiment next year.

The first new input is data from the CMD-3 experiment at the Budker Institute of Nuclear Physics, which yields a value of aμ consistent with the experimental measurements. Comparable electron–positron (e+e−) collider data from the KLOE experiment at the National Laboratory of Frascati, the BaBar experiment at SLAC, the BESIII experiment at IHEP Beijing and CMD-3’s predecessor CMD-2 were the backbone of the 2020 theory white paper. With KLOE and CMD-3 now incompatible at the level of 5σ, theorists are exploring alternative bases for the theoretical prediction, such as an ab-initio approach based on lattice QCD and a data-driven approach using tau–lepton decays.

The second new result is an updated theory calculation of aμ by the Budapest–Marseille–Wuppertal (BMW) collaboration. BMW’s ab-initio lattice–QCD calculation of 2020 was the first to challenge the data-driven consensus expressed in the 2020 white paper. The recent update now claims a superior precision, driven in part by the pragmatic implementation of a data-driven approach in the low-mass region, where experiments are in good agreement. Though only accounting for 5% of the hadronic contribution to aμ, this “long distance” region is often the largest source of error in lattice–QCD calculations, and relatively insensitive to the use of finer lattices.

The new BMW result is fully compatible with the experimental world average, and incompatible with the 2020 white paper at the level of 4σ.

“It seems to me that the 0.9σ agreement between the direct experimental measurement of the magnetic moment of the muon and the ab-initio calculation of BMW has most probably postponed the possible discovery of new physics in this process,” says BMW spokesperson Zoltán Fodor (Wuppertal). “It is important to mention that other groups have partial results, too, so-called window results, and they all agree with us and in several cases disagree with the result of the data-driven method.”

These two analyses were among the many discussed at the seventh plenary workshop of the Muon g-2 Theory Initiative held in Tsukuba, Japan from 9 to 13 September. The theory initiative is planning to release an updated prediction in a white paper due to be published in early 2025. With multiple mature e+e− and lattice–QCD analyses underway for several years, attention now turns to tau decays – the subject of a soon-to-be-announced mini-workshop to ensure their full availability for consideration as a possible basis for the 2025 white paper. Input data would likely originate from tau decays recorded by the Belle experiment at KEK and the ALEPH experiment at CERN, both now decommissioned.

I am hopeful we will be able to establish consolidation between independent lattice calculations at the sub-percent level

“From a theoretical point of view, the challenge for including the tau data is the isospin rotation that is needed to convert the weak hadronic tau decay to the desired input for hadronic vacuum polarisation,” explains theory-initiative chair Aida X El-Khadra (University of Illinois). Hadronic vacuum polarisation (HVP) is the most challenging part of the calculation of aμ, accounting for the effect of a muon emitting a virtual photon that briefly transforms into a flurry of quarks and gluons just before it absorbs the photon representing the magnetic field (CERN Courier May/June 2021 p25).

Lattice QCD offers the possibility of a purely theoretical calculation of HVP. While BMW remains the only group to have published a full lattice-QCD calculation, multiple groups are zeroing in on its most sensitive aspects (CERN Courier September/October 2024 p21).

“The main challenge in lattice-QCD calculations of HVP is improving the precision to the desired sub-percent level, especially at long distances,” continues El-Khadra. “With the new results for the long-distance contribution by the RBC/UKQCD and Mainz collaborations that were already reported this year, and the results that are still expected to be released this fall, I am hopeful that we will be able to establish consolidation between independent lattice calculations at the sub-percent level. In this case we will provide a lattice-only determination of HVP in the second white paper.”

The post Shifting sands for muon g–2 appeared first on CERN Courier.

]]>
News Two recent results may ease the tension between theory and experiment. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24_NA_twoprong_feature-1-1.jpg
Inside pentaquarks and tetraquarks https://cerncourier.com/a/inside-pentaquarks-and-tetraquarks/ Wed, 20 Nov 2024 13:52:05 +0000 https://cern-courier.web.cern.ch/?p=111383 Marek Karliner and Jonathan Rosner ask what makes tetraquarks and pentaquarks tick, revealing them to be at times exotic compact states, at times hadronic molecules and at times both – with much still to be discovered.

The post Inside pentaquarks and tetraquarks appeared first on CERN Courier.

]]>
Strange pentaquarks

Breakthroughs are like London buses. You wait a long time, and three turn up at once. In 1963 and 1964, Murray Gell-Mann, André Petermann and George Zweig independently developed the concept of quarks (q) and antiquarks (q̄) as the fundamental constituents of the observed bestiary of mesons (qq̄) and baryons (qqq).

But other states were allowed too. Additional qq̄ pairs could be added at will, to create tetraquarks (qqq̄q̄), pentaquarks (qqqqq̄) and other states besides. In the 1970s, Robert L Jaffe carried out the first explicit calculations of multiquark states, based on the framework of the MIT bag model. Under the auspices of the new theory of quantum chromodynamics (QCD), this computationally simplified model ignored gluon interactions and considered quarks to be free, though confined in a bag with a steep potential at its boundary. These and other early theoretical efforts triggered many experimental searches, but yielded no clear-cut results.

New regimes

Evidence for such states took nearly two decades to emerge. The essential precursors were the discovery of the charm quark (c) at SLAC and BNL in the November Revolution of 1974, some 50 years ago (p41), and the discovery of the bottom quark (b) at Fermilab three years later. The masses and lifetimes of these heavy quarks allowed experiments to probe new regimes in parameter space where otherwise inexplicable bumps in energy spectra could be resolved (see “Heavy breakthroughs” panel).

Heavy breakthroughs

Double hidden charm

With the benefit of hindsight, it is clear why early experimental efforts did not find irrefutable evidence for multiquark states. For a multiquark state to be clearly identifiable, it is not enough to form a multiquark colour-singlet (a mixture of colourless red–green–blue, red–antired, green–antigreen and blue–antiblue components). Such a state also needs to be narrow and long-lived enough to stand out on top of the experimental background, and has to have distinct decay modes that cannot be explained by the decay of a conventional hadron. Multiquark states containing only light quarks (up, down and strange) typically have many open decay channels, with a large phase space, so they tend to be wide and short-lived. Moreover, they share these decay channels with excited states of conventional hadrons and mix with them, so they are extremely difficult to pin down.

Multiquark states with at least one heavy quark are very different. Once quarks are “dressed” by gluons, they acquire effective masses of the order of several hundred MeV, with all quarks coupling in the same way to gluons. For light quarks, the bare quark masses are negligible compared to the effective mass, and can be neglected to zeroth order. But for heavy quarks (c or b), the ratio of the bare quark masses to the effective mass of the hadron dramatically affects the dynamics and the experimental situation, creating narrow multiquark states that stand out. These states were not seen in the early searches simply because the relevant production cross sections are very small and particle identification requires very high spatial resolution. These features became accessible only with the advent of the huge luminosity and the superb spatial resolution provided by vertex detectors in bottom and charm factories such as BaBar, Belle, BESIII and LHCb.

The attraction between two heavy quarks scales like αs²mq, where αs is the strong coupling constant and mq is the mass of the quarks. This is because the Coulomb-like part of the QCD potential dominates, scaling as –αs/r as a function of distance r, and yielding an analogue of the Bohr radius ~1/(αsmq). Thus, the interaction grows approximately linearly with the heavy quark mass. In at least one case (discussed below), the highly anticipated but as yet undiscovered bbūd̄ tetraquark Tbb is expected to have a mass below the two-meson threshold, and therefore to be stable under strong interactions.
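
A minimal sketch of this scaling argument, using the standard one-gluon-exchange colour factor of 2/3 for a quark pair in the attractive colour-antitriplet channel and hydrogen-like formulae (an illustration of the parametric behaviour, not a quantitative statement about Tbb):

\[
V(r) \;\simeq\; -\frac{2}{3}\,\frac{\alpha_s}{r}
\quad\Longrightarrow\quad
r_{\rm Bohr} \;\sim\; \frac{1}{\tfrac{2}{3}\alpha_s\,\mu}, \qquad
E_{\rm bind} \;\sim\; -\frac{\big(\tfrac{2}{3}\alpha_s\big)^{2}\mu}{2}, \qquad \mu = \frac{m_q}{2},
\]

so the Coulombic binding grows roughly linearly with the heavy-quark mass, which is why a bb pair binds much more strongly than its cc analogue.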

Exclusively heavy states are also possible. In 2020 and in 2024, respectively, LHCb and CMS discovered exotic states Tcccc(6900) and Tcccc(6600), which both decay into two J/ψ particles, implying a quark content (cc̄cc̄). J/ψ does not couple to light quarks, so these states are unlikely to be hadronic molecules bound by light-meson exchange. Though they are too heavy to be the ground state of a (cc̄cc̄) compact tetraquark, they might perhaps be its excitations. Measuring their spin and parity would be very helpful in distinguishing between the various alternatives that have been proposed.

The first unambiguously exotic hadron, the X(3872) (dubbed χc1(3872) in the LHCb collaboration’s new taxonomy; see “What’s in a name?” panel), was discovered at the Belle experiment at KEK in Japan in 2003. Subsequently confirmed by many other experiments, its nature is still controversial. (More of that later.) Since then, there has been a rapidly growing body of experimental evidence for the existence of exotic multiquark hadrons. New states have been discovered at Belle, at the BaBar experiment at SLAC in the US, at the BESIII experiment at IHEP in China, and at the CMS and LHCb experiments at CERN (see “A bestiary of exotic hadrons“). In all cases with robust evidence, the exotic new states contain at least one heavy charm or bottom quark. The majority include two.

The key theoretical question is how the quarks are organised inside these multiquark states. Are they hadronic molecules, with two heavy hadrons bound by the exchange of light mesons? Or are they compact objects with all quarks located within a single confinement volume?

Compact candidate

The compact and molecular interpretations each provide a natural explanation for part of the data, but neither explains all. Both kinds of structures appear in nature, and certain states may be superpositions of compact and molecular states.

In the molecular case the deuteron is a good mental image. (As a bound state of a proton and a neutron, it is technically a molecular hexaquark.) In the compact interpretation, the diquark – an entangled pair of quarks with well-defined spin, colour and flavour quantum numbers – may play a crucial role. Diquarks have curious properties, whereby, for example, a strongly correlated red–green pair of quarks can behave like a blue antiquark, opening up intriguing possibilities for the interpretation of qqq̄q̄ and qqqqq̄ states.
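
The group theory behind the diquark statement is the standard SU(3) colour decomposition, quoted here for clarity (it is not spelled out in the article):

\[
\mathbf{3}\otimes\mathbf{3} \;=\; \bar{\mathbf{3}}\oplus\mathbf{6},
\]

where the antisymmetric antitriplet is the attractive channel and carries the same colour quantum numbers as an antiquark – which is why a tightly correlated diquark can stand in for an antiquark when assembling colour-singlet tetraquarks and pentaquarks.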

Compact states

A clearcut example of a compact structure is the Tbb tetraquark with quark content bbūd̄. Tbb has not yet been observed experimentally, but its existence is supported by robust theoretical evidence from several complementary approaches. As for any ground-state hadron, its mass is given to a good approximation by the sum of its constituent quark masses and their (negative) binding energy. The constituent masses implied here are effective masses that also include the quarks’ kinetic energies. The binding energy is negative as it was released when the compact state formed.

In the case of Tbb, the binding energy is expected to be so large that its mass is below all two-meson decay channels: it can only decay weakly, and must be stable with respect to the strong interaction. No such exotic hadron has yet been discovered, making Tbb a highly prized target for experimentalists. Such a large binding energy cannot be generated by meson exchange and must be due to colour forces between the very heavy b quarks. Tbb is an isoscalar with JP = 1+. Its charmed analogue, Tcc = (ccūd̄), also known as Tcc(3875)+, was observed by LHCb in 2021 to be a whisker away from stability, with a very small binding energy and width less than 1 MeV (CERN Courier September/October 2021 p7). The big difference between the binding energies of Tbb and Tcc, which makes the former stable and the latter unstable, is due to the substantially greater mass of the b quark than the c quark, as discussed in the panel above. An intermediate case, Tbc = (bcūd̄), is very likely also below threshold for strong decay and therefore stable. It is also easier to produce and detect than Tbb and therefore extremely tempting experimentally.

Molecular pentaquarks

At the other extreme, we have states that are most probably pure hadronic molecules. The most conspicuous examples are the Pc(4312), Pc(4440) and Pc(4457) pentaquarks discovered by LHCb in 2019, and labelled according to the convention adopted by the Particle Data Group as Pcc(4312)+, Pcc(4440)+ and Pcc(4457)+. All three have quark content (cc̄uud) and decay into J/ψp, with an energy release of order 300 MeV. Yet, despite having such a large phase space, all three have anomalously narrow widths of less than about 10 MeV. Put more simply, the pentaquarks decay remarkably slowly, given how much energy stands to be released.

But why should long life count against the pentaquarks being tightly bound and compact? In a compact (cc̄uud) state there is nothing to prevent the charm quark from binding with the anticharm quark, hadronising as J/ψ and leaving behind a (uud) proton. It would decay immediately with a large width.

Anomalously narrow

On the other hand, hadronic molecules such as ΣcD̄ and ΣcD̄* automatically provide a decay-suppression mechanism. Hadronic molecules are typically large, so the c quark inside the Σc baryon is typically far from the c̄ antiquark inside the D̄ or D̄* meson. Because of this, the formation of J/ψ = (cc̄) has a low probability, resulting in a long lifetime and a narrow width. (Unstable particles decay randomly, with fixed half-lives. According to Heisenberg’s uncertainty principle, this uncertainty on their lifetime yields a reciprocal uncertainty on their energy, which may be directly observed as the width of the peak in the spectrum of their measured masses when they are created in particle collisions. Long-lived particles exhibit sharply spiked peaks, and short-lived particles exhibit broad peaks. Though the lifetimes of strongly interacting particles are usually not measurable directly, they may be inferred from these “widths”, which are measured in units of energy.)
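
The conversion between width and lifetime is just the uncertainty relation; a representative number, using the quoted ~10 MeV widths and ħ ≈ 6.6 × 10⁻²² MeV s, illustrates the scale:

\[
\tau \;=\; \frac{\hbar}{\Gamma}, \qquad
\Gamma = 10~{\rm MeV} \;\Longrightarrow\; \tau \;\approx\; \frac{6.6\times10^{-22}~{\rm MeV\,s}}{10~{\rm MeV}} \;\approx\; 7\times10^{-23}~{\rm s},
\]

long by the standards of a strong decay with ~300 MeV of energy release, yet far too short to measure directly – hence the inference from the measured width.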

Additional evidence in favour of their molecular nature comes from the mass of Pc(4312) being just below the ΣcD̄ production threshold, and the masses of Pc(4440) and Pc(4457) being just below the ΣcD̄* production threshold. This is perfectly natural. Hadronic molecules are weakly bound, so they typically only form an S-wave bound state, with no orbital angular momentum. So ΣcD̄, which combines a spin-1/2 baryon and a spin-0 negative-parity meson, can only form a single state with JP = 1/2−. By contrast, ΣcD̄*, which combines a spin-1/2 baryon and a spin-1 negative-parity meson, can form two closely spaced states with JP = 1/2− and 3/2−, with a small splitting coming from a spin–spin interaction.
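
The quantum-number bookkeeping behind these assignments is simple angular-momentum and parity addition for an S-wave pair (the intrinsic parities used below are the standard ones for a ground-state baryon and for pseudoscalar and vector mesons):

\[
P \;=\; P_{\Sigma_c}\,P_{\bar D^{(*)}}\,(-1)^{L} \;=\; (+)(-)(+) \;=\; -\,, \qquad
\Sigma_c\bar D:\ \tfrac12\otimes 0 = \tfrac12^{-}, \qquad
\Sigma_c\bar D^{*}:\ \tfrac12\otimes 1 = \tfrac12^{-}\oplus\tfrac32^{-}.
\]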

An example of a possible mixture of a compact state and a hadronic molecule is provided by the X(3872) meson

The robust prediction of the JP quantum numbers makes it very straightforward in principle to kill this physical picture, if one were to measure JP values different from these. Conversely, measuring the predicted values of JP would provide a strong confirmation (see “The 23 exotic hadrons discovered at the LHC” table).

These predictions have already received substantial indirect support from the strange-pentaquark sector. The spin-parity of the Pccs(4338), which also has a narrow width below 10 MeV, has been determined by LHCb to be 1/2−, exactly as expected for a ΞcD̄ molecule (see “Strange pentaquark” figure).

The mysterious X(3872)

An example of a possible mixture of a compact state and a hadronic molecule is provided by the already mentioned X(3872) meson. Its mass is so close to the sum of the masses of a D0 meson and a D̄*0 meson that no difference has yet been established with statistical significance, but it is known to be less than about 1 MeV. It can decay to J/ψπ+π− with a branching ratio of (3.5 ± 0.9)%, releasing almost 500 MeV of energy. Yet its width is only of order 1 MeV. This is an even more striking case of relative stability in the face of naively expected instability than for the pentaquarks. At first sight, then, it is tempting to identify X(3872) as a clearcut D0D̄*0 hadronic molecule.
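
A back-of-the-envelope estimate shows why such a barely bound state must be spatially large: for a shallow S-wave bound state, the wavefunction extends over a distance set by the binding energy EB and the reduced mass μ of the two mesons. The numbers below are purely illustrative, since EB for X(3872) is consistent with zero within current uncertainties:

\[
R \;\sim\; \frac{\hbar c}{\sqrt{2\mu E_{B}}}, \qquad
\mu \approx 967~{\rm MeV},\ E_{B} = 0.2~{\rm MeV}
\;\Longrightarrow\;
R \;\approx\; \frac{197~{\rm MeV\,fm}}{20~{\rm MeV}} \;\approx\; 10~{\rm fm},
\]

an order of magnitude larger than an ordinary charmonium state.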

Particle precision

The situation is not that simple, however. If X(3872) is just a weakly bound hadronic molecule, it is expected to be very large, of the scale of a few fermi (10⁻¹⁵ m). So it should be very difficult to produce it in hard reactions, requiring a large momentum transfer. Yet this is not the case. A possible resolution might come from X(3872) being a mixture of a D0D̄*0 molecular state and χc1(2P), a conventional radial excitation of P-wave charmonium, which is much more compact and is expected to have a similar mass and the same JPC = 1++ quantum numbers. Additional evidence in favour of such a mixing comes from comparing the rates of the radiative decays X(3872) → J/ψγ and X(3872) → ψ(2S)γ.

The question associated with exotic mesons and baryons can be posed crisply: is an observed state a molecule, a compact multiquark system or something in between? We have given examples of each. Definitive compact-multiquark behaviour can be confirmed if a state’s flavour-SU(3) partners are identified. This is because compact states are bound by colour forces, which are only weakly sensitive to flavour-SU(3) rotations. (Such rotations exchange up, down and strange quarks, and to a good approximation the strong force treats these light flavours equally at the energies of charmed and beautiful exotic hadrons.) For example, if X(3872) should in fact prove to be a compact tetraquark, it should have charged isospin partners that have not yet been observed.

On the experimental front, the sensitivity of LHCb, Belle II, BESIII, CMS and ATLAS continues to reap great benefits for hadron spectroscopy. Together with the proposed super τ-charm factory in China, they are virtually guaranteed to discover additional exotic hadrons, expanding our understanding of QCD in its strongly interacting regime.

The post Inside pentaquarks and tetraquarks appeared first on CERN Courier.

]]>
Feature Marek Karliner and Jonathan Rosner ask what makes tetraquarks and pentaquarks tick, revealing them to be at times exotic compact states, at times hadronic molecules and at times both – with much still to be discovered. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24_EXOTIC_feature-1-1.jpg
A rich harvest of results in Prague https://cerncourier.com/a/a-rich-harvest-of-results-in-prague/ Wed, 20 Nov 2024 13:34:58 +0000 https://cern-courier.web.cern.ch/?p=111420 The 42nd international conference on high-energy physics reported progress across all areas of high-energy physics.

The post A rich harvest of results in Prague appeared first on CERN Courier.

]]>
The 42nd international conference on high-energy physics (ICHEP) attracted almost 1400 participants to Prague in July. Expectations were high, with the field on the threshold of a defining moment, and ICHEP did not disappoint. A wealth of new results showed significant progress across all areas of high-energy physics.

With the long shutdown on the horizon, the third run of the LHC is progressing in earnest. Its high-availability operation and mastery of operational risks were highly praised. Run 3 data is of immense importance as it will be the dataset that experiments will work with for the next decade. With the newly collected data at 13.6 TeV, the LHC experiments showed new measurements of Higgs and di-electroweak-boson production, though of course most of the LHC results were based on the Run 2 (2015 to 2018) dataset, which is by now impeccably well calibrated and understood. This also allowed ATLAS and CMS to bring in-depth improvements to reconstruction algorithms.

AI algorithms

A highlight of the conference was the improvements brought by state-of-the-art artificial-intelligence algorithms such as graph neural networks, both at the trigger and reconstruction level. A striking example of this is the ATLAS and CMS flavour-tagging algorithms, which have improved their rejection of light jets by a factor of up to four. This has important consequences. Two outstanding examples are: di-Higgs-boson production, which is fundamental for the measurement of the Higgs boson self-coupling (CERN Courier July/August 2024 p7); and the Higgs boson’s Yukawa coupling to charm quarks. Di-Higgs-boson production should be independently observable by both general-purpose experiments at the HL-LHC, and an observation of the Higgs boson’s coupling to charm quarks is getting closer to being within reach.

The LHC experiments continue to push the limits of precision at hadron colliders. CMS and LHCb presented new measurements of the weak mixing angle. The per-mille precision reached is close to that of the LEP and SLD measurements (CERN Courier September/October 2024 p29). ATLAS presented the most precise measurement to date (0.8%) of the strong coupling constant, extracted from the transverse-momentum differential cross section of Drell–Yan Z-boson production. LHCb provided a comprehensive analysis of the B0 → K*0μ+μ− angular distributions, which had previously presented discrepancies at the level of 3σ. Taking into account long-distance contributions significantly weakens the tension, down to 2.1σ.

Pioneering the highest luminosities ever reached at colliders (setting a record at 4.7 × 10³⁴ cm⁻² s⁻¹), SuperKEKB has been facing challenging conditions with repeated sudden beam losses. This is currently an obstacle to further progress to higher luminosities. Possible causes have been identified and are currently under investigation. Meanwhile, with the already substantial data set collected so far, the Belle II experiment has produced a host of new results. In addition to improved CKM angle measurements (alongside LHCb), in particular of the γ angle, Belle II (alongside BaBar) presented interesting new insights into the long-standing puzzle of inclusive versus exclusive measurements of |Vcb| and |Vub| (CERN Courier July/August 2024 p30), with new exclusive |Vcb| measurements that significantly reduce the previous 3σ tension.

Maurizio Pierini

ATLAS and CMS furthered their systematic journey in the search for new phenomena to leave no stone unturned at the energy frontier, with 20 new results presented at the conference. This landmark outcome of the LHC puts further pressure on the naturalness paradigm.

A highlight of the conference was the overall progress in neutrino physics. Accelerator-based experiments NOvA and T2K presented a first combined measurement of the mass difference, neutrino mixing and CP parameters. Neutrino telescopes IceCube with DeepCore and KM3NeT with ORCA (Oscillation Research with Cosmics in the Abyss) also presented results with impressive precision. Neutrino physics is now at the dawn of a bright new era of precision with the next-generation accelerator-based long baseline experiments DUNE and Hyper Kamiokande, the upgrade of DeepCore, the completion of ORCA and the medium baseline JUNO experiment. These experiments will bring definitive conclusions on the measurement of the CP phase in the neutrino sector and the neutrino mass hierarchy – two of the outstanding goals in the field.

The KATRIN experiment presented a new upper limit on the effective electron–anti-neutrino mass of 0.45 eV, well en route towards their ultimate sensitivity of 0.2 eV. Neutrinoless double-beta-decay search experiments KamLAND-Zen and LEGEND-200 presented limits on the effective neutrino mass of approximately 100 meV; the sensitivity of the next-generation experiments LEGEND-1T, KamLAND-Zen-1T and nEXO should reach 20 meV and either fully exclude the inverted ordering hypothesis or discover this long-sought process. Progress on the reactor neutrino anomaly was reported, with recent fission data suggesting that the fluxes are overestimated, thus weakening the significance of the anti-neutrino deficits.

Neutrinos were also a highlight for direct-dark-matter experiments, as XENON announced the observation of nuclear recoil events from ⁸B solar-neutrino coherent elastic scattering on nuclei, signalling that experiments are now reaching the neutrino fog. The conference also highlighted the considerable progress across the board on the roadmap laid out by Kathryn Zurek at the conference to search for dark matter in an extraordinarily large range of possibilities, spanning 89 orders of magnitude in mass from 10⁻²³ eV to 10⁵⁷ GeV. The roadmap includes cosmological and astrophysical observations, broad searches at the energy and intensity frontier, direct searches at low masses to cover relic-abundance-motivated scenarios, building a suite of axion searches, and pursuing indirect-detection experiments.

Lia Merminga and Fabiola Gianotti

Neutrinos also made the headlines in multi-messenger astrophysics with the announcement by the KM3NeT ARCA (Astroparticle Research with Cosmics in the Abyss) collaboration of a muon-neutrino event that could be the most energetic ever found. The muon produced in the neutrino interaction is compatible with an energy of approximately 100 PeV, opening a fascinating window on astrophysical processes at energies well beyond the reach of colliders. The conference showed that we are now well within the era of multi-messenger astrophysics, via beautiful neutrino, gamma-ray and gravitational-wave results.

The conference saw new bridges across fields being built. The birth of collider-neutrino physics, with the beautiful results from FASERν and SND, fills the gap in neutrino–nucleon cross sections between accelerator neutrinos and neutrino astronomy. ALICE and LHCb presented new results on He3 production that complement the AMS results. Astrophysical He3 could signal the annihilation of dark matter. ALICE also presented a broad, comprehensive review of the progress in understanding strongly interacting matter at extreme energy densities.

The highlight in the field of observational cosmology was the recent data from DESI, the Dark Energy Spectroscopic Instrument in operation since 2021, which brings splendid new baryon-acoustic-oscillation measurements. These precious new data agree with previous indirect measurements of the Hubble constant, keeping the tension with direct measurements in excess of 2.5σ. In combination with CMB measurements, the DESI measurements also set an upper limit on the sum of neutrino masses at 0.072 eV, in tension with the inverted-ordering hypothesis for neutrino masses. This limit is dependent on the cosmological model.

In everyone’s mind at the conference, and indeed across the domain of high-energy physics, it is clear that the field is at a defining moment in its history: we will soon have to decide what new flagship project to build. To this end, the conference organised a thrilling panel discussion featuring the directors of all the major laboratories in the world. “We need to continue to be bold and ambitious and dream big,” said Fermilab’s Lia Merminga, summarising the spirit of the discussion.

“As we have seen at this conference, the field is extremely vibrant and exciting,” said CERN’s Fabiola Gianotti at the conclusion of the panel. In these defining times for the future of our field, ICHEP 2024 was an important success. The progress in all areas is remarkable and manifest through the outstanding number of beautiful new results shown at the conference.

The post A rich harvest of results in Prague appeared first on CERN Courier.

]]>
Meeting report The 42nd international conference on high-energy physics reported progress across all areas of high-energy physics. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24FN_ICHEP1-2.jpg
The Balkans, in theory https://cerncourier.com/a/the-balkans-in-theory/ Wed, 13 Nov 2024 09:32:13 +0000 https://cern-courier.web.cern.ch/?p=111431 The Southeastern European Network in Mathematical and Theoretical Physics has organised scientific training and research activities since its foundation in Vrnjačka Banja in 2003.

The post The Balkans, in theory appeared first on CERN Courier.

]]>
The Southeastern European Network in Mathematical and Theoretical Physics (SEENET-MTP) has organised scientific training and research activities since its foundation in Vrnjačka Banja in 2003. Its PhD programme started in 2014, with substantial support from CERN.

The Thessaloniki School on Field Theory and Applications in HEP was the first school in the third cycle of the programme. Fifty-four students from 16 countries were joined by a number of online participants in a programme of lectures and tutorials.

We are now approaching 110 years since the general theory of relativity was formulated and black holes were first predicted theoretically, followed by at least half a century of developments on the quantum aspects of black holes. At the Thessaloniki School, Tarek Anous (Queen Mary) delivered a pivotal series of lectures on the thermal properties of black holes, entanglement and the information paradox, which remains unresolved.

Nikolay Bobev (KU Leuven) summarised the ideas behind holography; Daniel Grumiller (TU Vienna) addressed the application of the holographic principle in flat spacetimes, including Carrollian/celestial holography; Slava Rychkov (Paris-Saclay) gave an introduction to conformal field theory in various dimensions; while Vassilis Spanos (NKU Athens) provided an introduction to modern cosmology. The programme was completed by Kostas Skenderis (Southampton), who addressed renormalisation in conformal field theory, anti-de Sitter and de Sitter spacetimes.

The post The Balkans, in theory appeared first on CERN Courier.

]]>
Meeting report The Southeastern European Network in Mathematical and Theoretical Physics has organised scientific training and research activities since its foundation in Vrnjačka Banja in 2003. https://cerncourier.com/wp-content/uploads/2024/10/CCNovDec24FN_SEENET-1-1.jpg
An intricate web of interconnected strings https://cerncourier.com/a/an-intricate-web-of-interconnected-strings/ Tue, 24 Sep 2024 10:23:20 +0000 https://preview-courier.web.cern.ch/?p=111302 The Strings 2024 conference looked at the latest developments in the interconnected fields of quantum gravity and quantum field theory, all under the overarching framework of string theory.

The post An intricate web of interconnected strings appeared first on CERN Courier.

]]>
Strings 2024 participants

Since its inception in the mid-1980s, the Strings conference has sought to summarise the latest developments in the interconnected fields of quantum gravity and quantum field theory, all under the overarching framework of string theory. As one of the most anticipated gatherings in theoretical physics, the conference serves as a platform for exchanging knowledge, fostering new collaborations and pushing the boundaries of our understanding of the fundamental aspects of the physical laws of nature. The most recent edition, Strings 2024, attracted about 400 in-person participants to CERN in June, with several hundred more scientists following on-line.

One way to view string theory is as a model of fundamental interactions that provides a unification of particle physics with gravity. While generic features of the Standard Model and gravity arise naturally in string theory, it has lacked concrete experimental predictions so far. In recent years, the strategy has shifted from concrete model building to more systematically understanding the universal features that models of particle physics must satisfy when coupled to quantum gravity.

Into the swamp

Remarkably, there are very subtle consistency conditions that are invisible in ordinary particle physics, as they involve indirect arguments such as whether black holes can evaporate in a consistent manner. This has led to the notion of the “Swampland”, which encompasses the set of otherwise well-behaved quantum field theories that fail these subtle quantum-gravity consistency conditions. This may lead to concrete implications for particle physics and cosmology.

An important question addressed during the conference was whether these low-energy consistency conditions always point back to string theory as the only consistent “UV completion” (fundamental realisation at distance scales shorter than can be probed at low energies) of quantum gravity, as suggested by numerous investigations. Whether there is any other possible UV completion involving a version of quantum gravity unrelated to string theory remains an important open question, so it is no surprise that significant research efforts are focused in this direction.

Attempts at explicit model construction were also discussed, together with a joint discussion on cosmology, particle physics and their connections to string theory. Among other topics, recent progress on realising accelerating cosmologies in string theory was reported, as well as a stringy model for dark energy.

A different viewpoint, shared by many researchers, is to employ string theory rather as a framework or tool to study quantum gravity, without any special emphasis on its unification with particle physics. It has long been known that there is a fundamental tension when trying to combine gravity with quantum mechanics, which many regard as one of the most important open conceptual problems in theoretical physics. This becomes most evident when one zooms in on quantum black holes. It was in this context that the holographic nature of quantum gravity was discovered – the idea that all the information contained within a volume of space can be described by data on its boundary, suggesting that the universe’s fundamental degrees of freedom can be thought of as living on a holographic screen. This may not only hold the key for understanding the decay of black holes via Hawking radiation, but can also teach us important lessons about quantum cosmology.

Strings serves as a platform for pushing the boundaries of our understanding of the fundamental aspects of the physical laws of nature

Thousands of papers have been written on this subject within the last decades, and indeed holographic quantum gravity continues to be one of string theory’s most active subfields. Recent breakthroughs include the exact or approximate solution of quantum gravity in low-dimensional toy models in anti-de Sitter space, the extension to de Sitter space, an improved understanding of the nature of microstates of black holes and the precise way they decay, the discovery of connections between emergent geometry and quantum information theory, and the development of powerful tools for investigating these phenomena, such as bootstrap methods.

Other developments that were reviewed include the use of novel kinds of generalised symmetries and string field theory. Strings 2024 also gave a voice to more tangentially related areas such as scattering amplitudes, non-perturbative quantum field theory, particle phenomenology and cosmology. Many of these topics are interconnected with the core areas mentioned in this article and with each other, both technically and conceptually. It is this intricate web of highly non-trivial, consistent interconnections between subfields that generates meaning beyond the sum of its parts, and forms the unifying umbrella called string theory.

The conference concluded with a novel “future vision” session, which considered 100 crowd-sourced open questions in string theory that might plausibly be answered in the next 10 years. These 100 questions provide a glimpse of where string theory may head in the near future.

The post An intricate web of interconnected strings appeared first on CERN Courier.

]]>
Meeting report The Strings 2024 conference looked at the latest developments in the interconnected fields of quantum gravity and quantum field theory, all under the overarching framework of string theory. https://cerncourier.com/wp-content/uploads/2024/09/CCNovDec24_FN_Strings_feature.jpg
Tabletop experiment constrains neutrino size https://cerncourier.com/a/tabletop-experiment-constrains-neutrino-size/ Fri, 05 Jul 2024 08:46:51 +0000 https://preview-courier.web.cern.ch/?p=110840 How big is a neutrino? Results from BeEST set new limits on the size of the neutrino’s wave packet, but theorists are at odds over how to interpret the data.

The post Tabletop experiment constrains neutrino size appeared first on CERN Courier.

]]>
The BeEST experiment

How big is a neutrino? Though the answer depends on the physical process that created it, knowledge of the size of neutrino wave packets is at present so wildly unconstrained that every measurement counts. New results from the Beryllium Electron capture in Superconducting Tunnel junctions (BeEST) experiment at TRIUMF, Canada, set lower limits on the size of the neutrino’s wave packet in terrestrial experiments – though theorists are at odds over how to interpret the data.

Neutrinos are created as a coherent superposition of mass eigenstates. Each eigenstate is a wave packet with a unique group velocity. If the wave packets are too narrow, they eventually stop overlapping as they propagate, and quantum interference is lost. If the wave packets are too broad, a single mass eigenstate is resolved by Heisenberg’s uncertainty principle, and quantum interference is also lost. No quantum interference means no neutrino oscillations.

“Coherence conditions constrain the lengths of neutrino wave packets both from below and above,” explains theorist Evgeny Akhmedov of MPI-K Heidelberg. “For neutrinos, these constraints are compatible, and the allowed window is very large because neutrinos are very light. This also hints at an answer to the frequently asked question of why charged leptons don’t oscillate.”

The spatial extent of the neutrino wave packet has so far only been constrained to within 13 orders of magnitude by reactor-neutrino oscillations, say the BeEST team. If wave-packet sizes were at the experimental lower limit set by the world’s oscillation data, this could impact future oscillation experiments, such as the Jiangmen Underground Neutrino Observatory (JUNO) that is currently under construction in China.

“This could have destroyed JUNO’s ability to probe the neutrino mass ordering,” says Akhmedov, “however, we expect the actual sizes to be at least six orders of magnitude larger than the lowest limit from the world’s oscillation data. We have no hope of probing them in terrestrial oscillation experiments, in my opinion, though the situation may be different for astrophysical and cosmological neutrinos.”

BeEST uses a novel method to constrain the size of the neutrino wave packet. The group creates electron neutrinos via electron capture on unstable ⁷Be nuclei produced at the TRIUMF–ISAC facility in Vancouver. In the final state there are only two products: the electron neutrino and a newly transmuted ⁷Li daughter atom that receives a tiny energy “kick” by emitting the neutrino. By embedding the ⁷Be isotopes in superconducting quantum sensors at 0.1 K, the collaboration can measure this low-energy recoil to high precision. Via the uncertainty principle, the team infers a limit on the spatial localisation of the entire final-state system of 6.2 pm – more than 1000 times larger than the nucleus itself.
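The logic of that inference can be illustrated with a rough order-of-magnitude sketch. The recoil energy of about 57 eV and, in particular, the few-eV energy resolution used below are illustrative numbers assumed for this sketch, not values taken from the preprint; they merely show how an eV-scale measurement of a sub-keV recoil translates, via Heisenberg’s uncertainty principle, into a picometre-scale localisation limit.

```python
import math

hbar_c  = 197327.0   # eV*pm
M_Li7   = 6.53e9     # eV, approximate mass of the 7Li daughter
E_rec   = 57.0       # eV, approximate nuclear recoil energy in 7Be electron capture
sigma_E = 2.0        # eV, assumed (illustrative) sensor energy resolution

# Non-relativistic recoil: E = p^2 / 2M, so sigma_p = M * sigma_E / p
p_c       = math.sqrt(2.0 * M_Li7 * E_rec)   # ~860 keV
sigma_p_c = M_Li7 * sigma_E / p_c            # ~15 keV

# Heisenberg: sigma_x >= hbar / (2 * sigma_p)
sigma_x = hbar_c / (2.0 * sigma_p_c)
print(f"Localisation limit: {sigma_x:.1f} pm")   # a few pm, comparable to the quoted 6.2 pm
```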

Consensus has not been reached on how to infer the new lower limit on the size of the neutrino wave packet, with the preprint quoting two lower limits in the vicinity of 10⁻¹¹ m and 10⁻⁸ m based on different theoretical assumptions. Although they differ dramatically, even the weaker limit improves upon all previous reactor oscillation data by more than an order of magnitude, and is enough to rule out decoherence effects as an explanation for sterile-neutrino anomalies, says the collaboration.

“I think the more stringent limit is correct,” says Akhmedov, who points out that this is only about 1.5 orders of magnitude lower than some theoretical predictions. “I am not an experimentalist and therefore cannot judge whether an improvement of 1.5 orders of magnitude can be achieved in the foreseeable future, but I very much hope that this is possible.”

The post Tabletop experiment constrains neutrino size appeared first on CERN Courier.

]]>
News How big is a neutrino? Results from BeEST set new limits on the size of the neutrino’s wave packet, but theorists are at odds over how to interpret the data. https://cerncourier.com/wp-content/uploads/2024/07/CCJulAug24_NA_triumf_feature.jpg
High time for holographic cosmology https://cerncourier.com/a/high-time-for-holographic-cosmology/ Fri, 05 Jul 2024 08:19:41 +0000 https://preview-courier.web.cern.ch/?p=110889 On the Origin of Time is an intellectually thrilling book and a worthy sequel to Stephen Hawking’s bestsellers, writes Wolfgang Lerche.

The post High time for holographic cosmology appeared first on CERN Courier.

]]>
On the Origin of Time is an intellectually thrilling book and a worthy sequel to Stephen Hawking’s bestsellers. Thomas Hertog, who was a student and collaborator of Hawking, suggests that it may be viewed as the next book the famous scientist would have written if he were still alive. While addressing fundamental questions about the origin of the cosmos, Hertog sprinkles the text with anecdotes from his interactions with Hawking, easing up on the otherwise intense barrage of ideas and concepts. But despite its relaxed and popular style, the book will be most useful for physicists with a basic education in relativity and quantum theory.

Expanding universes

The book starts with an exhaustive journey through the history of cosmology. It reviews the ancient idea of an eternal mathematical universe, passes through the ages of Copernicus and Newton, and then enters the modern era of Einstein’s universe. Hertog thoroughly explores static and expanding universes, Hoyle’s steady-state cosmos, Hartle and Hawking’s no-boundary universe, Guth’s inflationary universe and Linde’s multiverse with eternal inflation. Everything culminates in the proposal for holographic quantum cosmology that the author developed together with the late Hawking.

What makes the book especially interesting is its philosophical reflections on the historical evolution of various underlying scientific paradigms. For example, the ancient Greeks developed the Platonic view that the workings of the world should be governed by eternal mathematical laws. This laid the groundwork for the reductionistic worldview that many scientists – especially particle physicists – subscribe to today.

Hertog argues that this way of thinking is flawed, especially when confronted with a Big Bang followed by a burst of inflation. Given the supremely fine-tuned structure of our universe, as is necessitated by the existence of atoms, galaxies and ultimately us, how could the universe “know” back at the time of the Big Bang that this fine-tuned world would emerge after inflation and phase transitions?

On the Origin of Time: Stephen Hawking’s Final Theory

The quest to scientifically understand this apparent intelligent design has led to physical scenarios such as eternal inflation, which produces an infinite collection of pocket universes with their own laws. These ideas blend the anthropic principle – that only a life-friendly universe can be observed – into the narrative of a multiverse.

However, for anthropic reasoning to make sense, one needs to specify what a typical observer would be, observes Hertog, because otherwise the statement is circular. Instead, he argues that one should interpret the history of the universe as an evolutionary process. Not only would physical objects continuously evolve, but also the laws that govern them, thereby building up an enormous chain of frozen accidents analogous to the evolutionary tree of biological species on Earth.

This represents a major paradigm shift as it introduces a retrospective element: one can only understand evolution by looking at it backwards in time. Deterministic and causal explanations apply only at a crude, coarse-grained level, while the precise way that structures and laws play out is governed by accumulated accidents. Essentially the question “how did everything start?” is superseded by the question “how did our universe become as it is today?” This may be seen as adopting a top-down view (into the past) instead of a bottom-up view (from the past).

Hawking criticised traditional cosmology for hiding certain assumptions, in particular the separation of the fundamental laws from initial boundary conditions and from the role of the observer. Instead, one should view the universe, at its most fundamental level, as a quantum superposition of many possible spacetimes, of which the observer is an intrinsic part.

From this Everettian viewpoint, wavefunctions behave like separate branches of reality. A measurement is like a fork in the road, where history divides into different outcomes. This line of thought has significant consequences. The author presents an illuminating analogy with the so-called delayed-choice double-slit experiment, which was first conceived by John Archibald Wheeler. Here the measurement that determines whether an electron behaves as particle or wave is delayed until after the electron has already passed the slit. This demonstrates that the process of observation introduces a retroactive component which, in a sense, creates the past history of the electron.

The fifth dimension 

Further ingredients are needed to transform this collection of ideas to a concrete proposal, argues Hertog. In short, these are quantum entanglement and holography. Holography has been recognised as a key property of quantum gravity, following Maldacena’s work on quantum black holes. It posits that all the information about the interior of a black hole is encoded at its horizon, which acts like a holographic screen. Inside, a fictitious fifth dimension emerges that plays the role of an energy scale.

A holographic universe would be the polar opposite of a Platonic universe with eternal laws

In Hawking and Hertog’s holographic quantum universe, one considers a Euclidean universe where the role of the holographic screen is played by the surface of our observations. The main idea is that the emergent dimension is time itself! In essence, the observed universe, with all its complexity, is like a holographic screen whose quantum bits encode its past history. Moving from the screen to the interior is equivalent to going back in time, from a highly entangled complex universe to a gradually less structured universe with fading physical laws and less entangled qubits. Eventually no entangled qubits remain. This is the origin of time as well as of the physical laws. Such a holographic universe would be the polar opposite of a Platonic universe with eternal laws.

Could these ideas be tested? Hertog argues that an observable imprint in the spectrum of primordial gravitational waves could be discovered in the future. For now, On the Origin of Time is delightful food for thought.

The post High time for holographic cosmology appeared first on CERN Courier.

]]>
Review On the Origin of Time is an intellectually thrilling book and a worthy sequel to Stephen Hawking’s bestsellers, writes Wolfgang Lerche. https://cerncourier.com/wp-content/uploads/2024/07/CCJulAug24_REV_Hawking.jpg
The next 10 years in astroparticle theory https://cerncourier.com/a/the-next-10-years-in-astroparticle-theory/ Fri, 05 Jul 2024 08:16:30 +0000 https://preview-courier.web.cern.ch/?p=110857 Newly appointed EuCAPT director Silvia Pascoli sets out her vision for disentangling fundamental questions involving dark matter, the baryon asymmetry, neutrinos, cosmic rays, gravitational waves, dark energy and other cosmic relics.

The post The next 10 years in astroparticle theory appeared first on CERN Courier.

]]>
Pulsar timing arrays

Astroparticle physics connects the extremely small with the extremely large. At the interface of particle physics, cosmology and astronomy, the field ties particles and interactions to the hot Big Bang cosmological model. This synergy allows us to go far beyond the limitations of terrestrial probes in our quest to understand nature at its most fundamental level. A typical example is neutrino masses, where constraints from cosmological observations of large-scale structure formation far exceed current bounds from terrestrial experiments. Astroparticle theory (APT) has advanced rapidly in the past 10 years. And this looks certain to continue in the next 10.

Today, neutrino masses, dark matter and the baryon asymmetry of the universe are the only evidence we have of physics beyond the Standard Model (BSM) of particle physics. Astroparticle theorists study how to extend the theory towards a new Standard Model – and the cosmological consequences of doing so.

New insights

For a long time, work on dark matter focused on TeV-scale models, in parallel with searches at the LHC and in ultra-low-noise detectors. The scope has now broadened to a much larger range of masses and models, from ultralight dark matter and axions to sub-GeV dark matter and WIMPs. Theoretical developments have gone hand-in-hand with new experimental opportunities. In the next 10 years, much larger detectors are planned for WIMP searches aiming towards the neutrino floor. Pioneering experimental efforts, even borrowing techniques from atomic and condensed-matter physics, test dark matter with much lower masses, providing new insights into what dark matter may be made of.

I strongly welcome efforts to broaden the reach in mass scales to efficiently hunt for any hint of what the new physics BSM may be

Neutrinos provide a complementary window on BSM physics. It is just over 25 years since the discovery of neutrino oscillation provided evidence that neutrinos have mass – a fact that cannot be accounted for in the SM (CERN Courier May/June 2024 p29). But the origin of neutrino masses remains a mystery. In the coming decade, neutrinoless double-beta decay experiments and new large experiments, such as JUNO, DUNE (see “A gold mine for neutrino physics“) and Hyper-Kamiokande, will provide a much clearer picture, determining the mass ordering and potentially discovering the neutrino’s nature and whether it violates CP symmetry. These results may, via leptogenesis, be related to the origin of the matter–antimatter asymmetry of the universe.

Recently, there has been renewed interest in models with scales accessible to current particle-physics experiments. These will exploit the powerful beams and capable detectors of the current and future experimental neutrino programme, and collider-based searches for heavy neutral leptons with MeV-to-TeV masses.

Overall, while the multi-TeV scale should continue to be a key focus for both particle and astroparticle physics experiments, I strongly welcome the theoretical and experimental efforts to broaden the reach in mass scales to efficiently hunt for any hint of what the new physics BSM may be.

Silvia Pascoli

Astroparticle physics also studies the particles that arrive on Earth from all around our universe. They come from extreme astrophysical environments, such as supernovae and active galactic nuclei, where they may be generated and accelerated to the highest energies. Thanks to their detection we can study the processes that fuel these astrophysical objects and gain an insight into their evolution (see “In defiance of cosmic-ray power laws“).

The discovery of gravitational waves (GWs) just a few years ago has shed new light on this field. Together with gamma rays, cosmic rays and the high-energy neutrinos detected at IceCube, the field of multi-messenger astronomy is in full bloom. In the coming years it will get a boost from the results of new, large experiments such as KM3NeT, the Einstein Telescope, LISA and the Cherenkov Telescope Array – as well as many new theoretical developments, such as advanced particle-theory techniques for GW predictions.

In the field of GWs, last year’s results from pulsar timing arrays indicate the presence of a stochastic background of GWs. What is its origin? Is it of astrophysical nature or does it come from some dramatic event in the early universe, such as a strong first-order phase transition? In this latter case, we would be getting a glimpse of the universe when it was just born, opening up a new perspective on fundamental particles and interactions. Could it be that we have seen a new GeV-scale dark sector at work? It is too early to tell. But this is very exciting.

The post The next 10 years in astroparticle theory appeared first on CERN Courier.

]]>
Opinion Newly appointed EuCAPT director Silvia Pascoli sets out her vision for disentangling fundamental questions involving dark matter, the baryon asymmetry, neutrinos, cosmic rays, gravitational waves, dark energy and other cosmic relics. https://cerncourier.com/wp-content/uploads/2024/07/CCJulAug24_VIEW_Pulsar_feature.jpg
A logical freight train https://cerncourier.com/a/a-logical-freight-train/ Wed, 08 May 2024 09:35:51 +0000 https://preview-courier.web.cern.ch/?p=110661 Michael Duff has published a timely reflection on Steven Weinberg's scientific legacy.

The post A logical freight train appeared first on CERN Courier.

]]>
Steven Weinberg was a logical freight train – for many, the greatest theorist of the second half of the 20th century. It is timely to reflect on his legacy, the scientific component of which is laid out in a new collection of his publications selected by theoretical physicist Michael Duff (Imperial College).

Six chapters cover Weinberg’s most consequential contributions to effective field theory, the Standard Model, symmetries, gravity, cosmology and short-form popular science writing. I can’t identify any notable omissions and I doubt many others would, though some may raise an eyebrow at the exclusion of his paper deriving the Lee–Weinberg bound. Duff brings each chapter to life with first-hand anecdotes and details that will delight those of us most greatly separated from historical events. I am relatively young, and only had one meaningful interaction with Steven Weinberg. Though my contemporaries and I inhabit a scientific world whose core concepts are interwoven with, if not formed by, Steven Weinberg’s scientific legacy, unlike Michael Duff we are poorly qualified to comment historically on the ecosystem in which this legacy grew, or on aspects of his personality. This makes his commentary particularly valuable to younger readers.

I can envisage three distinct audiences for this new collection. The first is lay theorists – those who are widely enough read to recognise the depth of Weinberg’s impact in theoretical physics and would like to know more. Such readers will find Duff’s introductions to be insightful and entertaining – helpful preparation for the more technical aspects of the papers, though expertise is required to fully grapple with many of them. There are also a few hand-picked non-technical articles one would otherwise not encounter without some serious investigative effort, including some accessible articles on quantum field theory, effective field theory and life in the multiverse, in addition to the dedicated section on popular articles. These will delight any theory aficionado.

The second audience is practising theorists. If you’re going to invest in a printed collection of publications, then Weinberg is an obvious protagonist. Particle theorists consult his articles so often that they may as well have them close at hand. This collection contains those most often revisited and ought to be useful in this respect. Duff’s introductions also expose technical interconnections between the articles that might otherwise be missed.

Steven Weinberg: Selected Papers

The third audience I have in mind are beginning graduate students in particle theory, cosmology and beyond. It would not be a mistake to put this collection on recommended reading lists. In due course, most students should read many of these papers multiple times, so why not get on with it from the get-go? The section on effective field theories (EFTs) contains many valuable key ideas and perspectives. Plenty of those core concepts are still commonly encountered more by osmosis than with any rigour, and this can lead to confused notions around the general approach of EFT. Perhaps an incomplete introduction to EFT could be avoided for graduate students by cutting straight to the fundamentals contained here? The cosmology section also reveals many important modern concepts alongside lucid and fearless wrestling with big questions. The papers on gravity detail techniques that are frequently encountered in any first foray into modern amplitudology, as well as strategies to infer general lessons in quantum field theory from symmetries and self-consistency alone.

In my view, however, the most important section for beginning graduate students is that on the construction of the Standard Model (SM). It may be said that a collective amnesia has emerged regarding the scientific spirit that drove its development. The SM was built by model builders. I don’t say this facetiously. They made educated guesses about the structure of the “ultraviolet” (microscopic) world based on the “infrared” (long-distance) breadcrumbs embedded within low-energy experimental observations. Decades after this swashbuckling era came to an end, there is a growing tendency to view the SM as something rigid, providentially bestowed and permanent. The academic bravery and risk-taking that was required to take the necessary leaps forward then, and which may be required now, is no better demonstrated than in “A Model of Leptons”. All young theorists should read this multiple times. A Model of Leptons shows that Steven Weinberg was not only an unstoppable force of logic, but also a plucky risk-taker. It’s inspirational that its final paragraph, which laid out the structure of nature at the electroweak scale, ends with doubt and speculation: “And if this model is renormalisable, then what happens when we extend it to include the couplings of A and B to the hadrons?” By working their way through this collection, graduate students may be inspired to similar levels of ambition and jeopardy.

Amongst the greatest scientists of the last century

In the weeks that followed the passing of Steven Weinberg, I sensed amongst a number of colleagues of all generations some moods that I could have anticipated: the loss not only of a bona fide truth-seeker, but also of a leader, frequently the leader. I also perceived a feeling that transcended the scientific realm alone, of someone whose creative genius ought to be recognised amongst the greatest of scientists, musicians, artists and humanity of the last century. How can we productively reflect on that? I imagine we would all do well to learn not only of Weinberg’s important individual scientific insights, but also to attempt to absorb his overall methodology in identifying interesting questions, in breaking new trails in fundamental physics, and in pursuing logic and clarity wherever they may take you. This collection is not a bad place to start.

The post A logical freight train appeared first on CERN Courier.

]]>
Review Michael Duff has published a timely reflection on Steven Weinberg's scientific legacy. https://cerncourier.com/wp-content/uploads/2024/05/CCMayJun24_REV_weinberg.jpg
The inventive pursuit of UHF gravitational waves https://cerncourier.com/a/the-inventive-pursuit-of-uhf-gravitational-waves/ Sat, 04 May 2024 15:52:54 +0000 https://preview-courier.web.cern.ch/?p=110672 Since their first direct detection in 2015, gravitational waves have become pivotal in our quest to understand the universe.

The post The inventive pursuit of UHF gravitational waves appeared first on CERN Courier.

]]>
Since their first direct detection in 2015, gravitational waves (GWs) have become pivotal in our quest to understand the universe. The ultra-high-frequency (UHF) band offers a window to discover new physics beyond the Standard Model (CERN Courier March/April 2022 p22). Unleashing this potential requires theoretical work to investigate possible GW sources and experiments with far greater sensitivities than those achieved today.

A workshop at CERN from 4 to 8 December 2023 leveraged impressive experimental progress in a range of fields. Attended by nearly 100 international scientists – a noteworthy increase from the 40 experts who attended the first workshop at ICTP Trieste in 2019 – the workshop showcased the field’s expanded research interest and collaborative efforts. Concretely, about 10 novel detector concepts have been developed since the first workshop.

One can look for GWs in a few different ways: observing changes in the space between detector components, exciting vibrations in detectors, and converting GWs into electromagnetic radiation in strong magnetic fields. Substantial progress has been made in all three experimental directions.

Levitating concepts

The leading concepts for the first approach involve optically levitated sensors such as high-aspect-ratio sodium–yttrium–fluoride prisms, and semi-levitated sensors such as thin silicon or silicon–nitride nanomembranes in long optical resonators. These technologies are currently under study by various groups in the Levitated Sensor Detectors collaboration and at DESY.

For the second approach, the main focus is on millimetre-scale quartz cavities similar to those used in precision clocks. A network of such detectors, known as GOLDEN, is being planned, involving collaborations among UC Davis, University College London and Northwestern University. Superconducting radio-frequency cavities also present a promising technology. A joint effort between Fermilab and DESY is leveraging the existing MAGO prototype to gain insights and design further optimised cavities.

Regarding the third approach, a prominent example is optical high-precision interferometry, combined with a series of accelerator dipole magnets similar to those used in the light-shining-through-a-wall axion-search experiment, ALPS II (Any Light Particle Search II) or the axion helioscope CAST and its planned successor IAXO. In fact, ALPS II is anticipated to commence a dedicated GW search in 2028. Additionally, other notable concepts inspired by axion dark-matter searches involve toroidal magnets, exemplified by experiments like ABRACADABRA, or solenoidal magnets such as BASE or MADMAX.

All three approaches stand to benefit from burgeoning advances in quantum sensing, which promise to enhance sensitivities by orders of magnitude. In this landscape, axion dark-matter searches and UHF GW detection are poised to work in close collaboration, leveraging quantum sensing to achieve unprecedented results. Concepts that demonstrate synergies with axion-physics searches are crucial at this stage, and can be facilitated by incremental investments. Such collaboration builds awareness within the scientific community and presents UHF searches as an additional, compelling science case for their construction.

The workshop showcased the field’s expanded research interest and collaborative efforts

Cross-disciplinary research is also crucial to understand cosmological sources and constraints on UHF GWs. For the former, our understanding of primordial black holes has significantly matured, transitioning from preliminary estimates to a robust framework. Additional sources, such as parabolic encounters and exotic compact objects, are also gaining clarity. For the latter, the workshop highlighted how strong magnetic fields in the universe, such as those in extragalactic voids and planetary magnetospheres, can help set limits on the conversion between electromagnetic and gravitational waves.

Despite much progress, the sensitivity needed to detect UHF GWs remains a visionary goal, requiring the constant pursuit of inventive new ideas. To aid this, the community is taking steps to be more inclusive. The living review produced after the first workshop (arXiv:2011.12414) will be revised to be more accessible for people outside our community, breaking down detector concepts into fundamental building blocks for easier understanding. Plans are also underway to establish a comprehensive research repository and standardise data formats. These initiatives are crucial for fostering a culture of open innovation and expanding the potential for future breakthroughs in UHF GW research. Finally, a new, fully customisable and flexible GW plotter including the UHF frequency range is being developed to benefit the entire GW community.

The journey towards detecting UHF GWs is just beginning. While current sensitivities are not yet sufficient, the community’s commitment to developing innovative ideas is unwavering. With the collective efforts of a dedicated scientific community, the next leap in gravitational-wave research is on the horizon. Limits exist to be surpassed!

The post The inventive pursuit of UHF gravitational waves appeared first on CERN Courier.

]]>
Meeting report Since their first direct detection in 2015, gravitational waves have become pivotal in our quest to understand the universe. https://cerncourier.com/wp-content/uploads/2024/05/CCMayJun24_FN_GW.jpg
The neutrino mass puzzle https://cerncourier.com/a/the-neutrino-mass-puzzle/ Sat, 04 May 2024 15:45:38 +0000 https://preview-courier.web.cern.ch/?p=110600 André de Gouvêa explains why neutrino masses imply the existence of new fundamental fields.

The post The neutrino mass puzzle appeared first on CERN Courier.

]]>
After all these years, neutrinos remain extraordinary – and somewhat deceptive. The experimental success of the three-massive-neutrino paradigm over the past 25 years makes it easy to forget that massive neutrinos are not part of the Standard Model (SM) of particle physics.

The problem lies with how neutrinos acquire mass. Nonzero neutrino masses are not possible without the existence of new fundamental fields, beyond those that are part of the SM. And we know virtually nothing about the particles associated with them. They could be bosons or fermions, light or heavy, charged or neutral, and experimentally accessible or hopelessly out of reach.

This is the neutrino mass puzzle. At its heart is the particle’s uniquely elusive nature, which is both the source of the problem and the main challenge in resolving it.

Mysterious and elusive

Despite outnumbering other known massive particles in the universe by 10 orders of magnitude, neutrinos are the least understood of the matter particles. Unlike electrons, they do not participate in electromagnetic interactions. Unlike quarks, they do not participate in the strong interactions that bind protons and neutrons together. Neutrinos participate only in aptly named weak interactions. Out of the trillions of neutrinos that the Sun beams through you each second, only a handful will interact with your body during your lifetime.


Neutrino physics has therefore had a rather tortuous and slow history. The existence of neutrinos was postulated in 1930 but only confirmed in the 1950s. The hypothesis that there are different types of neutrinos was first raised in the 1940s but only confirmed in the 1960s. And the third neutrino type, postulated when the tau lepton was discovered in the 1970s, was only directly observed in the year 2000. Nonetheless, over the years neutrino experiments have played a decisive role in the development of the most successful theory in modern physics: the SM. And at the turn of the 21st century, neutrino experiments revealed that there is something missing in its description of particle physics.

Neutrinos are fermions with spin one-half that interact with the charged leptons (the electron, muon and tau lepton) and the particles that mediate the weak interactions (the W and Z bosons). There are three neutrino types, or flavours: electron-type (νe), muon-type (νμ) and tau-type (ντ), and each interacts exclusively with its namesake charged lepton. One of the predictions of the SM is that neutrino masses are exactly zero, but a little over 25 years ago, neutrino experiments revealed that this is not exactly true. Neutrinos have tiny but undeniably nonzero masses.

Mixing it up

The search for neutrino masses is almost as old as Pauli’s 93-year-old postulate that neutrinos exist. They were ultimately discovered around the turn of the millennium through the observation of neutrino flavour oscillations. It turns out that we can produce one of the neutrino flavours (for example νμ) and later detect it as a different flavour (for example νe) so long as we are willing to wait for the neutrino flavour to change. The probability associated with this phenomenon oscillates in spacetime with a characteristic distance that is inversely proportional to the differences of the squares of the neutrino masses. Given the tininess of neutrino masses and mass splittings, these distances are frequently measured in hundreds of kilometres in particle-physics experiments.
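As a concrete illustration of those distances, the minimal two-flavour sketch below evaluates the textbook vacuum-oscillation formula; the beam energy, baseline, mixing and splitting values are assumptions chosen only to show the hundreds-of-kilometres scale.

```python
import math

def osc_length_km(E_GeV, dm2_eV2):
    """Distance over which the two-flavour vacuum oscillation completes one full cycle."""
    return math.pi * E_GeV / (1.267 * dm2_eV2)

def flavour_change_prob(E_GeV, L_km, dm2_eV2, sin2_2theta):
    """Textbook two-flavour vacuum oscillation probability."""
    return sin2_2theta * math.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

# Assumed illustrative values: a 1 GeV beam, maximal mixing, splitting of 2.5e-3 eV^2
E, dm2, s22 = 1.0, 2.5e-3, 1.0
print(f"Oscillation length: {osc_length_km(E, dm2):.0f} km")            # ~1000 km
print(f"Flavour-change probability at 295 km: "
      f"{flavour_change_prob(E, 295.0, dm2, s22):.2f}")                 # ~0.65
```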

Neutrino oscillations also require the leptons to mix. This means that the neutrino flavour states are not particles with a well defined mass but are quantum superpositions of different neutrino states with well defined masses. The three mass eigenstates are related to the three flavour eigenstates via a three-dimensional mixing matrix, which is usually parameterised in terms of mixing angles and complex phases.
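For reference, the parameterisation alluded to here is conventionally written as a product of three rotations (the standard PDG-style convention); if neutrinos turn out to be Majorana particles, this matrix is multiplied on the right by a diagonal matrix containing two further phases.

```latex
% Standard parameterisation of the 3x3 lepton (PMNS) mixing matrix,
% with c_ij = cos(theta_ij), s_ij = sin(theta_ij) and a CP-violating phase delta
\begin{equation*}
\begin{pmatrix} \nu_e \\ \nu_\mu \\ \nu_\tau \end{pmatrix}
= U \begin{pmatrix} \nu_1 \\ \nu_2 \\ \nu_3 \end{pmatrix},
\qquad
U =
\begin{pmatrix} 1 & 0 & 0 \\ 0 & c_{23} & s_{23} \\ 0 & -s_{23} & c_{23} \end{pmatrix}
\begin{pmatrix} c_{13} & 0 & s_{13}\,e^{-i\delta} \\ 0 & 1 & 0 \\ -s_{13}\,e^{i\delta} & 0 & c_{13} \end{pmatrix}
\begin{pmatrix} c_{12} & s_{12} & 0 \\ -s_{12} & c_{12} & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\end{equation*}
```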

The masses of all known matter particles

In the last few decades, precision measurements of neutrinos produced in the Sun, in the atmosphere, in nuclear reactors and in particle accelerators in different parts of the world, have measured the mixing parameters at the several percent level. Assuming the mixing matrix is unitary, all but one have been shown to be nonzero. The measurements have revealed that the three neutrino mass eigenvalues are separated by two different mass-squared differences: a small one of order 10⁻⁴ eV² and a large one of order 10⁻³ eV². Data therefore reveal that at least two of the neutrino masses are different from zero. At least one of the neutrino masses is above 0.05 eV, and the second lightest is at least 0.008 eV. While neutrino oscillation experiments cannot measure the neutrino masses directly, precise measurements of beta-decay spectra and constraints from the large-scale structure of the universe offer complementary upper limits. The nonzero neutrino masses are constrained to be less than roughly 0.1 eV.
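Those lower bounds follow directly from the splittings: the heaviest state must weigh at least the square root of the large splitting, and the next-heaviest at least the square root of the small one. The representative values used below (2.5 × 10⁻³ eV² and 7.5 × 10⁻⁵ eV²) are assumptions consistent with the orders of magnitude quoted above.

```latex
\begin{align*}
m_{\mathrm{heaviest}} &\;\geq\; \sqrt{\Delta m^2_{\mathrm{large}}}
  \;\approx\; \sqrt{2.5\times 10^{-3}\ \mathrm{eV}^2} \;\approx\; 0.05\ \mathrm{eV},\\
m_{\mathrm{next}} &\;\geq\; \sqrt{\Delta m^2_{\mathrm{small}}}
  \;\approx\; \sqrt{7.5\times 10^{-5}\ \mathrm{eV}^2} \;\approx\; 0.0087\ \mathrm{eV}.
\end{align*}
```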

These masses are tiny when compared to the masses of all the other particles (see “Chasm” figure). The mass of the lightest charged fermion, the electron, is of order 10⁶ eV. The mass of the heaviest fermion, the top quark, is of order 10¹¹ eV, as are the masses of the W, Z and Higgs bosons. These particle masses are all at least seven orders of magnitude heavier than those of the neutrinos. No one knows why neutrino masses are dramatically smaller than those of all other massive particles.

The Standard Model and mass

To understand why the SM predicts neutrino masses to be zero, it is necessary to appreciate that particle masses are complicated in this theory. The reason is as follows. The SM is a quantum field theory. Interactions between the fields are strictly governed by their properties: spin, various “local” charges, which are conserved in interactions, and – for fermions like the neutrinos, charged leptons and quarks – another quantum number called chirality.

In quantum field theories, mass is the interaction between a right-chiral and a different left-chiral field. A naive picture is that the mass-interaction constantly converts left-chiral states into right-chiral ones (and vice versa) and the end result is a particle with a nonzero mass. It turns out, however, that for all known fermions, the left-chiral and right-chiral fermions have different charges. The immediate consequence of this is that you can’t turn one into the other without violating the conservation of some charge, so none of the fermions are allowed to have mass: the SM naively predicts that all fermion masses are zero!

The Higgs field was invented to fix this shortcoming. It is charged in such a way that some right-chiral and left-chiral fermions are allowed to interact with one another plus the Higgs field which, uniquely among all known fields, is thought to have been turned on everywhere since the phase transition that triggered electroweak symmetry breaking very early in the history of the universe. In other words, so long as the vacuum configuration of the Higgs field is not trivial, fermions acquire a mass thanks to these interactions.

This is not only a great idea; it is also at least mostly correct, as spectacularly confirmed by the discovery of the Higgs boson a little over a decade ago. It has many verifiable consequences. One is that the strength with which the Higgs boson couples to different particles is proportional to the particle’s mass – the Higgs prefers to interact with the top quark or the Z or W bosons relative to the electron or the light quarks. Another consequence is that all masses are proportional to the value of the Higgs field in the vacuum (10¹¹ eV) and, in the SM, we naively expect all particle masses to be similar.

Neutrino masses are predicted to be zero because, in the SM, there are no right-chiral neutrino fields and hence none for the left-chiral neutrinos – the ones we know about – to “pair up” with. Neutrino masses therefore require the existence of new fields, and hence new particles, beyond those in the SM.

Wanted: new fields

The list of candidate new fields is long and diverse. For example, the new fields that allow for nonzero neutrino masses could be fermions or bosons; they could be neutral or charged under SM interactions, and they could be related to a new mass scale other than the vacuum value of the SM Higgs field (10¹¹ eV), which could be either much smaller or much larger. Finally, while these new fields might be “easy” to discover with the current and near-future generation of experiments, they might equally turn out to be impossible to probe directly in any particle-physics experiment in the foreseeable future.

Though there are too many possibilities to list, they can be classified into three very broad categories: neutrinos acquire mass by interacting with the same Higgs field that gives mass to the charged fermions; by interacting with a similar Higgs field with different properties; or through a different mechanism entirely.


At first glance, the simplest idea is to postulate the existence of right-chiral neutrino fields and further assume they interact with the Higgs field and the left-chiral neutrinos, just like right-chiral and left-chiral charged leptons and quarks. There is, however, something special about right-chiral neutrino fields: they are completely neutral relative to all local SM charges. Returning to the rules of quantum field theory, completely neutral chiral fermions are allowed to interact “amongst themselves” independently of whether there are other right-chiral or left-chiral fields around. This means the right-chiral neutrino fields should come along with a different mass that is independent of the vacuum value of the Higgs field of 10¹¹ eV.

To prevent this from happening, the right-chiral neutrinos must possess some kind of conserved charge that is shared with the left-chiral neutrinos. If this scenario is realised, there is some new, unknown fundamental conserved charge out there. This hypothetical new charge is called lepton number: electrons, muons, tau leptons and neutrinos are assigned charge plus one, while positrons, antimuons, tau antileptons and antineutrinos have charge minus one. A prediction of this scenario is that the neutrino and the antineutrino are different particles since they have different lepton numbers. In more technical terms, the neutrinos are massive Dirac fermions, like the charged leptons and the quarks. In this scenario, there are new particles associated with the right-chiral neutrino field, and a new conservation law in nature.

Accidental conservation

As of today, there is no experimental evidence that lepton number is not conserved, and readers may question if this really is a new conservation law. In the SM, however, the conservation of lepton number is merely “accidental” – once all other symmetries and constraints are taken into account, the theory happens to possess this symmetry. But lepton number conservation is no longer an accidental symmetry when right-chiral neutrinos are added, and these chargeless and apparently undetectable particles should have completely different properties if it is not imposed.

If lepton number conservation is imposed as a new symmetry of nature, making neutrinos pure Dirac fermions, there appears to be no observable consequence other than nonzero neutrino masses. Given the tiny neutrino masses, the strength of the interaction between the Higgs boson and the neutrinos is predicted to be at least seven orders of magnitude smaller than all other Higgs couplings to fermions. Various ideas have been proposed to explain this remarkable chasm between the strength of the neutrino’s interaction with the Higgs field relative to that of all other fermions. They involve a plurality of theoretical concepts including extra-dimensions of space, mirror copies of our universe and dark sectors.
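The scale of that mismatch is easy to make concrete. The sketch below assumes the standard relation y = √2 m/v between a fermion’s mass and its coupling to the Higgs field (with v ≈ 246 GeV) and an illustrative Dirac neutrino mass of 0.05 eV; the numbers are approximate.

```python
import math

v = 246.0e9   # eV, vacuum value of the Higgs field (~246 GeV)

def higgs_coupling(mass_eV):
    """Coupling to the Higgs field implied by a fermion mass, y = sqrt(2) m / v."""
    return math.sqrt(2.0) * mass_eV / v

for name, m in [("top quark", 173.0e9),
                ("electron", 0.511e6),
                ("Dirac neutrino (0.05 eV, assumed)", 0.05)]:
    print(f"{name:35s} y ~ {higgs_coupling(m):.1e}")
# top ~ 1, electron ~ 3e-6, neutrino ~ 3e-13:
# the neutrino coupling sits roughly seven orders of magnitude below the electron's.
```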

Nonzero neutrino masses

A second possibility is that there are more Higgs fields in nature and that the neutrinos acquire a mass by interacting with a Higgs field that is different from the one that gives a mass to the charged fermions. Since the neutrino mass is proportional to the vacuum value of a different Higgs field, the fact that the neutrino masses are so small is easy to tolerate: they are simply proportional to a different mass scale that could be much smaller than 10¹¹ eV. Here, there are no right-chiral neutrino fields and the neutrino masses are interactions of the left-chiral neutrino fields amongst themselves. This is possible because, while the neutrinos possess weak-force charge, they have no electric charge. In the presence of the nontrivial vacuum of the Higgs fields, the weak-force charge is effectively not conserved and these interactions may be allowed. The fact that the Higgs particle discovered at the LHC – associated with the SM Higgs field – does not allow for this possibility is a consequence of its charges. Different Higgs fields can have different weak-force charges and end up doing different things. In this scenario, the neutrino and the antineutrino are, in fact, the same particle. In more technical terms: the neutrinos are massive Majorana fermions.

Neutrino masses require the existence of new fields, and hence new particles, beyond those in the Standard Model

One way to think about this is as follows: the mass interaction transforms left-chiral objects into right-chiral objects. For electrons, for example, the mass converts left-chiral electrons into right-chiral electrons. It turns out that the antiparticle of a left-chiral object is right-chiral and vice versa, and it is tempting to ask whether a mass interaction could convert a left-chiral electron into a right-chiral positron. The answer is no: electrons and positrons are different objects and converting one into the other would violate the conservation of electric charge. But this is no barrier for the neutrino, and we can contemplate the possibility of converting a left-chiral neutrino into its right-chiral antiparticle without violating any known law of physics. If this hypothesis is correct, the hypothetical lepton-number charge, discussed earlier, cannot be conserved. This hypothesis is experimentally neither confirmed nor contradicted but could soon be confirmed with the observation of neutrinoless double-beta decays – nuclear decays which can only occur if lepton-number symmetry is violated. There is an ongoing worldwide campaign to search for the neutrinoless double-beta decay of various nuclei.
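The connection between this search and the Majorana hypothesis is usually expressed through an effective mass that weights each mass eigenstate by the square of its mixing with the electron flavour; the rate vanishes if neutrinos are Dirac particles. In the standard treatment it takes the form below, where G is a phase-space factor and M_nucl a nuclear matrix element.

```latex
% Neutrinoless double-beta decay rate and the effective Majorana mass
\begin{equation*}
\Gamma_{0\nu\beta\beta} \;\propto\; G_{0\nu}\,
\bigl|\mathcal{M}_{\mathrm{nucl}}\bigr|^{2}\,
\bigl|m_{\beta\beta}\bigr|^{2},
\qquad
m_{\beta\beta} \;=\; \sum_{i=1}^{3} U_{ei}^{\,2}\, m_i .
\end{equation*}
```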

A new source of mass

In the third category, there is a source of mass different from the vacuum value of the Higgs field, and the neutrino masses are an amalgam of the vacuum value of the Higgs field and this new source of mass. A very low new mass scale might be discovered in oscillation experiments, while consequences of heavier ones may be detected in other types of particle-physics experiments, including measurements of beta and meson decays, charged-lepton properties, or the hunt for new particles at high-energy colliders. Searches for neutrinoless double-beta decay can reveal different sources for lepton-number violation, while ultraheavy particles can leave indelible footprints in the structure of the universe through cosmic collisions. The new physics responsible for nonzero neutrino masses might also be related to grand-unified theories or the origin of the matter–antimatter asymmetry of the universe, through a process referred to as leptogenesis. The range of possibilities spans 22 orders of magnitude (see “eV to ZeV” figure).
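One widely studied example in this third category – not singled out in the text – is the type-I seesaw mechanism, in which an ordinary Higgs-induced mass term is suppressed by a much larger new mass scale. The sketch below uses an assumed order-one coupling and an assumed heavy scale of 10¹⁵ GeV purely for illustration.

```python
v       = 246.0e9    # eV, vacuum value of the Higgs field
y       = 1.0        # assumed order-one coupling to the Higgs field
M_heavy = 1.0e24     # eV, assumed heavy mass scale (10^15 GeV)

m_dirac = y * v / 2 ** 0.5        # ordinary Higgs-induced mass term (~174 GeV)
m_light = m_dirac ** 2 / M_heavy  # seesaw-suppressed light neutrino mass

print(f"Light neutrino mass: {m_light:.2f} eV")   # ~0.03 eV for these choices
```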

Challenging scenarios

Since the origin of the neutrino masses here is qualitatively different from that of all other particles, the values of the neutrino masses are expected to be qualitatively different. Experimentally, we know that neutrino masses are much smaller than all charged-fermion masses, so many physicists believe that the tiny neutrino masses are strong indirect evidence for a source of mass beyond the vacuum value of the Higgs field. In most of these scenarios, the neutrinos are also massive Majorana fermions. The challenge here is that if a new mass scale exists in fundamental physics, we know close to nothing about it. It could be within direct reach of particle-physics experiments, or it could be astronomically high, perhaps as large as 10¹² times the vacuum value of the SM’s Higgs field.

Searching for neutrinoless double-beta decay is the most promising avenue to reveal whether neutrinos are Majorana or Dirac fermions

How do we hope to learn more? We need more experimental input. There are many outstanding questions that can only be answered with oscillation experiments. These could provide evidence for new neutrino-like particles or new neutrino interactions and properties. Meanwhile, searching for neutrinoless double-beta decay is the most promising avenue to experimentally reveal whether neutrinos are Majorana or Dirac fermions. Other activities include high-energy collider searches for new Higgs bosons that like to talk to neutrinos and new heavy neutrino-like particles that could be related to the mechanism of neutrino mass generation. Charged-lepton probes, including measurements of the anomalous magnetic moment of muons and searches for lepton-flavour violation, may provide invaluable clues, while surveys of the cosmic microwave background and the distribution of galaxies could also reveal footprints of the neutrino masses in the structure of the universe.

We still know very little about the new physics uncovered by neutrino oscillations. Only a diverse experimental programme will reveal the nature of the new physics behind the neutrino mass puzzle.

The post The neutrino mass puzzle appeared first on CERN Courier.

]]>
Feature André de Gouvêa explains why neutrino masses imply the existence of new fundamental fields. https://cerncourier.com/wp-content/uploads/2024/05/CCMayJun24_NEUTRINOS_frontis.jpg
A safe approach to quantum gravity https://cerncourier.com/a/a-safe-approach-to-quantum-gravity/ Fri, 03 May 2024 13:20:21 +0000 https://preview-courier.web.cern.ch/?p=110580 The “asymptotic safety” approach towards quantum gravity opens new avenues at the intersection between particle physics and gravity.

The post A safe approach to quantum gravity appeared first on CERN Courier.

]]>
The LHC experiments at CERN have been extremely successful in verifying the Standard Model (SM) of particle physics to very high precision. From the theoretical perspective, however, this model has two conceptual shortcomings. One is that the SM appears to be an “effective field theory” that is valid up to a certain energy scale only; the other is that gravity is not part of the model. This raises the question of what a theory comprising particle physics and gravity that is valid for all energy scales might look like. This directly leads to the domain of quantum gravity.

The typical scale associated with quantum-gravity effects is the Planck scale: 10¹⁵ TeV, or 10⁻³⁵ m. This exceeds the scales accessible at the LHC by approximately 14 orders of magnitude, forcing us to ask: what can theorists possibly gain from investigating physics at energies beyond the Planck scale? The answer is simple: the SM includes many free parameters that must be fixed by experimental data. Since the number of these parameters proliferates when higher order interactions are included, one would like to constrain this high-dimensional parameter space.
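As an aside, the Planck-scale figures quoted above follow directly from the fundamental constants. The quick check below simply evaluates √(ħc⁵/G) and √(ħG/c³); the reduced Planck mass, smaller by a factor of √(8π) ≈ 5, lands at a few × 10¹⁵ TeV, consistent with the order of magnitude quoted above.

```python
import math

# SI values of the fundamental constants
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
eV   = 1.602176634e-19   # J

E_planck_TeV = math.sqrt(hbar * c**5 / G) / eV / 1e12
l_planck_m   = math.sqrt(hbar * G / c**3)

print(f"Planck energy:         {E_planck_TeV:.2e} TeV")                         # ~1.2e16 TeV
print(f"Reduced Planck energy: {E_planck_TeV / math.sqrt(8*math.pi):.2e} TeV")  # ~2.4e15 TeV
print(f"Planck length:         {l_planck_m:.2e} m")                             # ~1.6e-35 m
```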

At low energies, this can be done by implementing bounds derived from demanding unitarity and causality of physical processes. Ideally, one would like to derive similar constraints from consistency at trans-Planckian scales where quantum-gravity effects may play a major role. At first sight, this may seem counterintuitive. It is certainly true that gravity treated as an effective field theory itself does not yield any effect measurable at LHC scales due to its weakness; the additional constraints then arise from requiring that the effective field theories underlying the SM and gravity can be combined and extended into a framework that is valid at all energy scales. Presumably, this will not work for all effective field theories. Taking a “bottom-up” approach (identifying the set of theories for which this extension is possible) may constrain the set of free parameters. Conversely, to be phenomenologically viable, any theory describing trans-Planckian physics must be compatible with existing knowledge at the scales probed by collider experiments. This “top-down” approach may then constrain the potential physics scenarios happening at the quantum-gravity scale – a trajectory that has been followed, for example, by the swampland programme initiated from string theory at all scales.

From the theoretical viewpoint, the SM is formulated in the language of relativistic quantum field theories. On this basis, it is possible that the top-down route becomes more realistic the closer the formulation of trans-Planckian physics sticks to this language. For example, string theory is a promising candidate for a consistent description of trans-Planckian physics. However, connecting the theory to the SM has proven to be very difficult, mainly due to the strong symmetry requirements underlying the formulation. In this regard, the “asymptotic safety” approach towards quantum gravity may offer a more tractable option for implementing the top-down idea since it uses the language of relativistic quantum field theory.

Asymptotic safety

What is the asymptotic-safety scenario, and how does it link quantum gravity to particle physics? Starting from the gravity side, we have a successful classical theory: Einstein’s general relativity. If one tries to upgrade this to a quantum theory, things go wrong very quickly. In the early 1970s, it was shown by Gerard ’t Hooft and Martinus Veltman that the perturbative quantisation techniques that have proved highly successful for particle-physics theories fail for general relativity. In short, this procedure introduces an infinite number of parameters (one for each allowed local interaction) and thus requires an infinite number of independent measurements to determine their values. Although this path leads us to a quantum theory of gravity valid at all scales, the construction lacks predictive power. Still, it results in a perfectly predictive effective field theory describing gravity up to the Planck scale.
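The root of the problem can be stated in one line of dimensional analysis (a schematic argument, with all order-one factors suppressed): Newton’s constant carries a negative mass dimension, so the effective expansion parameter grows with energy,

$$ S_{\rm EH}=\frac{1}{16\pi G}\int \mathrm{d}^4x\,\sqrt{-g}\,R,\qquad [G]=({\rm mass})^{-2},\qquad \mathcal{A}(E)\sim\sum_n c_n\left(\frac{E^2}{M_{\rm Pl}^2}\right)^{n}, $$

and each new order in perturbation theory generates divergences that require independent counterterms (R², R³, …) – the infinite list of parameters referred to above.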

QCD running coupling

This may seem discouraging when attempting to formulate a quantum field theory of gravity without introducing new symmetry principles, for example supersymmetry, to remove additional free parameters. A loophole is provided by Kenneth Wilson’s modern understanding of renormalisation. Here, the basic idea is to organise quantum fluctuations according to their momentum and integrate out these fluctuations, starting from the most energetic ones and proceeding towards lower energy modes. This creates what is called the Wilsonian renormalisation-group “flow” of a theory. Healthy high-energy completions are provided by renormalisation-group fixed points. At these special points the theory becomes scale-invariant, which ensures the absence of divergences. The fixed point also provides predictive power via the condition that the renormalisation-group flow hits the fixed point at high energies (see “Safety belt” figure). For asymptotically-free theories, where all interactions switch off at high energies, the underlying renormalisation-group fixed point is the free theory. This can be seen in the example of quantum chromodynamics (QCD): the gauge coupling diminishes when going to higher and higher energies, approaching a non-interacting fixed point at arbitrarily high energies. One can also envision high-energy completions based on a renormalisation-group fixed point with non-vanishing interactions, which is commonly referred to as asymptotic safety.
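To make the contrast concrete, recall the textbook one-loop running of the strong coupling (a standard result, quoted here without higher-order terms):

$$ \mu^2\frac{\mathrm{d}\alpha_s}{\mathrm{d}\mu^2}=-b_0\,\alpha_s^2+\dots,\qquad b_0=\frac{33-2n_f}{12\pi}\quad\Rightarrow\quad \alpha_s(\mu^2)=\frac{1}{b_0\ln(\mu^2/\Lambda_{\rm QCD}^2)}\xrightarrow{\;\mu\to\infty\;}0, $$

so QCD flows into the free (Gaussian) fixed point in the ultraviolet. An asymptotically safe theory would instead flow into a point g* ≠ 0 where the beta function vanishes, β(g*) = 0, with the couplings remaining finite and interacting at arbitrarily high energies.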

Forces of nature

In the context of gravity, the asymptotic-safety scenario was first proposed by Steven Weinberg in the late 1970s. Starting with the seminal work by Martin Reuter (University of Mainz) in 1998, the existence of a renormalisation-group fixed point suitable for rendering gravity asymptotically safe – the so-called Reuter fixed point – is supported by a wealth of first-principle computations. While similar constructions are well known in condensed-matter physics, the Reuter fixed point is distinguished by the fact that it may provide a unified description of all forces of nature. As such, it may have profound consequences for our understanding of the physics inside a black hole, give predictions for parameters of the SM such as the Higgs-boson mass, or disfavour certain types of physics beyond the SM.

The asymptotic-safety approach towards quantum gravity may offer a more tractable option for implementing the top-down idea

The predictive power of the fixed point arises as follows. Only a finite set of parameters exists that describes consistent quantum field theories emanating from the fixed point. One then starts to systematically integrate out quantum fluctuations (from high to low energy), resulting in a family of effective descriptions in which the quantum fluctuations are taken into account. In practice, this process is implemented by the running of the theory’s couplings, generating what are known as renormalisation-group trajectories. To be phenomenologically viable, the endpoint of the renormalisation-group trajectory must be compatible with observations. In the end, only one (or potentially none) of the trajectories emanating from the fixed point will provide a description of nature (see “Going with the flow” image). According to the asymptotic-safety principle, this trajectory must be identified by fixing the free parameters left by the fixed point based on experiments. Once this process is completed, the construction fixes all couplings in the effective field theory in terms of a few free parameters. Since this entails an infinite number of relations that can be probed experimentally, the construction is falsifiable.
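The logic can be illustrated with a deliberately simple toy model – not a gravity calculation, just a caricature with a single dimensionless coupling and an invented beta function β(g) = 2g − g², which has a free fixed point at g = 0 and an interacting one at g* = 2:

```python
# Toy renormalisation-group flow: dg/dt = beta(g), with t = ln(mu).
# beta(g) = 2g - g^2 has fixed points at g = 0 (free) and g* = 2
# (interacting). Flowing towards the ultraviolet, widely different
# low-energy couplings are all drawn to g*, the toy analogue of an
# asymptotically safe high-energy completion.

def beta(g: float) -> float:
    """Invented beta function with fixed points at g = 0 and g* = 2."""
    return 2.0 * g - g**2

def run_to_uv(g_ir: float, t_max: float = 10.0, steps: int = 4000) -> float:
    """Integrate dg/dt = beta(g) from t = 0 (IR) up to t = t_max (UV)."""
    dt = t_max / steps
    g = g_ir
    for _ in range(steps):
        g += dt * beta(g)  # explicit Euler step; fine for this smooth toy flow
    return g

if __name__ == "__main__":
    for g_ir in (0.05, 0.5, 1.0, 3.0):
        print(f"IR coupling {g_ir:4.2f}  ->  UV coupling {run_to_uv(g_ir):.4f}")
```

Read in the opposite direction, only trajectories that terminate on the fixed point in the ultraviolet are admissible, and they are labelled by a small number of free parameters (here a single one, the IR value of g) – the origin of the predictive power described above.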

Particle physics link

The link to particle physics follows from the observation that the asymptotic-safety construction remains operative once gravity is supplemented by the matter fields of the SM. Non-abelian gauge groups such as those underlying the electroweak and strong forces, as well as Yukawa interactions and fermion masses, are readily accommodated. A wide range of proof-of-concept studies shows that this is feasible, gradually bringing the ultimate computation involving the full SM into reach. The fact that gravity remains interacting at the smallest length scales also implies that the construction will feature non-minimal couplings between matter and the gravitational field as well as matter self-interactions of a very specific type. The asymptotic-safety mechanism may then provide the foundation for a realistic quantum field theory unifying all fundamental forces of nature.

Can particle physics tell us whether this specific idea about quantum gravity is on the right track? After all, there is still a vast hierarchy between the energy scales probed by collider experiments and the Planck scale. Surprisingly, the answer is positive! Conceptually, the interacting renormalisation-group fixed point for the gravity–matter theory again gives a set of viable quantum field theories in terms of a fixed number of free parameters. First estimates conducted by Jan Pawlowski and coworkers at Heidelberg University suggest that this number is comparable to the number of free parameters in the SM.

3D space of couplings

In practice, one may then be tempted to make the following connection. Currently, observables probed by collider physics are derived from the SM effective field theory. Hence, they depend on the couplings of the effective field theory. The asymptotic-safety mechanism expresses these couplings in terms of the free parameters associated with the interacting fixed point. Once the SM effective field theory is extended to include operators of sufficiently high mass dimension, the asymptotic-safety dictum predicts highly non-trivial relations between the couplings parameterising the effective field theory. These relations can be confronted with observations that test whether the observables measured experimentally are subject to these constraints. This can either be provided by matching to existing particle-physics data obtained at the LHC, or by astrophysical observations probing the strong-gravity regime. The theoretical programme of deriving such relations is currently under development. A feasible benchmark, showing that the underlying physics postulates are on the right track, would then be to “post-dict” the experimental results already available. Showing that a theory formulated at the Planck scale is compatible with the SM effective field theory would be a highly non-trivial achievement in itself.

Showing that a theory formulated at the Planck scale is compatible with the SM effective field theory would be a highly non-trivial achievement in itself

This line of testing quantum gravity experimentally may be seen as orthogonal to more gravity-focused tests that attempt to decipher the quantum nature of gravity. Recent ideas in these directions have centred on developing tabletop experiments that probe the quantum superposition of macroscopic objects at sub-millimetre scales, which could ultimately be developed into a quantum-Cavendish experiment that probes the gravitational field of source masses in spatial quantum superposition states. The emission of a graviton could then lead to decoherence effects which give hints that gravity indeed has a force carrier similar to the other fundamental forces. Of course, one could also hope that experiments probing gravity in the strong-gravity regime find deviations from general relativity. So far, this has not been the case. This is why particle physics may be a prominent and fruitful arena in which to also test quantum-gravity theories such as asymptotic safety in the future.

For decades, quantum-gravity research has been disconnected from directly relevant experimental data. As a result, the field has developed a vast variety of approaches that aim to understand the laws of physics at the Planck scale. These include canonical quantisation, string theory, the AdS/CFT correspondence, loop quantum gravity and spin foams, causal dynamical triangulations, causal set theory, group field theory and asymptotic safety. The latter has recently brought a new perspective on the field: supplementing the quantum-gravity sector of the theory by the matter degrees of freedom of the SM opens an exciting window through which to confront the construction with existing particle-physics data. As a result, this leads to new avenues of research at the intersection between particle physics and gravity, marking the onset of a new era in quantum-gravity research in which the field travels from a purely theoretical to an observationally guided endeavour.

The post A safe approach to quantum gravity appeared first on CERN Courier.

]]>
Feature The “asymptotic safety” approach towards quantum gravity opens new avenues at the intersection between particle physics and gravity. https://cerncourier.com/wp-content/uploads/2024/05/CCMayJun24_ASYMP_space.jpg
Tango for two: LHCb and theory https://cerncourier.com/a/tango-for-two-lhcb-and-theory/ Sat, 13 Apr 2024 12:30:22 +0000 https://preview-courier.web.cern.ch/?p=110480 The 13th Implications of LHCb measurements and future prospects workshop showcased mutual enthusiasm between the experimental and theoretical communities

The post Tango for two: LHCb and theory appeared first on CERN Courier.

]]>
The 13th annual “Implications of LHCb measurements and future prospects” workshop, held at CERN on 25–27 October 2023, drew substantial interest with 231 participants. This collaborative event between LHCb and the theoretical community showcased the mutual enthusiasm for LHCb’s physics advances. The workshop featured five streams highlighting the latest experimental and theoretical developments in mixing and CP violation, heavy ions and fixed-target results, flavour-changing charged currents, QCD spectroscopy and exotics, and flavour-changing neutral currents.

The opening talk by Monica Pepe Altarelli underscored LHCb’s diverse physics programme, solidifying its role as a highly versatile forward detector. While celebrating successes, her talk candidly addressed setbacks, notably the new results in tests of lepton-flavour universality. LHCb detector and computing upgrades for Run 3 include a fully software-based trigger using graphics processing units. The collaboration is also working towards an Upgrade II programme for Long Shutdown 4 (2033–2034) that would position LHCb as a potentially unique global flavour facility.

On mixing and CP violation, the October workshop unveiled intriguing insights in both the beauty and charm sectors. In the beauty sector, notable highlights encompass measurements of the mixing parameter ΔΓs and of CP-violating phases such as ϕs,d, ϕssss and γ. CP asymmetries were further scrutinised in B → DD decays, accounting for SU(3) breaking and re-scattering effects. In the charm sector, the estimated CP asymmetries considering final-state interactions were found to be small compared to the experimental values related to D0 → π−π+ and D0 → K−K+ decays. Novel measurements of CP violation in three-body charm hadron decays were also presented.

Unique capabilities

On the theoretical front, discussions delved into the current status of bottom-baryon lifetimes. Recent lattice predictions on the εK parameter were also showcased, offering refined constraints on the unitarity triangle. The LHCb experiment’s unique capabilities were discussed in the heavy ions and fixed-target session. Operating in fixed-target mode, LHCb collected data pertaining to proton–ion and lead–ion interactions during LHC Run 2 using the SMOG system. Key highlights included measurements impacting theoretical models of charm hadronisation, global analyses of nuclear parton density functions, and the identification of helium nuclei and deuterons. The first Run 3 data with the SMOG2 upgrade showed promising results in proton–argon and proton–hydrogen collisions, opening a path to measurements with implications for heavy-ion physics and astrophysics.

The session on flavour-changing charged currents unveiled a recent measurement concerning the longitudinal polarisation of D* mesons in B0 → D*τντ decays, aligning with Standard Model (SM) expectations. Discussions delved into lepton-flavour-universality tests that showed a 3.3σ tension with predictions in the combined R(D(*)) measurement. Noteworthy were new lattice-QCD predictions for charged-current decays, especially R(D(*)), showcasing disparities in the SM prediction across different lattice groups. Updates on the CKM matrix elements |Vub| and |Vcb| led to a reduced tension between inclusive and exclusive determinations. The session also discussed the impact of high-energy constraints on Wilson coefficients for charged-current decays and Bayesian inference of form-factor parameters, regulated by unitarity and analyticity.

The QCD spectroscopy and exotics session also featured important findings, including the discovery of novel baryon states, notably Ξb(6087)0 and Ξb(6095)0. Pentaquark exploration involved diverse charm–hadron combinations, alongside precision measurements of the Ωc0 mass and first observations of b-hadron decays with potential exotic-state contributions. Charmonia-associated production provided fresh insights for testing QCD predictions, and an approach based on effective field theory (EFT) interpreting pentaquarks as hadronic molecules was presented. A new model-independent Born–Oppenheimer EFT framework for the interpretation of doubly heavy tetraquarks, utilising lattice-QCD predictions, was introduced. The scrutiny of charm–tetraquark decays and the interpretation of newly discovered hadron states at the LHC were also discussed.

During the flavour-changing neutral-current session a new analysis of B0 → K*0μ+μ− decays was presented, showing consistency with SM expectations. Stringent limits on branching fractions of rare charm decays and precise differential branching-fraction measurements of b-baryon decays were also highlighted. Challenges in SM predictions for b → sℓℓ and rare charm decays were discussed, underscoring the imperative for a deeper comprehension of underlying hadronic processes, particularly leveraging LHCb data. Global analyses of b → dℓℓ and b → sℓℓ decays were presented, alongside future prospects for these decays in Run 3 and beyond. The session also explored strategies to enhance sensitivity to new physics in B± → π±μ+μ− decays.

The keynote talk, delivered by Svjetlana Fajfer, offered a comprehensive summary and highlighted existing anomalies that demand further consideration. Tackling these challenges necessitates precise measurements at both low and high energies, with the collaborative efforts of LHCb, Belle II, CMS and ATLAS. Additionally, advancements in lattice QCD and other novel theoretical approaches are needed for precise theoretical predictions in tandem with experimental efforts.

The post Tango for two: LHCb and theory appeared first on CERN Courier.

]]>
Meeting report The 13th Implications of LHCb measurements and future prospects workshop showcased mutual enthusiasm between the experimental and theoretical communities https://cerncourier.com/wp-content/uploads/2024/04/CCMarApr24_FN_Altarelli.jpg
Getting to the bottom of muon g-2 https://cerncourier.com/a/getting-to-the-bottom-of-muon-g-2/ Fri, 03 Nov 2023 12:10:42 +0000 https://preview-courier.web.cern.ch/?p=109639 The sixth plenary workshop of the Muon g-2 Theory Initiative covered the status and strategies for future improvements of the Standard Model prediction for the anomalous magnetic moment of the muon.

The post Getting to the bottom of muon g-2 appeared first on CERN Courier.

]]>
Muon g-2 Theory Initiative

About 90 physicists attended the sixth plenary workshop of the Muon g-2 Theory Initiative, held in Bern from 4 to 8 September, to discuss the status and strategies for future improvements of the Standard Model (SM) prediction for the anomalous magnetic moment of the muon. The meeting was particularly timely given the recent announcement of the results from runs two and three of the Fermilab g-2 experiment (Muon g-2 update sets up showdown with theory), which reduced the uncertainty of the world average to 0.19 ppm, in dire need of a SM prediction at commensurate precision. The main topics of the workshop were the two hadronic contributions to g-2, hadronic vacuum polarisation (HVP) and hadronic light-by-light scattering (HLbL), evaluated either with a lattice–QCD or data-driven approach.

Hadronic vacuum polarisation

The first one-and-a-half days were devoted to the evaluation of HVP – the largest QCD contribution to g-2, whereby a virtual photon briefly transforms into a hadronic “blob” before being reabsorbed – from e+e− data. The session started with a talk from the CMD-3 collaboration at the VEPP-2000 collider, whose recent measurement of the e+e− → π+π− cross section generated shock waves earlier this year by disagreeing (at the level of 2.5–5σ) with all previous measurements used in the Theory Initiative’s 2020 white paper. The programme also featured a comparison with results from the earlier CMD-2 experiment, and a report from seminars and panel discussions organised by the Theory Initiative in March and July on the details of the CMD-3 result. While concerns remain regarding the estimate of certain systematic effects, no major shortcomings could be identified.
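The reason the e+e− → hadrons cross section matters so much is the dispersion relation used in the data-driven approach, shown here schematically (the normalisation of the QED kernel K(s) varies between references):

$$ a_\mu^{\rm HVP,LO}=\frac{\alpha^2}{3\pi^2}\int_{m_\pi^2}^{\infty}\frac{\mathrm{d}s}{s}\,K(s)\,R(s),\qquad R(s)=\frac{\sigma(e^+e^-\to\mathrm{hadrons})}{\sigma(e^+e^-\to\mu^+\mu^-)}. $$

Because the weight falls steeply with energy, the π+π− channel below 1 GeV dominates the integral, which is why a shift in this single cross section propagates almost directly into the g-2 prediction.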

Further presentations from BaBar, Belle II, BESIII, KLOE and SND detailed their plans for new measurements of the 2π channel, which in the case of BaBar and KLOE involve large data samples never analysed before for this measurement. Emphasis was put on the role of radiative corrections, including a recent paper by BaBar on additional radiation in initial-state-radiation events and, in general, the development of higher-order Monte Carlo generators. Intensive discussions reflected a broad programme to clarify the extent to which tensions among the experiments can be due to higher-order radiative effects and structure-dependent corrections. Finally, updated combined fits were presented for the 2π and 3π channels, for the former assessing the level of discrepancy among datasets, and for the latter showing improved determinations of isospin-breaking contributions.

CMD-3 generated shock waves by disagreeing with all previous measurements at the level of 2.5-5σ

Six lattice collaborations (BMW, ETMC, Fermilab/HPQCD/MILC, Mainz, RBC/UKQCD, RC*) presented updates on the status of their respective HVP programmes. For the intermediate-window quantity (the contribution of the region of Euclidean time between about 0.4–1.0 fm, making up about one third of the total), a consensus has emerged that differs from e+e−-based evaluations (prior to CMD-3) by about 4σ, while the short-distance window comes out in agreement. Plans for improved evaluations of the long-distance window and isospin-breaking corrections were presented, leading to the expectation of new, full computations for the total HVP contribution in addition to the BMW result in 2024. Several talks addressed detailed comparisons between lattice-QCD and data-driven evaluations, which will allow physicists to better isolate the origin of the differences once more results from each method become available. A presentation on possible beyond-SM effects in the context of the HVP contribution showed that it seems quite unlikely that new physics can be invoked to solve the puzzles.
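For readers unfamiliar with the jargon, the “windows” are smooth cuts in Euclidean time applied to the lattice vector–vector correlator G(t); in the commonly used definition (quoted here up to a convention-dependent normalisation of the kernel), the intermediate window is

$$ a_\mu^{\rm win}\propto\int_0^\infty \mathrm{d}t\,\tilde K(t)\,\big[\Theta(t,t_0,\Delta)-\Theta(t,t_1,\Delta)\big]\,G(t),\qquad \Theta(t,t',\Delta)=\tfrac12\Big[1+\tanh\frac{t-t'}{\Delta}\Big], $$

with t0 = 0.4 fm, t1 = 1.0 fm and Δ = 0.15 fm. This region is where lattice systematics (discretisation effects at short distances, statistical noise and finite-volume effects at long distances) are smallest, which is why it became the benchmark for cross-checks between collaborations.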

Light-by-light scattering

The fourth day of the workshop was devoted to the HLbL contribution, whereby the interaction of the muon with the magnetic field is mediated by a hadronic blob connected to three virtual photons. In contrast to HVP, here the data-driven and lattice-QCD evaluations agree. However, reducing the uncertainty by a further factor of two is required in view of the final precision expected from the Fermilab experiment. A number of talks discussed the various contributions that feed into improved phenomenological evaluations, including sub-leading contributions such as axial-vector intermediate states as well as short-distance constraints and their implementation. Updates on HLbL from lattice QCD were presented by the Mainz and RBC/UKQCD groups, as were results on the pseudoscalar transition form factor by ETMC and BMW. The latter in particular allow cross checks of the numerically dominant pseudoscalar-pole contributions between lattice QCD and data-driven evaluations.

It is critical that the Theory Initiative work continues beyond the lifespan of the Fermilab experiment

On the final day, the status of alternative methods to determine the HVP contribution were discussed, first from the MUonE experiment at CERN, then from τ data (by Belle, CLEOc, ALEPH and other LEP experiments). First MUonE results could become available at few-percent precision with data taken in 2025, while a competitive measurement would proceed after Long Shutdown 3. For the τ data, new input is expected from the Belle II experiment, but the critical concern continues to be control over isospin-breaking corrections. Progress in this direction from lattice QCD was presented by the RBC/UKQCD collaboration, together with a roadmap showing how, potentially in combination with data-driven methods, τ data could lead to a robust, complementary determination of the HVP contribution.

The workshop concluded with a discussion on how to converge on a recommendation for the SM prediction in time for the final Fermilab result, expected in 2025, including new information expected from lattice QCD, the BaBar 2π analysis and radiative corrections. A final decision for the procedure for an update of the 2020 white paper is planned to be taken at the next plenary meeting in Japan in September 2024. In view of the long-term developments discussed at the workshop – not least the J-PARC Muon g-2/EDM experiment, due to start taking data in 2028 – it is critical that the work by the Theory Initiative continues beyond the lifespan of the Fermilab experiment, to maximise the amount of information on physics beyond the SM that can be inferred from precision measurements of the anomalous magnetic moment of the muon.

The post Getting to the bottom of muon g-2 appeared first on CERN Courier.

]]>
Meeting report The sixth plenary workshop of the Muon g-2 Theory Initiative covered the status and strategies for future improvements of the Standard Model prediction for the anomalous magnetic moment of the muon. https://cerncourier.com/wp-content/uploads/2023/11/CCNovDec23_FN_muon_feature.jpg
Design principles of theoretical physics https://cerncourier.com/a/design-principles-of-theoretical-physics/ Fri, 21 Apr 2023 12:08:17 +0000 https://preview-courier.web.cern.ch/?p=108304 A CERN event explored new angles of attack on the biggest naturalness questions in fundamental physics, from the cosmological constant to the Higgs mass.

The post Design principles of theoretical physics appeared first on CERN Courier.

]]>
“Now I know what the atom looks like!” Ernest Rutherford’s simple statement belies the scientific power of reductionism. He had recently discovered that atoms have substructure, notably that they comprise a dense positively charged nucleus surrounded by a cloud of negatively charged electrons. Zooming forward in time, that nucleus ultimately gave way further when protons and neutrons were revealed at its core. A few stubborn decades later they too gave way with our current understanding being that they are comprised of quarks and gluons. At each step a new layer of nature is unveiled, sometimes more, sometimes less numerous in “building blocks” than the one prior, but in every case delivering explanations, even derivations, for the properties (in practice, parameters) of the previous layer. This strategy, broadly defined as “build microscopes, find answers” has been tremendously successful, arguably for millennia.

Natural patterns

While investigating these successively explanatory layers of nature, broad patterns emerge, one of which is known colloquially as “naturalness”. This pattern asserts that in reversing the direction and going from a microscopic theory, “the UV completion”, to its larger-scale shell, “the IR”, the values of parameters measured in the latter are, essentially, “typical”. Typical, in the sense that they reflect the scales, magnitudes and, perhaps most importantly, the symmetries of the underlying UV completion. As Murray Gell-Mann once said: “everything not forbidden is compulsory”.

So, if some symmetry is broken by a large amount by some interaction in the UV theory, the same symmetry, in whatever guise it may have adopted, will also be broken by a large amount in the IR theory. The only exception to this is accidental fine-tuning, where large UV-breakings can in principle conspire and give contributions to IR-breakings that, in practical terms, accidentally cancel to a high degree, giving a much smaller parameter than expected in the IR theory. This is colloquially known as “unnaturalness”.

There are good examples of both instances. There is no symmetry in QCD that could keep a proton light; unsurprisingly it has mass of the same order as the dominant mass scale in the theory, the QCD scale, mp ~ ΛQCD. But there is a symmetry in QCD that keeps the pion light. The only parameters in the UV theory that break this symmetry are the light quark masses. Thus, the pion mass-squared is expected to be around m²π ~ mqΛQCD. Turns out, it is.
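The scaling quoted above is just the Gell-Mann–Oakes–Renner relation written loosely (signs and normalisations depend on conventions):

$$ m_\pi^2 f_\pi^2\simeq(m_u+m_d)\,|\langle\bar q q\rangle|,\qquad |\langle\bar q q\rangle|\sim\Lambda_{\rm QCD}^3,\;\;f_\pi\sim\Lambda_{\rm QCD}\;\;\Rightarrow\;\;m_\pi^2\sim m_q\,\Lambda_{\rm QCD}. $$

The pion is light because it is the pseudo-Goldstone boson of spontaneously broken chiral symmetry, and its mass-squared is proportional to the only parameters that break that symmetry explicitly: the light quark masses.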

There are also examples of unnatural parameters. If you measure enough different physical observables, observations that are unlikely on their own become possible in a large ensemble of measurements – a sort of theoretical “look elsewhere effect”. For example, consider the fact that the Moon almost perfectly obscures the Sun during a total solar eclipse. There is no symmetry which requires that the angular size of the Moon should almost match that of the Sun to an Earth-based observer. Yet, given many planets and many moons, this will of course happen for some planetary systems.
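A two-line calculation makes the accident quantitative (the figures below are standard mean orbital values; both angular sizes vary by a few per cent over the course of the orbits):

```python
# Quick numerical check of the eclipse "coincidence" mentioned above.
import math

SUN_DIAMETER_KM  = 1.3914e6   # mean solar diameter
SUN_DISTANCE_KM  = 1.496e8    # mean Earth-Sun distance (1 au)
MOON_DIAMETER_KM = 3.4748e3   # mean lunar diameter
MOON_DISTANCE_KM = 3.844e5    # mean Earth-Moon distance

def angular_size_deg(diameter_km: float, distance_km: float) -> float:
    """Full angular size in degrees as seen from Earth."""
    return math.degrees(2.0 * math.atan(diameter_km / (2.0 * distance_km)))

sun = angular_size_deg(SUN_DIAMETER_KM, SUN_DISTANCE_KM)     # ~0.533 deg
moon = angular_size_deg(MOON_DIAMETER_KM, MOON_DISTANCE_KM)  # ~0.518 deg
print(f"Sun:  {sun:.3f} deg")
print(f"Moon: {moon:.3f} deg")
print(f"ratio Moon/Sun: {moon/sun:.3f}")                     # ~0.97
```

The two angular diameters agree to about 3% – close enough for spectacular eclipses, with no symmetry anywhere in sight.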

However, if an observation of a parameter returns an apparently unnatural value, can one be sure that it is accidentally small? In other words, can we be confident we have definitively explored all possible phenomena in nature that can give rise to naturally small parameters? 

From 30 January to 3 February, participants of an informal CERN theory institute “Exotic Approaches to Naturalness” sought to answer this question. Drawn from diverse corners of the theorist zoo, more than 130 researchers gathered, both virtually and in person, to discuss questions of naturalness. The invited talks were chosen to expose phenomena in quantum field theory and beyond which challenge the naive naturalness paradigm.

Coincidences and correlations

The first day of the workshop considered how apparent numerical coincidences can lead to unexpectedly small parameters in the IR due to the result of selection rules that do not immediately manifest from a symmetry, known as “natural zeros”. A second set of talks considered how, going beyond quantum field theory, the UV and IR can potentially be unexpectedly correlated, especially in theories containing quantum gravity, and how this correlation can lead to cancellations that are not apparent from a purely quantum field theory perspective.

The second day was far-ranging, with the first talk unveiling some lower-dimensional theories of the sort one more readily finds in condensed-matter systems, in which “topological” effects lead to constraints on IR parameters. A second discussed how fundamental properties, such as causality, can impose constraints on IR parameters unexpectedly. The last demonstrated how gravitational effective theories, including those describing the gravitational waves emitted in binary black-hole inspirals, have their own naturalness puzzles.
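The canonical example of such a causality constraint – sketched here with all order-one factors suppressed – is the sign of the leading derivative self-interaction of a single scalar:

$$ \mathcal{L}=\tfrac12(\partial\phi)^2+\frac{c}{\Lambda^4}\big[(\partial_\mu\phi)(\partial^\mu\phi)\big]^2\quad\Rightarrow\quad c>0, $$

because the forward 2 → 2 amplitude obeys a dispersion relation whose right-hand side is a positive integral over the total cross section. A Wilson coefficient with the wrong sign looks perfectly healthy in the IR, yet cannot descend from any causal, unitary UV completion.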

The ultimate goal is to now go forth and find new angles of attack on the biggest naturalness questions in fundamental physics

Midweek, alongside an inspirational theory colloquium by Nathaniel Craig (UC Santa Barbara), the potential role of cosmology in naturalness was interrogated. An early example made famous by Steven Weinberg concerns the role of the “anthropic principle” in the presently measured value of the cosmological constant. However, since then, particularly in recent years, theorists have found many possible connections and mechanisms linking naturalness questions to our universe and beyond.

The fourth day focussed on the emerging world of generalised and higher-form symmetries, which are new tools in the arsenal of the quantum field theorist. It was discussed how naturalness in IR parameters may potentially arise as a consequence of these recently uncovered symmetries, but whose naturalness would otherwise be obscured from view within a traditional symmetry perspective. The final day studied connections between string theory, the swampland and naturalness, exploring how the space of theories consistent with string theory leads to restricted values of IR parameters, which potentially links to naturalness. An eloquent summary was delivered by Tim Cohen (CERN).

Grand slam

In some sense the goal of the workshop was to push back the boundaries by equipping model builders with new and more powerful perspectives and theoretical tools linked to questions of naturalness, broadly defined. The workshop was a grand slam in this respect. However, the ultimate goal is to now go forth and use these new tools to find new angles of attack on the biggest naturalness questions in fundamental physics, relating to the cosmological constant and the Higgs mass.

The Standard Model, despite being an eminently marketable logo for mugs and t-shirts, is incomplete. It breaks down at very short distances and thus it is the IR of some more complete, more explanatory UV theory. We don’t know what this UV theory is, however, it apparently makes unnatural predictions for the Higgs mass and cosmological constant. Perhaps nature isn’t unnatural and generalised symmetries are as-yet hidden from our eyes, or perhaps string theory, quantum gravity or cosmology has a hand in things? It’s also possible, of course, that nature has fine-tuned these parameters by accident, however, that would seem – à la Weinberg – to point towards a framework in which such parameters are, in principle, measured in many different universes. All of these possibilities, and more, were discussed and explored to varying degrees.

Perhaps the most radical possibility, the most “exotic approach to naturalness” of all, would be to give up on naturalness altogether. Perhaps, in whatever framework UV completes the Standard Model, parameters such as the Higgs mass are simply incalculable, unpredictable in terms of more fundamental parameters, at any length scale. Shortly before the advent of relativity, quantum mechanics, and all that have followed from them, Lord Kelvin (attribution contested) once declared: “There is nothing new to be discovered in physics now. All that remains is more and more precise measurement”. The breadth of original ideas presented at the “Exotic Approaches to Naturalness” workshop, and the new connections constantly being made between formal theory, cosmology and particle phenomenology, suggest it would be similarly unwise now, as it was then, to make such a wager.

The post Design principles of theoretical physics appeared first on CERN Courier.

]]>
Meeting report A CERN event explored new angles of attack on the biggest naturalness questions in fundamental physics, from the cosmological constant to the Higgs mass. https://cerncourier.com/wp-content/uploads/2023/04/CCMayJun23_FN_yukawa.jpg
Lost in the landscape https://cerncourier.com/a/lost-in-the-landscape/ Fri, 03 Mar 2023 11:59:14 +0000 https://preview-courier.web.cern.ch/?p=107881 20 years since coining the string-theory "landscape", Leonard Susskind describes the emerging connections between quantum mechanics and gravity.

The post Lost in the landscape appeared first on CERN Courier.

]]>
What is string theory?

I take a view that a lot of my colleagues will not be too happy with. String theory is a very precise mathematical structure, so precise that many mathematicians have won Fields medals by making contributions that were string-theory motivated. It’s supersymmetric. It exists in flat or anti-de Sitter space (that is, a space–time with a negative curvature in the absence of matter or energy). And although we may not understand it fully at present, there does appear to be an exact mathematical structure there. I call that string theory with a capital “S”, and I can tell you with 100% confidence that we don’t live in that world. And then there’s string theory with a small “s” – you might call it string-inspired theory, or think of it as expanding the boundaries of this very precise theory in ways that we don’t know how to at present. We don’t know with any precision how to expand the boundaries into non-supersymmetric string theory or de Sitter space, for example, so we make guesses. The string landscape is one such guess. It’s not based on absolutely precise capital-S string theory, but on some conjectures about what this expanded small-s string theory might be. I guess my prejudice is that some expanded version of string theory is probably the right theory to describe particle physics. But it’s an expanded version, it’s not supersymmetric. Everything we do in anti-de-Sitter-space string theory is based on the assumption of absolute perfect supersymmetry. Without that, the models we investigate are rather speculative. 

How has the lack of supersymmetric discoveries at the LHC impacted your thinking?

All of the string theories we know about with any precision are exactly supersymmetric. So if supersymmetry is broken at the weak scale or beyond, it doesn’t help because we’re still facing a world that is not exactly supersymmetric. This only gets worse as we find out that supersymmetry doesn’t seem to even govern the world at the weak scale. It doesn’t even seem to govern it at the TeV scale. But that, I think, is secondary. The first primary fact is that the world is not exactly supersymmetric and string theory with a capital S is. So where are we? Who knows! But it’s exciting to be in a situation where there is confusion. Anything that can be said about how string theory can be precisely expanded beyond the supersymmetric bounds would be very interesting. 

What led you to coin the string theory “landscape” in 2003? 

A variety of things, among them the work of other people, in particular Polchinski and Bousso, who conjectured that string theories have a huge number of solutions and possible behaviours. This was a consequence, later articulated in a 2003 paper abbreviated “KKLT” after its authors, of the innumerable (initial estimates put it at more than 10⁵⁰⁰) different ways the additional dimensions of string theory can be hidden or “compactified”. Each solution has different properties, coupling constants, particle spectra and so forth. And they describe different kinds of universes. This was something of a shock and a surprise; not that string theory has many solutions, but that the numbers of these possibilities could be so enormous, and that among those possibilities were worlds with parameters, in particular the cosmological constant, which formed a discretuum as opposed to a continuum. From one point of view that’s troubling because some of us, me less than others, had hoped there was some kind of uniqueness to the solutions of string theory. Maybe there was a small number of solutions and among them we would find the world that we live in, but instead we found this huge number of possibilities in which almost anything could be found. On the other hand, we knew that the parameters of our world are unusual, exceptional, fine-tuned – not generic, but very special. And if the string landscape could say that there would be solutions containing the peculiar numbers that we face in physics, that was interesting. Another motivation came from cosmology: we knew on the basis of cosmic-microwave-background experiments and other things that the portion of the universe we see is very flat, implying that it is only a small part of the total. Together with the peculiar fine-tunings of the numbers in physics, it all fitted a pattern: the spectrum of possibilities would not only be large, but the spectrum of things we could find in the much bigger universe that would be implied by inflation and the flatness of the universe might just include all of these various possibilities.

So that’s how anthropic reasoning entered the picture?

All this fits together well with the anthropic principle – the idea that the patterns of coupling constants and particle spectra were conditioned on our own existence. Weinberg was very influential in putting forward the idea that the anthropic principle might explain a lot of things. But at that time, and probably still now, many people hated the idea. It’s a speculation or conjecture that the world works this way. The one thing I learned over the course of my career is not to underestimate the potential for surprises. Surprises will happen, patterns that look like they fit together so nicely turn out to be just an illusion. This could happen here, but at the moment I would say the best explanation for the patterns we see in cosmology and particle physics is a very diverse landscape of possibilities and an extremely large universe – a multiverse, if you like – that somehow manifests all of these possibilities in different places. Is it possible that it’s wrong? Oh yes! We might just discover that this very logical, compelling set of arguments is not technically right and we have to go in some other direction. Witten, who had negative thoughts about the anthropic idea, eventually gave up and accepted that it seems to be the best possibility. And I think that’s probably true for a lot of other people. But it can’t have the ultimate influence that a real theory with quantitative predictions can have. At present it’s a set of ideas that fit together and are somewhat compelling, but unfortunately nobody really knows how to use this in a technical way to be able to precisely confirm it. That hasn’t changed in 20 years. In the meantime, theoretical physicists have gone off in the important direction of quantum gravity and holography. 

Possible string-theory solutions

What do you mean by holography in the string-theory context?

Holography predates the idea of the landscape. It was based on Bekenstein’s observation that the entropy of a black hole is proportional to the area of the horizon and not the volume of the black hole. It conjectures that the 3D world of ordinary experience is an image of reality coded on a distant 2D surface. A few years after the holographic principle was first conjectured, two precise versions of it were discovered: so-called M(atrix) theory in 1996 and Maldacena’s “AdS/CFT” correspondence in 1997. The latter has been especially informative. It holds that there is a holographic duality between anti-de Sitter space formulated in terms of string theory, and quantum field theories that are similar to those that describe elementary particles. I don’t think string theory and holography are inconsistent with each other. String theory is a quantum theory that contains gravity, and all quantum mechanical gravity theories have to be holographic. String theory and holographic theory could well be the same thing.
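For reference, the area law Susskind is alluding to is the Bekenstein–Hawking entropy, quoted here in its standard form:

$$ S_{\rm BH}=\frac{k_B c^3 A}{4\,G\hbar}=k_B\,\frac{A}{4\,\ell_{\rm Pl}^2}, $$

an entropy that scales with the horizon area A rather than with the enclosed volume – the observation that seeded the holographic principle.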

Almost anything we learn will be a large fraction of what we know

One of the things that troubles me about the standard model of cosmology, with inflation and a positive cosmological constant, is that the world, or at least the portion of it that we see, is de Sitter space. We do not have a good quantum understanding of de Sitter space. If we ultimately learn that de Sitter space is impossible, that would be very interesting. We are in a situation now that is similar to 20 years ago, where very little progress has been made in the quantum foundations of cosmology and in particular in the so-called measurement problem, where we don’t know how to use these ideas quantitatively to make predictions. 

What does the measurement problem have to do with it? 

The usual methodology of physics, in particular quantum mechanics, is to imagine systems that are outside the systems we are studying. We call these systems observers, apparatuses or measuring devices, and we sort of divide the world into those measuring devices and the things we’re interested in. But it’s quite clear that in the world of cosmology/de Sitter space/eternal inflation, that we’re all part of the same thing. And I think that’s partly why we are having trouble understanding the quantum mechanics of these things. In AdS/CFT, it’s perfectly logical to think about observers outside the system or observers on the boundary. But in de Sitter space there is no boundary; there’s only everything that’s inside the de Sitter space. And we don’t really understand the foundations or the methodology of how to think about a quantum world from the inside. What we’re really lacking is the kind of precise examples we have in the context of anti-de Sitter space, which we can analyse. This is something I’ve been looking for, as have many others including Witten, without much success. So that’s the downside: we don’t know very much.

What about the upsides? 

The upside is that almost anything we learn will be a large fraction of what we know. So there’s potential for great developments by simply understanding a few things about the quantum mechanics of de Sitter space. When I talk about this to some of my young friends, they say that de Sitter space is too hard. They are afraid of it. People have been burned over the years by trying to understand inflation, eternal inflation, de Sitter space, etc, so it’s much safer to work on anti-de Sitter space. My answer to that is: yes, you’re right, but it’s also true that a huge amount is known about anti-de Sitter space and it’s hard to find new things that haven’t been said before, whereas in de Sitter space the opposite is true. We will see, or at least the young people will see. I am getting to the point where it is hard to absorb new ideas.

To what extent can the “swampland” programme constrain the landscape?

The swampland is a good idea. It’s the idea that you can write down all sorts of naive semi-classical theories with practically infinite options, but that the consistency with quantum mechanics constrains the things that are possible, and those that violate the constraints are called the swampland. For example, the idea that there can’t be exact global symmetries in a quantum theory of gravity, so any theory you write down that has gravity and has a global symmetry in it, without having a corresponding gauge symmetry, will be in the swampland. The weak-gravity conjecture, which enables you to say something about the relative strengths of gauge forces and gravity acting on certain particles, is another good idea. It’s good to try to separate those things you can write down from a semi-classical point of view and those that are constrained by whatever the principles of quantum gravity are. The detailed example of the cosmological constant I am much less impressed by. The argument seems to be: let’s put a constraint on parameters in cosmology so that we can put de Sitter space in the swampland. But the world looks very much like de Sitter space, so I don’t understand the argument and I suspect people are wrong here.

What have been the most important and/or surprising physics results in your career?

I had one big negative surprise, as did much of the community. This was a while ago when the idea of “technicolour” – a dynamical way to break electroweak symmetry via new gauge interactions – turned out to be wrong. Everybody I knew was absolutely convinced that technicolour was right, and it wasn’t. I was surprised and shocked. As for positive surprises, I think it’s the whole collection of ideas called “it from qubit”. This has shown us that quantum mechanics and gravity are much more closely entangled with each other than we ever thought, and that the apparent difficulty in unifying them was because they were already unified; so to separate and then try to put them back together using the quantisation technique was wrong. Quantum mechanics and gravity are so closely related that in some sense they’re almost the same thing. I think that’s the message from the past 20 – and in particular the past 10 – years of it–from-qubit physics, which has largely been dominated by people like Maldacena and a whole group of younger physicists. This intimate connection between entanglement and spatial structure – the whole holographic and “ER equals EPR” ideas – is very bold. It has given people the ability to understand Hawking radiation, among other things, which I find extremely exciting. But as I said, and this is not always stated, in order to have real confidence in the results, it all ultimately rests on the assumption of theories that have exact supersymmetry. 

What are the near-term prospects to empirically test these ideas?

One extremely interesting idea is “quantum gravity in the lab” – the idea that it is possible to construct systems, for example a large sphere of material engineered to support surface excitations that look like conformal field theory, and then to see if that system describes a bulk world with gravity. There are already signs that this is true. For example, the recent claim, involving Google, that two entangled quantum computers have been used to send information through the analogue of a wormhole shows how the methods of gravity can influence the way quantum communication is viewed. It’s a sign that quantum mechanics and gravity are not so different.

Do you have a view about which collider should follow the LHC? 

You know, I haven’t done real particle physics for a long time. Colliders fall into two categories: high-precision e+e− colliders and high-energy proton–proton ones. So the question is: do we need a precision Higgs factory at the TeV scale or do we want to search for new phenomena at higher energies? My prejudice is the latter. I’ve always been a “slam ‘em together and see what comes out” sort of physicist. Analysing high-precision data is always more clouded. But I sure wouldn’t like anyone to take my advice on this too seriously.

The post Lost in the landscape appeared first on CERN Courier.

]]>
Opinion 20 years since coining the string-theory "landscape", Leonard Susskind describes the emerging connections between quantum mechanics and gravity. https://cerncourier.com/wp-content/uploads/2023/02/CCMarApr23_INT_Susskind.jpg
Back to the Swamp https://cerncourier.com/a/back-to-the-swamp/ Thu, 19 Jan 2023 10:59:54 +0000 https://preview-courier.web.cern.ch/?p=107773 The "swampland" programme has led to a series of conjectures that have sparked debate about how to connect string theory with the observed universe.

The post Back to the Swamp appeared first on CERN Courier.

]]>
Since its first revolution in the 1980s, string theory has been proposed as a framework to unify all known interactions in nature. As such, it is a perfect candidate to embed the standard models of particle physics and cosmology into a consistent theory of quantum gravity. Over the past decades, the quest to recover both models as low-energy effective field theories (EFTs) of string theory has led to many surprising results, and to the notion of a “landscape” of string solutions reproducing many key features of the universe.


Initially, the vast number of solutions led to the impression that any quantum field theory could be obtained as an EFT of string theory, hindering the predictive power of the theory. In fact, recent developments have shown that quite the opposite is true: many respectable-looking field theories become inconsistent when coupled to quantum gravity and can never be obtained as EFTs of string theory. This set is known as the “swampland” of quantum field theories. The task of the swampland programme is to determine the structure and boundaries of the swampland, and from there extract the predictive power of string theory. Over the past few years, deep connections have emerged between the swampland and a fundamental understanding of open questions in high-energy physics, ranging from the hierarchy of fundamental scales to the origin and fate of the universe.

The workshop Back to the Swamp, held at Instituto de Física Teórica UAM/CSIC in Madrid from 26 to 28 September, gathered leading experts in the field to discuss recent progress in our understanding of the swampland, as well as its implications for particle physics and cosmology. In the spirit of the two previous conferences Vistas over the Swampland and Navigating the Swampland, also hosted at IFT, the meeting featured 22 scientific talks and attracted about 100 participants.

The swampland programme has led to a series of conjectures that have sparked debate about how to connect string theory with the observed universe, especially with models of early-universe cosmology. This was reflected in several talks on the subject, ranging from new scrutiny of current proposals to obtain de Sitter vacua, which might not be consistently constructed in quantum gravity, to new candidates for quintessence models that introduce a scalar field to explain the observed accelerated expansion of the universe, and scenarios where dark matter is composed of primordial black holes.

Several talks covered the implications of the programme for particle physics and quantum field theories in general. Topics included axion-based proposals to solve the strong-CP problem from the viewpoint of quantum gravity, as well as how axion physics and approximate symmetries can link swampland ideas with experiment and how the mathematical concept of “tameness” could describe those quantum field theories that are compatible with quantum gravity. Speakers also presented progress on the proposal to characterise large field distances and field-dependent weak couplings as emergent concepts, general bounds on supersymmetric quantum field theories from the consistency of axionic string worldsheet theories, and several proposals on how dispersive bounds and the bootstrap programme are relevant for swampland ideas. Finally, several talks covered more formal topics, such as a sharpened formulation of the distance conjecture, new tests of the tower weak-gravity conjecture, the discovery of new corners in the string-theory landscape, and arguments in favour of and against Euclidean wormholes.

The new results demonstrated the intense activity in the field and highlighted several current aspects of the swampland programme. It is clear that the different proposals and conjectures driving the programme have sharpened and become more interconnected. Each year, the programme attracts more scientists working in different specialities of string theory, and proposals to connect the swampland with experiment take a larger fraction of the efforts.

The post Back to the Swamp appeared first on CERN Courier.

]]>
Meeting report The "swampland" programme has led to a series of conjectures that have sparked debate about how to connect string theory with the observed universe. https://cerncourier.com/wp-content/uploads/2023/01/back_to_the_swamp.png
A theory of theories https://cerncourier.com/a/a-theory-of-theories/ Tue, 10 Jan 2023 11:42:08 +0000 https://preview-courier.web.cern.ch/?p=107599 Effective field theory (EFT) is now seen as the most fundamental and generic way to capture the physics of nature at all scales, with applications ranging from LHC physics to cosmology.

The post A theory of theories appeared first on CERN Courier.

]]>
Production of Higgs bosons in ATLAS

High-energy physics spans a wide range of energies, from a few MeV to TeV, that are all relevant. It is therefore often difficult to take all phenomena into account at the same time. Effective field theories (EFTs) are designed to break down this range of scales into smaller segments so that physicists can work in the relevant range. Theorists “cut” their theory’s energy scale at the order of the mass of the lightest particle omitted from the theory, such as the proton mass. Thus, multi-scale problems reduce to separate and single-scale problems (see “Scales” image). EFTs are today also understood to be “bottom-up” theories. Built only out of the general field content and symmetries at the relevant scales, they allow us to test hypotheses efficiently and to select the most promising ones without needing to know the underlying theories in full detail. Thanks to their applicability to all generic classical and quantum field theories, the sheer variety of EFT applications is striking. 

In hindsight, particle physicists were working with EFTs from as early as Fermi’s phenomenological picture of beta decay, in which a four-fermion vertex replaces the W-boson propagator because the momentum transfer is much smaller than the mass of the W boson (see “Fermi theory” image). Like so many profound concepts in theoretical physics, EFT was first considered in a narrow phenomenological context. One of the earliest instances was in the 1960s, when ad-hoc methods of current algebras were utilised to study weak interactions of hadrons. This required detailed calculations, and a simpler approach was needed to derive useful results. The heuristic idea of describing hadron dynamics with the most general Lagrangian density based on symmetries, the relevant energy scale and the relevant particles, which can be written in terms of operators multiplied by Wilson coefficients, was yet to be known. With this approach, it was possible to encode local symmetries in terms of the current algebra due to their association with conserved currents.
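In modern language, Fermi’s four-fermion vertex is what survives when the heavy W boson is integrated out; schematically (keeping only the leading term, with standard electroweak conventions):

$$ \frac{1}{q^2-M_W^2}=-\frac{1}{M_W^2}\Big(1+\frac{q^2}{M_W^2}+\dots\Big)\;\;\text{for}\;|q^2|\ll M_W^2,\qquad \frac{G_F}{\sqrt2}=\frac{g^2}{8M_W^2}, $$

so at low momentum transfer the W-exchange amplitude collapses to a point-like contact interaction of strength G_F, with corrections suppressed by powers of q²/M_W² – exactly the structure of an EFT expansion.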

For strong interactions, physicists described the interaction between pions with chiral perturbation theory, an effective Lagrangian, which simplified current algebra calculations and enabled the low-energy theory to be investigated systematically. This “mother” of modern EFTs describes the physics of hadrons and remains valid to an energy scale of the proton mass. Heavy-quark effective theory (HQET), introduced by Howard Georgi in 1990, complements chiral perturbation theory by describing the interactions of charm and bottom quarks. HQET allowed us to make predictions on B-meson decay rates, since the corrections could now be classified. The more powers of energy are allowed, the more infinities appear. These infinities are cancelled by available counter-terms. 
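The leading terms of the two Lagrangians mentioned here take a compact form (normalisation conventions for f_π differ between textbooks):

$$ \mathcal{L}_{\chi\rm PT}=\frac{f_\pi^2}{4}\,\mathrm{Tr}\big(\partial_\mu U\,\partial^\mu U^\dagger\big)+\dots,\;\;U=\exp\!\Big(\frac{i\,\pi^a\tau^a}{f_\pi}\Big);\qquad \mathcal{L}_{\rm HQET}=\bar h_v\,(i\,v\!\cdot\!D)\,h_v+\mathcal{O}\!\Big(\frac{1}{m_Q}\Big), $$

with pion interactions suppressed by powers of momenta over the chiral scale, and heavy-quark effects organised as corrections in 1/m_Q.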

Different effective field theories

Similarly, it is possible to regard the Standard Model as the truncation of a much more general theory including non-renormalisable interactions, which yield corrections of higher order in energy. This perception of the whole Standard Model as an effective field theory started to be formed in the late 1970s by Weinberg and others (see “All things EFT: a lecture series hosted at CERN” panel). Among the known corrections to the Standard Model that do not satisfy its approximate symmetries are neutrino masses, postulated in the 1960s and discovered via the observation of neutrino oscillations in the late 1990s. While the scope of EFTs was unclear initially, today we understand that all successful field theories, with which we have been working in many areas of theoretical physics, are nothing but effective field theories. EFTs provide the theoretical framework to probe new physics and to establish precision programmes at experiments. The former is crucial for making accurate theoretical predictions, while the latter is central to the physics programme of CERN in general.
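Neutrino masses illustrate the logic nicely: the leading correction one can add to the SM is a single dimension-five structure, the Weinberg operator, and (up to order-one coefficients) it predicts

$$ \mathcal{L}=\mathcal{L}_{\rm SM}+\frac{c_{ij}}{\Lambda}\,(L_iH)(L_jH)+\dots\quad\Rightarrow\quad m_\nu\sim\frac{v^2}{\Lambda}, $$

so that neutrino masses of order 0.1 eV, with v ≈ 246 GeV, point to a scale Λ of roughly 10¹⁴–10¹⁵ GeV if the coefficient is of order one.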

EFTs in particle physics

More than a decade has passed since the first run of the LHC, in which the Higgs boson and the mechanism for electroweak symmetry breaking were discovered. So far, there are no signals of new physics beyond the SM. EFTs are well suited to explore LHC physics in depth. A typical example of a process involving two scales is Higgs-boson production, where there can be a factor of 10–100 between the Higgs mass and its transverse momentum. The calculation of each Higgs-boson production process leads to large logarithms that can invalidate perturbation theory because of the large scale separation. This is just one of many examples of the two-scale problem that arises when the full quantum-field-theory approach for high-energy colliders is applied. Traditionally, such two-scale problems have been treated in the framework of QCD factorisation and resummation.
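
Schematically – a generic illustration rather than a quoted result – each order in αs for such a two-scale observable can bring up to two additional powers of a large logarithm,

\sigma \sim \sigma_0 \sum_n \alpha_s^n \Big[c_{n,2n}\,L^{2n} + c_{n,2n-1}\,L^{2n-1} + \dots\Big], \qquad L = \ln\frac{Q_1}{Q_2},

where Q1 and Q2 are the two widely separated scales; once αs L² is of order one, the fixed-order series is no longer reliable and the logarithms must be resummed to all orders.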

Fermi theory

Over the past two decades, it has become possible to recast two-scale problems at high-energy colliders with the advent of soft-collinear effective theory (SCET). SCET is nowadays a popular framework used to describe Higgs physics, jets and their substructure, as well as more formal problems such as power corrections, with the eventual aim of reconstructing full amplitudes. The difference between HQET and SCET is that SCET considers long-distance interactions between quarks and both soft and collinear particles, whereas HQET takes into account only soft interactions between a heavy quark and a parton. SCET is just one example where the EFT methodology has been indispensable, even though the underlying theory at much higher energies is known. Other examples of EFT applications include precision measurements of rare decays, which can be described by QCD with its approximate chiral symmetry, or heavy quarks at finite temperature and density. EFT is also central to a deeper understanding of the so-called flavour anomalies, enabling comparisons between theory and experiment in terms of particular Wilson coefficients.
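
For the flavour anomalies, a minimal sketch of the relevant framework (the weak effective Hamiltonian for b → sℓ⁺ℓ⁻ transitions, in one common convention) is

\mathcal{H}_{\rm eff} = -\frac{4G_F}{\sqrt{2}}\,V_{tb}V_{ts}^{*}\,\frac{e^2}{16\pi^2}\sum_i C_i(\mu)\,\mathcal{O}_i, \qquad \mathcal{O}_9 = (\bar{s}\gamma_\mu P_L b)(\bar{\ell}\gamma^\mu \ell), \quad \mathcal{O}_{10} = (\bar{s}\gamma_\mu P_L b)(\bar{\ell}\gamma^\mu\gamma_5 \ell),

with theory and experiment compared through global fits to Wilson coefficients such as C9 and C10, and possible new physics appearing as shifts in those coefficients.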

All things EFT: a lecture series hosted at CERN

Steven Weinberg

A novel global lecture series titled “All things EFT” was launched at CERN in autumn 2020 as a cross-cutting online series focused on the universal concept of EFT, and its application to the many areas where it is now used as a core tool in theoretical physics. Inaugurated in a formidable historical lecture by the late Steven Weinberg, who reviewed the emergence and development of the idea of EFT through to its perception nowadays as encompassing all of quantum field theory and beyond, the lecture series has amassed a large following that is still growing. The series featured outstanding speakers, world-leading experts from cosmology to fluid dynamics, condensed-matter physics, classical and quantum gravity, string theory, and of course particle physics – the birthing bed of the powerful EFT framework. The second year of the series was kicked off in a lecture dedicated to the memory of Weinberg by Howard Georgi, who looked back on the development of heavy-quark effective theory and its immediate aftermath. 

Moreover, precision measurements of Higgs and electroweak observables at the LHC and future colliders will provide opportunities to detect new-physics signals, such as resonances in invariant-mass plots or small deviations from the SM seen in the tails of distributions, for instance at the HL-LHC – testing the perception of the SM as a low-energy incarnation of a more fundamental theory being probed at the electroweak scale. The corresponding EFT extension of the SM is dubbed SMEFT (SM EFT) or HEFT (Higgs EFT), depending on whether the Higgs fields are expressed in terms of the Higgs doublet or the physical Higgs boson. This EFT framework has recently been implemented in the data-analysis tools at the LHC, enabling analyses across different channels and even different experiments (see “LHC physics” image). At the same time, the study of SMEFT and HEFT has sparked a plethora of theoretical investigations that have uncovered remarkable underlying features, for example showing how the EFT can be extended, or placing constraints on the EFT coefficients from Lorentz invariance, causality and analyticity.
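
In the SMEFT language this takes the schematic form (conventions vary; this is one standard way of writing it)

\mathcal{L}_{\rm SMEFT} = \mathcal{L}_{\rm SM} + \sum_i \frac{c_i^{(6)}}{\Lambda^2}\,\mathcal{O}_i^{(6)} + \dots, \qquad \text{e.g.}\;\; \mathcal{O}_{HG} = (H^\dagger H)\,G^a_{\mu\nu}G^{a\,\mu\nu},

where an operator such as O_HG shifts the effective Higgs–gluon coupling by an amount of order v²/Λ², while deviations in the tails of distributions typically grow as E²/Λ² – the two signatures mentioned above.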

EFTs in gravity

Since the inception of EFT, it was believed that the framework applied only to quantum field theories describing the physics of elementary particles at high energy scales – or, equivalently, at very small length scales. Thus, EFT seemed mostly irrelevant to gravitation, for which we still lack a full theory valid at quantum scales. The only way in which EFT seemed pertinent to gravitation was to think of general relativity as a first approximation to an EFT description of quantum gravity, which indeed provided a new EFT perspective at the time. However, in the past decade it has become widely acknowledged that EFT provides a powerful framework to capture completely classical gravitational dynamics across large length scales, as long as these scales display a clear hierarchy.

Gravitational-wave detectors

The most notable application to such classical gravitational systems came when it was realised that the EFT framework would be ideal for handling the gravitational radiation emitted during the inspiral phase of a binary of compact objects, such as black holes. At this stage in the evolution of the binary, the compact objects are moving at non-relativistic velocities. Using the small velocity as the expansion parameter exhibits the separation between the various characteristic length scales of the system, so the physics can be treated perturbatively. For example, it was found that couplings manifestly change across the characteristic scales even in classical systems – behaviour previously believed to be unique to quantum field theories. The application of EFT to the binary-inspiral problem has been so successful that the precision frontier has been pushed beyond the previous state of the art, quickly surpassing the reach of work that had focused on the two-body problem for decades using traditional methods in general relativity.
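
To make the power counting concrete (a standard order-of-magnitude sketch, not tied to any particular calculation): for a bound binary the virial theorem ties the expansion parameter to the orbital separation r,

\frac{v^2}{c^2} \sim \frac{GM}{r c^2}, \qquad \lambda_{\rm GW} \sim \frac{c}{v}\,r \gg r,

so the sizes of the compact objects, the orbital scale and the radiation wavelength are cleanly separated, and corrections at the nth post-Newtonian order enter suppressed by (v/c)^2n relative to the Newtonian dynamics.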

This theoretical progress has made an even broader impact since the breakthrough direct discovery of gravitational waves (GWs) was announced in 2016. An inspiralling binary of black holes merged into a single black hole in a fraction of a second, releasing an enormous amount of energy in the form of GWs – a discovery that spurred even more intensive use of EFTs for the generation of theoretical GW data. In the coming years and decades, a continuous increase in the quantity and quality of real-world GW data is expected from the rapidly growing worldwide network of ground-based GW detectors, and from future space-based interferometers covering a wide range of target frequencies (see “Next generation” image).

EFTs in cosmology

Cosmology is inherently a cross-cutting domain, spanning a range of scales of roughly 10⁶⁰ – some 60 orders of magnitude – from the Planck scale to the size of the observable universe. As such, cosmology generally cannot be expected to be tackled directly by each of the fundamental theories that capture particle physics or gravity. The correct description of cosmology relies heavily on work in many disparate areas of research in theoretical and experimental physics, including particle physics and general relativity among many more.

Artist’s impression of the Euclid satellite

The development of EFT applications in cosmology – including EFTs of inflation, dark matter, dark energy and even of large-scale structure – has become essential for making observable predictions in cosmology. The discovery of the accelerated expansion of the universe in 1998 exposed our difficulty in understanding gravity in both the quantum and the classical regimes. The cosmological-constant problem and the dark-matter paradigm might be hints of alternative theories of gravity at very large scales. Indeed, the problems with gravity in the very-high and very-low energy range may well be tied together. The science programme of next-generation large surveys, such as ESA’s Euclid satellite (see “Expanding horizons” image), relies heavily on all these EFT applications for the exploitation of the enormous volume of data that is going to be collected to constrain unknown cosmological parameters, thus helping to pinpoint viable theories.
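
As one concrete example of the kind of quantity such surveys target (the widely used w0–wa parametrisation, quoted here purely for illustration), the dark-energy equation of state is often expanded about the present-day scale factor a = 1 as

w(a) = w_0 + w_a\,(1 - a),

with a pure cosmological constant corresponding to w0 = –1 and wa = 0; EFTs of dark energy map large families of modified-gravity models onto a handful of such functions, which Euclid-class data can then constrain.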

The future of EFTs in physics

The EFT framework plays a key role at the exciting and rich interface between theory and experiment in particle physics, gravity and cosmology as well as in other domains, such as condensed-matter physics, which were not covered here. The technology for precision measurements in these domains is constantly being upgraded, and in the coming years and decades we are heading towards a growing influx of real-world data of higher quality. Future particle-collider projects, such as the Future Circular Collider at CERN, or China’s Circular Electron Positron Collider, are being planned and developed. Precision cosmology is also thriving, with an upcoming next generation of very large surveys, such as the ground-based LSST or the space-based Euclid. GW detectors keep improving and multiplying, and besides those currently operating many more are planned, aimed at various frequency ranges, which will enable a richer array of sources and events to be found.

EFTs provide the theoretical framework to probe new physics and to establish precision programmes at experiments across all domains of physics

Half a century after the concept formally emerged, effective field theory is still full of surprises. Recently, the physical space of EFTs has been studied as a fundamental entity in its own right. These studies, by numerous groups worldwide, have exposed a hidden “totally positive” geometric structure dubbed the EFT-hedron, which constrains the EFT expansion of any quantum field theory – and even string theory – from first principles such as causality, unitarity and analyticity, which must be satisfied by the amplitudes of these theories. This recent formal progress reflects the ultimate leap in the perception of EFT as the most fundamental and most generic theoretical framework for capturing the physics of nature at all scales. Clearly, in tackling the vast array of formidable open questions in physics that still lie ahead, effective field theory is here to stay – for good.

The post A theory of theories appeared first on CERN Courier.

Joining forces for quantum gravity https://cerncourier.com/a/joining-forces-for-quantum-gravity/ Mon, 07 Nov 2022 15:28:07 +0000 https://preview-courier.web.cern.ch/?p=106991 The International Society for Quantum Gravity aims to turn disagreement into a call for better understanding, say Bianca Dittrich and Daniele Oriti

The challenge of casting space–time and gravity in the language of quantum mechanics and unravelling their fundamental structure has occupied some of the best minds in physics for almost a century. Not only is it one of the hardest problems out there – requiring mastery of general relativity, quantum field theory, high-level mathematics and deep conceptual issues – but distinct sub-communities of researchers have developed around different and apparently mutually exclusive approaches. 

Historically, this reflected to a large extent the existing subdivision of theoretical physics between the particle-physics community and the much smaller gravitational-physics one, with condensed-matter theorists entirely alien, at the time, to the quantum gravity (QG) problem. Until 30 years ago, the QG landscape roughly featured two main camps, often identified simply with string theory and canonical loop quantum gravity, even if a few more hybrid formalisms already existed. Much progress has been achieved in this divided landscape, with each camp maintaining the belief that it only had to push forward its own strategies to succeed. At a more sociological level, intertwined with serious scientific disagreements, this even led, in the early 2000s, to what the popular press dubbed the “String Wars”.

A new generation has grown up in a diverse, if conflicting, scientific landscape

Today there is a growing conviction that if we are going to make progress towards this “holy grail” of physics, we need to adopt a more open attitude. We need to pay serious attention to available tools, results and ideas wherever they originated, pursuing unified perspectives when suitable and contrasting them in a constructive manner otherwise. In fact, the past 30 years has seen the development of several QG approaches, the birth of new (hybrid) ones, fresh directions and many results. A new generation has grown up in a diverse, if conflicting, scientific landscape. Today there is much more emphasis on QG phenomenology and physical aspects, thanks to parallel advances in observational cosmology and astrophysics, alongside the recognition that some mathematical developments naturally cut across specific QG formalisms. There is also much more contact with “outside” communities such as particle physics, cosmology, condensed matter and quantum information, which are not interested in internal QG divisions but only in QG deliverables. Furthermore, several scientific overlaps between QG formalisms exist and are often so strong that they make the definition of sharp boundaries between them look artificial. 

Introducing the ISQG 

The time is ripe to move away from the String Wars towards a “multipolar QG pax”, in which diversity does not mean irreducible conflict and disagreement is turned into a call for better understanding. To this end, last year we created the International Society for Quantum Gravity (ISQG) with a founding committee representing different QG approaches and more than 400 members who do not necessarily agree scientifically, but value intelligent disagreement. 

ISQG’s scientific goals are to: promote top-quality research on each QG formalism and each open issue (mathematical, physical and in particular conceptual); stimulate cross-fertilisation across formalisms (e.g. by focusing on shared mathematical ingredients/ideas or on shared physical issues); be prepared for QG observations and tests (develop a common language to interpret experiments with QG implications, and a better understanding of how different approaches would differ in predictions); and push for new ideas and directions. Its sociological goals are equally important. It aims to help recognise that we are a single community with shared interests and goals, overcome barriers and diffidence among sub-communities, support young researchers and promote QG outside the community. A number of initiatives, as well as new funding schemes, are being planned to help achieve these goals.

We envision the main role of the ISQG as sponsoring and supporting the initiatives proposed by its members, in addition to organising its own. This includes a bi-annual conference series to be announced soon, focused workshops and schools, seminar series, career support for young researchers and the preparation of outreach and educational material on quantum gravity.

So far, the ISQG has been well received, with more than 100 participants attending its inaugural workshop in October 2021. Researchers in quantum gravity and related fields are welcome to join the society, contribute to its initiatives and help to create a community that transcends outdated boundaries between different approaches, which only hinder scientific progress. We need all of you!

The post Joining forces for quantum gravity appeared first on CERN Courier.

Snowmass back at KITP https://cerncourier.com/a/snowmass-back-at-kitp/ Mon, 21 Mar 2022 13:17:16 +0000 https://preview-courier.web.cern.ch/?p=98063 Theorists from the entire spectrum of high-energy physics convened to sketch a decadal vision in advance of the Snowmass Community Summer Study in Seattle this July.


From 23 to 25 February, the Kavli Institute for Theoretical Physics (KITP) in Santa Barbara, California, hosted the Theory Frontier conference of the US Particle Physics Community Planning Exercise, “Snowmass 2021”. Organised by the Division of Particles and Fields of the American Physical Society (APS DPF), Snowmass aims to identify and document a scientific vision for the future of particle physics in the US and abroad. The event brought together theorists from the entire spectrum of high-energy physics, fostering dialogue and revealing common threads, to sketch a decadal vision for high-energy theory in advance of the main Snowmass Community Summer Study in Seattle on 17–26 July.

It was also one of the first large in-person meetings for the US particle physics community since the start of the COVID-19 pandemic.

The conference began in earnest with Juan Maldacena’s (IAS) vision for formal theory in the coming decade. Highlighting promising directions in quantum field theory and quantum gravity, he surveyed recent developments in “bootstrap” techniques for conformal field theories, amplitudes and cosmology; implications of quantum information for understanding quantum field theories; new dualities in supersymmetric and non-supersymmetric field theories; progress on the black-hole information problem; and constraints on effective field theories from consistent coupling to quantum gravity. Following talks by Eva Silverstein (U. Stanford) on quantum gravity and cosmology and Xi Dong (UC Santa Barbara) on geometry and entanglement, David Gross (KITP) brought the morning to a close by recalling the role of string theory in the quest for unification and emphasising its renewed promise in understanding QCD.

Clay Cordova (Chicago), David Simmons-Duffin (Caltech), Shu Heng Shao (IAS) and Ibrahima Bah (Johns Hopkins) followed with a comprehensive overview of recent progress in quantum field theory. Cordova’s summary of supersymmetric field theory touched on the classification of superconformal field theories, improved understanding of maximally supersymmetric theories in diverse dimensions, and connections between supersymmetric and non-supersymmetric dynamics. Simmons-Duffin made a heroic attempt to convey the essentials of the conformal bootstrap in a 15-minute talk, while Shao surveyed generalised global symmetries and Bah detailed geometric techniques guiding the classification of superconformal field theories.

The first afternoon began with Raman Sundrum’s (Maryland) vision for particle phenomenology, in which he surveyed the pressing questions motivating physics beyond the Standard Model, some promising theoretical mechanisms for answering them, and the experimental opportunities that follow. Tim Tait (UC Irvine) followed with an overview of dark-matter models and motivation, drawing a contrast between the more top-down perspective on dark matter prevalent during the previous Snowmass process in 2013 (also hosted by KITP) and the much broader bottom-up perspective governing today’s thinking. Devin Walker (Dartmouth) and Gilly Elor (Mainz) brought the first day’s physics talks to a close with bosonic dark matter and new ideas in baryogenesis.

The final session of the first day was devoted to issues of equity and inclusion in the high-energy theory community, with DPF early-career member Julia Gonski (Columbia) making a persuasive case for giving a voice to early-career physicists in the years between Snowmass processes. Connecting from Cambridge, Howard Georgi (Harvard) delivered a compelling speech on the essential value of diversity in physics, recalling Ann Nelson’s legacy and reminding the packed auditorium that “progress will not happen at all unless the good people who think that there is nothing they can do actually wake up and start doing.” This was followed by a panel discussion moderated by Devin Walker (Dartmouth) and featuring Georgi, Bah, Masha Baryakhtar (Washington) and Tien-Tien Yu (Oregon) in dialogue about their experiences.

Developments across all facets of the high-energy theory community are shaping new ways of exploring the universe from the shortest length scales to the very longest

The second and third days of the conference spanned the entire spectrum of activity within high-energy theory, consolidated around quantum information science with talks by Tom Hartman (Cornell), Raphael Bousso (Berkeley), Hank Lamm (Fermilab) and Yoni Kahn (Illinois). Marius Wiesemann (MPI), Felix Kling (DESY) and Ian Moult (Yale) discussed simulations for collider physics, and Michael Wagman (Fermilab), Huey-Wen Lin (Michigan State) and Thomas Blum (Connecticut) emphasised recent progress in lattice gauge theory. Recent developments in precision theory were covered by Bernhard Mistlberger (CTP), Emanuele Mereghetti (LANL) and Dave Soper (Oregon) and the status of scattering-amplitudes applications by Nima Arkani-Hamed (IAS), Mikhail Solon (Caltech) and Henriette Elvang (Michigan). Masha Baryakhtar (Washington), Nicholas Rodd (CERN) and Daniel Green (UC San Diego) reviewed astroparticle and cosmology theory, followed by an overview of effective field theory approaches in cosmology and gravity by Mehrdad Mirbabayi (ICTP) and Walter Goldberger (Yale); Isabel Garcia Garcia (KITP) discussed alternative approaches to effective field theories in gravitation. Recent findings in neutrino theory were covered by Alex Friedland (SLAC), Mu Chun Chen (UC Irvine) and Zahra Tabrizi (Northwestern). Bridging these themes with talks on amplitudes and collider physics, machine learning for particle theory and cosmological implications of dark sector models were talks by Lance Dixon (SLAC), Jesse Thaler (MIT) and Neal Weiner (New York). Connections with the many other “frontiers” in the Snowmass process were underlined by Laura Reina (Florida State), Lian-Tao Wang (Chicago), Pedro Machado (Fermilab), Flip Tanedo (UC Riverside), Steve Gottlieb (Indiana), and Alexey Petrov (Wayne State).

The rich and broad programme of the Snowmass Theory Conference demonstrates the vibrancy of high-energy theory at this interesting juncture for the field, following the discovery of the final missing piece of the Standard Model, the Higgs boson, in 2012. Subsequent developments across all facets of the high-energy theory community are shaping new ways of exploring the universe from the shortest length scales to the very longest. The many thematic threads and opportunities covered in the conference bode well for the final Snowmass discussions with the whole community in Seattle this summer.

The post Snowmass back at KITP appeared first on CERN Courier.

Artificial-neutrino experiments near precision era  https://cerncourier.com/a/artificial-neutrino-experiments-near-precision-era/ Wed, 29 Sep 2021 07:58:40 +0000 https://preview-courier.web.cern.ch/?p=95246 NuFact 2021 brought together experimentalists, theorists and accelerator physicists in pursuit of CKM-level precision in neutrino physics.

The 22nd International Workshop on Neutrinos from Accelerators (NuFact 2021) was held from 6 to 11 September, attracting a record 450 participants either online or in Cagliari, Italy. NuFact addresses topics in neutrino oscillations and neutrino-scattering physics, neutrino beams, muon physics, neutrinos beyond the Standard Model and the latest generation of neutrino detectors. The 2021 edition was organised by the Cagliari Division of INFN, the Italian Institute for Nuclear Physics and the University of Milano-Bicocca.

At the time of the first NuFact in 1999, it wasn’t at all clear that accelerator experiments could address leptonic CP violation in neutrinos. Fits still ignored θ13, which expresses the relatively small coupling between the third neutrino mass eigenstate and the electron, and the size of the solar-oscillation mass splitting, which drives the CP-violating amplitude. Today, leading experiments testify to a precision era of neutrino physics where every parameter in the neutrino mixing matrix must be fitted. T2K, NOvA and MINERvA all reported new analyses, and speakers from Fermilab updated the conference on the commissioning of the laboratory’s short-baseline experiments ICARUS, MicroBooNE and SBND, which seek to clarify experimental hints of additional “sterile” neutrinos. After a long journey from CERN to Fermilab, the ICARUS detector, the largest and most downstream of the three liquid-argon detectors in the programme, has been filled with liquid argon, and data taking is now in full swing.

g-2 anomaly

As we strive to pin down the values of the neutrino mixing matrix with a precision approaching that of the CKM matrix, NuFact serves as a key forum for collaborations between theorists and experimentalists. Simon Corrodi (Argonne) showed how the latest results from Fermilab on the g-2 anomaly may suggest new physics in lepton couplings, with potential implications for neutrino couplings and neutrino propagation. Collaboration with accelerator physicists is also important. After the discovery in 2012 that θ13 is nonzero, the focus of experiments with artificial sources of neutrinos turned to the development of multi-MW beams and the need for new facilities. Keith Gollwitzer (Fermilab) kicked off the discussion by summarising Fermilab’s outstanding programme at the intensity frontier, paving the way for DUNE, and Megan Friend (KEK) presented impressive progress in Japan last year. The J-PARC accelerator complex is being upgraded to serve the new T2K near detector, for which the final TPC anode and cathode are now being tested at CERN. The J-PARC luminosity upgrade will also serve the Hyper-Kamiokande experiment, which is due to come online on approximately the same timeline as DUNE. Though the J-PARC neutrino beam will be less intense and by design more monochromatic than that from Fermilab to DUNE, the Hyper-Kamiokande detector will be both closer and larger, promising comparable statistics to DUNE, and addressing the same physics questions at a lower energy.

ENUBET and nuSTORM could operate in parallel with DUNE and Hyper-Kamiokande

A lively round-table discussion featured a dialogue between two of the experiments’ co-spokespersons, Stefan Söldner-Rembold (Manchester) and Francesca Di Lodovico (King’s College London). Both emphasised the complementarity of DUNE and Hyper-Kamiokande, and the need to reduce systematic uncertainties with ad-hoc experiments. J-PARC director Takashi Kobayashi explored this point in the context of data-driven models and precision experiments such as ENUBET and nuSTORM. Both experiments are in the design phase, and could operate in parallel with DUNE and Hyper-Kamiokande in the latter half of this decade, said Sara Bolognesi (Saclay) and Kenneth Long (Imperial). A satellite workshop focused on potential synergies between these two CERN-based projects and a muon-collider demonstrator, while another workshop explored physics goals and technical challenges for “ESSnuSB” – a proposed neutrino beam at the European Spallation Source in Lund, Sweden. In a plenary talk, Nobel laureate and former CERN Director-General Carlo Rubbia went further still, exploring the possibility of a muon collider at the same facility.

The next NuFact will take place in August 2022 in Salt Lake City, Utah.

The post Artificial-neutrino experiments near precision era  appeared first on CERN Courier.

Stealing theorists’ lunch https://cerncourier.com/a/stealing-theorists-lunch/ Tue, 31 Aug 2021 21:49:46 +0000 https://preview-courier.web.cern.ch/?p=94049 Artificial-intelligence techniques have been used in experimental particle physics for 30 years, and are becoming increasingly widespread in theoretical physics. Anima Anandkumar and John Ellis explore the possibilities.

John Ellis and Anima Anandkumar

How might artificial intelligence make an impact on theoretical physics?

John Ellis (JE): To phrase it simply: where do we go next? We have the Standard Model, which describes all the visible matter in the universe successfully, but we know dark matter must be out there. There are also puzzles, such as what is the origin of the matter in the universe? During my lifetime we’ve been playing around with a bunch of ideas for tackling those problems, but haven’t come up with solutions. We have been able to solve some but not others. Could artificial intelligence (AI) help us find new paths towards attacking these questions? This would be truly stealing theoretical physicists’ lunch.

 Anima Anandkumar (AA): I think the first steps are whether you can understand more basic physics and be able to come up with predictions as well. For example, could AI rediscover the Standard Model? One day we can hope to look at what the discrepancies are for the current model, and hopefully come up with better suggestions.

 JE: An interesting exercise might be to take some of the puzzles we have at the moment and somehow equip an AI system with a theoretical framework that we physicists are trying to work with, let the AI loose and see whether it comes up with anything. Even over the last few weeks, a couple of experimental puzzles have been reinforced by new results on B-meson decays and the anomalous magnetic moment of the muon. There are many theoretical ideas for solving these puzzles but none of them strike me as being particularly satisfactory in the sense of indicating a clear path towards the next synthesis beyond the Standard Model. Is it imaginable that one could devise an AI system that, if you gave it a set of concepts that we have, and the experimental anomalies that we have, then the AI could point the way?

 AA: The devil is in the details. How do we give the right kind of data and knowledge about physics? How do we express those anomalies while at the same time making sure that we don’t bias the model? There are anomalies suggesting that the current model is not complete – if you are giving that prior knowledge then you could be biasing the models away from discovering new aspects. So, I think that delicate balance is the main challenge.

 JE: I think that theoretical physicists could propose a framework with boundaries that AI could explore. We could tell you what sort of particles are allowed, what sort of interactions those could have and what would still be a well-behaved theory from the point of view of relativity and quantum mechanics. Then, let’s just release the AI to see whether it can come up with a combination of particles and interactions that could solve our problems. I think that in this sort of problem space, the creativity would come in the testing of the theory. The AI might find a particle and a set of interactions that would deal with the anomalies that I was talking about, but how do we know what’s the right theory? We have to propose some other experiments that might test it – and that’s one place where the creativity of theoretical physicists will come into play.

 AA: Absolutely. And many theories are not directly testable. That’s where the deeper knowledge and intuition that theoretical physicists have is so critical.

Is human creativity driven by our consciousness, or can contemporary AI be creative? 

AA: Humans are creative in so many ways. We can dream, we can hallucinate, we can create – so how do we build those capabilities into AI? Richard Feynman famously said “What I cannot create, I do not understand.” It appears that our creativity gives us the ability to understand the complex inner workings of the universe. With the current AI paradigm this is very difficult. Current AI is geared towards scenarios where the training and testing distributions are similar; however, creativity requires extrapolation – being able to imagine entirely new scenarios. So extrapolation is an essential aspect. Can you go from what you have learned and extrapolate new scenarios? For that we need some form of invariance or understanding of the underlying laws. That’s where physics is front and centre. Humans have intuitive notions of physics from early childhood. We slowly pick them up from physical interactions with the world. That understanding is at the heart of getting AI to be creative.

 JE: It is often said that a child learns more laws of physics than an adult ever will! As a human being, I think that I think. I think that I understand. How can we introduce those things into AI?

Could AI rediscover the Standard Model?

 AA: We need to get AI to create images, and other kinds of data it experiences, and then reason about the likelihood of the samples. Is this data point unlikely versus another one? Similarly to what we see in the brain, we recently built feedback mechanisms into AI systems. When you are watching me, it’s not just a free-flowing system going from the retina into the brain; there’s also a feedback system going from the inferior temporal cortex back into the visual cortex. This kind of feedback is fundamental to us being conscious. Building these kinds of mechanisms into AI is the first step to creating conscious AI.

 JE: A lot of the things that you just mentioned sound like they’re going to be incredibly useful going forward in our systems for analysing data. But how is AI going to devise an experiment that we should do? Or how is AI going to devise a theory that we should test?

 AA: Those are the challenging aspects for an AI. A data-driven method using a standard neural network would perform really poorly. It will only think of the data that it can see and not about data that it hasn’t seen – what we call “zero-shot generalisation”. To me, the past decade’s impressive progress is due to a trinity of data, neural networks and computing infrastructure, mainly powered by GPUs [graphics processing units], coming together: the next step for AI is a wider generalisation to the ability to extrapolate and predict hitherto unseen scenarios.

Across the many tens of orders of magnitude described by modern physics, new laws and behaviours “emerge” non-trivially in complexity (see Emergence). Could intelligence also be an emergent phenomenon?

JE: As a theoretical physicist, my main field of interest is the fundamental building blocks of matter, and the roles that they play very early in the history of the universe. Emergence is the word that we use when we try to capture what happens when you put many of these fundamental constituents together, and they behave in a way that you could often not anticipate if you just looked at the fundamental laws of physics. One of the interesting developments in physics over the past generation is to recognise that there are some universal patterns that emerge. I’m thinking, for example, of phase transitions that look universal, even though the underlying systems are extremely different. So, I wonder, is there something similar in the field of intelligence? For example, the brain structure of the octopus is very different from that of a human, so to what extent does the octopus think in the same way that we do?

 AA: There’s a lot of interest now in studying the octopus. From what I learned, its intelligence is spread out so that it’s not just in its brain but also in its tentacles. Consequently, you have this distributed notion of intelligence that still works very well. It can be extremely camouflaged – imagine being in a wild ocean without a shell to protect yourself. That pressure created the need for intelligence such that it can be extremely aware of its surroundings and able to quickly camouflage itself or manipulate different tools.

 JE: If intelligence is the way that a living thing deals with threats and feeds itself, should we apply the same evolutionary pressure to AI systems? We threaten them and only the fittest will survive. We tell them they have to go and find their own electricity or silicon or something like that – I understand that there are some first steps in this direction, computer programs competing with each other at chess, for example, or robots that have to find wall sockets and plug themselves in. Is this something that one could generalise? And then intelligence could emerge in a way that we hadn’t imagined?

Similarly to what we see in the brain, we recently built feedback mechanisms into AI systems

 AA: That’s an excellent point. Because what you mentioned broadly is competition – different kinds of pressures that drive towards good, robust objectives. An example is generative adversarial models, which can generate very realistic looking images. Here you have a discriminator that challenges the generator to generate images that look real. These kinds of competitions or games are getting a lot of traction and we have now passed the Turing test when it comes to generating human faces – you can no longer tell very easily whether it is generated by AI or if it is a real person. So, I think those kinds of mechanisms that have competition built into the objective they optimise are fundamental to creating more robust and more intelligent systems.

 JE: All this is very impressive – but there are still some elements that I am missing, which seem very important to theoretical physics. Take chess: a very big system but finite nevertheless. In some sense, what I try to do as a theoretical physicist has no boundaries. In some sense, it is infinite. So, is there any hope that AI would eventually be able to deal with problems that have no boundaries?

 AA: That’s the difficulty. These are infinite-dimensional spaces… so how do we decide how to move around there? What distinguishes an expert like you from an average human is that you build your knowledge and develop intuition – you can quickly make judgments and find which narrow part of the space you want to work on compared to all the possibilities. That’s the aspect that is so difficult for AI to figure out. The space is enormous. On the other hand, AI does have a lot more memory, a lot more computational capacity. So can we create a hybrid system, with physicists and machine learning in tandem, to help us harness the capabilities of both AI and humans together? We’re currently exploring theorem provers: can we use the theorems that humans have proven, and then add reinforcement learning on top to create very fast theorem solvers? If we can create such fast theorem provers in pure mathematics, I can see them being very useful for understanding the Standard Model and the gaps and discrepancies in it. It is much harder than chess, for example, but there are exciting programming frameworks and data sets available, with efforts to bring together different branches of mathematics. But I don’t think humans will be out of the loop, at least for now.

The post Stealing theorists’ lunch appeared first on CERN Courier.

Loop Summit convenes in Como https://cerncourier.com/a/loop-summit-convenes-in-como/ Thu, 19 Aug 2021 12:50:11 +0000 https://preview-courier.web.cern.ch/?p=93732 The workshop explored new perturbative results and methods in quantum field theory, collider physics and gravity.

Precision calculations in the Standard Model and beyond are very important for the experimental programme of the LHC, planned high-energy colliders and gravitational-wave detectors of the future. Following two years of pandemic-imposed virtual discussions, 25 invited experts gathered from 26 to 30 July at Cadenabbia on Lake Como, Italy, to present new results and discuss paths into the computational landscape of this year’s “Loop Summit”.

Loop Summit 2021

The conference surveyed topics relating to multi-loop and multi-leg calculations in quantum chromodynamics (QCD) and electroweak processes. In scattering processes, loops are closed particle lines and legs represent external particles. Both present computational challenges. Recent progress on many inclusive processes has been reported at three- or four-loop order, including for deep-inelastic scattering, jets at colliders, the Drell–Yan process, top-quark and Higgs-boson production, and aspects of bottom-quark physics. Much improved descriptions of scaling violations of parton densities, heavy-quark effects at colliders, power corrections, mixed QCD and electroweak corrections, and high-order QED corrections for e+e– colliders have also recently been obtained. These will be important for many processes at the LHC, and pave the way to physics at facilities such as the proposed Future Circular Collider (FCC).

Quantum field theory provides a very elegant way to solve Einsteinian gravity

Weighty considerations

Although merging black holes can have millions of solar masses, the physics describing them remains classical – quantum-gravity effects were relevant, if at all, only shortly after the Big Bang. Nevertheless, quantum field theory provides an elegant way to solve Einsteinian gravity. At this year’s Loop Summit, perturbative approaches to gravity were discussed that use field-theoretic methods at the level of the 5th and 6th post-Newtonian approximations, where the nth post-Newtonian order corresponds to a classical n-loop calculation between black-hole world lines. These calculations allow predictions of the binding energy and periastron advance of spiralling-in pairs of black holes, and relate them to gravitational-wave effects. In these calculations, the classical loops all link to world lines in classical graviton networks within the framework of an effective-field-theory representation of Einsteinian gravity.

Other talks discussed important progress on advanced analytic computation technologies and new mathematical methods such as computational improvements in massive Dirac-algebra, new ways to calculate loop integrals analytically, new ways to deal consistently with polarised processes, the efficient reduction of highly connected systems of integrals, the solution of gigantic systems of differential equations, and numerical methods based on loop-tree duality. All these methods will decrease the theory errors for many processes due to be measured in the high-luminosity phase of the LHC, and beyond.

Half of the meeting was devoted to developing new ideas in subgroups. In-person meetings are invaluable for highly technical discussions such as these — there is still no substitute for gathering around the blackboard informally and jotting down equations and diagrams. The next Loop Summit in this triennial series will take place in summer 2024.

The post Loop Summit convenes in Como appeared first on CERN Courier.

A relational take on quantum mechanics https://cerncourier.com/a/a-relational-take-on-quantum-mechanics/ Wed, 14 Jul 2021 07:31:46 +0000 https://preview-courier.web.cern.ch/?p=92971 Carlo Rovelli’s Helgoland is a well-written and easy-to-follow exploration of quantum mechanics and its interpretation.

Helgoland

It is often said that “nobody understands quantum mechanics” – a phrase usually attributed to Richard Feynman. This statement may, however, be misleading to the uninitiated. There is certainly a high level of understanding of quantum mechanics. The point, moreover, is that there is more than one way to understand the theory, and each of these ways requires us to make some disturbing concessions.

Carlo Rovelli’s Helgoland is therefore a welcome popular book – a well-written and easy-to-follow exploration of quantum mechanics and its interpretation. Rovelli is a theorist working mainly on quantum gravity and foundational aspects of physics. He is also a very successful popular author, distinguished by his erudition and his ability to illuminate the bigger picture. His latest book is no exception.

Helgoland is a barren German island in the North Sea where Heisenberg co-invented quantum mechanics in 1925 while on vacation. The extraordinary sequence of events between 1925 and 1926, when Heisenberg, Jordan, Born, Pauli, Dirac and Schrödinger formulated quantum mechanics, is the topic of the opening chapter of the book. 

Helgoland cover

Rovelli only devotes a short chapter to discuss interpretations in general. This is certainly understandable, since the author’s main target is to discuss his own brainchild: relational quantum mechanics. This approach, however, does not do justice to popular ideas among experts, such as the many-worlds interpretation. The reader may be surprised not to find anything about the Copenhagen (or, more appropriately, Bohr’s) interpretation. This is for very good reason, however, since it is not generally considered to be a coherent interpretation. Having mostly historical significance, it has served as inspiration to approaches that keep the spirit of Bohr’s ideas, like consistent histories (not mentioned in the book at all), or Rovelli’s relational quantum mechanics.

Relational quantum mechanics was introduced by Rovelli in an original technical article in 1996 (Int. J. Theor. Phys. 35 1637). Helgoland presents a simplified version of these ideas, explained in more detail in Rovelli’s article, and in a way suitable for a more general audience. The original article, however, can serve as very nice complementary reading for those with some physics background. Relational quantum mechanics claims to be compatible with several of Bohr’s ideas. In some ways it goes back to the original ideas of Heisenberg by formulating the theory without a reference to a wavefunction. The properties of a system are defined only when the system interacts with another system. There is no distinction between observer and observed system. Rovelli meticulously embeds these ideas in a more general historical and philosophical context, which he presents in a captivating manner. He even speculates whether this way of thinking can help us understand topics that, in his opinion, are unrelated to quantum mechanics, such as consciousness.

Helgoland’s potential audience is very diverse, and the book manages to transcend the fact that it is written for the general public. Professionals from both the sciences and the humanities will certainly learn something, especially if they are not acquainted with the nuances of the interpretations of modern physics. The book, however, as is explicitly stated by Rovelli, takes a partisan stance, aiming to promote relational quantum mechanics. As such, it may give a somewhat skewed view of the topic. In that respect, it would be a good idea to read it alongside books with different perspectives, such as Sean Carroll’s Something Deeply Hidden (2019) and Adam Becker’s What is Real? (2018).

The post A relational take on quantum mechanics appeared first on CERN Courier.

Muon g–2: the promise of a generation https://cerncourier.com/a/muon-g-2-the-promise-of-a-generation/ Thu, 15 Apr 2021 13:41:18 +0000 https://preview-courier.web.cern.ch/?p=92055 The recent Fermilab result offers a moment to reflect on perseverance and collaboration.

CERN g-2 storage ring

It has been almost a century since Dirac formulated his famous equation, and 75 years since the first QED calculations by Schwinger, Tomonaga and Feynman were used to explain the small deviations in hydrogen’s hyperfine structure. These calculations also predicted that the deviation from Dirac’s prediction, a = (g–2)/2 – where g is the gyromagnetic ratio, the magnetic moment in units of e/2me – should be non-zero and thus “anomalous”. The result is famously engraved on Schwinger’s tombstone, standing as a monument to the importance of this result and a marker of things to come.

In January 1957 Garwin and collaborators at Columbia published the first measurements of g for the muon, accurate to 5%, followed two months later by Cassels and collaborators at Liverpool with uncertainties of less than 1%. Leon Lederman is credited with initiating the CERN campaign of g–2 experiments from 1959 to 1979, starting with a borrowed 83 × 52 × 10 cm magnet from Liverpool and ending with a dedicated storage ring and a precision of better than 10 ppm.

Why was CERN so interested in the muon? In a 1981 review, Combley, Farley and Picasso commented that the CERN results for aμ had a higher sensitivity to new physics – via “a modification to the photon propagator or new couplings” – by a factor (mμ/me)². Revealing a deeper interest, they also admitted “… this activity has brought us no nearer to the understanding of the muon mass [200 times that of the electron].”
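
Numerically, that enhancement factor is

\left(\frac{m_\mu}{m_e}\right)^{2} \approx (206.8)^2 \approx 4.3\times10^{4},

which is why the muon anomaly, though measured far less precisely than the electron’s, is the more sensitive probe of heavy virtual particles.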

With the end of the CERN muon programme, focus turned to Brookhaven and the E821 experiment, which took up the challenge of measuring aμ 20 times more precisely, providing sensitivity to virtual particles with masses beyond the reach of the colliders at the time. In 2004 the E821 collaboration delivered on its promise, reporting results accurate to about 0.6 ppm. At the time this showed a 2–3σ discrepancy with respect to the Standard Model (SM) – tantalising, but far from conclusive.

Spectacular progress
The theoretical calculation of g–2 made spectacular progress in step with experiment. Almost eclipsed by the epic 2012 achievement of calculating the QED contributions to five loops from 12,672 Feynman diagrams, huge advances in calculating the hadronic vacuum polarisation contributions to aμ have been made. A reappraisal of the E821 data using this information suggested at least a 3.5σ discrepancy with the SM. It was this that provided the impetus to Lee Roberts and colleagues to build the improved muon g–2 experiments at Fermilab, the first results from which are described in this issue, and at J-PARC. Full results from the Fermilab experiment alone should reduce the aμ uncertainties by at least another factor of three – down to a level that really challenges what we know about the SM.

Muon g–2 is a clear demonstration that theory and experiment must progress hand in hand

Of course, the interpretation of the new results relies on the choice of theory baseline. For example, one could choose, as the Fermilab experiment has, to use the consensus “International Theory Initiative” expectation for aμ. One could also take into account the new results provided by LHCb’s recent RK measurement, which hint that muons might behave differently than electrons. There will inevitably be speculation over the coming months about the right approach. Whatever one’s choice, muon g–2 is a clear demonstration that theory and experiment must progress hand in hand.

Perhaps the most important lesson is the continued cross-fertilisation and impetus to the physics delivered both at CERN and at Fermilab by recent results. The g–2 experiment, an international collaboration between dozens of labs and universities in seven countries, has benefited from students who cut their teeth on LHC experiments. Likewise, students who have worked at the precision frontier at Fermilab are now armed with the expertise of making blinded ppm measurements and are keen to see how they can make new measurements at CERN, for example at the proposed MUonE experiment, or at other muon experiments due to come online this decade.

“It remains to be seen whether or not future refinement of the [SM] will call for the discerning scrutiny of further measurements of even greater precision,” concluded Combley, Farley and Picasso in their 1981 review – a wise comment that is now being addressed.

The post Muon g–2: the promise of a generation appeared first on CERN Courier.

An anomalous moment for the muon https://cerncourier.com/a/an-anomalous-moment-for-the-muon/ Wed, 14 Apr 2021 12:58:58 +0000 https://preview-courier.web.cern.ch/?p=92019 To confidently discover new physics in the muon g−2 anomaly requires that data-driven and lattice-QCD calculations of the Standard-Model value agree, write Thomas Blum, Luchang Jin and Christoph Lehner.

Hadronic light-by-light computation

A fermion’s spin tends to twist to align with a magnetic field – an effect that becomes dramatically macroscopic when electron spins twist together in a ferromagnet. Microscopically, the tiny magnetic moment of a fermion interacts with the external magnetic field through absorption of photons that comprise the field. Quantifying this picture, the Dirac equation predicts fermion magnetic moments to be precisely two in units of Bohr magnetons, e/2m. But virtual lines and loops add an additional 0.1% or so to this value, giving rise to an “anomalous” contribution known as “g–2” to the particle’s magnetic moment, caused by quantum fluctuations. Calculated to tenth order in quantum electrodynamics (QED), and verified experimentally to about two parts in 10¹⁰, the electron’s magnetic moment is one of the most precisely known numbers in the physical sciences. The magnetic moment of the muon, while also measured precisely, is in tension with the Standard Model.

Tricky comparison

The anomalous magnetic moment of the muon was first measured at CERN in 1959, and prior to 2021, was most recently measured by the E821 experiment at Brookhaven National Laboratory (BNL) 16 years ago. The comparison between theory and data is much trickier than for electrons. Being short-lived, muons are less suited to experiments with Penning traps, whereby stable charged particles are confined using static electric and magnetic fields, and the trapped particles are then cooled to allow precise measurements of their properties. Instead, experiments infer how quickly muon spins precess in a storage ring – a situation similar to the wobbling of a spinning top, where information on the muon’s advancing spin is encoded in the direction of the electron that is emitted when it decays. Theoretical calculations are also more challenging, as hadronic contributions are no longer so heavily suppressed when they emerge as virtual particles from the more massive muon.
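
The measurement principle can be sketched with the standard spin-precession formula (textbook form, with the usual approximations): in a storage ring the muon spin advances relative to the momentum at the anomalous frequency

\vec{\omega}_a \simeq -\frac{e}{m_\mu}\Big[a_\mu\vec{B} - \Big(a_\mu - \frac{1}{\gamma^2-1}\Big)\frac{\vec{\beta}\times\vec{E}}{c}\Big],

and operating at the “magic” momentum of about 3.09 GeV/c, for which aμ = 1/(γ²–1), cancels the electric-field term from the focusing quadrupoles, so that ωa, together with a precise map of the magnetic field, yields aμ.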

All told, our knowledge of the anomalous magnetic moment of the muon is currently three orders of magnitude less precise than for electrons. And while everything tallies up, more or less, for the electron, BNL’s longstanding measurement of the magnetic moment of the muon is 3.7σ greater than the Standard Model prediction (see panel “Rising to the moment”). The possibility that the discrepancy could be due to virtual contributions from as-yet-undiscovered particles demands ever more precise theoretical calculations. This need is now more pressing than ever, given the increased precision of the experimental value expected in the next few years from the Muon g–2 collaboration at Fermilab in the US and other experiments such as the Muon g–2/EDM collaboration at J-PARC in Japan. Hotly anticipated results from the first data run at Fermilab’s E989 experiment were released on 7 April. The new result is completely consistent with the BNL value but with a slightly smaller error, leading to a slightly larger discrepancy of 4.2σ with the Standard Model when the measurements are combined (see Fermilab strengthens muon g-2 anomaly).

Hadronic vacuum polarisation

The value of the muon anomaly, aμ, is an important test of the Standard Model because currently it is known very precisely – to roughly 0.5 parts per million (ppm) – in both experiment and theory. QED dominates the value of aμ, but due to the non-perturbative nature of QCD it is strong interactions that contribute most to the error. The theoretical uncertainty on the anomalous magnetic moment of the muon is currently dominated by so-called hadronic vacuum polarisation (HVP) diagrams. In HVP, a virtual photon briefly explodes into a “hadronic blob”, before being reabsorbed, while the magnetic-field photon is simultaneously absorbed by the muon. While of order α² in QED, it involves all orders in QCD, making for very difficult calculations.

Rising to the moment

Artist

In the Standard Model, the magnetic moment of the muon is computed order-by-order in powers of α for QED (each virtual photon represents a factor of α), and to all orders in αs for QCD.

At the lowest order in QED, the Dirac term (pictured left) accounts for precisely two Bohr magnetons and arises purely from the muon (μ) and the real external photon (γ) representing the magnetic field.

 

At higher orders in QED, virtual Standard Model particles, depicted by lines forming loops, contribute to a fractional increase of aμ with respect to that value: the so-called anomalous magnetic moment of the muon. It is defined to be aμ = (g–2)/2, where g is the gyromagnetic ratio of the muon – the number of Bohr magnetons, e/2m, which make up the muon’s magnetic moment. According to the Dirac equation, g = 2, but radiative corrections increase its value.

The biggest contribution is from the Schwinger term (pictured left, O(α)) and higher-order QED diagrams.

 

aμQED = (116 584 718.931 ± 0.104) × 10⁻¹¹

Electroweak lines (pictured left) also make a well-defined contribution. These diagrams are suppressed by the heavy masses of the Higgs, W and Z bosons.

aμEW = (153.6 ± 1.0) × 10⁻¹¹

The biggest QCD contribution is due to hadronic vacuum polarisation (HVP) diagrams. These are computed from leading order (pictured left, O(α²)), with one “hadronic blob” at all orders in αs (shaded) up to next-to-next-to-leading order (NNLO, O(α⁴), with three hadronic blobs) in the HVP.

 

 

Hadronic light-by-light scattering (HLbL, pictured left at O(α³) and all orders in αs (shaded)) makes a smaller contribution but with a larger fractional uncertainty.

 

 

 

Neglecting lattice–QCD calculations for the HVP in favour of those based on e+e– data and phenomenology, the total anomalous magnetic moment is given by

aμSM = aμQED + aμEW + aμHVP + aμHLbL = (116 591 810 ± 43) × 10⁻¹¹.

This is somewhat below the combined value from the E821 experiment at BNL in 2004 and the E989 experiment at Fermilab in 2021.

aμexp = (116 592 061 ± 41) × 10⁻¹¹

The discrepancy has roughly 4.2σ significance:

aμexp – aμSM = (251 ± 59) × 10⁻¹¹.
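
As a quick consistency check on these numbers (a back-of-the-envelope combination, assuming the experimental and theoretical uncertainties are uncorrelated and added in quadrature):

√(43² + 41²) ≈ 59 and 251/59 ≈ 4.2,

which reproduces the quoted uncertainty on the difference and its 4.2σ significance.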

Historically, and into the present, HVP is calculated using a dispersion relation and experimental data for the cross section for e+e– → hadrons. This idea was born of necessity almost 60 years ago, before QCD was even on the scene, let alone calculable. The key realisation is that the imaginary part of the vacuum polarisation is directly related to the hadronic cross section via the optical theorem of wave-scattering theory; a dispersion relation then relates the imaginary part to the real part. The cross section is determined over a relatively wide range of energies, in both exclusive and inclusive channels. The dominant contribution – about three quarters – comes from the e+e– → π+π– channel, which peaks at the rho meson mass, 775 MeV. Though the integral converges rapidly with increasing energy, data are needed over a relatively broad region to obtain the necessary precision. Above the τ mass, QCD perturbation theory hones the calculation.
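
Schematically, and in one common convention (kernel normalisation and threshold conventions differ slightly between references), the leading-order HVP contribution is obtained from the measured R-ratio as

aμHVP, LO = (α²/3π²) ∫ (ds/s) K(s) R(s),  with R(s) = σ(e+e– → hadrons)/σ(e+e– → μ+μ–),

where the integral runs from the hadronic threshold upwards and K(s) is a known QED kernel that behaves roughly as mμ²/(3s) at large s – which is why the low-energy region around the rho peak dominates.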

Several groups have computed the HVP contribution in this way, and recently a consensus value has been produced as part of the worldwide Muon g–2 Theory Initiative. The error stands at about 0.58% and is the dominant part of the theory error. It is worth noting that a significant part of the error arises from a tension between the most precise measurements, by the BaBar and KLOE experiments, around the rho-meson peak. New measurements, including those from experiments at Novosibirsk in Russia and from Japan’s Belle II experiment, may help resolve the inconsistency in the current data and reduce the error by a factor of two or so.

The alternative approach, of calculating the HVP contribution from first principles using lattice QCD, is not yet at the same level of precision, but is getting there. Consistency between the two approaches will be crucial for any claim of new physics.

Lattice QCD

Kenneth Wilson formulated lattice gauge theory in 1974 as a means to rid quantum field theories of their notorious infinities – a process known as regulating the theory – while maintaining exact gauge invariance, but without using perturbation theory. Lattice-QCD calculations involve evaluating the extremely high-dimensional path integrals of QCD. Because of confinement, a perturbative treatment including physical hadronic states is not possible, so the complete integral, regulated properly in a discrete, finite volume, is done numerically by Monte Carlo integration.
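
To give a flavour of what Monte Carlo integration of a Euclidean path integral means in practice, here is a minimal toy sketch in Python – a one-dimensional harmonic oscillator on a periodic time lattice rather than QCD, with purely illustrative parameter values; production lattice-QCD codes apply the same Metropolis logic to gauge fields and quark determinants in billions of dimensions.

import numpy as np

# Toy Euclidean path integral: a 1D harmonic oscillator (m = omega = 1) on a periodic time lattice.
rng = np.random.default_rng(1)
N, a = 64, 0.5                 # illustrative number of time slices and lattice spacing
x = np.zeros(N)                # the "field" configuration, one value per lattice site

def action(x):
    # Discretised Euclidean action: S = sum_t [ (x_{t+1} - x_t)^2 / (2a) + a * x_t^2 / 2 ]
    return np.sum((np.roll(x, -1) - x) ** 2) / (2 * a) + a * np.sum(x ** 2) / 2

def sweep(x, step=1.0):
    # One Metropolis sweep: propose a random shift at each site, accept with probability min(1, exp(-dS))
    for t in range(N):
        old, s_old = x[t], action(x)
        x[t] = old + rng.uniform(-step, step)
        if rng.random() >= np.exp(min(0.0, s_old - action(x))):
            x[t] = old     # reject the proposal and keep the old value
    return x

for _ in range(500):           # thermalisation sweeps
    x = sweep(x)
obs = []
for _ in range(2000):          # measurement sweeps: record <x^2> (continuum value is 1/(2 m omega) = 0.5)
    x = sweep(x)
    obs.append(np.mean(x ** 2))
print(f"<x^2> = {np.mean(obs):.3f} +/- {np.std(obs) / np.sqrt(len(obs)):.3f} (naive error)")

Repeating such a simulation at several lattice spacings and volumes, and then extrapolating, is the toy analogue of the infinite-volume and zero-lattice-spacing limits described in the next paragraph.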

Lattice QCD has made significant improvements over the last several years, both in methodology and invested computing time. Recently developed methods (which rely on low-lying eigenmodes of the Dirac operator to speed up calculations) have been especially important for muon–anomaly calculations. By allowing state-of-the-art calculations using physical masses, they remove a significant systematic: the so-called chiral extrapolation for the light quarks. The remaining systematic errors arise from the finite volume and non-zero lattice spacing employed in the simulations. These are handled by doing multiple simulations and extrapolating to the infinite-volume and zero-lattice-spacing limits. 

The HVP contribution can readily be computed using lattice QCD in Euclidean space with space-like four-momenta in the photon loop, thus yielding the real part of the HVP directly. The dispersive result is currently more precise (see “Off the mark” figure), but further improvements will depend on consistent new e+e– scattering datasets.
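
In the widely used time-momentum representation (written here schematically; normalisation conventions differ between lattice groups), the lattice input is the spatially summed Euclidean vector–vector correlator G(t), folded with a known QED weight:

aμHVP, LO = (α/π)² ∫ dt K̃(t) G(t),

where the weight K̃(t) grows with Euclidean time, so the long-distance tail of G(t) – the region most affected by statistical noise and finite-volume effects – dominates the uncertainty.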

Hadronic vacuum-polarisation contribution

Rapid progress in the last few years has resulted in first lattice results with sub-percent uncertainty, closing in on the precision of the dispersive approach. Since these lattice calculations are very involved and still maturing, it will be crucial to monitor the emerging picture once several precise results with different systematic approaches are available. It will be particularly important to aim for statistics-dominated errors to make it more straightforward to quantitatively interpret the resulting agreement with the no-new-physics scenario or the dispersive results. In the shorter term, it will also be crucial to cross-check between different lattice and dispersive results using additional observables, for example based on the vector–vector correlators.

With improved lattice calculations in the pipeline from a number of groups, the tension between lattice QCD and phenomenological calculations may well be resolved before the Fermilab and J-PARC experiments announce their final results. Interestingly, there is a new lattice result with sub-percent precision (BMW 2020) that is in agreement both with the no-new-physics point within 1.3σ, and with the dispersive-data-driven result within 2.1σ. Barring a significant re-evaluation of the phenomenological calculation, however, HVP does not appear to be the source of the discrepancy with experiments. 

The next most likely Standard Model process to explain the muon anomaly is hadronic light-by-light scattering. Though it occurs less frequently since it includes an extra virtual photon compared to the HVP contribution, it is much less well known, with comparable uncertainties to HVP.

Hadronic light-by-light scattering

In hadronic light-by-light scattering (HLbL), the magnetic field interacts not with the muon, but with a hadronic “blob”, which is connected to the muon by three virtual photons. (The interaction of the four photons via the hadronic blob gives HLbL its name.) A miscalculation of the HLbL contribution has often been proposed as the source of the apparently anomalous measurement of the muon anomaly by BNL’s E821 collaboration.

Since the so-called Glasgow consensus (the fruit of a 2009 workshop) first established a value more than 10 years ago, significant progress has been made on the analytic computation of the HLbL scattering contribution. In particular, a dispersive analysis of the most important hadronic channels has been carried out, including the leading pion–pole, sub-leading pion loop and rescattering diagrams including heavier pseudoscalars. These calculations are analogous in spirit to the dispersive HVP calculations, but are more complicated, and the experimental measurements are more difficult because form factors with one or two virtual photons are required. 

The project to calculate the HLbL contribution using lattice QCD began more than 10 years ago, and many improvements to the method have been made to reduce both statistical and systematic errors since then. Last year we published, with colleagues Norman Christ, Taku Izubuchi and Masashi Hayakawa, the first ever lattice–QCD calculation of the HLbL contribution with all errors controlled, finding aμHLbL, lattice = (78.7 ± 30.6 (stat) ± 17.7 (sys)) × 10⁻¹¹. The calculation was not easy: it took four years and a billion core-hours on the Mira supercomputer at Argonne National Laboratory’s Leadership Computing Facility.

Our lattice HLbL calculations are quite consistent with the analytic and data-driven result, which is approximately a factor of two more precise. Combining the results leads to aμHLbL = (90 ± 17) × 10⁻¹¹, which means the very difficult HLbL contribution cannot explain the Standard Model discrepancy with experiment. To make such a strong conclusion, however, it is necessary to have consistent results from at least two completely different methods of calculating this challenging non-perturbative quantity.

New physics?

If current theory calculations of the muon anomaly hold up, and the new experiments reduce its uncertainty by the hoped-for factor of four, then a new-physics explanation will become impossible to ignore. The idea would be to add particles and interactions that have not yet been observed but may soon be discovered at the LHC or in future experiments. New particles would be expected to contribute to the anomaly through Feynman diagrams similar to the Standard Model topologies (see “Rising to the moment” panel).

Calculations of the anomalous magnetic moment of the muon are not finished

The most commonly considered new-physics explanation is supersymmetry, but the ever more stringent lower limits placed on the masses of super-partners by the LHC experiments make it increasingly difficult to explain the muon anomaly. Other theories could do the job too. One popular idea that could also explain persistent anomalies in the b-quark sector is heavy scalar leptoquarks, which mediate a new interaction allowing leptons and quarks to change into each other. Another option involves scenarios whereby the Standard Model Higgs boson is accompanied by a heavier Higgs-like boson.

The calculations of the anomalous magnetic moment of the muon are not finished. As a systematically improvable method, we expect more precise lattice determinations of the hadronic contributions in the near future. Increasingly powerful algorithms and hardware resources will further improve precision on the lattice side, and new experimental measurements and analysis methods will do the same for dispersive studies of the HVP and HLbL contributions.

To confidently discover new physics requires that these two independent approaches to the Standard Model value agree. With the first new results on the experimental value of the muon anomaly in almost two decades showing perfect agreement with the old value, we anxiously await more precise measurements in the near future. Our hope is that the clash of theory and experiment will be the beginning of an exciting new chapter of particle physics, heralding new discoveries at current and future particle colliders. 

The post An anomalous moment for the muon appeared first on CERN Courier.

]]>
Feature To confidently discover new physics in the muon g−2 anomaly requires that data-driven and lattice-QCD calculations of the Standard-Model value agree, write Thomas Blum, Luchang Jin and Christoph Lehner. https://cerncourier.com/wp-content/uploads/2021/04/Muon-g-2_feature.jpg
In pursuit of the possible https://cerncourier.com/a/in-pursuit-of-the-possible/ Mon, 16 Nov 2020 10:04:23 +0000 https://preview-courier.web.cern.ch/?p=89994 Theorist Giulia Zanderighi discusses fundamental physics at the boundary between theory and experiment.

The post In pursuit of the possible appeared first on CERN Courier.

]]>
Giulia Zanderighi

What do collider phenomenologists do?

I tend to prefer the term particle phenomenology because the collider is just the tool that we use. However, compared to other experiments, such as those searching for dark matter or axions, colliders provide a controlled laboratory where you decide how many collisions and what energy these collisions should have. This is quite unique. Today, accelerators and detectors have reached an immense level of sophistication, and this allows us to perform a vast amount of fundamental measurements. So, the field spans precision measurements of fundamental properties of particles, in particular of the Higgs boson, consistency tests of the Standard Model (SM), direct and indirect searches for new physics, measurements of rare decays, and much more. For essentially all these topics we have had new results in recent years, so it’s a very active and continuously evolving field. But of course we do not just measure things for the sake of it. We have big, fundamental questions and we are looking for hints from LHC data as to how to address them.

What’s hot in the field today?

One topic that I think is very cool is that we can benefit from the LHC, in its current setup, also as lepton collider. In fact, at the LHC we are looking at elementary collisions between the proton’s constituents, quarks and gluons. But since the proton is charged, it also emits photons, and one can talk about the photon parton-distribution function (PDF), i.e. the photonic content of protons. These photons can split into lepton pairs, so when one collides protons one is also colliding leptons. The fascinating thing is that the “content” of leptons in protons is rather democratic, so one can look at collisions between, say, a muon and a tau lepton – something that can’t be done even at future proposed lepton colliders. Furthermore, by picking up a lepton from one proton and a quark from the other proton, one can place new constraints on leptoquarks, and plenty of other things. This idea was already proposed in the 1990s, but was essentially forgotten because the lepton PDF was not known. Now we know this very precisely, bringing new possibilities. But let me stress that this is just one idea – there are many other new ideas out there. For instance, one major branch of phenomenology is to use machine learning or deep learning to recognise the SM and extract from data what is not SM-like.

I’m the first female director, which of course is a great responsibility

How does the Max Planck Institute differ from your previous positions, for example at CERN and Oxford?

A long time ago, somebody told me that the best thing that can happen to you in Germany is the Max Planck Society. It’s true. You are given independence and the means to fully focus on research and ideas, largely free of teaching duties or the need to apply for grants. Also, there are very valuable interactions with universities, be it in research or via the International Max Planck Research Schools for PhD students. Our institute in Munich is a very unique place. One can feel it immediately. As a guest in the theory department, for example, you get to sit in the Heisenberg office, which feels like going back in time. Our institute was founded in Berlin in 1917 with Albert Einstein as a first director. In 1958 the institute moved to Munich with Werner Heisenberg as director. After more than 100 years I’m the first female director, which of course is a great responsibility. But I also really loved both CERN and Oxford. At CERN I felt like I was at the centre of the world. It is such a vibrant environment, and I loved the proximity to the experiments and the chats in the cafeteria about calculations or measurements. In Oxford I loved the multidisciplinary aspect, the dinners in college sitting next to other academics working in completely different fields. I guess I’m lucky that I’ve been in so many and such different places.

What is the biggest challenge to reach higher precision in quantum-field-theory calculations of key SM processes?

Scattering processes

The biggest challenge is that often there is no single biggest challenge. For instance, for inclusive Higgs-boson production we have a number of theoretical uncertainties, but they are all quite comparable in size. This means that to reduce the overall uncertainty considerably, one needs to reduce all uncertainties, and they all have very different physics origins and difficulties – from a better understanding of the incoming parton densities and a better knowledge of the strong coupling constant, to higher order QCD or electroweak effects and effects related to heavy particles in virtual loops, etc. Computing power can be a limiting factor for certain calculations, so making things numerically more efficient is also important. One of the main goals of the coming year will be the calculation of 2 → 3 scattering processes at the LHC at next-to-next-to-leading order (NNLO) in QCD. For instance, a milestone will be the calculation of top-pair production in association with a Higgs boson at that level of accuracy. This is the process where we can measure most directly the top-Yukawa coupling. The importance of this measurement can’t be overstressed. While the big discovery at the LHC is so far the Higgs boson, one should also remember that the Yukawa interaction is a new type of fundamental interaction, which is proportional to the mass of the particle, just like gravity, and yet so different from gravity. For some calculations, NNLO is already enough in terms of perturbative precision; going to N3LO doesn’t really add much yet. But there are a few cases where it helps already, such as super-clean Drell–Yan processes.

Is there a level of precision of LHC measurements beyond which indirect searches for new physics are no longer fruitful?

We will never rule out precision measurements as a route to search for new physics. We can always extend the reach and enhance the sensitivity of indirect searches. By increasing precision, we are exploring deeper in the ultraviolet region, where we can start to become sensitive to states exchanged in the loops that are more and more heavy. There is a limit, but we are very far from it. It’s like looking with a better and better microscope: the better the resolution, the more one can explore. However, the experimental precision has to go hand in hand with theoretical precision, and this is where the real challenge for phenomenologists lies. Of course, if you have a super precise measurement but no theory prediction, or vice versa, then you can’t do much with it. With the Higgs boson I am confident that the theory calculations will not be the deal breaker. We will eventually hit the wall in terms of experimental precision, but you can’t put a figure on where this will happen. Until you see a deviation you are never really done.

How would you characterise the state of particle physics today?

When I entered the field as a student, there were high expectations that supersymmetry would be discovered quickly at the LHC. Now things have turned out to be different, but this is what makes it exciting and challenging – even more so, because the same mysteries are still there. We have big, fundamental questions. We have hints from theory, from experiments. We have a powerful, multi-purpose machine – the LHC – that is only just getting started and will provide much more data. Of course, expectations like the quick discovery of supersymmetry have not been fulfilled, but nature is how it is. I think that progress in physics is driven by experiments. We have beautiful exceptions where progress comes from theory, like general relativity, or the postulation of the mechanism for electroweak symmetry breaking. When I think about the Higgs mechanism, I am still astonished that such a simple and powerful idea postulated in 1964 turns out to be realised in nature. But these cases, where theory precedes experiments, are the exception not the rule. In most cases progress in physics comes from observations. After all, it is a natural science, it is not mathematics.

There are some questions that are really tough, and we may never really see an answer to. But with the LHC there are many other smaller questions we certainly can address, such as understanding the proton structure or studying the interaction potential between nucleons and strange baryons, which are relevant to understand the physics of neutron stars, and these are still advancing knowledge. The brightest minds are attracted to the biggest problems, and this will always draw young researchers into the field.

Is naturalness a good guiding force in fundamental research?

Yes. We have plenty of examples where naturalness, in the sense of a quadratic sensitivity to an unknown ultraviolet scale, leads to postulating a new particle: the energy of the electron field (leading to the positron), the charged and neutral pion mass difference (leading to the rho-meson) or the kaon transition rates and mixing, which led to the postulation of the existence of the charm quark in 1970, before its direct discovery in 1974 at SLAC and Brookhaven. In everyday life we constantly assume naturalness, so yes, it is puzzling that the Higgs mass appears to be fine-tuned. Certainly, there is a lot we still don’t understand here, but I would not give up on naturalness, at least not that easily. In the case of the electroweak naturalness problem, it is clear that any solution, such as supersymmetry or compositeness, will also leave an imprint in the Higgs couplings. So the LHC can, in principle, tell us about naturalness even if we do not discover new physics directly; we just have to measure very precisely if the Higgs boson couplings align on a straight line in the mass-versus-coupling plane.

The presence of dark matter is overwhelming in the universe and it is embarrassing that we know little to nothing about its nature

Which collider should follow the LHC?

That is the billion-dollar question – I mean, the 25 billion-dollar question! To me, one should go for the machine that explores the new energy frontier as far as possible, namely a 100 TeV hadron collider. It is a compromise between what we might be able to achieve from a machine-building/accelerator/engineering point of view and really exploring a new frontier. For instance, at a 100 TeV machine one can measure the Higgs self-coupling, which is intimately connected with the Higgs potential and with the profound question of the stability of the vacuum.

Which open question would you most like to see answered during your career?

Probably the nature of dark matter. The presence of dark matter is overwhelming in the universe and it is embarrassing that we know little to nothing about its nature and properties. There are many exciting possibilities, ranging from the lightest neutral states in new-physics models to a non-particle-like interpretation, like black holes. Either way, an answer to this question would be an incredible breakthrough.

The post In pursuit of the possible appeared first on CERN Courier.

]]>
Opinion Theorist Giulia Zanderighi discusses fundamental physics at the boundary between theory and experiment. https://cerncourier.com/wp-content/uploads/2020/11/CCNovDec20_INT_Zanderighi.jpg
Strong interest in feeble interactions https://cerncourier.com/a/strong-interest-in-feeble-interactions/ Thu, 12 Nov 2020 10:12:05 +0000 https://preview-courier.web.cern.ch/?p=89959 The FIPs 2020 workshop was structured around portals that may link the Standard Model to a rich dark sector: axions, dark photons, dark scalars and heavy neutral leptons.

The post Strong interest in feeble interactions appeared first on CERN Courier.

]]>
Searches for axion-like particles

Since the discovery of the Higgs boson in 2012, great progress has been made in our understanding of the Standard Model (SM) and the prospects for the discovery of new physics beyond it. Despite excellent advances in Higgs-sector measurements, searches for WIMP dark matter and exploration of very rare processes in the flavour realm, however, no unambiguous signals of new fundamental physics have been seen. This is the reason behind the explosion of interest in feebly interacting particles (FIPs) over the past decade or so.

The inaugural FIPs 2020 workshop, hosted online by CERN from 31 August to 4 September, convened almost 200 physicists from around the world. Structured around the four “portals” that may link SM particles and fields to a rich dark sector – axions, dark photons, dark scalars and heavy neutral leptons – the workshop highlighted the synergies and complementarities among a great variety of experimental facilities, and called for close collaboration across different physics communities.

Today, conventional experimental efforts are driven by arguments based on the naturalness of the electroweak scale. They result in searches for new particles with sizeable couplings to the SM, and masses near the electroweak scale. FIPs represent an alternative paradigm to the traditional beyond-the-SM physics explored at the LHC. With masses below the electroweak scale, FIPs could belong to a rich dark sector and answer many open questions in particle physics (see “Four portals” figure). Diverse searches using proton beams (CERN and Fermilab), kaon beams (CERN and J-PARC), neutrino beams (J-PARC and Fermilab) and muon beams (PSI) today join more idiosyncratic experiments across the globe in a worldwide search for FIPs.

FIPs can arise from the presence of feeble couplings in the interactions of new physics with SM particles and fields. These may be due to a dimensionless coupling constant or to a “dimensionful” scale, larger than that of the process being studied, which is defined by a higher dimension operator that mediates the interaction. The smallness of these couplings can be due to the presence of an approximate symmetry that is only slightly broken, or to the presence of a large mass hierarchy between particles, as the absence of new-physics signals from direct and indirect searches seems to suggest.

A selection of open questions

Take the axion, for example. This is the particle that may be responsible for the conservation of charge–parity symmetry in strong interactions. It may also constitute a fraction or the entirety of dark matter, or explain the hierarchical masses and mixings of the SM fermions – the flavour puzzle.

Or take dark photons or dark Z′ bosons, both examples of new vector gauge bosons. Such particles are associated with extensions of the SM gauge group, and, in addition to indicating new forces beyond the four we know, could lead to evidence of dark-matter candidates with thermal origins and masses in the MeV to GeV range.

Exotic Higgs bosons could also have been responsible for cosmological inflation

Then there are exotic Higgs bosons. Light dark scalar or pseudoscalar particles related to the SM Higgs may provide novel ways of addressing the hierarchy problem, in which the Higgs mass can be stabilised dynamically via the time evolution of a so-called “relaxion” field. They could also have been responsible for cosmological inflation.

Finally, consider right-handed neutrinos, often referred to as sterile neutrinos or heavy neutral leptons, which could account for the origin of the tiny, nearly-degenerate masses of the neutrinos of the SM and their oscillations, as well as providing a mechanism for our universe’s matter–antimatter asymmetry.

Scientific diversity

No single experimental approach can cover the large parameter space of masses and couplings that FIPs models allow. The interconnections between open questions require that we construct a diverse research programme incorporating accelerator physics, dark-matter direct detection, cosmology, astrophysics, and precision atomic experiments, with a strong theoretical involvement. The breadth of searches for axions or axion-like particles (ALPs) is a good indication of the growing interest in FIPs (see “Scaling the ALPs” figure). Experimental efforts here span particle and astroparticle physics. In the coming years, helioscopes, which aim to detect solar axions by their conversion into photons (X-rays) in a strong magnetic field, will improve the sensitivity by more than 10 orders of magnitude in mass in the sub-eV range. Haloscopes, which work by converting axions into visible photons inside a resonant microwave cavity placed inside a strong magnetic field, will complement this quest by increasing the sensitivity for small couplings by six orders of magnitude (down to the theoretically motivated gold band in a mass region where the axions can be a dark-matter candidate). Accelerator-based experiments, meanwhile, can probe the strongly motivated QCD scale (MeV–GeV) and beyond for larger couplings. All these results will be complemented by a lively theoretical activity aimed at interpreting astrophysical signals within axion and ALP models.

FIPs 2020 triggered lively discussions that will continue in the coming months via topical meetings on different subjects. Topics that motivated particular interest between communities included possible ways of comparing results from direct-detection dark-matter experiments in the MeV–GeV range against those obtained at extracted beam line and collider experiments; the connection between right-handed neutrino properties and active neutrino parameters; and the interpretation of astrophysical and cosmological bounds, which often overwhelm the interpretation of each of the four portals.

The next FIPs workshop will take place at CERN next year.

The post Strong interest in feeble interactions appeared first on CERN Courier.

]]>
Meeting report The FIPs 2020 workshop was structured around portals that may link the Standard Model to a rich dark sector: axions, dark photons, dark scalars and heavy neutral leptons. https://cerncourier.com/wp-content/uploads/2020/11/FIPS2020.jpg
The search for leptonic CP violation https://cerncourier.com/a/the-search-for-leptonic-cp-violation/ Tue, 07 Jul 2020 11:51:56 +0000 https://preview-courier.web.cern.ch/?p=87691 Boris Kayser explains how neutrino physicists are now closing in on a crucial piece of evidence on the origin of the matter–antimatter asymmetry observed in the Universe.

The post The search for leptonic CP violation appeared first on CERN Courier.

]]>
An electron anti-neutrino

Luckily for us, there is presently almost no antimatter in the universe. This makes it possible for us – made of matter – to live without being annihilated in matter–antimatter encounters. However, cosmology tells us that just after the cosmic Big Bang, the universe contained equal amounts of matter and antimatter. Obviously, for the universe to have evolved from that early state to the present one, which contains quite unequal amounts of matter and antimatter, the two must behave differently. This implies that the symmetry CP (charge conjugation × parity) must be violated. That is, there must be physical systems whose behaviour changes if we replace every particle by its antiparticle, and interchange left and right.

In 1964, Cronin, Fitch and colleagues discovered that CP is indeed violated, in the decays of neutral kaons to pions – a phenomenon that later became understood in terms of the behaviour of quarks. By now, we have observed quark CP violation in the strange sector, the beauty sector and most recently in the charm sector (CERN Courier May/June 2019 p7). The observations of CP violation in B (beauty) meson decays have been particularly illuminating. Everything we know about quark CP violation is consistent with the hypothesis that this violation arises from a single complex phase in the quark mixing matrix. This matrix gives the amplitude for any particular negatively-charged quark, whether down, strange or bottom, to convert via a weak interaction into any particular positively-charged quark, be it up, charm or top. Just two parameters in the quark mixing matrix, ρ and η, whose relative size determines the complex phase, account very successfully for numerous quark phenomena, including both CP-violating ones and others. This is impressively demonstrated by a plot of all the experimental constraints on these two parameters (figure 1). All the constraints intersect at a common point.
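
In the standard Wolfenstein parametrisation (quoted here to leading order in the expansion parameter λ ≈ 0.22; a textbook form rather than one reproduced from the article), the quark mixing matrix reads

VCKM ≈ ( 1 – λ²/2, λ, Aλ³(ρ – iη); –λ, 1 – λ²/2, Aλ²; Aλ³(1 – ρ – iη), –Aλ², 1 ),

with rows labelled (u, c, t) and columns (d, s, b), so all CP violation among quarks enters through the single imaginary part proportional to η.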

Of course, precisely which (ρ, η) point is consistent with all the data is not important. Lincoln Wolfenstein, who created the quark-mixing-matrix parametrisation that includes ρ and η, was known to say: “Look, I invented ρ and η, and I don’t care what their values are, so why should you?”

Figure 1

Having observed CP violation among quarks in numerous laboratory experiments of today, we might be tempted to think that we understand how CP violation in the early universe could have changed the world from one with equal quantities of matter and antimatter to one in which matter dominates very heavily over antimatter. However, scenarios that tie early-universe CP violation to that seen among the quarks today, and do not add new physics to the Standard Model of the elementary particles, yield too small a present-day matter–antimatter asymmetry. This leads one to wonder whether early-universe CP violation involving leptons, rather than quarks, might have led to the present dominance of matter over antimatter. This possibility is envisaged by leptogenesis, a scenario in which heavy neutral leptons that were their own antiparticles lived briefly in the early universe, but then underwent CP-asymmetric decays, creating a world with unequal numbers of particles and antiparticles. Such heavy neutral leptons are predicted by “see-saw” models, which explain the extreme lightness of the known neutrinos in terms of the extreme heaviness of the postulated heavy neutral leptons. Leptogenesis can successfully account for the observed size of the present matter–antimatter asymmetry.
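
The see-saw logic can be summarised in a single schematic relation (type-I see-saw, with purely illustrative numbers; the article itself quotes no formula):

mν ≈ mD²/MN,

where mD is a Dirac mass of roughly electroweak size and MN is the heavy neutral-lepton mass. For mD ~ 100 GeV and MN ~ 10¹⁴ GeV this gives mν ~ 0.1 eV: heavy partners far beyond collider reach, and light neutrinos in the observed sub-eV range.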

Deniable plausibility

In the straightforward version of this picture, the heavy neutral leptons are too massive to be observable at the LHC or any foreseen collider. However, since leptogenesis requires leptonic CP violation, observing this violation in the behaviour of the currently observed leptons would make it more plausible that leptogenesis was indeed the mechanism through which the present matter–antimatter asymmetry of the universe arose. Needless to say, observing leptonic CP violation would also reveal that the breaking of CP symmetry, which before 1964 one might have imagined to be an unbroken, fundamental symmetry of nature, is not something special to the quarks, but is participated in by all the constituents of matter.

Figure 2

To find out if leptons violate CP, we are searching for what is traditionally described as a difference between the behaviour of neutrinos and that of antineutrinos. This description is fine if neutrinos are Dirac particles – that is, particles that are distinct from their antiparticles. However, many theorists strongly suspect that neutrinos are actually Majorana particles – that is, particles that are identical to their antiparticles. In that case, the traditional description of the search for leptonic CP violation is clearly inapplicable, since then the neutrinos and the antineutrinos are the same objects. However, the actual experimental approach that is being pursued is a perfectly valid probe of leptonic CP violation regardless of whether neutrinos are of Dirac or of Majorana character. In fact, this approach is completely insensitive to which of these two possibilities nature has chosen.

Through a glass darkly

The pursuit of leptonic CP violation is based on comparing the rates for two CP mirror-image processes (figure 2). In process A, the initial state is a π+ and an undisturbed detector. The final state consists of a μ+, an e–, and a nucleus in the detector that has been struck by an intermediate-state neutrino beam particle that travelled a long distance from its source to the detector. Since the neutrino was born together with a muon, but produced an electron in the detector, and the probability for this to have happened oscillates as a function of the distance the neutrino travels divided by its energy, the process is commonly referred to as muon–neutrino to electron–neutrino oscillation.
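
In the simplified two-flavour limit (quoted for orientation; the full three-flavour expressions are more involved), this oscillation probability takes the familiar form

P(νμ → νe) ≈ sin²(2θ) sin²(1.27 Δm² [eV²] L [km] / E [GeV]),

which is why each experiment chooses its baseline L and beam energy E so that the detector sits near an oscillation maximum.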

Leptogenesis can account for the matter–antimatter asymmetry

In process B, the initial and final states are the same as in process A, but with every particle replaced by its antiparticle. In addition, owing to the character of the weak interactions, the helicity (the projection of the spin along the momentum) of every fermion is reversed, so that left and right are interchanged. Thus, regardless of whether neutrinos are identical to their antiparticles, processes A and B are CP mirror images, so if their rates are unequal, CP invariance is violated. Moreover, since the probability of a neutrino oscillation involves the weak interactions of leptons, but not those of quarks, this violation of CP invariance must come from the weak interactions of leptons.

Of course, we cannot employ an anti-detector in process B in practice. However, the experiment can legitimately use the same detector in both processes. To do that, it must take into account the difference between the cross sections for the beam particles in processes A and B to interact in this detector. Once that is done, the comparison of the rates for processes A and B remains a valid probe of CP non-invariance.

The matrix reloaded

Just as quark CP violation arises from a complex phase in the quark mixing matrix, so leptonic CP violation in neutrino oscillation can arise from a complex phase, δCP, in the leptonic mixing matrix, which is the leptonic analogue of the quark mixing matrix. However, if, as suggested by several short-baseline oscillation experiments, there exist not only the three well-established neutrinos, but also additional so-called “sterile” neutrinos that do not participate in Standard Model weak interactions, then the leptonic mixing matrix is larger than the quark one. As a result, while the quark mixing matrix is permitted to contain just one complex phase, its leptonic analogue may contain multiple complex phases that can contribute to CP violation in neutrino oscillations.
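
For vacuum oscillations in the standard three-flavour framework (written schematically; sign conventions and matter effects are omitted), the size of the sought-after asymmetry is controlled by the leptonic Jarlskog invariant:

P(νμ → νe) – P(ν̄μ → ν̄e) ∝ J = s12 c12 s23 c23 s13 c13² sin δCP,

where sij and cij denote the sines and cosines of the three mixing angles. The asymmetry therefore vanishes for δCP = 0 or π and is largest for δCP near ±π/2.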

Stack of scintillating cells

Leptonic CP violation is being sought by two current neutrino-oscillation experiments. The NOvA experiment in the US has reported results that are consistent with either the presence or absence of CP violation. The T2K experiment in Japan reports that the complete absence of CP violation is excluded at 95% confidence. Assuming that the leptonic mixing matrix is the same size as the quark one, so that it may contain only one complex phase relevant to neutrino oscillations, the T2K data show a preference for values of that phase, δCP, that correspond to near maximal CP violation. Of course, as Lincoln Wolfenstein would doubtless point out, the precise value of δCP is not important. What counts is the extremely interesting experimental finding that the behaviour of leptons may very well violate CP. In the future, the oscillation experiments Hyper-Kamiokande in Japan and DUNE in the US will probe leptonic CP violation with greater sensitivity, and should be capable of observing it even if it should prove to be fairly small (see Tuning in to neutrinos).

By searching for leptonic CP violation, we hope to find out whether the breaking of CP symmetry occurs among all the constituents of matter, including both the leptons and the quarks, or whether it is a feature that is special to the quarks. If leptonic CP violation should be definitively shown to exist, this violation might be related to the reason that the universe contains matter, but almost no antimatter, so that life is possible.

The post The search for leptonic CP violation appeared first on CERN Courier.

]]>
Feature Boris Kayser explains how neutrino physicists are now closing in on a crucial piece of evidence on the origin of the matter–antimatter asymmetry observed in the Universe. https://cerncourier.com/wp-content/uploads/2020/07/rhc_anti-elike_candidate03_hr10k-191.jpg
New Perspectives on Einstein’s E = mc² https://cerncourier.com/a/new-perspectives-on-einsteins-e%e2%80%89%e2%80%89mc2/ Tue, 07 Jul 2020 09:12:51 +0000 https://preview-courier.web.cern.ch/?p=87753 Young Suh Kim and Marilyn Noz’s book may struggle to find its audience, says Nikolaos Rompotis.

The post New Perspectives on Einstein’s E = mc² appeared first on CERN Courier.

]]>

New Perspectives on Einstein’s E = mc² mixes historical notes with theoretical aspects of the Lorentz group that impact relativity and quantum mechanics. The title is a little perplexing, however, as one can hardly expect nowadays to discover new perspectives on an equation such as E = mc². The book’s true aim is to convey to a broader audience the formal work done by the authors on group theory. Therefore, a better-suited title may have been “Group theoretical perspectives on relativity”, or even, more poetically, “When Wigner met Einstein”.

The first third of the book is an essay on Einstein’s life, with historical notes on topics discussed in the subsequent chapters, which are more mathematical and draw heavily on publications by the authors – a well-established writing team who have co-authored many papers relating to group theory. The initial part is easy to read and includes entertaining stories, such as Einstein’s mistakes when filing his US tax declaration. Einstein, according to this story, was calculating his taxes erroneously, but the US taxpayer agency was kind enough not to raise the issue. The reader has to be warned, however, that the authors, professors at the University of Maryland and New York University, have a tendency to make questionable statements about certain aspects of the development of physics that may not be backed up by the relevant literature, and may even contradict known facts. They have a repeated tendency to interpret the development of physical theories in terms of a Hegelian synthesis of a thesis and an antithesis, without any cited sources in support, which seems, in most cases, to be a somewhat arbitrary a posteriori assessment.

There is a sharp distinction in the style of the second part of the book, which requires training in physics or maths at advanced undergraduate level. These chapters begin with a discussion of the Lorentz group. The interest then quickly shifts to Wigner’s “little groups”, which are subgroups of the Lorentz group with the property of leaving the momentum of a system invariant. Armed with this mathematical machinery, the authors proceed to Dirac spinors and give a Lorentz-invariant formulation of the harmonic oscillator that is eventually applied to the parton model. The last chapter is devoted to a short discussion on optical applications of the concepts advanced previously. Unfortunately, the book finishes abruptly at this point, without a much-needed final chapter to summarise the material and discuss future work, which, the previous chapters imply, should be plentiful.

Young Suh Kim and Marilyn Noz’s book may struggle to find its audience. The contrast between the lay and expert parts of this short book, and the very specialised topics it explores, do not make it suitable for a university course, though sections could be incorporated as additional material. It may well serve, however, as an interesting pastime for mathematically inclined audiences who will certainly appreciate the formalism and clarity of the presentation of the mathematics.

The post New Perspectives on Einstein’s E = mc² appeared first on CERN Courier.

]]>
Review Young Suh Kim and Marilyn Noz’s book may struggle to find its audience, says Nikolaos Rompotis. https://cerncourier.com/wp-content/uploads/2020/07/CCJulAug20_Rev_einstein_feature.jpg
Fiction, in theory https://cerncourier.com/a/fiction-in-theory/ Tue, 31 Mar 2020 18:22:51 +0000 https://preview-courier.web.cern.ch/?p=87038 French actor Irène Jacob's novel is an intimate portrait of life as the daughter of a renowned theoretical physicist, writes James Gillies.

The post Fiction, in theory appeared first on CERN Courier.

]]>

French actor Irène Jacob rose to international acclaim for her role in the 1991 film The Double Life of Véronique. She is the daughter of Maurice Jacob (1933 – 2007), a French theoretical physicist and Head of CERN’s Theory Division from 1982 to 1988. Her new novel, Big Bang, is a fictionalised account of the daughter of a renowned physicist coming to terms with the death of her father and the arrival of her second child. Keen to demonstrate the artistic beauty of science, she is also a Patron of the Physics of the Universe Endowment Fund established in Paris by George Smoot.

When Irène Jacob recites from her book, it is more than a reading, it’s a performance. That much is not surprising: she is after all the much-feted actor in the subtly reflective 1990s films of Krzysztof Kieślowski. What did come as a surprise to this reader is just how beautifully she writes. With an easy grace and fluidity, she weaves together threads of her life, of life in general, and of the vast mysteries of the universe.

The backdrop to the opening scenes is the corridors of the theory division in the 70s and 80s

Billed as a novel, Big Bang comes across more as a memoir, and that’s no accident. The author’s aim was to use her entourage, somewhat disguised, to tell a universal story of the human condition. Names are changed: Irène’s father, the physicist Maurice Jacob, for example, becomes René, his second name. The true chronology of events is not strictly observed, and maybe there’s some invention, but behind the storytelling there is nevertheless a touching portrait of a very real family. The backdrop to the opening scenes is CERN, more specifically the corridors of the theory division in the 70s and 80s, a regular stomping ground for the young Irène. The reader discovers the wonders of physics through the wide-open eyes of a seven-year-old child. Later on, that child-become-adult reflects on other wonders – those related to the circle of life. The book ties all this together, seen from the point in spacetime at which Irène has to reconcile her father’s passing with her own impending motherhood.

For those who remember the CERN of the 80s, the story begins with an opportunity to rediscover old friends and places. For those not familiar with particle physics, it offers a glimpse into the field, to those who devote their lives to it, and to those who share their lives with them. The initial chapters open the door to Irène Jacob’s world, just a crack.

The atmosphere soon changes, though, as she flings the door wide open. More than once I found myself wondering whether I had the right to be there: inside Irène Jacob’s life, dreams and nightmares. It is a remarkably intimate account, looking deep into what it is to be human. Highs and lows, loves and laughs, kindnesses and hurts, even tragedies: all play a part. Irène Jacob’s fictionalised family suffers much, yet although Irène holds nothing back, Big Bang is essentially an optimistic, life-affirming tale.

Science makes repeated cameo appearances. There’s a passage in which René is driving home from hospital after welcoming his first child into the world. Distracted by emotion, he’s struck by a great insight and has to pull over and tell someone. How often does that happen in the creative process? Kary Mullis tells a similar story in his memoirs. In his case, the idea for the Polymerase Chain Reaction came to him at the end of a hot May day on Highway 128 with his girlfriend asleep next to him in the passenger seat of his little silver Honda. Mullis got the Nobel Prize. Both had a profound impact on their fields.

Bohr can be paraphrased as saying: the opposite of a profound truth is another profound truth

Alice in Wonderland is a charmingly recurrent theme, particularly the Cheshire cat. Very often, a passage ends with nothing left but an enigmatic smile, a metaphor for life in the quantum world, where believing in six impossible things before breakfast is almost a prerequisite.

Big Bang is not a page-turner. Instead, each chapter is a beautifully formed vignette of family life. Take, for example, the passage that begins with a quote from Niels Bohr taken from René’s manuscript, Des Quarks et des Hommes (published as Au Coeur de la Matière). Bohr can be paraphrased as saying: the opposite of a profound truth is another profound truth. As the passage moves on, it plays with this theme, ending with the conclusion: if my story does not stand up, it’s because reality is very small. And if my story is very small, it is because reality does not stand up.

Whatever the author’s wish, Big Bang comes across as an admirably honest family portrait, at times uncomfortably so. It’s a portrait that goes much deeper than the silver screen or the hallowed halls of academia. The cast of Big Bang is a very human family, and one that this reader came to like very much.

The post Fiction, in theory appeared first on CERN Courier.

]]>
Review French actor Irène Jacob's novel is an intimate portrait of life as the daughter of a renowned theoretical physicist, writes James Gillies. https://cerncourier.com/wp-content/uploads/2020/03/Irene-Jacob-CERN.jpg
Cosmology and the quantum vacuum https://cerncourier.com/a/cosmology-and-the-quantum-vacuum/ Wed, 11 Mar 2020 11:05:07 +0000 https://preview-courier.web.cern.ch/?p=86754 The sixth conference in the series marked Spanish theorist Emilio Elizalde’s 70th birthday.

The post Cosmology and the quantum vacuum appeared first on CERN Courier.

]]>
The sixth Cosmology and the Quantum Vacuum conference attracted about 60 theoreticians to the Institute of Space Sciences in Barcelona from 5 to 7 March. This year the conference marked Spanish theorist Emilio Elizalde’s 70th birthday. He is a well known specialist in mathematical physics, field theory and gravity, with over 300 publications and three monographs on the Casimir effect and zeta regularisation. He has co-authored remarkable works on viable theories of modified gravity which unify inflation with dark energy.

These meetings bring together researchers who study theoretical cosmology and various aspects of the quantum vacuum such as the Casimir effect. This quantum effect manifests itself as an attractive force that appears between plates placed extremely close to each other. As it is related to the quantum vacuum, it is expected to be important in cosmology as well, giving a kind of effective induced cosmological constant. Manuel Asorey (Zaragoza), Mike Bordag (Leipzig) and Aram Saharian (Erevan) discussed various aspects of the Casimir effect for scalars and for gauge theories. Joseph Buchbinder gave a review of the one-loop effective action in supersymmetric gauge theories. Conformal quantum gravity and quantum electrodynamics in de Sitter space were presented by Enrique Alvarez (Madrid) and Drazen Glavan (Brussels), respectively.

Enrique Gaztanaga argued for two early inflationary periods

Even more attention was paid to theoretical cosmology. The evolution of the early and/or late universe in different theories of modified gravity was discussed by several delegates, with Enrique Gaztanaga (Barcelona) expressing an interesting point of view on the inflationary universe, arguing for two early inflationary periods.

Martiros Khurshyadyan and I discussed modified-gravity cosmology with the unification of inflation and dark energy, and wormholes, building on work with Emilio Elizalde. Wormholes are usually associated with exotic matter; in alternative theories of gravity, however, they may instead be caused by modifications to the gravitational equations of motion. Iver Brevik (Trondheim) gave an excellent introduction to viscosity in cosmology. Rather exotic wormholes were presented by Sergey Sushkov (Kazan), while black holes in modified gravity were discussed by Gamal Nashed (Cairo). A fluid approach to the dark-energy epoch and the addition of four forms (antisymmetric tensor fields with four indices) to late-universe evolution were presented by Diego Saez (Valladolid) and Mariam Bouhmadi-Lopez (Bilbao), respectively. Novel aspects of non-standard quintessential inflation were presented by Jaime Haro (Barcelona).

Many interesting talks were given by young participants at this meeting. The exchange of ideas between cosmologists on the one side and quantum-field-theory specialists on the other will surely help in the further development of rigorous approaches to the construction of quantum gravity. It also opens the window onto a much better account of quantum effects in the history of the universe.

The post Cosmology and the quantum vacuum appeared first on CERN Courier.

]]>
Meeting report The sixth conference in the series marked Spanish theorist Emilio Elizalde’s 70th birthday. https://cerncourier.com/wp-content/uploads/2020/03/EkaterinaPozdeeva3.jpg
LHC at 10: the physics legacy https://cerncourier.com/a/lhc-at-10-the-physics-legacy/ Mon, 09 Mar 2020 21:13:36 +0000 https://preview-courier.web.cern.ch/?p=86548 The LHC’s physics programme has transformed our understanding of elementary particles, writes Michelangelo Mangano.

The post LHC at 10: the physics legacy appeared first on CERN Courier.

]]>
Ten years have passed since the first high-energy proton–proton collisions took place at the Large Hadron Collider (LHC). Almost 20 more are foreseen for the completion of the full LHC programme. The data collected so far, from approximately 150 fb⁻¹ of integrated luminosity over two runs (Run 1 at a centre-of-mass energy of 7 and 8 TeV, and Run 2 at 13 TeV), represent a mere 5% of the anticipated 3000 fb⁻¹ that will eventually be recorded. But already their impact has been monumental.

In Search of the Higgs Boson

Three major conclusions can be drawn from these first 10 years. First and foremost, Run 1 has shown that the Higgs boson – the previously missing, last ingredient of the Standard Model (SM) – exists. Secondly, the exploration of energy scales as high as several TeV has further consolidated the robustness of the SM, providing no compelling evidence for phenomena beyond the SM (BSM). Nevertheless, several discoveries of new phenomena within the SM have emerged, underscoring the power of the LHC to extend and deepen our understanding of the SM dynamics, and showing the unparalleled diversity of phenomena that the LHC can probe with unprecedented precision.

Exceeding expectations

Last but not least, we note that 10 years of LHC operations, data taking and data interpretation have overwhelmingly surpassed all of our most optimistic expectations. The accelerator has delivered a larger than expected luminosity, and the experiments have been able to operate at the top of their ideal performance and efficiency. Computing, in particular via the Worldwide LHC Computing Grid, has been another crucial driver of the LHC’s success. Key ingredients of precision measurements – such as the determination of the LHC luminosity, and of detection efficiencies and backgrounds using data-driven techniques – have been obtained, beyond anyone’s expectations, thanks to novel and powerful techniques. The LHC has also successfully provided a variety of beam and optics configurations, matching the needs of different experiments and supporting a broad research programme. In addition to the core high-energy goals of the ATLAS and CMS experiments, this has enabled new studies of flavour physics and of hadron spectroscopy, of forward-particle production and total hadronic cross sections. The operations with beams of heavy nuclei have reached a degree of virtuosity that made it possible to collide not only the anticipated lead beams, but also beams of xenon, as well as combined proton–lead, photon–lead and photon–photon collisions, opening the way to a new generation of studies of matter at high density.

Figure 1

Theoretical calculations have evolved in parallel to the experimental progress. Calculations that were deemed of impossible complexity before the start of the LHC have matured and become reality. Next-to-leading-order (NLO) theoretical predictions are routinely used by the experiments, thanks to a new generation of automatic tools. The next frontier, next-to-next-to-leading order (NNLO), has been attained for many important processes, reaching, in a few cases, the next-to-next-to-next-to-leading order (N3LO), and more is coming.

Aside from having made these first 10 years an unconditional success, all these ingredients are the premise for confident extrapolations of the physics reach of the LHC programme to come.

To date, more than 2700 peer-reviewed physics papers have been published by the seven running LHC experiments (ALICE, ATLAS, CMS, LHCb, LHCf, MoEDAL and TOTEM). Approximately 10% of these are related to the Higgs boson, and 30% to searches for BSM phenomena. The remaining 1600 or so report measurements of SM particles and interactions, enriching our knowledge of the proton structure and of the dynamics of strong interactions, of electroweak (EW) interactions, of flavour properties, and more. In most cases, the variety, depth and precision of these measurements surpass those obtained by previous experiments using dedicated facilities. The multi-purpose nature of the LHC complex is unique, and encompasses scores of independent research directions. Here it is only possible to highlight a fraction of the milestone results from the LHC’s expedition so far.

Entering the Higgs world

The discovery by ATLAS and CMS of a new scalar boson in July 2012, just two years into LHC physics operations, was a crowning early success. Not only did it mark the end of a decades-long search, but it opened a new vista of exploration. At the time of the discovery, very little was known about the properties and interactions of the new boson. Eight years on, the picture has come into much sharper focus.

The structure of the Higgs-boson interactions revealed by the LHC experiments is still incomplete. Its couplings to the gauge bosons (W, Z, photon and gluons) and to the heavy third-generation fermions (bottom and top quarks, and tau leptons) have been detected, and the precision of these measurements is at best in the range of 5–10%. But the LHC findings so far have been key to establishing that this new particle correctly embodies the main observational properties of the Higgs boson, as specified by the Brout–Englert–Guralnik–Hagen–Higgs–Kibble EW-symmetry-breaking mechanism, referred to hereafter as “BEH”, a cornerstone of the SM. To start with, the measured couplings to the W and Z bosons reflect the Higgs’ EW charges and are proportional to the W and Z masses, consistent with the properties of a scalar field breaking the SM EW symmetry. The mass dependence of the Higgs interactions with the SM fermions is confirmed by the recent ATLAS and CMS observations of the H → bb̄ and H → ττ decays, and of the associated production of a Higgs boson together with a tt̄ quark pair (see figure 1).

Figure 2

These measurements, which during Run 2 of the LHC have surpassed the five-sigma significance level, provide the second critical confirmation that the Higgs fulfils the role envisaged by the BEH mechanism. The Higgs couplings to the photon and the gluon (g), which the LHC experiments have probed via the H → γγ decay and gg → H production, provide a third, subtler test. These couplings arise from a combination of loop-level interactions with several SM particles, whose interplay could be modified by the presence of BSM particles or interactions. The current agreement with data provides a strong validation of the SM scenario, while leaving open the possibility that small deviations could emerge with future, higher-statistics data.

The process of firmly establishing the identification of the particle discovered in 2012 with the Higgs boson goes hand-in-hand with two research directions pioneered by the LHC: seeking the deep origin of the Higgs field and using the Higgs boson as a probe of BSM phenomena.

The breaking of the EW symmetry is a fact of nature, requiring the existence of a mechanism like BEH. But, if we aim beyond a merely anthropic justification for this mechanism (i.e. that, without it, physicists wouldn’t be here to ask why), there is no reason to assume that nature chose its minimal implementation, namely the SM Higgs field. In other words: where does the Higgs boson detected at the LHC come from? This summarises many questions raised by the possibility that the Higgs boson is not just “put in by hand” in the SM, but emerges from a larger sector of new particles, whose dynamics induces the breaking of the EW symmetry. Is the Higgs elementary, or a composite state resulting from new confining forces? What generates its mass and self-interaction? More generally, is the existence of the Higgs boson related to other mysteries, such as the origin of dark matter (DM), of neutrino masses or of flavour phenomena?

The Higgs boson is becoming an increasingly powerful exploratory tool to probe the origin of the Higgs itself

Ever since the Higgs-boson discovery, the LHC experiments have been searching for clues to address these questions, exploring a large number of observables. All of the dominant production channels (gg fusion, associated production with vector bosons and with top quarks, and vector-boson fusion) have been discovered, and decay rates to WW, ZZ, γγ, bb̄ and ττ were measured. A theoretical framework (effective field theory, EFT) has been developed to interpret in a global fashion all these measurements, setting strong constraints on possible deviations from the SM. With the larger data set accumulated during Run 2, the production properties of the Higgs have been studied in greater detail, simultaneously testing the accuracy of theoretical calculations and the resilience of SM predictions.

Figure 3

To explore the nature of the Higgs boson, what has not yet been seen can be as important as what has been seen. For example, the lack of evidence for Higgs decays to the fermions of the first and second generation is consistent with the SM prediction that these should be very rare. The H → μμ decay rate is expected to be only about 3 × 10⁻³ of that of H → ττ; the current sensitivity falls short of the predicted rate by about a factor of two, and ATLAS and CMS hope to observe this decay during the forthcoming Run 3, testing for the first time the couplings of the Higgs boson to second-generation fermions. The SM Higgs boson is expected to conserve flavour, making decays such as H → μτ, H → eτ or t → Hc too small to be seen. Their observation would be a major revolution in physics, but no evidence has shown up in the data so far. Decays of the Higgs to invisible particles could be a signal of DM candidates, and the constraints set by the LHC experiments are complementary to those from standard DM searches. Several BSM theories predict the existence of heavy particles decaying to a Higgs boson. For example, heavy top partners, T, could decay as T → Ht, and heavy bosons X could decay as X → HV (V = W, Z). Heavy scalar partners of the Higgs, such as charged Higgs states, are expected in theories such as supersymmetry. Extensive and thorough searches for all these phenomena have been carried out, setting strong constraints on SM extensions.
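As a rough cross-check of that number (an estimate based on the SM’s mass-proportional Yukawa couplings, not a figure taken from the experiments), the ratio of the two leptonic widths scales with the lepton masses squared:

\[
\frac{\Gamma(H\to\mu^+\mu^-)}{\Gamma(H\to\tau^+\tau^-)} \;\simeq\; \left(\frac{m_\mu}{m_\tau}\right)^2 \;\simeq\; \left(\frac{0.106~\text{GeV}}{1.78~\text{GeV}}\right)^2 \;\approx\; 3.5\times10^{-3},
\]

neglecting small phase-space and radiative corrections – consistent with the suppression of about 3 × 10⁻³ quoted above.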

As the programme of characterising the Higgs properties continues, with new challenging goals such as the measurement of the Higgs self-coupling through the observation of Higgs pair production, the Higgs boson is becoming an increasingly powerful exploratory tool to probe the origin of the Higgs itself, as well as a variety of solutions to other mysteries of particle physics.

Interactions weak and strong

The vast majority of LHC processes are controlled by strong interactions, described by the quantum-chromodynamics (QCD) sector of the SM. The predictions of production rates for particles like the Higgs or gauge bosons, top quarks or BSM states, rely on our understanding of the proton structure, in particular of the energy distribution of its quark and gluon components (the parton distribution functions, PDFs). The evolution of the final states, the internal structure of the jets emerging from quarks and gluons, the kinematical correlations between different objects, are all governed by QCD. LHC measurements have been critical, not only to consolidate our understanding of QCD in all its dynamical domains, but also to improve the precision of the theoretical interpretation of data, and to increase the sensitivity to new phenomena and to the production of BSM particles.

Collisions galore

Approximately 10⁹ proton–proton (pp) collisions take place each second inside the LHC detectors. Most of them bear no obvious direct interest for the search for BSM phenomena, but even simple elastic collisions, pp → pp, which account for about 30% of this rate, have so far resisted a full description by first-principles QCD calculations. The ATLAS ALFA spectrometer and the TOTEM detector have studied these high-rate processes, measuring the total and elastic pp cross sections at the various beam energies provided by the LHC. The energy dependence of the relation between the real and imaginary parts of the pp forward-scattering amplitude has revealed new features, possibly described by the exchange of the so-called odderon, a coherent state of three gluons predicted in the 1970s.

Figure 4

The structure of the final states in generic pp collisions, aside from defining the large background of particles that are superimposed on the rarer LHC processes, is of potential interest to understand cosmic-ray (CR) interactions in the atmosphere. The LHCf detector measured the forward production of the most energetic particles from the collision, those driving the development of the CR air showers. These data are a unique benchmark to tune the CR event generators, reducing the systematics in the determination of the nature of the highest-energy CR constituents (protons or heavy nuclei?), a step towards solving the puzzle of their origin.

On the opposite end of the spectrum, rare events with dijet pairs of mass up to 9 TeV have been observed by ATLAS and CMS. The study of their angular distribution, a Rutherford-like scattering experiment, has confirmed the point-like nature of quarks, down to 10⁻¹⁸ cm. The overall set of production studies, including gauge bosons, jets and top quarks, underpins countless analyses. Huge samples of top quark pairs, produced at 15 Hz, enable the surgical scrutiny of this mysteriously heavy quark, through its production and decays. New reactions, unobservable before the LHC, were first detected. Gauge-boson scattering (e.g. W⁺W⁺ → W⁺W⁺), a key probe of electroweak symmetry breaking proposed in the 1970s, is just one example. By and large, all data show an extraordinary agreement with theoretical predictions resulting from decades of innovative work (figure 2). Global fits to these data refine the proton PDFs, improving the predictions for the production of Higgs bosons or BSM particles.
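To put the quoted resolution in perspective (a simple order-of-magnitude estimate, not part of the experimental analyses), the distance scale probed by a momentum transfer Q follows from the uncertainty relation λ ≃ ħc/Q:

\[
\lambda \;\simeq\; \frac{\hbar c}{Q} \;\approx\; \frac{0.197~\text{GeV fm}}{9000~\text{GeV}} \;\approx\; 2\times10^{-5}~\text{fm} \;=\; 2\times10^{-18}~\text{cm},
\]

which is how dijet masses of 9 TeV translate into sensitivity to possible quark substructure down to the 10⁻¹⁸ cm scale mentioned above.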

The cross sections σ of W and Z bosons provide the most precise QCD measurements, reaching a 2% systematic uncertainty, dominated by the luminosity uncertainty. Ratios such as σ(W⁺)/σ(W⁻) or σ(W)/σ(Z), and the shapes of differential distributions, are known to a few parts in 1000. These data challenge the accuracy of the theoretical calculations, and require caution to assess whether small discrepancies are due to PDF effects, new physics or still-imprecise QCD calculations.

Precision is the keystone to consolidate our description of nature

As already mentioned, the success of the LHC owes a lot to its variety of beam and experimental conditions. In this context, the data at the different centre-of-mass energies provided in the two runs are a huge bonus, since the theoretical prediction for the energy-dependence of rates can be used to improve the PDF extraction, or to assess possible BSM interpretations. The LHCb data, furthermore, cover a forward kinematical region complementary to that of ATLAS and CMS, adding precious information.

The precise determination of the W and Z production and decay kinematics has also allowed new measurements of fundamental parameters of the weak interaction: the W mass (mW) and the weak mixing angle (sinθW). The measurement of sinθW is now approaching the precision inherited from the LEP experiments and SLD, and will soon improve to shed light on the outstanding discrepancy between those two measurements. The mW precision obtained by the ATLAS experiment, ΔmW = 19 MeV, is the best worldwide, and further improvements are certain. The combination with the ATLAS and CMS measurements of the Higgs boson mass (ΔmH ≅ 200 MeV) and of the top quark mass (Δmtop ≲ 500 MeV), provides a strong validation of the SM predictions (see figure 3). For both mW and sinθW the limiting source of systematic uncertainty is the knowledge of the PDFs, which future data will improve, underscoring the profound interplay among the different components of the LHC programme.

QCD matters

The understanding of the forms and phases that QCD matter can acquire is a fascinating, broad and theoretically challenging research topic, which has witnessed great progress in recent years. Exotic multi-quark bound states, beyond the usual mesons (qq̄) and baryons (qqq), were initially discovered at e⁺e⁻ colliders. The LHCb experiment, with its large rates of identified charm and bottom final states, is at the forefront of these studies, notably with the first discovery of heavy pentaquarks (qqqcc̄) and with discoveries of tetraquark candidates in the charm sector (cc̄qq̄), accompanied by determinations of their quantum numbers and properties. These findings have opened a new playground for theoretical research, stimulating work in lattice QCD, and forcing a rethinking of established lore.

Figure 5

The study of QCD matter at high density is the core task of the heavy-ion programme. While initially tailored to the ALICE experiment, all active LHC experiments have since joined the effort. The creation of a quark–gluon plasma (QGP) led to astonishing visual evidence for jet quenching, with 1 TeV jets shattered into fragments as they struggle their way out of the dense QGP volume. The thermodynamics and fluctuations of the QGP have been probed in multiple ways, indicating that the QGP behaves as an almost perfect fluid, the least viscous fluid known in nature. The ability to explore the plasma interactions of charm and bottom quarks is a unique asset of the LHC, thanks to the large production rates, which unveiled new phenomena such as the recombination of charm quarks and the sequential melting of bb̄ bound states.

While several of the qualitative features of high-density QCD were anticipated, the quantitative accuracy, multitude and range of the LHC measurements have no match. Examples include ALICE’s precise determination of dynamical parameters such as the QGP shear-viscosity-to-entropy-density ratio, or the higher harmonics of particles’ azimuthal correlations. A revolution ensued in the sophistication of the required theoretical modelling. Unexpected surprises were also discovered, particularly in the comparison of high-density states in PbPb collisions with those occasionally generated by smaller systems such as pp and pPb. The presence in the latter of long-range correlations, various collective phenomena and an increased strange-baryon abundance (figure 4) resembles behaviour typical of the QGP. Their deep origin is a mysterious property of QCD, still lacking an explanation. The number of new challenging questions raised by the LHC data is almost as large as the number of new answers obtained!

Flavour physics

Understanding the structure and the origin of flavour phenomena in the quark sector is one of the big open challenges of particle physics. The search for new sources of CP violation, beyond those present in the CKM mixing matrix, underlies the efforts to explain the baryon asymmetry of the universe. In addition to flavour studies with Higgs bosons and top quarks, more than 10¹⁴ charm and bottom quarks have been produced so far by the LHC, and the recorded subset has led to landmark discoveries and measurements. The rare Bs → μ⁺μ⁻ decay, with a minuscule rate of approximately 3 × 10⁻⁹, has been discovered by the LHCb, CMS and ATLAS experiments. The rarer Bd → μ⁺μ⁻ decay is still unobserved, but its expected ~10⁻¹⁰ rate is within reach. These two results alone had a big impact on constraining the parameter space of several BSM theories, notably supersymmetry, and their precision and BSM sensitivity will continue improving. LHCb has discovered D⁰–D̄⁰ mixing and the long-elusive CP violation in D-meson decays, a first for up-type quarks (figure 5). Large hadronic non-perturbative uncertainties make the interpretation of these results particularly challenging, leaving under debate whether the measured properties are consistent with the SM, or signal new physics. But the experimental findings are a textbook milestone in the worldwide flavour-physics programme.

Figure 6

LHCb produced hundreds more measurements of heavy-hadron properties and flavour-mixing parameters. Examples include the most precise measurement of the CKM angle γ = (74.0 +5.0 −5.8)° and, with ATLAS and CMS, the first measurement of φs, the tiny CP-violation phase of Bs → J/ψϕ, whose precisely predicted SM value is very sensitive to new physics. With a few notable exceptions, all results confirm the CKM picture of flavour phenomena. Those exceptions, however, underscore the power of LHC data to expose new unexpected phenomena: B → D(*)ℓν (ℓ = μ, τ) and B → K(*)ℓ⁺ℓ⁻ (ℓ = e, μ) decays hint at possible deviations from the expected lepton-flavour universality. The community is eagerly waiting for further developments.

Beyond the Standard Model

Years of model building, stimulated before and after the LHC start-up by the conceptual and experimental shortcomings of the SM (e.g. the hierarchy problem and the existence of DM), have generated scores of BSM scenarios to be tested by the LHC. Evidence has so far escaped hundreds of dedicated searches, setting limits on new particles up to several TeV (figure 6). Nevertheless, much was learned. While none of the proposed BSM scenarios can be conclusively ruled out, for many of them survival is only guaranteed at the cost of greater fine-tuning of the parameters, reducing their appeal. In turn, this led to rethinking the principles that implicitly guided model building. Simplicity, or the ability to explain at once several open problems, have lost some drive. The simplest realisations of BSM models relying on supersymmetry, for example, were candidates to at once solve the hierarchy problem, provide DM candidates and set the stage for the grand unification of all forces. If true, the LHC should have piled up evidence by now. Supersymmetry remains a preferred candidate to achieve that, but at the price of more Byzantine constructions. Solving the hierarchy problem remains the outstanding theoretical challenge. New ideas have come to the forefront, ranging from the Higgs potential being determined by the early-universe evolution of an axion field, to dark sectors connected to the SM via a Higgs portal. These latter scenarios could also provide DM candidates alternative to the weakly-interacting massive particles, which so far have eluded searches at the LHC and elsewhere.

With such rapid evolution of theoretical ideas taking place as the LHC data runs progressed, the experimental analyses underwent a major shift, relying on “simplified models”: a novel model-independent way to represent the results of searches, allowing published results to be later reinterpreted in view of new BSM models. This amplified the impact of experimental searches, with a surge of phenomenological activity and the proliferation of new ideas. The cooperation and synergy between experiments and theorists have never been so intense.

Having explored the more obvious search channels, the LHC experiments refocused on more elusive signatures. Great efforts are now invested in searching corners of parameter space, extracting possible subtle signals from large backgrounds, thanks to data-driven techniques, and to the more reliable theoretical modelling that has emerged from new calculations and many SM measurements. The possible existence of new long-lived particles opened a new frontier of search techniques and of BSM models, triggering proposals for new dedicated detectors (Mathusla, CODEX-b and FASER, the last of which was recently approved for construction and operation in Run 3). Exotic BSM states, like the milli-charged particles present in some theories of dark sectors, could be revealed by MilliQan, a recently proposed detector. Highly ionising particles, like the esoteric magnetic monopoles, have been searched for by the MoEDAL detector, which places plastic tracking films cleverly in the LHCb detector hall.

While new physics is still eluding the LHC, the immense progress of the past 10 years has changed forever our perspective on searches and on BSM model building.

Final considerations

Most of the results only parenthetically cited, like the precision on the mass of the top quark, and others not even quoted, are the outcome of hundreds of person-years of work, and would certainly have deserved more attention here. Their intrinsic value goes well beyond what was outlined, and they will remain long-lasting textbook material until future work at the LHC and beyond improves on them.

Theoretical progress has played a key role in the LHC’s success, enhancing the scope and reliability of the data interpretation. Further to the developments already mentioned, a deeper understanding of jet structure has spawned techniques to tag high-pT gauge and Higgs bosons, or top quarks, now indispensable in many BSM searches. Innovative machine-learning ideas have become powerful and ubiquitous. This article has concentrated only on what has already been achieved, but the LHC and its experiments have a long journey of exploration ahead.

The terms precision and discovery, applied to concrete results rather than projections, well characterise the LHC 10-year legacy. Precision is the keystone to consolidate our description of nature, increase the sensitivity to SM deviations, give credibility to discovery claims, and to constrain models when evaluating different microscopic origins of possible anomalies. The LHC has already fully succeeded in these goals. The LHC has also proven to be a discovery machine, and in a context broader than just Higgs and BSM phenomena. Altogether, it delivered results that could not have been obtained otherwise, immensely enriching our understanding of nature.

50 years of the GIM mechanism https://cerncourier.com/a/50-years-of-the-gim-mechanism/ Fri, 24 Jan 2020 14:34:54 +0000 https://preview-courier.web.cern.ch/?p=86381 A symposium to celebrate the fiftieth anniversary of Glashow, Iliopoulos and Maiani's explanation of the suppression of strangeness-changing neutral currents was held in Shanghai.

GIM originators 50 years on

In 1969 many weak amplitudes could be accurately calculated with a model of just three quarks, and Fermi’s constant and the Cabibbo angle to couple them. One exception was the remarkable suppression of strangeness-changing neutral currents. John Iliopoulos, Sheldon Lee Glashow and Luciano Maiani boldly solved the mystery using loop diagrams featuring the recently hypothesised charm quark, making its existence a solid prediction in the process. To celebrate the fiftieth anniversary of their insight, the trio were guests of honour at an international symposium at the T. D. Lee Institute at Shanghai Jiao Tong University on 29 October, 2019.

The UV cutoff needed in the three-quark theory became an estimate of the mass of the fourth quark

The Glashow-Iliopoulos-Maiani (GIM) mechanism was conceived in 1969, submitted to Physical Review D on 5 March 1970, and published on 1 October of that year, after several developments had defined a conceptual framework for electroweak unification. These included Yang-Mills theory, the universal V−A weak interaction, Schwinger’s suggestion of electroweak unification, Glashow’s definition of the electroweak group SU(2)L×U(1)Y, Cabibbo’s theory of semileptonic hadron decays and the formulation of the leptonic electroweak gauge theory by Weinberg and Salam, with spontaneous symmetry breaking induced by the vacuum expectation value of new scalar fields. The GIM mechanism then called for a fourth quark, charm, in addition to the three introduced by Gell-Mann, such that the first two blocks of the electroweak theory are each made of one lepton and one quark doublet, [(νe, e), (u, d)] and [(νµ, µ), (c, s)]. Quarks u and c are coupled by the weak interaction to two superpositions of the quarks d and s: u ↔ dC, with dC the Cabibbo combination dC = cos θC d + sin θC s, and c ↔ sC, with sC the orthogonal combination. In subsequent years, a third generation, [(ντ, τ), (t, b)], was predicted to describe CP violation.
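The same content can be written as a single Cabibbo rotation (the matrix form below is just a compact restatement of the combinations defined above):

\[
\begin{pmatrix} d_C \\ s_C \end{pmatrix} =
\begin{pmatrix} \cos\theta_C & \sin\theta_C \\ -\sin\theta_C & \cos\theta_C \end{pmatrix}
\begin{pmatrix} d \\ s \end{pmatrix},
\]

so that the (u, dC) and (c, sC) doublets between them involve both down-type quarks – the feature that makes the cancellation described below possible.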

Problem solved

The GIM mechanism was the solution to a problem arising in the simplest weak interaction theory with one charged vector boson coupled to the Cabibbo currents. As pointed out in 1968, strangeness-changing neutral-current processes, such as KL → μ⁺μ⁻ and K⁰–K̄⁰ mixing, are generated at one loop with amplitudes of order G sinθC cosθC (GΛ²), where G is the Fermi constant, Λ is an ultraviolet cutoff, and GΛ² (dimensionless) is the first term in a perturbative expansion which could be continued to take higher-order diagrams into account. To comply with the strict limits existing at the time, one had to require a surprisingly small value of the cutoff, Λ, of 2–3 GeV, to be compared with the naturally expected value: Λ = 1/√G ≈ 300 GeV. This problem was taken seriously by the GIM authors, who wrote that “it appears necessary to depart from the original phenomenological model of weak interactions”.
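To make the tension explicit (a back-of-envelope restatement of the numbers above, using today’s value of the Fermi constant rather than anything from the original paper):

\[
\Lambda_{\text{natural}} \sim G^{-1/2} \approx \left(1.17\times10^{-5}~\text{GeV}^{-2}\right)^{-1/2} \approx 300~\text{GeV},
\qquad
G\Lambda^2\Big|_{\Lambda\,\approx\,3~\text{GeV}} \approx 10^{-4},
\]

i.e. the kaon data required the dimensionless loop factor to lie some four orders of magnitude below its natural value GΛ² ∼ 1.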

GIM mechanism Feynman diagrams

To sidestep this problem, Glashow, Iliopoulos and Maiani brought in the fourth “charm” quark, already introduced by Bjorken, Glashow and others, with its typical coupling to the quark combination left alone in the Cabibbo theory: c ↔ sC = −sinθC d + cosθC s. Amplitudes for s → d with u or c on the same fermion line would cancel exactly for mc = mu, suggesting a more natural means to suppress strangeness-changing neutral-current processes to measured levels. For mc ≫ mu, a residual neutral-current effect would remain, which, by inspection, and for dimensional reasons, is of order G sinθC cosθC (Gmc²). This was a real surprise: the “small” UV cutoff needed in the simple three-quark theory became an estimate of the mass of the fourth quark, which was indeed sufficiently large to have escaped detection in the unsuccessful searches for charmed mesons that had been conducted in the 1960s. With the two quark doublets included, a detailed study of strangeness-changing neutral-current processes gave mc ∼ 1.5 GeV, a value consistent with more recent data on the masses of charmed mesons and baryons. Another aspect of the GIM cancellation is that the weak charged currents form an SU(2) algebra together with a neutral component that has no strangeness-changing terms. Thus, there is no difficulty in including the two quark doublets in the unified electroweak group SU(2)L×U(1)Y of Glashow, Weinberg and Salam. The 1970 GIM paper noted that “in contradistinction to the conventional (three-quark) model, the couplings of the neutral intermediary – now hypercharge conserving – cause no embarrassment.”
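Schematically (a sketch in simplified notation, with the loop function and numerical factors suppressed), summing the u- and c-quark contributions inside the loop trades the cutoff for a quark-mass difference:

\[
\mathcal{A}(s\to d) \;\propto\; G\sin\theta_C\cos\theta_C \times G\left(m_c^2-m_u^2\right) \;\approx\; G\sin\theta_C\cos\theta_C \times G\,m_c^2 \quad (m_c \gg m_u),
\]

so requiring the same level of suppression that previously forced Λ ≈ 2–3 GeV now points to mc of order 1.5 GeV, as quoted above.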

The GIM mechanism has become a cornerstone of the Standard Model and it gives a precise description of the observed flavour changing neutral current processes for s and b quarks. For this reason, flavour-changing neutral currents are still an important benchmark and give strong constraints on theories that go beyond the Standard Model in the TeV region.

Space–time symmetries scrutinised in Indiana https://cerncourier.com/a/space-time-symmetries-scrutinised-in-indiana/ Thu, 23 Jan 2020 13:24:28 +0000 https://preview-courier.web.cern.ch/?p=86339 The Standard-Model Extension provides a framework for testing CPT and Lorentz symmetries by including all operators that break them in an effective field theory.

The eighth CPT and Lorentz Symmetry meeting

The space–time symmetries of physics demand that experiments yield identical results under continuous Lorentz transformations – rotations and boosts – and under the discrete CPT transformation (the combination of charge conjugation, parity inversion and time reversal). The Standard-Model Extension (SME) provides a framework for testing these symmetries by including all operators that break them in an effective field theory. The first CPT and Lorentz Symmetry meeting, in Bloomington, Indiana, in 1998, featured the first limits on SME coefficients. Last year’s event, the 8th in the triennial series, brought 100 researchers together from 12 to 16 May 2019 at the Indiana University Center for Spacetime Symmetries, to sample a smorgasbord of ongoing SME studies.

Most physics is described by operators of mass dimension three or four that are quadratic in the conventional fields: for example, the Dirac lagrangian contains an operator ψ̄∂̸ψ (mass dimension 3/2 + 1 + 3/2 = 4) and an operator ψ̄ψ (mass dimension 3/2 + 3/2 = 3), the latter controlled by an additional mass coefficient. The search for fundamental symmetry violations, however, may need to employ operators of higher mass dimension and higher order in the fields. One example is the Lorentz-breaking lagrangian-density term (kVV)μν(ψ̄γμψ)(ψ̄γνψ), which is quartic in the fermion field ψ. The coefficient kVV carries units of GeV⁻² and controls the operator, which has mass dimension six. Searches for Lorentz-symmetry breaking seek nonzero values for coefficients like kVV. In the 21 years since the first CPT meeting, theoretical studies have uncovered how to write down the myriad operators that describe hypothetical Lorentz violations in both flat and curved space–times. Meanwhile, experiments in particle physics, atomic physics, astrophysics and gravitational physics continue to place exquisitely tight bounds on the SME coefficients, motivated by the intriguing prospect of finding a crack in the Lorentz symmetry of nature.
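The counting behind those statements can be spelled out (standard natural-units bookkeeping, not tied to any particular SME analysis): with ħ = c = 1 the action is dimensionless, so the lagrangian density carries mass dimension four and a Dirac field dimension 3/2, whence

\[
[\mathcal{L}] = \text{GeV}^4,\quad [\psi] = \text{GeV}^{3/2}
\;\;\Rightarrow\;\;
\left[(\bar\psi\gamma^\mu\psi)(\bar\psi\gamma^\nu\psi)\right] = \text{GeV}^{6},\quad
\left[(k_{VV})_{\mu\nu}\right] = \text{GeV}^{-2},
\]

matching the units quoted above for kVV.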

The SME has revealed uncharted territory that requires theoretical and experimental expertise to navigate

Comparisons between matter and antimatter offer rich prospects for testing Lorentz symmetry, because individual SME coefficients can be isolated. The AEgIS, ALPHA, ASACUSA, ATRAP, BASE and gBAR collaborations at CERN, as well as ones at other institutions, are working to develop the challenging technology for such tests. Several presenters discussed Penning traps – devices that confine charged particles in a static electromagnetic field – for storing and mixing the ingredients for antihydrogen, the production of antihydrogen, spectroscopy for the hyperfine and 1S–2S transitions, and the prospects for interferometric measurements of antimatter acceleration. The commissioning of ELENA, CERN’s 30 m-circumference antiproton deceleration ring, promises larger quantities of relatively slow-moving antiprotons in support of this work.

Lorentz violation can occur independently in each sector of the particle world, and participants discussed existing and future limits on SME coefficients based on the muon g-2 experiment at Fermilab, neutrino oscillations at Daya Bay in China, kaon oscillations in Frascati, and on positronium decay using the Jagiellonian PET detector, to name a few. Dozens of Lorentz-symmetry tests have probed the photon sector of the SME with table-top devices such as atomic clocks and resonant cavities, and with astrophysical polarisation measurements of sources such as active galactic nuclei, which leverage vast distances to limit cumulative effects such as the rotation of a polarisation angle. In the gravity sector, SME coefficient bounds were presented from the 2015 gravitational-wave detection by the LIGO collaboration, as well as from observations of pulsars, cosmic rays and other phenomena with signals that are proportional to the travel distance. Symmetry-breaking signals are also sought in matter–gravity interactions with test masses, and here CPT’19 included discussions of short-range spin-dependent gravity and neutron-interferometry physics.

The SME has revealed uncharted territory that requires theoretical and experimental expertise to navigate. CPT’19 showed that there is no shortage of physicists with the adventurous spirit to explore this frontier further.

The Higgs, supersymmetry and all that https://cerncourier.com/a/the-higgs-supersymmetry-and-all-that/ Fri, 10 Jan 2020 09:36:28 +0000 https://preview-courier.web.cern.ch/?p=86015 John Ellis reflects on 50 years at the forefront of theoretical high-energy physics - and whether the field is ripe for a paradigm shift.

John Ellis

What would you say were the best and the worst of times in your half-century-long career as a theorist?

The two best times, in chronological order, were the 1979 discovery of the gluon in three-jet events at DESY, which Mary Gaillard, Graham Ross and I had proposed three years earlier, and the discovery of the Higgs boson at CERN in 2012, in particular because one of the most distinctive signatures for the Higgs, its decay to two photons, was something Gaillard, Dimitri Nanopoulos and I had calculated in 1975. There was a big build up to the Higgs and it was a really emotional moment. The first of the two worst times was in 2000 with the closure of LEP, because maybe there was a glimpse of the Higgs boson. In fact, in retrospect the decision was correct because the Higgs wasn’t there. The other time was in September 2008 when there was the electrical accident in the LHC soon after it started up. No theoretical missing factor-of-two could be so tragic.

Your 1975 work on the phenomenology of the Higgs boson was the starting point for the Higgs hunt. When did you realise that the particle was more likely than not to exist?

Our paper, published in 1976, helped people think about how to look for the Higgs boson, but it didn’t move to the top of the physics agenda until after the discovery of the W and Z bosons in 1983. When we wrote the paper, things like spontaneous symmetry breaking were regarded as speculative hypotheses by the distinguished grey-haired scientists of the day. Then, in the early 1990s, precision measurements at LEP enabled us to look at the radiative corrections induced by the Higgs and they painted a consistent picture that suggested the Higgs would be relatively light (less than about 300 GeV). I was sort of morally convinced beforehand that the Higgs had to exist, but by the early 1990s it was clear that, indirectly, we had seen it. Before that there were alternative models of electroweak symmetry breaking but LEP killed most of them off.

To what extent does the Higgs boson represent a “portal” to new physics?

The Higgs boson is often presented as completing the Standard Model (SM) and solving lots of problems. Actually, it opens up a whole bunch of new ones. We know now that there is at least one particle that looks like an effective elementary scalar field. It’s an entirely new type of object that we’ve never encountered before, and every single aspect of the Higgs is problematic from a theoretical point of view. Its mass: we know that in the SM it is subject to quadratic corrections that make the hierarchy of mass scales unstable.

Every single aspect of the Higgs is problematic from a theoretical point of view

Its couplings to fermions: those are what produce the mixing of quarks, which is a complete mystery. The quartic term of the Higgs potential in the SM goes negative if you extrapolate it to high energies, the theory becomes unstable and the universe is doomed. And, in principle, you can add a constant term to the Higgs potential, which is the infamous cosmological constant that we know exists in the universe today but that is much, much smaller than would seem natural from the point of view of Higgs theory. Presumably some new physics comes in to fix these problems, and that makes the Higgs sector of the SM Lagrangian look like the obvious portal to that new physics.

In what sense do you feel an emotional connection to theory?

The Higgs discovery is testament to the power of mathematics to describe nature. People often talk about beauty as being a guide to theory, but I am always a bit sceptical about that because it depends on how you define beauty. For me, a piece of engineering can be beautiful even if it looks ugly. The LHC is a beautiful machine from that point of view, and the SM is a beautiful theoretical machine that is driven by mathematics. At the end of the day, mathematics is nothing but logic taken as far as you can.

Do you recall the moment you first encountered supersymmetry (SUSY), and what convinced you of its potential?

I guess it must have been around 1980. Of course I knew that Julius Wess and Bruno Zumino had discovered SUSY as a theoretical framework, but their motivations didn’t convince me. Then people like Luciano Maiani, Ed Witten and others pointed out that SUSY could help stabilise the hierarchy of mass scales that we find in physics, such as the electroweak, Planck and grand unification scales. For me, the first phenomenological indication that indicated SUSY could be related to reality was our realisation in 1983 that SUSY offered a great candidate for dark matter in the form of the lightest supersymmetric particle. The second was a few years later when LEP provided very precise measurements of the electroweak mixing angle, which were in perfect agreement with supersymmetric (but not non-supersymmetric) grand unified theories. The third indication was around 1991 when we calculated the mass of the lightest supersymmetric Higgs boson and got a mass up to about 130 GeV, which was being indicated by LEP as a very plausible value, and agrees with the experimental value.

There was great excitement about SUSY ahead of the LHC start-up. In hindsight, does the non-discovery so far make the idea less likely?

Certainly it’s disappointing. And I have to face the possibility that even if SUSY is there, I might not live to meet her. But I don’t think it’s necessarily a problem for the underlying theory. There are certainly scenarios that can provide the dark matter even if the supersymmetric particles are rather heavier than we originally thought, and such models are still consistent with the mass of the Higgs boson. The information you get from unification of the couplings at high energies also doesn’t exclude SUSY particles weighing 10 TeV or so. Clearly, as the masses of the sparticles increase, you have to do more fine tuning to solve the electroweak hierarchy problem. On the other hand, the amount of fine tuning is still many, many orders of magnitude less than what you’d have to postulate without it! It’s a question of how much resistance to pain you have. That said, to my mind the LHC has actually provided three additional reasons for loving SUSY. One is the correct prediction for the Higgs mass. Another is that SUSY stabilises the electroweak vacuum (without it, SM calculations show that the vacuum is metastable). The third is that in a SUSY model, the Higgs couplings to other particles, while not exactly the same as in the SM, should be pretty close – and of course that’s consistent with what has been measured so far.

To what extent is SUSY driving considerations for the next collider?

I still think it’s a relatively clear-cut and well-motivated scenario for physics at the multi-TeV scale. But obviously its importance is less than it was in the early 1990s when we were proposing the LHC. That said, if you want a specific benchmark scenario for new physics at a future collider, SUSY would still be my go-to model, because you can calculate accurate predictions. As for new physics beyond the Higgs and more generally the precision measurements that you can make in the electroweak sector, the next topic that comes to my mind is dark matter. If dark matter is made of weakly-interacting massive particles (WIMPs), a high-energy Future Circular Collider should be able to discover it. You can look at SUSY at various different levels. One is that you just add in these new particles and make sure they have the right couplings to fix the hierarchy problem. But at a more fundamental level you can write down a Lagrangian, postulate this boson-fermion symmetry and follow the mathematics through. Then there is a deeper picture, which is to talk about additional fermionic (or quantum) dimensions of space–time. If SUSY were to be discovered, that would be one of the most profound insights into the nature of reality that we could get.

If SUSY is not a symmetry of nature, what would be the implications for attempts to go beyond the SM, e.g. quantum gravity?

We are never going to know that SUSY is not there. String theorists could probably live with very heavy SUSY particles. When I first started thinking about SUSY in the 1980s there was this motivation related to fine tuning, but there weren’t many other reasons why SUSY should show up at low energies. More arguments came later, for example, dark matter, which are nice but a matter of taste. I and my grandchildren will have passed on, humans could still be exploring physics way below the Planck scale, and string theorists could still be cool with that.

How high do the masses of the super-partners need to go before SUSY ceases to offer a compelling solution for the hierarchy problem and dark matter?

Beyond about 10 TeV it is difficult to see how it can provide the dark matter unless you change the early expansion history of the universe – which of course is quite possible, because we have no idea what the universe was doing when the temperature was above an MeV. Indeed, many of my string colleagues have been arguing that the expansion history could be rather different from the conventional adiabatic smooth expansion that people tend to use as the default. In this case supersymmetric particles could weigh 10 or even 30 TeV and still provide the dark matter. As for the hierarchy problem, obviously things get tougher to bear.

What can we infer about SUSY as a theory of fundamental particles from its recent “avatars” in lasers and condensed-matter systems?

I don’t know. It’s not really clear to me that the word “SUSY” is being used in the same sense that I would use it. Supersymmetric quantum mechanics was taken as a motivation for the laser setup (CERN Courier March/April 2019 p10), but whether the deeper mathematics of SUSY has much to do with the way this setup works I’m not sure. The case of topological condensed-matter systems is potentially a more interesting place to explore what this particular face of SUSY actually looks like, as you can study more of its properties under controlled conditions. The danger is that, when people bandy around the idea of SUSY, often they just have in mind this fermion–boson partnership. The real essence of SUSY goes beyond that and includes the couplings of these particles, and it’s not clear to me that in these effective-SUSY systems one can talk in a meaningful way about what the couplings look like.

Has the LHC new-physics no-show so far impacted what theorists work on?

In general, I think that members of the theoretical community have diversified their interests and are thinking about alternative dark-matter scenarios, and about alternative ways to stabilise the hierarchy problem. People are certainly exploring new theoretical avenues, which is very healthy and, in a way, there is much more freedom for young theorists today than there might have been in the past. Personally, I would be rather reluctant at this time to propose to a PhD student a thesis that was based solely on SUSY – the people who are hiring are quite likely to want them to be not just working on SUSY and maybe even not working on SUSY at all. I would regard that as a bit unfair, but there are always fashions in theoretical physics.

Following a long and highly successful period of theory-led research, culminating in the completion of the SM, what signposts does theory offer experimentalists from here?

I would broaden your question. In particle physics, yes, we have the SM, which over the past 50 years has been the dominant paradigm. But there is also a paradigm in cosmology and gravitation – general relativity and the idea of a big bang – initiated a century ago by Einstein. The discovery of gravitational waves almost four years ago was the “Higgs moment” for gravity, and that community now finds itself in the same fix that we do, in that they have this theory-led paradigm that doesn’t indicate where to go next.

The discovery of gravitational waves almost four years ago was the “Higgs moment” for gravity

Gravitational waves are going to tell us a lot about astrophysics, but whether they will tell us about quantum gravity is not so obvious. The Higgs boson, meanwhile, tells us that we have a theory that works fantastically well but leaves many mysteries – such as dark matter, the origin of matter, neutrino masses, cosmological inflation, etc – still standing. These are a mixture of theoretical, phenomenological and experimental problems suggesting life beyond the SM. But we don’t have any clear signposts today. The theoretical cats are wandering off in all directions, and that’s good because maybe one of the cats will find something interesting. But there is still a dialogue going on between theory and experiment, and it’s a dialogue that is maybe less of a monologue than it was during the rise of the SM and general relativity. The problems we face in going beyond the current paradigms in fundamental physics are the hardest we’ve faced yet, and we are going to need all the dialogue we can muster between theorists, experimentalists, astrophysicists and cosmologists.


Learning to love anomalies https://cerncourier.com/a/learning-to-love-anomalies/ Fri, 10 Jan 2020 09:21:47 +0000 https://preview-courier.web.cern.ch/?p=86009 The 2020s will sort current anomalies in fundamental physics into discoveries or phantoms, says Ben Allanach.

All surprising discoveries were anomalies at some stage

Anomalies, which I take to mean data that disagree with the scientific paradigm of the day, are the bread and butter of phenomenologists working on physics beyond the Standard Model (SM). Are they a mere blip or the first sign of new physics? A keen understanding of statistics is necessary to help decide which “bumps” to work on.

Take the excess in the rate of di-photon production at a mass of around 750 GeV spotted in 2015 by the ATLAS and CMS experiments. ATLAS had a 4σ peak with respect to background, which CMS seemed to confirm, although its signal was less clear. Theorists produced an avalanche of papers speculating on what the signal might mean but, in the end, the signal was not confirmed in new data. In fact, as is so often the case, the putative signal stimulated some very fruitful work. For example, it was realised that ultra-peripheral collisions between lead ions could produce photon-photon resonances, leading to an innovative and unexpected search programme in heavy-ion physics. Other authors proposed using such collisions to measure the anomalous magnetic moment of the tau lepton, which is expected to be especially sensitive to new physics, and in 2018 ATLAS and CMS found the first evidence for (non-anomalous) high-energy light-by-light scattering in lead-lead ultra-peripheral collisions.

Some anomalies have disappeared during the past decade not primarily because they were statistical fluctuations, but because of an improved understanding of theory. One example is the forward-backward asymmetry (AFB) of top–antitop production at the Tevatron. At large transverse momentum, AFB was measured to be much too large compared to SM predictions, which were at next-to-leading order in QCD with some partial next-to-next-to leading order (NNLO) corrections. The complete NNLO corrections, calculated in a Herculean effort, proved to contribute much more than was previously thought, faithfully describing top–antitop production both at the Tevatron and at the LHC.

Ben Allanach

Other anomalies are still alive and kicking. Arguably, chief among them is the long-standing oddity in the measurement of the anomalous magnetic moment of the muon, which is about 4σ discrepant with the SM predictions. Spotted 20 years ago, many papers have been written in an attempt to explain it, with contributions ranging from supersymmetric particles to leptoquarks. A similarly long-standing anomaly is a 3.8σ excess in the number of electron antineutrinos emerging from a muon–antineutrino beam observed by the LSND experiment and backed up more recently by MiniBooNE. Again, numerous papers attempting to explain the excess, e.g. in terms of the existence of a fourth “sterile” neutrino, have been written, but the jury is still out.

Some anomalies are more recent, and unexpected. The so-called “X17” anomaly reported at a nuclear physics experiment in Hungary, for instance, shows a significant excess in the rate of certain nuclear decays of ⁸Be and ⁴He nuclei (see Rekindled Atomki anomaly merits closer scrutiny) which has been interpreted as being due to the creation of a new particle of mass 17 MeV. Though possible theoretically, one needs to work hard to make this new particle not fall afoul of other experimental constraints; confirmation from an independent experiment is also needed. Personally, I am not pursuing this: I think that the best new-physics ideas have already been had by other authors.

When working on an anomaly, beyond-the-SM phenomenologists hypothesise a new particle and/or interaction to explain it, check to see if it works quantitatively, check to see if any other measurements rule the explanation out, then provide new ways in which the idea can be tested. After this, they usually check where the new physics might fit into a larger theoretical structure, which might explain some other mysteries. For example, there are currently many anomalies in measurements of B-meson decays, each of which isn’t particularly statistically significant (typically 2–3σ away from the SM), but taken together they form a coherent picture with a higher significance. The exchange of hypothesised Z′ or leptoquark quanta provides working explanations, the larger structure also shedding light on the pattern of masses of SM fermions, and most of my research time is currently devoted to studying them.

The coming decade will presumably sort several current anomalies into discoveries, or those that “went away”. Belle II and future LHCb measurements should settle the B anomalies, while the anomalous muon magnetic moment may even be settled this year by the g-2 experiment at Fermilab. Of course, we hope that new anomalies will appear and stick. One anomaly from the late 1990s – that type Ia supernovae at large redshifts indicate an anomalous acceleration of the cosmic expansion – turned out to reveal the existence of dark energy and produce the dominant paradigm of cosmology today. This reminds us that all surprising discoveries were anomalies at some stage.

Who ordered all of that? https://cerncourier.com/a/who-ordered-all-of-that/ Thu, 09 Jan 2020 10:44:56 +0000 https://preview-courier.web.cern.ch/?p=85923 Explaining the bizarre pattern of fermion types and masses has led theorists to suggest that the “flavour scale” could be at a much lower energy than previously thought.

Masses of quarks and leptons

The origin of the three families of quarks and leptons and their extreme range of masses is a central mystery of particle physics. According to the Standard Model (SM), quarks and leptons come in complete families that interact identically with the gauge forces, leading to a remarkably successful quantitative theory describing practically all data at the quantum level. The various quark and lepton masses are described by having different interaction strengths with the Higgs doublet (figure 1, left), also leading to quark mixing and charge-parity (CP) violating transitions involving strange, bottom and charm quarks. However, the SM provides no understanding of the bizarre pattern of quark and lepton masses, quark mixing or CP violation.

In 1998 the SM suffered its strongest challenge to date with the decisive discovery of neutrino oscillations resolving the atmospheric neutrino anomaly and the long-standing problem of the low flux of electron neutrinos from the Sun. The observed neutrino oscillations require at least two non-zero but extremely small neutrino masses, around one ten millionth of the electron mass or so, and three sizeable mixing angles. However, since the minimal SM assumes massless neutrinos, the origin and nature of neutrino masses (i.e. whether they are Dirac or Majorana particles, the latter requiring the neutrino and antineutrino to be related by CP conjugation) and mixing is unclear, and many possible SM extensions have been proposed.

The discovery of neutrino mass and mixing makes the flavour puzzle hard to ignore, with the fermion mass hierarchy now spanning at least 12 orders of magnitude, from the neutrino to the top quark. However, it is not only the fermion mass hierarchy that is unsettling. There are now 28 free parameters in a Majorana-extended SM, including a whopping 22 associated with flavour, surely too many for a fundamental theory of nature. To restate Isidor Isaac Rabi’s famous question following the discovery of the muon in 1936: who ordered all of that?

A theory of flavour

Figure 1

There have been many attempts to formulate a theory beyond the SM that can address the flavour puzzles. Most attempt to enlarge the group structure of the SM describing the strong, weak and electromagnetic gauge forces: SU(3)C× SU(2)L× U(1)Y (see “A taste of flavour in elementary particle physics” panel). The basic premise is that, unlike in the SM, the three families are distinguished by some new quantum numbers associated with a new family or flavour symmetry group, Gfl, which is tacked onto the SM gauge group, enlarging the structure to Gfl× SU(3)C× SU(2)L× U(1)Y. The earliest ideas dating back to the 1970s include radiative fermion-mass generation, first proposed by Weinberg in 1972, who supposed that some Yukawa couplings might be forbidden at tree level by a flavour symmetry but generated effectively via loop diagrams. Alternatively, the Froggatt–Nielsen (FN) mechanism in 1979 assumed an additional U(1)fl symmetry under which the quarks and leptons carry various charges.
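A minimal sketch of the FN idea (the notation below – a flavon field φ, a small parameter ε, a heavy mass scale M and integer charges qi – is illustrative and not tied to any specific published model): each Yukawa coupling is suppressed by as many powers of ε as are needed to balance the U(1)fl charge of the corresponding operator,

\[
y_{ij} \;\sim\; \varepsilon^{\,n_{ij}}, \qquad \varepsilon = \frac{\langle\phi\rangle}{M} \ll 1, \qquad n_{ij} = \left|q_i + q_j + q_H\right|,
\]

so that with ε of order the Cabibbo angle (≈ 0.2), charges of just a few units already span several orders of magnitude in the fermion masses.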

To account for family replication and to address the question of large lepton mixing, theorists have explored a larger non-Abelian family symmetry, SU(3)fl, where the three families are analogous to the three quark colours in quantum chromodynamics (QCD). Many other examples have been proposed based on subgroups of SU(3)fl, including discrete symmetries (figure 2, right). More recently, theorists have considered extra-dimensional models in which the Higgs field is located at a 4D brane, while the fermions are free to roam over the extra dimension, overlapping with the Higgs field in such a way as to result in hierarchical Yukawa couplings. Still other ideas include partial compositeness in which fermions may get hierarchical masses from the mixing between an elementary sector and a composite one. The possibilities are seemingly endless. However, all such theories share one common question: what is the scale, Mfl, (or scales) of new physics associated with flavour?

Since experiments at CERN and elsewhere have thoroughly probed the electroweak scale, all we can say for sure is that, unless the new physics is extremely weakly coupled, Mfl can be anywhere from the Planck scale (10¹⁹ GeV), where gravity becomes important, to the electroweak scale at the mass of the W boson (80 GeV). Thus the flavour scale is very unconstrained.

 

A taste of flavour in elementary particle physics

I I Rabi

The origin of flavour can be traced back to the discovery of the electron – the first elementary fermion – in 1897. Following the discovery of relativity and quantum mechanics, the electron and the photon became the subject of the most successful theory of all time: quantum electrodynamics (QED). However, the smallness of the electron mass (me = 0.511 MeV) compared to the mass of an atom has always intrigued physicists.

The mystery of the electron mass was compounded by the discovery in 1936 of the muon, with a mass of 207 me but otherwise seemingly identical properties to the electron. This led Isidor Isaac Rabi to quip “who ordered that?”. Four decades later, an even heavier version of the electron was discovered, the tau lepton, with mass mτ = 17 mμ. Yet the seemingly arbitrary values of the masses of the charged leptons are only part of the story. It soon became clear that hadrons were made from quarks that come in three colour charges, mediated by gluons under an SU(3)C gauge theory, quantum chromodynamics (QCD). The up and down quarks of the first family have intrinsic masses mu = 4 me and md = 10 me, accompanied by the charm and strange quarks (mc = 12 mμ and ms = 0.9 mμ) of a second family and the heavyweight top and bottom quarks (mt = 97 mτ and mb = 2.4 mτ) of a third family.
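For readers who want to check these numbers, the short Python sketch below recomputes the quoted ratios from approximate PDG masses (a quick illustration only; the light-quark values are scheme-dependent and carry sizeable uncertainties, so only the rough ratios matter):

# Quick check of the mass ratios quoted above, using approximate PDG masses in MeV.
m_e, m_mu, m_tau = 0.511, 105.7, 1777.0
m_u, m_d, m_s = 2.2, 4.7, 95.0
m_c, m_b, m_t = 1270.0, 4180.0, 173000.0

print(m_mu / m_e, m_tau / m_mu)    # ~207 and ~17
print(m_u / m_e, m_d / m_e)        # ~4 and ~9-10
print(m_c / m_mu, m_s / m_mu)      # ~12 and ~0.9
print(m_t / m_tau, m_b / m_tau)    # ~97 and ~2.4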

It was also realised that the different quark “flavours”, a term invented by Gell-Mann and Fritzsch, could undergo mixing transitions. For example, at the quark level the radioactive decay of a nucleus is explained by the transformation of a down quark into an up quark plus an electron and an electron antineutrino. Shortly after Pauli hypothesised the neutrino in 1930, Fermi proposed a theory of weak interactions based on a contact interaction between the four fermions, with a coupling strength given by a dimensionful constant GF, whose scale was later identified with the mass of the W boson: GF ∼ 1/mW².

After decades of painstaking observation, including the discovery of parity violation, whereby only left-handed particles experience the weak interaction, Fermi’s theory of weak interactions and QED were merged into an electroweak theory based on SU(2)L × U(1)Y gauge theory. The left-handed (L) electron and neutrino form a doublet under SU(2)L, while the right-handed electron is a singlet, with the doublet and singlet carrying hypercharge U(1)Y and the pattern repeating for the second and third lepton families. Similarly, the left-handed up and down quarks form doublets, and so on. The electroweak SU(2)L× U(1)Y symmetry is spontaneously broken to U(1)QED by the vacuum expectation value of the neutral component of a new doublet of complex scalar boson fields called the Higgs doublet. After spontaneous symmetry breaking, this results in massive charged W and neutral Z gauge bosons, and a massive neutral scalar Higgs boson – a picture triumphantly confirmed by experiments at CERN.

To truly shed light on the Standard Model’s flavour puzzle, theorists have explored symmetry groups larger and more complex than that of the Standard Model. The most promising approaches all involve a spontaneously broken family or flavour symmetry. But the flavour-breaking scale may lie anywhere from the Planck scale to the electroweak scale, with grand unified theories suggesting a high flavour scale, while recent hints of anomalies from LHCb and other experiments suggest a low flavour scale.

To illustrate the unknown magnitude of the flavour scale, consider for example the FN mechanism, where Mfl is associated with the breaking of the U(1)fl symmetry. In the SM the top-quark mass of 173 GeV is given by a Yukawa coupling times the Higgs vacuum expectation value of 246 GeV divided by the square root of two. This implies a top-quark Yukawa coupling close to unity. The exact value is not important; what matters is that the top Yukawa coupling is of order unity. From this point of view, the top-quark mass is not at all puzzling – it is the other fermion masses associated with much smaller Yukawa couplings that require explanation. According to FN, the fermions are assigned various U(1)fl charges and small Yukawa couplings are forbidden due to a U(1)fl symmetry. The symmetry is broken by the vacuum expectation value of a new “flavon” field <φ>, where φ is a neutral scalar under the SM but carries one unit of U(1)fl charge. Small Yukawa couplings then originate from an operator (figure 1, right) suppressed by powers of the small ratio <φ>/Mfl (where Mfl acts as a cut-off scale of the contact interaction).
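Schematically, the FN operator referred to here can be written as follows (a minimal sketch, with the exponent n_ij set by the sum of the U(1)fl charges of the fields involved and c_ij an order-one coefficient; normalisations and charge conventions vary between models):

\mathcal{L}_{\rm Yuk} \;\sim\; c_{ij}\left(\frac{\phi}{M_{\rm fl}}\right)^{n_{ij}} \bar{f}_{L,i}\,H\,f_{R,j} \;+\; \text{h.c.}
\qquad\Longrightarrow\qquad
y^{\rm eff}_{ij} \;\sim\; c_{ij}\left(\frac{\langle\phi\rangle}{M_{\rm fl}}\right)^{n_{ij}}.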

For example, suppose that the ratio <φ>/Mfl is identified with the Wolfenstein parameter λ = sinθC = 0.225 (where θC is the Cabibbo angle appearing in the CKM quark-mixing matrix). Then the fermion mass hierarchies can be explained by powers of this ratio, controlled by the assigned U(1)fl charges: me/mτ ∼ λ⁵, mμ/mτ ∼ λ², md/mb ∼ λ⁴, ms/mb ∼ λ², mu/mt ∼ λ⁸ and mc/mt ∼ λ⁴. This shows how fermion masses spanning many orders of magnitude may be interpreted as arising from integer U(1)fl charge assignments of less than 10. However, in this approach, Mfl may be anywhere from the Planck scale to the electroweak scale by adjusting <φ> such that the ratio λ = <φ>/Mfl is held fixed.
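As a back-of-the-envelope illustration, the short Python sketch below evaluates these powers of λ (the exponents are simply those quoted above; in a concrete FN model they follow from the chosen charge assignments, and order-one coefficients are ignored):

lam = 0.225  # Wolfenstein parameter, lambda = sin(theta_C)

# Exponents as quoted in the text; a real FN model derives them from U(1)_fl charges.
exponents = {
    "m_e/m_tau": 5, "m_mu/m_tau": 2,
    "m_d/m_b":   4, "m_s/m_b":    2,
    "m_u/m_t":   8, "m_c/m_t":    4,
}

for ratio, n in exponents.items():
    print(f"{ratio} ~ lambda^{n} = {lam**n:.1e}")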

One possibility for Mfl, reviewed by Kaladi Babu at Oklahoma State University in 2009, is that it is not too far from the scale of grand unified theories (GUTs), of order 10¹⁶ GeV, which is the scale at which the gauge couplings associated with the SM gauge group unify into a single gauge group. The simplest unifying group, SU(5)GUT, was proposed by Georgi and Glashow in 1974, following the work of Pati and Salam based on SU(4)C× SU(2)L× SU(2)R. Both these gauge groups can result from SO(10)GUT, which was discovered by Fritzsch and Minkowski (and independently by Georgi), while many other GUT groups and subgroups have also been studied (figure 2, left). However, GUT groups by themselves only unify quarks and leptons within a given family, and while they may provide an explanation for why mb = 2.4 mτ, as discussed by Babu, they do not account for the fermion mass hierarchies.

Broken symmetries

Figure 2

A way around this, first suggested by Ramond in 1979, is to combine GUTs with family symmetry based on the product group GGUT× Gfl, with symmetries acting in the specific directions shown in the figure “Family affair”. In order not to spoil the unification of the gauge couplings, the flavour-symmetry breaking scale is often assumed to be close to the GUT breaking scale. This also enables the dynamics of whatever breaks the GUT symmetry, be it Higgs fields or some mechanism associated with compactification of extra dimensions, to be applied to the flavour breaking. Thus, in such theories, the GUT and flavour/family symmetry are both broken at or around Mfl ∼ MGUT ∼ 10¹⁶ GeV, as widely discussed by many authors. In this case, it would be impossible given known technology to directly experimentally access the underlying theory responsible for unification and flavour. Instead, we would need to rely on indirect probes such as proton decay (a generic prediction of GUTs and hence of these enlarged SM structures proposed to explain flavour) and/or charged-lepton flavour-violating processes such as μ → eγ (see CERN Courier May/June 2019 p45).

New ideas for addressing the flavour problem continue to be developed. For example, motivated by string theory, Ferruccio Feruglio of the University of Padova suggested in 2017 that neutrino masses might be complex analytic functions called modular forms. The starting point of this novel idea is that non-Abelian discrete family symmetries may arise from superstring theory in compactified extra dimensions, as a finite subgroup of the modular symmetry of such theories (i.e. the symmetry associated with the non-unique choice of basis vectors spanning a given extra-dimensional lattice). It follows that the 4D effective Lagrangian must respect modular symmetry. This, Feruglio observed, implies that Yukawa couplings may be modular forms. So if the leptons transform as triplets under some finite subgroup of the modular symmetry, then the Yukawa couplings themselves must also transform as triplets, but with a well-defined structure depending on only one free parameter: the complex modulus field. At a stroke, this removes the need for flavon fields and ad hoc vacuum alignments to break the family symmetry, and potentially greatly simplifies the particle content of the theory.

Compactification

Although this approach is currently actively being considered, it is still unclear to what extent it may shed light on the entire flavour problem including all quark and lepton mass hierarchies. Alternative string-theory motivated ideas for addressing the flavour problem are also being developed, including the idea that flavons can arise from the components of extra-dimensional gauge fields and that their vacuum alignment may be achieved as a consequence of the compactification mechanism.

The discovery of neutrino mass and mixing makes the flavour puzzle hard to ignore

Recently, there have been some experimental observations concerning charged lepton flavour universality violation which hint that the flavour scale might not be associated with the GUT scale, but might instead be just around the corner at the TeV scale (CERN Courier May/June 2019 p33). Recall that in the SM the charged leptons e, μ and τ interact identically with the gauge forces, and differ only in their masses, which result from having different Yukawa couplings to the Higgs doublet. This charged lepton flavour universality has been the subject of intense experimental scrutiny over the years and has passed all the tests – until now. In recent years, anomalies have appeared associated with violations of charged lepton flavour universality in the final states associated with the quark transitions b → c and b → s.

Puzzle solving

In the case of b → c transitions, the final states involving τ leptons appear to violate charged lepton universality. In particular, B → D(*)ℓν decays where the charged lepton ℓ is identified with the τ have been shown by BaBar and LHCb to occur at rates somewhat higher than those predicted by the SM (the ratios of such final states to those involving electrons and muons being denoted by RD and RD*). This is quite puzzling since all three types of charged leptons are predicted to couple to the W boson equally, and the decay is dominated by tree-level W exchange. Any new-physics contribution, such as the exchange of a new charged Higgs boson, a new W′ or a leptoquark, would have to compete with tree-level W exchange. However, the most recent measurements by Belle, reported at the beginning of 2019 (CERN Courier May/June 2019 p9), find RD and RD* to be closer to the SM prediction.

In the case of b → s transitions, the LHCb collaboration and other experiments have reported a number of anomalies in B → K(*)ℓ+ℓ− decays, such as the RK and RK* ratios of final states containing μ+μ− versus e+e−, which are measured to deviate from the SM by about 2.5 standard deviations. Such anomalies, if they persist, may be accounted for by a new contact operator coupling the four left-handed fermions bL, sL, μL and μL, suppressed by a dimensionful coefficient 1/M²new, where Mnew ~ 30 TeV according to a general operator analysis. This hints that there may be new physics arising from the non-universal couplings of a leptoquark and/or a new Z′ whose mass is typically a few TeV in order to generate such an operator (the 30 TeV scale being reduced to just a few TeV once mixing angles are taken into account). However, the introduction of these new particles increases the SM parameter count still further, and only serves to make the flavour problem of the SM worse.
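Written out, the contact operator invoked here takes the left-handed current–current form below (a sketch; the overall normalisation and the precise Lorentz structure preferred by global fits vary between analyses):

\mathcal{O}_{bs\mu\mu} \;=\; \frac{1}{M_{\rm new}^{2}}\,\left(\bar{b}_{L}\gamma^{\alpha}s_{L}\right)\left(\bar{\mu}_{L}\gamma_{\alpha}\mu_{L}\right), \qquad M_{\rm new}\sim 30\ \text{TeV}.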

Link-up

Figure 3

Motivated by such considerations, it is tempting to speculate that these recent empirical hints of flavour non-universality may be linked to a possible theory of flavour. Several authors have hinted at such a connection; for example, Riccardo Barbieri of Scuola Normale Superiore, Pisa, and collaborators have related these observations to a U(2)⁵ flavour symmetry in an effective-theory framework. In addition, concrete models have recently been constructed that directly relate the effective Yukawa couplings to the effective leptoquark and/or Z′ couplings. In such models the scale of new physics associated with the mass of the leptoquark and/or a new Z′ may be identified with the flavour scale Mfl defined earlier, except that it should not be too far from the TeV scale in order to explain the anomalies. To achieve the desired link, the effective leptoquark and/or Z′ couplings may be generated by the same kinds of operators responsible for the effective Higgs Yukawa couplings (figure 3).

In such a model the couplings of leptoquarks and/or Z′ bosons may be related to the Higgs Yukawa couplings, with all couplings arising effectively from mixing with a vector-like fourth family. The considered model predicts, apart from the TeV-scale leptoquark and/or Z′ and a slightly heavier fourth family, extra flavour-changing processes such as τ → μμμ. The model in its current form does not have any family symmetry, and explains the hierarchy of the quark masses in terms of the vector-like fourth-family masses, which are free parameters. Crucially, the required TeV-scale Z′ mass is given by MZ′ ~ <φ> ~ TeV, which would fix the flavour scale at Mfl ~ a few TeV. In other words, if the hints of flavour anomalies hold up as further data are collected by LHCb, Belle II and other experiments, the origin of flavour may be right around the corner.

The post Who ordered all of that? appeared first on CERN Courier.

]]>
Feature Explaining the bizarre pattern of fermion types and masses has led theorists to suggest that the “flavour scale” could be at a much lower energy than previously thought. https://cerncourier.com/wp-content/uploads/2020/01/CCJanFeb20-Flavour-Frontis-1.jpg
When twistors met loops https://cerncourier.com/a/when-twistors-met-loops/ Tue, 26 Nov 2019 11:00:10 +0000 https://preview-courier.web.cern.ch/?p=85562 Carlo Rovelli reports on a conference which brought together two communities that have evolved independently for many years.

The post When twistors met loops appeared first on CERN Courier.

]]>
Loop Quantum Gravity and Twistor Theory have a lot in common. They both have quantum gravity as a main objective, they both discard conventional spacetime as the cornerstone of physics, and they have both taken major inspiration from the renowned British mathematician Roger Penrose. Interaction between the two communities has been minimal so far, however, due to their distinct research styles: mathematically oriented in Twistor Theory, but focused on empirical support in Loop Gravity. This separation was addressed in the first week of September at a conference held at the Centre for Mathematical Researches (CIRM) at the Campus of Luminy in Marseille, where about a hundred researchers converged for lively debates designed to encourage cross-fertilisation between the two research lines.

Both Twistor Theory and Loop Gravity regard conventional smooth general-relativistic spacetime as an approximate and emergent notion. Twistor theory was proposed by Roger Penrose as a general geometric framework for physics, with the long-term aim of unifying general relativity and quantum mechanics. The main idea of the theory is to work with null rays, namely the space of possible paths that a light ray can follow in spacetime, instead of the manifold of points of physical spacetime. Spacetime points, or events, are then seen as derived objects: they are given by compact holomorphic curves in a complex three-fold known as twistor space. It is remarkable how much the main equations of fundamental physics simplify when formulated in these terms. The mathematics of twistors has roots in the 19th-century Klein correspondence in projective geometry, and modern Twistor Theory has had a strong impact on pure mathematics, from differential geometry and representation theory to gauge theories and integrable systems.

Could allying twistors and loops be dangerous?

Loop gravity, on the other hand, is a background-independent theory of quantum gravity. That is, it does not treat spacetime as the background on which physics happens, but rather as a dynamical entity itself satisfying quantum theory. The conventional smooth general-relativistic spacetime emerges in the classical (ℏ→0) limit, in the same manner as a smooth electromagnetic field satisfying the Maxwell equations emerges from the Fock space of photons in the classical limit of quantum electrodynamics. Similarly, the full dynamics of classical general relativity is recovered from the quantum dynamics of Loop Gravity in a suitable limit. The transition amplitudes of the theory are finite in the ultraviolet and are expressed as multiple integrals over non-compact groups. The theory provides a compelling picture of quantum spacetime. A basis in the Hilbert space of the theory is described by the mathematics of spin networks: graphs with links labelled by SU(2) irreducible representations, independently introduced by Roger Penrose in the early 1970s in an attempt at a fully discrete, combinatorial picture of quantum physical space. Current applications of Loop Gravity include early cosmology, where the possibility of a bounce replacing the Big Bang has been extensively studied using Loop Gravity methods, and black holes, where the theory’s amplitudes can be used to study the non-perturbative transition at the end of Hawking evaporation.

The communities working in Twistors and Loops share technical tools and conceptual pillars, but have evolved independently for many years, with different methods and different intermediate goals.  But recent developments discussed at the Marseille conference saw twistors appearing in formulations of the loop gravity amplitudes, confirming the fertility and the versatility of the twistor idea, and raising intriguing questions about possible deeper relations between the two theories.

The conference was a remarkable success. It is not easy to communicate across research programs in contemporary fundamental physics, because a good part of the field is stalled in communities blocked by conflicting assumptions, ingrained prejudices and seldom questioned judgments, making understanding one another difficult. The vibrant atmosphere of the Marseille conference cut through this.

The best moment came during Roger Penrose’s talk. Towards the end of a long and dense presentation of new ideas towards understanding the full space of the solutions of Einstein’s theory using twistors, Roger said rather dramatically that now he was going to present a new big idea that might lead to the twistor version of the full Einstein equations  – but at that precise moment the slide projector exploded in a cloud of smoke, with sparks flying. We all thought for a moment that a secret power of the Universe, worried about being unmasked, had interfered. Could allying twistors and loops be dangerous?

The post When twistors met loops appeared first on CERN Courier.

]]>
Meeting report Carlo Rovelli reports on a conference which brought together two communities that have evolved independently for many years. https://cerncourier.com/wp-content/uploads/2019/11/Rovelli-Twistors-Penrose-small.jpg
Gauge–gravity duality opens new horizons https://cerncourier.com/a/gauge-gravity-duality-opens-new-horizons/ Wed, 13 Nov 2019 15:07:18 +0000 https://preview-courier.web.cern.ch/?p=85218 In 1997, Juan Maldacena conjectured a deep relationship between gravity and quantum field theory.

The post Gauge–gravity duality opens new horizons appeared first on CERN Courier.

]]>
What, in a nutshell, did you uncover in your famous 1997 work, which became the most cited in high-energy physics?
Juan Maldacena

The paper conjectured a relation between certain quantum field theories and gravity theories. The idea was that a strongly coupled quantum system can generate complex quantum states that have an equivalent description in terms of a gravity theory (or a string theory) in a higher-dimensional space. The paper considered special theories that have lots of symmetries, including scale invariance, conformal invariance and supersymmetry, and the fact that those symmetries were present on both sides of the relationship was one of the pieces of evidence for the conjecture. The main argument relating the two descriptions involved objects that appear in string theory called D-branes, which are a type of soliton. Polchinski had previously given a very precise description of the dynamics of D-branes. At low energies a soliton can be described by its centre-of-mass position: if you have N solitons you will have N positions. With D-branes it is the same, except that when they coincide there is a non-Abelian SU(N) gauge symmetry that relates these positions. So this low-energy theory resembles quantum chromodynamics, except with N colours and special matter content.

On the other hand, these D-brane solitons also have a gravitational description, found earlier by Horowitz and Strominger, in which they look like “black branes” – objects similar to black holes but extended along certain spatial directions. The conjecture was simply that these two descriptions should be equivalent. The gravitational description becomes simple when N and the effective coupling are very large.
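For orientation, in the best-studied example of the conjecture – N = 4 super-Yang–Mills with gauge group SU(N) dual to type IIB strings on AdS5 × S5 – the standard dictionary relates the parameters roughly as follows (conventions differ between references):

\frac{R^{4}}{\alpha'^{2}} \;=\; g_{\rm YM}^{2}N \;\equiv\; \lambda, \qquad g_{s} \;=\; \frac{g_{\rm YM}^{2}}{4\pi},

so the curvature radius R is large in string units, and the gravity side is weakly curved, precisely when the ’t Hooft coupling λ and N are both large – the limit described above.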

Did you stumble across the duality, or had you set out to find it?

It was based on previous work on the connection between D-branes and black holes. The first major result in this direction was the computation of Strominger and Vafa, who considered an extremal black hole and compared it to a collection of D-branes. By computing the number of states into which these D-branes can be arranged, they found that it matched the Bekenstein–Hawking black-hole entropy given in terms of the area of the horizon. Such black holes have zero temperature. By slightly exciting these black holes some of us were attempting to extend such results to non-zero temperatures, which allowed us to probe the dynamics of those nearly extremal black holes. Some computations gave similar answers, sometimes exactly, sometimes up to coefficients. It was clear that there was a deep relation between the two, but it was unclear what the concrete relation was. The gravity–gauge (AdS/CFT) conjecture clarified the relationship.

Are you surprised by its lasting impact?

Yes. At the time I thought that it was going to be interesting for people thinking about quantum gravity and black holes. But the applications that people found to other areas of physics continue to surprise me. It is important for understanding quantum aspects of black holes. It was also useful for understanding very strongly coupled quantum theories. Most of our intuition for quantum field theory is for weakly coupled theories, but interesting new phenomena can arise at strong coupling. These examples of strongly coupled theories can be viewed as useful calculable toy models. The art lies in extracting the right lessons from them. Some of the lessons include possible bounds on transport, a bound on chaos, etc. These applications involved a great deal of ingenuity since one has to extract the right lessons from the examples we have in order to apply them to real-world systems.

What does the gravity–gauge duality tell us about nature, given that it relates two pictures (e.g. involving different dimensionalities of space) that have not yet been shown to correspond to the physical world?

It suggests that the quantum description of spacetime can be in terms of degrees of freedom that are not localised in space. It also says that black holes are consistent with quantum mechanics, when we look at them from the outside. More recently, it was understood that when we try to describe the black-hole interior, then we find surprises. What we encounter in the interior of a black hole seems to depend on what the black hole is entangled with. At first this looks inconsistent with quantum mechanics, since we cannot influence a system through entanglement. But it is not. Standard quantum mechanics applies to the black hole as seen from the outside. But to explore the interior you have to jump in, and you cannot tell the outside observer what you encountered inside.

One of the most interesting recent lessons is the important role that entanglement plays in constructing the geometry of spacetime. This is particularly important for the black-hole interior.

I suspect that with the advent of quantum computers, it will become increasingly possible to simulate these complex quantum systems that have some features similar to gravity. This will likely lead to more surprises.

In what sense does AdS/CFT allow us to discuss the interior of a black hole?

It gives us directly a view of a black hole from the outside, more precisely a view of the black hole from very far away. In principle, from this description we should be able to understand what goes on in the interior. While there has been some progress on understanding some aspects of the interior, a full understanding is still lacking. It is important to understand that there are lots of weird possibilities for black-hole interiors. Those we get from gravitational collapse are relatively simple, but there are solutions, such as the full two-sided Schwarzschild solution, where the interior is shared between two black holes that are very far away. The full Schwarzschild solution can therefore be viewed as two entangled black holes in a particular state called the thermofield double, a suggestion made by Werner Israel in the 1970s. The idea is that by entangling two black holes we can create a geometric connection through their interiors: the black holes can be very far away, but the distance through the interior could be very short. However, the geometry is time-dependent and signals cannot go from one side to the other. The geometry inside is like a collapsing wormhole that closes off before a signal can go through. In fact, this is a necessary condition for the interpretation of these geometries as entangled states, since we cannot send signals using entanglement. Susskind and myself have emphasised this connection via the “ER=EPR” slogan. This says that EPR correlations (or entanglement) should generally give rise to some sort of “geometric” connection, or Einstein–Rosen bridge, between the two systems. The Einstein–Rosen bridge is the geometric connection between two black holes present in the full Schwarzschild solution.
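For concreteness, the thermofield double mentioned here is the entangled state of two copies (left and right) of the system that is conventionally written as

|\mathrm{TFD}\rangle \;=\; \frac{1}{\sqrt{Z(\beta)}}\sum_{n} e^{-\beta E_{n}/2}\,|n\rangle_{L}\otimes|n\rangle_{R},

where β is the inverse temperature and Z(β) the partition function; tracing out either copy leaves an exactly thermal density matrix, which is why each side looks like an ordinary black hole at temperature 1/β.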

Are there potential implications of this relationship for intergalactic travel?

Gao, Jafferis and Wall have shown that an interesting new feature appears when one brings two entangled black holes close to each other. Now there can be a direct interaction between the two black holes and the thermofield double state can be close to the ground state of the combined system. In this case, the geometry changes and the wormhole becomes traversable.

One can find solutions of the Standard Model plus gravity that look like two microscopic magnetically charged black holes joined by a wormhole

In fact, as shown by Milekhin, Popov and myself, one can find solutions of the Standard Model plus gravity that look like two microscopic magnetically charged black holes joined by a wormhole. We could construct a controllable solution only for small black holes because we needed to approximate the fermions as being massless.

If one wanted a big macroscopic wormhole where a human could travel, then it would be possible with suitable assumptions about the dark sector. We’d need a dark U(1) gauge field and a very large number of massless fermions charged under U(1). In that case, a pair of magnetically charged black holes would enable one to travel between distant places. There is one catch: the time it would take to travel, as seen by somebody who stays outside the system, would be longer than the time it takes light to go between the two mouths of the wormhole. This is good, since we expect that causality should be respected. On the other hand, due to the large warping of the spacetime in the wormhole, the time the traveller experiences could be much shorter. So it seems similar to what would be experienced by an observer that accelerates to a very high velocity and then decelerates. Here, however, the force of gravity within the wormhole is doing the acceleration and deceleration. So, in theory, you can travel with no energy cost.

How does AdS/CFT relate to broader ideas in quantum information theory and holography?

Quantum information has been playing an important role in understanding how holography (or AdS/CFT) works. One important development is a formula, due to Ryu and Takayanagi, for the fine-grained entropy of gravitational systems, such as a black hole. It is well known that the area of the horizon gives the coarse-grained, or thermodynamic, entropy of a black hole. The fine-grained entropy, by contrast, is the actual entropy of the full quantum density matrix describing the system. Surprisingly, this entropy can also be computed in terms of the area of a surface. But it is not the horizon: it is typically a surface that lies in the interior and has minimal area.
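The Ryu–Takayanagi prescription referred to here can be stated compactly: for a boundary region A, the fine-grained entropy is given by the area of the minimal bulk surface γ_A anchored on A (quantum corrections add further terms, omitted in this sketch),

S(A) \;=\; \min_{\gamma_{A}} \frac{\mathrm{Area}(\gamma_{A})}{4 G_{N}},

which reduces to the Bekenstein–Hawking area law when the minimal surface happens to coincide with a horizon.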

If you could pick any experiment to be funded and built, what would it be?

Well, I would build a higher energy collider, of say 100 TeV, to understand better the nature of the Higgs potential and look for hints of new physics. As for smaller scale experiments, I am excited about the current prospects to manipulate quantum matter and create highly entangled states that would have some of the properties that black holes are supposed to have, such as being maximally chaotic and allowing the kind of traversable wormholes described earlier.

How close are we to a unified theory of nature’s interactions?

String theory gives us a framework that can describe all the known interactions. It does not give a unique prediction, and the accommodation of a small cosmological constant is possible thanks to the large number of configurations that the internal dimensions can acquire. This whole framework is based on Kaluza–Klein compactifications of 10D string theories. It is possible that a deeper understanding of quantum gravity for cosmological solutions will give rise to a probability measure on this large set of solutions that will allow us to make more concrete predictions.

The post Gauge–gravity duality opens new horizons appeared first on CERN Courier.

]]>
Opinion In 1997, Juan Maldacena conjectured a deep relationship between gravity and quantum field theory. https://cerncourier.com/wp-content/uploads/2019/11/Maldacena.jpg
Redeeming the role of mathematics https://cerncourier.com/a/redeeming-the-role-of-mathematics/ Wed, 09 Oct 2019 08:14:27 +0000 https://preview-courier.web.cern.ch/?p=84795 Graham Farmelo’s new book sheds light on the current debate on the role of mathematics in theoretical physics, says CERN's former head of theory, Wolfgang Lerche.

The post Redeeming the role of mathematics appeared first on CERN Courier.

]]>
A currently popular sentiment in some quarters is that theoretical physics has dived too deeply into mathematics, and lost contact with the real world. Perhaps, it is surmised, the edifice of quantum gravity and string theory is in fact a contrived Rube-Goldberg machine, or a house of cards which is about to collapse – especially given that one of the supporting pillars, namely supersymmetry, has not been discovered at the LHC. Graham Farmelo’s new book sheds light on this issue.

The Universe Speaks in Numbers, reads Farmelo’s title. With hindsight this allows a double interpretation: first, that it is primarily mathematical structure that underlies nature; on the other hand, one can read it as a caution that the universe speaks to us purely via measured numbers, and that theorists should pay attention to that. The majority of physicists would likely support both interpretations, and agree that there is no real tension between them.

The author, who was a theoretical physicist before becoming an award-winning science writer, does not embark on a detailed scientific discussion of these matters, but provides a historical tour de force of the relationship between mathematics and physics, and their tightly correlated evolution. At the time of the ancient Greeks there was no distinction between these fields, and it was only from about the 19th century onwards that they were viewed as separate. Evidently, a major factor was the growing role of experiments, which provided a firmer grounding in the physical world than what had previously been called natural philosophy.

Theoretical physicists should not allow themselves to be distracted by every surprising experimental finding

Paul Dirac

The book follows the mutual fertilisation of mathematics and physics through the last few centuries, as the disciplines gained momentum with Newton, and exploded in the 20th century. Along the way it peeks into the thinking of notable mathematicians and physicists, often with strong opinions. For example, Dirac, a favourite of the author, is quoted as reflecting both that “Einstein failed because his mathematical basis… was not broad enough” and that “theoretical physicists should not allow themselves to be distracted by every surprising experimental finding.” The belief that mathematical structure is at the heart of physics and that experimental results ought to have secondary importance holds sway in this section of the book. Such thinking is perhaps the result of selection bias, however, as only scientists with successful theories are remembered.

The detailed exposition makes the reader vividly aware that the relationship between mathematics and physics is a roller-coaster loaded with mutual admiration, contempt, misunderstandings, split-ups and re-marriages. Which brings us, towards the end of the book, to the current state of affairs in theoretical high-energy physics, which most of us in the profession would agree is characterised by extreme mathematical and intellectual sophistication, paired with a stunning lack of experimental support. After many decades of flourishing interplay, which provided, for example, the group-theoretical underpinning of the quark model, the geometry of gauge theories, the algebraic geometry of supersymmetric theories and finally strings, is there a new divorce ahead? It appears that some not only desire, but relish the lack of supporting experimental evidence. This concern is also expressed by the author, who criticises self-declared experts who “write with a confidence that belies the evident slightness of their understanding of the subject they are attacking”.

The last part of the book is the least readable. Based on personal interactions with physicists, the exposition becomes too detailed to be of use to the casual, or lay reader. While there is nothing wrong with the content, which is exciting, it will only be meaningful to people who are already familiar with the subject. On the positive side, however, it gives a lively and accurate snapshot of today’s sociology in theoretical particle physics, and of influential but less well known characters in the field.

The Universe Speaks in Numbers illuminates the role of mathematics in physics in an easy-to-grasp way, exhibiting in detail their interactive co-evolution until today. A worthwhile read for anybody, the book is best suited for particle physicists who are close to the field.

The post Redeeming the role of mathematics appeared first on CERN Courier.

]]>
Review Graham Farmelo’s new book sheds light on the current debate on the role of mathematics in theoretical physics, says CERN's former head of theory, Wolfgang Lerche. https://cerncourier.com/wp-content/uploads/2019/10/E8Petrie.png
Particle physics meets gravity in the Austrian Alps https://cerncourier.com/a/particle-physics-meets-gravity-in-the-austrian-alps/ Thu, 12 Sep 2019 07:39:01 +0000 https://preview-courier.web.cern.ch/?p=84342 The meeting was sponsored by the Humboldt Foundation, based in Bonn, whose mission is to promote cooperation between scientists in Germany and elsewhere.

The post Particle physics meets gravity in the Austrian Alps appeared first on CERN Courier.

]]>
Humboldt Kolleg participants

The Humboldt Kolleg conference Discoveries and Open Puzzles in Particle Physics and Gravitation took place at Kitzbühel in the Austrian Alps from 24 to 28 June, bringing Humboldt prize winners, professors and research-fellow alumni together with prospective future fellows. The meeting was sponsored by the Humboldt Foundation, based in Bonn, whose mission is to promote cooperation between scientists in Germany and elsewhere. The programme focused on connections between particle physics and the large-scale cosmological structure of the universe.

The most recent LHC experimental results were presented by Karl Jakobs (Freiburg and ATLAS spokesperson), confirming the status of the Standard Model (SM). A key discussion topic raised by Fred Jegerlehner (DESY-Zeuthen) is whether the SM’s symmetries might be “emergent” at the relatively low energies of current experiments: in contrast to unification models that exhibit maximal symmetry at the highest energies, the gauge symmetries could emerge in the infrared, but “dissolve” in the extreme ultraviolet. Consider the analogy of a carpet: it looks flat and invariant under translations when viewed from a distance, but this smoothness dissolves when we look at it close up, e.g. as perceived by an ant crawling on it. A critical system close to the Planck scale – the scale where quantum-gravity effects should be important – could behave similarly: the only modes that can exist as long-range correlations, e.g. light-mass particles, self-organise into multiplets with a small number of particles, just as they do in the SM. The vector modes become the gauge bosons of U(1), SU(2) and SU(3); low-energy symmetries such as baryon- and lepton-number conservation would all be violated close to the Planck scale.

Ideas connecting particle physics and quantum computing were also discussed by Peter Zoller (Innsbruck) and Erez Zohar (MPQ, Munich). Here, one takes a lattice field theory that is theoretically difficult to solve and maps it onto a fully controllable quantum system such as an optical lattice that can be programmed in experiments to do calculations – a quantum simulator. First promising results with up to 20 qubits have been obtained for the Schwinger model (QED in 1+1 dimensions). This model exhibits dynamical mass generation and is a first prototype before looking at more complicated theories like QCD.
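For reference, the Schwinger model mentioned here is simply QED restricted to one space and one time dimension, with Lagrangian (a sketch; the sign convention in the covariant derivative varies between texts)

\mathcal{L} \;=\; \bar{\psi}\,\big(i\gamma^{\mu}(\partial_{\mu} + i e A_{\mu}) - m\big)\psi \;-\; \tfrac{1}{4}F_{\mu\nu}F^{\mu\nu},

and after discretising space and applying a Jordan–Wigner transformation it maps onto a chain of qubits, which is what makes it a natural first target for the programmable quantum simulators described above.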

The cosmological constant is related to the vacuum energy density, which is in turn connected to possible phase transitions in the early universe.

A key puzzle concerns the hierarchies of scales: the small ratio of the Higgs-boson mass to the Planck scale plus the very small cosmological constant that drives the accelerating expansion of the universe. Might these be related? The cosmological constant is related to the vacuum energy density, which is in turn connected to possible phase transitions in the early universe. Future gravitational-wave experiments with LISA were discussed by Stefano Vitale (Trento) and are expected to be sensitive to the effects of these phase transitions.

A main purpose of Humboldt Kolleg is the promotion of young scientists from the central European region. Student poster prizes sponsored by the Kitzbühel mayor Klaus Winkler were awarded to Janina Krzysiak (IFJ PAN, Krakow) and Jui-Lin Kuo (HEPHY, Vienna).

The post Particle physics meets gravity in the Austrian Alps appeared first on CERN Courier.

]]>
Meeting report The meeting was sponsored by the Humboldt Foundation, based in Bonn, whose mission is to promote cooperation between scientists in Germany and elsewhere. https://cerncourier.com/wp-content/uploads/2019/09/CCSepOct19_fn-alps.jpg
Topological avatars of new physics https://cerncourier.com/a/topological-avatars-of-new-physics/ Thu, 11 Jul 2019 08:16:55 +0000 https://preview-courier.web.cern.ch?p=83603 A field configuration is topologically non-trivial if it exhibits the topology of a “mathematical knot” in some space, real or otherwise.

The post Topological avatars of new physics appeared first on CERN Courier.

]]>

Topologically non-trivial solutions of quantum field theory have always been a theoretically “elegant” subject, covering all sorts of interesting and physically relevant field configurations, such as magnetic monopoles, sphalerons and black holes. These objects have played an important role in shaping quantum field theories and have provided important physical insights into cosmology, particle colliders and condensed-matter physics.

In layman’s terms, a field configuration is topologically non-trivial if it exhibits the topology of a “mathematical knot” in some space, real or otherwise. A mathematical knot (or a higher-dimensional generalisation such as a Möbius strip) is not like a regular knot in a piece of string: it has no ends and cannot be continuously deformed into a topologically trivial configuration like a circle or a sphere.

One of the most conceptually simple non-trivial configurations arises in the classification of solitons, which are finite-energy extended configurations of a scalar field behaving like the Higgs field. Among the various finite-energy classical solutions for the Higgs field, there are some that cannot be continuously deformed into the vacuum without an infinite cost in energy, and are therefore “stable”. For finite-energy configurations that are spherically symmetric, the Higgs field must map smoothly onto its vacuum solution at the boundary of space.

The ’t Hooft–Polyakov monopole, which is predicted to exist in grand unified theories, is one such finite-energy topologically non-trivial solitonic configuration. The black hole is an example from general relativity of a singular space–time configuration with a non-trivial space–time topology. The curvature of space–time blows up in the singularity at the centre, and this cannot be removed either by continuous deformations or by coordinate changes: its nature is topological.

Such configurations constituted the main theme of a recent Royal Society Hooke meeting “Topological avatars of new physics”, which took place in London from 4–5 March. The meeting focused on theoretical modelling and experimental searches for topologically important solutions of relativistic quantum field theories in particle physics, general relativity and cosmology, and quantum gravity. Of particular interest were topological objects that could potentially be detectable at the Large Hadron Collider (LHC), or at future colliders.

Gerard ’t Hooft opened the scientific proceedings with an inspiring talk on formulating a black hole in a way consistent with quantum mechanics and time-reversal symmetry, before Steven Giddings described his equally interesting proposal. Another highlight was Nicholas Manton’s talk on the inevitability of topologically non-trivial unstable configurations of the Higgs field – “sphalerons” – in the Standard Model. Henry Tye said sphalerons can in principle be produced at the (upgraded) LHC or future linear colliders. A contradictory view was taken by Sergei Demidov, who predicted that their production will be strongly suppressed at colliders.

One of the exemplars of topological physics receiving significant experimental attention is the magnetic monopole

A major part of the workshop was devoted to monopoles. The theoretical framework of light monopoles within the Standard Model, possibly producible at the LHC, was presented by Yong Min Cho. These “electroweak” monopoles have twice the magnetic charge of Dirac monopoles. Like the ’t Hooft–Polyakov monopole, but unlike the Dirac monopole, they are solitonic structures, with the Higgs field playing a crucial role. Arttu Rajantie considered relatively unsuppressed thermal production of generic monopole–antimonopole pairs in the presence of the extremely high temperatures and strong magnetic fields of heavy-ion collisions at the LHC. David Tong discussed the ambiguities in the gauge group of the Standard Model, and how these could affect monopoles that are admissible solutions of such gauge field theories. Importantly, such solutions give rise to potentially observable phenomena at the LHC and at future colliders. Ana Achúcarro and Tanmay Vachaspati reported on fascinating computer simulations of monopole scattering, as well as numerical studies of cosmic strings and other topologically non-trivial defects of relevance to cosmology.

One of the exemplars of topological physics currently receiving significant experimental attention is the magnetic monopole. The MoEDAL experiment at the LHC has reported world-leading limits on multiply magnetically charged monopoles, and Albert de Roeck gave a wide-ranging report on the search for the monopole and other highly-ionising particles, with Laura Patrizii and Adrian Bevan also reporting on these searches and the machine-learning techniques employed in them.

Supersymmetric scenarios can consistently accommodate all the aforementioned topologically non-trivial field theory configurations. Doubtless, as John Ellis described, the story of the search for this beautiful – but as yet hypothetical – new symmetry of nature is a long way from being over. Last but not least were two inspiring talks by Juan García-Bellido and Marc Kamionkowski on the role of primordial black holes as dark matter, and their potential detection by means of gravitational waves.

The workshop ended with a vivid round-table discussion of the importance of a new ~100 TeV collider. The aim of this machine is to explore beyond the historic watershed represented by the discovery of the Higgs boson, and to move us closer to understanding the origin of elementary particles, and indeed space–time itself. This Hooke workshop clearly demonstrated the importance of topological avatars of new physics to such a project.

The post Topological avatars of new physics appeared first on CERN Courier.

]]>
Meeting report A field configuration is topologically non-trivial if it exhibits the topology of a “mathematical knot” in some space, real or otherwise. https://cerncourier.com/wp-content/uploads/2019/07/CCJulAug19_FN-topological-1.jpg
In it for the long haul https://cerncourier.com/a/in-it-for-the-long-haul/ Mon, 11 Mar 2019 16:42:51 +0000 https://preview-courier.web.cern.ch?p=13547 We have conquered the easiest challenges in fundamental physics, says Nima Arkani-Hamed. The case for building the next major collider is now more compelling than ever.

The post In it for the long haul appeared first on CERN Courier.

]]>
Nima Arkani-Hamed

How do you view the status of particle physics?

There has never been a better time to be a physicist. The questions on the table today are not about this-or-that detail, but profound ones about the very structure of the laws of nature. The ancients could (and did) wonder about the nature of space and time and the vastness of the cosmos, but the job of a professional scientist isn’t to gape in awe at grand, vague questions – it is to work on the next question. Having ploughed through all the “easier” questions for four centuries, these very deep questions finally confront us: what are space and time? What is the origin and fate of our enormous universe? We are extremely fortunate to live in the era when human beings first get to meaningfully attack these questions. I just wish I could adjust when I was born so that I could be starting as a grad student today! But not everybody shares my enthusiasm. There is cognitive dissonance. Some people are walking around with their heads hanging low, complaining about being disappointed or even depressed that we’ve “only discovered the Higgs and nothing else”.

So who is right?

It boils down to what you think particle physics is really about, and what motivates you to get into this business. One view is that particle physics is the study of the building blocks of matter, in which “new physics” means “new particles”. This is certainly the picture of the 1960s leading to the development of the Standard Model, but it’s not what drew me to the subject. To me, “particle physics” is the study of the fundamental laws of nature, governed by the still mysterious union of space–time and quantum mechanics. Indeed, from the deepest theoretical perspective, the very definition of what a particle is invokes both quantum mechanics and relativity in a crucial way. So if the biggest excitement for you is a cross-section plot with a huge bump in it, possibly with a ticket to Stockholm attached, then, after the discovery of the Higgs, it makes perfect sense to take your ball and go home, since we can make no guarantees of this sort whatsoever. We’re in this business for the long haul of decades and centuries, and if you don’t have the stomach for it, you’d better do something else with your life!

Isn’t the Standard Model a perfect example of the scientific method?

Sure, but part of the reason for the rapid progress in the 1960s is that the intellectual structure of relativity and quantum mechanics was already sitting there to be explored and filled in. But these more revolutionary discoveries took much longer, involving a wide range of theoretical and experimental results far beyond “bump plots”. So “new physics” is much more deeply about “new phenomena” and “new principles”. The discovery of the Higgs particle – especially with nothing else accompanying it so far – is unlike anything we have seen in any state of nature, and is profoundly “new physics” in this sense. The same is true of the other dramatic experimental discovery in the past few decades: that of the accelerating universe. Both discoveries are easily accommodated in our equations, but theoretical attempts to compute the vacuum energy and the scale of the Higgs mass pose gigantic, and perhaps interrelated, theoretical challenges. While we continue to scratch our heads as theorists, the most important path forward for experimentalists is completely clear: measure the hell out of these crazy phenomena! From many points of view, the Higgs is the most important actor in this story amenable to experimental study, so I just can’t stand all the talk of being disappointed by seeing nothing but the Higgs; it’s completely backwards. I find that the physicists who worry about not being able to convince politicians are (more or less secretly) not able to convince themselves that it is worth building the next collider. Fortunately, we do have a critical mass of fantastic young experimentalists who believe it is worth studying the Higgs to death, while also exploring whatever might be at the energy frontier, with no preconceptions about what they might find.

What makes the Higgs boson such a rich target for a future collider?

It is the first example we’ve seen of the simplest possible type of elementary particle. It has no spin, no charge, only mass, and this extreme simplicity makes it theoretically perplexing. There is a striking difference between massive and massless particles that have spin. For instance, a photon is a massless particle of spin one; because it moves at the speed of light, we can’t “catch up” with it, and so we only see it have two “polarisations”, or ways it can spin. By contrast the Z boson, which also has spin one, is massive; since you can catch up with it, you can see it spinning in any of three directions. This “two not equal to three” business is quite profound. As we collide particles at ever increasing energies, we might think that their masses are irrelevant tiny perturbations to their energies, but this is wrong, since something must account for the extra degrees of freedom.

The whole story of the Higgs is about accounting for this “two not equal to three” issue, to explain the extra spin states needed for massive W and Z particles mediating the weak interactions. And this also gives us a good understanding of why the masses of the elementary particles should be pegged to that of the Higgs. But the huge irony is that we don’t have any good understanding for what can explain the mass of the Higgs itself. That’s because there is no difference in the number of degrees of freedom between massive and massless spin-zero particles, and related to this, simple estimates for the Higgs mass from its interactions with virtual particles in the vacuum are wildly wrong. There are also good theoretical arguments, amply confirmed in analogous condensed-matter systems and elsewhere in particle physics, for why we shouldn’t have expected to see such a beast lonely, unaccompanied by other particles. And yet here we are. Nature clearly has other ideas for what the Higgs is about than theorists do.

Is supersymmetry still a motivation for a new collider?

Nobody who is making the case for future colliders is invoking, as a driving motivation, supersymmetry, extra dimensions or any of the other ideas that have been developed over the past 40 years for physics beyond the Standard Model. Certainly many of the versions of these ideas, which were popular in the 1980s and 1990s, are either dead or on life support given the LHC data, but others proposed in the early 2000s are alive and well. The fact that the LHC has ruled out some of the most popular pictures is a fantastic gift to us as theorists. It shows that understanding the origin of the Higgs mass must involve an even larger paradigm change than many had previously imagined. Ironically, had the LHC discovered supersymmetric particles, the case for the next circular collider would be somewhat weaker than it is now, because that would (indirectly) support a picture of a desert between the electroweak and Planck scales. In this picture of the world, most people wanted a linear electron–positron collider to measure the superpartner couplings in detail. It’s a picture people very much loved in the 1990s, and a picture that appears to be wrong. Fine. But when theorists are more confused, it’s the time for more, not less experiments.

What definitive answers will a future high-energy collider give us?

First and foremost, we go to high energies because it’s the frontier, and we look around for new things. While there is absolutely no guarantee we will produce new particles, we will definitely stress test our existing laws in the most extreme environments we have ever probed. Measuring the properties of the Higgs, however, is guaranteed to answer some burning questions. All the drama revolving around the existence of the Higgs would go away if we saw that it had substructure of any sort. But from the LHC, we have only a fuzzy picture of how point-like the Higgs is. A Higgs factory will decisively answer this question via precision measurements of the coupling of the Higgs to a slew of other particles in a very clean experimental environment. After that the ultimate question is whether or not the Higgs looks point-like even when interacting with itself. The simplest possible interaction between elementary particles is when three particles meet at a space–time point. But we have actually never seen any single elementary particle enjoy this simplest possible interaction. For good reasons going back to the basics of relativity and quantum mechanics, there is always some quantum number that must change in this interaction – either spin or charge quantum numbers change. The Higgs is the only known elementary particle allowed to have this most basic process as its dominant self-interaction. A 100 TeV collider producing billions of Higgs particles will not only detect the self-interaction, but will be able to measure it to an accuracy of a few per cent. Just thinking about the first-ever probe of this simplest possible interaction in nature gives me goosebumps.
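To make this concrete: in the Standard Model the measured Higgs mass fixes the self-couplings completely, since expanding the potential V(Φ) = −μ²Φ†Φ + λ(Φ†Φ)² around the vacuum expectation value v ≈ 246 GeV gives, at tree level,

V(h) \;=\; \tfrac{1}{2}m_{h}^{2}h^{2} \;+\; \frac{m_{h}^{2}}{2v}\,h^{3} \;+\; \frac{m_{h}^{2}}{8v^{2}}\,h^{4}, \qquad m_{h}^{2} = 2\lambda v^{2},

so a few-per-cent measurement of the h³ vertex at a 100 TeV collider directly tests whether the self-interaction takes its Standard Model value.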

What are the prospects for future dark-matter searches?

Beyond the measurements of the Higgs properties, there are all sorts of exciting signals of new particles that can be looked for at both Higgs factories and 100 TeV colliders. One I find especially important is WIMP dark matter. There is a funny perception, somewhat paralleling the absence of supersymmetry at the LHC, that the simple paradigm of WIMP dark matter has been ruled out by direct-detection experiments. Nope! In fact, the very simplest models of WIMP dark matter are perfectly alive and well. Once the electroweak quantum numbers of the dark-matter particles are specified, you can unambiguously compute what mass an electroweak-charged dark-matter particle should have so that its thermal relic abundance is correct. You get a number between 1 and 3 TeV, far too heavy to be produced in any sizeable numbers at the LHC. Furthermore, they happen to have minuscule interaction cross sections for direct detection. So these very simplest theories of WIMP dark matter are inaccessible to the LHC and direct-detection experiments. But a 100 TeV collider has just enough juice to either see these particles, or rule out this simplest WIMP picture.
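
The logic behind that number can be sketched in two lines (a standard back-of-the-envelope estimate, with order-one factors dropped, not a statement from the interview). The thermal relic abundance falls with the annihilation cross section,

    \Omega_\chi h^2 \approx 0.1 \times (3 \times 10^{-26}\ {\rm cm^3\,s^{-1}}) / \langle\sigma v\rangle ,  with  \langle\sigma v\rangle \sim g^4 / (16\pi\, m_\chi^2)

for a particle carrying only electroweak gauge couplings. Demanding the observed abundance then pushes m_\chi into the TeV range; more careful calculations give roughly 1 TeV for a higgsino-like doublet and around 3 TeV for a wino-like triplet, the 1–3 TeV window quoted above.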

What is the cultural value of a 100 km supercollider?

Both the depth and the visceral joy of experiments in particle physics are revealed in how simple it is to explain: we smash things together with the largest machines that have ever been built, to probe the fundamental laws of nature at the tiniest distances we’ve ever seen. But it goes beyond that to something more important about our self-conception as people capable of doing great things. The world has all kinds of long-term problems, some of which might seem impossible to solve. So it’s important to have a group of people who, over centuries, give a concrete template for how to go about grappling with and ultimately conquering seemingly impossible problems, driven by a calling far larger than themselves. Furthermore, suppose it’s 200 years from now, and there are no big colliders on the planet. How can humans be sure that the Higgs or top particles exist? Because it says so in dusty old books? There is an argument to be made that as we advance we should be able to do the things we did in the past. After all, the last time that fundamental knowledge was shoved in old dusty books was in the dark ages, and that didn’t go very well for the West.

What about justifying the cost of the next collider?

There are a number of projects and costs we could be talking about, but let’s call it $5–25 billion. Sounds like a lot, right? But the global economy is growing, not shrinking, and the cost of accelerators as a fraction of GDP has barely changed over the past 40 years – even a 100 TeV collider is in this same ballpark. Meanwhile the scientific issues at stake are more profound than they have been for many decades, so we certainly have an honest science case to make that we need to keep going.

People sometimes say that if we don’t spend billions of dollars on colliders, then we can do all sorts of other experiments instead. I am a huge fan of small-scale experiments, but this argument is silly because science funding is infamously not a zero-sum game. So, it’s not a question of, “do we want to spend tens of billions on collider physics or something else instead”, it is rather “do we want to spend tens of billions on fundamental physics experiments at all”.

Another argument is that we should wait until some breakthrough in accelerator technology, rather than just building bigger machines. This is naïve. Of course miracles can always happen, but we can’t plan science around miracles. Similar arguments were made around the time of the cancellation of the Superconducting Super Collider (SSC) 30 years ago, with prominent condensed-matter physicists saying that the SSC should wait for the development of high-temperature superconductors that would dramatically lower the cost. Of course those dreamed-of practical superconductors never materialised, while particle physics went from strength to strength with the best technology available.

What do you make of claims that colliders are no longer productive?

It would be only to the good to have a no-holds-barred, public discussion about the pros and cons of future colliders, led by people with a deep understanding of the relevant technical and scientific issues. It’s funny that non-experts don’t even make the best arguments for not building colliders; I could do a much better job than they do! I can point you to an awesomely fierce debate about future colliders that already took place in China two years ago (Int. J. Mod. Phys. A 31 1630053 and 1630054). C N Yang, who is one of the greatest physicists of the 20th century and enormously influential in China, came out with a strong attack on colliders, not only in China but more broadly. I was delighted. Having a serious attack meant there could be a serious response, masterfully provided by David Gross. It was the King Kong vs Godzilla of fundamental physics, played out on the pages of major newspapers in China, fantastic!

What are you working on now?

About a decade ago, after a few years of thinking about the cosmology of “eternal inflation” in connection with solutions to the cosmological constant and hierarchy problems, I concluded that these mysteries can’t be understood without reconceptualising what space–time and quantum mechanics are really about. I decided to warm up by trying to understand the dynamics of particle scattering, like collisions at the LHC, from a new starting point, seeing space-time and quantum mechanics as being derived from more primitive notions. This has turned out to be a fascinating adventure, and we are seeing more and more examples of rather magical new mathematical structures, which surprisingly appear to underlie the physics of particle scattering in a wide variety of theories, some close to the real world. I am also turning my attention back to the goal that motivated the warm-up, trying to understand cosmology, as well as possible theories for the origin of the Higgs mass and cosmological constant, from this new point of view. In all my endeavours I continue to be driven, first and foremost, by the desire to connect deep theoretical ideas to experiments and the real world.

The post In it for the long haul appeared first on CERN Courier.

Opinion We have conquered the easiest challenges in fundamental physics, says Nima Arkani-Hamed. The case for building the next major collider is now more compelling than ever. https://cerncourier.com/wp-content/uploads/2019/03/CCMarApr19_Int-arkani.png
First light for supersymmetry https://cerncourier.com/a/first-light-for-supersymmetry/ Fri, 08 Mar 2019 16:22:44 +0000 https://preview-courier.web.cern.ch?p=13629 Ideas from supersymmetry have been used to address a longstanding challenge in optics – how to suppress unwanted spatial modes that limit the beam quality of high-power lasers.

schematic representation of a supersymmetric laser array

Ideas from supersymmetry have been used to address a longstanding challenge in optics – how to suppress unwanted spatial modes that limit the beam quality of high-power lasers. Mercedeh Khajavikhan at the University of Central Florida in the US and colleagues have created a first supersymmetric laser array, paving the way towards new schemes for scaling up the radiance of integrated semiconductor lasers.

Supersymmetry (SUSY) is a possible additional symmetry of space–time that would enable bosonic and fermionic degrees of freedom to be “rotated” between one another. Devised in the 1970s in the context of particle physics, it suggests the existence of a mirror-world of supersymmetric particles and promises a unified description of all fundamental interactions. “Even though the full ramification of SUSY in high-energy physics is still a matter of debate that awaits experimental validation, supersymmetric techniques have already found their way into low-energy physics, condensed matter, statistical mechanics, nonlinear dynamics and soliton theory as well as in stochastic processes and BCS-type theories, to mention a few,” write Khajavikhan and collaborators in Science.

The team applied the SUSY formalism first proposed by Ed Witten of the Institute for Advanced Study in Princeton to force a semiconductor laser array to operate exclusively in its fundamental transverse mode. In contrast to previous schemes developed to achieve this, such as common antenna-feedback methods, SUSY introduces a global and systematic method that applies to any type of integrated laser array, explains Khajavikhan. “Now that the proof of concept has been demonstrated, we are poised to develop high-power electrically pumped laser arrays based on a SUSY design. This can be applicable to various wavelengths, ranging from visible to mid-infrared lasers.”

To demonstrate the concept, the Florida-based team paired the unwanted modes of the main laser array (comprising five coupled ridge-waveguide cavities etched from quantum wells on an InP wafer) with a lossy superpartner (an array of four waveguides left unpumped). Optical strategies were used to build a superpartner index profile with propagation constants matching those of the four higher-order modes associated with the main array, and the performance of the SUSY laser was assessed using a custom-made optical setup. The results indicated that the existence of an unbroken SUSY phase (in conjunction with a judicious pumping of the laser array) can promote the in-phase fundamental mode and produce high-radiance emission.
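
The underlying mechanism can be phrased in the language of Witten’s supersymmetric quantum mechanics (a schematic sketch, not the authors’ full waveguide model). A Hamiltonian that factorises as H_1 = A^\dagger A has a superpartner H_2 = A A^\dagger with an identical spectrum apart from the zero-energy ground state:

    H_2 (A\psi_n) = A A^\dagger A \psi_n = E_n (A\psi_n)  for every eigenstate with E_n > 0, while A\psi_0 = 0 for the ground state.

In the laser, the main array plays the role of H_1 and the unpumped partner array that of H_2: every higher-order transverse mode has a degenerate, lossy counterpart into which it couples and is suppressed, whereas the fundamental mode has no partner and is free to lase.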

“This is a remarkable example of how a fundamental idea such as SUSY may have a practical application, here increasing the power of lasers,” says SUSY pioneer John Ellis of King’s College London. “The discovery of fundamental SUSY still eludes us, but SUSY engineering has now arrived.”

The post First light for supersymmetry appeared first on CERN Courier.

News Ideas from supersymmetry have been used to address a longstanding challenge in optics – how to suppress unwanted spatial modes that limit the beam quality of high-power lasers. https://cerncourier.com/wp-content/uploads/2019/03/CCMarApr19_News-susy.jpg
Understanding naturalness https://cerncourier.com/a/understanding-naturalness/ Thu, 24 Jan 2019 09:00:04 +0000 https://preview-courier.web.cern.ch/?p=13108 The last few years have seen an explosion of ideas concerning naturalness, but we’re only at the beginning of our understanding, says theorist Nathaniel Craig.

Nathaniel Craig

What is “naturalness”?

Colloquially, a theory is natural if its underlying parameters are all of the same size in appropriate units. A more precise definition involves the notion of an effective field theory – the idea that a given quantum field theory might only describe nature at energies below a certain scale, or cutoff. The Standard Model (SM) is an effective field theory because it cannot be valid up to arbitrarily high energies even in the absence of gravity. An effective field theory is natural if all of its parameters are of order unity in units of the cutoff. Without fine-tuning, a parameter can only be much smaller than this if setting it to zero increases the symmetry of the theory. All couplings and scales in a quantum theory are connected by quantum effects unless symmetries distinguish them, making it generic for them to coincide.
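
The canonical illustration is the Higgs mass itself. In an effective field theory with cutoff \Lambda, quantum corrections from the particles the Higgs couples to – the top quark giving the largest effect – generically drag its mass-squared up towards the cutoff. A standard one-loop estimate, quoted here purely for orientation, reads

    \delta m_h^2 \sim - (3 y_t^2 / 8\pi^2)\, \Lambda^2 ,

so keeping m_h at 125 GeV with \Lambda far above the TeV scale requires a delicate cancellation against the bare parameter – exactly the situation that naturalness arguments flag as suspicious.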

When did naturalness become a guiding force in particle physics?

We typically trace it back to Eddington and Dirac, though it had precedents in the cosmologies of the Ancient Greeks. Dirac’s discomfort with large dimensionless ratios in observed parameters – among others, the ratio of the gravitational and electromagnetic forces between protons and electrons, which amounts to the smallness of the proton mass in units of the Planck scale – led him to propose a radical cosmology in which Newton’s constant varied with the age of the universe. Dirac’s proposed solutions were readily falsified, but this was a predecessor of the more refined notion of naturalness that evolved with the development of quantum field theory, which drew on observations by Gell-Mann, ’t Hooft, Veltman, Wilson, Weinberg, Susskind and other greats.

Does the concept appear in other disciplines?

There are notions of naturalness in essentially every scientific discipline, but physics, and particle physics in particular, is somewhat unique. This is perhaps not surprising, since one of the primary goals of particle physics is to infer the laws of nature at increasingly higher energies and shorter distances.

Isn’t naturalness a matter of personal judgement?

One can certainly come up with frameworks in which naturalness is mathematically defined – for example, quantifying the sensitivity of some parameter in the theory to variations of the other parameters. However, what one does with that information is a matter of personal judgement: we don’t know how nature computes fine-tuning (i.e. departure from naturalness), or what amount of fine-tuning is reasonable to expect. This is highlighted by the occasional abandonment of mathematically defined naturalness criteria in favour of the so-called Potter Stewart measure: “I know it when I see it.” The element of judgement makes it unproductive to obsess over minor differences in fine-tuning, but large fine-tunings potentially signal that something is amiss. Also, one can’t help but notice that the degree of fine-tuning that is considered acceptable has changed over time.
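
A well-known example of such a mathematically defined criterion (cited here as an illustration) is the Barbieri–Giudice sensitivity measure,

    \Delta_p = | \partial \ln m_Z^2 / \partial \ln p | ,

which quantifies how strongly the electroweak scale, represented by m_Z, reacts to a fractional change in an underlying parameter p; values of \Delta_p much larger than one are then read as fine-tuning. Where exactly to draw the line is precisely the judgement call described above.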

What evidence is there that nature is natural?

Dirac’s puzzle, the smallness of the proton mass, is a great example: we understand it now as a consequence of the asymptotic freedom of the strong interaction. A natural (of order-unity) value of the QCD gauge coupling at high energies gives rise to an exponentially smaller mass scale on account of the logarithmic evolution of the gauge coupling. Another excellent example, relevant to the electroweak hierarchy problem, is the mass splitting of the charged and neutral pions. From the perspective of an effective field theorist working at the energies of these pions, their mass splitting is only natural if the cutoff of the theory is around 800 MeV. Lo and behold, going up in energy from the pions, the rho meson appears at 770 MeV, revealing the composite nature of the pions and changing the picture in precisely the right way to render the mass splitting natural.
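
The numbers behind the pion example are worth spelling out (a standard estimate, with order-one factors understood). The electromagnetic contribution to the splitting grows quadratically with the cutoff,

    m_{\pi^\pm}^2 - m_{\pi^0}^2 \approx (3\alpha / 4\pi)\, \Lambda^2 ,

and inserting the measured splitting of about (35.5 MeV)^2 gives \Lambda in the region of 800–850 MeV – right where the rho meson appears.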

Which is the most troublesome observation for naturalness today?

The cosmological-constant (CC) problem, which is the disagreement by 120 orders of magnitude between the observed and expected value of the vacuum energy density. We understand the SM to be a valid effective field theory for many decades above the energy scale of the observed CC, which makes it very hard to believe that the problem is solved in a conventional way without considerable fine-tuning. Contrast that with the SM hierarchy problem, which is a statement about the naturalness of the mass of the Higgs boson. Data so far show that the cutoff of the SM as an effective field theory might not be too far above the Higgs mass, bringing naturalness within reach of experiment. On the other hand, the CC is only a problem in the context of the SM coupled to gravity, so perhaps its resolution lies in yet-to-be-understood features of quantum gravity.
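
The famous “120 orders of magnitude” is simple arithmetic (shown here schematically). The observed dark-energy density is

    \rho_\Lambda^{\rm obs} \sim (2 \times 10^{-3}\ {\rm eV})^4 \sim 10^{-47}\ {\rm GeV}^4 ,

while the naive vacuum energy of a quantum field theory cut off at the Planck scale is of order M_{\rm Pl}^4 \sim (10^{19}\ {\rm GeV})^4 \sim 10^{76}\ {\rm GeV}^4 – a mismatch of some 120 orders of magnitude.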

What about the tiny values of the neutrino masses?

Neutrino masses are not remotely troublesome for naturalness. A parameter can be much smaller than the natural expectation if setting it to zero increases the symmetry of the theory (we call such parameters “technically natural”). For the neutrino, as for any SM fermion, there is an enhanced symmetry when neutrino masses are set to zero. This means that your natural expectation for the neutrino masses is zero, and if they are non-zero, quantum corrections to neutrino masses are proportional to the masses themselves. Although the SM features many numerical hierarchies, the majority of them are technically-natural ones that could be explained by physics at inaccessibly high energies. The most urgent problems are the hierarchies that aren’t technically natural, like the CC problem and the electroweak hierarchy problem.
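
Schematically, the contrast between the two situations is (a rough caricature, not a calculation)

    \delta m_\nu \propto m_\nu \ln(\Lambda/\mu)  for a technically natural parameter, versus  \delta m_h^2 \propto \Lambda^2  for the unprotected Higgs mass,

so corrections to the neutrino mass switch off as the mass itself goes to zero, while nothing in the Standard Model switches off the corrections to the Higgs mass.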

Has applying the naturalness principle led directly to a discovery?

It’s fair to say that Gaillard and Lee predicted the charm-quark mass by applying naturalness arguments to the mass-splitting of neutral kaons. Of course, the same arguments were also used to (incorrectly) predict a wildly different value of the weak scale! This is a reminder that naturalness principles can point to a problem in the existing theory, and a scale at which the theory should change, but they don’t tell you precisely how the problem is resolved. The naturalness of the neutral kaon mass splitting, or the charged-neutral pion mass splitting, suggests to me that it is more useful to refer to naturalness as a strategy, rather than as a principle.

A slightly more flippant example is the observation of neutrinos from Supernova 1987A. This marked the beginning of neutrino astronomy and opened the door to unrelated surprises, yet the large water-Cherenkov detectors that detected these neutrinos were originally constructed to look for proton decay predicted by grand unified theories (which were themselves motivated by naturalness arguments).

While it would be great if naturalness-based arguments successfully predict new physics, it’s also worthwhile if they ultimately serve only to draw experimental attention to new places.

What has been the impact of the LHC results so far on naturalness?

There have been two huge developments at the LHC. The first is the discovery of the Higgs boson, which sharpens the electroweak hierarchy problem: we seem to have found precisely the sort of particle whose mass, if natural, points to a significant departure from the SM around the TeV scale. The second is the non-observation of new particles predicted by the most popular solutions to the electroweak hierarchy problem, such as supersymmetry. While evidence for these solutions could lie right around the corner, its absence thus far has inspired both a great deal of uncertainty about the naturalness of the weak scale and a lively exploration of new approaches to the problem. The LHC null results teach us only about specific (and historically popular) models that were inspired by naturalness. It is therefore an ideal time to explore naturalness arguments more deeply. The last few years have seen an explosion of original ideas, but we’re really only at the beginning of the process.

The situation is analogous to the search for dark matter, where gravitational evidence is accumulating at an impressive rate despite numerous null results in direct-detection experiments. These null results haven’t ruled out dark matter itself; they’ve only disfavoured certain specific and historically popular models.

How can we settle the naturalness issue once and for all?

The discovery of new particles around the TeV scale whose properties suggest they are related to the top quark would very strongly suggest that nature is more or less natural. In the event of non-discovery, the question becomes thornier – it could be that the SM is unnatural; it could be that naturalness arguments are irrelevant; or it could be that there are signatures of naturalness that we haven’t recognised yet. Kepler’s symmetry-based explanation of the naturalness of planetary orbits in terms of platonic solids ultimately turned out to be a red herring, but only because we came to realise that the features of specific planetary orbits are not deeply related to fundamental laws.

Without naturalness as a guide, how do theorists go beyond the SM?

Naturalness is but one of many hints at physics beyond the SM. There are some incredibly robust hints based on data – dark matter and neutrino masses, for example. There are also suggestive hints, such as the hierarchical structure of fermion masses, the preponderance of baryons over antibaryons and the apparent unification of gauge couplings. There is also a compelling argument for constructing new-physics models purely motivated by anomalous data. This sort of “ambulance chasing” does not have a stellar reputation, but it’s an honest approach which recognises that the discovery of new physics may well come as another case of “Who ordered that?” rather than the answer to a theoretical problem.

What sociological or psychological aspects are at work?

If theoretical considerations are primarily shaping the advancement of a field, then sociology inevitably plays a central role in deciding what questions are most pressing. The good news is that the scales often tip, and data either clarify the situation or pose new questions. As a field we need to focus on lucidly articulating the case for (and against) naturalness as a guiding principle, and let the newer generations make up their minds for themselves.

The post Understanding naturalness appeared first on CERN Courier.

Opinion The last few years have seen an explosion of ideas concerning naturalness, but we’re only at the beginning of our understanding, says theorist Nathaniel Craig. https://cerncourier.com/wp-content/uploads/2019/01/CCJanFeb19_Interview-craig.png
Fixing gender in theory https://cerncourier.com/a/viewpoint-fixing-gender-in-theory/ Fri, 30 Nov 2018 09:00:36 +0000 https://preview-courier.web.cern.ch/?p=12929 It is high time we addressed the low representation of women in high-energy theoretical physics, says Marika Taylor.

Improving the participation of under-represented groups in science is not just the right thing to do morally. Science benefits from a community that approaches problems in a variety of different ways, and there is evidence that teams with mixed perspectives increase productivity. Moreover, many countries face a skills gap that can only be addressed by training more scientists, drawing from a broader pool of talent that cannot reasonably exclude half the population.

In the high-energy theory (HET) community, where creativity and originality are so important, the problem is particularly acute. Many of the breakthroughs in theoretical physics have come from people who think “differently”, yet the community does not acknowledge that being both mostly male and white encourages groupthink and lack of originality.

The gender imbalance in physics is well documented. Data from the American Physical Society and the UK Institute of Physics indicate that around 20% of the physics-research community is female, and the situation deteriorates significantly as one looks higher on the career ladder. By contrast, the percentage of females is higher in astronomy and the number of women at senior levels in astronomy has increased quite rapidly over the last decade.

However, research into gender in science often misses issues specific to particular disciplines such as HET. While many previous studies have explored challenges faced by women in physics, theory has not specifically been targeted, even though the representation of women is anomalously low.

In 2012, a group of string theorists in Europe launched a COST (European Cooperation in Science and Technology) action with a focus on gender in high-energy theory. Less than 10% of string theorists are female, and, worryingly, postdoc-application data in Europe show that the percentage of female early-career researchers has not changed significantly over the past 15 years.

The COST initiative enabled qualitative surveys and the collection of quantitative data. We found some evidence that women PhD students are less likely to continue onto postdoctoral positions than male ones, although further data are needed to confirm this point. The data also indicate that the percentage of women at senior levels (e.g. heads of institutes) is extremely low, less than 5%. Qualitative data raised issues specific to HET, including the need for mobility for many years before getting a permanent position and the long working hours, which are above average even for academics. A series of COST meetings also provided opportunities for women in string theory to network and to discuss the challenges that they face.

Following the conclusion of the COST action in 2017, women from the string theory community obtained support to continue the initiative, now broadened to the whole of the HET community. “GenHET” is a permanent working group hosted by the CERN theory department whose goals are to increase awareness of gender issues, improve the presence of women in decision-making roles, and provide networking, support and mentoring for women, particularly during their early career.

GenHET’s first workshop on high-energy theory and gender was hosted by CERN in September, bringing together physicists, social scientists and diversity professionals (see Faces and Places). Further meetings are planned, and the GenHET group is also developing a web resource that will collect research and reports on gender and science, advertise activities and jobs, and offer  advice on evidence-based practice for supporting women. GenHET aims to propose concrete actions, for example encouraging the community to implement codes of conduct at conferences, and all members of the HET community are welcome to join the group.

Diversity is about much more than gender: in the HET community, there is also under-representation of people of colour and LGBTQ+ researchers, as well as those who are disabled, carers, come from less privileged socio-economic backgrounds, and so on. GenHET will work in collaboration with networks focusing on other diversity characteristics to help improve this situation, turning the high-energy theory community into one that truly reflects all of society.

The post Fixing gender in theory appeared first on CERN Courier.

Opinion It is high time we addressed the low representation of women in high-energy theoretical physics, says Marika Taylor. https://cerncourier.com/wp-content/uploads/2018/11/CCDec18_Viewpoint_iStock-1060190162.jpg
The roots and fruits of string theory https://cerncourier.com/a/the-roots-and-fruits-of-string-theory/ Mon, 29 Oct 2018 09:00:04 +0000 https://preview-courier.web.cern.ch/?p=12862 "People say that string theory doesn’t make predictions, but that’s simply not true."

What led you to the 1968 paper for which you are most famous?

In the mid-1960s we theorists were stuck in trying to understand the strong interaction. We had an example of a relativistic quantum theory that worked: QED, the theory of interacting electrons and photons, but it looked hopeless to copy that framework for the strong interactions. One reason was the strength of the strong coupling compared to the electromagnetic one. But even more disturbing was that there were so many (and ever growing in number) different species of hadrons that we felt at a loss with field theory – how could we cope with so many different states in a QED-like framework? We now know how to do it and the solution is called quantum chromodynamics (QCD).

Gabriele Veneziano

But things weren’t so clear back then. The highly non-trivial jump from QED to QCD meant having the guts to write a theory for entities (quarks) that nobody had ever seen experimentally. No one was ready for such a logical jump, so we tried something else: an S-matrix approach. The S-matrix, which relates the initial and final states of a quantum-mechanical process, allows one to directly calculate the probabilities of scattering processes without solving a quantum field theory such as QED. This is why it looked more promising. It was also looking very conventional but, eventually, led to something even more revolutionary than QCD – the idea that hadrons are actually strings.

Is it true that your “eureka” moment was when you came across the Euler beta function in a textbook?

Not at all! I was taking a bottom-up approach to understand the strong interaction. The basic idea was to impose on the S-matrix a property now known as Dolen–Horn–Schmid (DHS) duality. It relates two apparently distinct processes contributing to an elementary reaction, say a + b → c + d. In one process, a+b fuse to form a metastable state (a resonance) which, after a characteristic lifetime, decays into c+d. In the other process the pair a+c exchanges a virtual particle with the pair b+d. In QED these two processes have to be added because they correspond to two distinct Feynman diagrams, while, according to DHS duality, each one provides, for strong interactions, the whole story. I’d heard about DHS duality from Murray Gell-Mann at the Erice summer school in 1967, where he said that DHS would lead to a “cheap bootstrap” for the strong interaction. Hearing this being said by a great physicist motivated me enormously. I was in the middle of my PhD studies at the Weizmann Institute in Israel. Back there in the fall, a collaboration of four people was formed. It consisted of Marco Ademollo, on leave at Harvard from Florence, and of Hector Rubinstein, Miguel Virasoro and myself at the Weizmann Institute. We worked intensively for a period of eight-to-nine months trying to solve the (apparently not so) cheap bootstrap for a particularly convenient reaction. We got very encouraging results hinting, I felt, at the existence of a simple exact solution. That solution turned out to be the Euler beta function.
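
For reference, the solution in question is the celebrated four-point amplitude, usually written as

    A(s,t) = \Gamma(-\alpha(s))\,\Gamma(-\alpha(t)) / \Gamma(-\alpha(s)-\alpha(t)) = B(-\alpha(s), -\alpha(t)) ,  with a linear Regge trajectory  \alpha(x) = \alpha(0) + \alpha' x ,

where B is Euler’s beta function. The poles of the gamma functions in s reproduce an infinite tower of resonances, while the very same expression, expanded in t, describes particle exchange – DHS duality captured in a single closed formula.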

Veneziano in July 1968

But the 1968 paper was authored by you alone?

Indeed. The preparatory work done by the four of us had a crucial role, but the discovery that the Euler beta function was an exact realisation of DHS duality was just my own. It was around mid-June 1968, just days before I had to take a boat from Haifa to Venice and then continue to CERN where I would spend the month of July. By that time the group of four was already dispersing (Rubinstein on his way to NYU, Virasoro to Madison, Wisconsin via Argentina, Ademollo back to Florence before a second year at Harvard). I kept working on it by myself, first on the boat, then at CERN until the end of July when, encouraged by Sergio Fubini, I decided to send the preprint to the journal Il Nuovo Cimento.

Was the significance of the result already clear?

Well, the formula had many desirable features, but the reaction of the physics community came to me as a shock. As soon as I had submitted the paper I went on vacation for about four weeks in Italy and did not think much about it. At the end of August 1968, I attended the Vienna conference – one of the biennial Rochester-conference series – and found out, to my surprise, that the paper was already widely known and got mentioned in several summary talks. I had sent the preprint as a contribution and was invited to give a parallel-session talk about it. Curiously, I have no recollection of that event, but my wife remembers me telling her about it. There was even a witness, the late David Olive, who wrote that listening to my talk changed his life. It was an instant hit, because the model answered several questions at once, but it was not at all apparent then that it had anything to do with strings, not to mention quantum gravity.

When was the link to “string theory” made?

The first hints that a physical model for hadrons could underlie my mathematical proposal came after the latter had been properly generalised (to processes involving an arbitrary number of colliding particles) and the whole spectrum of hadrons it implied was unravelled (by Fubini and myself and, independently, by Korkut Bardakci and Stanley Mandelstam). It turned out, surprisingly, to closely resemble the exponentially growing (with mass) spectrum postulated almost a decade earlier by CERN theorist Rolf Hagedorn and, at least naively, it implied an absolute upper limit on temperature (the so-called Hagedorn temperature).
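
The Hagedorn behaviour refers to a density of hadronic states that grows exponentially with mass, schematically

    \rho(m) \sim m^{-a}\, e^{m/T_H} ,

so the thermal partition function, which weights each state by e^{-m/T}, diverges as soon as T exceeds T_H – the origin of the apparent limiting temperature.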

The spectrum coincides with that of an infinite set of harmonic oscillators and thus resembles the spectrum of a quantised vibrating string with its infinite number of higher harmonics. Holger Nielsen and Lenny Susskind independently suggested a string (or a rubber-band) picture. But, as usual, the devil was in the details. Around the end of the decade Yoichiro Nambu (and independently Goto) gave the first correct definition of a classical relativistic string, but it took until 1973 for Goddard, Goldstone, Rebbi and Thorn to prove that the correct application of quantum mechanics to the Nambu–Goto string reproduced exactly the above-mentioned generalisation of my original work. This also included certain consistency conditions that had already been found, most notably the existence of a massless spin-1 state (by Virasoro) and the need for extra spatial dimensions (from Lovelace’s work). At that point it became clear that the original model had a clear physical interpretation of hadrons being quantised strings. Some details were obviously wrong: one of the most striking features of strong interactions is their short-range nature, while a massless state produces long-range interactions. The model being inconsistent for three spatial dimensions (our world!) was also embarrassing, but people kept hoping.

So string theory was discovered by accident?

Not really. Qualitatively speaking, however, having found that hadrons are strings was no small achievement for those days. It was not precisely the string we now associate with quark confinement in QCD. Indeed the latter is so complicated that only the most powerful computers could shed some light on it many decades later. A posteriori, the fact that by looking at hadronic phenomena we were driven into discovering string theory was neither a coincidence nor an accident.

When was it clear that strings offer a consistent quantum-gravity theory?

This very bold idea came as early as 1974 from a paper by Joel Scherk and John Schwarz. Confronted with the fact that the massless spin-1 string state refused to become massive (there is no Brout–Englert–Higgs mechanism at hand in string theory!) and that even a massless spin-2 string had to be part of the string spectrum, they argued that those states should be identified with the photon and the graviton, i.e. with the carriers of electromagnetic and gravitational interactions, respectively. Other spin-1 particles could be associated with the gluons of QCD or with the W and Z bosons of the weak interaction. String theory would then become a theory of all interactions, at a deeper, more microscopic level. The characteristic scale of the hadronic string (~10⁻¹³ cm) had to be reduced by 20 orders of magnitude (~10⁻³³ cm, the famous Planck length) to describe the quarks themselves, the electron, the muon and the neutrinos, in fact every elementary particle, as a string.

In addition, it turned out that a serious shortcoming of the old string (namely its “softness”, meaning that string–string collisions cannot produce events with large deflection angles) was a big plus for the Scherk–Schwarz proposal. While the data were showing that hard hadron collisions were occurring at substantial rates, in agreement with QCD predictions, the softness of string theory could free quantum gravity from its problematic ultraviolet divergences – the main obstacle to formulating a consistent quantum-gravity theory.

Did you then divert your attention to string theory?

Not immediately. I was still interested in understanding the strong interactions and worked on several aspects of perturbative and non-perturbative QCD and their supersymmetric generalisations. Most people stayed away from string theory during the 1974–1984 decade. Remember that the Standard Model had just come to life and there was so much to do in order to extract its predictions and test it. I returned to string theory after the Green–Schwarz revolution in 1984. They had discovered a way to reconcile string theory with another fact of nature: the parity violation of weak interactions. This breakthrough put string theory in the hotspot again and since then the number of string-theory aficionados has been steadily growing, particularly within the younger part of the theory community. Several revolutions have followed since then, associated with the names of Witten, Polchinski, Maldacena and many others. It would take too long to do justice to all these beautiful developments. Personally, and very early on, I got interested in applying the new string theory to primordial cosmology.

Was your 1991 paper the first to link string theory with cosmology?

I think there was at least one already, a model by Brandenberger and Vafa trying to explain why our universe has only three large spatial dimensions, but it was certainly among the very first. In 1991, I (and independently Arkadi Tseytlin) realised that the string-cosmology equations, unlike Einstein’s, admit a symmetry (also called, alas, duality!) that connects a decelerating expansion to an accelerating one. That, I thought, could be a natural way to get an inflationary cosmology, which was already known since the 1980s, in string theory without invoking an ad-hoc “inflaton” particle.
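
The symmetry in question is scale-factor duality. To lowest order in the string effective action, the cosmological equations for the scale factor a(t) and the dilaton \phi are invariant (written schematically, for d spatial dimensions) under

    a(t) \rightarrow 1/a(t) ,  \phi \rightarrow \phi - 2 d \ln a ,

and combining this with time reversal, t \rightarrow -t, maps an ordinary decelerating post-Big Bang expansion into an accelerating, inflation-like phase before the Big Bang – the germ of the scenario described next.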

The problem was that the decelerating solution had, superficially, a Big Bang singularity in its past, while the (dual) accelerating solution had a singularity in the future. But this was only the case if one neglected effects related to the finite size of the string. Many hints, including the already mentioned upper limit on temperature, suggested that Big Bang-like singularities are not really there in string theory. If so, the two duality-related solutions could be smoothly connected to provide what I dubbed a “pre-Big Bang scenario” characterised by the lack of a beginning of time. I think that the model (further developed with Maurizio Gasperini and by many others) is still alive, at least as long as a primordial B-mode polarisation is not discovered in the cosmic microwave background, since it is predicted to be insignificant in this cosmology.

Did you study other aspects of the new incarnation of string theory?

A second line of string-related research, which I have followed since 1987, concerns the study of thought experiments to understand what string theory can teach us about quantum gravity in the spirit of what people did in the early days of quantum mechanics. In particular, with Daniele Amati and Marcello Ciafaloni first, and then also with many others, I have studied string collisions at trans-Planckian energies (> 10¹⁹ GeV) that cannot be reached in human-made accelerators but could have existed in the early universe. I am still working on it. One outcome of that study, which became quite popular, is a generalisation of Heisenberg’s uncertainty principle implying a minimal value of Δx of the order of the string size.
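
The generalisation in question takes the schematic form (constants of order one omitted)

    \Delta x \gtrsim \hbar/\Delta p + \alpha'\, \Delta p/\hbar ,

where \sqrt{\alpha'} sets the string length: minimising the right-hand side gives \Delta x_{\rm min} \sim \sqrt{\alpha'}, so no scattering experiment, however energetic, resolves distances below the string size.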

50 years on, is the theory any closer to describing reality?

People say that string theory doesn’t make predictions, but that’s simply not true. It predicts the dimensionality of space – it is the only theory so far to do so – and it also predicts, at tree level (the lowest level of approximation for a quantum-relativistic theory), a whole lot of massless scalars that threaten the equivalence principle (the universality of free-fall), which is by now very well tested. If we could trust this tree-level prediction, string theory would be already falsified. But the same would be true of QCD, since at tree level it implies the existence of free quarks. In other words: the new string theory, just like the old one, can be falsified by large-distance experiments provided we can trust the level of approximation at which it is solved. On the other hand, in order to test string theory at short distance, the best way is through cosmology. Around (i.e. at, before, or soon after) the Big Bang, string theory may have left its imprint on the early universe, and the subsequent expansion can bring those imprints up to macroscopic scales today.

What do you make of the ongoing debate on the scientific viability of the landscape, or “swamp”, of string-theory solutions?

I am not an expert on this subject but I recently heard (at the Strings 2018 conference in Okinawa, Japan) a talk on the subject by Cumrun Vafa claiming that the KKLT solution [which seeks to account for the anomalously small value of the vacuum energy, as proposed in 2003 by Kallosh, Kachru, Linde and Trivedi] is in the swampland, meaning it’s not viable at a fundamental quantum-gravity level. It was followed by a heated discussion and I cannot judge who is right. I can only add that the absence of a metastable de-Sitter vacuum would favour quintessence models of the kind I investigated with Thibault Damour several years ago and that could imply interestingly small (but perhaps detectable) violations of the equivalence principle.

What’s the perception of strings from outside the community?

Some of the popular coverage of string theory in recent years has been rather superficial. When people say string theory can’t be proved, it is unfair. The usual argument is that you need unconceivably high energies. But, as I have already said, the new incarnation of string theory can be falsified just like its predecessor was; it soon became very clear that QCD was a better theory. Perhaps the same will happen to today’s string theory, but I don’t think there are serious alternatives at the moment. Clearly the enthusiasm of young people is still there. The field is atypically young – the average age of attendees of a string-theory conference is much lower than that for, say, a QCD or electroweak physics conference. What is motivating young theorists? Perhaps the mathematical beauty of string theory, or perhaps the possibility of carrying out many different calculations, publishing them and getting lots of citations.

What advice do you offer young theorists entering the field?

I myself regret that most young string theorists do not address the outstanding physics questions with quantum gravity, such as what’s the fate of the initial singularity of classical cosmology in string theory. These are very hard problems and young people these days cannot afford to spend a couple of years on one such problem without getting out a few papers. When I was young I didn’t care about fashions, I just followed my nose and took risks that eventually paid off. Today it is much harder to do so.

How has theoretical particle physics changed since 1968?

In 1968 we had a lot of data to explain and no good theory for the weak and strong interactions. There was a lot to do and within a few years the Standard Model was built. Today we still have essentially the same Standard Model and we are still waiting for some crisis to come out of the beautiful experiments at CERN and elsewhere. Steven Weinberg used to say that physics thrives on crises. The crises today are more in the domain of cosmology (dark matter, dark energy), the quantum mechanics of black holes and really unifying our understanding of physics at all scales, from the Planck length to our cosmological horizon, two scales that are 60 orders of magnitude apart. Understanding such a hierarchy (together with the much smaller one of the Standard Model) represents, in my opinion, the biggest theoretical challenge for 21st century physics.

The post The roots and fruits of string theory appeared first on CERN Courier.

Feature "People say that string theory doesn’t make predictions, but that’s simply not true." https://cerncourier.com/wp-content/uploads/2003/06/cernstr1_7-03.jpg
Loops and legs in quantum field theory https://cerncourier.com/a/loops-and-legs-in-quantum-field-theory/ Mon, 09 Jul 2018 15:17:48 +0000 https://preview-courier.web.cern.ch?p=13360 The conference brought together more than 100 researchers from 18 countries to discuss the latest results in precision calculations for particle physics at colliders.

The meeting poster. Credit: H Klaes

The international conference Loops and Legs in Quantum Field Theory 2018 took place from 29 April to 4 May near Rheinfels Castle in St Goar, Rhine, Germany. The conference brought together more than 100 researchers from 18 countries to discuss the latest results in precision calculations for particle physics at colliders and associated mathematical, computer-algebraic and numerical calculation technologies. It was the 14th conference in the series, with 87 talks delivered.

Organised biennially by the theory group of DESY at Zeuthen, Loops and Legs is usually held in remote parts of the German countryside to provide a quiet atmosphere and room for intense scientific discussions. The first conference took place in 1992, just as the HERA collider started up, and the next event, close to the start of LEP2 in 1994, concentrated on precision physics at e⁺e⁻ colliders. Since 1996, general precision calculations for physics at high-energy colliders have formed its focus.

This year, the topics covered new results on: the physics of jets; hadronic Higgs-boson and top-quark production; multi-gluon amplitudes; multi-leg two-loop QCD corrections; electroweak corrections at hadron colliders; the Z resonance in e⁺e⁻ scattering; soft resummation in e⁺e⁻ → tt̅; precision determinations of parton distribution functions; the heavy-quark masses and the fundamental coupling constants; g–2; and NNLO and N³LO QCD corrections for various hard processes.

On the technologies side, analytic multi-summation methods, Mellin–Barnes techniques, the solution of large systems of ordinary differential equations and large-scale computer algebra methods were discussed, as well as unitarity methods, cut-methods in integrating Feynman integrals, and new developments in the field of elliptic integral solutions. These techniques finally allow analytic and numeric calculations of the scattering cross-sections for the key processes measured at the LHC.

All of these results are indispensable to make the LHC, in its high-luminosity phase, a real success and to help hunt down signs of physics beyond the Standard Model (CERN Courier April 2017 p18). The calculations need to match the experimental precision in measuring the couplings and masses, in particular for the top-quark and the Higgs sector, and an even more precise understanding of the strong interactions.

Since the first event, when the most advanced results were single-scale two-loop corrections in QCD, the field has taken a breath-taking leap to inclusive five-loop results – like the β functions of the Standard Model, which control the running of the coupling constant to high precision – to mention only one example. In general, the various subfields of this discipline witness a significant advance every two years or so. Many promising young physicists and mathematicians participate and present results. The field became interdisciplinary very rapidly because of the technologies needed, and now attracts many scientists from computing and mathematics.
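
To make the five-loop statement concrete (a textbook expression rather than a conference result), the running of the strong coupling, for example, is governed by

    \mu^2\, d a_s / d\mu^2 = - a_s^2 ( \beta_0 + \beta_1 a_s + \beta_2 a_s^2 + \dots ) ,  with  a_s = \alpha_s / 4\pi  and  \beta_0 = 11 - (2/3) n_f ,

and it is series of this type that have now been pushed to the five-loop coefficient \beta_4, allowing the couplings to be evolved between scales with the high precision referred to above.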

The theoretical problems, on the other hand, also trigger new research, for example in algebraic geometry, number theory and combinatorics. This will be the case even more with future projects, like an ILC, and planned machines such as the FCC, which needs even higher precision. The next conference will be held at the beginning of May 2020.

The post Loops and legs in quantum field theory appeared first on CERN Courier.

Meeting report The conference brought together more than 100 researchers from 18 countries to discuss the latest results in precision calculations for particle physics at colliders. https://cerncourier.com/wp-content/uploads/2018/07/CCJulAug18_Faces-Loopslegs.png
Putting the Pauli exclusion principle on trial https://cerncourier.com/a/putting-the-pauli-exclusion-principle-on-trial/ Fri, 16 Feb 2018 12:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/putting-the-pauli-exclusion-principle-on-trial/ If detected, even the tiniest violation of the exclusion principle would revolutionise physics.


If we tightly grasp a stone in our hands, we neither expect it to vanish nor leak through our flesh and bones. Our experience is that stone and, more generally, solid matter is stable and impenetrable. Last year marked the 50th anniversary of the demonstration by Freeman Dyson and Andrew Lenard that the stability of matter derives from the Pauli exclusion principle. This principle, for which Wolfgang Pauli received the 1945 Nobel Prize in Physics, is based on ideas so prevalent in fundamental physics that their underpinnings are rarely questioned. Here, we celebrate and reflect on the Pauli principle, and survey the latest experimental efforts to test it.

The exclusion principle (EP), which states that no two fermions can occupy the same quantum state, has been with us for almost a century. In his Nobel lecture, Pauli provided a deep and broad-ranging account of its discovery and its connections to unsolved problems of the newly born quantum theory. In the early 1920s, before Schrödinger’s equation and Heisenberg’s matrix algebra had come along, a young Pauli performed an extraordinary feat when he postulated both the EP and what he called “classically non-describable two-valuedness” – an early hint of the existence of electron spin – to explain the structure of atomic spectra.

At that time the EP met with some resistance and Pauli himself was dubious about the concepts that he had somewhat recklessly introduced. The situation changed significantly after the introduction in 1925 of the electron-spin concept and its identification with Pauli’s two-valuedness, which derived from the empirical ideas of Lande, an initial suggestion by Kronig, and an independent paper by Goudsmit and Uhlenbeck. By introducing the picture of the electron as a small classical sphere with a spin that could point in just two directions, both Kronig, and Goudsmit and Uhlenbeck, were able to compute the fine-structure splitting of atomic hydrogen, although they still missed a critical factor of two. These first steps were followed by the relativistic calculations of Thomas, by the spin calculus of Pauli, and finally, in 1928, by the elegant wave equation of Dirac, which put an end to all resistance against the concept of spin.

However, a theoretical explanation of the EP had to wait for some time. Just before the Second World War, Pauli and Markus Fierz made significant progress toward this goal, followed by the publication in 1940 by Pauli of his seminal paper “The connection between spin and statistics”. This paper showed that (assuming a relativistically invariant form of causality) the spin of a particle determines the commutation relations, i.e. whether fields commute or anticommute, and therefore the statistics that particles obey. The EP for spin-1/2 fermions follows as a corollary of the spin-statistics connection, and the division of particles into fermions and bosons based on their spins is one of the cornerstones of modern physics.

Beguilingly simple

The EP is beguilingly simple to state, and many physicists have tried to skip relativity and find direct proofs that use ordinary quantum mechanics alone – albeit assuming spin, which is a genuinely relativistic concept. Pauli himself was puzzled by the principle, and in his Nobel lecture he noted: “Already in my original paper I stressed the circumstance that I was unable to give a logical reason for the exclusion principle or to deduce it from more general assumptions. I had always the feeling and I still have it today, that this is a deficiency. …The impression that the shadow of some incompleteness fell here on the bright light of success of the new quantum mechanics seems to me unavoidable.” Even Feynman – who usually outshone others with his uncanny intuition – felt frustrated by his inability to come up with a simple, straightforward justification of the EP: “It appears to be one of the few places in physics where there is a rule which can be stated very simply, but for which no one has found a simple and easy explanation… This probably means that we do not have a complete understanding of the fundamental principle involved. For the moment, you will just have to take it as one of the rules of the world.”

Of special interest

After further theoretical studies, which included new proofs of the spin-statistics connection and the introduction of so-called para-statistics by Green, a possible small violation of the EP was first considered by Reines and Sobel in 1974 when they reanalysed an experiment by Goldhaber and Scharff in 1948. The possibility of small violations was refuted theoretically by Amado and Primakoff in 1980, but the topic was revived in 1987. That year, Russian theorist Lev Okun presented a model of violations of the EP in which he considered modified fermionic states which, in addition to the usual vacuum and one-particle state, also include a two-particle state. Okun wrote that “The special place enjoyed by the Pauli principle in modern theoretical physics does not mean that this principle does not require further and exhaustive experimental tests. On the contrary, it is specifically the fundamental nature of the Pauli principle that would make such tests, over the entire periodic table, of special interest.”

Okun’s model, however, ran into difficulties when attempting to construct a reasonable Hamiltonian, first because the Hamiltonian included nonlocal terms and, second, because Okun did not succeed in constructing a relativistic generalisation of the model. Despite this, his paper strongly encouraged experimental tests in atoms. In the same year (1987), Ignatiev and Kuzmin presented an extension of Okun’s model in a strictly non-relativistic context that was characterised by a “beta parameter” |β| ≪ 1. Not to be confused with the relativistic factor v/c, β is a parameter describing the action of the creation operator on the one-particle state. Using a toy model to illustrate transitions that violate the EP, Ignatiev and Kuzmin deduced that the transition probability for an anomalous two-electron symmetric state is proportional to β²/2, which is still widely used to represent the probability of EP violation.
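
In the Ignatiev–Kuzmin toy model the fermionic ladder operators are deformed so that a small admixture of a doubly occupied state becomes possible (shown here schematically):

    a^\dagger |0\rangle = |1\rangle ,  a^\dagger |1\rangle = \beta\, |2\rangle ,  a^\dagger |2\rangle = 0 ,

so the normally forbidden two-particle state is reached only with amplitude β, and exclusion-principle-violating processes occur with probability of order β² – conventionally quoted as β²/2.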

This non-relativistic approach was criticized by A B Govorkov, who argued that the naive model of Ignatiev and Kuzmin could not be extended to become a fully-fledged quantum field theory. Since causality is an important ingredient in Pauli’s proof of the spin-statistics connection, however, Govorkov’s objections could be bypassed: later in 1987, Oscar Greenberg and Rabindra Mohapatra at the University of Maryland introduced a quantum field theory with continuously deformed commutation relations that led to a violation of causality. The deformation parameter was denoted by the letter q, and the theory was supposed to describe new hypothetical particles called “quons”. However, Govorkov was able to show that even this sleight of hand could not trick quantum field theory into small violations of the EP, demonstrating that the mere existence of antiparticles – again a true relativistic hallmark of quantum field theory – was enough to rule out small violations. The take-home message was that the violation of locality is not enough to break the EP, even “just a little”.
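
The quon construction interpolates continuously between the two familiar statistics through the deformed relation

    a_k a_l^\dagger - q\, a_l^\dagger a_k = \delta_{kl} ,  with  -1 \le q \le 1 ,

where q = −1 reproduces fermions, q = +1 bosons, and values just above −1 would describe “almost fermions” with small violations of the exclusion principle – the loophole that Govorkov’s argument closed.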

The connection between the intrinsic spin of particles and the statistics that they obey is at the heart of quantum field theory and therefore should be tested. A violation of the EP would be revolutionary. It could be related, for example, to a violation of CPT, of locality or of Lorentz invariance. However, we have seen how robust the EP is and how difficult it is to frame a violation within current quantum field theory. Experiments face no lesser difficulties, as noted as early as 1980 by Amado and Primakoff, and there are very few experimental options with which to truly test this tenet of modern physics.

One of the difficulties faced by experiments is that the indistinguishability of elementary particles implies that Hamiltonians must be invariant with respect to particle exchange, and, as a consequence, they cannot change the symmetry of any given state of multiple identical particles. Even in the case of a mixed symmetry of a many-particle system, there is no physical way to induce a transition to a state of different symmetry. This is the essence of the Messiah–Greenberg superselection rule, which can only be broken if a physical system is open.

Breaking the rules

The first dedicated experiment in line with this breaking of the Messiah–Greenberg superselection rule was performed in 1990 by Ramberg and Snow, who searched for Pauli-forbidden X-ray transitions in copper after introducing electrons into the system. The idea is that a power supply injecting an electric current into a copper conductor acts as a source of electrons, which are new to the atoms in the conductor. If these electrons have the “wrong” symmetry they can be radiatively captured into the already occupied 1S level of the copper atoms and emit electromagnetic radiation. The resulting X-rays are influenced by the unusual electron configuration and are slightly shifted towards lower energies with respect to the characteristic X-rays of copper.

Ramberg and Snow did not detect any violation but were able to put an upper bound on the violation probability of β²/2 < 1.7 × 10⁻²⁶. Following their concept, a much improved version of the experiment, called VIP (violation of the Pauli principle), was set up in the LNGS underground laboratory in Gran Sasso, Italy, in 2006. VIP improved significantly on the Ramberg and Snow experiment by using charge-coupled devices (CCDs) as high-resolution X-ray detectors with a large area and high intrinsic efficiency. In the original VIP setup, CCDs were positioned around a pure-copper cylinder; X-rays emitted from the cylinder were measured without and with current up to 40 A. The cosmic background in the LNGS laboratory is strongly suppressed – by a factor of 10⁶ thanks to the overlying rock – and the apparatus was also surrounded by massive lead shielding.
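
The way such X-ray counts translate into a bound on β²/2 can be illustrated with a back-of-the-envelope estimate in the spirit of Ramberg and Snow: the number of “new” electrons supplied by the current, multiplied by the number of atomic encounters each electron undergoes while drifting through the conductor and by the violation probability, gives the expected number of Pauli-forbidden captures, which must not exceed the handful of anomalous X-rays compatible with the measured background. The Python sketch below implements this logic with purely illustrative numbers; all variable names and values are ours, not the published parameters of the Ramberg–Snow or VIP analyses.

    # Schematic Ramberg–Snow-style estimate of an upper bound on beta^2/2.
    # All numbers are illustrative placeholders, not published experimental values.
    e_charge = 1.602e-19     # elementary charge [C]
    current = 40.0           # current through the copper target [A]
    t_run = 1.0e6            # data-taking time [s] (a couple of weeks, illustrative)
    n_encounters = 1.0e8     # assumed atomic encounters per "new" electron (illustrative)
    efficiency = 0.01        # assumed overall X-ray detection efficiency (illustrative)
    n_xray_limit = 300.0     # assumed limit on anomalous X-ray counts (illustrative)

    n_electrons = current * t_run / e_charge       # "new" electrons injected
    n_chances = n_electrons * n_encounters         # Pauli-forbidden capture opportunities
    beta2_over_2 = n_xray_limit / (efficiency * n_chances)

    print(f"new electrons injected : {n_electrons:.2e}")
    print(f"capture opportunities  : {n_chances:.2e}")
    print(f"upper bound on beta^2/2: {beta2_over_2:.1e}")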

Setting limits

After four years of data taking, VIP set a new limit on the EP violation for electrons at β²/2 < 4.7 × 10⁻²⁹. To further enhance the sensitivity, the experiment was upgraded to VIP2, where silicon drift detectors (SDDs) replace CCDs as X-ray detectors. The VIP2 construction started in 2011 and in 2016 the setup was installed in the underground LNGS laboratory, where, after debugging and testing, data-taking started. The SDDs provide a wider solid angle for X-ray detection and this improvement, together with higher current and active shielding with plastic scintillators to limit background, leads to a much better sensitivity. The timing capability of SDDs also helps to suppress background events.

The experimental programme testing for a possible violation of the EP for electrons made great progress in 2017 and had already improved the upper limit set by VIP within the first two months of running time. With a planned duration of three years and alternating measurements with and without current, a two-orders-of-magnitude improvement is expected with respect to the previous VIP upper bound. In the absence of a signal, this will set the limit on violations of the EP at β²/2 < 10⁻³¹.

Experiments like VIP and VIP2 test the spin-statistics connection for one particular kind of fermion: electrons. The case of EP violations for neutrinos was also discussed theoretically by Dolgov and Smirnov. As for bosons, constraints on possible statistics violations come from high-energy-physics searches for decays of vector (i.e. spin-one) particles into two photons. Such decays are forbidden by the Landau–Yang theorem, whose proof incorporates the assumption that the two photons must be produced in a permutation-symmetric state. A complementary approach is to apply spectroscopic tests, as carried out at LENS in Florence during the 1990s, which probe the permutation properties of ¹⁶O nuclei in polyatomic molecules by searching for transitions between states that are antisymmetric under the exchange of two nuclei. If the nuclei are bosons, as in this case, such transitions, if found, would violate the spin-statistics relation. High-sensitivity tests for photons were also performed with spectroscopic methods. As an example, using Bose–Einstein-statistics-forbidden two-photon excitation in barium, the probability for two photons to be in a “wrong” permutation-symmetry state was shown by English and co-workers at Berkeley in 2010 to be less than 4 × 10⁻¹¹ – an improvement of more than three orders of magnitude compared to earlier results.

To conclude, we note that the EP has many associated philosophical issues, as Pauli himself was well aware, and these are being studied within a dedicated project involving VIP collaborators and supported by the John Templeton Foundation. One such issue is the notion of “identicalness”, which does not seem to have an analogue outside quantum mechanics, because there are no two fundamentally identical classical objects.

This ultimate equality of quantum particles leads to all-important consequences governing the structure and dynamics of atoms and molecules, neutron stars and black-body radiation, and it determines our life in all its intricacy. For instance, molecular oxygen in air is extremely reactive, so why do our lungs not just burn? The reason lies in the pairing of electron spins: ordinary oxygen molecules are paramagnetic, with unpaired electrons whose spins are parallel, and in respiration this means that electrons have to be transferred one after the other. This sequential character of electron transfers is due to the EP, and it moderates the rate of oxygen attachment to haemoglobin. Think of that the next time you breathe!

The post Putting the Pauli exclusion principle on trial appeared first on CERN Courier.

]]>
Feature If detected, even the tiniest violation of the exclusion principle would revolutionise physics. https://cerncourier.com/wp-content/uploads/2018/06/CCpau1_02_18.jpg
Particle physics meets quantum optics https://cerncourier.com/a/particle-physics-meets-quantum-optics/ Mon, 15 Jan 2018 16:12:40 +0000 https://preview-courier.web.cern.ch?p=13379 The sixth International Conference on New Frontiers in Physics (ICNFP) took place on 17–29 August in Kolymbari, Crete.

The post Particle physics meets quantum optics appeared first on CERN Courier.

]]>
Photo of Sergio Bertolucci, John Womersley and Victor Matveev

The sixth International Conference on New Frontiers in Physics (ICNFP) took place on 17–29 August in Kolymbari, Crete, Greece, bringing together about 360 participants. Results from LHC Run 2 were shown, in addition to some of the latest advances in quantum optics.

A mini-workshop dedicated to “highly-ionising avatars of new physics” brought together an ever-growing community of theorists, astroparticle physicists and collider experimentalists. There were also presentations on advances in the theory of highly ionising particles, as well as on light monopoles with masses accessible to the LHC and future colliders, and discussions covered experimental searches both extraterrestrial and terrestrial, including results on magnetic monopoles from the MoEDAL experiment at the LHC that have set the strongest limits so far on high-charge monopoles at colliders.

In the “quantum” workshops, this year dedicated to the 85th birthday of theorist Yakir Aharonov, leading experts addressed fundamental concepts and topics in quantum mechanics, such as continuous variables and relativistic quantum information measurement theory, collapse, time’s arrow, entanglement and nonlocality.

In the exotic hadron workshop the nature of the exotic meson X(3872) was discussed in considerable detail, especially with regard to its content: is it a mixture of a hadronic molecule and excited charmonium, or a diquark–antidiquark state? Detailed studies of the decay modes and pT dependence of the production cross section in proton–proton collisions emerged as the two most promising avenues for clarifying this issue. Following the recent LHCb discovery of the doubly charmed Ξcc baryon, new results were reported, including the prediction of a stable bbūd̄ tetraquark and a quark-level analogue of nuclear fusion.

Presentations on the future low-energy heavy-ion accelerator centres, FAIR in Darmstadt and NICA at JINR in Dubna, showed that the projects are progressing on schedule for operation in the mid-2020s. Delegates were also treated to talks on the role of non-commutative geometry as a way to unify gauge theories and gravity, on self-interactions among right-handed neutrinos with masses in the warm-dark-matter regime, and on the subtle physics behind sunsets and the aurora.

The conference ended with two-day workshops on supergravity and strings, and a workshop on the future of fundamental physics. Major future projects were presented, together with visionary talks about the future of accelerators and the challenges ahead in the interaction of fundamental physics and society. The conference also hosted a well-attended special session on physics education and outreach. The next ICNFP conference will take place on 4–12 July 2018 in Kolymbari, Crete.

indico.cern.ch/event/559774

The post Particle physics meets quantum optics appeared first on CERN Courier.

]]>
Meeting report The sixth International Conference on New Frontiers in Physics (ICNFP) took place on 17–29 August in Kolymbari, Crete. https://cerncourier.com/wp-content/uploads/2018/01/CCfac18_01_18.jpg
Doubting darkness https://cerncourier.com/a/doubting-darkness/ Thu, 13 Apr 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/doubting-darkness/ An interview with Erik Verlinde, who argues that dark matter is an illusion caused by an incomplete understanding of gravity.

The post Doubting darkness appeared first on CERN Courier.

]]>

What is wrong with the theory of gravity we have?

The current description of gravity in terms of general relativity has various shortcomings. Perhaps the most important is that we cannot simply apply Einstein’s laws at a subatomic level without generating notorious infinities. There are also conceptual puzzles related to the physics of black holes that indicate that general relativity is not the final answer to gravity, and important lessons learnt from string theory suggesting gravity is emergent. Besides these theoretical issues, there is also a strong experimental motivation to rethink our understanding of gravity. The first is the observation that our universe is experiencing accelerated expansion, suggesting it contains an enormous amount of additional energy. The second is dark matter: additional gravitating but non-luminous mass that explains anomalous galaxy dynamics. Together these entities account for 95 per cent of all the energy in the universe.

Isn’t the evidence for dark matter overwhelming?

It depends who you ask. There is a lot of evidence that general relativity works very well at length scales that are long compared to the Planck scale, but when we apply general relativity at galactic and cosmological scales we see deviations. Most physicists regard this as evidence that there exists an additional form of invisible matter that gravitates in the same way as normal matter, but this assumes that gravity itself is still described by general relativity. Furthermore, although the most direct evidence for the existence of dark matter comes from the study of galaxies and clusters, not all astronomers are convinced that what they observe is due to particle dark matter – for example, there appears to be a strong correlation between the amount of ordinary baryonic matter and galactic rotation velocities that is hard to explain with particle dark matter. On the other hand, the physicists who are carrying out numerical work on particle dark matter are trying to explain these correlations by including complicated baryonic feedback mechanisms and tweaking the parameters that go into their models. Finally, there is a large community of experimental physicists who simply take the evidence for dark matter as a given.

Is your theory a modification of general relativity, or a rewrite?

The aim of emergent gravity is to derive the equations that govern gravity from a microscopic quantum theory, using ingredients from quantum-information theory. One of the main ideas is that different parts of space–time are glued together via quantum entanglement. This is due to van Raamsdonk and has been extended and popularised by Maldacena and Susskind with the slogan “EPR = ER”, where EPR is a reference to Einstein–Podolsky–Rosen and ER refers to the Einstein–Rosen bridge: a “wormhole” that connects the two parts of the black-hole geometry on opposite sides of the horizon. These ideas are being developed by many theorists, in particular in the context of the Anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence. The goal is then to derive the Einstein equations from this microscopic quantum perspective. The first step in this programme was already made before my work, but until now most results were derived for AdS space, which describes a universe with a negative cosmological constant and therefore differs from our own. In my recent paper [arXiv:1611.02269] I extended these ideas to de Sitter space, which contains a positive dark energy and has a cosmological horizon. My insight has been that, due to the presence of positive dark energy, the derivation of the Einstein equations breaks down precisely in the circumstances where we observe the effects of “dark matter”.

How did the idea emerge?

The idea of emergent gravity from thermodynamics has been lingering around since the discovery by Hawking and Bekenstein of black-hole entropy and the laws of black-hole thermodynamics in the 1970s. Ted Jacobson made an important step in 1996 by deriving the Einstein equations from assuming the Bekenstein–Hawking formula, which expresses the microscopic entropy in terms of the area of the horizon measured in Planck units. In my 2010 paper [arXiv:1001.0785] I clarified the origin of the inertia force and its relation to the microscopic entropy in space, assuming that this is given by the area of an artificial horizon. After this work I started thinking about cosmology, and learnt about the observations that indicate a close connection between the acceleration scale in galaxies and the acceleration at the cosmological horizon, which is determined by the Hubble parameter. I immediately realised that this implied a relation between the observed phenomena associated with dark matter and the presence of dark energy.
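
For orientation, the Bekenstein–Hawking formula mentioned above, S = kB c³A/(4ħG), already yields striking numbers for astrophysical black holes. The short Python calculation below is our own illustration (not part of the interview) and evaluates it for a black hole of one solar mass.

    import math

    # Bekenstein–Hawking entropy S = kB * c^3 * A / (4 * hbar * G)
    # for a Schwarzschild black hole of one solar mass.
    G, c = 6.674e-11, 2.998e8           # SI units
    hbar, kB = 1.055e-34, 1.381e-23
    M_sun = 1.989e30                    # kg

    r_s = 2 * G * M_sun / c**2          # Schwarzschild radius, about 3 km
    area = 4 * math.pi * r_s**2         # horizon area [m^2]
    S = kB * c**3 * area / (4 * hbar * G)

    print(f"horizon radius : {r_s:.2e} m")
    print(f"entropy        : {S:.2e} J/K  (~{S / kB:.1e} in units of kB)")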

Your paper is 50 pages long. Can you summarise it here?

The idea is that gravity emerges by applying an analogue of the laws of thermodynamics to the entanglement entropy in the vacuum. Just like the normal laws of thermodynamics can be understood from the statistical treatment of microscopic molecules, gravity can be derived from the microscopic units that make up space–time. These “space–time molecules” are units of quantum information (qubits) that are entangled with each other, with the amount of entanglement measured by the entanglement entropy. I realised that in a universe with a positive dark energy, there is a contribution to the entanglement entropy that grows in proportion to area. This leads to an additional force on top of the usual gravity law, because the dark energy “pushes back” like an elastic medium and results in the phenomena that we currently attribute to dark matter. In short, the laws of gravity differ in the low-acceleration regime that occurs in galaxies and other cosmological structures.

How did the community react to the paper?

Submitting work that goes against a widely supported theory requires some courage, and the fact that I have already demonstrated serious work in string theory helped. Nevertheless, I do experience some resistance – mainly from researchers who have been involved in particle dark-matter research. Some string theorists find my work interesting and exciting, but most of them take a “wait and see” attitude. I am dealing with a number of different communities with different attitudes and scientific backgrounds. A lot of it is driven by sociology and past investments.

How often do you work on the idea?

Emergent gravity from quantum entanglement is now an active field worldwide, and I have worked on the idea for a number of years. I mostly work in the evening for around three hours and perhaps one hour in the morning. I also discuss these ideas with my PhD students, colleagues and visitors. In the Netherlands we have quite a large community working on gravity and quantum entanglement, and recently we received a grant together with theorists from the universities of Groningen, Leiden, Utrecht and Amsterdam, to work on this topic.

Within a month of your paper, Brouwer et al. published results supporting your idea. How significant is this?

My theory predicts that the gravitational force due to a centralised mass exhibits a certain scaling relation. This relation was already known to hold for galaxy rotation curves, but these can only be measured up to distances of about 100 kiloparsecs because there are no visible stars beyond this distance. Brouwer and her collaborators used weak gravitational lensing to determine the gravitational force due to a massive galaxy up to distances of one megaparsec and confirmed that the same relation still holds. Particle dark-matter models can also explain these observations, but they do so by adjusting a free parameter to fit the data. My prediction has no free parameters and hence I find this more convincing, but more observations are needed before definite conclusions can be drawn.
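
In the deep low-acceleration regime the scaling relation referred to here takes the parameter-free form g_obs ≈ √(a₀ g_baryon), with a₀ tied to the Hubble scale; the precise expression derived in Verlinde’s paper differs in its coefficients, so the Python sketch below should be read only as our illustration of how a relation of this type flattens rotation curves without any fitted dark-matter parameters.

    import math

    # Illustrative low-acceleration scaling g_obs ~ g_bar + sqrt(a0 * g_bar),
    # the generic form of the relation discussed in the interview.
    # Coefficients and galaxy parameters are order-of-magnitude placeholders.
    G, c = 6.674e-11, 2.998e8
    H0 = 2.2e-18                        # Hubble parameter [1/s] (~68 km/s/Mpc)
    a0 = c * H0 / (2 * math.pi)         # characteristic acceleration ~1e-10 m/s^2
    M_baryon = 1.0e41                   # baryonic mass of a galaxy [kg] (~5e10 solar masses)
    kpc = 3.086e19                      # metres per kiloparsec

    for r_kpc in (5, 10, 20, 50, 100):
        r = r_kpc * kpc
        g_bar = G * M_baryon / r**2                 # Newtonian, baryons only
        g_obs = g_bar + math.sqrt(a0 * g_bar)       # add the low-acceleration piece
        v_kms = math.sqrt(g_obs * r) / 1e3          # circular velocity [km/s]
        print(f"r = {r_kpc:3d} kpc   v = {v_kms:5.0f} km/s")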

Is there a single result that would rule your theory in or out?

If a dark-matter particle were discovered that possesses all the properties needed to explain all the observations, then my idea would be proven false. Personally I am convinced this will not happen, although I am still developing the theory further to be able to address important dynamical situations such as the Bullet Cluster (see “How dark matter became a particle”) and the acoustic oscillations that explain the power spectrum of the cosmic microwave background. One of the problems is that particle dark-matter models are so flexible and can therefore easily be made consistent with the data. By improving and extending the observations of gravitational phenomena that are currently attributed to dark matter, we can make better comparisons with the theory. I am hopeful that within the next decade the precision of the observations will have improved and the theory will be developed to a level at which decisive tests can be performed.

How would emergent gravity affect the rest of physics?

Our perspective on the building blocks of nature would change drastically. We will no longer think in terms of elementary particles and fundamental forces, but units of quantum information. Hence, the gauge forces responsible for the electroweak and strong interactions will also be understood as being emergent, and this is the way that the forces of nature will become unified. In this sense, all of our current laws of nature will be seen as emergent.

The post Doubting darkness appeared first on CERN Courier.

]]>
Feature An interview with Erik Verlinde, who argues that dark matter is an illusion caused by an incomplete understanding of gravity. https://cerncourier.com/wp-content/uploads/2018/06/CCint1_04_17.jpg
General relativity at 100 https://cerncourier.com/a/general-relativity-at-100/ Fri, 13 Jan 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/general-relativity-at-100/ Testing Einstein’s masterpiece with ever increasing precision.

The post General relativity at 100 appeared first on CERN Courier.

]]>

Einstein’s long path towards general relativity (GR) began in 1907, just two years after he created special relativity (SR), when the following apparently trivial idea occurred to him: “If a person falls freely, he will not feel his own weight.” Although it was long known that all bodies fall in the same way in a gravitational field, Einstein raised this thought to the level of a postulate: the equivalence principle, which states that there is complete physical equivalence between a homogeneous gravitational field and an accelerated reference frame. After eight years of hard work and deep thinking, in November 1915 he succeeded in extracting from this postulate a revolutionary theory of space, time and gravity. In GR, our best description of gravity, space–time ceases to be an absolute, non-dynamical framework as envisaged by the Newtonian view, and instead becomes a dynamical structure that is deformed by the presence of mass-energy.

GR has led to profound new predictions and insights that underpin modern astrophysics and cosmology, and which also play a central role in attempts to unify gravity with other interactions. By contrast to GR, our current description of the fundamental constituents of matter and of their non-gravitational interactions – the Standard Model (SM) – is given by a quantum theory of interacting particles of spins 0, ½ and 1 that evolve within the fixed, non-dynamical Minkowski space–time of SR. The contrast between the homogeneous, rigid and matter-independent space–time of SR and the inhomogeneous, matter-deformed space–time of GR is illustrated in figure 1.

The universality of the coupling of gravity to matter (which is the most general form of the equivalence principle) has many observable consequences such as: constancy of the physical constants; local isotropy of space; local Lorentz invariance; universality of free fall and universality of gravitational redshift. Many of these have been verified to high accuracy. For instance, the universality of the acceleration of free fall has been verified on Earth at the 10⁻¹³ level, while the local isotropy of space has been verified at the 10⁻²² level. Einstein’s field equations (see panel below) also predict many specific deviations from Newtonian gravity that can be tested in the weak-field, quasi-stationary regime appropriate to experiments performed in the solar system. Two of these tests – Mercury’s perihelion advance, and light deflection by the Sun – were successfully performed, although with limited precision, soon after the discovery of GR. Since then, many high-precision tests of such post-Newtonian gravity have been performed in the solar system, and GR has passed each of them with flying colours.

Precision tests

Similar to what is done in precision electroweak experiments, it is useful to quantify the significance of precision gravitational experiments by parameterising plausible deviations from GR. The simplest, and most conservative, deviation from Einstein’s pure spin-2 theory is defined by adding a long-range (massless) spin-0 field, φ, coupled to the trace of the energy-momentum tensor. The most general such theory respecting the universality of gravitational coupling contains an arbitrary function of the scalar field defining the “observable metric” to which the SM matter is minimally and universally coupled.

In the weak-field slow-motion limit, appropriate to describing gravitational experiments in the solar system, the addition of φ modifies Einstein’s predictions only through the appearance of two dimensionless parameters, γ and β. The best current limits on these “post-Einstein” parameters are, respectively, (2.1 ± 2.3) × 10⁻⁵ (deduced from the additional Doppler shift experienced by radio-wave beams connecting the Earth to the Cassini spacecraft when they passed near the Sun) and < 7 × 10⁻⁵, from a study of the global sensitivity of planetary ephemerides to post-Einstein parameters.
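
To make the meaning of a bound on γ concrete, recall that in the parametrised post-Newtonian framework the deflection of a light ray grazing the Sun is δθ = [(1 + γ)/2] × 4GM☉/(c²R☉), which reduces to the familiar 1.75 arcsec of GR for γ = 1. The minimal Python check below (our own illustration) shows how little room the Cassini bound leaves.

    import math

    # PPN light deflection at the solar limb:
    #   delta_theta = (1 + gamma)/2 * 4*G*M_sun / (c^2 * R_sun)
    G, c = 6.674e-11, 2.998e8
    M_sun, R_sun = 1.989e30, 6.957e8    # kg, m

    def deflection_arcsec(gamma):
        rad = (1 + gamma) / 2 * 4 * G * M_sun / (c**2 * R_sun)
        return math.degrees(rad) * 3600

    gr = deflection_arcsec(1.0)                  # general relativity
    edge = deflection_arcsec(1.0 + 2.3e-5)       # at the edge of the Cassini bound

    print(f"GR prediction         : {gr:.5f} arcsec")
    print(f"gamma = 1 + 2.3e-5    : {edge:.5f} arcsec")
    print(f"fractional difference : {(edge - gr) / gr:.1e}")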

In the regime of radiative and/or strong gravitational fields, by contrast, pulsars (rotating neutron stars emitting a beam of radio waves) in gravitationally bound orbits have provided crucial tests of GR. In particular, measurements of the decay in the orbital period of binary pulsars have provided direct experimental confirmation of the propagation properties of the gravitational field. Theoretical studies of binaries in GR have shown that the finite velocity of propagation of the gravitational interaction between the pulsar and its companion generates damping-like terms at order (v/c)⁵ in the equations of motion that lead to a small orbital period decay. This has been observed in more than four different systems since the discovery of binary pulsars in 1974, providing direct proof of the reality of gravitational radiation. Measurements of the arrival times of pulsar signals have also allowed precision tests of the quasi-stationary strong-field regime of GR, since their values may depend both on the unknown masses of the binary system and on the theory of gravity used to describe the strong self-gravity of the pulsar and its companion (figure 2).
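
The size of the effect can be sketched with the standard quadrupole-order (Peters–Mathews) expression for the orbital-period decay of an eccentric binary. The Python snippet below, our illustration with rounded Hulse–Taylor-like parameters, gives Ṗb of order −2 × 10⁻¹², i.e. a shortening of the 7.75 hour orbit by a few tens of microseconds per year – the size of effect that has been tracked in binary pulsars for decades.

    import math

    # Quadrupole-order (Peters–Mathews) orbital-period decay:
    #   dPb/dt = -(192*pi/5) * G^(5/3)/c^5 * (2*pi/Pb)^(5/3)
    #            * m1*m2*(m1+m2)^(-1/3) * (1 - e^2)^(-7/2)
    #            * (1 + 73/24*e^2 + 37/96*e^4)
    # Rounded Hulse–Taylor-like parameters, for illustration only.
    G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

    m1, m2 = 1.44 * M_sun, 1.39 * M_sun    # pulsar and companion masses
    Pb = 27907.0                           # orbital period [s] (~7.75 h)
    e = 0.617                              # orbital eccentricity

    enhancement = 1 + (73 / 24) * e**2 + (37 / 96) * e**4
    dPb_dt = (-(192 * math.pi / 5) * G**(5 / 3) / c**5
              * (2 * math.pi / Pb)**(5 / 3)
              * m1 * m2 * (m1 + m2)**(-1 / 3)
              * (1 - e**2)**(-7 / 2) * enhancement)

    print(f"dPb/dt ~ {dPb_dt:.2e} (dimensionless)")
    print(f"period change per year ~ {dPb_dt * 3.156e7 * 1e6:.0f} microseconds")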

The radiation revelation

In two papers, of June 1916 and January 1918, Einstein showed that his field equations admit wave-like solutions (see panel below). For many years, however, the emission of gravitational waves (GWs) by known sources was viewed as being too weak to be of physical significance. In addition, several authors – including Einstein himself – had voiced doubts about the existence of GWs in fully nonlinear GR.

The situation changed in the early 1960s when Joseph Weber understood that GWs arriving on Earth would have observable effects and developed sensitive resonant detectors (“Weber bars”) to search for them. Then, prompted by Weber’s experimental effort, Freeman Dyson realised that, when applying the quadrupolar energy-loss formula derived by Einstein to binary systems made of neutron stars, “the loss of energy by gravitational radiation will bring the two stars closer with ever-increasing speed, until in the last second of their lives they plunge together and release a gravitational flash at a frequency of about 200 cycles and of unimaginable intensity.” The vision of Dyson has recently been realised thanks, on the one hand, to the experimental development of drastically more sensitive non-resonant kilometre-scale interferometric detectors and, on the other hand, to theoretical advances that allowed one to predict in advance the accurate shape of the GW signals emitted by coalescing systems of neutron stars and black holes (BHs).

The recent observations of the LIGO interferometers have provided the first detection of GWs in the wave zone. They also provide the first direct evidence of the existence of BHs via the observation of their merger, followed by an abrupt shut-off of the GW signal, in complete accord with the GR predictions.

BHs are perhaps the most extraordinary consequence of GR, because of the extreme distortion of space and time that they exhibit. In January 1916, Karl Schwarzschild published the first exact solution of the (vacuum) Einstein equations, supposedly describing the gravitational field of a “mass point” in GR. It took about 50 years to fully grasp the meaning and astrophysical plausibility of these Schwarzschild BHs. Two of the key contributions that led to our current understanding of BHs came from Oppenheimer and Snyder, who in 1939 suggested that a neutron star exceeding its maximum possible mass will undergo gravitational collapse and thereby form a BH, and from Kerr 25 years later, who discovered a generalisation of the Schwarzschild solution describing a BH endowed both with mass and spin.

Another remarkable consequence of GR is theoretical cosmology, namely the possibility of describing the kinematics and the dynamics of the whole material universe. The field of relativistic cosmology was ushered in by a 1917 paper by Einstein. Another key contribution was the 1924 paper of Friedmann that described general families of spatially curved, expanding or contracting homogeneous cosmological models. The Friedmann models still constitute the background models of the current, inhomogeneous cosmologies. Quantitative confirmations of GR on cosmological scales have also been obtained, notably through the observation of a variety of gravitational lensing systems.

Dark clouds ahead

In conclusion, all present experimental gravitational data (universality of free fall, post-Newtonian gravity, radiative and strong-field effects in binary pulsars, GW emission by coalescing BHs and gravitational lensing) have been found to be compatible with the predictions of Einstein’s theory. There are also strong constraints on sub-millimetre modifications of Newtonian gravity from torsion-balance tests of the inverse square law.

One might, however, wish to keep in mind the presence of two dark clouds in our current cosmology: to account for the current observations, most of the stress-energy tensor on the right-hand side of the GR field equations must be assumed to consist of yet unseen components – dark matter and a “cosmological constant”. It has been suggested that these signal a breakdown of Einstein’s gravitation at large scales, although no convincing theoretical modification of GR at large distances has yet been put forward.

GWs, BHs and dynamical cosmological models have become essential elements of our description of the macroscopic universe. The recent and bright beginning of GW astronomy suggests that GR will be an essential tool for discovering new aspects of the universe (see “The dawn of a new era”). A century after its inception, GR has established itself as the standard theoretical description of gravity, with applications ranging from the Global Positioning System and the dynamics of the solar system, to the realm of galaxies and the primordial universe.

However, in addition to the “dark clouds” of dark matter and energy, GR also poses some theoretical challenges. There are both classical challenges (notably the formation of space-like singularities inside BHs), and quantum ones (namely the non-renormalisability of quantum gravity – see “Gravity’s quantum side”). It is probable that a full resolution of these challenges will be reached only through a suitable extension of GR, and possibly through its unification with the current “spin ≤ 1” description of particle physics, as suggested both by supergravity and by superstring theory.

It is therefore vital that we continue to submit GR to experimental tests of increasing precision. The foundational stone of GR, the equivalence principle, is currently being probed in space at the 10⁻¹⁵ level by the MICROSCOPE satellite mission of ONERA and CNES. The observation of a deviation of the universality of free fall would imply that Einstein’s purely geometrical description of gravity needs to be completed by including new long-range fields coupled to bulk matter. Such an experimental clue would be most valuable to indicate the road towards a more encompassing physical theory.

General relativity makes waves

There are two equivalent ways of characterising general relativity (GR). One describes gravity as a universal deformation of the Minkowski metric, which defines a local squared interval between two infinitesimally close space–time points and, consequently, the infinitesimal light cones describing the local propagation of massless particles. The metric field gμν is assumed in GR to be universally and minimally coupled to all the particles of the Standard Model (SM), and to satisfy Einstein’s field equations:

Rμν − ½ gμν R = (8πG/c⁴) Tμν

Here, Rμν denotes the Ricci curvature (a nonlinear combination of gμν and of its first and second derivatives), R its trace, Tμν is the stress-energy tensor of the SM particles (and fields), and G denotes Newton’s gravitational constant.

The second way of defining GR, as proven by Richard Feynman, Steven Weinberg, Stanley Deser and others, states that it is the unique, consistent, local, special-relativistic theory of a massless spin-2 field. It is then found that the couplings of the spin-2 field to the SM matter are necessarily equivalent to a universal coupling to a “deformed” space–time metric, and that the propagation and self-couplings of the spin-2 field are necessarily described by Einstein’s equations.

Following the example of Maxwell, who had found that the electromagnetic-field equations admit propagating waves as solutions, Einstein found that the GR field equations admit propagating gravitational waves (GWs). He did so by considering the weak-field limit (gμν  = ημν + hμν) of his equations, namely,

□h̄μν + ημν ∂ρ∂σh̄ρσ − ∂ρ∂μh̄νρ − ∂ρ∂νh̄μρ = −(16πG/c⁴) Tμν

where h̄μν = hμν − ½ h ημν. When choosing the co-ordinate system so as to satisfy the gravitational analogue of the Lorenz gauge condition, so that

∂μh̄μν = 0,

the linearised field equations simplify to the diagonal inhomogeneous wave equation □h̄μν = −(16πG/c⁴) Tμν, which can be solved by retarded potentials.

There are two main results that derive from this wave equation: first, a GW is locally described by a plane wave with two transverse tensorial polarisations (corresponding to the two helicity states of the massless spin-2 graviton) and travelling at the velocity of light; second, a slowly moving, non-self-gravitating source predominantly emits a quadrupolar GW.
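
To connect these results to what interferometers measure, one can estimate the strain of the dominant quadrupole wave from an optimally oriented circular binary as h ≈ (4/r)(GMc/c²)^5/3(πf/c)^2/3, where Mc is the chirp mass, f the GW frequency and r the distance. The Python sketch below, with rounded GW150914-like numbers of our own choosing, shows why kilometre-scale detectors must resolve fractional length changes of order 10⁻²¹.

    import math

    # Strain of the dominant quadrupole wave from a circular compact binary:
    #   h ~ (4/r) * (G*Mc/c^2)^(5/3) * (pi*f/c)^(2/3)
    # Rounded GW150914-like numbers, for illustration only.
    G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
    Mpc = 3.086e22                      # metres per megaparsec

    Mc = 28 * M_sun                     # chirp mass
    f = 100.0                           # gravitational-wave frequency [Hz]
    r = 410 * Mpc                       # distance to the source

    h = 4 / r * (G * Mc / c**2)**(5 / 3) * (math.pi * f / c)**(2 / 3)
    print(f"strain h ~ {h:.1e}")        # of order 1e-21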

The post General relativity at 100 appeared first on CERN Courier.

]]>
Feature Testing Einstein’s masterpiece with ever increasing precision. https://cerncourier.com/wp-content/uploads/2018/06/CCgen1_01_17.jpg
Gravity’s quantum side https://cerncourier.com/a/gravitys-quantum-side/ Fri, 13 Jan 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/gravitys-quantum-side/ There is little doubt that, in spite of their overwhelming success in describing phenomena over a vast range of distances, general relativity (GR) and the Standard Model (SM) of particle physics are incomplete theories. Concerning the SM, the problem is often cast in terms of the remaining open issues in particle physics, such as its […]

The post Gravity’s quantum side appeared first on CERN Courier.

]]>

There is little doubt that, in spite of their overwhelming success in describing phenomena over a vast range of distances, general relativity (GR) and the Standard Model (SM) of particle physics are incomplete theories. Concerning the SM, the problem is often cast in terms of the remaining open issues in particle physics, such as its failure to account for the origin of the matter–antimatter asymmetry or the nature of dark matter. But the real problem with the SM is theoretical: it is not clear whether it makes sense at all as a theory beyond perturbation theory, and these doubts extend to the whole framework of quantum field theory (QFT) (with perturbation theory as the main tool to extract quantitative predictions). The occurrence of “ultraviolet” (UV) divergences in Feynman diagrams, and the need for an elaborate mathematical procedure called renormalisation to remove these infinities and make testable predictions order-by-order in perturbation theory, strongly point to the necessity of some other and more complete theory of elementary particles.

On the GR side, we are faced with a similar dilemma. Like the SM, GR works extremely well in its domain of applicability and has so far passed all experimental tests with flying colours, most recently and impressively with the direct detection of gravitational waves (see “General relativity at 100”). Nevertheless, the need for a theory beyond Einstein is plainly evident from the existence of space–time singularities such as those occurring inside black holes or at the moment of the Big Bang. Such singularities are an unavoidable consequence of Einstein’s equations, and the failure of GR to provide an answer calls into question the very conceptual foundations of the theory.

Unlike quantum theory, which is rooted in probability and uncertainty, GR is based on notions of smoothness and geometry and is therefore subject to classical determinism. Near a space–time singularity, however, the description of space–time as a continuum is expected to break down. Likewise, the assumption that elementary particles are point-like, a cornerstone of QFT and the reason for the occurrence of ultraviolet infinities in the SM, is expected to fail in such extreme circumstances. Applying conventional particle-physics wisdom to Einstein’s theory by quantising small fluctuations of the metric field (corresponding to gravitational waves) cannot help either, since it produces non-renormalisable infinities that undermine the predictive power of perturbatively quantised GR.

In the face of these problems, there is a wide consensus that the outstanding problems of both the SM and GR can only be overcome by a more complete and deeper theory: a theory of quantum gravity (QG) that possibly unifies gravity with the other fundamental interactions in nature. But how are we to approach this challenge?

Planck-scale physics

Unlike with quantum mechanics, whose development was driven by the need to explain observed phenomena such as the existence of spectral lines in atomic physics, nature gives us very few hints of where to look for QG effects. One main obstacle is the sheer smallness of the Planck length, of the order 10⁻³³ cm, which is the scale at which QG effects are expected to become visible (conversely, in terms of energy, the relevant scale is 10¹⁹ GeV, which is 15 orders of magnitude greater than the energy range accessible to the LHC). There is no hope of ever directly measuring genuine QG effects in the laboratory: with zillions of gravitons in even the weakest burst of gravitational waves, realising the gravitational analogue of the photoelectric effect will forever remain a dream.
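
The numbers quoted here follow from dimensional analysis with ħ, G and c alone, as the short Python check below illustrates.

    import math

    # Planck length l_P = sqrt(hbar*G/c^3) and Planck energy E_P = sqrt(hbar*c^5/G)
    hbar, G, c = 1.055e-34, 6.674e-11, 2.998e8
    joule_per_GeV = 1.602e-10

    l_P = math.sqrt(hbar * G / c**3)                    # ~1.6e-35 m = 1.6e-33 cm
    E_P = math.sqrt(hbar * c**5 / G) / joule_per_GeV    # ~1.2e19 GeV

    print(f"Planck length : {l_P:.2e} m")
    print(f"Planck energy : {E_P:.2e} GeV")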

One can nevertheless speculate that QG might manifest itself indirectly, for instance via measurable features in the cosmic microwave background, or cumulative effects originating from a more granular or “foamy” space–time. Alternatively, perhaps a framework will emerge that provides a compelling explanation for inflation, dark energy and the origin of the universe. Although not completely hopeless, available proposals typically do not allow one to unambiguously discriminate between very different approaches, for instance when contrarian schemes like string theory and loop quantum gravity vie to explain features of the early universe. And even if evidence for new effects was found in, say, cosmic-ray physics, these might very well admit conventional explanations.

In the search for a consistent theory of QG, it therefore seems that we have no other choice but to try to emulate Einstein’s epochal feat of creating a new theory out of purely theoretical considerations.

Emulating Einstein

Yet, after more than 40 years of unprecedented collective intellectual effort, different points of view have given rise to a growing diversification of approaches to QG – with no convergence in sight. It seems that theoretical physics has arrived at crossroads, with nature remaining tight-lipped about what comes after Einstein and the SM. There is currently no evidence whatsoever for any of the numerous QG schemes that have been proposed – no signs of low-energy supersymmetry, large extra dimensions or “stringy” excitations have been seen at the LHC so far. The situation is no better for approaches that do not even attempt to make predictions that could be tested at the LHC.

Existing approaches to QG fall roughly into two categories, reflecting a basic schism that has developed in the community. One is based on the assumption that Einstein’s theory can stand on its own feet, even when confronted with quantum mechanics. This would imply that QG is nothing more than the non-perturbative quantisation of Einstein’s theory and that GR, suitably treated and eventually complemented by the SM, correctly describes the physical degrees of freedom also at the very smallest distances. The earliest incarnation of this approach goes back to the pioneering work of John Wheeler and Bryce DeWitt in the early 1960s, who derived a GR analogue of the Schrödinger equation in which the “wave function of the universe” encodes the entire information about the universe as a quantum system. Alas, the non-renormalisable infinities resurface in a different guise: the Wheeler–DeWitt equation is so ill-defined mathematically that no one until now has been able to make sense of it beyond mere heuristics. More recent variants of this approach in the framework of loop quantum gravity (LQG), spin foams and group field theory replace the space–time metric by new variables (Ashtekar variables, or holonomies and fluxes) in a renewed attempt to overcome the mathematical difficulties.

The opposite attitude is that GR is only an effective low-energy theory arising from a more fundamental Planck-scale theory, whose basic degrees of freedom are very different from GR or quantum field theory. In this view, GR and space–time itself are assumed to be emergent, much like macroscopic physics emerges from the quantum world of atoms and molecules. The perceived need to replace Einstein’s theory by some other and more fundamental theory, having led to the development of supersymmetry and supergravity, is the basic hypothesis underlying superstring theory (see “The many lives of supergravity”). Superstring theory is the leading contender for a perturbatively finite theory of QG, and widely considered the most promising possible pathway from QG to SM physics. This approach has spawned a hugely varied set of activities and produced many important ideas. Most notable among these, the AdS/CFT correspondence posits that the physics that takes place in some volume can be fully encoded in the surface bounding that volume, as for a hologram, and consequently that QG in the bulk should be equivalent to a pure quantum field theory on its boundary.

Apart from numerous technical and conceptual issues, there remain major questions for all approaches to QG. For LQG-like or “canonical” approaches, the main unsolved problems concern the emergence of classical space–time and the Einstein field equations in the semiclassical limit, and their inability to recover standard QFT results such as anomalies. On the other side, a main shortcoming is the “background dependence” of the quantisation procedure, for which both supergravity and string theory have to rely on perturbative expansions about some given space–time background geometry. In fact, in its presently known form, string theory cannot even be formulated without reference to a specific space–time background.

These fundamentally different viewpoints also offer different perspectives on how to address the non-renormalisability of Einstein’s theory, and consequently on the need (or not) for unification. Supergravity and superstring theory try to eliminate the infinities of the perturbatively quantised theory, in particular by including fermionic matter in Einstein’s theory, thus providing a raison d’être for the existence of matter in the world. They therefore automatically arrive at some kind of unification of gravity, space–time and matter. By contrast, canonical approaches attribute the ultraviolet infinities to basic deficiencies of the perturbative treatment. However, to reconcile this view with semiclassical gravity, they will have to invoke some mechanism – a version of Weinberg’s asymptotic safety – to save the theory from the abyss of non-renormalisability.    

Conceptual challenges

Beyond the mathematical difficulties to formulating QG, there are a host of issues of a more conceptual nature that are shared by all approaches. Perhaps the most important concerns the very ground rules of quantum mechanics: even if we could properly define and solve the Wheeler–DeWitt equation, how are we to interpret the resulting wave function of the universe? After all, the latter pretends to describe the universe in its entirety, but in the absence of outside classical observers, the Copenhagen interpretation of quantum mechanics clearly becomes untenable. On a slightly less grand scale, there are also unresolved issues related to the possible loss of information in connection with the Hawking evaporation of black holes.

A further question that any theory of QG must eventually answer concerns the texture of space–time at the Planck scale: do there exist “space–time atoms” or, more specifically, web-like structures like spin networks and spin foams, as claimed by LQG-like approaches? (see diagram) Or does the space–time continuum get dissolved into a gas of strings and branes, as suggested by some variants of string theory, or emerge from holographic entanglement, as advocated by AdS/CFT aficionados? There is certainly no lack of enticing ideas, but without a firm guiding principle and the prospect of making a falsifiable prediction, such speculations may well end up in the nirvana of undecidable propositions and untestable expectations.

Why then consider unification? Perhaps the strongest argument in favour of unification is that the underlying principle of symmetry has so far guided the development of modern physics from Maxwell’s theory to GR all the way to Yang–Mills theories and the SM (see diagram). It is therefore reasonable to suppose that unification and symmetry may also point the way to a consistent theory of QG. This point of view is reinforced by the fact that the SM, although only a partially unified theory, does already afford glimpses of trans-Planckian physics, independently of whether new physics shows up at the LHC or not. This is because the requirements of renormalisability and vanishing gauge anomalies put very strong constraints on the particle content of the SM, which are indeed in perfect agreement with what we see in detectors. There would be no more convincing vindication of a theory of QG than its ability to predict the matter content of the world (see panel below).

In search of SUSY

Among the promising ideas that have emerged over the past decades, arguably the most beautiful and far reaching is supersymmetry. It represents a new type of symmetry that relates bosons and fermions, thus unifying forces (mediated by vector bosons) with matter (quarks and leptons), and which endows space–time with extra fermionic dimensions. Supersymmetry is very natural from the point of view of cancelling divergences because bosons and fermions generally contribute with opposite signs to loop diagrams. This aspect means that low-energy (N = 1) supersymmetry can stabilise the electroweak scale with regard to the Planck scale, thereby alleviating the so-called hierarchy problem via the cancellation of quadratic divergences. These models predict the existence of a mirror world of superpartners that differ from the SM particles only by their opposite statistics (and their mass), but otherwise have identical internal quantum numbers.
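
Schematically, the stabilisation works because scalar and fermion loops contribute to the Higgs mass-squared with opposite signs, and unbroken supersymmetry ties their couplings and multiplicities together so that the pieces growing with the cut-off Λ cancel. The toy Python sketch below shows only this sign structure; the couplings, multiplicities and overall loop factor are schematic placeholders, not those of any particular model.

    import math

    # Toy illustration of the cancellation of quadratic divergences:
    #   delta_m2 ~ Lambda^2/(16 pi^2) * (n_scalars*lambda_s - 2*n_fermions*y_f^2)
    # With the supersymmetric relations n_scalars = 2*n_fermions and lambda_s = y_f^2
    # the piece proportional to Lambda^2 cancels. All numbers are schematic.
    Lambda = 1.0e16      # cut-off scale [GeV], e.g. a GUT-like scale
    y_f = 1.0            # top-like Yukawa coupling
    n_fermions = 1

    def delta_m2(lambda_s, n_scalars):
        return Lambda**2 / (16 * math.pi**2) * (n_scalars * lambda_s - 2 * n_fermions * y_f**2)

    print(f"no superpartners    : delta m^2 ~ {delta_m2(0.0, 0):+.1e} GeV^2")
    print(f"with SUSY relations : delta m^2 ~ {delta_m2(y_f**2, 2 * n_fermions):+.1e} GeV^2")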

To the great disappointment of many, experimental searches at the LHC so far have found no evidence for the superpartners predicted by N = 1 supersymmetry. However, there is no reason to give up on the idea of supersymmetry as such, since the refutation of low-energy supersymmetry would only mean that the most simple-minded way of implementing this idea does not work. Indeed, the initial excitement about supersymmetry in the 1970s had nothing to do with the hierarchy problem, but rather because it offered a way to circumvent the so-called Coleman–Mandula no-go theorem – a beautiful possibility that is precisely not realised by the models currently being tested at the LHC.

In fact, the reduplication of internal quantum numbers predicted by N = 1 supersymmetry is avoided in theories with extended (N > 1) supersymmetry. Among all supersymmetric theories, maximal N = 8 supergravity stands out as the most symmetric. Its status with regard to perturbative finiteness is still unclear, although recent work has revealed amazing and unexpected cancellations. However, there is one very strange agreement between this theory and observation, first emphasised by Gell-Mann: the number of spin-1/2 fermions remaining after complete breaking of supersymmetry is 48 = 3 × 16, equal to the number of quarks and leptons (including right-handed neutrinos) in three generations (see “The many lives of supergravity”). To go beyond the partial matching of quantum numbers achieved so far will, however, require some completely new insights, especially concerning the emergence of chiral gauge interactions.
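
The counting behind “48 = 3 × 16” is simply the number of Weyl fermions in one SM generation, including a right-handed neutrino, multiplied by three generations; the small Python enumeration below spells it out.

    # Weyl fermions in one Standard Model generation (including a right-handed neutrino):
    # each entry is (name, number of colours, number of chiral states).
    generation = [
        ("up-type quark",   3, 2),   # u_L, u_R in three colours
        ("down-type quark", 3, 2),   # d_L, d_R in three colours
        ("charged lepton",  1, 2),   # e_L, e_R
        ("neutrino",        1, 2),   # nu_L plus a right-handed nu_R
    ]

    per_generation = sum(colours * chiral for _, colours, chiral in generation)
    print(per_generation, "per generation;", 3 * per_generation, "for three generations")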

Then again, perhaps supersymmetry is not the end of the story. There is plenty of evidence that another type of symmetry may be equally important, namely duality symmetry. The first example of such a symmetry, electromagnetic duality, was discovered by Dirac in 1931. He realised that Maxwell’s equations in vacuum are invariant under rotations of the electric and magnetic fields into one another – an insight that led him to predict the existence of magnetic monopoles. While magnetic monopoles have not been seen, duality symmetries have turned out to be ubiquitous in supergravity and string theory, and they also reveal a fascinating and unsuspected link with the so-called exceptional Lie groups.

More recently, hints of an enormous symmetry enhancement have also appeared in a completely different place, namely the study of cosmological solutions of Einstein’s equations near a space-like singularity. This mathematical analysis has revealed tantalising evidence of a truly exceptional infinite-dimensional duality symmetry, which goes by the name of E10, and which “opens up” as one gets close to the cosmological (Big Bang) singularity (see image at top). Could it be that the near-singularity limit can tell us about the underlying symmetries of QG in a similar way as the high-energy limit of gauge theories informs us about the symmetries of the SM? One can validly argue that this huge and monstrously complex symmetry knows everything about maximal supersymmetry and the finite-dimensional dualities identified so far. Equally important, and unlike conventional supersymmetry, E10 may continue to make sense in the Planck regime where conventional notions of space and time are expected to break down. For this reason, duality symmetry could even supersede supersymmetry as a unifying principle.

Outstanding questions

Our summary, then, is very simple: all of the important questions in QG remain wide open, despite a great deal of effort and numerous promising ideas. In the light of this conclusion, the LHC will continue to play a crucial role in advancing our understanding of how everything fits together, no matter what the final outcome of the experiments will be. This is especially true if nature chooses not to abide by current theoretical preferences and expectations.

Over the past decades, we have learnt that the SM is a most economical and tightly knit structure, and there is now mounting evidence that minor modifications may suffice for it to survive to the highest energies. To look for such subtle deviations will therefore be a main task for the LHC in the years ahead. If our view of the Planck scale remains unobstructed by intermediate scales, the popular model-builders’ strategy of adding ever more unseen particles and couplings may come to an end. In that case, the challenge of explaining the structure of the low-energy world from a Planck-scale theory of quantum gravity looms larger than ever.

Einstein on unification

It is well known that Albert Einstein spent much of the latter part of his life vainly searching for unification, although disregarding the nuclear forces and certainly with no intention of reconciling quantum mechanics and GR. Already in 1929, he published a paper on the unified theory (pictured above right, click to enlarge). In this paper, he states with wonderful and characteristic lucidity what the criteria should be of a “good” unified theory: to describe as far as possible all phenomena and their inherent links, and to do so on the basis of a minimal number of assumptions and logically independent basic concepts. The second of these goals (also known as the principle of Occam’s razor) refers to “logical unity”, and goes on to say: “Roughly but truthfully, one might say: we not only want to understand how nature works, but we are also after the perhaps utopian and presumptuous goal of understanding why nature is the way it is and not otherwise.” 

 

The post Gravity’s quantum side appeared first on CERN Courier.

]]>
Feature https://cerncourier.com/wp-content/uploads/2018/06/CCqua1_01_17-1.jpg
The many lives of supergravity https://cerncourier.com/a/the-many-lives-of-supergravity/ Fri, 13 Jan 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/the-many-lives-of-supergravity/ The early 1970s was a pivotal period in the history of particle physics. Following the discovery of asymptotic freedom and the Brout–Englert–Higgs mechanism a few years earlier, it was the time when the Standard Model (SM) of electroweak and strong interactions came into being. After decades of empirical verification, the theory received a final spectacular […]

The post The many lives of supergravity appeared first on CERN Courier.

]]>

The early 1970s was a pivotal period in the history of particle physics. Following the discovery of asymptotic freedom and the Brout–Englert–Higgs mechanism a few years earlier, it was the time when the Standard Model (SM) of electroweak and strong interactions came into being. After decades of empirical verification, the theory received a final spectacular confirmation with the discovery of the Higgs boson at CERN in 2012, and its formulation has also been recognised by Nobel prizes awarded to theoretical physics in 1979, 1999, 2004 and 2013.

It was clear from the start, however, that the SM, a spontaneously broken gauge theory, had two major shortcomings. First, it is not a truly unified theory because the gluons of the strong (colour) force and the photons of electromagnetism do not emerge from a common symmetry. Second, it leaves aside gravity, the other fundamental force of nature, which is based on the gauge principle of general co-ordinate transformations and is described by general relativity (GR).

In the early 1970s, grand unified theories (GUTs), based on larger gauge symmetries that include the SM’s “SU(3) × SU(2) × U(1)” structure, did unify colour and charge – thereby uniting the strong and electroweak interactions. However, they relied on a huge new energy scale (~1016 GeV), just a few orders of magnitude below the Planck scale of gravity (~1019 GeV) and far above the electroweak Fermi scale (~102 GeV), and on new particles carrying both colour and electroweak charges. As a result, GUTs made the stunning prediction that the proton might decay at detectable rates, which was eventually excluded by underground experiments, and their two widely separated cut-off scales introduced a “hierarchy problem” that called for some kind of stabilisation mechanism.

A possible solution came from a parallel but unrelated development. In 1973, Julius Wess and Bruno Zumino unveiled a new symmetry of 4D quantum field theory: supersymmetry, which interchanges bosons and fermions and, as would be better appreciated later, can also conspire to stabilise scale hierarchies. Supersymmetry was inspired by “dual resonance models”, an early version of string theory pioneered by Gabriele Veneziano and extended by André Neveu, Pierre Ramond and John Schwarz. Earlier work done in France by Jean-Loup Gervais and Benji Sakita, and in the Soviet Union by Yuri Golfand and Evgeny Likhtman, and by Dmitry Volkov and Vladimir Akulov, had anticipated some of supersymmetry’s salient features.

An exact supersymmetry would require the existence of superpartners in the SM, but it would also imply mass degeneracies between the known particles and their superpartners. This option has been ruled out over the years by several experiments at CERN, Fermilab and elsewhere, and therefore supersymmetry can be at best broken, with superpartner masses that seem to lie beyond the TeV energy region currently explored at the LHC. Moreover, a spontaneous breaking of supersymmetry would imply the existence of additional massless (“Goldstone”) fermions.

Supergravity, the supersymmetric extension of GR, came to the rescue in this respect. It predicted the existence of a new particle of spin 3/2 called the gravitino that would receive a mass in the broken phase. In this fashion, one or more gravitinos could be potentially very heavy, while the additional massless fermions would be “eaten” – much as it occurs for part of the Higgs doublet in the SM.

Seeking unification

Supergravity, especially when formulated in higher dimensions, was the first concrete realisation of Einstein’s dream of a unified field theory (see diagram opposite). Although the unification of gravity with other forces was the central theme for Einstein during the last part of his life, the beautiful equations of GR were for him a source of frustration. For 30 years he was disturbed by what he considered a deep flaw: one side of the equations contained the curvature of space–time, which he regarded as “marble”, while the other contained the matter energy, which he compared to “wood”. In retrospect, Einstein wanted to turn “wood” into “marble”, but after special and general relativity he failed in this third great endeavour.

GR has, however, proved to be an inestimable source of deep insights for unification. A close scrutiny of general co-ordinate transformations led Theodor Kaluza and Oskar Klein (KK), in the 1920s and 1930s, to link electromagnetism and its Maxwell potentials to internal circle rotations, what we now call a U(1) gauge symmetry. In retrospect, more general rotations could also have led to the Yang–Mills theory, which is a pillar of the SM. According to KK, Maxwell’s theory could be a mere byproduct of gravity, provided the universe contains one microscopic extra dimension beyond time and the three observable spatial ones. In this 5D picture, the photon arises from a portion of the metric tensor – the “marble” in GR – with one “leg” along space–time and the other along the extra dimension.
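
To see the mechanism at a glance – a schematic form in our notation, quoted for orientation rather than taken from the article itself – the 5D line element can be written as

ds^2 = g_{\mu\nu}(x)\,dx^\mu dx^\nu + \phi^2(x)\,\bigl(dy + A_\mu(x)\,dx^\mu\bigr)^2 ,

where y parametrises the small circle. The mixed metric components supply the Maxwell potential A_\mu, and a shift of the internal coordinate, y \to y + \lambda(x), leaves the metric form-invariant provided A_\mu \to A_\mu - \partial_\mu\lambda – precisely a U(1) gauge transformation inherited from 5D general co-ordinate invariance.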

Supergravity follows in this tradition: the gravitino is the gauge field of supersymmetry, just as the photon is the gauge field of internal circle rotations. If one or more local supersymmetries (whose number will be denoted by N) accompany general co-ordinate transformations, they guarantee the consistency of gravitino interactions. In a subclass of “pure” supergravity models, supersymmetry also allows one to connect “marble” and “wood” and therefore goes well beyond the KK mechanism, which does not link Bose and Fermi fields. Curiously, while GR can be formulated in any number of dimensions, seven additional spatial dimensions, at most, are allowed in supergravity due to intricacies of the Fermi–Bose matching.
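
The analogy can be made explicit (in standard textbook notation, with the overall normalisation left open, rather than in the authors’ own conventions): just as electromagnetism is unchanged under A_\mu \to A_\mu + \partial_\mu\lambda(x), the N = 1 gravitino transforms, to lowest order, as

\delta\psi_\mu \propto D_\mu\,\epsilon(x) + \ldots ,

the covariant derivative of a local, space–time-dependent spinor parameter \epsilon(x) – which is exactly what one means by calling it the gauge field of local supersymmetry.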

Last year marked the 40th anniversary of the discovery of supergravity. At its heart lie some of the most beautiful ideas in theoretical physics, and therefore over the years this theory has managed to display different facets or has lived different parallel lives.

Construction begins

The first instance of supergravity, containing a single gravitino (N = 1), was built in the spring of 1976 by Daniel Freedman, Peter van Nieuwenhuizen and one of us (SF). Shortly afterwards, the result was recovered by Stanley Deser and Bruno Zumino, in a simpler and elegant way that extended the first-order (“Palatini”) formalism of GR. Further simplifications emerged once the significance of local supersymmetry was better appreciated. Meanwhile, the “spinning string” – the descendant of dual resonance models that we have already met – was connected to space–time supersymmetry via the so-called Gliozzi–Scherk–Olive (GSO) projection, which reflects a subtle interplay between spin-statistics and strings in space–time. The low-energy spectrum of the resulting models pointed to previously unknown 10D versions of supergravity, which would include the counterparts of several gravitinos, and also to a 4D Yang–Mills theory that is invariant under four distinct supersymmetries (N = 4). A first extended (N = 2) version of 4D supergravity involving two gravitinos came to light shortly after.

When SF visited Caltech in the autumn of 1976, he became aware that Murray Gell-Mann had already worked out many consequences of supersymmetry. In particular, Gell-Mann had realised that the largest “pure” 4D supergravity theory, in which all forces would be connected to the conventional graviton, would include eight gravitinos. Moreover, this N = 8 theory could also allow an SO(8) gauge symmetry, the rotation group in eight dimensions (see table opposite). Although SO(8) would not suffice to accommodate the SU(3) × SU(2) × U(1) symmetry group of the SM, the full interplay between supergravity and supersymmetric matter soon found a proper setting in string theory, as we shall see.

The following years, 1977 and 1978, were most productive and drew many people into the field. Important developments followed readily, including the discovery of reformulations where N = 1 4D supersymmetry is manifest. This technical step was vital to simplify more general constructions involving matter, since only this minimal form of supersymmetry is directly compatible with the chiral (parity-violating) interactions of the SM. Indeed, by the early 1980s, theorists managed to construct complete couplings of supergravity to matter for N = 1 and even for N = 2.

The maximal, pure N = 8 4D supergravity was also derived, via a torus KK reduction, in 1978 by Eugene Cremmer and Bernard Julia. This followed their remarkable construction, with Joel Scherk, of the unique 11D form of supergravity, which displayed a particularly simple structure where a single gravitino accounts for eight 4D ones. In contrast, the N = 8 model is a theory of unprecedented complication. It was built after an inspired guess about the interactions of its 70 scalar fields (see table) and a judicious use of generalised dualities, which extend the manifest symmetry of the Maxwell equations under the interchange of electric and magnetic fields. The N = 8 supergravity with SO(8) gauge symmetry foreseen by Gell-Mann was then constructed by Bernard de Wit and Hermann Nicolai. It revealed a negative vacuum energy, and thus an anti-de Sitter (AdS) vacuum, and was later connected to 11D supergravity via a sphere KK reduction. Regarding the ultraviolet behaviour of supergravity theories, which was vigorously investigated soon after the original discovery, no divergences were found, at one loop, in the “pure” models, and many more unexpected cancellations of divergences have since come to light. The case of N = 8 supergravity is still unsettled, and some authors still expect this maximal theory to be finite to all orders.

The string revolution

Following the discovery of supergravity, the GSO projection opened the way to connect “spinning strings”, or string theory as they came to be known collectively, to supersymmetry. Although the link between strings and gravity had been foreseen by Scherk and Schwarz, and independently by Tamiaki Yoneya, it was only a decade later, in 1984, that widespread activity in this direction began. This followed Schwarz and Michael Green’s unexpected discovery that gauge and gravitational anomalies cancel in all versions of 10D supersymmetric string theory. Anomalies – quantum violations of classical symmetries – are very troublesome when they concern gauge interactions, and their cancellation is a fundamental consistency condition that is automatically granted in the SM by its known particle content.

Anomaly cancellation left just five possible versions of string theory in 10 dimensions: two “heterotic” theories of closed strings, where the SU(3) × SU(2) × U(1) symmetry of the SM is extended to the larger groups SO(32) or E8 × E8; an SO(32) “type-I” theory involving both open and closed strings, akin to segments and circles, respectively; and two other very different and naively less interesting theories called IIA and IIB. At low energies, supergravity emerges from all of these theories in its different 10D realisations, opening up unprecedented avenues for linking 10D strings to the interactions of particle physics. Moreover, the extended nature of strings made all of these enticing scenarios free of the ultraviolet problems of gravity.

Following this 1984 “first superstring revolution”, one might well say that supergravity officially started a second life as a low-energy manifestation of string theory. Anomaly cancellation had somehow connected Einstein’s “marble” and “wood” in a miraculous way dictated by quantum consistency, and definite KK scenarios soon emerged that could recover from string theory both the SM gauge group and its chiral, parity-violating interactions. Remarkably, this construction relied on a specific class of 6D internal manifolds called Calabi–Yau spaces that had been widely studied in mathematics, thereby merging 4D supergravity with algebraic geometry. Calabi–Yau spaces led naturally, in four dimensions, to a GUT gauge group E6, which was known to connect to the SM with right-handed neutrinos, also providing realisations of the see-saw mechanism.

A third life

The early 1990s were marked by many investigations of black-hole-like solutions in supergravity, which soon unveiled new aspects of string theory. Just like the Maxwell field is related to point particles, some of the fields in 10D supergravity are related to extended objects, generically dubbed “p-branes” (p = 0 for particles, p = 1 for strings, p = 2 for membranes, and so on). String theory, being based at low energies on supergravity, therefore could not be merely a theory of strings. Rather, as had been strongly advocated over the years by Michael Duff and Paul Townsend, we face a far more complicated soup of strings and more general p-branes. A novel ingredient was a special class of p-branes, the D-branes, whose role was clarified by Joseph Polchinski, but the electric-magnetic dualities of the low-energy supergravity remained the key tool to analyse the system. The end result, in the mid 1990s, was the awesome, if still somewhat vague, unified picture called M-theory, which was largely due to Edward Witten and marked the “second superstring revolution”. Twenty years after its inception, supergravity thus started a third parallel life, as a deep probe into the mysteries of string theory.

The late 1990s witnessed the emergence of a new duality. The AdS/CFT correspondence, pioneered by Juan Maldacena, is a profound equivalence between supergravity and strings in AdS and conformal field theory (CFT) on its boundary, which connects theories living in different dimensions. This “third superstring revolution” brought to the forefront the AdS versions of supergravity, which thus started a new life as a unique tool to probe quantum field theory in unusual regimes. The last two decades have witnessed many applications of AdS/CFT outside of its original realm. These have touched upon fluid dynamics, quark–gluon plasma, and more recently condensed-matter physics, providing a number of useful insights on strongly coupled matter systems. Perhaps more unexpectedly, AdS/CFT duality has stimulated work related to scattering amplitudes, which may also shed light on the old issue of the ultraviolet behaviour of supergravity. The reverse programme of gaining information about gravity from gauge dynamics has proved harder, and it is difficult to foresee where the next insights will come from. Above all, there is a pressing need to highlight the geometrical principles and the deep symmetries underlying string theory, which have proved elusive over the years.

The interplay between particle physics and cosmology is a natural arena to explore consequences of supergravity. Recent experiments probing the cosmic microwave background, and in particular the results of the Planck mission, lend support to inflationary models of the early universe. An elusive particle, the inflaton, could have driven this primordial acceleration, and although our current grasp of string theory does not allow a detailed analysis of the problem, supergravity can provide fundamental clues on this and the subsequent particle-physics epochs.

Supersymmetry was inevitably broken in a de Sitter-like inflationary phase, where superpartners of the inflaton tend to experience instabilities. The novel ingredient that appears to get around these problems is non-linear supersymmetry, whose foundations lie in the prescient 1973 work of Volkov and Akulov. Non-linear supersymmetry arises when superpartners are exceedingly massive, and seems to play an intriguing role in string theory. The current lack of signals for supersymmetry at the LHC makes one wonder whether it might also hold a prominent place in an eventual picture of particle physics. This resonates with the idea of “split supersymmetry”, which allows for large mass splittings among superpartners and can be accommodated in supergravity at the price of reconsidering hierarchy issues.

In conclusion, attaining a deeper theoretical understanding of broken supersymmetry in supergravity appears crucial today. In breaking supersymmetry, one is confronted with important conceptual challenges: the resulting vacua are deeply affected by quantum fluctuations, and this reverberates on old conundrums related to dark energy and the cosmological constant. There are even signs that this type of investigation could shed light on the backbone of string theory, and supergravity may also have something to say about dark matter, which might be accounted for by gravitinos or other light superpartners. We are confident that supergravity will lead us farther once more.

Testing times for space–time symmetry https://cerncourier.com/a/testing-times-for-space-time-symmetry/ Fri, 11 Nov 2016 09:00:00 +0000 Numerous experiments, many of them at CERN, are testing for violations of Lorentz and CPT symmetry in the search for new physics.

Throughout history, our notion of space and time has undergone a number of dramatic transformations, thanks to figures ranging from Aristotle, Leibniz and Newton to Gauss, Poincaré and Einstein. In our present understanding of nature, space and time form a single 4D entity called space–time. This entity plays a key role for the entire field of physics: either as a passive spectator by providing the arena in which physical processes take place or, in the case of gravity as understood by Einstein’s general relativity, as an active participant.

Since the birth of special relativity in 1905 and the CPT theorem of Bell, Lüders and Pauli in the 1950s, we have come to appreciate both Lorentz and CPT symmetry as cornerstones of the underlying structure of space–time. The former states that physical laws are unchanged when transforming between two inertial frames, while the latter is the symmetry of physical laws under the simultaneous transformations of charge conjugation (C), parity inversion (P) and time reversal (T). These closely entwined symmetries guarantee that space–time provides a level playing field for all physical systems independent of their spatial orientation and velocity, or whether they are composed of matter or antimatter. Both have stood the tests of time, but in the last quarter century these cornerstones have come under renewed scrutiny as to whether they are indeed exact symmetries of nature. Were physicists to find violations, it would lead to profound revisions in our understanding of space and time and force us to correct both general relativity and the Standard Model of particle physics.

Accessing the Planck scale

Several considerations have spurred significant enthusiasm for testing Lorentz and CPT invariance in recent years. One is the observed bias of nature towards matter – an imbalance that is difficult, although perhaps possible, to explain using standard physics. Another stems from the synthesis of two of the most successful physics concepts in history: unification and symmetry breaking. Many theoretical attempts to combine quantum theory with gravity into a theory of quantum gravity allow for tiny departures from Lorentz and CPT invariance. Surprisingly, even deviations that are suppressed by 20 orders of magnitude or more are experimentally accessible with present technology. Few, if any, other experimental approaches to finding new physics can provide such direct access to the Planck scale.
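
A rough estimate (ours, for illustration, and assuming the simplest case of suppression by a single power of the Planck mass) makes the point. For the proton, the natural size of such an effect would be

m_p / M_{\rm Planck} \approx 0.94\ {\rm GeV} / (1.2 \times 10^{19}\ {\rm GeV}) \approx 8 \times 10^{-20} ,

so experiments able to resolve fractional shifts at the level of 10^{-20} or below in, say, clock-comparison or trapped-particle observables are already sensitive to Planck-suppressed physics.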

Unfortunately, current models of quantum gravity cannot accurately pinpoint experimental signatures for Lorentz and CPT violation. An essential milestone has therefore been the development of a general theoretical framework that incorporates Lorentz and CPT violation into both the Standard Model and general relativity: the Standard Model Extension (SME), as formulated by Alan Kostelecký of Indiana University in the US and coworkers beginning in the early 1990s. Due to its generality and independence of the underlying models, the SME achieves the ambitious goal of allowing the identification, analysis and interpretation of all feasible Lorentz and CPT tests (see panel below). Any putative quantum-gravity remnants associated with Lorentz breakdown enter the SME as a multitude of preferred directions criss-crossing space–time. As a result, the playing field for physical systems is no longer level: effects may depend slightly on spatial orientation, uniform velocity, or whether matter or antimatter is involved. These preferred directions are the coefficients of the SME framework; they parametrise the type and extent of Lorentz and CPT violation, offering specific experiments the opportunity to try to glimpse them.

The Standard Model Extension

At the core of attempts to detect violations in space–time symmetry is the Standard Model Extension (SME) – an effective field theory that contains not just the SM but also general relativity and all possible operators that break Lorentz symmetry. It can be expressed as a Lagrangian in which each Lorentz-violating term has a coefficient that leads to a testable prediction of the theory.
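
As a flavour of what this looks like (a simplified excerpt in the standard SME notation, not a complete definition), the minimal Lorentz-violating terms for a single Dirac fermion include

\mathcal{L}_{\rm SME} \supset -\,a_\mu\,\bar\psi\gamma^\mu\psi \;-\; b_\mu\,\bar\psi\gamma_5\gamma^\mu\psi \;+\; \tfrac{i}{2}\,c^{\mu\nu}\,\bar\psi\,\gamma_\mu \overleftrightarrow{\partial_\nu}\,\psi + \ldots ,

where the constant background tensors a_\mu, b_\mu and c^{\mu\nu} are the coefficients for Lorentz violation that experiments constrain; the a_\mu and b_\mu terms are also CPT-odd, while c^{\mu\nu} is CPT-even.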

Lorentz and CPT research is unique in the exceptionally wide range of experiments it offers. The SME makes predictions for symmetry-violating effects in systems involving neutrinos, gravity, meson oscillations, cosmic rays, atomic spectra, antimatter, Penning traps and collider physics, among others. In the case of free particles, Lorentz and CPT violation lead to a dependence of observables on the direction and magnitude of the particles’ momenta, on their spins, and on whether particles or antiparticles are studied. For bound systems such as atomic and nuclear states, the energy spectrum depends on their orientation and velocity and may differ from that of the corresponding antimatter system.

The vast spectrum of experiments and latest results in this field were the subject of the triennial CPT conference held at Indiana University in June this year (see panel below), highlights from which form the basis of this article.

The seventh triennial CPT conference

A host of experimental efforts to probe space–time symmetries were the focus of the week-long Seventh Meeting on CPT and Lorentz Symmetry (CPT’16) held at Indiana University, Bloomington, US, on 20–24 June, which are summarised in the main text of this article. With around 120 experts from five continents discussing the most recent developments in the subject, it has been the largest of all meetings in this one-of-a-kind triennial conference series. Many of the sessions included presentations involving experiments at CERN, and the discussions covered a number of key results from experiments at the Antiproton Decelerator and future improvements expected from the commissioning of ELENA. The common thread weaving through all of these talks heralds an exciting emergent era of low-energy Planck-reach fundamental physics with antimatter.

CERN matters

As host to the world’s only cold-antiproton source for precision antimatter physics (the Antiproton Decelerator, AD) and the highest-energy particle accelerator (the Large Hadron Collider, LHC), CERN is in a unique position to investigate the microscopic structure of space–time. The corresponding breadth of measurements at these extreme ends of the energy regime guarantees complementary experimental approaches to Lorentz and CPT symmetry at a single laboratory. Furthermore, the commissioning of the new ELENA facility at CERN is opening brand new tests of Lorentz and CPT symmetry in the antimatter sector (see panel below).

Cold antiprotons offer powerful tests of CPT symmetry

CPT – the combination of charge conjugation (C), parity inversion (P) and time reversal (T) – represents a discrete symmetry between matter and antimatter. As the standard CPT test framework, the Standard Model Extension (SME) possesses a feature that might perhaps seem curious at first: CPT violation always comes with a breakdown of Lorentz invariance. However, an extraordinary insight gleaned from the celebrated CPT theorem of the 1950s is that Lorentz symmetry already contains CPT invariance under “mild smoothness” assumptions: since CPT is essentially a special Lorentz transformation with a complex-valued velocity, the symmetry holds whenever the equations of physics are smooth enough to allow continuation into the complex plane. Unsurprisingly, then, the loss of CPT invariance requires Lorentz breakdown, an argument made rigorous in 2002. Lorentz violation, on the other hand, does not imply CPT breaking.

That CPT breaking comes with Lorentz violation has the profound experimental implication that CPT tests do not necessarily have to involve both matter and antimatter: hypothetical CPT violation might also be detectable via the concomitant Lorentz breaking in matter alone. But this feature comes at a cost: the corresponding Lorentz tests typically cannot disentangle CPT-even and CPT-odd signals and, worse, they may even be blind to the effect altogether. Antimatter experiments decisively brush aside these concerns, and the availability at CERN of cold antiprotons has thus opened an unparalleled avenue for CPT tests. In fact, all six fundamental-physics experiments that use CERN’s antiprotons have the potential to place independent limits on distinct regions of the SME’s coefficient space. The upcoming Extra Low ENergy Antiproton (ELENA) ring at CERN (see “CERN soups up its antiproton source”) will provide substantially upgraded access to antiprotons for these experiments.

One exciting type of CPT test that will be conducted independently by the ALPHA, ATRAP and ASACUSA experiments is to produce antihydrogen, an atom made up of an antiproton and a positron, and compare its spectrum to that of ordinary hydrogen. While the production of cold antihydrogen has already been achieved by these experiments, present efforts are directed at precision spectroscopy promising clean and competitive constraints on various CPT-breaking SME coefficients for the proton and electron.

At present, the gravitational interaction of antimatter remains virtually untested. The AEgIS and GBAR experiments will tackle this issue by dropping antihydrogen atoms in the Earth’s gravity field. These experiments differ in their detailed set-up, but both are projected to permit initial measurements of the gravitational acceleration, g, for antihydrogen at the per cent level. The results will provide limits on SME coefficients for the couplings between antimatter and gravity that are inaccessible with other experiments.

A third fascinating type of CPT test is based on the equality of the physical properties of a particle and its antiparticle, as guaranteed by CPT invariance. The ATRAP and BASE experiments have been advocating such a comparison between protons and antiprotons confined in a cryogenic Penning trap. Impressive results for the charge-to-mass ratios and g factors have already been obtained at CERN and are poised for substantial future improvements. These measurements permit clean bounds on SME coefficients of the proton with record sensitivities.

Regarding the LHC, the latest Lorentz- and CPT-violation physics comes from the LHCb collaboration, which studies particles made up of b quarks. The experiment’s first measurements of SME coefficients in the Bd and Bs systems, published in June this year, have improved existing results by up to two orders of magnitude. LHCb also has competition from other major neutral-meson experiments. These involve studies of the Bs system at the Tevatron’s DØ experiment, recent searches for  Lorentz and CPT violation with entangled kaons at KLOE and the upcoming KLOE-2 at DAΦNE in Italy, as well as results on CPT-symmetry tests in Bd mixing and decays from the BaBar experiment at SLAC. The LHC’s general-purpose ATLAS and CMS experiments, meanwhile, hold promise for heavy-quark studies. Data on single-top production at these experiments would allow the world’s first CPT test for the top quark, while the measurement of top–antitop production can sharpen by a factor of 10 the earlier measurements of CPT-even Lorentz violation at DØ.

Other possibilities for accelerator tests of Lorentz and CPT invariance include deep inelastic scattering and polarised electron–electron scattering. The first ever analysis of the former offers a way to access previously unconstrained SME coefficients in QCD employing data from, for example, the HERA collider at DESY. Polarised electron–electron scattering, on the other hand, allows constraints to be placed on currently unmeasured Lorentz violations in the Z boson, which are also parameterised by the SME and have relevance for SLAC’s E158 data and the proposed MOLLER experiment at JLab. Lorentz-symmetry breaking would also cause the muon spin precession in a storage ring to be thrown out of sync by just a tiny bit, which is an effect accessible to muon g-2 measurements at J-PARC and Fermilab.

Historically, electromagnetism is perhaps most closely associated with Lorentz tests, and this idea continues to exert a sustained influence on the field. Modern versions of the classical Michelson–Morley experiment have been realised with tabletop resonant cavities as well as with the multi-kilometre LIGO interferometer, with upcoming improvements promising unparalleled measurements of the SME’s photon sector. Another approach for testing Lorentz and CPT symmetry is to study the energy- and direction-dependent dispersion of photons as predicted by the SME. Recent observations by the space-based Fermi Large Area Telescope severely constrain this effect, placing tight limits on 25 individual non-minimal SME coefficients for the photon.

AMO techniques

Experiments in atomic, molecular and optical (AMO) physics are also providing powerful probes of Lorentz and CPT invariance and these are complementary to accelerator-based tests. AMO techniques excel at testing Lorentz-violating effects that do not grow with energy, but they are typically confined to normal-matter particles and cannot directly access the SME coefficients of the Higgs or the top quark. Recently, advances in this field have allowed researchers to carry out interferometry using systems other than light, and an intriguing idea is to use entangled wave functions to create a Michelson–Morley interferometer within a single Yb+ ion. The strongly enhanced SME effects in this system, which arise due to the ion’s particular energy-level structure, could improve existing limits by five orders of magnitude.

Other AMO systems, such as atomic clocks, have long been recognised as a backbone of Lorentz tests. The bright SME prospects arising from the latest trend toward optical clocks, which are several orders of magnitude more precise than traditional varieties based on microwave transitions, are being examined by researchers at NIST and elsewhere. Also, measurements on the more exotic muonium atom at J-PARC and at PSI can place limits on the SME’s muon coefficients, which is a topic of significant interest in light of several current puzzles involving the muon.

From neutrinos to gravity

Unknown neutrino properties, such as their mass, and tension between various neutrino measurements have stimulated a wealth of recent research including a number of SME analyses. The breakdown of Lorentz and CPT symmetry would cause the ordinary neutrino–neutrino and antineutrino–antineutrino oscillations to exhibit unusual direction, energy and flavour dependence, and would also induce unconventional neutrino–antineutrino mixing and kinematic effects – the latter leading to modified velocities and dispersion, as measured in time-of-flight experiments. Existing and planned neutrino experiments offer a broad range of opportunities to examine such effects. For example: upcoming results from the Daya Bay experiment should yield improved limits on Lorentz violation from antineutrino–antineutrino mixing; EXO has obtained the first direct experimental bound on a difficult-to-access “counter-shaded” coefficient extracted from the electron spectrum of double beta decay; T2K has announced new constraints on the a and c coefficients, tightened by a factor of two, using muon neutrinos; and IceCube promises extreme sensitivities to “non-minimal” effects with kinematical studies of astrophysical neutrinos, such as Cherenkov effects of various kinds.

The feebleness of gravity makes the corresponding Lorentz and CPT tests in this SME sector particularly challenging. This has led researchers from HUST in China and from Indiana University to use an ingenious tabletop experiment to seek Lorentz breaking in the short-range behaviour of the gravitational force. The idea is to bring gravitationally interacting test masses to within submillimetre ranges of one another and observe their mechanical resonance behaviour, which is sensitive to deviations from Lorentz symmetry in the gravitational field. Other groups are carrying out related cutting-edge measurements of SME gravity coefficients with laser ranging of the Moon and other solar-system objects, while analysis of the gravitational-wave data recently obtained by LIGO has already yielded many first constraints on SME coefficients in the gravity sector, with the promise of more to come.

After a quarter century of experimental and theoretical work, the modern approach to Lorentz and CPT tests remains as active as ever. As the theoretical understanding of Lorentz and CPT violation continues to evolve at a rapid pace, it is remarkable that experimental studies continue to follow closely behind and now stretch across most subfields of physics. The range of physical systems involved is truly stunning, and the growing number of different efforts displays the liveliness and exciting prospects for a research field that could help to unlock the deepest mysteries of the universe.

Chern–Simons (Super) Gravity – 100 Years of General Relativity (vol. 2) https://cerncourier.com/a/chern-simons-super-gravity-100-years-of-general-relativity-vol-2/ Fri, 15 Apr 2016 12:58:21 +0000 https://preview-courier.web.cern.ch/?p=103836 Written on the basis of a set of lecture notes, this book provides a concise introduction to Chern–Simons (super) gravity theories accessible to graduate students and researchers in physics and mathematics.

By Mokhtar Hassaine and Jorge Zanelli
World Scientific

Written on the basis of a set of lecture notes, this book provides a concise introduction to Chern–Simons (super) gravity theories accessible to graduate students and researchers in physics and mathematics.

Chern–Simons (CS) theories are gauge-invariant models that could include gravity in a consistent way. As a consequence, they are very interesting to study because they can open up the way to a common description of the four fundamental interactions of nature.

As is well known, three such interactions are described by the Standard Model as Yang–Mills (YM) theories, which are based on the principle of gauge invariance (requiring a correlation between particles at different locations in space–time). The particular form of these YM interactions makes them consistent with quantum mechanics.

On the other hand, gravitation – the fourth fundamental force – is described by general relativity (GR), which is also based on a gauge principle, but cannot be quantised following the same steps that work in the YM case.

Gauge principles suggest that a viable path is the introduction of a peculiar, yet generic, modification of GR, consisting of the addition of a CS term to the action.

Besides being mathematically elegant, CS theories have a set of properties that make them intriguing and promising: they are gauge-invariant, scale-invariant and background-independent; they have no dimensionful coupling constants; and all constants in the Lagrangian are fixed rational coefficients that cannot be adjusted without destroying the gauge invariance.
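
To give a flavour of what such a term looks like – the simplest, three-dimensional case, quoted here for orientation rather than taken from the book – the CS action for a gauge connection A reads

S_{\rm CS}[A] = \frac{k}{4\pi}\int_{M_3} {\rm Tr}\Bigl(A\wedge dA + \tfrac{2}{3}\,A\wedge A\wedge A\Bigr),

where, for suitable compact gauge groups, the level k is forced to be an integer and the relative 2/3 coefficient is fixed by gauge invariance – a concrete instance of the rigidity just described. CS (super)gravities generalise this construction to higher odd dimensions, with the connection valued in an algebra that contains the local Lorentz and translation (or AdS) generators.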

Inflation and String Theory https://cerncourier.com/a/inflation-and-string-theory/ Fri, 13 Nov 2015 14:26:52 +0000 https://preview-courier.web.cern.ch/?p=103967 This complete and accessible text, written by two of the leading researchers in the field, provides a modern treatment of inflationary cosmology and its connection to string theory and elementary particle theory.

By D Baumann and L McAllister
Cambridge University Press

This complete and accessible text, written by two of the leading researchers in the field, provides a modern treatment of inflationary cosmology and its connection to string theory and elementary particle theory.

The past two decades of advances in observational cosmology have brought about a revolution in our understanding of the universe. In particular, deeper studies of the cosmic microwave background have revealed strong evidence for a period of inflationary expansion in the very early universe. At the same time, new developments in string theory have led to a better understanding of inflation in a framework that unifies quantum mechanics and general relativity.

After a brief introduction about observations in favour of the inflationary hypothesis, the volume provides an overview of effective field theory, string theory, and string compactifications. Finally, several classes of models of inflation in string theory are examined in detail.

The background material in geometry and cosmological perturbation theory included in the appendices makes the book self-contained and accessible not only to experienced researchers, but also to graduate students and readers who are new to the field.

Yoichiro Nambu: breaking the symmetry https://cerncourier.com/a/yoichiro-nambu-breaking-the-symmetry/ Fri, 13 Nov 2015 09:00:00 +0000 Yoichiro Nambu passed away on 5 July 2015 in Osaka. He was awarded the Nobel Prize in Physics in 2008 “for the discovery of the mechanism of spontaneous broken symmetry in subatomic physics”. Nambu’s work in theoretical physics spanning more than half a century is prophetic, and played a key role in the development of […]

Yoichiro Nambu passed away on 5 July 2015 in Osaka. He was awarded the Nobel Prize in Physics in 2008 “for the discovery of the mechanism of spontaneous broken symmetry in subatomic physics”. Nambu’s work in theoretical physics spanning more than half a century is prophetic, and played a key role in the development of one of the great accomplishments of 20th century physics – the Standard Model of particle physics. He was also among those who laid the foundations of string theory.

The early years

When Nambu graduated from the University of Tokyo in 1943, Japan was in the midst of the Second World War – but at the same time, Japanese physics was extremely vibrant. Among other things, a group of superb Japanese physicists were developing the framework of quantum field theory. This spark came from the work of Hideki Yukawa in the 1930s, who laid the foundations of modern particle physics by his prediction that the force between nucleons inside a nucleus is caused by the exchange of a particle (today called the pion) that, unlike the photon, had a mass. Yukawa showed that this results in a force that dies out quickly as the distance between the nucleons is increased, as opposed to electromagnetic forces, caused by a massless photon, which have infinite range. Yukawa was Japan’s first Nobel laureate, in 1949. Soon afterwards, Japan became a powerhouse of particle physics and quantum field theory. In 1965, Sin-Itiro Tomonaga received the Nobel prize (shared with Richard Feynman and Julian Schwinger) for his work on the quantum field theory of electromagnetism.

In 1948, Nambu joined a select group of theoretical physicists at the newly formed department at Osaka City University. He spent three formative years there: “I had never felt and enjoyed so much the sense of freedom.” Much of his early work dealt with quantum field theory. One influential paper dealt with the derivation of the precise force laws in nuclear physics. In the process, he derived the equation that describes how particles can bind with each other – an equation that was later derived independently by Bethe and Salpeter, and is now known commonly as the Bethe–Salpeter equation.

Nambu always felt that his work in physics was guided by a philosophy – one that was uniquely his own. During his years in Osaka, he was deeply influenced by the philosophy of Sakata and Taketani. Sakata was yet another prominent theoretical physicist in Japan at that time: he later became well known for the Sakata model, which was a precursor to the quark model of nuclear constituents. Sakata was influenced by Marxist philosophy, and together with Taketani developed a “three-stage methodology” in physics. As Nambu recalled later, Taketani used to visit the young group of theorists at Osaka and “spoke against our preoccupation with theoretical ideas, emphasised to pay attention to experimental physics. I believe that this advice has come to make a big influence on my attitude towards physics”. Together with colleagues Nishijima and Miyazawa, he immersed himself in understanding the properties of the newly discovered elementary particles called mesons.

In 1952, J R Oppenheimer invited Nambu to spend a couple of years at the Institute of Advanced Study in Princeton. By his own account, this was not a particularly fruitful period: “I was not very happy.” After a summer at Caltech, he finally came to the University of Chicago at the invitation of Marvin Goldberger. There he became exposed to a remarkably stimulating intellectual atmosphere, which epitomised Fermi’s style of “physics without boundaries”. There was no “particle physics” or “physics of metals” or “nuclear physics”: everything was discussed in a unified manner. Nambu soon achieved a landmark in the history of 20th century physics: the discovery that a vacuum can break symmetries spontaneously. And he came up with the idea while working in a rather different area of physics: superconductivity.

Symmetries of the laws of nature often provide guiding principles in physics. An example is “rotational symmetry”. Imagine yourself to be in deep space, so far away from any star or galaxy that all you can see in any direction is empty space. Things look completely identical in all directions – in particular, if you are performing an experiment, the results would not depend on if you rotated your lab slowly and did the same thing. It is this symmetry that leads to the conservation of angular momentum. Of course, the rotational symmetry is only approximate, because there are stars and galaxies that break this symmetry explicitly.

There are other situations, however, where a symmetry is broken spontaneously. One example is a magnet. The molecules inside a magnet are themselves little magnetic dipoles. If we switch on a small magnetic field, then the rotational symmetry is broken explicitly and all of the dipoles align themselves in the direction of the magnetic field. That is simple. The interesting phenomenon is that the dipoles continue to be aligned in the same direction, even after the external magnetic field is switched off. Here the rotational symmetry is broken spontaneously.

Nevertheless, the fact that the underlying laws respect rotational symmetry has a consequence: if we gently disturb one of the dipoles from its perfectly aligned position, it gently nudges its neighbours and they nudge their neighbours, and the result is a wave that propagates through the magnet. Such a wave has very low energy and is called a spin wave. This is a special case of a general phenomenon where a spontaneously broken symmetry has an associated low-energy mode, or in quantum theory an associated massless particle.

Breaking symmetry

Nambu took the concept of spontaneous symmetry breaking to a new level. He came up with this idea while trying to understand the Bardeen–Cooper–Schrieffer (BCS) theory of superconductivity. Superconductors are materials that conduct electric current without any resistance. Superconductors also repel external magnetic fields – an effect called the Meissner effect. Inside a superconductor, electromagnetic fields are short-ranged rather than long-ranged: as if the photon has acquired a mass, like Yukawa’s mesons. However, a massive photon appears to be inconsistent with gauge invariance – a basic property of electromagnetism.

It was Nambu in 1959, and independently Philip Anderson a little earlier in 1958, who understood what was going on. They realised that (in the absence of electromagnetic interactions) the superconducting state broke the symmetry spontaneously. This symmetry is unlike the rotation symmetry that is spontaneously broken in magnets or crystals. It is a symmetry associated with the fact that electric charge is conserved. Also, if we imagine switching off the electromagnetic interaction, this symmetry breaking would also result in very low-energy waves, like spin waves in a magnet – a massless particle. Now comes a great discovery: if we switch on the electromagnetic interaction, which is there, we can undo the apparent symmetry breaking by a gauge transformation, which is local in space (and time), without any energy cost. Hence, there is no massless particle, and in fact the photon becomes massive together with a massive neutral particle, which explains the Meissner effect. The neutral scalar excitation in superconductors was discovered 20 years after it was predicted. This effortless excursion across traditional boundaries of physics characterised Nambu’s work throughout his career.

Soon after finishing his work on superconductivity, Nambu returned to particle physics. The first thing he noticed was that the Bogoliubov equations describing excitations near the Fermi surface in a superconductor are very similar to the Dirac equation that describes nucleons. The energy gap in a superconductor translates to the mass of nucleons. The charge symmetry that is spontaneously broken in a superconductor (electromagnetism switched off) also has an analogue – chiral symmetry. If the energy gap in a superconductor is a result of spontaneous symmetry breaking of charge symmetry, could it be that the mass of a nucleon is the result of spontaneous symmetry breaking of chiral symmetry? Unlike the charge symmetry in a superconductor, chiral symmetry is a global symmetry that can be truly spontaneously broken, leading to a massless particle – which Nambu identified with the pion. This is exactly what Nambu proposed in a short paper in 1960, soon followed by two papers with Jona-Lasinio.
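
The model Nambu analysed with Jona-Lasinio can be written compactly – here in a one-flavour schematic form, a sketch rather than the exact Lagrangian of their papers – as

\mathcal{L}_{\rm NJL} = \bar\psi\, i\gamma^\mu\partial_\mu\psi + G\bigl[(\bar\psi\psi)^2 + (\bar\psi\, i\gamma_5\psi)^2\bigr],

which contains no fermion mass term and is invariant under chiral rotations \psi \to e^{i\alpha\gamma_5}\psi. If the attractive four-fermion coupling G is strong enough, the vacuum develops a condensate of fermion–antifermion pairs – the analogue of the BCS gap – giving the nucleon its mass while leaving behind a massless pseudoscalar bound state, identified with the pion.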

This was a revolutionary step. In all previous examples, spontaneous symmetry breaking happened in situations where there were constituents (the molecular dipoles in a magnet, for example) and the underlying laws did not permit them to arrange themselves maintaining the symmetry. Nambu, however, proposed that there are situations where spontaneous symmetry breaking can happen in the vacuum of the world.

In physics, vacuum is the name given to “nothing”. How can a symmetry be broken – even spontaneously – when there is nothing around? The radical nature of this idea has been best described by Phil Anderson: “To me – and perhaps more to his fellow particle theorists – this seemed like a fantastic stretch of imagination. The vacuum, to us, was and always had been a vacuum – it had, since Einstein got rid of the aether, been the epitome of emptiness…I, at least, had my mind encumbered with the idea that if there was a condensate, there was something there…This is why it took a Nambu to break the first symmetry.”

Nambu was proposing that the masses of elementary particles have an origin – something we can calculate. The revolutionary nature of this idea cannot be overstated. Soon after the papers of Nambu and Jona-Lasinio, Goldstone came up with a simpler renormalisable model of superconductivity, which also illustrates the phenomenon of spontaneous symmetry breaking by construction and provided a general proof that such symmetry breaking always leads to a massless particle.
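
Goldstone’s model is simple enough to state in a line (a standard illustration, in our shorthand rather than his original notation): a complex scalar field with potential

V(\phi) = \lambda\,\bigl(|\phi|^2 - v^2\bigr)^2

is invariant under phase rotations \phi \to e^{i\alpha}\phi, yet its vacua lie anywhere on the circle |\phi| = v. Expanding about one of them as \phi = (v + h)\,e^{i\theta}, the radial field h is massive while the phase \theta costs no potential energy at all: it is the massless Goldstone mode, the field-theory counterpart of the spin wave in a magnet.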

Meanwhile, in 1963 Anderson realised that the mechanism of generating masses for gauge particles that was discovered in superconductivity could be useful in elementary particle physics in the context of the nature of “vacuum of the world”. The mechanism was subsequently worked out in full generality by three independent groups, Higgs, Englert and Brout, and Guralnik, Hagen and Kibble, and is called the “Higgs mechanism”. It became the key to formulating the Standard Model of particle physics by Weinberg and Salam, building on the earlier work of Glashow, and resulting in our current understanding of electromagnetic and weak forces. The analogue of the special massive state in a superconductor is the Higgs particle, discovered at CERN in 2012.

We now know, for certain, that chiral symmetry is spontaneously broken in strong interactions. However, the final realisation of this idea had to wait until another work by Nambu.

The idea that all hadrons (particles that experience strong forces) are made of quarks was proposed by Gell-Mann, and independently Zweig, in 1964. However, the idea soon ran into serious trouble.

Now, the quarks that make up nucleons have spin ½. According to the spin-statistics theorem, they should be fermions obeying the exclusion principle. However, it appeared that if quarks are indeed the constituents of all hadrons, they cannot at the same time be fermions. To resolve this contradiction, Nambu proposed that quarks possess an attribute that he called “charm” and is now called colour. In his first proposal, quarks have two such colours. Subsequently, in a paper with M Y Han, he proposed a model with three colours. Two quarks may appear identical (and therefore cannot be on top of each other) if their colour is ignored. However, once it is recognised that their colours are different, they cease to be identical, and the usual “exclusion” of fermions does not apply. A little earlier, O Greenberg came up with another resolution: he postulated that quarks are not really fermions but something called “para-fermions”, which have unconventional properties that are just right to solve the problem.

However, it was Nambu’s proposal that turned out to be more fruitful. This is because he made another remarkable one: colour is like another kind of electric charge. A quark not only produced an ordinary electric field, but a new kind of generalised electric field. This new kind of electric field causes a new kind of force between quarks, and the energy is minimum when the quarks form a colour singlet. This force, Nambu claimed, is the basic strong force that holds the quarks together inside a nucleon. This proposal turned out to be essentially correct, and is now known as quantum chromodynamics (QCD). In the model of Han and Nambu, quarks carry integer charges, which we now know is incorrect. In 1972, Fritzsch and Gell-Mann wrote down the model with correct charge assignments and proposed that only colour singlets occur in the spectrum, which would ensure that fractionally charged quarks remain unobserved. However, it was only after the discovery by David Gross, Frank Wilczek, and David Politzer in 1973 of “asymptotic freedom” for the generalised electric field that QCD became a candidate theory of the strong interactions. It explained the observed scaling properties of the strong interactions at high energies (which probe short distances) and indicated that the force between quarks had a tendency to grow as they were pulled apart.
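
The content of asymptotic freedom fits in one formula – the textbook one-loop result, quoted here for illustration rather than from the obituary itself:

\mu\,\frac{dg}{d\mu} = -\,\frac{g^3}{16\pi^2}\Bigl(11 - \tfrac{2}{3}\,n_f\Bigr) + \ldots ,

so as long as the number of quark flavours n_f is below 17, the QCD coupling g weakens at high energies (short distances) and grows at low energies, consistent with the observed scaling behaviour and with the force between quarks strengthening as they are pulled apart.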

Simple dynamical principle

String theory, which is recognised today as the most promising framework of fundamental physics including gravity, had its origins in making sense of strongly interacting elementary particles in the days before the discovery of asymptotic freedom. To make a long story short, Nambu, Nielsen and Susskind proposed that many mathematical formulae of the day, which originated from Veneziano’s prescient formula, could be explained by the hypothesis that the underlying physical objects were strings (one-dimensional objects) rather than point particles. This was a radical departure from the “Newtonian” viewpoint that elementary laws of nature are formulated in terms of “particles” or point-like constituents.

Nambu (and independently Goto) also provided a simple dynamical principle with a large local symmetry for consistent string propagation. His famous paper on the string model entitled “Duality and hadrodynamics” was submitted to the Copenhagen High Energy Physics Symposium in 1970. In a letter dated 4 September 1986, to one of us (SRW), Nambu wrote: “In August 1970, there was a symposium to be held in Copenhagen just before a High Energy Physics Conference in Kiev, and I was planning to attend both. But before leaving for Europe, I set out to California with my family so that they could stay with our friends during my absence. Unfortunately our car broke down as we were crossing the Great Salt Lake Desert, and we were stranded in a tiny settlement called Wendover for the three days. Having missed the flight and the meeting schedules, I cancelled the trip in disgust and had a vacation in California instead. The manuscript, however had been sent out to Copenhagen, and survived.”
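
The dynamical principle in question is disarmingly brief (written here in modern notation – a sketch, not a transcription of the 1970 manuscript): a string sweeping out a worldsheet X^\mu(\tau,\sigma) extremises the area of that surface,

S_{\rm NG} = -\,T \int d\tau\, d\sigma\, \sqrt{-\det\bigl(\partial_a X^\mu\,\partial_b X_\mu\bigr)},

the direct generalisation of a relativistic particle extremising the length of its worldline. The large local symmetry is invariance under reparametrisations of the worldsheet coordinates (\tau,\sigma), and T is the string tension.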

It is quite common for scientists to become excessively attached to their own creations. In contrast, Nambu was remarkably open-minded. To him, his work was like placing a few pieces into a giant jigsaw puzzle: he never thought that he had discovered the “ultimate truth”. This deep sense of modesty was also a part of his personality. To the entire community of physicists, he was this shy, unassuming man, often difficult to understand, coming up with one original idea after another. There was a sense of play in the way that he did science: maybe that is why his ideas were sometimes incomprehensible when they first appeared.

Nambu’s legacy, “physics without boundaries”, must have had a subconscious influence on some of us in India involved in setting up the International Centre for Theoretical Sciences (ICTS), a centre of TIFR in Bangalore, where “science is without boundaries”.

We end with a quote from Nambu’s speech at the Nobel presentation ceremony at the University of Chicago on 10 December 2008, which clearly shows his view of nature: “Nowadays, the principle of spontaneous symmetry breaking is the key concept in understanding why the world is so complex as it is, in spite of the many symmetry properties in the basic laws that are supposed to govern it. The basic laws are very simple, yet this world is not boring; that is, I think, an ideal combination.”

• An earlier version of the article appeared in Frontline magazine, see www.frontline.in/other/obituary/a-giant-of-physics/article7593580.ece.

First measurement of ionization potential casts light on ‘last’ actinide https://cerncourier.com/a/first-measurement-of-ionization-potential-casts-light-on-last-actinide/ Mon, 27 Apr 2015 08:00:00 +0000 The quest for new heavy chemical elements is the subject of intense research, as the synthesis and identification of these new elements fill up empty boxes in the familiar Periodic Table.

The quest for new heavy chemical elements is the subject of intense research, as the synthesis and identification of these new elements fill up empty boxes in the familiar Periodic Table. The measurement of their properties for a proper classification in the table has proved challenging, because the isotopes of these elements are short-lived and new methods must be devised to cope with synthesis rates that yield only one atom at a time. Now, an international team led by researchers from the Japanese Atomic Energy Agency (JAEA) in Tokai has developed an elegant experimental strategy to measure the first ionization potential of the heaviest actinide, lawrencium (atomic number, Z = 103).

Using a new surface ion source (figure 1) and a mass-separated beam, the team’s measurement of 4.96±0.08 eV – published recently in Nature (Sato et al. 2015) – agrees perfectly with state-of-the-art quantum chemical calculations that include relativistic effects, which play an increasingly important role in this region of the Periodic Table. The result confirms the extremely low binding energy of the outermost valence electron in this element, therefore confirming its position as the last element in the actinide series. This is in line with the concept of heavier homologues of the lanthanide rare earths, which was introduced by Glenn Seaborg in the 1940s.

In the investigations at JAEA the researchers have exploited the isotope-separation online (ISOL) technique, which has been used for nuclear-physics studies at CERN’s ISOLDE facility since the 1960s. The technique has now been adapted to perform ionization studies with the one-atom-at-a-time rates that are accessible for studies of lawrencium. A new surface-ion source was developed and calibrated with a series of lanthanide isotopes of known ionization potentials. The ionization probability of the mass-separated lawrencium could then be exploited to determine its ionization potential using the calibration master curve.
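
The physics behind this calibration can be sketched with the Saha–Langmuir relation (the standard idealised form; the published analysis involves additional efficiency factors):

\frac{n^+}{n^0} = \frac{g^+}{g^0}\,\exp\!\Bigl(\frac{\phi - {\rm IP}}{k_B T}\Bigr),

where \phi is the work function of the hot surface, T its temperature and g^+/g^0 a ratio of statistical weights. Because the surface-ionization probability depends exponentially on the ionization potential IP, comparing the yield for mass-separated Lr with that of lanthanide isotopes of known IP pins down the lawrencium value.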

The special position of lawrencium in the Periodic Table has placed the element at the focus of questions on the influence of relativistic effects, and the determination of properties to confirm its position as the last actinide. The two aspects most frequently addressed have concerned its ground-state electronic configuration and the value of its first ionization potential.

Relativistic effects strongly affect the electron configurations of the heaviest elements. In the actinides, the relativistic expansion of the 5f orbital contributes to the actinide contraction – the regular decrease in the ionic radii with increasing Z. Together with direct relativistic effects on the 7s and 7p1/2 orbitals, this influences the binding energies of valence electrons and the energetic ordering of the electron configurations. However, it is difficult to measure the energy levels of the heaviest actinides with Z > 100 by a spectroscopic method because these elements are not available in a weighable amount.

The ground-state electronic configuration of lawrencium (Lr) is expected to be [Rn]5f¹⁴7s²7p1/2. This is different from that of its homologue in the lanthanide series, lutetium, which is [Xe]4f¹⁴6s²5d. The reason for this change is the stabilization by strong relativistic effects of the 7p1/2 orbital of Lr below the 6d orbital. Lr, therefore, is anticipated to be the first element with a 7p1/2 orbital in its electronic ground state. As the measurement of the ionization potential directly reflects the binding energy of a valence electron under the influence of relativistic effects, its experimental determination provides direct information on the energetics of the electronic orbitals of Lr, including relativistic effects, and a test for modern theories. However, this measurement cannot answer questions about the electronic configuration itself. Nevertheless, as figure 2 shows, the experimental result is in excellent agreement with a new theoretical calculation that includes these effects and favours the [Rn]5f¹⁴7s²7p1/2 ground-state configuration.

The post First measurement of ionization potential casts light on ‘last’ actinide appeared first on CERN Courier.

]]>
https://cerncourier.com/a/first-measurement-of-ionization-potential-casts-light-on-last-actinide/feed/ 0 News The quest for new heavy chemical elements is the subject of intense research, as the synthesis and identification of these new elements fill up empty boxes in the familiar Periodic Table. https://cerncourier.com/wp-content/uploads/2015/04/CCnew14_04_15.jpg
Quantum Field Theory for the Gifted Amateur https://cerncourier.com/a/quantum-field-theory-for-the-gifted-amateur/ Thu, 09 Apr 2015 12:17:39 +0000 https://preview-courier.web.cern.ch/?p=104092 Johann Rafelski reviews in 2015 Quantum Field Theory for the Gifted Amateur.

The post Quantum Field Theory for the Gifted Amateur appeared first on CERN Courier.

]]>
By Tom Lancaster and Stephen J Blundell
Oxford University Press
Hardback: £65 $110
Paperback: £29.99 $49.95
Also available as an e-book, and at the CERN bookshop

Many readers of CERN Courier will already have several introductions to quantum field theory (QFT) on their shelves. Indeed, it might seem that another book on this topic has missed its century – but that is not quite true. Tom Lancaster and Stephen Blundell offer a response to a frequently posed question: what should I read and study to learn QFT? Before this text, it was difficult to name a contemporary book suited to self-study – study in which there is regular interaction with an adviser, but no classroom. Now, in this book, I find a treasury of contemporary material presented concisely and lucidly, in a format that I can recommend for independent study.

Quantum Field Theory for the Gifted Amateur is in my opinion a good investment, although of course one cannot squeeze all of QFT into 500 pages. Specifically, this is not a book about strong interactions; QCD is not in the book, not a word. Reading page 308 at the end of subsection 34.4 one might expect that some aspects of quarks and asymptotic freedom would appear late in chapter 46, but they do not. I found the word “quark” once – on page 308 – but as far as I can tell, “gluon” did not make its way at all into the part on “Some applications from the world of particle physics.”

If you are a curious amateur and hear about, for example, “Majorana” (p444ff) or perhaps “vacuum instability” (p457ff, done nicely) or “chiral symmetry” (p322ff), you can start self-study of these topics by reading these pages. However, it is a little odd that, although important current content is set up, it is not always followed by a full explanation. In these examples, oscillation into a different flavour is given just one phrase, on p449.

Some interesting topics – such as “coherent states” – are described in depth, but others central to QFT merit more words. For example, figure 41.6 is presented in the margin to explain how QED vacuum polarization works, illustrating equations 41.18-20. The figure gives the impression that the QED vacuum-polarization effect decreases the Coulomb–Maxwell potential strength, while the equations and subsequent discussion correctly show that the observed vacuum-polarization effect in atoms adds attraction to electron binding. The reader should be given an explanation of the subtle point that reconciles the intuitive impression from the figure with the equations.

Despite these issues, I believe that this volume offers an attractive, new “rock and roll” approach, filling a large void in the spectrum of QFT books, so my strong positive recommendation stands. The question that the reader of these lines will now have in mind is how to make up for the material that is absent.

The post Quantum Field Theory for the Gifted Amateur appeared first on CERN Courier.

]]>
Review Johann Rafelski reviews in 2015 Quantum Field Theory for the Gifted Amateur. https://cerncourier.com/wp-content/uploads/2015/04/CCboo1_03_15.jpg
ICTP: theorists in the developing world https://cerncourier.com/a/ictp-theorists-in-the-developing-world/ https://cerncourier.com/a/ictp-theorists-in-the-developing-world/#respond Thu, 27 Nov 2014 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/ictp-theorists-in-the-developing-world/ As ICTP reaches its first half-century, the current director talks about the contribution that theorists make to society.

The post ICTP: theorists in the developing world appeared first on CERN Courier.

]]>
Fernando Quevedo

Fernando Quevedo, director of ICTP since 2009, came to CERN in September to take part in the colloquium “From physics to daily life”, organized for the launch of two books of the same name, to which he is one of the contributors. His participation in such an initiative is not just a fortunate coincidence, but testimony to his willingness to explain the prominent role that theoretical and fundamental physics play in human development. “Theory is the driving force behind the creation of a culture of science, and this is of paramount importance to developing societies,” he explains. “Abdus Salam founded the ICTP because he believed in this strong potential, which comes at a very low cost to the countries that cannot afford expensive experimental infrastructures.”

Unfortunately, theorists are not usually credited properly for their contributions to the development of society. “The reason is that a lot of time separates the theoretical advancement from the practical application,” says Quevedo. “People and policy makers at some point stop seeing the link, and do not see the primary origin of it anymore.” However, although these links are often lost in the complicated ripples of history, when people are asked to recall the names of famous scientists, those they name are most likely to be theorists. Examples include Albert Einstein, Richard Feynman, James Clerk Maxwell and, of course, Stephen Hawking. More importantly, theories such as quantum mechanics or relativity have changed not just the way that scientists understand the universe but also, years later, everyday life, with applications that range from lasers and global-positioning systems to quantum computation. For Quevedo, “The example I like best is Dirac’s story. He was a purist. He wanted to see the beauty in the mathematical equations. He predicted the existence of antimatter because it came out of his equations. Today, we use positrons – the first antimatter particle predicted by Dirac – in PET scanners, but people never go back to remember his contribution.”

Theorists often have an impact that is difficult to predict, even by their fellow colleagues. “When I was a student in Texas,” recalls Quevedo, “we were studying supersymmetry and string theory for high-energy physics, and we saw that some colleagues were working on even more theoretical subjects. At that time, we thought that they were not on the right track because they were trying to develop a new interpretation of quantum mechanics. Two decades later, some of those people had become the leaders of quantum-information theory and had given birth to quantum computing. Today, this field is booming!” Perhaps surprisingly, there is also an extremely practical “application” of string theory: the arXiv project. This online repository of electronic preprints of scientific papers was invented by string theorist Paul Ginsparg. Perhaps this will be the only practical application of string theory.

While Quevedo considers it important to credit the role of the theorists in the development of society and in creating the culture of science, at the same time, he recognizes an equivalent need for the theorists to open their research horizon and accept the challenge of the present time to tackle more applied topics. “Theorists are very versatile scientists,” he says. “They are trained to be problem solvers, and their skills can be applied to a variety of fields, not just physics.” This year, ICTP is launching a new Master’s course in high-performance computing, which will use a new cluster of computers. In line with Quevedo’s thinking, during the first year, the students will be trained in general matters related to computing techniques. Then, during the second year, they will have the opportunity to specialize not only in physics but also in other subjects, including climate change, astrophysics, renewable energy and mathematical modelling.

If you are from a poor country, why should you be limited to do agriculture, health, etc?

None of these arguments, however, should be needed to justify support for theoretical physics. Rather, wondering about the universe and its functioning should be a recognized right for anyone. “I come from Guatemala and have the same rights as Americans and Europeans to address the big questions,” confirms Quevedo. “If you are from a poor country, why should you be limited to do agriculture, health, etc? As human beings, we have the right to dream about becoming scientists and understanding the world around us. We have the right to be curious. After all, politicians decide where to put the money, but the person who is spending his/her life on scientific projects is the scientist.”

ICTP has the specific mandate to focus on supporting scientists from developing countries. Across its long history, the institute has proudly welcomed visitors from 188 countries – that is, almost the entire planet. While CERN’s activities are concentrated mainly in developed countries, the activity map of ICTP spreads across all continents more uniformly, including Africa and the whole of Latin America. “Some countries do not have the right level of development for science to get involved in CERN yet. ICTP can play the role of being an intermediate point to attract the participation of scientists from the least developed countries to then get involved with CERN’s projects,” Quevedo comments.

Quevedo’s relationship with CERN goes beyond his role as ICTP’s director. CERN was his first employer when he was a young postdoc, coming from the University of Texas. He still comes to CERN every year, and thinks of it not only as a model but, more importantly, as a “home away from home” for any scientist. Like two friends, CERN and ICTP have a variety of projects that they are developing together. “CERN’s director-general, Rolf Heuer, and myself recently signed a new memorandum of understanding,” he explains. “ICTP scientists collaborate directly in the ATLAS computing working groups. With CERN we are also involved in the EPLANET project (CERN Courier June 2014 p58), and in the organization of the African School of Physics (CERN Courier November 2014 p37). More recently, we are developing new collaborations in teacher training and the field of medical physics.”

Does Quevedo have a dream about the future of CERN? “Yes, I would like to see more Africans, Asians and Latin Americans here,” he says. “Imagine a more coloured cafeteria, with people really coming from all corners of the planet. This could be the CERN of the future.”

ICTP’s 50th anniversary

In June 1960, the Department of Physics at the University of Trieste organized a seminar on elementary particle physics in the Castelletto in Miramare Park. The notion of creating an institute of theoretical physics open to scientists from around the world was discussed at that meeting. That proposal became a reality in Trieste in 1964. Pakistani-born physicist Abdus Salam, who spearheaded the drive for the creation of ICTP by working through the International Atomic Energy Agency, became the centre’s director, and Paolo Budinich, who worked tirelessly to bring the centre to Trieste, became ICTP’s deputy director.

From 6 to 9 October this year, ICTP celebrated its 50 years of success in international scientific co-operation, and the promotion of scientific excellence in the developing world. More than 250 distinguished scientists, ministers and others attended the anniversary celebration. In parallel, the programme included exhibitions, lectures and special initiatives for schools and the general public.

• For the whole programme of events with photos and videos, visit www.ictp.it/ictp-50th-anniversary.aspx.

 

The post ICTP: theorists in the developing world appeared first on CERN Courier.

]]>
https://cerncourier.com/a/ictp-theorists-in-the-developing-world/feed/ 0 Feature As ICTP reaches its first half-century, the current director talks about the contribution that theorists make to society. https://cerncourier.com/wp-content/uploads/2014/11/CCque2_10_14.jpg
A Brief History of String Theory: From Dual Models to M-Theory https://cerncourier.com/a/a-brief-history-of-string-theory-from-dual-models-to-m-theory/ Tue, 26 Aug 2014 11:59:36 +0000 https://preview-courier.web.cern.ch/?p=104227 Wolfgang Lerche reviews in 2014 A Brief History of String Theory: From Dual Models to M-Theory.

The post A Brief History of String Theory: From Dual Models to M-Theory appeared first on CERN Courier.

]]>
By Dean Rickles
Springer
Hardback: £35.99 €32.12 $49.99
E-book: £27.99 €42.79 $39.99
Also available at the CERN bookshop

String theory provides a theoretical framework for unifying particle physics and gravity that is also consistent at the quantum level. Apart from particle physics, it also sheds light on a vast range of problems in physics and mathematics. For example, it helps in understanding certain properties of gauge theories, black holes, the early universe and even heavy-ion physics.

This new book fills a gap by reviewing the 40-year-plus history of the subject, which it divides into four parts, with the main focus on the earlier decades. The reader learns in detail about the work of researchers in the early days, when so-called dual models were investigated with the aim of describing hadron physics. It took ingenious insights to realize that the underlying physical interpretation is in terms of small, oscillating strings. Some of the groundbreaking work took place at CERN – for example, the discovery of the Veneziano amplitude.

The reader obtains a good impression of how it took many years of collective effort and struggle to develop the theory and understand it better, often incrementally, although sometimes the direction of research changed drastically in a serendipitous manner. For example, at some point there was an unexpected shift of interpretation, namely in terms of gravity rather than hadron physics. Supersymmetry was discovered along the way as well, demonstrating that string theory has been the source and inspiration of many ideas in particle physics, gravity and related fields.

The main strength of the book is the extensively and carefully researched history of string theory, rather than profound explanations of the physics (for which enough books are available). It is full of anecdotes, quotations of physicists at the time, and historical facts, to an extent that makes it unique. Despite the author’s avoidance of technicalities, the book seems to be more suitable for people educated in particle physics, and less suitable for philosophers, historians and other non-experts.

One caveat, however: the history covered in the book more or less stops at around the mid-1990s, and as the author emphasizes, the subject becomes much harder to describe after that, without going into the details more deeply. While some of the new and important developments are mentioned briefly in the last chapter – for example, the gauge/gravity correspondence – they do not get the attention that they deserve in relation to older parts of the history. In other words, while the history has been quite accurately presented until the mid-1990s, the significance of some of its earlier parts is rather overrated in comparison with more recent developments.

In summary, this is a worthwhile and enjoyable book, full of interesting details about the development of one of the main research areas of theoretical physics. It appears to be most useful to scientists educated in related fields, and I would even say that it should be a mandatory read for young colleagues entering research in string theory.

The post A Brief History of String Theory: From Dual Models to M-Theory appeared first on CERN Courier.

]]>
Review Wolfgang Lerche reviews in 2014 A Brief History of String Theory: From Dual Models to M-Theory. https://cerncourier.com/wp-content/uploads/2014/08/CCboo1_07_14.jpg
ATLAS searches for supersymmetry via electroweak production https://cerncourier.com/a/atlas-searches-for-supersymmetry-via-electroweak-production/ https://cerncourier.com/a/atlas-searches-for-supersymmetry-via-electroweak-production/#respond Thu, 22 May 2014 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/atlas-searches-for-supersymmetry-via-electroweak-production/ Supersymmetry is one of the most popular theories beyond the Standard model.

The post ATLAS searches for supersymmetry via electroweak production appeared first on CERN Courier.

]]>
The Standard Model is currently the best theory there is of the subatomic world, but it fails to answer several fundamental questions, for example: why are the strengths of the fundamental interactions so different? What makes the Higgs boson light? What is dark matter made of? Such questions have led to the development of theories beyond the Standard Model, of which the most popular is supersymmetry (SUSY). In its most minimal form, SUSY predicts that each Standard Model particle has a partner whose spin differs by ½, and that the Higgs sector is extended to five Higgs bosons. SUSY’s symmetry between bosons and fermions stabilizes the masses of scalar particles – such as the Higgs boson and the new scalar partners of the Standard Model fermions – against large corrections at high energy. If, as suggested by some theorists, the new particles carry a conserved SUSY quantum number (denoted R-parity), the lightest SUSY particle (LSP) cannot decay and primordial LSPs might still be around, forming dark matter.

Two charginos, χ̃±₁,₂, and four neutralinos, χ̃⁰₁,₂,₃,₄ – collectively referred to as electroweakinos – are the SUSY partners of the five Higgs and the electroweak gauge bosons. Based on arguments that try to accommodate the light mass of the Higgs boson in a “natural”, non-fine-tuned manner, the lightest electroweakinos are expected to have masses of the order of a few hundred giga-electron-volts. The lightest chargino, χ̃±₁, and the next-to-lightest neutralino, χ̃⁰₂, can decay into the LSP, χ̃⁰₁, plus multilepton final states via superpartners of neutrinos (sneutrinos, ν̃) or charged leptons (sleptons, ℓ̃), or via Standard Model bosons (W, Z or Higgs). If SUSY exists in nature at the tera-electron-volt scale, electroweakinos could be produced in the LHC collisions.

The ATLAS collaboration’s searches for charginos, neutralinos and sleptons use events with multiple leptons and missing transverse momentum from the undetected LSP. The two-lepton (e, μ) search has dedicated selections that target the production of ℓ̃ℓ̃, χ̃±₁χ̃∓₁ and χ̃±₁χ̃⁰₂ through their decays via sleptons or W and Z bosons. Meanwhile, the three-lepton (e, μ, τ) analysis searches for χ̃±₁χ̃⁰₂ decaying either via sleptons, staus (the SUSY partner of the τ), W and Z bosons, or W and Higgs bosons. Charginos and neutralinos decaying via Standard Model bosons are more challenging to search for than the decays via sleptons, owing to the smaller branching ratio into leptons. The main backgrounds in the two- (three-) lepton search are WZ and Z+jets (tt̄) production, and these are modelled using Monte Carlo simulation and data-driven methods, respectively.
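
As a rough illustration of what such a selection involves – a toy sketch with invented cut values, not the ATLAS event selection – an event with two opposite-sign light leptons, a Z-mass veto and large missing transverse momentum could be flagged as follows.

```python
# A toy version of the kind of two-lepton selection described above, with
# invented cut values (not the ATLAS selections): two opposite-sign light
# leptons, a Z-mass veto to reduce Z+jets, and large missing transverse momentum.

def passes_two_lepton_selection(event):
    leptons = [l for l in event["leptons"]
               if l["pt"] > 20.0 and l["flavour"] in ("e", "mu")]
    if len(leptons) != 2:
        return False
    if leptons[0]["charge"] * leptons[1]["charge"] >= 0:   # opposite-sign pair
        return False
    if abs(event["mll"] - 91.2) < 10.0:                    # Z-mass veto (GeV)
        return False
    return event["met"] > 100.0                            # large missing momentum (GeV)

event = {"leptons": [{"pt": 45.0, "flavour": "e",  "charge": +1},
                     {"pt": 30.0, "flavour": "mu", "charge": -1}],
         "mll": 120.0, "met": 150.0}
print(passes_two_lepton_selection(event))   # True for this example event
```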

ATLAS has found no significant excess beyond the Standard Model expectation in either the two- or three-lepton SUSY searches. This null result can be used to set exclusion limits on SUSY models, narrowing down where SUSY might exist in nature. For example, the two-lepton analysis sets the first direct limits in a simplified SUSY model of χ̃±₁χ̃∓₁ production, where the chargino decays 100% of the time to a W boson. The selections based on the presence of hadronically decaying τ leptons in the three-lepton analysis set exclusion limits for χ̃±₁χ̃⁰₂ decaying via W and Higgs bosons.

In some cases, the results of two or more analyses can be combined to strengthen the exclusion limits in a particular SUSY model. This is done for the two- and three-lepton searches in a simplified SUSY model of χ̃±₁χ̃⁰₂ production, where the χ̃±₁ and χ̃⁰₂ are assumed to decay exclusively via W and Z bosons (figure 1). On its own, the two-lepton analysis excludes χ̃±₁ and χ̃⁰₂ masses from 170–370 GeV, while the three-lepton analysis excludes masses from 100–350 GeV. By combining the two searches, the exclusion limit is pushed out much further, to χ̃±₁ and χ̃⁰₂ masses of 415 GeV for a massless χ̃⁰₁ (figure 2).

So far, no evidence for SUSY has been observed with the first dataset collected by ATLAS. However, in 2015 the LHC will collide protons at higher energies and rates than ever before. This will be an exciting time, as exploration of the uncharted territories of higher-mass SUSY particles and rarer signatures begins.

The post ATLAS searches for supersymmetry via electroweak production appeared first on CERN Courier.

]]>
https://cerncourier.com/a/atlas-searches-for-supersymmetry-via-electroweak-production/feed/ 0 News Supersymmetry is one of the most popular theories beyond the Standard model. https://cerncourier.com/wp-content/uploads/2014/05/CCnew21_05_14.jpg
A Course in Field Theory https://cerncourier.com/a/a-course-in-field-theory/ Wed, 30 Apr 2014 08:32:44 +0000 https://preview-courier.web.cern.ch/?p=104320 Massimo Giovannini reviews in 2014 A Course in Field Theory.

The post A Course in Field Theory appeared first on CERN Courier.

]]>
By Pierre Van Baal
CRC Press
Also available as an e-book

Quantum field theory is a mature discipline. One of the key questions today is how to teach and organize this large body of information, which spans several decades and encompasses diverse physical applications that range from condensed-matter to nuclear and high-energy physics. Since the turn of the millennium, interested readers have witnessed progressive growth in publications on the subject. More often than not, the authors choose to edit their own notes extensively, with the purpose of presenting a whole series of lectures as a treatise.

Indeed, it is common to see books on quantum field theory of around 500 pages. Most of these publications give slightly different perspectives on the same subjects, but their treatments are often synoptic because they all refer to some of the classic presentations on field theory of the 20th century. The proliferation of books is at odds with the current practice where students are obliged to summarize a large number of different subjects through shorter texts, or even by systematic searches through various databases.

In this respect, A Course in Field Theory is a pleasant novelty that manages the impossible: a full course in field theory from a derivation of the Dirac equation to the standard electroweak theory in less than 200 pages. Moreover, the final chapter consists of a careful selection of assorted problems, which are original and either anticipate or detail some of the topics discussed in the bulk of the chapters.

Instead of building a treatise out of a collection of lecture notes, the author took the complementary approach and constructed a course out of a number of well-known and classic treatises. The result is fresh and useful. The essential parts of the 22 short chapters – each covering approximately one or two blackboard lectures – are cleverly set out: the more thorough calculations are simply quoted by spelling out, in great detail, the chapters and sections of the various classic books on field theory, where students can appreciate the real source of the various treatments that have propagated through the current scientific literature. Despite the book’s conciseness the mathematical approach is rigorous, and readers are never spoon-fed but encouraged to focus on the few essential themes of each lecture. The purpose is to induce specific reflections on many important applications that are often mentioned but not pedantically scrutinized. The ability to prioritize the various topics is wisely married with constant stimulus for the reader’s curiosity.

This book will be useful not only for masters-level students but will, I hope, be well received by teachers and practitioners in the field. At a time when PowerPoint dictates the rules of scientific communication between students and teachers (and vice versa), this course – including some minor typos – smells pleasantly of chalk and blackboard.

The post A Course in Field Theory appeared first on CERN Courier.

]]>
Review Massimo Giovannini reviews in 2014 A Course in Field Theory. https://cerncourier.com/wp-content/uploads/2014/04/CCboo2_04_14.jpg
Advanced General Relativity: Gravity Waves, Spinning Particles, and Black Holes https://cerncourier.com/a/advanced-general-relativity-gravity-waves-spinning-particles-and-black-holes/ Fri, 28 Mar 2014 09:38:38 +0000 https://preview-courier.web.cern.ch/?p=104359 This book is aimed at students making the transition from a first course on general relativity to a specialized subfield.

The post Advanced General Relativity: Gravity Waves, Spinning Particles, and Black Holes appeared first on CERN Courier.

]]>
By Claude Barrabès and Peter A Hogan

Oxford University Press
Hardback: £55 $89.95
Also available as an e-book

This book is aimed at students making the transition from a first course on general relativity to a specialized subfield. It presents a variety of topics under the general headings of gravitational waves in vacuo and in a cosmological setting, equations of motion, and black holes, all having clear physical relevance and a strong emphasis on space–time geometry. Each chapter could be used as the basis for an early postgraduate project for those who are exploring avenues into research in general relativity, and who have already accumulated the technical knowledge required.

The post Advanced General Relativity: Gravity Waves, Spinning Particles, and Black Holes appeared first on CERN Courier.

]]>
Review This book is aimed at students making the transition from a first course on general relativity to a specialized subfield. https://cerncourier.com/wp-content/uploads/2022/08/41V6dQvOKrL._SX343_BO1204203200_.jpg
Heavy stable charged particles: an exotic window to new physics https://cerncourier.com/a/heavy-stable-charged-particles-an-exotic-window-to-new-physics/ https://cerncourier.com/a/heavy-stable-charged-particles-an-exotic-window-to-new-physics/#respond Fri, 28 Mar 2014 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/heavy-stable-charged-particles-an-exotic-window-to-new-physics/ As the LHC experiments improve the precision of their measurements of Standard Model processes, the extent of possibilities for new physics open to exploration is becoming ever more apparent.

The post Heavy stable charged particles: an exotic window to new physics appeared first on CERN Courier.

]]>
As the LHC experiments improve the precision of their measurements of Standard Model processes, the extent of possibilities for new physics open to exploration is becoming ever more apparent. Even within a constrained framework for new physics, such as the phenomenological minimal supersymmetric standard model (pMSSM), there is an impressive variety of final-state topologies and unique phenomena. For instance, in regions of the pMSSM where the chargino–neutralino mass difference is small, the chargino can become metastable and exhibit macroscopic lifetimes, potentially travelling anywhere between a few centimetres and many kilometres before it decays. An experiment like CMS can identify these heavy stable charged particles (HSCPs) through specialized techniques, such as patterns of anomalously high ionization in the inner tracker, as well as out-of-time signals in the muon detectors.
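
The logic behind these signatures can be seen in a toy calculation (not the CMS reconstruction itself): a heavy particle produced with typical collider momenta moves well below the speed of light, so it ionizes more and arrives late, and its mass can be estimated from the measured momentum and velocity.

```python
# Toy illustration (not the CMS algorithm) of why high ionization and late
# arrival tag heavy particles: at a given momentum a heavy particle is slow,
# so it ionizes more (roughly as 1/beta^2) and arrives late in the muon system,
# and its mass follows from momentum and velocity as m = p*sqrt(1 - beta^2)/beta.
import math

def mass_from_p_beta(p_GeV, beta):
    """Mass estimate from measured momentum and velocity (natural units)."""
    return p_GeV * math.sqrt(1.0 - beta**2) / beta

print(mass_from_p_beta(p_GeV=500.0, beta=0.6))    # ~667 GeV: HSCP-like candidate
print(mass_from_p_beta(p_GeV=500.0, beta=0.999))  # ~22 GeV: an ordinary relativistic particle
```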

The CMS collaboration recently released a reinterpretation of a previously published search for HSCPs that used these techniques to constrain several broad classes of new-physics models (CMS 2013a). There are two purposes to this reinterpretation. The first is to provide a simplified description of the acceptance and efficiency of the analysis as a function of a few key variables. This simplified “map” allows theorists and other interested researchers to determine the approximate sensitivity of the CMS experiment to any model that produces HSCPs. This is an essential tool for the broader scientific community, because HSCPs are predicted in a large variety of models and it is important to understand whether gaps in the coverage remain.
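
In practice, such a map could be used along the following lines – a minimal sketch with a hypothetical efficiency parametrization and made-up numbers, not the parametrization published by CMS.

```python
# Toy use of an acceptance x efficiency "map" for reinterpretation, with a
# hypothetical parametrization and made-up numbers: the expected HSCP yield is
# the signal cross-section times integrated luminosity times the per-candidate
# efficiency averaged over the generated events of the model being tested.

def toy_eff_map(beta, eta):
    """Hypothetical efficiency vs velocity and pseudorapidity of the HSCP."""
    if abs(eta) > 2.1 or beta > 0.9:       # outside acceptance or too fast to stand out
        return 0.0
    return 0.6 * (1.0 - beta)              # slower candidates are easier to tag

def expected_yield(xsec_pb, lumi_pb, candidates):
    avg_eff = sum(toy_eff_map(c["beta"], c["eta"]) for c in candidates) / len(candidates)
    return xsec_pb * lumi_pb * avg_eff

candidates = [{"beta": 0.5, "eta": 0.3}, {"beta": 0.8, "eta": 1.5}, {"beta": 0.95, "eta": 0.1}]
n_expected = expected_yield(xsec_pb=0.01, lumi_pb=18000.0, candidates=candidates)
print(f"expected signal events ~ {n_expected:.1f}")   # to be compared with the excluded yield
```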

The second purpose is to provide a concrete example of a reinterpretation in terms of the pMSSM. In this analysis, CMS chose a limited subspace of the full pMSSM, requiring, among other things, that sparticle masses extend only up to about 3 TeV. The figure shows the number of points in this restricted pMSSM subspace that are excluded as a function of the average decay length, cτ, of the chargino. The red points are excluded by the HSCP interpretation described here (CMS 2013b). The blue points are excluded by another CMS search dedicated to “prompt” chargino production (CMS 2012a). The bottom panel shows the fraction of parameter points excluded by each of these two searches. Only a few parameter points, with chargino cτ >1 km, are still not excluded. This is because the theoretical cross-section for these parameter points is small – around 0.1 fb.

This analysis demonstrates the power of the CMS search for HSCPs to cover a broad range of models of new physics. By mapping the sensitivity of the analysis as a function of the HSCP kinematics and the detector geometry, it also makes the results from the search accessible for studies by the broader scientific community.

Although this analysis searches for metastable particles, another open possibility is the production of new, exotic particles that traverse a short distance – around 1 mm to 100 cm – before decaying to visible particles within the detector. CMS has also released results from two searches for such particles. One search looks for decays of these long-lived particles into two jets, and another into two oppositely charged leptons (CMS 2012b and 2012c). The results from these searches exclude production cross-sections for such particles as low as about 0.5 fb, depending on the lifetime and kinematics of the decay.

The post Heavy stable charged particles: an exotic window to new physics appeared first on CERN Courier.

]]>
https://cerncourier.com/a/heavy-stable-charged-particles-an-exotic-window-to-new-physics/feed/ 0 News As the LHC experiments improve the precision of their measurements of Standard Model processes, the extent of possibilities for new physics open to exploration is becoming ever more apparent. https://cerncourier.com/wp-content/uploads/2014/03/CCnew12_03_14.jpg
New precision reached on electron mass https://cerncourier.com/a/new-precision-reached-on-electron-mass/ https://cerncourier.com/a/new-precision-reached-on-electron-mass/#respond Fri, 28 Mar 2014 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/new-precision-reached-on-electron-mass/ Knowledge of the electron mass has been improved by a factor of 13, thanks to a clever extension of previous Penning-trap experiments.

The post New precision reached on electron mass appeared first on CERN Courier.

]]>
Knowledge of the electron mass has been improved by a factor of 13, thanks to a clever extension of previous Penning-trap experiments. A team from the Max-Planck-Institut für Kernphysik in Heidelberg, GSI and the ExtreMe Matter Institute in Darmstadt, and the Johannes Gutenberg-Universität in Mainz, used a Penning trap to measure the magnetic moment of an electron bound to a carbon nucleus in the hydrogen-like ion ¹²C⁵⁺. The cyclotron frequency of the combined system allowed precise determination of the magnetic field at the position of the electron, while the precession frequency allowed the mass of the electron to be determined.
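
The relation behind this procedure is compact; the sketch below uses rounded, illustrative values for the bound-electron g-factor, the ion mass and the measured frequency ratio, rather than those entering the published result.

```python
# Sketch of the underlying relation, with rounded, illustrative numbers (not
# those of the published analysis): for hydrogen-like 12C5+ the ratio of the
# electron's spin-precession (Larmor) frequency to the ion's cyclotron frequency is
#   Gamma = nu_L / nu_c = (g/2) * (m_ion / m_e) * (e / q_ion),  with q_ion = 5e,
# so a measured Gamma plus the calculated bound-electron g-factor gives m_e.

g_bound = 2.001      # bound-electron g-factor from theory (rounded, illustrative)
m_ion_u = 11.9973    # approximate mass of the 12C5+ ion in atomic-mass units
gamma = 4376.1       # hypothetical measured frequency ratio nu_L / nu_c

m_e_u = (g_bound / 2.0) * m_ion_u / (5.0 * gamma)
print(f"m_e ~ {m_e_u:.5e} u")   # comes out near 5.4858e-4 u
```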

The result, in atomic-mass units, is 0.000548579909067(14)(9)(2), where the last error is theoretical. This new value for the electron’s mass will allow comparison of the magnetic moment of the electron with theory – which is good to about 0.08 parts in 10¹² – to better than one part in 10¹².

The post New precision reached on electron mass appeared first on CERN Courier.

]]>
https://cerncourier.com/a/new-precision-reached-on-electron-mass/feed/ 0 News Knowledge of the electron mass has been improved by a factor of 13, thanks to a clever extension of previous Penning-trap experiments.
The Conceptual Framework of Quantum Field Theory https://cerncourier.com/a/the-conceptual-framework-of-quantum-field-theory/ Wed, 22 Jan 2014 14:02:36 +0000 https://preview-courier.web.cern.ch/?p=104420 This book attempts to provide an introduction to quantum field theory by emphasizing conceptual issues.

The post The Conceptual Framework of Quantum Field Theory appeared first on CERN Courier.

]]>
By Anthony Duncan
Oxford University Press

Hardback: £77.50
Also available as an e-book

This book attempts to provide an introduction to quantum field theory by emphasizing conceptual issues. The aim is to build up the theory systematically from clearly stated foundations. The first section, “Origins”, consists of two historical chapters that situate quantum field theory in the larger context of modern physical theories. The three remaining sections follow a step-by-step reconstruction of this framework, beginning with a few basic assumptions: relativistic invariance, the basic principles of quantum mechanics, and the prohibition of physical action at a distance embodied in the clustering principle. Problems are included at the ends of the chapters and solutions can be requested via the publisher’s website.

The post The Conceptual Framework of Quantum Field Theory appeared first on CERN Courier.

]]>
Review This book attempts to provide an introduction to quantum field theory by emphasizing conceptual issues. https://cerncourier.com/wp-content/uploads/2022/08/41qBd2VIlOL._SX347_BO1204203200_.jpg
Basic Concepts of String Theory https://cerncourier.com/a/basic-concepts-of-string-theory/ Wed, 22 May 2013 08:23:23 +0000 https://preview-courier.web.cern.ch/?p=104513 Wolfgang Lerche reviews in 2013 Basic Concepts of String Theory.

The post Basic Concepts of String Theory appeared first on CERN Courier.

]]>
By Ralph Blumenhagen, Dieter Lüst and Stefan Theisen
Springer
Hardback: £72 €84.35 $99
E-book: £56.99 €67.82 $69.95

This new textbook features an introduction to string theory, a fundamental line of research in theoretical physics during recent decades. String theory provides a framework for unifying particle physics and gravity in a coherent manner and, moreover, appears also to be consistent at the quantum level. This sets it apart from other attempts at that goal. More generally, string theory plays an important role as a generator of ideas and “toy” models in many areas of theoretical physics and mathematics; the spin-off includes the application of mathematical methods, originally motivated by and developed within string theory, to other areas. For example, string theory helps in the understanding of certain properties of gauge theories, black holes, the early universe and heavy-ion physics.

Thus any student or researcher in particle physics should have some knowledge of this important field. The book under discussion provides an excellent basis for that. It encompasses a range of essential and advanced topics, aiming at mid- to high-level students and researchers who really want to get into the subject and/or would like to look up some facts. For beginners who just want to gain an impression of what string theory is all about, the book might be a little hefty and daunting. It requires a serious effort to master, and corresponds to at least a one-year course on string theory.

The book offers a refreshing mix of basic facts and up-to-date research, and avoids giving too much space to formal and relatively boring subjects such as the quantization of the bosonic string. Rather, the main focus is on the construction and properties of the various string theories in 10 dimensions and their compactifications to lower dimensions; it also includes thorough discussions of D-branes, fluxes and dualities. A particular emphasis is given to the two-dimensional world-sheet, or conformal field-theoretical point of view, which is more “stringy” than the popular supergravity approach. Filling this important gap is one of the strengths of this book, which sets it apart from other recent, similar books.

This is in line with the general focus of the book, namely the unification aspect of string theory, whose main aim is to explain, or at least describe, all known particles and interactions in one consistent framework. In recent years, additional aspects of string theory have become increasingly popular and important lines of research, including the anti-de Sitter/conformal-field-theory (AdS/CFT) correspondence and the quantum properties of black holes. The book barely touches on these subjects, which is wise, because even the basic material would be more than could fit into the same volume. For these subjects, a second volume may be in order.

All in all, this book is a perfect guide for someone with moderate prior exposure to field and string theory who wants to get into the principles and technical details of string-model construction.

The post Basic Concepts of String Theory appeared first on CERN Courier.

]]>
Review Wolfgang Lerche reviews in 2013 Basic Concepts of String Theory. https://cerncourier.com/wp-content/uploads/2013/05/CCboo3_05_13.jpg
An Introduction to Non-Perturbative Foundations of Quantum Field Theory https://cerncourier.com/a/an-introduction-to-non-perturbative-foundations-of-quantum-field-theory/ Fri, 26 Apr 2013 08:30:13 +0000 https://preview-courier.web.cern.ch/?p=104523 This book provides general physical principles and a mathematically sound approach to Quantum Field Theory.

The post An Introduction to Non-Perturbative Foundations of Quantum Field Theory appeared first on CERN Courier.

]]>
By Franco Strocchi
Oxford University Press
Hardback: £55 $98.50

Quantum Field Theory (QFT) has proved to be the most useful strategy for the description of elementary-particle interactions and as such is regarded as a fundamental part of modern theoretical physics. In most presentations, the emphasis is on the effectiveness of the theory in producing experimentally testable predictions, which at present essentially means perturbative QFT. However, after more than 50 years of QFT, there is still no single non-trivial (even non-realistic) model of QFT in 3+1 dimensions that allows non-perturbative control. This book provides general physical principles and a mathematically sound approach to QFT. It covers the general structure of gauge theories, presents the charge superselection rules, gives a non-perturbative treatment of the Higgs mechanism and covers chiral symmetry breaking in QCD without instantons.

The post An Introduction to Non-Perturbative Foundations of Quantum Field Theory appeared first on CERN Courier.

]]>
Review This book provides general physical principles and a mathematically sound approach to Quantum Field Theory. https://cerncourier.com/wp-content/uploads/2022/08/419Q1IYKsKL._SX343_BO1204203200_.jpg
Group Theory for High-Energy Physicists https://cerncourier.com/a/group-theory-for-high-energy-physicists/ Thu, 28 Mar 2013 08:39:45 +0000 https://preview-courier.web.cern.ch/?p=104539 Although group theory has played a significant role in the development of various disciplines of physics, there are few recent books that start from the beginning and then go on to consider applications from the point of view of high-energy physicists. Group Theory for High-Energy Physicists aims to fill that role.

The post Group Theory for High-Energy Physicists appeared first on CERN Courier.

]]>
By Mohammad Saleem and Muhammad Rafique
CRC Press/Taylor and Francis
Hardback: £44.99

Although group theory has played a significant role in the development of various disciplines of physics, there are few recent books that start from the beginning and then go on to consider applications from the point of view of high-energy physicists. Group Theory for High-Energy Physicists aims to fill that role. The book first introduces the concept of a group and the characteristics that are imperative for developing group theory as applied to high-energy physics. It then describes group representations and, with a focus on continuous groups, analyses the root structure of important groups and obtains the weights of various representations of these groups. It also explains how symmetry principles associated with group theoretical techniques can be used to interpret experimental results and make predictions. This concise introduction should be accessible to undergraduate and graduate students in physics and mathematics, as well as to researchers in high-energy physics.

The post Group Theory for High-Energy Physicists appeared first on CERN Courier.

]]>
Review Although group theory has played a significant role in the development of various disciplines of physics, there are few recent books that start from the beginning and then go on to consider applications from the point of view of high-energy physicists. Group Theory for High-Energy Physicists aims to fill that role. https://cerncourier.com/wp-content/uploads/2022/08/9780429087097-feature.jpg
Introduction to Mathematical Physics: Methods and Concepts, Second Edition https://cerncourier.com/a/introduction-to-mathematical-physics-methods-and-concepts-second-edition/ Thu, 28 Mar 2013 08:39:45 +0000 https://preview-courier.web.cern.ch/?p=104540 Introduction to Mathematical Physics explains how and why mathematics is needed in the description of physical events in space.

The post Introduction to Mathematical Physics: Methods and Concepts, Second Edition appeared first on CERN Courier.

]]>
By Chun Wa Wong
Oxford University Press
Hardback: £45 $84.95

Introduction to Mathematical Physics explains how and why mathematics is needed in the description of physical events in space. Aimed at physics undergraduates, it is a classroom-tested textbook on vector analysis, linear operators, Fourier series and integrals, differential equations, special functions and functions of a complex variable. Strongly correlated with core undergraduate courses on classical and quantum mechanics and electromagnetism, it helps students master these necessary mathematical skills but also contains advanced topics of interest to graduate students. It includes many tables of mathematical formulae and references to useful materials on the internet, as well as short tutorials on basic mathematical topics to help readers refresh their knowledge. An appendix on Mathematica encourages the reader to use computer-aided algebra to solve problems in mathematical physics. A free Instructor’s Solutions Manual is available to instructors who order the book.

The post Introduction to Mathematical Physics: Methods and Concepts, Second Edition appeared first on CERN Courier.

]]>
Review Introduction to Mathematical Physics explains how and why mathematics is needed in the description of physical events in space. https://cerncourier.com/wp-content/uploads/2022/08/51PHIPEJWL.jpg
Gauge Theories of Gravitation: A Reader with Commentaries https://cerncourier.com/a/gauge-theories-of-gravitation-a-reader-with-commentaries/ Thu, 28 Mar 2013 08:39:45 +0000 https://preview-courier.web.cern.ch/?p=104541 With a foreword by Tom Kibble and commentaries by Milutin Blagojević and Friedrich W Hehl, the aim of this volume is to introduce graduate and advanced undergraduate students of theoretical or mathematical physics – and other interested researchers – to the field of classical gauge theories of gravity.

The post Gauge Theories of Gravitation: A Reader with Commentaries appeared first on CERN Courier.

]]>
By Milutin Blagojević and Friedrich W Hehl (eds.)
World Scientific
Hardback: £111 $168 S$222

With a foreword by Tom Kibble and commentaries by Milutin Blagojević and Friedrich W Hehl, the aim of this volume is to introduce graduate and advanced undergraduate students of theoretical or mathematical physics – and other interested researchers – to the field of classical gauge theories of gravity. Intended as a guide to the literature in this field, it encourages readers to study the introductory commentaries and become familiar with the basic content of the reprints and the related ideas, before choosing specific reprints and then returning to the text to focus on further topics.

The post Gauge Theories of Gravitation: A Reader with Commentaries appeared first on CERN Courier.

]]>
Review With a foreword by Tom Kibble and commentaries by Milutin Blagojević and Friedrich W Hehl, the aim of this volume is to introduce graduate and advanced undergraduate students of theoretical or mathematical physics – and other interested researchers – to the field of classical gauge theories of gravity. https://cerncourier.com/wp-content/uploads/2022/08/51elGMp-YQL.jpg
A Unified Grand Tour of Theoretical Physics, Third Edition https://cerncourier.com/a/a-unified-grand-tour-of-theoretical-physics-third-edition/ Thu, 28 Mar 2013 08:39:44 +0000 https://preview-courier.web.cern.ch/?p=104537 A Unified Grand Tour of Theoretical Physics invites readers on a guided exploration of the theoretical ideas that shape contemporary understanding of the physical world at the fundamental level.

The post A Unified Grand Tour of Theoretical Physics, Third Edition appeared first on CERN Courier.

]]>
By Ian D Lawrie
CRC Press/Taylor and Francis
Paperback: £44.99

A Unified Grand Tour of Theoretical Physics invites readers on a guided exploration of the theoretical ideas that shape contemporary understanding of the physical world at the fundamental level. Its central themes – which include space–time geometry and the general relativistic account of gravity, quantum field theory and the gauge theories of fundamental forces – are developed in explicit mathematical detail, with an emphasis on conceptual understanding. Straightforward treatments of the Standard Model of particle physics and that of cosmology are supplemented with introductory accounts of more speculative theories, including supersymmetry and string theory. This third edition includes a new chapter on quantum gravity and new sections with extended discussions of topics such as the Higgs boson, massive neutrinos, cosmological perturbations, dark energy and dark matter.

The post A Unified Grand Tour of Theoretical Physics, Third Edition appeared first on CERN Courier.

]]>
Review A Unified Grand Tour of Theoretical Physics invites readers on a guided exploration of the theoretical ideas that shape contemporary understanding of the physical world at the fundamental level. https://cerncourier.com/wp-content/uploads/2022/08/9781138473355-feature.jpg
Strings, Gauge Fields, and the Geometry Behind the Legacy of Maximilian Kreuzer https://cerncourier.com/a/strings-gauge-fields-and-the-geometry-behind-the-legacy-of-maximilian-kreuzer/ Thu, 28 Mar 2013 08:39:44 +0000 https://preview-courier.web.cern.ch/?p=104538 This book contains invited contributions from collaborators of Maximilian Kreuzer, a well known string theorist who built a sizeable group at Vienna University of Technology but sadly died in November 2010 aged just 50.

The post Strings, Gauge Fields, and the Geometry Behind the Legacy of Maximilian Kreuzer appeared first on CERN Courier.

]]>
By Anton Rebhan, Ludmil Katzarkov, Johanna Knapp, Radoslav Rashkov and Emanuel Scheidegger (eds.)
World Scientific
Hardback: £104
E-book: £135

This book contains invited contributions from collaborators of Maximilian Kreuzer, a well-known string theorist who built a sizeable group at Vienna University of Technology (TU Vienna) but sadly died in November 2010, aged just 50. Victor Batyrev, Philip Candelas, Michael Douglas, Alexei Morozov, Joseph Polchinski, Peter van Nieuwenhuizen and Peter West are among those contributing accounts of Kreuzer’s scientific legacy as well as original articles. Besides reviews of recent progress in the exploration of string-theory vacua and corresponding mathematical developments, Part I reviews in detail Kreuzer’s important work with Friedemann Brandt and Norbert Dragon on the classification of anomalies in gauge theories. Similarly, Part III contains a user manual for a new, thoroughly revised version of PALP (Package for Analysing Lattice Polytopes with applications to toric geometry), the software developed by Kreuzer and Harald Skarke at TU Vienna.

The post Strings, Gauge Fields, and the Geometry Behind the Legacy of Maximilian Kreuzer appeared first on CERN Courier.

]]>
Review This book contains invited contributions from collaborators of Maximilian Kreuzer, a well known string theorist who built a sizeable group at Vienna University of Technology but sadly died in November 2010 aged just 50. https://cerncourier.com/wp-content/uploads/2022/08/strings-gauge-fields-and-the-geometry-behind-the-legacy-of-maximilian-kreuzer.jpg
Supergravity https://cerncourier.com/a/supergravity/ Tue, 06 Nov 2012 10:12:28 +0000 https://preview-courier.web.cern.ch/?p=104646 John March-Russell reviews in 2012 Supergravity.

The post Supergravity appeared first on CERN Courier.

]]>
By Daniel Z Freedman and Antoine Van Proeyen
Cambridge University Press
Hardback: £45
E-book: $64

Since the work of Emmy Noether nearly a century ago, the idea of symmetry has played an increasingly important role in physics, resulting in spectacular successes such as Yang-Mills gauge theory along the way. Albert Einstein, in particular, realized that symmetry could be a foundational principle; his understanding that the space–time dependent (“local”) symmetry of general co-ordinate invariance could be used to build general relativity had an enormous impact on the development of 20th-century physics.

The current zenith of the local symmetry principle is the theory of supergravity, which combines general relativity with the spin-intermingling theory of supersymmetry to construct the richest and deepest symmetry-based theory yet discovered. Supergravity also lies at the foundation of string theory – a theory whose own symmetry principle has not yet been uncovered – and so is one of the central ideas of modern high-energy theoretical physics.

Unfortunately, since its invention in the 1970s, supergravity has been an infamously difficult subject to learn. Now, two of the inventors and masters of supergravity – Dan Freedman and Antoine Van Proeyen – have produced a superb, pedagogical textbook that covers the classical theory in considerable depth.

The book is notably self-contained, with substantial and readable introductory material on the ideas and techniques that combine to make up supergravity, such as global supersymmetry, gauge theory, the mathematics of spinors and general relativity. There are many well-chosen problems for the student along the way, together with compact discussions of complex geometry. After the backbone of the book on N=1 and N=2 supergravities, there is an excellent and especially clear chapter on the anti-de Sitter supergravity/conformal field theory correspondence as an application.

Naturally, any finite book has to cut short some deserving topics. I hope that any second edition has an expanded discussion on superspace to complement the current, clear treatment based on the component multiplet calculus, as well as a greater discussion on supergravity and supersymmetry in the quantum regime.

Overall, this is a masterful introduction to supergravity for students and researchers alike, which I strongly recommend.

The post Supergravity appeared first on CERN Courier.

]]>
Review John March-Russell reviews in 2012 Supergravity. https://cerncourier.com/wp-content/uploads/2012/11/CCboo2_09_12.jpg
An Introduction to String Theory and D-Brane Dynamics: With Problems and Solutions (2nd Edition) https://cerncourier.com/a/an-introduction-to-string-theory-and-d-brane-dynamics-with-problems-and-solutions-2nd-edition/ Fri, 27 Apr 2012 10:44:31 +0000 https://preview-courier.web.cern.ch/?p=104691 Originally published in 2004, this book provides a quick introduction to the rudiments of perturbative string theory and a detailed introduction to the more current topic of D-brane dynamics.

The post An Introduction to String Theory and D-Brane Dynamics: With Problems and Solutions (2nd Edition) appeared first on CERN Courier.

]]>
By Richard J Szabo
Imperial College Press
Hardback: £42 $68
E-book: $88

Originally published in 2004, this book provides a quick introduction to the rudiments of perturbative string theory and a detailed introduction to the more current topic of D-brane dynamics. The presentation is pedagogical, with much of the technical detail streamlined. The rapid but coherent introduction to the subject is perhaps what distinguishes this book from other string-theory or D-brane books. This second edition includes an additional appendix with solutions to the exercises, thus expanding the technical material and making the book more appealing for use in lecture courses. The material is based on mini-courses in theoretical high-energy physics delivered by the author at various summer schools, so its level has been appropriately tested.

The post An Introduction to String Theory and D-Brane Dynamics: With Problems and Solutions (2nd Edition) appeared first on CERN Courier.

]]>
Review Originally published in 2004, this book provides a quick introduction to the rudiments of perturbative string theory and a detailed introduction to the more current topic of D-brane dynamics. https://cerncourier.com/wp-content/uploads/2022/08/61kelz3dhML.jpg
Interactions with André Petermann https://cerncourier.com/a/interactions-with-andr-petermann/ https://cerncourier.com/a/interactions-with-andr-petermann/#respond Tue, 27 Mar 2012 14:35:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/interactions-with-andr-petermann/ Antonino Zichichi remembers a major CERN theorist.

The post Interactions with André Petermann appeared first on CERN Courier.

]]>
The origin of this conceptual revolution was the work in which these two theoretical physicists discovered that all quantities such as the gauge couplings (αᵢ) and the masses (mⱼ) must “run” with q², the invariant four-momentum squared of a process (Stueckelberg and Petermann 1951). It took many years to realize that this “running” not only allows the existence of a grand unification and opens the way to supersymmetry, but also ultimately points to the need for a non-point-like description of physics processes – the relativistic quantum-string theory – that should produce the much-needed quantization of gravity.

It is interesting to recall the reasons why this paper attracted so much attention. The radiative corrections to any electromagnetic process had been found to be logarithmically divergent. Fortunately, all divergences could be grouped into two classes: one had the property of a mass; the other had the property of an electric charge. If these divergent integrals were replaced by the experimentally measured mass and charge of the electron, then all theoretical predictions became “finite”. This procedure was called “mass” and “charge” renormalization.

Stueckelberg and Petermann discovered that if the mass and the charge are made finite, then they must run with energy. However, the freedom remains to choose the renormalization subtraction points. Petermann and Stueckelberg proposed that this freedom had to obey the rules of an invariance group, which they called the “renormalization group” (Stueckelberg and Petermann 1953). This is the origin of what we now call the renormalization-group equations, which – as mentioned – imply that all gauge couplings and masses must run with energy. It was remarkable, many years later, to find that the three gauge couplings could converge, even if not well, towards the same value. This means that all gauge forces could have the same origin; in other words, grand unification. A difficulty in the unification was the new supersymmetry that my old friend Bruno Zumino was proposing with Julius Wess. Bruno told me that he was working with a young fellow, Sergio Ferrara, to construct non-Abelian Lagrangian theories simultaneously invariant under supergauge transformations, without destroying asymptotic freedom. During a night-time discussion with André in 1977, in the experimental hall where we were searching for quarks at the Intersecting Storage Rings, I told him that two gifts were in front of us: asymptotic freedom and supersymmetry. The first was essential for the experiment being implemented, the second for making the convergence of the gauge couplings “perfect” in our work on the unification. We will see later that this was the first time that we realized how to make the unification “perfect”.
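
The point can be illustrated with a schematic one-loop calculation, using standard textbook beta-function coefficients and approximate MZ-scale inputs (supersymmetric thresholds are ignored, so this is a sketch rather than the analysis referred to in the text).

```python
# Schematic one-loop running of the three inverse gauge couplings (standard
# textbook beta coefficients, approximate MZ-scale inputs, SUSY thresholds
# ignored): with the Standard Model content the couplings never quite meet,
# while the supersymmetric spectrum changes the slopes so that they cross
# almost at a single point near 10^16 GeV.
import math

alpha_inv_MZ = {1: 59.0, 2: 29.6, 3: 8.5}     # GUT-normalized, approximate values at MZ
b_SM   = {1: 41/10, 2: -19/6, 3: -7}          # one-loop Standard Model coefficients
b_MSSM = {1: 33/5,  2: 1,     3: -3}          # one-loop MSSM coefficients
MZ = 91.2                                     # GeV

def alpha_inv(i, Q, b):
    """One-loop running: d(alpha_i^-1)/d lnQ = -b_i/(2*pi)."""
    return alpha_inv_MZ[i] - b[i] / (2 * math.pi) * math.log(Q / MZ)

for Q in (1e14, 2e16):
    sm   = [round(alpha_inv(i, Q, b_SM), 1) for i in (1, 2, 3)]
    mssm = [round(alpha_inv(i, Q, b_MSSM), 1) for i in (1, 2, 3)]
    print(f"Q = {Q:.0e} GeV   SM: {sm}   MSSM: {mssm}")
```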

The muon g-2

The second occasion for me to know about André came in 1960, when I was engaged in measuring the anomalous magnetic moment (g-2) of the muon. He had made the most accurate theoretical prediction, but there was no high-precision measurement of this quantity because technical problems remained to be solved. For example, a magnet had to be built that could produce a set of high-precision polynomial magnetic fields throughout as long a path as possible. This is how the biggest (6-m long) “flat magnet” came to be built at CERN, with the invention of a new technology now in use the world over. André worked only at night and, because he was interested in the experimental difficulties, he spent nights with me working in the SC Experimental Hall. It was a great help for me to interact with the theorist who had made the most accurate theoretical prediction for the anomalous magnetic moment of a particle 200 times heavier than the electron. Surely, we thought, the muon had to reveal a difference in a fundamental property such as its g-value. Otherwise, why is its mass 200 times greater than that of the electron? (Even now, five decades later, no one knows why.)

When the experiment at CERN proved that, at the level of 2.5 parts in a million for the g-value, the muon behaves as a perfect electromagnetic object, the focus shifted to the question of why there are so many muons around. The answer lay in the incredible value of the mass difference between the muon and its parent, the π. Could another “heavy electron” – a “third lepton” – exist with a mass in the range of giga-electron-volts? Had a search ever been done for this third “lepton”? The answer was no. Only strongly interacting particles had been studied. This is how the search for a new heavy lepton, called HL, was implemented at CERN with the Proton AntiProton into LEpton Pairs (PAPLEP) project, in which the production process was proton–antiproton annihilation. André and I discussed these topics in the CERN Experimental Hall during the night shifts he spent with me.

The results of the PAPLEP experiment gave an unexpected and extremely strong suppression from the (time-like) electromagnetic form factor of the proton, with the consequence that the cross-section was a factor of 500 below the point-like cross-section for PAPLEP. This is how, during another series of night discussions with André, we decided that the “ideal” production process for a third “lepton” was (e+e–) annihilation. However, there was no such collider at CERN. The only one being built was at Frascati, by Bruno Touschek, who was a good friend of Bruno Ferretti and another physicist who preferred to work at night. I had the great privilege of knowing Touschek when I was in Rome. He also became a strong supporter of the search for a “third lepton” with the new e+e– collider, ADONE. Unfortunately, the top energy of ADONE was 3 GeV and the only result that we could achieve was a limit of 1 GeV for the mass of the much-desired “third lepton”.

Towards supersymmetry

Another topic talked about with André has its roots in the famous work with Stueckelberg – the running with energy of the fundamental couplings of the three interactions: electromagnetic, weak and strong. The crucial point came at the European Physical Society (EPS) conferences in York (1978) and Geneva (1979). In my closing lecture at EPS-Geneva, I said: “Unification of all forces needs first a supersymmetry. This can be broken later, thus generating the sequence of the various forces of nature as we observe them.” This statement was based on the work with André in which, in 1977, we had studied – as mentioned before – the renormalization-group running of the couplings and introduced a new degree of freedom: supersymmetry. The result was that the convergence of the three couplings improved a great deal. This work was not published, but it was known to a few and it led to the Erice Schools Superworld I, Superworld II and Superworld III.

This is how we arrived at 1991, when it was announced that the search for supersymmetry had to wait until the multi-tera-electron-volt energy threshold became available. At the time, a group of 50 young physicists was engaged with me on the search for the lightest supersymmetric particle in the L3 experiment at CERN’s Large Electron Positron (LEP) collider. If the new theoretical “predictions” were true, then there was no point in spending so much effort in looking for supersymmetry-breaking in the LEP energy region. Reading the relevant papers, André and I realized that no one had ever considered the evolution of the gaugino mass (EGM). During many nights of work we improved the unpublished result of 1977 mentioned above: the effect of the EGM was to bring down the energy threshold for supersymmetry-breaking by nearly three orders of magnitude. Thanks to this series of works, I could assure my collaborators that the “theoretical” predictions of the energy level at which supersymmetry-breaking could occur were perfectly compatible with LEP energies (and now with LHC energies).

Finally, in the field of scientific culture, I would like to pay tribute to André Petermann for having been a strong supporter of the establishment of the Ettore Majorana Centre for Scientific Culture in Erice. In the old days, before anyone knew of Ettore Majorana, André was one of the few people who knew about Majorana neutrinos, and who understood that relativistic invariance gives no special privilege to spin-½ particles – such as the privilege of having antiparticles – because all spin values share it. In all of my projects André was of great help, encouraging me to go on, no matter what arguments the opposition might present – arguments that he often found to be far from rigorous.

The post Interactions with André Petermann appeared first on CERN Courier.

]]>
https://cerncourier.com/a/interactions-with-andr-petermann/feed/ 0 Feature Antonino Zichichi remembers a major CERN theorist. https://cerncourier.com/wp-content/uploads/2012/03/CCpet1_03_12.jpg
Saul Perlmutter: from light into darkness https://cerncourier.com/a/saul-perlmutter-from-light-into-darkness/ https://cerncourier.com/a/saul-perlmutter-from-light-into-darkness/#respond Tue, 27 Mar 2012 14:35:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/saul-perlmutter-from-light-into-darkness/ The Nobel laureate on the discovery behind dark energy.

The post Saul Perlmutter: from light into darkness appeared first on CERN Courier.

]]>

Paradoxically, work on “light candles” led to the discovery that the universe is much darker than anyone thought. Arnaud Marsollier caught up with Saul Perlmutter recently to find out more about this Nobel breakthrough.

Saul Perlmutter admits that measuring an acceleration of the expansion of the universe – work for which he was awarded the 2011 Nobel Prize in Physics together with Brian Schmidt and Adam Riess – came as a complete surprise. Indeed, it is exactly the opposite of what Perlmutter’s team was trying to measure: the decelerating expansion of the universe. “My very first reaction was the reaction of any physicist in such a situation: I wondered which part of the chain of the analysis needed a new calibration,” he recalls. After the team had checked and rechecked over several weeks, Perlmutter, who is based at Lawrence Berkeley National Laboratory and the University of California, Berkeley, still wondered what could be wrong: “If we were going to present this, then we would have to make sure that everybody understood each of the checks.” Then, after a few months, the team began to make public its result in the autumn of 1997, inviting scrutiny from the broader cosmology community.

Despite great astonishment, acceptance of the result was swift. “Maybe in science’s history, it’s the fastest acceptance of a big surprise,” says Perlmutter. He remembers how, at a colloquium he presented in November 1997, cosmologist Joel Primack stood up and, instead of talking to Perlmutter, addressed the audience, declaring: “You may not realize this, but this is a very big problem. This is an outstanding result you should be worried about.” Of course, some colleagues were sceptical at first. “There must be something wrong, it is just too crazy to have such a small cosmological constant,” said cosmologist Rocky Kolb at a later conference in early 1998.


According to Perlmutter, one of the main reasons for the quick acceptance by the community of the accelerating expansion of the universe is that two teams reported the same result at almost the same time: Perlmutter’s Supernova Cosmology Project and the High-z Supernova Search Team of Schmidt and Riess. Thus, there was no need to wait a long time for confirmation from another team. “It was known that the two teams were furious competitors and that each of them would be very glad to prove the other one wrong,” he adds. By the spring of 1998, a symposium was organized at Fermilab that gathered many cosmologists and particle physicists specifically to look at these results. At the end of the meeting, after subjecting the two teams to hard questioning, some three quarters of the people in the room raised their hands in a vote to say that they believed the results.

What could be responsible for such an acceleration of the expanding universe? Dark energy, a hypothetical “repulsive energy” present throughout the universe, was the prime suspect. The concept of dark energy was also welcomed because it solves some delicate theoretical problems. “There were questions in cosmology that did not work so well, but with a cosmological constant they are solved,” explains Perlmutter. Albert Einstein had at first included a cosmological constant in his equations of general relativity. The aim was to introduce a counterpart to gravity in order to have a model describing a static universe. However, with evidence for the expansion of the universe and the Big Bang theory, the cosmological constant had been abandoned by most cosmologists. According to George Gamow, even Einstein thought that it was his “biggest blunder” (Gamow 1970). Today, with the discovery of the acceleration of the expansion of the universe, the cosmological constant “is back”.

Since the discovery, other kinds of measurements – for example on the cosmic microwave background radiation (CMB), first by the MAXIMA and BOOMERANG balloon experiments, and then by the Wilkinson Microwave Anisotropy Probe satellite – have proved consistent with, and even strengthened, the idea of an accelerating expansion of the universe. However, it all leads to a big question: what could be the nature of dark energy? In the 20th century, physicists were already busy with dark matter, the mysterious invisible matter that can only be inferred through observations of its gravitational effects on other structures in the universe. Although they still do not know what dark matter is, physicists are increasingly confident that they are close to finding out, with many different kinds of experiments that can shed light on it, from telescopes to underground experiments to the LHC. In the case of dark energy, however, the community is far from agreeing on a consistent explanation.

When asked what dark energy could be, Perlmutter’s eyes light up and his broad smile shows how excited he is by this challenging question. “Theorists have been doing a very good job and we have a whole landscape of possibilities. Over the past 12 years there was an average of one paper a day from the theorists. This is remarkable,” he says. Indeed, this question has now become really important as it seems that physicists know about a mere 5% of the whole mass-energy of the universe, the rest being in the form of dark matter or, in the case of more than 70%, the enigmatic, repulsive stuff known as dark energy or a vacuum energy density.

Including a cosmological constant in Einstein’s equations of general relativity is a simple solution to explain the acceleration of the expansion of the universe. However, there are other possibilities. For example, a decaying scalar field of the kind that could have driven the first acceleration at the beginning of the universe, or the existence of extra dimensions, could save the standard cosmological model. “We might even have to modify Einstein’s general relativity,” Perlmutter says. Indeed, all that is known is that the expansion of the universe is accelerating, but there is no clue as to why. The ball is in the court of the experimentalists, who will have to provide theorists with more data and refined measurements to show precisely how the expansion rate changes over time. New observations by different means will be crucial, as they could show the way forward and decide between the different available theoretical models.
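
The kind of measurement involved can be sketched in a few lines. The toy calculation below is an illustration only – assumed round values for H0 and the density parameters, not the analysis of either supernova team – and compares the luminosity distance in a flat, matter-only universe with one containing a cosmological constant: in the accelerating case, supernovae at a given redshift lie farther away and therefore appear fainter.

```python
# Toy comparison of luminosity distances in a flat FLRW universe with and
# without a cosmological constant (assumed parameter values, for illustration).
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458          # speed of light, km/s
H0 = 70.0                    # Hubble constant, km/s/Mpc (assumed)

def luminosity_distance(z, omega_m, omega_lambda):
    """d_L = (1 + z) (c/H0) Integral_0^z dz'/E(z'), with E(z) = sqrt(Om(1+z)^3 + OL), flat universe."""
    integrand = lambda zp: 1.0 / np.sqrt(omega_m * (1 + zp) ** 3 + omega_lambda)
    integral, _ = quad(integrand, 0.0, z)
    return (1 + z) * C_KM_S / H0 * integral      # in Mpc

for z in (0.1, 0.5, 1.0):
    d_matter = luminosity_distance(z, 1.0, 0.0)  # decelerating, matter only
    d_lambda = luminosity_distance(z, 0.3, 0.7)  # accelerating, with a cosmological constant
    delta_mag = 5 * np.log10(d_lambda / d_matter)
    print(f"z = {z}: {d_matter:6.0f} Mpc vs {d_lambda:6.0f} Mpc -> supernovae fainter by {delta_mag:.2f} mag")
```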

“We have improved the supernova technique and we know what we need to make a measurement that is 20 times more accurate,” he says. There are also two other precision techniques currently being developed to probe dark energy either in space or from the ground. One uses baryon acoustic oscillations, which can be seen as “standard rulers” in the same way that supernovae are used as standard candles. These oscillations leave imprints on the structure of the universe at all ages. By studying these imprints relative to the CMB, the earliest “picture of the universe” available, it is possible to measure the rate at which the expansion of the universe is accelerating. The second technique is based on gravitational lensing, a deflection of light by massive structures, which allows cosmologists to study the history of the clumping of matter in the universe, with the attraction of gravity contesting with the accelerating expansion. “We think we can use all of these techniques together,” says Perlmutter. Among the projects he mentions are the US-led ground-based experiments BigBOSS and the Large Synoptic Survey Telescope, and ESA’s Euclid satellite, all of which are under preparation.

However, the answer to this obscure mystery – or at least part of it – could come from elsewhere. The full results from ESA’s Planck satellite, for instance, are eagerly awaited because they should provide unprecedented precision on measurements of the CMB. “The Planck satellite is an ingredient in all of these analyses,” explains Perlmutter. In addition, cosmology and particle physics are increasingly linked. In particular, the LHC could bring some input into the story quite soon. “It is an exciting time for physics,” he says. “If we just get one of these breakthroughs through the LHC, it would help a lot. We are really hoping that we will see the Higgs and maybe we will see some supersymmetric particles. If we are able to pin down the nature of dark matter, that can help a lot as well.” Not that Perlmutter thinks that the mystery of dark energy is related to dark matter, considering that they are two separate sectors of physics, but as he says, “until you find out, it is still possible”.

The post Saul Perlmutter: from light into darkness appeared first on CERN Courier.

]]>
https://cerncourier.com/a/saul-perlmutter-from-light-into-darkness/feed/ 0 Feature The Nobel laureate on the discovery behind dark energy. https://cerncourier.com/wp-content/uploads/2012/03/CCsau1_03_12.jpg
Quantum Engineering: Theory and Design of Quantum Coherent Structures https://cerncourier.com/a/quantum-engineering-theory-and-design-of-quantum-coherent-structures/ Tue, 27 Mar 2012 10:52:01 +0000 https://preview-courier.web.cern.ch/?p=104710 This book provides a self-contained presentation of the theoretical methods and experimental results in quantum engineering.

The post Quantum Engineering: Theory and Design of Quantum Coherent Structures appeared first on CERN Courier.

]]>
By A M Zagoskin
Cambridge University Press
Hardback: £45 $80
E-book: $64

9780521113694

Quantum engineering has emerged as a field with important potential applications. This book provides a self-contained presentation of the theoretical methods and experimental results in quantum engineering. It covers topics such as the quantum theory of electric circuits, the quantum theory of noise and the physics of weak superconductivity. The theory is complemented by up-to-date experimental data to help put it into context.

The post Quantum Engineering: Theory and Design of Quantum Coherent Structures appeared first on CERN Courier.

]]>
Review This book provides a self-contained presentation of the theoretical methods and experimental results in quantum engineering. https://cerncourier.com/wp-content/uploads/2022/08/9780521113694-feature.jpg
Relativistic Quantum Physics: From Advanced Quantum Mechanics to Introductory Quantum Field Theory https://cerncourier.com/a/relativistic-quantum-physics-from-advanced-quantum-mechanics-to-introductory-quantum-field-theory/ Tue, 27 Mar 2012 10:52:00 +0000 https://preview-courier.web.cern.ch/?p=104709 This book combines special relativity and quantum physics to provide a complete description of the fundamentals of relativistic quantum physics, guiding the reader from relativistic quantum mechanics to basic quantum field theory.

The post Relativistic Quantum Physics: From Advanced Quantum Mechanics to Introductory Quantum Field Theory appeared first on CERN Courier.

]]>
By Tommy Ohlsson
Cambridge University Press
Hardback: £38 $65
E-book: $52


Quantum physics and special relativity were two of the greatest breakthroughs of 20th-century science and contributed to paradigm shifts in physics. This book combines these two discoveries to provide a complete description of the fundamentals of relativistic quantum physics, guiding the reader from relativistic quantum mechanics to basic quantum field theory. It gives a detailed treatment of the subject, beginning with the classification of particles, the Klein–Gordon equation and the Dirac equation. Exercises and problems are featured at the end of most chapters.

The post Relativistic Quantum Physics: From Advanced Quantum Mechanics to Introductory Quantum Field Theory appeared first on CERN Courier.

]]>
Review This book combines special relativity and quantum physics to provide a complete description of the fundamentals of relativistic quantum physics, guiding the reader from relativistic quantum mechanics to basic quantum field theory. https://cerncourier.com/wp-content/uploads/2022/08/41mu5Ovo8CL.jpg
Dileptons: a window on force unification and extra-dimensions https://cerncourier.com/a/dileptons-a-window-on-force-unification-and-extra-dimensions/ https://cerncourier.com/a/dileptons-a-window-on-force-unification-and-extra-dimensions/#respond Wed, 23 Nov 2011 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/dileptons-a-window-on-force-unification-and-extra-dimensions/ The ATLAS Collaboration has published its latest search for neutral resonances decaying to pairs of leptons, either electrons or muons.

The post Dileptons: a window on force unification and extra-dimensions appeared first on CERN Courier.

]]>
The ATLAS Collaboration has published its latest search for neutral resonances decaying to pairs of leptons, either electrons or muons.


Searches for dilepton resonances have a history of discoveries, from the J/ψ and Υ to the Z boson. Now, new neutral gauge bosons, Z’, which would appear as such resonances, are predicted by a number of theories. They are the mediators of new forces that allow the unification of all fundamental forces at some very large energy scale. In models of extra-dimensional gravity, gravitons are also predicted to appear as dilepton resonances.

The analysis by ATLAS used a data sample corresponding to an integrated luminosity of 1.1 fb–1. The sensitivity to new physics extends to 1.8 TeV, similar to that of recent preliminary results from CMS and far beyond the limits achieved at lower-energy accelerators. The observed dilepton mass distributions (for example, the di-electron distribution of figure 2) are in good agreement with the spectrum predicted by the Standard Model including higher-order QCD and electroweak corrections.
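
For readers unfamiliar with how such a distribution is built, each entry is simply the invariant mass of a lepton pair computed from the measured transverse momenta, pseudorapidities and azimuthal angles. The snippet below is a generic sketch of that calculation for massless leptons, not ATLAS software, and the example kinematic values are invented.

```python
# Generic dilepton invariant mass from (pT, eta, phi), treating leptons as massless:
# m^2 = 2 pT1 pT2 [cosh(eta1 - eta2) - cos(phi1 - phi2)].  Example numbers are invented.
import math

def dilepton_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass in the same units as the transverse momenta."""
    return math.sqrt(2 * pt1 * pt2 * (math.cosh(eta1 - eta2) - math.cos(phi1 - phi2)))

# Two roughly back-to-back 500 GeV electrons reconstruct to a ~1 TeV candidate.
print(dilepton_mass(500.0, 0.2, 0.1, 500.0, -0.3, 0.1 + math.pi))   # about 1030 GeV
```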


The search technique employed by ATLAS involves the comparison of the dilepton mass distribution with the predicted spectrum over the entire high-mass range. The prediction includes a series of hypothetical resonance line-shapes with different masses and couplings. The dominant sources of systematic uncertainty are of a theoretical nature, arising from the calculations of the production rates.
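
The flavour of such a comparison can be conveyed with a toy bump hunt. In the sketch below the falling background, the Gaussian line shape and the pseudo-data are all invented – nothing is taken from the ATLAS analysis – and a Poisson log-likelihood is used to compare the signal-plus-background hypothesis with background alone at a few hypothetical resonance masses.

```python
# Toy bump hunt: compare binned pseudo-data with a falling background prediction,
# with and without a hypothetical resonance, via a Poisson log-likelihood.
# All numbers (background shape, signal model, binning) are invented.
import numpy as np
from scipy.stats import poisson

edges = np.linspace(200.0, 1800.0, 17)                     # dilepton mass bins, GeV
centres = 0.5 * (edges[:-1] + edges[1:])
background = 2.0e4 * np.exp(-centres / 150.0)              # assumed smoothly falling spectrum
observed = np.random.default_rng(1).poisson(background)    # pseudo-data from background only

def signal(m0, n_sig, width=40.0):
    """Gaussian line shape for a hypothetical resonance of mass m0, normalised to n_sig events."""
    shape = np.exp(-0.5 * ((centres - m0) / width) ** 2)
    return n_sig * shape / shape.sum()

def log_likelihood(expected):
    return poisson.logpmf(observed, expected).sum()

ll_bkg = log_likelihood(background)
for m0 in (600.0, 1000.0, 1400.0):
    ll_sb = log_likelihood(background + signal(m0, n_sig=20.0))
    print(f"m0 = {m0:6.0f} GeV   2*Delta(lnL) = {2 * (ll_sb - ll_bkg):+.2f}")
# Large positive values would favour signal plus background; since the pseudo-data
# were generated from background alone, no significant preference appears.
```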

The ATLAS detector will measure the mass of any resonance observed quite accurately. The liquid argon calorimeter provides a linear and stable response for electrons up to the highest energy, and the combination of the inner detector and the muon spectrometer provides muon measurement at the highest momenta. ATLAS will also measure the cross-section, couplings, spin and interference properties of a resonance.

Work is ongoing to increase the lepton acceptance further, and ATLAS will extend the kinematic reach of these exciting measurements with much larger datasets in 2011–2012.

The post Dileptons: a window on force unification and extra-dimensions appeared first on CERN Courier.

]]>
https://cerncourier.com/a/dileptons-a-window-on-force-unification-and-extra-dimensions/feed/ 0 News The ATLAS Collaboration has published its latest search for neutral resonances decaying to pairs of leptons, either electrons or muons. https://cerncourier.com/wp-content/uploads/2011/11/CCnew9_10_11.jpg
Gravitation: Foundations and Frontiers https://cerncourier.com/a/gravitation-foundations-and-frontiers/ Tue, 25 Oct 2011 07:50:44 +0000 https://preview-courier.web.cern.ch/?p=104837 Johann Rafelski reviews in 2011 Gravitation: Foundations and Frontiers.

The post Gravitation: Foundations and Frontiers appeared first on CERN Courier.

]]>
By T Padmanabhan
Cambridge University Press
Hardback: £50 $85
E-book: $68


The general theory of relativity – the foundation of gravitation and cosmology – may be as widely known today as Newton’s laws were before Einstein proposed their geometric interpretation. That was around 100 years ago, yet many unanswered questions and issues are being revisited from the current perspective, such as: why is gravity described by geometry and why is the cosmological constant so extraordinarily fine-tuned in comparison with the scale of elementary particles?

In an active research field – where the universe at large meets the discoveries in particle physics – there is much need for textbooks based on research that address gravity in depth. Thanu Padmanabhan’s book fills this need well and in a unique way. Within minutes of opening the rich, heavy, full, yet succinctly written 728 pages I realized that this is a new and personal view on general relativity, which leads beyond many excellent standard textbooks and offers a challenging training ground for students with its original exercises and study topics.

In the first 340 pages, the book presents the fundamentals of relativity in an approachable style. Yet, even in this “standard” part the text goes far beyond the conventional framework in preparing the reader in depth for mastering the “frontiers”. The titles of the following chapters speak for themselves: “Black Holes”, “Gravitational Waves”, “Relativistic Cosmology” and “Evolution of Cosmological Perturbations”, all of which address key domains in present-day research. Then, on page 591, the book turns to the quantum frontier and extensions of general relativity to extra dimensions, and to efforts to view it as an effective “emergent” theory.

This research-oriented volume is written in a format that is suitable for a primary text in a year-long graduate class on general relativity, although the lecturer is likely to leave a few of the chapters to self-study. “Padmanabhan” complements the somewhat older offerings of this type, such as “The Big Black Book” (Gravitation by Charles Misner, Kip Thorne and John Wheeler, W H Freeman 1973) or “Weinberg” (Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity, Wiley 1972).

Naturally, this publication differs greatly from “text and no research” offerings, such as Ta-Pei Cheng’s Relativity, Gravitation and Cosmology: A Basic Introduction (OUP 2009) or Ray d’Inverno’s Introducing Einstein’s Relativity (OUP 1992). Any lecturer using these should consider adding “Padmanabhan” as an optional text to offer a wider view to students on what is happening in research today. In comparison with “Hartle” (Gravity: An Introduction to Einstein’s General Relativity, Addison-Wesley 2003), one cannot but admire that “Padmanabhan” does not send the reader to other texts to handle details of computations; what is mentioned is also derived and explained in depth. Of course, “Hartle” is often used in a “first” course on gravity but frankly how often is there a “second” course?

“Padmanabhan” is, as noted earlier, voluminous, making it an excellent value for money because it contains the material of three contemporary books for the price of one. So who should own a copy? Certainly for any good library covering physics, the question is really not if to buy but how many copies. I also highly recommend it to anyone interested in general relativity and related fields because it offers a modern update. Students who have already had a “first” course in the subject and are considering taking up research in this field will find in “Padmanabhan” a self-study text to deepen their understanding. If you are a bookworm like me, you must have it, because it is a great read from start to finish.

The post Gravitation: Foundations and Frontiers appeared first on CERN Courier.

]]>
Review Johann Rafelski reviews in 2011 Gravitation: Foundations and Frontiers. https://cerncourier.com/wp-content/uploads/2011/10/CCboo3_09_11.jpg
ALICE measures the shape of head-on lead–lead collisions https://cerncourier.com/a/alice-measures-the-shape-of-%e2%80%a8head-on-lead-lead-collisions/ https://cerncourier.com/a/alice-measures-the-shape-of-%e2%80%a8head-on-lead-lead-collisions/#respond Fri, 23 Sep 2011 10:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/alice-measures-the-shape-of-%e2%80%a8head-on-lead-lead-collisions/ One of the many surprises to have emerged from studies of heavy-ion collisions at Brookhaven's Relativistic Heavy Ion Collider (RHIC) and now at CERN's LHC concerns the extreme fluidity of the dense matter of the nuclear fireball produced.

The post ALICE measures the shape of head-on lead–lead collisions appeared first on CERN Courier.

]]>

One of the many surprises to have emerged from studies of heavy-ion collisions at Brookhaven’s Relativistic Heavy Ion Collider (RHIC) and now at CERN’s LHC concerns the extreme fluidity of the dense matter of the nuclear fireball produced. This has traditionally been studied experimentally by measuring the second harmonic of the azimuthal distribution of emitted particles with respect to the plane of nuclear impact. Known as v2, this observable is remarkably large, saturating expectations from hydrodynamic models, suggesting that the so-called quark–gluon plasma is one of the most perfect fluids in nature. Many assumed that the matter in the elliptical nuclear overlap region becomes smooth upon thermalization, rendering the Fourier coefficients other than v2 negligible in comparison.

However, recently it was proposed that collective flow also responds to pressure gradients from the “chunkiness” of matter distributed within the initial fireball in random event-by-event fluctuations. These nonuniformities lead to anisotropy patterns beyond smooth ellipses: triangular, quadrangular, and pentangular flow are now being studied by measurements of v3, v4, v5 and beyond at RHIC and the LHC.

The new measurements evoke comparisons with the vestigial cosmic microwave background (CMB) radiation, whose nonuniformities offer hints about the conditions at the universe’s earliest moments. Just as the CMB anisotropy is expressed by multipole moments, the azimuthal anisotropy of correlated hadron pairs from heavy-ion collisions can be represented by a spectrum of Fourier coefficients VnΔ. In pair-correlation measurements, a “trigger” particle is paired with associated particles in the event to form a distribution in relative azimuth Δφ. Over many events, a correlation function is produced, whose peaks and valleys describe the relative probability of pair coincidence.
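
Operationally, each coefficient is just the average of cos(nΔφ) over trigger–associated pairs. The toy sketch below – simulated events with assumed v2 and v3 values, not ALICE data or software – builds such a pair distribution and extracts the first five coefficients, which for a purely collective signal factorise into the product of the single-particle vn of the two particles.

```python
# Toy extraction of the pair-anisotropy coefficients V_nDelta = <cos(n * dphi)>.
# Events are simulated with assumed v2 and v3 modulations; this is not ALICE code.
import numpy as np

rng = np.random.default_rng(0)

def toy_event(n_particles=300, v2=0.05, v3=0.08):
    """Sample azimuthal angles from dN/dphi ~ 1 + 2 v2 cos2(phi-Psi2) + 2 v3 cos3(phi-Psi3)."""
    psi2, psi3 = rng.uniform(0.0, 2 * np.pi, 2)          # random event planes
    phi = rng.uniform(0.0, 2 * np.pi, 20 * n_particles)  # uniform candidates
    weight = 1 + 2 * v2 * np.cos(2 * (phi - psi2)) + 2 * v3 * np.cos(3 * (phi - psi3))
    keep = rng.uniform(0.0, weight.max(), phi.size) < weight   # accept-reject sampling
    return phi[keep][:n_particles]

n_max, vn_delta, n_pairs = 5, np.zeros(5), 0
for _ in range(200):                                     # 200 toy events
    phi = toy_event()
    # all distinct trigger-associated pairs in the event
    dphi = (phi[:, None] - phi[None, :])[~np.eye(phi.size, dtype=bool)]
    for n in range(1, n_max + 1):
        vn_delta[n - 1] += np.cos(n * dphi).sum()
    n_pairs += dphi.size
vn_delta /= n_pairs

print({f"V{n}Delta": round(v, 5) for n, v in enumerate(vn_delta, start=1)})
# A purely collective signal factorises, so the result should be close to
# v2^2 = 0.0025 and v3^2 = 0.0064, with the other harmonics consistent with zero.
```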

The left side of the figure shows a correlation function measured by ALICE for the 2% most central (i.e. head-on) lead–lead collisions at the LHC, where the particle pairs are separated in pseudorapidity to suppress “near-side” jet correlations near Δφ = 0. Even when this gap is imposed, a curious longitudinally-extended near-side “ridge” feature remains. Considerable theoretical effort has been devoted to explaining the source of this feature since its discovery at RHIC. In the correlation function in the figure, the first five VnΔ harmonics are superimposed. The right side of the figure shows the spectrum of the Fourier amplitudes. Evidently, in the most head-on collisions the dominant harmonic is not the second, elliptical, term but the triangular one, V3Δ; moreover, the Fourier coefficients here are significant up to n = 5. These results corroborate the idea that initial density fluctuations are non-negligible.

The intriguing double-peak structure evident on the “away side” (i.e. opposite to the trigger particle, at Δφ = π) was not observed in inclusive (i.e. not background-subtracted) correlation functions prior to the LHC. However, in the hope of isolating jet-like correlations, the v2 component was often subtracted as a non-jet background, leaving a residual double peak when the initial away-side peak was broad. This led to the interpretation of the structure as a coherent shock-wave response of the nuclear matter to energetic recoil partons, akin to a Mach cone in acoustics. However, the concepts of higher-order anisotropic flow are now gaining favour over theories that depend on conceptually independent Mach-cone and ridge explanations.

These measurements at the LHC are significant because they suggest a single consistent physical picture, vindicating relativistic viscous hydrodynamics as the most plausible explanation for the observed anisotropy. The same collective response to initial spatial anisotropy that causes elliptic flow also economically explains the puzzling “ridge” and “Mach cone” features, once event-by-event initial-state density fluctuations are considered. Moreover, measuring the higher Fourier harmonics offers tantalizing possibilities to improve understanding of the nuclear initial state and the transport properties of the nuclear matter. For example, the high-harmonic features at small angular scales are suppressed by the smoothing effects of shear viscosity. This constrains models incorporating a realistic initial state and hydrodynamic evolution, improving understanding of the deconfined phase of nuclear matter.

The post ALICE measures the shape of head-on lead–lead collisions appeared first on CERN Courier.

]]>
https://cerncourier.com/a/alice-measures-the-shape-of-%e2%80%a8head-on-lead-lead-collisions/feed/ 0 News One of the many surprises to have emerged from studies of heavy-ion collisions at Brookhaven's Relativistic Heavy Ion Collider (RHIC) and now at CERN's LHC concerns the extreme fluidity of the dense matter of the nuclear fireball produced. https://cerncourier.com/wp-content/uploads/2011/09/CCnew2_08_11.jpg
Introduction to the Theory of the Universe: Hot Big Bang Theory and Introduction to the Theory of the Universe: Cosmological Perturbations and Inflationary Theory https://cerncourier.com/a/introduction-to-the-theory-of-the-universe-hot-big-bang-theory-and-introduction-to-the-theory-of-the-universe-cosmological-perturbations-and-inflationary-theory/ Fri, 23 Sep 2011 08:54:35 +0000 https://preview-courier.web.cern.ch/?p=104846 John March-Russell reviews in 2011 Introduction to the Theory of the Universe: Hot Big Bang Theory and Introduction to the Theory of the Universe: Cosmological Perturbations and Inflationary Theory.

The post Introduction to the Theory of the Universe: Hot Big Bang Theory and Introduction to the Theory of the Universe: Cosmological Perturbations and Inflationary Theory appeared first on CERN Courier.

]]>
Introduction to the Theory of the Universe: Hot Big Bang Theory
By Dmitry S Gorbunov and Valery A Rubakov
World Scientific
Hardback: £103 $158
Paperback: £51 $78
E-book: $200

Introduction to the Theory of the Universe: Cosmological Perturbations and Inflationary Theory
By Dmitry S Gorbunov and Valery A Rubakov
World Scientific
Hardback: £101 $156
Paperback: £49 $76
E-book: $203


When a field is developing as fast as modern particle astrophysics and cosmology, and in as many exciting and unexpected ways, it is difficult for textbooks to keep up. The two-volume Introduction to the Theory of the Early Universe by Dmitry Gorbunov and Valery Rubakov is an excellent addition to the field of theoretical cosmology that goes a long way towards filling the need for a fully modern pedagogical text. Rubakov, one of the outstanding masters of beyond-the-Standard Model physics, and his younger collaborator give an introduction to almost the entire field over the course of the two books.

The first book covers the basic physics of the early universe, including thorough discussions of famous successes, such as big bang nucleosynthesis, as well as more speculative topics, such as theories of dark matter and its genesis, baryogenesis, phase transitions and soliton physics – all of which receive much more coverage than is usual. As the choice of topics indicates, the approach in this volume tends to be from the perspective of particle theory, usefully complementing some of the more astrophysically and observationally oriented texts.


The second volume focuses on cosmological perturbations – where the vast amounts of data coming from cosmic-microwave background and large-scale structure observations have transformed cosmology into a precision science – and the related theory of inflation, which is our best guess for the dynamics that generate the perturbations. Both volumes contain notably insightful treatments of many topics and there is a large variety of problems for the student distributed throughout the text, in addition to extensive appendices on background material.

Naturally, there are some missing topics, particularly on the observational side, for example a discussion of direct and indirect detection of dark matter or of weak gravitational lensing. There are also some infelicities of language that a good editor would have corrected. However, for those wanting a modern successor to The Early Universe by Edward Kolb and Michael Turner (Perseus 1994) or John Peacock’s Cosmological Physics (CUP 1999), either for study of an unfamiliar topic or to recommend to PhD students to prepare them for research, the two volumes of Theory of the Early Universe are a fine choice and an excellent alternative to Steven Weinberg’s more formal Cosmology (OUP 2008).

The post Introduction to the Theory of the Universe: Hot Big Bang Theory and Introduction to the Theory of the Universe: Cosmological Perturbations and Inflationary Theory appeared first on CERN Courier.

]]>
Review John March-Russell reviews in 2011 Introduction to the Theory of the Universe: Hot Big Bang Theory and Introduction to the Theory of the Universe: Cosmological Perturbations and Inflationary Theory. https://cerncourier.com/wp-content/uploads/2011/09/CCboo2_08_11.jpg
EPS-HEP 2011: the harvest begins https://cerncourier.com/a/eps-hep-2011-the-harvest-begins/ https://cerncourier.com/a/eps-hep-2011-the-harvest-begins/#respond Fri, 26 Aug 2011 10:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/eps-hep-2011-the-harvest-begins/ Impressive results, and so much more to come: this is the general feeling that more than 800 participants took home from the International Europhysics Conference on High-Energy Physics, EPS-HEP 2011, which was held in Grenoble on 21–27 July.

The post EPS-HEP 2011: the harvest begins appeared first on CERN Courier.

]]>

Impressive results, and so much more to come: this is the general feeling that more than 800 participants took home from the International Europhysics Conference on High-Energy Physics, EPS-HEP 2011, which was held in Grenoble on 21–27 July. After only a year of data-taking, the spectacular performance of the LHC and the amazingly fast data analysis by the experiments have raised current knowledge by a huge notch in searches for new physics.

Those who had hoped that the LHC would reveal supersymmetry early on may have been slightly disappointed, although each extended limit contributes to the correct picture and new physics is guaranteed, as many speakers reminded the audience. CERN’s director-general, Rolf Heuer, reinforced this point, stating that, for the Higgs boson in particular, either finding it or excluding it will be a great discovery.

On the search for the Higgs boson, both the CMS and ATLAS experiments at the LHC have observed small excesses of events in the WW and ZZ channels. Each one is statistically weak but, taken together, they become interesting, as each team independently sees a small excess in the low range for the Higgs mass. While this is exactly how a Standard Model Higgs would manifest itself, it is still far too early to tell (The LHC homes in on the Higgs).

Another big topic of conversation was the report by the CDF collaboration at Fermilab of the first measurement of the rare decay Bs→μμ, appearing possibly stronger than predicted. On the other hand, the CMS and LHCb collaborations at the LHC showed preliminary results, which when combined provide a limit in contradiction with the CDF result (CMS and LHCb pull together in search for rare decay). More data will soon clarify what is happening here.

The session on QCD showed great progress in the field, with updates on parton-distribution functions from the HERA experiments at DESY, as well as several results from the LHC experiments. These measurements are now challenging the precision of theoretical predictions, and will contribute towards refining the Monte Carlo simulations further. The experiments at Fermilab’s Tevatron and at the B-factories also presented improved and impressive limits in all directions in flavour physics, contributing to a clearer theoretical picture.

In neutrino physics, new results came from the T2K and MINOS experiments, giving the first indications of a sizeable mixing angle between the first and third neutrino generations (MINOS and T2K glimpse electron neutrinos). It was particularly moving to see how Japanese colleagues are recovering after the devastating earthquake and tsunami. Atsuto Suzuki, head of the KEK laboratory, thanked the particle-physics community for its extended support.

An important highlight of the conference was the award of the European Physical Society (EPS) High Energy and Particle Physics Prize to Sheldon Lee Glashow, John Iliopoulos and Luciano Maiani. They received this for their crucial contribution to the theory of flavour, currently embedded in the Standard Model of strong and electroweak interactions, which is still of utmost importance today.

With the first results from significant amounts of data at the LHC, the conference attracted a great deal of interest from the world’s press. A press conference was held on 25 July to announce the EPS 2011 high-energy physics prizes, with contributions on the latest results from the LHC, the European strategy for particle physics, and the latest advances in astroparticle physics in Europe.

• A more detailed report will appear in the October issue of the CERN Courier.

The post EPS-HEP 2011: the harvest begins appeared first on CERN Courier.

]]>
https://cerncourier.com/a/eps-hep-2011-the-harvest-begins/feed/ 0 News Impressive results, and so much more to come: this is the general feeling that more than 800 participants took home from the International Europhysics Conference on High-Energy Physics, EPS-HEP 2011, which was held in Grenoble on 21–27 July. https://cerncourier.com/wp-content/uploads/2011/08/CCnew1_06_11.jpg
Jülich welcomes the latest spin on physics https://cerncourier.com/a/jlich-welcomes-the-latest-spin-on-physics/ https://cerncourier.com/a/jlich-welcomes-the-latest-spin-on-physics/#respond Wed, 30 Mar 2011 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/jlich-welcomes-the-latest-spin-on-physics/ SPIN2010 provides a showcase for what’s new in spin physics.

The post Jülich welcomes the latest spin on physics appeared first on CERN Courier.

]]>

The international conference series on spin originated with the biannual Symposia on High Energy Spin Physics, launched in 1974 at Argonne, and the Symposia on Polarization Phenomena in Nuclear Physics, which started in 1960 at Basle and were held every five years. Joint meetings began in Osaka in 2000, with the latest, SPIN2010, being held at the Forschungszentrum Jülich, chaired by Hans Ströher and Frank Rathmann. The 19th International Spin Physics Symposium was organized by the Institut für Kernphysik (IKP), host of the 3 GeV Cooler Synchrotron, COSY – a unique facility for studying the interactions of polarized protons and deuterons with internal polarized targets. Research there is aimed at developing new techniques in spin manipulation for applications in spin physics, in particular for the new Facility for Antiproton and Ion Research (FAIR) at GSI, Darmstadt. The 250 or so talks presented at SPIN2010 covered all aspects of spin physics – from the latest results on transverse spin physics from around the world to spin-dependence at fusion reactors.

The conference started with a review of the theoretical aspects of spin physics by Ulf-G Meißner, director of the theory division at IKP, who focused on the challenges faced by the modern effective field-theory approach to few-body interactions at low and intermediate energies. Progress here has been tremendous, but old puzzles, such as the analysing power, Ay, in proton–deuteron scattering, refuse to go away. These were discussed in more detail in the plenary talks by Evgeny Epelbaum of Bochum and Johan Messchendorp of Groningen. In the second talk of the opening plenary session, Richard Milner of the Massachusetts Institute of Technology (MIT) highlighted the future of experimental spin physics.

It is fair to say that the classical issue of the helicity structure of protons has decided to take a rest, in the sense that rapid progress is unlikely. During the heyday of the contribution of the Efremov-Teryaev-Altarelli-Ross spin anomaly to the Ellis-Jaffe sum rule, it was tempting to attribute the European Muon Collaboration “spin crisis” to a relatively large number of polarized gluons in the proton. Andrea Bressan of Trieste reported on the most recent data from the COMPASS experiment at CERN, on the helicity structure function of protons and deuterons at small x, as well as the search for polarized gluons via hard deep inelastic scattering (DIS) reactions. Kieran Boyle of RIKEN and Brookhaven summarized the limitations on Δg from data from the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. The non-observation of Δg within the already tight error bars indicates that gluons refuse to carry the helicity of protons. Hence, the dominant part of the proton helicity is in the orbital momentum of partons.

The extraction of the relevant generalized parton distributions from deeply virtual Compton scattering was covered by Michael Düren of Gießen for the HERMES experiment at DESY, Andrea Ferrero of Saclay for COMPASS and Piotr Konczykowski for the CLAS experiment at Jefferson Lab. Despite impressive progress, there is still a long road ahead towards data that could offer a viable evaluation of the orbital momentum contribution to Ji’s sum rule. The lattice QCD results reviewed by Philipp Hägler of Munich suggest the presence of large orbital-angular momenta, Lu ≈ –Ld ≈ 0.36 (1/2), which tend to cancel each other.

The future of polarized DIS at electron–ion colliders was reviewed by Kurt Aulenbacher of Mainz. The many new developments range from a 50-fold increase in the current of polarized electron guns to an increase of 1000 in the rate of electron cooling.

Transversity was high on the agenda at SPIN2010. It is the last, unknown leading-twist structure function of the proton – without it the spin tomography of the proton would be forever incomplete. Since the late 1970s, everyone has known that QCD predicts the death of transverse spin physics at high energy. It took quite some time for the theory community to catch up with the seminal ideas of J P Ralston and D E Soper of some 30 years ago on the non-vanishing transversity signal in double-polarized Drell-Yan (DY) processes; it also took a while to accept the Sivers function, although the Collins function fell on fertile ground. Now, the future of transverse spin physics has never been brighter. During the symposium, news came of the positive assessment by CERN’s Super Proton Synchrotron Committee with respect to the continuation of COMPASS for several more years.


Both the Collins and Sivers effects have been observed beyond doubt by HERMES and COMPASS. With its renowned determination of the Collins function, the Belle experiment at KEK paved the way for the first determination of the transversity distribution in the proton, which turns out to be similar in shape and magnitude to the helicity density in the proton. Mauro Anselmino reviewed the phenomenology work at Turin, which was described in more detail by Mariaelena Boglione. Non-relativistically, the tensor/Gamow-Teller (transversity) and axial (helicity) currents are identical. The lattice QCD results reported by Hägler show that the Gamow-Teller charge of protons is indeed close to the axial charge.

The point that large transverse spin effects are a feature of valence quarks has been clearly demonstrated in single-polarized proton–proton collisions at RHIC by the PHENIX experiment, as Brookhaven’s Mickey Chiu reported. The principal implication for the PAX experiment at FAIR from the RHIC data, the Turin phenomenology and lattice QCD is that the theoretical expectations of large valence–valence transversity signals in DY processes with polarized antiprotons on polarized protons are robust.

The concern of the QCD community about a contribution of the orbital angular momentum of constituents to the total spin is nothing new to the radioactive-ion-beam community. Hideki Ueno of RIKEN reported on the progress in the production of spin-aligned and polarized radioactive-ion beams, where the orbital momentum of stripped nucleons shows itself in the spin of fragments.

The spin-physics community is entering a race to test the fundamental QCD prediction of the opposite sign of the Sivers effect in semi-inclusive DIS and DY on polarized protons. As Catarina Quintans from Lisbon explained, COMPASS is well poised to pursue this line of research. At the same time, ambitious plans to measure AN in DY experiments with transverse polarization at RHIC, which Elke-Caroline Aschenauer of Brookhaven presented, have involved scraping together a “yard-sale apparatus” for a proposal to be submitted this year. Paul Reimer of Argonne and Ming Liu of Los Alamos discussed the possibilities at the Fermilab Main Injector.

Following the Belle collaboration’s success with the Collins function, Martin Leitgab of Urbana-Champaign reported nice preliminary results on the interference fragmentation function. These cover a broad range of invariant masses in both arms of the experiment.

In his summary talk, Nikolai Nikolaev, of Jülich, raised the issue of the impact of hadronization on spin correlation. As Wolfgang Schäfer observed some time ago, the beta decay of open charm can be viewed as the final step of the hadronization of open charm. In the annihilation of e+e– to open charm, the helicities of heavy quarks are correlated and the beta decay of the open charm proceeds via the short-distance heavy quark; so there must be a product of the parity-violating components in the dilepton spectrum recorded in the two arms of an experiment. However, because the spinning D* mesons decay into spinless D mesons, the spin of the charmed quark is washed out and the parity-violating component of the lepton spectrum is obliterated.

The PAX experiment to polarize stored antiprotons at FAIR featured prominently during the meeting. Jülich’s Frank Rathmann reviewed the proposal and also reported on the spin-physics programme of the COSY-ANKE spectrometer. Important tests of the theories of spin filtering in polarized internal targets will be performed with protons at COSY, before the apparatus is moved to the Antiproton Decelerator at CERN – a unique place to study the spin filtering of antiprotons. Johann Haidenbauer of Jülich, Yury Uzikov of Dubna and Sergey Salnikov of the Budker Institute of Nuclear Physics reported on the Jülich- and Nijmegen-model predictions for the expected spin-filtering rate. There are large uncertainties with modelling the annihilation effects but the findings of substantial polarization of filtered antiprotons are encouraging. Bogdan Wojtsekhowski of Jefferson Lab came up with an interesting suggestion for the spin filtering of antiprotons using a high-pressure, polarized 3He target. This could drastically reduce the filtering time but the compatibility with the storing of the polarized antiprotons remains questionable.

Kent Paschke of Virginia gave a nice review on nucleon electromagnetic form factors, where there is still a controversy between the polarization transfer and the Rosenbluth separation of GE and GM. He and Richard Milner of MIT discussed future direct measurements of the likely culprit – the two-photon exchange contribution – at Jefferson Lab’s Hall B, at DESY with the OLYMPUS experiment at DORIS and at VEPP-III at Novosibirsk.

Spin experiments have always provided stringent tests of fundamental symmetries and there were several talks on the electric dipole moments (EDMs) of nucleons and light nuclei. Experiments with ultra-cold neutrons could eventually reach a sensitivity of dn ≈ 10–28 e⋅cm for the neutron EDM, while new ideas on electrostatic rings for protons could reach a still smaller dp ≈ 10–29 e⋅cm. The latter case, pushed strongly by the groups at Brookhaven and Jülich, presents enormous technological challenges. In the race for high precision versus high energy, such upper bounds on dp and dn would impose more stringent restrictions on new physics (supersymmetry etc.) than LHC experiments could provide.

Will nuclear polarization facilitate a solution to the energy problem? There is an old theoretical observation by Russell Kulsrud and colleagues that the fusion rate in tokamaks could substantially exceed the rate of depolarization of nuclear spins. While the spin dependence of the 3He–D and D–3H fusion reactions is known, the spin dependence of the D–D fusion reaction has never been measured. Kirill Grigoriev of PNPI Gatchina reported on the planned experiment on polarized D–D fusion. Even at energies in the 100 keV range, D–D reactions receive substantial contributions from higher partial waves and, besides possibly meeting the demands of fusion reactors, such data would provide stringent tests of few-body theories – in 2010, existing theoretical models predict quintet suppression factors that differ by nearly one order of magnitude.

• The proceedings will be published by IOP Publishing in Journal of Physics: Conference Series (online and open-access). The International Spin Physics Committee (www.spin-community.org) decided that the 20th Spin Physics Symposium will be held in Dubna in 2012.

The post Jülich welcomes the latest spin on physics appeared first on CERN Courier.

]]>
https://cerncourier.com/a/jlich-welcomes-the-latest-spin-on-physics/feed/ 0 Feature SPIN2010 provides a showcase for what’s new in spin physics. https://cerncourier.com/wp-content/uploads/2011/03/CCspi1_03_11-feature.jpg
Quantum Field Theory in Curved Spacetime: Quantized Fields and Gravity and Exact Space–Times in Einstein’s General Relativity https://cerncourier.com/a/quantum-field-theory-in-curved-spacetime-quantized-fields-and-gravity-and-exact-space-times-in-einsteins-general-relativity/ Tue, 25 Jan 2011 09:45:17 +0000 https://preview-courier.web.cern.ch/?p=104901 Massimo Giovannini reviews in 2011 Quantum Field Theory in Curved Spacetime: Quantized Fields and Gravity and Exact Space–Times in Einstein's General Relativity.

The post Quantum Field Theory in Curved Spacetime: Quantized Fields and Gravity and Exact Space–Times in Einstein’s General Relativity appeared first on CERN Courier.

]]>
Quantum Field Theory in Curved Spacetime: Quantized Fields and Gravity

By Leonard Parker and David Toms

Cambridge University Press

Hardback: £48 $83 E-book: $64

Exact Space–Times in Einstein’s General Relativity

By Jerry B Griffiths and Jiří Podolský

Cambridge University Press

Hardback: £80 $129 E-book: $100


Long ago, more or less immediately after Einstein’s formulation of general relativity, one of the dreams of physics was to understand why flat space–time is so special. Why are quantum mechanics and field theory formulated in flat space while their curved-space analogues are sometimes ill defined, at least conceptually? Can we hope, as Richard Feynman speculated, to quantize gravity in flat space–times and then construct all of the most complicated geometries as coherent states of gravitons?

The dreams of a more coherent picture of gravity and of gauge interactions in flat space are probably still there, but nowadays theorists invest a great deal of effort in understanding the subtleties of the quantization of fields, particles, strings and (mem)branes in geometries that are curved both in space and in time. Cambridge University Press was one of the first publishers to give voice to these attempts with the classic Quantum Fields in Curved Space by N D Birrell and P C W Davies, which has been well known to many students since its first edition in 1982. Leonard Parker (distinguished professor emeritus at the University of Wisconsin) and David Toms (reader in mathematical physics and statistics at the University of Newcastle) were both abundantly quoted in the book by Birrell and Davies and they have now published Quantum Field Theory in Curved Spacetime, also with Cambridge. While readers of Birrell and Davies will certainly like this new book, newcomers and students will appreciate the breadth and the style of a treatise written by two well known scientists who have dedicated their lives to understanding the treatment of quantum fields in a fixed gravitational background.

The book consists of seven chapters spread evenly between pure theory and applications. One of its features is the attention to the introductory aspects of a problem: students and teachers will like this aspect. The introductory chapter reminds the reader of various concepts arising in field theory in flat space–time, while the second chapter introduces the basic aspects of quantum field theory in curved backgrounds. After the central chapters dealing with useful applications (including the discussion of pair creation in black-hole space–times) the derivation of effective actions of fields of various spins is presented, always by emphasizing the curved-space aspects.


A rather appropriate companion volume is Exact Space-Times in Einstein’s General Relativity by Jerry Griffiths and Jiří Podolský, published by Cambridge in late 2009. Here, the interested reader is led through a review of the monumental work performed by general relativists over the past 50 years. The book also complements (and partially extends) the famous work by Dietrich Kramer, Hans Stephani, Malcolm MacCallum and Eduard Herlt, Exact Solutions of Einstein’s Field Equations, first published, again by Cambridge, in 1980.

Like its famous ancestor, the book by Griffiths and Podolský will probably be used by practitioners as a collection of exact solutions. However, this risk is mitigated to some extent by a presentation in the style of an advanced manual of general relativity (GR). The 22 chapters cover, in more than 500 pages, all of the most important solutions of GR. After two introductory chapters the reader is guided on a tour of the most important spatially homogeneous and spatially inhomogeneous, four-dimensional background geometries, starting from de Sitter and anti-de Sitter space–times but quickly moving to a whole zoo of geometries that are familiar to theorists but may sound rather arcane to scientists who are not directly working with GR.

Both books reviewed here can also be recommended because they tell of the achievements of a generation of theorists whose only instruments were, for a good part of their lives, a pad of paper and a few pencils.

The Shape of Inner Space: String Theory and the Geometry of the Universe’s Hidden Dimensions https://cerncourier.com/a/the-shape-of-inner-space-string-theory-and-the-geometry-of-the-universes-hidden-dimensions/ Tue, 30 Nov 2010 08:23:11 +0000 https://preview-courier.web.cern.ch/?p=104617 Gordon Fraser reviews in 2010 The Shape of Inner Space: String Theory and the Geometry of the Universe's Hidden Dimensions.

by Shing-Tung Yau and Steve Nadis, Basic Books. Hardback ISBN 9780465020232, $30.

Geometry is the architecture of space, explains Shing-Tung Yau at the start of this book. For most of history, this architecture used the rigid straight lines inherited from Pythagoras, Euclid and other Ancient Greeks. Then, René Descartes, Carl Friedrich Gauss and Bernhard Riemann in turn showed how it could become more flexible.

Whichever way it was constructed, geometry remained largely abstract until almost 100 years ago, when Albert Einstein’s theory of general relativity showed how matter influences the space around it. Ever since this pioneer synthesis, mathematicians have been exploring the possibilities of geometry for physics, and vice versa. One early milestone was the attempt by Theodor Kaluza and Oskar Klein to extend space from four to five dimensions. Although their attempt to extract new physics failed, it has never stopped physicists and mathematicians from exploring the potential of multidimensional spaces.

In the same way that Einstein’s work revolutionized the theory of gravity, so in the closing years of the 20th century string theory emerged as a new way of viewing elementary particles and their various interactions. Unlike Brian Greene’s The Elegant Universe, this book is not an introduction to the physics fundamentals of string theory. Instead, it is more concerned with the mathematics that string theory uses.

In 1950, a geometer named Eugenio Calabi put forward a bold new conjecture. More than a quarter of a century later, the conjecture was proved by Shing-Tung Yau, and the geometries concerned have since been known as Calabi–Yau manifolds. The two names have become so closely associated that Yau wryly points out how many people assume his first name is Calabi!

Following a description of such arcane mathematics is difficult, the proof even more so. However, it is dutifully done, in a way redolent of Simon Singh’s Fermat’s Last Theorem, which commendably made mathematics understandable without using equations. Some of Yau’s explanations are difficult to follow but a glossary of mathematical terms at the end of the book is a great help. The remainder of the book explains the potential of Calabi-Yau geometry as a framework for string theories – a subject that seems to have taken a place alongside rocket science as a perceived pinnacle of intellectual ingenuity.

While books with two co-authors are not unusual, this one is: one author writes a narrative in the first person, the other uses the third person. Nevertheless it works. For anyone interested in string theory it is a good book for understanding what has been achieved so far, and by whom (however, some notable contributions are missing). It is also a timely reminder of the latent power and elegance of mathematics. Calabi-Yau manifolds could help revolutionize our understanding of the world around us in the same way that Riemannian geometry did. However, while many great minds have chipped away at the problem, the ultimate latter-day Einstein has yet to emerge.

The window opens on physics at 7 TeV https://cerncourier.com/a/the-window-opens-on-physics-at-7-tev/ https://cerncourier.com/a/the-window-opens-on-physics-at-7-tev/#respond Tue, 26 Oct 2010 10:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/the-window-opens-on-physics-at-7-tev/ After almost six months of operation in a new energy region, the experiments at the LHC are yielding papers on physics at 7 TeV in the centre-of-mass. They include results aired at the International Conference on High-Energy Physics in Paris in July.

After almost six months of operation in a new energy region, the experiments at the LHC are yielding papers on physics at 7 TeV in the centre-of-mass. They include results aired at the International Conference on High-Energy Physics in Paris in July.

At the end of September, the CMS collaboration announced the observation of intriguing correlations between particles produced in proton–proton collisions at 7 TeV. It measured two-particle angular correlations in collisions at 0.9, 2.36 and 7 TeV – the three centre-of-mass energies at which the LHC has run. At 7 TeV, a pronounced structure emerges in the two-dimensional correlation function for particle pairs with transverse momenta of 1–3 GeV/c in high-multiplicity events containing at least 100 charged particles. The ridge-like structure occurs at Δφ (the difference in azimuthal angle) near zero and spans a pseudorapidity range of 2.0 < |Δη| < 4.8 (CMS collaboration 2010). This implies that some pairs of particles separated by a large longitudinal angle (which is what Δη measures) are nonetheless closely correlated in azimuthal angle. The effect bears some similarity to those already seen in heavy-ion collisions at the Relativistic Heavy Ion Collider at the Brookhaven National Laboratory, which have been linked to the formation of hot, dense matter in the collisions. However, as the CMS collaboration stresses, there are several potential explanations.
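
As an illustration of what a "two-particle angular correlation" means in practice, the sketch below histograms the (|Δη|, |Δφ|) separations of all particle pairs in a single toy event. It is only a schematic of the signal-pair distribution: the published analysis also divides by a mixed-event background, applies the multiplicity and transverse-momentum selections quoted above and averages over many events; the particle lists here are randomly generated.

import itertools
import numpy as np

def pair_separations(etas, phis):
    """Return (|delta_eta|, |delta_phi|) for every particle pair in one event,
    with delta_phi folded into [0, pi]."""
    out = []
    for (eta1, phi1), (eta2, phi2) in itertools.combinations(zip(etas, phis), 2):
        dphi = abs(phi1 - phi2)
        if dphi > np.pi:
            dphi = 2.0 * np.pi - dphi
        out.append((abs(eta1 - eta2), dphi))
    return np.array(out)

# A toy "event" of 120 uncorrelated particles: its pair histogram shows no ridge.
rng = np.random.default_rng(0)
etas = rng.uniform(-2.4, 2.4, size=120)
phis = rng.uniform(-np.pi, np.pi, size=120)
counts, deta_edges, dphi_edges = np.histogram2d(*pair_separations(etas, phis).T,
                                                bins=(24, 12))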

These developments will be of interest to the ALICE collaboration, whose detector is optimized for the study of heavy-ion collisions at the LHC, the first period of which is scheduled to begin in November. In the meantime, one of the interesting results from ALICE in proton–proton collisions concerns the ratio of the yields of antiprotons to protons at both 0.9 TeV and 7 TeV. The measurement relates to the question of whether baryon number can transfer from the incoming beams to particles emitted transversely (at mid-rapidity). Any excess of protons over antiprotons would indicate such a transfer, which would be related to the slowing down of the incident proton. The results show that the ratio rises from about 0.95 at 0.9 TeV to close to 1 at 7 TeV and is independent of both rapidity and transverse momentum (ALICE collaboration 2010). These findings are consistent with the conventional model of baryon-number transport, setting stringent limits on any additional contributions.

In the search for new physics, the ATLAS experiment recently set new limits on the mass of excited quarks by looking at the mass distributions of two-jet events, or dijets. Now, the collaboration has also produced the first measurements of cross-sections for the production of jets in proton–proton collisions at 7 TeV. It has measured inclusive single-jet differential cross-sections as functions of the jet’s transverse momentum and rapidity, and dijet cross-sections as functions of the dijet mass and an angular variable χ. The results agree with expectations from next-to-leading-order QCD, so providing a validation of the theory in a new kinematic regime.
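
For orientation, the dijet angular variable commonly used in such analyses (the exact conventions of the ATLAS paper are not spelled out in this report, so take this as the generic textbook definition) is

\chi = e^{|y_1 - y_2|} = \frac{1 + |\cos\theta^*|}{1 - |\cos\theta^*|},

where y₁ and y₂ are the rapidities of the two leading jets and θ* is the scattering angle in the dijet centre-of-mass frame. Rutherford-like t-channel gluon exchange in QCD gives a distribution that is roughly flat in χ, whereas many new-physics scenarios would produce an excess of events at low χ, i.e. at large scattering angles.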

The LHCb collaboration is also measuring cross-sections in the LHC’s new energy region. With its focus on the physics of b quarks, the experiment has looked, for example, at the decays of b hadrons into final states containing a D0 meson and a muon to measure the bb production cross-section at 7 TeV (LHCb collaboration 2010). While some earlier results on the production of b-flavoured hadrons at 1.8 TeV at the Tevatron appeared to be higher than theoretical predictions, more recent measurements there at 1.96 TeV by the CDF experiment were consistent with theory. Now, LHCb’s results have extended the measurements to a much higher centre-of-mass energy – and again show consistency with theory, this time at 7 TeV. Such measurements of particle yields are vital to LHCb in assessing the sensitivity for studying fundamental parameters, for example, in CP violation.

Physics buzz in Paris https://cerncourier.com/a/physics-buzz-in-paris/ https://cerncourier.com/a/physics-buzz-in-paris/#respond Tue, 26 Oct 2010 10:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/physics-buzz-in-paris/ Pushing back frontiers in particle physics at ICHEP 2010.

Sixty years ago, particle physics was in its infancy. In 1950 Cecil Powell received the Nobel Prize in Physics for the emulsion technique and the discovery of the charged pions, and an experiment at Berkeley revealed the first evidence for the neutral version. In New York, the first in a new series of conferences organized by Robert Marshak took place at the University of Rochester with 50 participants. The “Rochester conference” was to evolve into the International Conference on High-Energy Physics (ICHEP) and this year more than 1100 physicists gathered in Paris for the 35th meeting in the series.

ICHEP’s first visit to the French capital was in 1982. CERN’s Super Proton Synchrotron had just begun to operate as a proton–antiproton collider and the UA2 collaboration reported on the first observations of back-to-back jets with high transverse momentum. This year, as ICHEP returned to Paris, jets in a new high-energy region were again a highlight. This time they were from the LHC, one undoubted “star of the show”, together with the president of France, Nicolas Sarkozy.

Given the growth in the field since the first Rochester conference, this report can only touch on some of the highlights of ICHEP 2010, which took place on 22–28 July at the Palais des Congrès and followed the standard format of three days of parallel sessions, a rest day (Sunday) and then three days of plenary sessions. The evening of 27 July saw Parisians and tourists well outnumber physicists at the “Nuit des particules”, a public event held at the Grand Rex theatre (see box). On the rest day, in addition to various tours, there was the opportunity to watch the final stage of the 2010 Tour de France as it took over the heart of Paris.

A tour of LHC physics

The LHC project has had similarities to the famous cycle race – participants from around the world undertaking a long journey, with highs and lows en route to a thrilling climax. In the first of the plenary sessions, Steve Myers, director for accelerators and technology at CERN, looked back over more than a year of repair and consolidation work that led to the LHC’s successful restart with first collisions in November 2009. With the collider running at 3.5 TeV per beam since March this year, the goal is to collect 1 fb⁻¹ of integrated luminosity with proton collisions before further consolidation work takes place in 2012 to allow the machine to run at its full energy of 7 TeV per beam in 2013. The long-term goal is to reach 3000 fb⁻¹ by 2030. This will require peak luminosities of 5 × 10³⁴ cm⁻² s⁻¹ in 2021–2030, for which studies are already under way, for example on the use of crab cavities.
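
To translate these luminosity figures into event counts (a generic back-of-the-envelope conversion, not numbers from the talk): the expected yield for a process of cross-section σ is

N = \sigma \int L\,dt, \qquad 1\ \mathrm{fb}^{-1} = 10^{39}\ \mathrm{cm}^{-2},

so the first 1 fb⁻¹ corresponds to about 1000 events for every picobarn of cross-section, and a peak luminosity of 10³⁴ cm⁻² s⁻¹ sustained over an idealized 10⁷-second running year delivers roughly 100 fb⁻¹.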

The proposed long-term schedule envisages one-year shutdowns for consolidation in 2012, 2016 and 2020, with shorter periods of maintenance in December/January in the intervening years, and 6–8 month shutdowns every other year after 2020. Heavy-ion runs are planned for each November when the LHC is running, starting this year. Myers also provided glimpses of ideas for a 16.5 TeV version of the LHC that would require 20 T dipole magnets based on Nb₃Sn, Nb₃Al and high-temperature superconductors.

What many at the conference were waiting for were the reports from the LHC experiments on the first collision data, presented both in dedicated parallel sessions and by the spokespersons on the first plenary day. Common features of these talks revealed just how well prepared the experiments were, despite the unprecedented scale and complexity of the detectors. The first data – much of it collected only days before the conference as the LHC ramped up in luminosity – demonstrated the excellent performance of the detectors, the high efficiency of the triggers and the swift distribution of data via the worldwide computing Grid. All of these factors combined to allow the four large experiments to rediscover the physics of the Standard Model and make the first measurements of cross-sections in the new energy regime of 7 TeV in the centre-of-mass.

The ATLAS and CMS collaborations revealed some of their first candidate events with top quarks – previously observed only at Fermilab’s Tevatron. They also displayed examples of the more copiously produced W and Z bosons, seen for the first time in proton–proton collisions, and presented cross-sections that are in good agreement with measurements at lower energies. Lighter particles provided the means to demonstrate the precision of the reconstruction of secondary vertices, shown off in remarkable maps of the material in the inner detectors.

Both ATLAS and CMS have observed dijet events with masses higher than the Tevatron’s centre-of-mass energy. The first measurements of inclusive jet cross-sections in both experiments show good agreement with next-to-leading-order QCD (The window opens on physics at 7 TeV). In searches for new physics, ATLAS has provided a new best limit on excited quarks, which are now excluded in the mass region 0.4 < M < 1.29 TeV at 95% CL. For its part, by collecting data in the periods between collisions at the LHC, CMS derived limits on the existence of the “stopped gluino”, showing that it cannot exist with lifetimes of longer than 75 ns.

The LHCb collaboration reported clear measurements of several rare decays of B mesons and cross-sections for the production of open charm, the J/ψ and bb states. With the first 100 pb⁻¹ of data, the experiment should become competitive with Belle at KEK and DØ at Fermilab, with discoveries in prospect once 1 fb⁻¹ is achieved.

The ALICE experiment, which is optimized for heavy-ion collisions, is collecting proton–proton collision data for comparison with later heavy-ion measurements and to evaluate the performance of the detectors. The collaboration has final results on charged-particle multiplicity distributions at 7 TeV, as well as at 2.36 TeV and 0.9 TeV in the centre-of-mass. These multiplicities are significantly higher than Monte Carlo predictions, as are those in similar measurements from CMS. ALICE also has interesting measurements of the antiproton-to-proton ratio.

While the LHC heads towards its first 1 fb⁻¹, the Tevatron has already delivered some 9 fb⁻¹, with 6.7 fb⁻¹ analysed by the time of the conference. One eagerly anticipated highlight was the announcement of a new limit on the Higgs mass from a combined analysis of the CDF and DØ experiments. This excludes a Higgs with a mass between 158 and 175 GeV/c², thus eliminating about 25% of the region favoured by the analysis of data from the Large Electron–Positron collider and elsewhere. As time goes by, there is little hiding place for the long-sought particle. In other Higgs-related searches, the biggest effect is a 2σ discrepancy found by CDF in the decay of the Higgs to bb in the minimal supersymmetric extension of the Standard Model.

Stressing the Standard Model

The strongest hint at the Tevatron for physics beyond the Standard Model comes from measurements of the decays of B mesons. The DØ experiment finds evidence for an anomalous asymmetry in the production of muons of the same sign in the semi-leptonic decays of Bs mesons, which is greater than the asymmetry predicted by CP violation in the B system in the Standard Model by about 3.2σ. While new results from DØ and CDF for the decay Bs → J/ψ φ show a better consistency with the Standard Model, they are not inconsistent with the measurement of the like-sign dimuon asymmetry Absl.

Experiments at the HERA collider at DESY, and at the B factories at KEK and SLAC, have also searched extensively for indications of new physics, and although they have squeezed the Standard Model in every way possible it generally remains robust. Of course, the searches extend beyond the particle colliders and factories, to fixed-target experiments and detectors far from accelerator laboratories. The Super-Kamiokande experiment, now in its third incarnation, is known for its discovery of neutrino oscillations, which is the clearest indication yet of physics beyond the Standard Model, but it also searches for signs of proton decay. It has now accumulated data corresponding to 173 kilotonne-years and, with no evidence for the proton’s demise, it sets the proton’s lifetime at greater than 1 × 10³⁴ years for the decay to e⁺π⁰ and greater than 2.3 × 10³⁴ years for νK⁺.
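
To see roughly how an exposure quoted in kilotonne-years turns into a lifetime limit of this order, here is an illustrative estimate (the detection efficiency is assumed, not taken from the Super-Kamiokande analysis): a kilotonne of water contains about 3.3 × 10³² protons (ten per H₂O molecule), so 173 kilotonne-years corresponds to roughly 5.8 × 10³⁴ proton-years of exposure. With no candidate events, the 90%-confidence limit is approximately

\tau/B \;>\; \frac{N_p\,T\,\varepsilon}{2.3} \;\approx\; \frac{5.8\times10^{34}\ \text{proton-years}\times 0.4}{2.3} \;\approx\; 1\times10^{34}\ \text{years},

taking a detection efficiency ε of order 40% for the e⁺π⁰ channel.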

The first clear evidence for neutrino oscillations came from studies of neutrinos from the Sun and those created by cosmic rays in the upper atmosphere, but now it is the turn of the long-baseline experiments based at accelerators and nuclear reactors to bring the field into sharper focus. At accelerators a new era is opening with the first events in the Tokai-to-Kamioka (T2K) experiment, as well as the observation of the first candidate ντ in the OPERA detector at the Gran Sasso National Laboratory, using beams of νμ from the Japan Proton Accelerator Research Complex and CERN respectively.

While T2K aims towards production of the world’s highest-intensity neutrino beam, the honour currently lies with Fermilab’s Neutrinos at the Main Injector (NuMI) beam, which delivers νμ to the MINOS experiment, with a far detector 735 km away in the Soudan Mine. MINOS has now analysed data for 7.2 × 10²⁰ protons on target (POT) and observes 1986 events where 2451 would be expected without oscillation. The result is the world’s best measurement of |Δm²|, with a value of (2.35 +0.11/−0.08) × 10⁻³ eV², and sin²2θ > 0.91 (90% CL). MINOS also finds no evidence for oscillations to sterile neutrinos and puts limits on θ₁₃. Recently, the experiment has been running with an antineutrino beam, and this has hinted at differences between the oscillations of antineutrinos and those of neutrinos. With antineutrinos, the collaboration measures |Δm²| = (3.36 +0.45/−0.40) × 10⁻³ eV² and sin²2θ = 0.86 ± 0.11. As yet the statistics are low, with only 1.7 × 10²⁰ POT for the antineutrinos, but the experiment can quickly improve this with more data.
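
The quoted parameters enter the standard two-flavour survival probability P(νμ→νμ) = 1 − sin²2θ sin²(1.27 Δm²L/E). A minimal numerical sketch, with the 735 km MINOS baseline and a representative beam energy of 3 GeV assumed purely for illustration:

import math

def numu_survival(delta_m2_ev2, sin2_2theta, L_km, E_GeV):
    """Two-flavour survival probability P = 1 - sin^2(2theta) sin^2(1.267 dm^2 L / E)."""
    return 1.0 - sin2_2theta * math.sin(1.267 * delta_m2_ev2 * L_km / E_GeV) ** 2

# Neutrino best-fit |dm^2| from the text, with maximal mixing assumed (the measurement gives > 0.91).
print(numu_survival(2.35e-3, 1.0, 735.0, 3.0))   # ~0.56
# Antineutrino best-fit values from the text, same assumed energy.
print(numu_survival(3.36e-3, 0.86, 735.0, 3.0))  # ~0.36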

The search for direct evidence of dark-matter particles, which by definition lie outside the Standard Model, continues to produce tantalizing yet inconclusive results. Experiments on Earth search for the collisions of weakly interacting massive particles (WIMPs) in detectors where background suppression is even more challenging than in neutrino experiments. Recent results include those from the CDMS II and EDELWEISS II experiments, in the Soudan Mine and the Modane Underground Laboratory in the Fréjus Tunnel, respectively. CDMS II presented its final results in November 2009, following a blind analysis. After a timing cut, the analysis of 194 kg-days of data yields two events, with an expected background of 0.8 ± 0.1 (stat.) ± 0.2 (syst.) events. The collaboration concludes that this “cannot be interpreted as significant evidence for WIMP interactions”. EDELWEISS II has new, updated results, which now cover an effective 322 kg-days. They have three events near threshold and one with a recoil energy of 175 keV, giving a limit on the cross-section of 5.0 × 10⁻⁸ pb for a WIMP mass of 80 GeV (at 90% CL).

Higher energies, in nature and in the lab

Looking to the skies provides a window on nature’s own laboratory, the cosmos. The annihilation of dark matter in the galaxy could lead to detectable effects, but the jury is still out on the positron excess observed by the PAMELA experiment in space. Back on Earth, the Pierre Auger Observatory and the High-Resolution Fly’s Eye (HiRes) experiment, in the southern and northern hemispheres respectively, detect cosmic rays with energies up to 10²⁰ eV (100 EeV) and more. Both have evidence for the suppression of the flux at the highest energies by the Greisen–Zatsepin–Kuzmin (GZK) cut-off. There is also evidence for a change in composition towards heavier nuclei at higher energies, although this may also be related to a change in cross-sections at the highest energies. The correlation of the arrival directions of cosmic rays at energies of 55 EeV or more with active galactic nuclei, first reported by the Pierre Auger collaboration in 2007, has weakened with further data, falling from the earlier value of (69 +11/−13)% to stabilize at around (38 +7/−6)%, now with more than 50 events.

Cosmic neutrinos provide another possibility for identifying sources of cosmic rays. The ANTARES water Cherenkov telescope in the Mediterranean Sea now has a sky map of its first 1000 neutrinos and puts upper limits on point sources and on the diffuse astrophysical neutrino flux. IceCube, with its Cherenkov telescope in ice at the South Pole, also continues to push down the upper limits on the diffuse flux with measurements that begin to constrain theoretical models.

In the laboratory, the desire to push further the exploration of the high-energy frontier continues to drive R&D into accelerator and detector techniques. The world community is already deeply involved in studies for a future linear e⁺e⁻ collider. The effort behind the International Linear Collider, designed to reach 500 GeV in the centre-of-mass, is relatively mature, while work on the more novel two-beam concept for a Compact Linear Collider to reach 3 TeV is close to finishing a feasibility study. Other ideas for machines further into the future include the concept of a muon collider, which would require muon cooling to create tight beams but could provide collisions at 4 TeV in the centre-of-mass. Reaching much higher energies will require new technologies to overcome the electrical breakdown limits in RF cavities. Dielectric structures offer one possibility, with studies showing breakdown limits that approach 1 GV/m. Beyond that, plasma-based accelerators hold the promise of even greater gradients, as high as 50 GV/m.
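
A crude sense of what such gradients buy (ignoring focusing, staging, drive-beam or laser infrastructure and everything else except the accelerating gradient itself, and taking a representative RF gradient of about 30 MV/m as an assumption): the active length needed for an energy gain ΔE is

L \approx \frac{\Delta E}{G}: \qquad \frac{250\ \mathrm{GeV}}{30\ \mathrm{MV/m}} \approx 8\ \mathrm{km}, \qquad \frac{250\ \mathrm{GeV}}{50\ \mathrm{GV/m}} \approx 5\ \mathrm{m},

which is why plasma-based schemes are so attractive in principle, even though beam quality, staging and efficiency remain formidable challenges.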

Particle physics has certainly moved on since the first Rochester conference; maybe a future ICHEP will see results from a muon collider or the first plasma-wave accelerator. For now, ICHEP 2010 proved a memorable event, not least as the first international conference to present results from collisions at the LHC. Its success was thanks to the hard work of the French particle-physics community, and in particular the members of the local organizing committee, led by Guy Wormser of LAL/Orsay. Now, the international community can look forward to the next ICHEP, which will be in Melbourne in 2012.

• Proceedings of ICHEP 2010 are published online in the Proceedings of Science, see http://pos.sissa.it.

Relatività Generale e Teoria della Gravitazione https://cerncourier.com/a/relativita-generale-e-teoria-della-gravitazione/ Tue, 28 Sep 2010 09:57:41 +0000 https://preview-courier.web.cern.ch/?p=104917 Diego Casadei reviews in 2010 Relatività Generale e Teoria della Gravitazione.

by Maurizio Gasperini, Springer. Paperback ISBN 9788847014206, €25.72 (£19.99).

Maurizio Gasperini’s book is a textbook on the theory of general relativity (GR), but it does not present Einstein’s theory as the final goal of a course. Rather, GR is seen here as an intermediate step towards more complex theories, as already becomes clear from the table of contents. In addition to the standard material on Riemannian geometry, which always accompanies the development of the physical content of GR, and on the solutions of the Einstein equations for the case of a weak field (including a treatment of gravitational waves) and for the case of a homogeneous and isotropic system (including black holes), there are also chapters on gauge symmetries (local and global), supersymmetry and supergravity.

Given the purpose of the book, it is not surprising to find a treatment of the formalism of tetrads (vierbeins), forms and duality relations, which constitute the bridge between the Riemannian manifold describing space–time and gravity and the flat tangent space with Minkowski metric. For the same reason, the author considers the general case in which the torsion of the curved space–time does not vanish (as it does in Einstein’s GR), which is needed for the theory of the gravitino (i.e. of a local supersymmetry between fermions and bosons).

Other nice aspects of the book include the analogy between the Maxwell equations in a curved Riemannian manifold and in an optical medium, the computation of the perihelion precession of Mercury in the context of both the special and general theories of relativity, and several exercises whose solutions are a valuable ingredient of the book. Given the relatively small number of pages (fewer than 300), I can understand why a few stimulating topics have been omitted (“gravitomagnetism” or Lense–Thirring precession, Hawking radiation and a discussion of the topological aspects left unconstrained by GR), but I sincerely hope that they can be included in a future edition.

Special mention should be made of the last four chapters, which deal with the Kasner solution of the Einstein equations in a homogeneous but anisotropic medium, with the bridge between the curved Riemannian manifold and the flat tangent space, with quantum theory in a curved space–time and with supersymmetry and supergravity. These make the book different from most texts of its kind. In conclusion, I warmly recommend reading this book and hope that an English translation can help it reach a wider audience.

QCD scattering: from DGLAP to BFKL https://cerncourier.com/a/qcd-scattering-from-dglap-to-bfkl/ https://cerncourier.com/a/qcd-scattering-from-dglap-to-bfkl/#respond Tue, 20 Jul 2010 10:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/qcd-scattering-from-dglap-to-bfkl/ A look at some of the work behind two well known equations.

Most particle physicists will be familiar with two famous abbreviations, DGLAP and BFKL, which are synonymous with calculations of high-energy, strong-interaction scattering processes, in particular nowadays at HERA, the Tevatron and, most recently, the LHC. The Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) equation and the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation together form the basis of current understanding of high-energy scattering in quantum chromodynamics (QCD), the theory of strong interactions. The celebration this year of the 70th birthday of Lev Lipatov, whose name appears as the common factor, provides a good occasion to look back at some of the work that led to the two equations and its roots in the theoretical particle physics of the 1960s.

Quantum field theory (QFT) lies at the heart of QCD. Fifty years ago, however, theoreticians were generally disappointed in their attempts to apply QFT to strong interactions. They began to develop methods to circumvent traditional QFT by studying the unitarity and analyticity constraints on scattering amplitudes, and extending Tullio Regge’s ideas on complex angular momenta to relativistic theory. It was around this time that the group in Leningrad led by Vladimir Gribov, which included Lipatov, began to take a lead in these studies.

Quantum electrodynamics (QED) provided the theoretical laboratory to check the new ideas of particle “reggeization”. In several pioneering papers Gribov, Lipatov and co-authors developed the leading-logarithm approximation for processes at high energies; this later played a key role in perturbative QCD for strong interactions (Gorshkov et al. 1966). Using QED as an example, they demonstrated that QFT leads to a total cross-section that does not decrease with energy – the first example of what is known as Pomeron exchange. Moreover, they checked and confirmed the main features of Reggeon field theory in the particular case of QED.

By the end of the 1960s, experiments at SLAC had revealed Bjorken scaling in deep inelastic lepton-hadron scattering. This led Richard Feynman and James Bjorken to introduce nucleon constituents – partons – that later turned out to be nothing other than quarks, antiquarks and gluons. Gribov became interested in finding out if Bjorken scaling could be reproduced in QFT. As examples he studied both a fermion theory with a pseudoscalar coupling and QED, in the kinematic conditions where there is a large momentum transfer, Q², to the fermion. The task was to select and sum all leading Feynman diagrams that give rise to the logarithmically enhanced (α log Q²)ⁿ contributions to the cross-section, at fixed values of the Bjorken variable x = Q²/(s + Q²) between zero and unity, where s is the invariant energy of the reaction.

At some point Lipatov joined Gribov in the project and together they studied not only deep inelastic scattering but also the inclusive annihilation of e+e to a particle, h, in two field-theoretical models, one of which was QED. They showed that in a renormalizable QFT, the structure functions must violate Bjorken scaling (Gribov and Lipatov 1971). They obtained relations between structure functions that describe deep inelastic scattering and those that describe jet fragmentation in e+e annihilation – the Gribov-Lipatov reciprocity relations. It is interesting to note that this work appeared at a time before experiments had either detected any violation in Bjorken scaling or observed any rise with momentum transfer of the transverse momenta in “hard” hadronic reactions, as would follow from a renormalizable field theory. This paradox led to continuous and sometimes heated discussions in the new Theory Division of the Leningrad (now Petersburg) Nuclear Physics Institute (PNPI) in Gatchina.

Somewhat later, Lipatov reformulated the Gribov-Lipatov results for QED in the form of the evolution equations for parton densities (Lipatov 1974). This differed from the real thing, QCD, only by colour factors and by the absence of the gluon-to-gluon-splitting kernel, which was later provided independently by Yuri Dokshitzer at PNPI, and by Guido Altarelli and Giorgio Parisi, then at Ecole Normale Superieure and IHES, Bures-sur-Yvette, respectively (Dokshitzer 1977, Altarelli and Parisi 1977). Today the Gribov-Lipatov-Dokshitzer-Altarelli-Parisi (DGLAP) evolution equations are the basis for all of the phenomenological approaches that are used to describe hadron interactions at short distances.
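
In the notation most commonly used today (this is the generic textbook form of the equation, not a quotation from any of the original papers), the DGLAP evolution reads

\frac{\partial q_i(x,Q^2)}{\partial \ln Q^2} \;=\; \frac{\alpha_s(Q^2)}{2\pi} \sum_j \int_x^1 \frac{dz}{z}\, P_{ij}(z)\, q_j\!\left(\frac{x}{z},\,Q^2\right),

where the q_j(x, Q²) are the quark, antiquark and gluon densities and the splitting functions P_ij(z) describe the probability of finding parton i inside parton j with a fraction z of its momentum; it is the gluon-to-gluon kernel P_gg that has no counterpart in the QED-based analysis.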

The more general evolution equation for quasi-partonic operators that Lipatov and his co-authors obtained allowed them to consider more complicated reactions, including high-twist operators and polarization phenomena in hard hadronic processes.

Lipatov went on to show that the gauge vector boson in Yang-Mills theory is “reggeized”: with radiative corrections included, the vector boson becomes a moving pole in the complex angular momentum plane near j=1. In QCD, however, this pole is not directly observable by itself because it corresponds to colour exchange. More meaningful is an exchange of two or more reggeized gluons, which leads to “colourless” exchange in the t-channel, either with vacuum quantum numbers (when it is called a Pomeron) or non-vacuum ones (when it is called an “odderon”). Lipatov and his collaborators showed that the Pomeron corresponds not to a pole, but to a cut in the plane of complex angular momentum.

A different approach

The case of high-energy scattering required a different approach. In this case, in contrast to the DGLAP approach – which sums up higher-order αs contributions enhanced by the logarithm of virtuality, ln Q² – contributions enhanced by the logarithm of energy, ln s, or by the logarithm of the small momentum fraction, x, carried by gluons, become important. The leading-log contributions of the type (αs ln(1/x))ⁿ are summed up by the famous Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation (Kuraev et al. 1977, Balitsky and Lipatov 1978). Compared with DGLAP, this is a more complicated problem because the BFKL equation actually includes contributions from operators of higher twists.

In its general form the BFKL equation describes not only the high-energy behaviour of cross-sections but also the amplitudes at non-zero momentum transfer. Lipatov discovered beautiful symmetries in this equation, which enabled him to find solutions in terms of the conformal-symmetric eigenfunctions. This completed the construction of the “bare Pomeron in QCD”, a fundamental entity of high-energy physics (Lipatov 1986). An interesting new property of this bare Pomeron (which was not known in the old reggeon field theory) is the diffusion of the emitted particles in ln kt space.
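
The net effect of re-summing the (αs ln(1/x))ⁿ terms can be summarized by the leading-order result for the intercept of this bare Pomeron (the standard textbook expression, quoted here for fixed coupling):

xg(x,Q^2) \;\sim\; x^{-\omega_0}, \qquad \omega_0 \;=\; \frac{4 N_c \ln 2}{\pi}\,\alpha_s \;\simeq\; 2.65\,\alpha_s,

so that for αs ≈ 0.2 the gluon density would rise roughly as x^(−0.5) at small x; the large next-to-leading-order corrections discussed below reduce this growth substantially.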

Later, in the 1990s, Lipatov together with Victor Fadin calculated the next-to-leading-order corrections to the BFKL equation, obtaining the “BFKL Pomeron in the next-to-leading approximation” (Fadin and Lipatov 1998). Independently, this was also done by Marcello Ciafaloni and Gianni Camici in Florence (Ciafaloni and Camici 1998). Lipatov also studied higher-order amplitudes with an arbitrary number of gluons exchanged in the t-channel and, in particular, described odderon exchange in perturbative QCD. The significance of this work was, however, much greater. It led to the discovery of the connection between high-energy scattering and the exactly solvable two-dimensional field-theoretical models (Lipatov 1994).

More recently Lipatov has taken these ideas into a hot new field in theoretical physics: the anti-de Sitter/conformal field theory correspondence (AdS/CFT) – a hypothesis put forward by Juan Maldacena in 1997. This states that there is a correspondence – a duality – between the description of the maximally supersymmetric N=4 modification of QCD on the standard field-theory side and, on the “gravity” side, the spectrum of a string moving in a peculiar curved anti-de Sitter background – a seemingly unrelated problem. However, Lipatov’s experience and deep understanding of re-summed perturbation theory have enabled him to move quickly into this new territory, where he has developed and tested new ideas, considering first the BFKL and DGLAP equations in the N=4 theory and computing the anomalous dimensions of various operators. The high symmetry of this theory, in contrast to standard QCD, allows calculations to be made at unprecedentedly high orders and the results then compared with the “dual” predictions of string theory. It also facilitates finding the integrable structures in the theory (Lipatov 2009).

In this work, Lipatov has collaborated with many people, including Vitaly Velizhanin, Alexander Kotikov, Jochen Bartels, Matthias Staudacher and others. Their work is establishing the duality hypothesis almost beyond doubt. This opens a new horizon in studying QFT at strong couplings – something that no one would have dreamt of 50 years ago.

• The author thanks Victor Fadin and Mikhail Ryskin for helpful comments.

Black holes and qubits https://cerncourier.com/a/black-holes-and-qubits/ https://cerncourier.com/a/black-holes-and-qubits/#respond Wed, 05 May 2010 10:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/black-holes-and-qubits/ Michael Duff explains how string theory and M-theory could have practical applications in quantum information theory.

Quantum entanglement lies at the heart of quantum information theory (QIT), with applications to quantum computing, teleportation, cryptography and communication. In the apparently separate world of quantum gravity, the Hawking effect of radiating black holes has also occupied centre stage. Despite their apparent differences it turns out that there is a correspondence between the two (Duff 2007; Kallosh and Linde 2006).

Whenever two disparate areas of theoretical physics are found to share the same mathematics, it frequently leads to new insights on both sides. Indeed, this correspondence turned out to be the tip of an iceberg: knowledge of string theory and M-theory leads to new discoveries about QIT, and vice versa.

Bekenstein-Hawking entropy

Every object, such as a star, has a critical size that is determined by its mass, which is called the Schwarzschild radius. A black hole is any object smaller than this. Once something falls inside the Schwarzschild radius, it can never escape. This boundary in space–time is called the event horizon. So the classical picture of a black hole is that of a compact object whose gravitational field is so strong that nothing – not even light – can escape.

Yet in 1974 Stephen Hawking showed that quantum black holes are not entirely black but may radiate energy. In that case, they must possess the thermodynamic quantity called entropy. Entropy is a measure of how disorganized a system is and, according to the second law of thermodynamics, it can never decrease. Noting that the area of a black hole’s event horizon can never decrease, Jacob Bekenstein had earlier suggested such a thermodynamic interpretation implying that black holes must have entropy. This Bekenstein–Hawking black-hole entropy is given by one quarter of the area of the event horizon.
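
In formulas (standard results quoted here for orientation, with the constants restored), the Schwarzschild radius of a mass M and the Bekenstein–Hawking entropy of a horizon of area A are

r_s = \frac{2GM}{c^2}, \qquad S_{\rm BH} = \frac{k_B c^3 A}{4 G \hbar} = k_B\,\frac{A}{4\,\ell_P^2},

so "one quarter of the area" is to be read in units of the Planck area ℓ_P² = Għ/c³, which is what makes the statistical origin of this enormous entropy such a sharp question.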

Entropy also has a statistical interpretation as a measure of the number of quantum states available. However, it was not until 20 years later that string theory provided a microscopic explanation of this kind for black holes.

Bits and pieces

A bit in the classical sense is the basic unit of computer information and takes the value of either 0 or 1. A light switch provides a good analogy; it can either be off, denoted 0, or on, denoted 1. A quantum bit or “qubit” can also have two states but whereas a classical bit is either 0 or 1, a qubit can be both 0 and 1 until we make a measurement. In quantum mechanics, this is called a superposition of states. When we actually perform a measurement, we will find either 0 or 1 but we cannot predict with certainty what the outcome will be; the best we can do is to assign a probability to each outcome.

There are many different ways to realize a qubit physically. Elementary particles can carry an intrinsic spin. So one example of a qubit would be a superposition of an electron with spin up, denoted 0, and an electron with spin down, denoted 1. Another example of a qubit would be the superposition of the left and right polarizations of a photon. So a single qubit state, usually called Alice, is a superposition of Alice-spin-up 0 and Alice-spin-down 1, represented by the line in figure 1. The most general two-qubit state, Alice and Bob, is a superposition of Alice-spin-up–Bob-spin-up 00, Alice-spin-up–Bob-spin-down 01, Alice-spin-down–Bob-spin-up 10 and Alice-spin-down–Bob-spin-down 11, represented by the square in figure 1.

Consider a special two-qubit state that is just 00 + 01. Alice can only measure spin up but Bob can measure either spin up or spin down. This is called a separable state; Bob’s measurement is uncorrelated with that of Alice. By contrast, consider 00 + 11. If Alice measures spin up, so too must Bob, and if she measures spin down so must he. This is called an entangled state; Bob cannot help making the same measurement. Mathematically, the square in figure 1 forms a 2 × 2 matrix and a state is entangled if the matrix has a nonzero determinant.
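
A minimal numerical check of this determinant criterion, using the two example states just described (normalizing the result to a "concurrence" between 0 and 1 is a common convention, added here for convenience):

import numpy as np

def concurrence(amplitudes):
    """amplitudes = [[a, b], [c, d]] for the state a|00> + b|01> + c|10> + d|11>.
    C = 2|ad - bc| vanishes for separable states and equals 1 for maximal entanglement."""
    m = np.asarray(amplitudes, dtype=complex)
    m = m / np.linalg.norm(m)              # normalise the state vector
    return 2.0 * abs(np.linalg.det(m))

print(concurrence([[1, 1], [0, 0]]))   # |00> + |01>: ~0.0 (separable)
print(concurrence([[1, 0], [0, 1]]))   # |00> + |11>: ~1.0 (entangled)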

This is the origin of the famous Einstein–Podolsky–Rosen (EPR) paradox put forward in 1935. Even if Alice is in Geneva and Bob is millions of miles away in Alpha Centauri, Bob’s measurement will still be determined by that of Alice. No wonder Albert Einstein called it “spooky action at a distance”. EPR concluded rightly that if quantum mechanics is correct then nature is nonlocal, and if we insist on local “realism” then quantum mechanics must be incomplete. Einstein himself favoured the latter hypothesis. However, it was not until 1964 that CERN theorist John Bell proposed an experiment that could decide which version was correct – and it was not until 1982 that Alain Aspect actually performed the experiment. Quantum mechanics was right, Einstein was wrong and local realism went out the window. As QIT developed, the impact of entanglement went far beyond the testing of the conceptual foundations of quantum mechanics. Entanglement is now essential to numerous quantum-information tasks such as quantum cryptography, teleportation and quantum computation.

Cayley’s hyperdeterminant

As a high-energy theorist involved in research on quantum gravity, string theory and M-theory, I paid little attention to any of this, even though, as a member of staff at CERN in the 1980s, my office was just down the hall from Bell’s.

My interest was not aroused until 2006, when I attended a lecture by Hungarian physicist Peter Levay at a conference in Tasmania. He was talking about three qubits, Alice, Bob and Charlie, where we have eight possibilities, 000, 001, 010, 011, 100, 101, 110, 111, represented by the cube in figure 1. Wolfgang Dür and colleagues at the University of Innsbruck have shown that three qubits can be entangled in several physically distinct ways: tripartite GHZ (Greenberger–Horne–Zeilinger), tripartite W, biseparable A-BC, separable A-B-C and null, as shown in the left-hand diagram of figure 2 (Dür et al. 2000).

The GHZ state is distinguished by a nonzero quantity known as the 3-tangle, which measures genuine tripartite entanglement. Mathematically, the cube in figure 1 forms what in 1845 the mathematician Arthur Cayley called a “2 × 2 × 2 hypermatrix” and the 3-tangle is given by the generalization of a determinant called Cayley’s hyperdeterminant.
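
For the record, the hyperdeterminant of a 2 × 2 × 2 array can be written out explicitly, and the 3-tangle is four times its absolute value. The short sketch below (my own transcription of the standard formula) checks that a GHZ-type state carries maximal 3-tangle while a W-type state carries none:

import numpy as np

def cayley_hyperdet(a):
    """Cayley's hyperdeterminant of a 2x2x2 array a[i, j, k]."""
    return (a[0,0,0]**2 * a[1,1,1]**2 + a[0,0,1]**2 * a[1,1,0]**2
            + a[0,1,0]**2 * a[1,0,1]**2 + a[1,0,0]**2 * a[0,1,1]**2
            - 2 * (a[0,0,0]*a[0,0,1]*a[1,1,0]*a[1,1,1]
                   + a[0,0,0]*a[0,1,0]*a[1,0,1]*a[1,1,1]
                   + a[0,0,0]*a[1,0,0]*a[0,1,1]*a[1,1,1]
                   + a[0,0,1]*a[0,1,0]*a[1,0,1]*a[1,1,0]
                   + a[0,0,1]*a[1,0,0]*a[0,1,1]*a[1,1,0]
                   + a[0,1,0]*a[1,0,0]*a[0,1,1]*a[1,0,1])
            + 4 * (a[0,0,0]*a[0,1,1]*a[1,0,1]*a[1,1,0]
                   + a[0,0,1]*a[0,1,0]*a[1,0,0]*a[1,1,1]))

def three_tangle(state):
    """3-tangle of a three-qubit state given as a 2x2x2 amplitude array."""
    a = np.asarray(state, dtype=float)
    a = a / np.linalg.norm(a)              # normalise the state
    return 4.0 * abs(cayley_hyperdet(a))

ghz = np.zeros((2, 2, 2)); ghz[0, 0, 0] = ghz[1, 1, 1] = 1            # |000> + |111>
w = np.zeros((2, 2, 2)); w[0, 0, 1] = w[0, 1, 0] = w[1, 0, 0] = 1     # |001> + |010> + |100>
print(three_tangle(ghz))   # ~1.0
print(three_tangle(w))     # ~0.0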

The reason this sparked my interest was that Levay’s equations reminded me of some work I had been doing on a completely different topic in the mid-1990s with my collaborators Joachim Rahmfeld and Jim Liu (Duff et al. 1996). We found a particular black-hole solution that carries eight charges (four electric and four magnetic) and involves three fields called S, T and U. When I got back to London from Tasmania I checked my old notes and asked what would happen if I identified S, T and U with Alice, Bob and Charlie so that the eight black-hole charges were identified with the eight numbers that fix the three-qubit state. I was pleasantly surprised to find that the Bekenstein–Hawking entropy of the black holes was given by the 3-tangle: both were described by Cayley’s hyperdeterminant.

Octonions and super qubits

According to supersymmetry, for each known boson (integer spin 0, 1, 2 and so on) there is a fermion (half-integer spin 1/2, 3/2, 5/2 and so on), and vice versa. CERN’s Large Hadron Collider will be looking for these superparticles. The number of supersymmetries is denoted by N and ranges from 1 to 8 in four space–time dimensions.

CERN’s Sergio Ferrara and I have extended the STU model example, which has N = 2, to the most general case of black holes in N = 8 supergravity. We have shown that the corresponding system in quantum-information theory is that of seven qubits (Alice, Bob, Charlie, Daisy, Emma, Fred and George), undergoing at most a tripartite entanglement of a specific kind as depicted by the Fano plane of figure 3.

The Fano plane has a strange mathematical property: it describes the multiplication table of a particular kind of number: the octonion. Mathematicians classify numbers into four types: real numbers, complex numbers (with one imaginary part A), quaternions (with three imaginary parts A, B, D) and octonions (with seven imaginary parts A, B, C, D, E, F, G). Quaternions are noncommutative because AB does not equal BA. Octonions are both noncommutative and nonassociative because (AB)C does not equal A(BC).

Real, complex and quaternion numbers show up in many physical contexts. Quantum mechanics, for example, is based on complex numbers and Pauli’s electron-spin operators are quaternionic. Octonions have fascinated mathematicians and physicists for decades but have yet to find any physical application. In recent books, both Roger Penrose and Ray Streater have characterized octonions as one of the great “lost causes” in physics. So we hope that the tripartite entanglement of seven qubits (which is just at the limit of what can be reached experimentally) will prove them wrong and provide a way of seeing the effects of octonions in the laboratory (Duff and Ferrara 2007; Borsten et al. 2009a).

In another development, QIT has been extended to super-QIT with the introduction of the superqubit, which can take on three values: 0 or 1 or $. Here 0 and 1 are “bosonic” and $ is “fermionic” (Borsten et al. 2009b). Such values can be realized in condensed-matter physics, such as the excitations of the t-J model of strongly correlated electrons, known as spinons and holons. The superqubits promise totally new effects. For example, despite appearances, the two-superqubit state $$ is entangled. Superquantum computing is already being investigated (Castellani et al. 2010).

Strings, branes and M-theory

If current ideas are correct, a unified theory of all physical phenomena will require some radical ingredients in addition to supersymmetry. For example, there should be extra dimensions: supersymmetry places an upper limit of 11 on the dimension of space–time. The kind of real, four-dimensional world that supergravity ultimately predicts depends on how the extra seven dimensions are rolled up, in a way suggested by Oskar Kaluza and Theodor Klein in the 1920s. In 1984, however, 11-dimensional supergravity was knocked off its pedestal by superstring theory in 10 dimensions. There were five competing theories: the E8 × E8 heterotic, the SO(32) heterotic, the SO(32) Type I, and the Type IIA and Type IIB strings. The E8 × E8 seemed – at least in principle – capable of explaining the elementary particles and forces, including their handedness. Moreover, strings seemed to provide a theory of gravity that is consistent with quantum effects.

However, the space–time of 11 dimensions allows for a membrane, which may take the form of a bubble or a two-dimensional sheet. In 1987 Paul Howe, Takeo Inami, Kelly Stelle and I showed that if one of the 11 dimensions were a circle, we could wrap the sheet round it once, pasting the edges together to form a tube. If the radius becomes sufficiently small, the rolled-up membrane ends up looking like a string in 10 dimensions; it yields precisely the Type IIA superstring. In a landmark talk at the University of Southern California in 1995, Ed Witten drew together all of this work on strings, branes and 11 dimensions under the umbrella of M-theory in 11 dimensions. Branes now occupy centre stage as the microscopic constituents of M-theory, as the higher-dimensional progenitors of black holes and as entire universes in their own right.

Such breakthroughs have led to a new interpretation of black holes as intersecting black-branes wrapped round the seven curled dimensions of M-theory or six of string theory. Moreover, the microscopic origin of the Bekenstein-Hawking entropy is now demystified. Using Polchinski’s D-branes, Andrew Strominger and Cumrun Vafa were able to count the number of quantum states of these wrapped branes (Strominger and Vafa 1996). A p-dimensional D-brane (or Dp-brane) wrapped round some number p of the compact directions (x4, x5, x6, x7, x8, x9) looks like a black hole (or D0-brane) from the four-dimensional (x0, x1, x2, x3) perspective. Strominger and Vafa found an entropy that agrees with Hawking’s prediction, placing another feather in the cap of M-theory. Yet despite all of these successes, physicists are glimpsing only small corners of M-theory; the big picture is still lacking. Over the next few years we hope to discover what M-theory really is. Understanding black holes will be an essential prerequisite.

Falsifiable predictions?

The partial nature of our understanding of string/M-theory has so far prevented any kind of smoking-gun experimental test. This has led some critics of string theory to suggest that it is not true science. This is easily refuted by studying the history of scientific discovery; the 30-year time lag between the EPR idea and Bell’s falsifiable prediction provides a nice example (see Further reading). Nevertheless it cannot be denied that such a prediction in string theory would be welcome.

In the string literature one may find D-brane intersection rules that tell us how N branes can intersect over one another and the fraction of supersymmetry (susy) that they preserve (Bergshoeff et al. 1997). In our black hole/qubit correspondence, my students Leron Borsten, Duminda Dahanayake, Hajar Ebrahim, William Rubens and I showed that the microscopic description of the GHZ state, 000 + 011 + 101 + 110, is that of the N = 4, 1/8-susy case of D3-branes of Type IIB string theory (Borsten et al. 2008). We denoted the wrapped circles by crosses and the unwrapped circles by noughts; 0 corresponds to XO and 1 to OX, as in table 1. So the number of qubits here is three because the number of extra dimensions is six. This also explains where the two-valuedness enters on the black-hole side. To wrap or not to wrap; that is the qubit.

Repeating the exercise for the N <4 cases and using our dictionary, we see that string theory predicts the three-qubit entanglement classification of figure 2, which is in complete agreement with the standard results of QIT. Allowing for different p-branes wrapping different dimensions, we can also describe “qutrits” (three-state systems) and more generally “qudits” (d-state systems). Furthermore, for the well documented cases of 2 × 2, 2 × 3, 3 × 3, 2 × 2 × 3 and 2 × 2 × 4, our D-brane intersection rules are also in complete agreement. However, for higher entanglements, such as 2 × 2 × 2 × 2, the QIT results are partial or not known, or else contradictory. This is currently an active area of research in QIT because the experimentalists can now control entanglement with a greater number of qubits. One of our goals is to use the allowed wrapping configurations and D-brane intersection rules to predict new qubit-entanglement classifications.

So the esoteric mathematics of string and M-theory might yet find practical applications.

 

Is quantum theory exact or approximate? https://cerncourier.com/a/is-quantum-theory-exact-or-approximate/ https://cerncourier.com/a/is-quantum-theory-exact-or-approximate/#respond Tue, 25 Aug 2009 10:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/is-quantum-theory-exact-or-approximate/ Most solutions to the measurement problem look for a reinterpretation of the formalism of quantum mechanics. Models in which the wave function collapses spontaneously, however, follow a different route.

Quantum mechanics has puzzled the scientific community from the beginning. One of the major sources of difficulties comes from the measurement problem: why do measurement processes always have definite outcomes, despite the fact that the Schrödinger equation allows for superpositions of states? And why are such outcomes random (distributed according to the Born rule), while the Schrödinger equation is deterministic? New experiments and observations could help to answer such questions by providing a more precise idea of the possible limits of validity of quantum theory (Adler and Bassi 2009).

Most solutions to the measurement problem look for a reinterpretation of the formalism of quantum mechanics. Models in which the wave function collapses spontaneously, however, follow a different route. They purposely modify the Schrödinger equation by adding new nonlinear and stochastic terms, which break quantum linearity above a scale fixed by new parameters. Physically, the wave function is coupled (nonlinearly) to a white-noise classical scalar field, which is assumed to fill space.

By modifying the Schrödinger equation, collapse models make predictions that differ from those of standard quantum mechanics and that can be, in principle, tested. The scale at which deviations from standard quantum behaviour can be expected gives indications of the sensitivity that experiments should reach if they are to provide meaningful tests of collapse models and quantum mechanics.
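
Schematically, such models take the form of a stochastic, nonlinear modification of the Schrödinger equation. The generic single-operator version found in the literature (quoted here for orientation; the full CSL model couples the noise to a smeared mass density at every point of space) is

d\psi_t = \Big[ -\tfrac{i}{\hbar} H\,dt + \sqrt{\lambda}\,\big(A - \langle A\rangle_t\big)\,dW_t - \tfrac{\lambda}{2}\,\big(A - \langle A\rangle_t\big)^2 dt \Big]\,\psi_t,

where A is the operator whose eigenstates the dynamics singles out, ⟨A⟩_t its quantum expectation value, λ the collapse strength and dW_t a Wiener-noise increment; standard quantum mechanics is recovered in the limit λ → 0.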

There have already been experiments that directly or indirectly test collapse models against quantum mechanics, and others have been proposed for the future. Probably the best known are the diffraction experiments with macromolecules (C₆₀, C₇₀, C₃₀H₁₂F₃₀N₂O₄), which set an upper bound 13 decades above the most conservative value of the collapse parameter λ (related to the noise strength) and five decades above the strongest value suggested. Other tests include the decay of supercurrents and proton decay, but the upper bounds are even weaker than those from the diffraction experiments. One interesting proposal is an experiment in which a tiny mirror mounted on a cantilever sits within an interferometer: it would set an upper bound 9 (1) decades above the weakest (strongest) value of λ.

The strongest bound, however, comes from the spontaneous emission of X-rays from germanium-76, as predicted by the continuous spontaneous localization (CSL) model, the most popular collapse model. It sets an upper bound of only six decades on the weakest value of λ. The strongest value is disproved by these data, but the bound is weakened if non-white-noise is considered with a frequency cutoff. The data coming from spontaneous X-ray emission are very raw, and several contributions from known sources (e.g. gamma-ray contamination, double beta-decay) have not been subtracted. A dedicated experiment on spontaneous photon emission could set a much stronger upper bound and would represent the most accurate test of quantum mechanics against the rival theory. Such a project is under discussion between the University of Trieste and the INFN, Laboratori Nazionali di Frascati.

Collapse models also make predictions that have cosmological implications. The apparent violation of energy conservation arising from the interaction with the collapsing noise places important upper bounds. The strongest comes from the intergalactic medium: requiring that the heating produced by the noise remains below experimental bounds places an upper bound of 8 (0) decades on the weakest (strongest) value of λ.

Those were the days: discovering the gluon https://cerncourier.com/a/those-were-the-days-discovering-the-gluon/ Wed, 15 Jul 2009 08:00:00 +0000 John Ellis recalls how theorists and experimentalist friends worked together to find the second gauge boson.

In the mid-1970s quantum chromodynamics (QCD) was generally referred to as the “candidate” theory of the strong interactions. It was known to be asymptotically free and was the only plausible field-theoretical framework for accommodating the (approximate) scaling seen in deep-inelastic scattering, as well as having some qualitative success in fitting the emerging pattern of scaling violations. Moreover, QCD could be used to explain qualitatively the emerging spectrum of charmonia and had some semi-quantitative successes in calculating their decays. No theorist seriously doubted the existence of the gluon but direct proof of its existence, a “smoking gluon”, remained elusive.

In parallel, jet physics was an emerging topic. Statistical evidence was found for two-jet events in low-energy electron–positron annihilation into hadrons at SPEAR at SLAC, but large transverse-momentum jets had not yet been observed at the Intersecting Storage Rings, CERN’s pioneering proton–proton collider. There, it was known that the transverse-momentum spectrum of individual hadron production had a tail above the exponential fall-off seen in earlier experiments, but the shape of the spectrum did not agree with naive predictions that were based on the hard scattering of quarks and gluons, so rival theories – such as the constituent-interchange model – were touted.

The three-jet idea

This was the context in 1976 when I was walking back over the bridge from the CERN cafeteria to my office one day. As I turned the corner by the library, it occurred to me that the simplest experimental situation to search directly for the gluon would be through production via bremsstrahlung in electron–positron annihilation. Two higher-energy collider projects were in preparation at the time, PETRA at DESY and PEP at SLAC, and I thought that they should have sufficient energy to observe clear-cut three-jet events. My theoretical friends Graham Ross, Mary Gaillard and I then proceeded to calculate the gluon bremsstrahlung process in QCD, demonstrating how it would manifest itself via jet broadening and the appearance of three-jet events featuring the long-sought “smoking gluon”. We also contrasted the predictions of QCD with a hypothetical theory based on scalar gluons.

I was already in contact with experimentalists at DESY, particularly my friend the late Bjørn Wiik, who shared my enthusiasm about the three-jet idea. Soon after Mary, Graham and I had published our paper, I made a trip to DESY to give a seminar about it. The reception from the DESY theorists of that time was one of scepticism, even hostility, and I faced fierce questioning on why the short-distance structure of QCD should survive the hadronization process. My reply was that hadronization was expected to be a soft process involving small exchanges of momenta and that two-jet events had already been seen at SPEAR. At the suggestion of Bjørn Wiik, I also went to Günter Wolf’s office to present the three-jet idea: he listened much more politely than the theorists.

The second paper on three-jet events was published in 1977 by Tom Degrand, Jack Ng and Henry Tye, who contrasted the QCD prediction with that of the constituent-interchange model. Then, in 1978, George Sterman and Steve Weinberg published an influential paper showing how jet cross-sections could be defined rigorously in QCD with a careful treatment of infrared and collinear singularities. In our 1976 paper we had contented ourselves with showing that these were unimportant in the three-jet kinematic region of interest to us. Sterman and Weinberg opened the way to a systematic study of variables describing jet broadening and multi-jet events, which generated an avalanche of subsequent theoretical papers. In particular, Alvaro De Rújula, Emmanuel Floratos, Mary Gaillard and I wrote a paper showing how “antenna patterns” of gluon radiation could be calculated in QCD and used to extract statistical evidence for gluon radiation, even if individual three-jet events could not be distinguished.

Meanwhile, the PETRA collider was being readied for high-energy data-taking with its four detectors, TASSO, JADE, PLUTO and Mark J. I maintained regular contact with Bjørn Wiik, one of the leaders of the TASSO collaboration, as he came frequently to CERN around that time for various committee meetings. I was working with him to advocate the physics of electron–proton colliders. He told me that Sau Lan Wu had joined the TASSO experiment and that he had proposed that she prepare a three-jet analysis for the collaboration. She and Gus Zobernig wrote a paper describing an algorithm for distinguishing three-jet events, which appeared in early 1979.

Proof at last

During the second half of 1978 and the first half of 1979, the machine crews at DESY were systematically increasing the collision energy of PETRA. The first three-jet news came in June 1979 at the time of a neutrino conference in Bergen. The weekend before that meeting I was staying with Bjørn Wiik at his father’s house beside a fjord, when Sau Lan Wu arrived over the hills bearing printouts of the first three-jet event. Bjørn included the event in his talk at the conference and I also mentioned it in mine. I remember Don Perkins asking me whether one event was enough to prove the existence of the gluon: my tongue-in-cheek response was that it was difficult to believe in eight gluons on the strength of a single event!

The next outing for three-jet events was at the European Physical Society conference in Geneva in July. Three members of the TASSO collaboration, Roger Cashmore, Paul Söding and Günter Wolf, spoke at the meeting and presented several clear three-jet events. The hunt for gluons was looking good!

The public announcement of the gluon discovery came at the Lepton/Photon Symposium held at Fermilab in August 1979. All four PETRA experiments showed evidence: Sam Ting’s Mark J collaboration presented an analysis of antenna patterns; while JADE and PLUTO followed TASSO in presenting evidence for jet broadening and three-jet events. One three-jet event was presented at a press conference and a journalist asked which jet was the gluon. He was told that the smart money was on the left-hand one (or was it the right?). Refereed publications by the four collaborations soon appeared and the gluon finally joined the Pantheon of established particles as the first gauge boson to be discovered after the photon.

An important question remained: was the gluon a vector particle, as predicted by QCD, or was it a scalar boson? In 1978 my friend Inga Karliner and I wrote a paper that proposed a method for distinguishing the two possibilities, based on our intuition about the nature of gluon bremsstrahlung. This was used in 1980 by the TASSO collaboration to prove that the gluon was indeed a vector particle, a result that was confirmed by the other experiments at PETRA in various ways.

Gluon-jet studies have developed into a precision technique for testing QCD. One-loop corrections to three-jet cross-sections were calculated by Keith Ellis, Douglas Ross and Tony Terrano in 1980 and used, particularly by the LEP collaborations, to measure the strong coupling and its running with energy. The latter also used four-jet events to verify the QCD predictions for the three-gluon coupling, a crucial consequence of the non-Abelian nature of QCD.
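
For orientation, the running that such jet measurements map out is, at leading order (a textbook expression rather than the higher-order formulae actually used in the LEP analyses),

\alpha_s(Q^2) = \frac{\alpha_s(\mu^2)}{1 + \frac{33 - 2n_f}{12\pi}\,\alpha_s(\mu^2)\,\ln(Q^2/\mu^2)} ,

with n_f the number of active quark flavours, so the coupling falls logarithmically with the energy scale Q.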

In the words of Mary Hopkin’s song in 1968, “those were the days, my friends”. A small group of theoretical friends saw how to discover the gluon and promptly shared the idea with some experimental friends, who then seized the opportunity and the rest – as the saying goes – is history. To my mind, it is a textbook example of how theorists and experimentalists, working together, can advance knowledge. The LHC experiments will be a less intimate environment but let us hope that strong interactions between theorists and experimentalists will again lead to discoveries for the textbooks!

Precise mass measurements may help decode X-ray bursts https://cerncourier.com/a/precise-mass-measurements-may-help-decode-x-ray-bursts/ Mon, 08 Jun 2009 10:00:00 +0000 The results may make it easier to understand type I X-ray bursts, the most common stellar explosions in the galaxy.

Researchers at the Michigan State University (MSU) National Superconducting Cyclotron Laboratory (NSCL) have made precise mass measurements of four proton-rich nuclei, 68Se, 70Se, 71Br and an excited state of 70Br. The results may make it easier to understand type I X-ray bursts, the most common stellar explosions in the galaxy.

These bursts occur in the hot and dense environment that arises when a neutron star accretes matter from a companion star in a binary system. In these circumstances, rapid burning of hydrogen and helium occurs through a series of proton captures and beta decays known as the rp process, releasing an energy of 10³²–10³³ J in the form of X-rays in a burst 10–100 s long. Generally the capture-decay sequence happens in a matter of seconds or less, but “waiting points” occur at the proton dripline, where the protons become too weakly bound and the slower beta-decays intervene.
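
For scale, a rough conversion of the quoted numbers (round figures only, not a measurement) gives a burst luminosity of a few tens of thousands of Suns:

E_burst = 1e32          # J, lower end of the energy range quoted above
duration = 10.0         # s, shorter end of the quoted burst duration
L_sun = 3.8e26          # W, approximate solar luminosity
L_burst = E_burst / duration
print(f"burst luminosity ~ {L_burst:.0e} W ~ {L_burst / L_sun:.0e} solar luminosities")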

One of the major waiting points involves 68Se, which has 34 neutrons and 34 protons, and closely related nuclei. The lifetimes of these nuclei influence the light curve of the X-ray burst as well as the final mix of elements created in the burst process. The lifetimes of the waiting points in turn depend critically on the masses of the nuclei involved, which also influence the possibility for double-proton capture that can bypass the beta-decay process and hence the waiting point.

The experiment at NSCL, conducted by Josh Savory and colleagues, used the Low Energy Beam and Ion Trap facility, LEBIT, for the mass measurements of the four nuclei. The nuclides themselves were produced by projectile fragmentation of a 150 MeV/u primary 78Kr beam and separated in flight by the A1900 separator. LEBIT takes isotope beams travelling at roughly half the speed of light and then slows and stops the isotopes for highly accurate mass measurements via Penning-trap mass spectrometry.
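
The principle behind such Penning-trap measurements is that a stored ion's cyclotron frequency, ν_c = qB/2πm, is inversely proportional to its mass, so the unknown mass follows from a frequency ratio against a well-known reference ion in the same field. The sketch below is purely illustrative – the field, reference mass and frequency ratio are assumed values, not those of the LEBIT measurement:

import math

q = 1.602176634e-19      # C, charge of a singly charged ion
B = 9.4                  # T, assumed magnetic field of the trap
u = 1.66053906660e-27    # kg per atomic mass unit

def cyclotron_frequency(mass_in_u):
    # ideal cyclotron frequency of a singly charged ion of the given mass
    return q * B / (2 * math.pi * mass_in_u * u)

m_ref = 68.0                          # u, assumed reference-ion mass
nu_ref = cyclotron_frequency(m_ref)   # frequency of the reference ion
nu_ion = 0.999990 * nu_ref            # pretend measured frequency of the ion of interest
m_ion = m_ref * nu_ref / nu_ion       # the magnetic field cancels in the ratio
print(f"deduced mass: {m_ion:.6f} u (reference frequency ~ {nu_ref/1e6:.2f} MHz)")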

The experiment was able to reach uncertainties as low as 0.5 keV for 68Se to 15 keV for 70mBr, with up to 100 times improvement in precision (for 71Br) in comparison with previous measurements. The team then used the new measurements as input to calculations of the rp process and found an increase in the effective lifetime of 68Se, together with more precise information on the luminosity of a type I X-ray burst and on the elements produced.

Cosmology’s golden age https://cerncourier.com/a/cosmologys-golden-age/ Mon, 08 Jun 2009 10:00:00 +0000 Nobel laureate George Smoot looks at exciting times now and to come in cosmology.

“La verità è il destino per il quale siamo stati fatti” (Truth is the destiny for which we were made). This article gives an example of how “truth” is achieved through “discovery” – the method used in science. By revealing nature, discovery is the way in which we can achieve truth, or at least glimpse it. But how can we know or have confidence that we have made a correct discovery? Here we can look to the major architect of the scientific method, Galileo Galilei: “La matematica è l’alfabeto nel quale Dio ha scritto l’Universo” (Mathematics is the language with which God has written the universe). A discovery will be described best – and most economically and poetically – mathematically.

Virtual space flight

There has never been a more exciting time for cosmologists than now. Through advanced techniques and ingenious, and often heroic observational efforts, we have obtained a direct and extraordinarily detailed picture of the universe – from very early times to the present. I recently had the pleasure of using a specially outfitted planetarium at the Chabot Observatory Space and Science Center in Oakland, California, and taking a virtual flight through the universe on a realistic (though often faster-than-light) journey based on real astronomical data.

We took off from the surface of the Earth and zoomed up to see the International Space Station at its correct location in orbit. When we first arrived we could only see a dark region moving above the Earth but soon the space station’s orbit brought it out of the Earth’s shadow into direct sunlight. We circled round, looking at it from all sides and then swiftly moved on to see the solar system with all the planets in their correct current locations. After a brief visit to the spectacular sight of Saturn we continued out to see the stars in our neighbourhood before moving on, impatient to see the whole galaxy with all the stars in the positions determined by the Hipparcos satellite mission. After that we travelled farther out to see our local group of galaxies dominated by our own Milky Way and the Andromeda galaxy.

Moving more and more quickly we zoomed out and saw many clusters of galaxies. I was having trouble deciding quickly enough which supercluster was Coma, Perseus-Pisces or Hydra-Centaurus when viewed from an arbitrary location and moving through the universe so fast. Then, using the latest galaxy survey data, we went out farther to where we were seeing half-way to the edge of the observable universe. All the galaxies were displayed in their observed colours and locations – millions of them, admittedly only a fraction of the estimated 100 billion in the visible universe, but still incredibly impressive in number and scope, revealing the web of the cosmos.

We were actually moving through time as well as space. As we went farther away from the Earth we were at distances where light takes a long time to reach our own planet, so we were looking at objects with a very much younger age (earlier in time). It was fun flying round through the universe at hyperfaster-than-light speed and seeing all of the known galaxies. Soon I asked to see to the edge (and beyond). The operator brought up the data for the cosmic microwave background (CMB) – at the time, the 3-year maps from the Wilkinson Microwave Anisotropy Probe – and it appeared behind the distant galaxies. I asked to move right to the edge, and in the process of zooming out we went past the CMB map surface and were looking back at the sphere containing the full observable universe. Where were we? Out in the part from which light has not had time to reach Earth and – if our current understanding is correct – will never reach us. But still we wonder about what is out there, and we have some hope of understanding.

The second reason why this is such an incredibly exciting time in cosmology is that these observations, combined with careful reasoning and an occasional brilliant insight, have allowed us to formulate an elegant and precisely quantitative model for the origin and evolution of the universe. This model reproduces to high accuracy everything that we observe over the history of the universe, images of which are displayed in the planetarium.

We now have precise observations of a very early epoch in the universe through the images made using the CMB radiation and we hope to start a newer and even more precise and illuminating effort with the launch of the Planck Mission on 14 May. However, we also have many impressive galaxy surveys and plans for even more extensive surveys using new ideas to see the relics of the acoustic oscillations in the very, very early universe, as well as the gravitational lensing caused by the more recently formed large-scale structures, such as clusters of galaxies that slightly warp the fabric of space–time by their presence. Each will give us new images and thus new information about the overall history of the universe.

However, the model invokes new physics; some explicitly and some by omission. First, we put in inflation, the physical mechanism that takes a small homogeneous piece of space–time and turns it into something probably much larger than our currently observable universe but with all its features, including the very-small-amplitude fluctuations discovered with the Differential Microwave Radiometers on the Cosmic Background Explorer, which are the seeds of modern galaxies and clusters. Second, we put in dark matter, which plays the key role in the formation of structure in the universe and holds the clusters and galaxies together. This is a completely new kind of matter – unlike any other with which we have experience. It does not interact electromagnetically with light but apparently does interact gravitationally, precisely the property needed for it to form structure. A third additional ingredient is dark energy, which is used to balance the energy budget of the universe and explain the accelerating rate of expansion observed in the more recent history of the universe. Last, we need baryogenesis, the physical mechanism that explains the dominance of matter over antimatter. We have good reason to believe that there were equal amounts of matter and antimatter at the very beginning, but now matter prevails.

If we add these four extra ingredients in the simplest possible form we can reproduce the observable universe in our simulations or analytic calculations to an accuracy that is equal to (and probably better than) the current observational accuracy – at roughly the per cent level.

There are other things that we don’t put in so explicitly but have reason to suspect might be there. For example, we work with a universe constrained by three large dimensions of space and one of time, even though we know that more dimensions are possible and may be necessary. We do not deal with our confinement to 4D. We also stick with the four known basic forces even though there is plenty of opportunity for new forces; and likewise for additional relics from earlier epochs.

Universal ingredients

The success of the standard cosmological model has many consequences that puzzle us and also raises several key questions, which are far from answered. The observation of dark energy demonstrates that our well established theories of particles and gravity are at least incomplete – or not fully correct. What makes up the dark side of the universe? What process, in detail, created the primordial fluctuations? Is gravity purely geometry as Albert Einstein envisaged, or is there more to it (such as scalar partners and extra dimensions)? An unprecedented experimental effort is currently being devoted to address these grand-challenge questions in cosmology. This is an intrinsically interdisciplinary issue that will inevitably be at the forefront of research in astrophysics and fundamental physics in the coming decades. Cosmology is offering us a new laboratory where standard and exotic fundamental theories can be tested on scales not otherwise accessible.

The situation in cosmology is rife with opportunities. There are well defined but fundamental questions to be answered and new observations arriving to guide us in this quest. We should learn much more about inflation from the observations that we can anticipate over the next few years. Likewise we can hope to learn about the true nature of dark matter from laboratory and new accelerator experiments that are underway or soon to be operating, as at the LHC. We hope to learn more about possible extra dimensions through observations.

We continue to seek and encourage new ideas and concepts for understanding the universe. These concepts and ideas must pass muster – like a camel going through the eye of a needle – in agreeing with the multitude of precise observations and thereby yield an effective version of our now-working cosmological model. This is the key point of modern cosmology, which is fully flowering and truly exciting. It is the natural consequence and culmination of the path that Galileo started us on four centuries ago.

Dark-matter research arrives at the crossroads https://cerncourier.com/a/dark-matter-research-arrives-at-the-crossroads/ Wed, 01 Apr 2009 10:00:00 +0000 The 2008 DESY Theory Workshop focused on the nature of dark matter.

There is overwhelming evidence that the universe contains dark matter made from unknown elementary particles. Astronomers discovered more than 75 years ago that spiral galaxies, such as the Milky Way, spin faster than allowed by the gravity of known kinds of matter. Since then there have been many more observations that point to the existence of this dark matter.

Gravitational lensing, for example, provides a unique probe of the distribution of luminous-plus-dark matter in individual galaxies, in clusters of galaxies and in the large-scale structure of the universe. The gravitational deflection of light depends only on the gravitational field between the emitter and the observer, and it is independent of the nature and state of the matter producing the gravitational field, so it yields by far the most precise determinations of mass in extragalactic astronomy. Gravitational lensing has established that, like spiral galaxies, elliptical galaxies are dominated by dark matter.

Strong evidence for the fact that most of the dark matter has a non-baryonic nature comes from the observed heights of the acoustic peaks in the angular power spectrum of the cosmic microwave background measured by the Wilkinson Microwave Anisotropy Probe, because the peaks are sensitive to the fraction of mass in the baryons. It turns out that only about 4% of the mass of the universe is in baryons, whereas about 20% is in non-baryonic dark matter – a finding that is also in line with inferences from primordial nucleosynthesis.

A host of candidates

This leaves some pressing questions. What is the microscopic nature of this non-baryonic dark matter? Why is its mass fraction today about 20%? How dark is it? How cold is it? How stable is it?

Progress in finding the answers to such questions provided the focus for the 2008 DESY Theory Workshop, which was held on 29 September – 2 October. Organized by Manuel Drees of Bonn, it sought to combine results from a range of experiments and confront them with theoretical predictions. It is clear that the investigation of the microscopic nature of dark matter has recently entered a decisive phase. Experiments are being carried out around the globe to try to identify traces of the mysterious dark-matter particles. Since the different theoretical candidates appear to have quite distinctive signatures, there are good reasons to expect that from a combination of all of these efforts a common picture will materialize within the next decade.

Theoretical particle physicists have proposed a whole host of candidates for the constituents of non-baryonic dark matter, with fancy names such as axions, axinos, gravitinos, neutralinos and lightest Kaluza–Klein partners. The best-motivated of these occur in extensions of the Standard Model that have been proposed to solve other problems besides the dark-matter puzzle. The axion, for example, arose in extensions that aim to solve the strong CP problem. It later turned out to be a viable dark-matter candidate if its mass is in the micro-electron-volt range. Gravitinos and neutralinos, on the other hand, are the superpartners of the graviton and the neutral bosons, respectively. They arise in supersymmetric extensions of the Standard Model, which aim at a solution of the hierarchy problem and at a grand unification of the strong and electroweak interactions. In fact, neutralinos are natural candidates for dark matter because they have cross-sections of the order of electroweak interactions and their masses are expected to be of the order of the weak scale (i.e. 100 GeV). This leads to the fact that their relic density resulting from freeze-out in the early universe is just right to account for the observed amount of dark matter.
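
This “WIMP miracle” can be made quantitative with the standard freeze-out estimate (a textbook relation, quoted here only for orientation, not a workshop result):

\Omega_\chi h^2 \approx \frac{3\times 10^{-27}\,\mathrm{cm^3\,s^{-1}}}{\langle\sigma_{\mathrm{ann}} v\rangle}, \qquad \langle\sigma_{\mathrm{ann}} v\rangle \sim \frac{\alpha^2}{(100\,\mathrm{GeV})^2} \approx 10^{-25}\text{--}10^{-26}\,\mathrm{cm^3\,s^{-1}} \;\Rightarrow\; \Omega_\chi h^2 = \mathcal{O}(0.1),

which is of the same order as the observed dark-matter abundance.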

Neutralinos belong to the class of weakly interacting massive particles (WIMPs). Such particles seem to be more or less generic in extensions of the Standard Model at the tera-electron-volt scale, but their stability (or a long enough lifetime) has to be imposed. This is not necessary for super-weakly interacting massive particles (superWIMPs), such as sterile neutrinos, gravitinos, hidden-sector gauge bosons and gauginos, and the axino. For example, unstable but long-lived gravitinos in the 5–300 GeV mass range are viable candidates for dark matter and provide a consistent thermal history of the universe, including successful Big Bang nucleosynthesis.

Detecting dark matter

Owing to their relatively large elastic cross-sections with atomic nuclei, WIMPs such as neutralinos are good candidates for direct detection in the laboratory, yielding up to one event per day, per 100 kg of target material. The expected WIMP signatures are nuclear recoils, which should occur uniformly throughout the detector volume at a rate that shows an annual flux modulation by a few per cent. Intriguingly, the DAMA experiment in the Gran Sasso National Laboratory has seen evidence for such an annual modulation.
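
Such searches usually parametrize the expected signal in the generic form

R(t) \simeq R_0 + S_m\cos\!\left(\frac{2\pi\,(t - t_0)}{T}\right), \qquad T = 1\,\mathrm{yr}, \quad t_0 \approx 2\ \mathrm{June},

where the few-per-cent modulation amplitude S_m arises because the Earth's orbital velocity adds to the Sun's motion through the galactic dark-matter halo in early June and subtracts from it six months later.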

However, there is some tension with other direct-detection experiments. Theoretical studies have revealed that interpretation in terms of a low-mass (5–50 GeV) WIMP is marginally compatible with the current limits from other experiments. In contrast to DAMA, which looks just for scintillation light, most of the latter exploit at least two observables out of the set (phonons, charge, light) to reconstruct the nuclear recoil-energy.

Many different techniques based on cryogenic detectors (e.g. the Cryogenic Dark Matter Search), noble liquids (e.g. the XENON Dark Matter Project) or even bubble chambers are currently employed to search for WIMPs via direct detection. Detectors with directional sensitivity (e.g. the Directional Recoil Identification From Tracks experiment) may not only have a better signal-to-background discrimination but may also be capable of measuring the local dark-matter phase-space distribution. In summary, these direct experiments are currently probing some of the theoretically interesting regions for WIMP candidates. The next generation of experiments may enter the era of WIMP (astro)physics.

The axion is another dark-matter candidate for which there are ongoing direct-detection experiments. Both the Axion Dark Matter Experiment (ADMX) in the US and the Cosmic Axion Research with Rydberg Atoms in a Resonant Cavity (CARRACK) experiment in Japan exploit a cooled cavity inside a strong magnetic field to search for the stimulation of a cavity resonance from a dark-matter axion–photon conversion in the microwave frequency region, corresponding to the expected axion mass. While they differ in their detector technology – ADMX uses microwave telescope technology whereas CARRACK employs Rydberg atom technology – both experiments are designed to cover the 1–10 μeV mass range. Indeed, if dark matter consists just of axions then it should soon be found in these experiments. The CERN Axion Solar Telescope, meanwhile, is looking for axions produced in the Sun.
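
The connection between this mass range and microwave cavities is simply ν = m_a c²/h, the frequency of the photon into which the axion converts; a quick conversion (plain arithmetic, nothing experiment-specific) shows why:

e_charge = 1.602176634e-19   # J per eV
h_planck = 6.62607015e-34    # J s
for m_a_micro_eV in (1, 10):
    nu = m_a_micro_eV * 1e-6 * e_charge / h_planck   # photon frequency for this axion mass
    print(f"m_a = {m_a_micro_eV} micro-eV  ->  nu ~ {nu / 1e9:.2f} GHz")

This gives roughly 0.24–2.4 GHz, squarely in the microwave band.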

There are also of course possibilities for indirect detection. Dark matter may not be absolutely dark. In fact, in regions where the dark-matter density is high (e.g. in the Earth, in the Sun, near the galactic centre, in external galaxies), neutralinos or other WIMPs may annihilate to visible particle–antiparticle pairs and lead to signatures in gamma-ray, neutrino, positron and antiproton spectra. Moreover, superWIMPs (e.g. gravitinos), may also leave their traces in cosmic-ray spectra if they are not absolutely stable.

Interestingly, the Payload for Antimatter Matter Exploration and Light-Nuclei Astrophysics (PAMELA) satellite experiment recently observed an unexpected rise in the fraction of positrons at energies of 10–100 GeV, thereby confirming earlier observations by the High Energy Antimatter Telescope balloon experiment. In addition, the Advanced Thin Ionization Calorimeter balloon experiment has reported a further anomaly in the electron-plus-positron flux, which can be interpreted as the continuation of the PAMELA excess to about 800 GeV. The quantification of these excesses is still quite uncertain, not least because of relatively large systematic uncertainties. It is well established that they cannot be explained by the standard mechanism, namely the secondary production of positrons arising from collisions between cosmic-ray protons and the interstellar medium within our galaxy. However, a very conventional astrophysical source for them could be nearby pulsars.

On a more speculative level, these observations have inspired theorists to search for pure particle-physics models that accommodate all results. Generically, interpretations in terms of WIMP annihilation seem to be disfavoured, because they require a huge clumpiness of the Milky Way dark-matter halo, which is at variance with recent numerical simulations of the latter. This constraint is relaxed in superWIMP scenarios, where the positrons may be produced in the decay of dark-matter particles (e.g. gravitinos).

It is clear that one of the keys to understanding the origin of the excess in the positron fraction is the accurate, separate measurement of positron and electron fluxes, which can be done with further PAMELA data and with the Alpha Magnetic Spectrometer satellite experiment. Furthermore, distinguishing different interpretations of the observed excesses requires a multimessenger approach (i.e. to search for signatures in the radio range, synchrotron radiation, neutrinos, antiprotons and gamma rays).

Fortunately the Fermi Gamma-Ray Space Telescope is in orbit and taking data. Together with other cosmic-ray experiments it will probe interesting regions of parameter space in WIMP and superWIMP scenarios of dark matter.

Dark matter at colliders

Clearly, at colliders the existence of a dark-matter candidate can be inferred only indirectly from the apparent missing energy, associated with the dark-matter particles, in the final state of the collision. However, such a measurement can be made with precision and under controlled conditions. To extract the properties, such as the mass, of dark-matter particles, these final-state measurements have to be compared with predictions from theoretical models. In a supersymmetric extension of the Standard Model, for example, with the neutralino as the lightest superpartner, experiments at the LHC would search for signatures from the cascade decay of gluinos and squarks into gluons, quarks, leptons and neutralinos. This would show up as large missing transverse-energy in events with some jets and leptons. The endpoints in kinematic distributions could then be used for the determination of the dark-matter candidate’s mass, which could be compared with the mass determined eventually by measurements of recoil energy in direct-detection experiments.
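
A standard example of such an endpoint – a generic formula from the supersymmetry literature rather than a specific analysis reported at the workshop – is the edge in the invariant mass of the two leptons from the cascade \tilde\chi_2^0 \to \tilde\ell\,\ell \to \ell\ell\,\tilde\chi_1^0:

\big(m_{\ell\ell}^{\mathrm{max}}\big)^2 = \frac{\big(m_{\tilde\chi_2^0}^2 - m_{\tilde\ell}^2\big)\big(m_{\tilde\ell}^2 - m_{\tilde\chi_1^0}^2\big)}{m_{\tilde\ell}^2}\,.

A measured edge position therefore constrains combinations of the slepton and neutralino masses, including that of the dark-matter candidate \tilde\chi_1^0.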

This complementarity between direct, indirect and collider searches for dark matter is essential. Although collider experiments might identify a dark-matter candidate and precisely measure its properties, they will not be able to distinguish a cosmologically stable particle from one that is long-lived but unstable. In turn, direct detection cannot tell definitely what kind of WIMP has been observed. Moreover, in many superWIMP dark matter scenarios a direct detection is impossible, while detection at the LHC may be feasible. For example, if the lightest superpartner is a gravitino (or hidden gaugino) and the next-to-lightest is a charged lepton, experiments at the LHC may search for the striking signature of a displaced vertex plus an ionizing track.

In many cases, however, precision measurements from a future electron–positron collider seem to be necessary to exploit fully the collider–cosmology–astrophysics synergy. In addition, “low-energy photon-collider” experiments – such as the Axion-Like Particle Search at DESY, the GammeV experiment at Fermilab and the Optical Search for QED magnetic birefringence, axions and photon regeneration at CERN, where the interactions of intense laser beams with strong electromagnetic fields are probed – may give viable insight into the existence of very lightweight, axion-like, dark-matter candidates.

In summary, there is evidence for non-baryonic dark matter that is not made of any known elementary particle. We are today in the exploratory stage to figure out its microscopic nature. Many ideas are currently being explored in theories and in experiments, and more will come. Nature has given us a few clues that we need to exploit. The data coming soon from accelerators, and from direct and indirect detection experiments, will be the final arbiter.

Relativity: A Very Short Introduction https://cerncourier.com/a/relativity-a-very-short-introduction/ Wed, 01 Apr 2009 09:43:07 +0000 Robert Cailliau reviews in 2009 Relativity: A Very Short Introduction.

by Russell Stannard, Oxford University Press. Paperback ISBN 9780199236220, £7.99.

In the series of Very Short Introductions by Oxford University Press there have been nuggets and non-nuggets. The book Relativity is definitely a nugget. We can all do the simple maths and use Pythagoras’s theorem but I have always found it difficult – even from Albert Einstein’s popular little book – to gain some “more intuitive” understanding of relativity. Russell Stannard’s text is the best that I have read.

He begins with the familiar: simultaneity, constancy of the speed of light, the paradox of the twin astronauts and so on. In each case he goes straight to the heart of the phenomenon – and each time I felt that I came out with a deeper understanding and better appreciation of how simple it all is. Stannard has in this short work collected all of the best analogies that I have come across while also managing to keep the reader smiling with some tongue-in-cheek remarks. There are a number of mathematical expressions sprinkled throughout the text; and they are not beyond the abilities of the interested layperson. The drawings and formulae are good, with artwork that is vastly better than in some of the other volumes in the series.

However, OUP has still not got it all entirely right. For example, the square root symbol – important in this particular text – is just a V symbol. Weird. In all, this is a pleasant book to read. It reminds one of how strange reality really is and how difficult it is for us humans to make simple mental models. This book is to be recommended.

Conference probes the dark side of the universe https://cerncourier.com/a/conference-probes-the-dark-side-of-the-universe/ Mon, 23 Feb 2009 01:00:00 +0000 Researchers gathered in Munich to discuss dark energy.

During the past decade a consistent quantitative picture of the universe has emerged from a range of observations that include the microwave background, distant supernovae and the large-scale distribution of galaxies. In this “standard model” of the universe, normal baryonic matter contributes only 4.6% to the overall density; the remainder consists of dark components in the form of dark matter (23%) and dark energy (72%). The existence and dominance of dark energy is particularly unexpected and raises fundamental questions about the foundations of modern physics. Is dark energy merely Albert Einstein’s cosmological constant? Is it a new kind of field that evolves dynamically as the universe expands? Or is a new law of gravity needed?

In the search for answers to these questions, more than 250 participants, ranging from senior experts to young students, attended the 3rd Biennial Leopoldina Conference on Dark Energy held on 7–11 October 2008 at the Ludwig Maximilians University (LMU) in Munich. The meeting was organized jointly by the Bonn-Heidelberg-Munich Transregional Research Centre “The Dark Universe” and the German Academy of Sciences Leopoldina, with support from the Munich-based Excellence Cluster “Origin and Structure of the Universe”. The goal of the international symposium was to gain a better understanding of the nature of dark energy by bringing together observers, modellers and theoreticians from particle physics, astrophysics and cosmology to present and discuss their latest results and to explore possible future routes in the rapidly expanding field of dark-energy research.

Around 60 plenary talks at the conference were held in the central auditorium (Aula) of LMU Munich, with lively discussions following in poster sessions (where almost 100 posters were displayed) and during the breaks in the inner court of the university. There were fruitful exchanges between physicists engaged in a range of observations, from ground-based studies of supernovae to satellite probes of the cosmic microwave background (CMB), and theorists in search of possible explanations for the accelerated expansion of the universe, which was first reported in 1998. This acceleration has occurred in recent cosmic history, corresponding to redshifts of about z ≤ 1.

An accelerating expansion

Brian Schmidt of the Australian National University in Canberra gave the observational keynote speech. He led the High-z Supernova Search Team that presented the first convincing evidence for the existence of dark energy – which works against gravity to boost the expansion of the universe – almost simultaneously with the Supernova Cosmology Project led by Saul Perlmutter of the Lawrence Berkeley National Laboratory and the University of California at Berkeley. Adam Riess, a member of the High-z team, presented constraints on dark energy from the latest supernovae data, including those from the Hubble Space Telescope at redshift z > 1. This is where the acceleration becomes a deceleration, owing to the lessening impact of dark energy at earlier times (figure 1).

Both teams independently discovered the accelerating expansion of the universe by studying distant type Ia supernovae. They found that the light from these events is fainter than expected for a given expansion velocity, indicating that the supernovae are farther away than predicted (figure 2, p18). This implies that the expansion is not slowing under the influence of gravity – as might be expected – but is instead accelerating because of some uniformly distributed, gravitationally repulsive substance accounting for more than 70% of the mass-energy content of the universe – now known as dark energy.
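
The size of the effect can be illustrated with a short numerical sketch. It assumes a flat universe with H0 = 70 km s⁻¹ Mpc⁻¹ and a cosmological constant (w = –1); these parameter values are illustrative assumptions, not the teams' fitted results. At z = 0.5 the accelerating universe makes a supernova appear roughly 0.4 magnitudes fainter than a flat, matter-only universe would:

import math

c = 299792.458   # speed of light, km/s
H0 = 70.0        # assumed Hubble constant, km/s/Mpc

def hubble(z, omega_m, omega_de):
    # expansion rate of a flat universe with a cosmological constant (w = -1)
    return H0 * math.sqrt(omega_m * (1 + z)**3 + omega_de)

def luminosity_distance(z, omega_m, omega_de, steps=10000):
    # d_L = (1 + z) * c * integral_0^z dz'/H(z'), evaluated with a simple midpoint sum
    dz = z / steps
    comoving = sum(dz / hubble((i + 0.5) * dz, omega_m, omega_de) for i in range(steps))
    return (1 + z) * c * comoving   # in Mpc

z = 0.5
for omega_m, omega_de, label in [(0.3, 0.7, "with dark energy"), (1.0, 0.0, "matter only")]:
    d = luminosity_distance(z, omega_m, omega_de)
    mu = 5 * math.log10(d * 1e6 / 10)   # distance modulus, with d converted to parsec
    print(f"{label:16s}: d_L = {d:6.0f} Mpc, distance modulus = {mu:.2f}")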

Type Ia supernovae arise from runaway thermonuclear explosions following accretion on a carbon/oxygen white dwarf star and after calibration have an almost uniform brightness. This makes them “standard candles”, suitable as tools for the precise measurement of astronomical distances. Wolfgang Hillebrandt of the Munich Max-Planck Institute for Astrophysics presented 3D simulations of type Ia supernova explosions. It is still a matter of debate how standard these so-called “standard candles” really are. Their colour–luminosity relationship is inconsistent with Milky Way-type dust and, as Robert Kirshner of the Harvard-Smithsonian Center for Astrophysics mentioned, the role of dust is generally underestimated. Future supernova observations in the near infrared hold promise because, at these wavelengths, the extinction by dust is five times lower. Bruno Leibundgut of ESO said that infrared observations using the future James Webb Space Telescope will be crucial in solving the problem of reddening from dust.

As Schmidt pointed out, and others detailed in subsequent talks, measurements of the temperature fluctuations in the CMB provide independent support for the theory of an accelerating universe. These were first observed by the Cosmic Background Explorer in 1991 and subsequently in 2000 by the Boomerang and MAXIMA balloon experiments. Since 2003 the Wilkinson Microwave Anisotropy Probe (WMAP) has observed the full-sky CMB with high resolution. Additional evidence came from the Sloan Digital Sky Survey and 2-degree Field Survey. In 2005 they measured ripples in the distribution of galaxies that were imprinted in acoustic oscillations of the plasma when matter and radiation decoupled as protons and electrons combined to form hydrogen atoms, 380,000 years after the Big Bang. These are the “baryonic acoustic oscillations” (BAOs).

Dark-energy candidates

Eiichiro Komatsu of the Department of Astronomy at the University of Texas in Austin, lead author of WMAP’s paper on the cosmological interpretation of the five-year data, said that anything that can explain the observed luminosity distances of type Ia supernovae, as well as the angular-diameter distances in the CMB and BAO data, is “qualified for being called dark energy” (figure 3). Candidates include vacuum energy, modified gravity and an extreme inhomogeneity of space.

Although the latter approach was presented in several talks, the impression prevailed that the effects of dark energy are too large to be accounted for through spatial inhomogeneities and an accordingly adapted averaging procedure in general relativity. Komatsu – and many other speakers – clearly favours the Lambda-cold-dark-matter (ΛCDM) model, with a small cosmological constant Λ to account for the accelerated expansion. The dark-energy equation of state is usually taken to be w = p/ρ = –0.94 ± 0.1 (stat.) ± 0.1 (syst.), with a negative pressure, p; a varying w is not currently favoured by the data. Several speakers presented various versions of modified gravity. Roy Maartens of the University of Portsmouth in the UK acknowledged that ΛCDM is currently the best model. As an alternative he presented a braneworld scenario in which the vacuum energy does not gravitate and the acceleration arises from 5D effects. This scenario is, however, challenged by both geometric and structure-formation data.
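
For reference, with a constant equation-of-state parameter w the dark-energy density evolves as (a standard relation, not a conference result)

\rho_{\mathrm{DE}}(z) \propto (1+z)^{3(1+w)}, \qquad H^2(z) = H_0^2\big[\Omega_m(1+z)^3 + \Omega_{\mathrm{DE}}(1+z)^{3(1+w)}\big] \quad \text{(flat universe)},

so w = –1 corresponds to a cosmological constant of strictly constant density, and the quoted –0.94 is consistent with that case.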

Theoretical keynote-speaker Christof Wetterich of Heidelberg University emphasized that the physical origin, the smallness and the present-day importance of the cosmological constant are poorly understood. In 1988, almost simultaneously with but independently from Bharat Ratra and James Peebles, he proposed the existence of a time-dependent scalar field, which gives rise to the concept of a dynamical dark energy and time-dependent fundamental “constants”, such as the fine-structure constant. Although observations may eventually decide between dynamical or static dark energy, this is not yet possible from the available data.

Yet another indication for the accelerated expansion comes from the investigation of the weak-lensing effect, as Matthias Bartelmann of Heidelberg University and others explained. This method of placing constraints on dark energy through its effect on the growth of structure in the universe relies on coherent distortions in the shapes of background galaxies by foreground mass structures, which include dark matter. The NASA-DOE Joint Dark Energy Mission (JDEM) is a space probe that will make use of this effect, in addition to taking BAO observations and distance and redshift measurements of more than 2000 type Ia supernovae a year. The project is now in the conceptual-design phase and has a target launch date of 2016. ESA’s corresponding project – the Dark UNiverse Explorer – is part of the planned Euclid mission, scheduled for launch in 2017. There were presentations on both missions.

The first major scientific results from the 10 m South Pole Telescope (SPT) initial survey were the highlight of the report by John Carlstrom, principal investigator for the project. The telescope is one of the first microwave telescopes that can take large-sky surveys with precision. It will be possible to use the resulting size-distribution pattern together with information from other telescopes to determine the strength of dark energy.

Carlstrom described the detection of four distant, massive clusters of galaxies in an initial analysis of SPT survey data – a first step towards a catalogue of thousands of galaxy clusters. The number of clusters as a function of time depends on the expansion rate, which leads back to dark energy. Three of the detected galaxy clusters were previously unknown systems. They are the first clusters detected in a Sunyaev–Zel’dovich (SZ) effect survey, and are the most significant SZ detections from a subset of the ongoing SPT survey. This shows that SZ surveys, and the SPT in particular, can be an effective means of finding galaxy clusters. The hope is for a catalogue of several thousand galaxy clusters in the southern sky by the end of 2011 – enough to rival the constraints on dark energy that are expected from the Euclid Mission and NASA’s JDEM.

The conference was lively and social activities enabled discussions outside the conference auditorium, particularly during the lunch breaks in nearby Munich restaurants. The presentations and discussions all demonstrated that the search for definite signatures and possible sources of the accelerated expansion of the universe continues to flourish and has an exciting future ahead. The results on supernovae and the CMB have led the way, but there is still much to learn. In his conference summary, Michael Turner of the University of Chicago emphasized that “cosmology has entered an era with large quantities of high-quality data”, and that the quest to understand dark energy will remain a grand scientific adventure. Future observational facilities – such as the Planck probe of the CMB, which is scheduled for launch around Easter 2009, the all-sky galaxy-cluster X-ray mission eROSITA, ESA’s Euclid and NASA’s JDEM – are all designed to produce unprecedented high-precision cosmology results that will shed new light on dark energy.

The Pauli principle faces testing times https://cerncourier.com/a/the-pauli-principle-faces-testing-times/ Tue, 27 Jan 2009 01:00:00 +0000 The SpinStat 2008 workshop looked at the nature of space–time.

The Pauli exclusion principle (PEP), and more generally the spin-statistics connection, plays a pivotal role in our understanding of countless physical and chemical phenomena, ranging from the periodic table of the elements to the dynamics of white dwarfs and neutron stars. It has defied all attempts to produce a simple and intuitive proof, despite being spectacularly confirmed by the number and accuracy of its predictions, because its foundation lies deep in the structure of quantum theory. Wolfgang Pauli remarked in his Nobel Prize lecture (13 December 1946): “Already in my original paper I stressed the circumstance that I was unable to give a logical reason for the exclusion principle or to deduce it from more general assumptions. I had the feeling, and I still have it today, that this is a deficiency. The impression that the shadow of some incompleteness fell here on the bright light of success of the new quantum mechanics seems, to me, unavoidable.” Pauli’s conclusion remains basically true today.

The PEP was a major theme of the workshop “Theoretical and experimental aspects of the spin-statistics connection and related symmetries” at SpinStat 2008, held in Trieste on 21–25 October at the Stazione Marittima conference centre. Some 60 theoretical and experimental physicists attended, as well as a number of philosophers of science. The aim was to survey recent work that challenges traditional views and to put forward possible new experimental tests, including new theoretical frameworks.

A single framework for discussion

On the theoretical side, several researchers are currently exploring theories that may allow a tiny violation of PEP, such as quon theory, the existence of hidden dimensions, geometric quantization and a new spin-statistics connection in the framework of quantum gravity. Others have done several experiments over the past few years to search for possible small violations of the spin-statistics connection, for both fermions and photons. Thus scientists have recently obtained new limits for the validity of PEP for nuclei, nucleons and electrons, as well as for the validity of Bose–Einstein statistics for photons. These results were presented during the workshop and discussed for the first time in a single framework together with theoretical implications and future perspectives. The aim was to accomplish a “constructive interference” between theorists and experimentalists that could lead towards possible new ideas for nuclear and particle-physics tests of the PEP’s validity, including the interpretation of existing results.

The workshop benefited from the presence of researchers who have devoted a life’s work to the thorough examination of the structure of the spin-statistics connection in the context of quantum mechanics and field theory. In addition, young scientists put forward suggestions and experimental results that may pave the way to interesting future developments.

Oscar W Greenberg of the University of Maryland opened the workshop with a review talk on theoretical developments, with special emphasis on quon theory – which characterizes particles by a parameter q, where q spans the range from –1 to +1 and thus interpolates between fermion and boson – in an effort to develop more general statistics. Greenberg is the originator of this concept and he continues to be a major contributor to its theoretical development, maintaining a high degree of interest in the field. Robert Hilborn of the University of Texas reviewed the past experimental attempts to find a violation. Other theoretical speakers included distinguished scientists such as Stephen Adler, Michael Berry, Aiyalam P Balachandran, Sergio Doplicher, Giancarlo Ghirardi, Nikolaos Mavromatos and Allan Solomon.
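
In the quon framework the interpolation is written directly into the commutation relations, schematically (conventions differ slightly between papers)

a_k\,a_l^\dagger - q\,a_l^\dagger a_k = \delta_{kl}, \qquad -1 \le q \le 1,

so that q = –1 reproduces the fermionic anticommutator, q = +1 the bosonic commutator, and a value of q slightly away from –1 would correspond to a small violation of the exclusion principle.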

The experimental reports included presentations on spectroscopic tests of Bose–Einstein statistics of photons by Dmitry Budker’s group at the University of California and the Lawrence Berkeley National Laboratory, and studies of spin-statistics effects in nuclear decays by Paul Kienle’s group at the Stefan Mayer Institute for Subatomic Physics in Vienna. Other talks included results from the Borexino neutrino experiment and the DAMA/LIBRA dark-matter detector in the Gran Sasso laboratory, the KLOE experiment at Frascati, the NEMO-2 detector in the Fréjus underground laboratory and the dedicated Violation of the Pauli exclusion principle experiment in the Gran Sasso laboratory. Each talk was followed by lively discussions concerning the interpretation of the results. Michela Massimi of University College London closed the workshop with an excellent talk on historical and philosophical issues.

Another highlight was the event held for the general public: a reading of selected parts of the book by George Gamow and Russell Stannard, The New World of Mr Tompkins, where the professor depicted in Gamow’s book was played by a witty Michael Berry from the University of Bristol. This event was a success, especially among the young students who participated so enthusiastically.

Overall, the workshop showed that the field is full of new and interesting ideas. Although nobody expects gross violations of the spin-statistics connection, there could be subtle effects that may point to new physics in a context quite different from that of the LHC.

The workshop was sponsored jointly by the INFN and the University of Trieste. It received generous contributions from the Consorzio per la Fisica, the Hadron Physics initiative (Sixth Framework Programme of the EU) and Regione Friuli–Venezia Giulia.

CERN pulls Strings together https://cerncourier.com/a/cern-pulls-strings-together/ Mon, 08 Dec 2008 01:15:00 +0000 The annual “Strings” conference draws together a large number of active researchers in the field from all over the world.

The post CERN pulls Strings together appeared first on CERN Courier.

]]>

The annual “Strings” conference draws together a large number of active researchers in the field from all over the world. As the largest and most important event on string theory, it aims to review the recent developments for experts, rather than give a comprehensive overview of the field. CERN was an attractive venue for the conference this year, with the imminent start-up of the LHC together with the longer-term Theory Institutes on string phenomenology and black holes taking place just before and after the event. Organized by CERN’s Theory Unit, the universities of Geneva and Neuchâtel, and the ETH Zurich, Strings 2008 attracted more than 400 participants from 36 countries. It opened in the presence of CERN’s management and the rector of the University of Geneva, who also represented the state of Geneva. Appropriately, the first talk was by Gabriele Veneziano, formerly of CERN and one of the initiators of string theory, thanks to the famous amplitude formula that he invented 40 years earlier. There was a welcome reception at the United Nations in Geneva, and the conference banquet was held in the Unimail building at the university.

A framework for unified particle physics

String theory can be seen as a framework for generalizing conventional particle quantum field theory, with applications stretching across a broad range of areas, such as quantum gravity, grand unification, gauge theories, heavy-ion physics, cosmology and black holes. It allows the systematic investigation of many of the important features of such theories by providing a coherent and consistent way of formulating the problems at hand. As Hirosi Ooguri from the California Institute of Technology so aptly said in his summary talk, string theory can be viewed, depending on the application, as a candidate, a model, a tool and/or a language.

The richness of string theory makes it a candidate for a consistent framework that truly unifies all of particle physics, including gravity. It also provides a stage for analysing complicated problems, such as quantum black holes and strongly coupled systems, as in quark–gluon plasma, through the means of idealized, often supersymmetric, models. Moreover, string theory has proved to be an invaluable tool for doing computations in particle physics in an extremely efficient manner. It also often provides a novel language, with which it miraculously transforms seemingly hard problems into simple ones by reformulating them in a “dual” way. This also includes certain hard problems in mathematics that become simple when translated into the language of string theory.


The talks displayed all of these four facets of string theory effectively. Essentially there were five key areas on which the conference focused, roughly reflecting the fields of highest activity and progress during the past year. In addition, there were three talks on the LHC and its physics by the project leader, Lyn Evans; CERN’s chief scientific officer, Jos Engelen; and Oliver Buchmuller from CERN and the CMS experiment. These were intended to educate the string community in down-to-earth physics.

The first area covered was string phenomenology, which uses string theory as a model for, and candidate theory of, the unification of all particles and forces. The various approaches to model building reviewed were mostly of a geometrical nature. That is, many properties of the Standard Model can be translated into geometrical properties of the compactification space that is used to make strings look four-dimensional at low energies. While this translation can be pushed a long way qualitatively, it seems exceedingly difficult technically to go much beyond this stage and obtain predictions that would be testable at the LHC. On the other hand, for the most optimistic case in which the string scale is low (namely of the order of the scale of the weak interactions), concrete predictions of string theory are fully possible, as reported in one of the talks.

Another area, which has become highly visible during the past year, is the computation of certain scattering amplitudes, often in theories with extended supersymmetries and notably in N = 8 supergravity. Extensive computations based on string-inspired methods suggest that this theory may be finite, owing to unexpected cancellations of Feynman diagrams. However, some researchers have suggested that Feynman diagrams might not provide the most efficient way to perform quantum field theory; the results may instead point to the existence of a yet-to-be-discovered dual formulation of the theory that would be much simpler. Other related results concern theories with less supersymmetry, as well as amplitudes of phenomenological relevance, such as multi-gluon scattering amplitudes.

It is well known that string theory is a theory not only of strings but also of membranes and other extended objects. A hot topic of the past year has been the “M-brane mini-revolution”. This deals with a novel description of M-theory membranes and has created some controversy about the meaning of the results. Several talks duly reviewed this subject and it became apparent that the issues had not yet been completely settled.

A key topic of every string conference within the last 10 years has been the gauge theory/gravity duality, which maps ordinary gauge theories to gravitational – i.e., string – theories. This year’s focus was mainly on the connection between systems that are strongly coupled – and in a sense hydrodynamical – and gravity. This leads to a stringy, dual interpretation of certain states in heavy-ion physics, such as the quark–gluon plasma. In particular, a link can be made between the decay of glueball states in QCD and the decay of black holes by Hawking radiation. While these ideas seem to work well on a qualitative level, quantitatively solid results are much harder to obtain because of the strongly coupled nature of the physics involved. The significance of this approach is the subject of ongoing debate and collaboration between heavy-ion physicists and string theorists.

A field of permanent activity and conceptual importance is that of black hole physics, to which string theory has made extremely important contributions during the past few years. As reviews at the conference showed, the identification and counting of microscopic quantum states in stringy toy models has been refined and made more precise, even to the level of quantum corrections. Moreover, fascinating connections between black holes and topological strings have been proposed, and testing those connections has been an important field of activity during the past few years. The results of topological string theory have also had a considerable impact on certain areas of mathematics, and have led to fruitful interactions with mathematicians.

Apart from these five focus areas, other subjects were reviewed at the conference. For example, there was a lecture on loop quantum gravity so that the string community could judge whether there might be connections to this seemingly different approach to quantum gravity.

Both during the conference and afterwards, many participants expressed the view that string theory continues to be a healthy, fascinating and important subject for theoretical work. This is despite the fact that the original main goal, namely to explain the Standard Model of particle physics, appears to be much harder to achieve (if, indeed, achievable at all) than initially hoped. In the final outlook talk, David Gross of the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara, presented a picture of string theory as an umbrella that covers most of theoretical physics, similar to the way in which CERN has emerged as an umbrella for the worldwide community of particle physicists.

The post CERN pulls Strings together appeared first on CERN Courier.

]]>
https://cerncourier.com/a/cern-pulls-strings-together/feed/ 0 Feature The annual “Strings” conference draws together a large number of active researchers in the field from all over the world. https://cerncourier.com/wp-content/uploads/2008/12/CCstr1_10_08-feature.jpg
Introduction to 3+1 Numerical Relativity https://cerncourier.com/a/introduction-to-31-numerical-relativity/ Mon, 20 Oct 2008 08:09:51 +0000 https://preview-courier.web.cern.ch/?p=105034 Starting from a brief introduction to general relativity, this book discusses the different concepts and tools necessary for the fully consistent numerical simulation of relativistic astrophysical systems, with strong and dynamical gravitational fields.

The post Introduction to 3+1 Numerical Relativity appeared first on CERN Courier.

]]>
by Miguel Alcubierre, Oxford University Press Series: International Series of Monographs on Physics, Volume 140. Hardback ISBN 9780199205677 £55 ($110).


An introduction to the modern field of 3+1 numerical relativity, this book has been written so as to be as self-contained as possible, assuming only a basic knowledge of special relativity. Starting from a brief introduction to general relativity, it discusses the different concepts and tools necessary for the fully consistent numerical simulation of relativistic astrophysical systems, with strong and dynamical gravitational fields. The topics discussed in detail include: hyperbolic reductions of the field equations, gauge conditions, the evolution of black hole space-times, relativistic hydrodynamics, gravitational wave extraction and numerical methods. There is also a final chapter with examples of some simple numerical space–times. The book is for graduates and researchers in physics and astrophysics.

The post Introduction to 3+1 Numerical Relativity appeared first on CERN Courier.

]]>
Review Starting from a brief introduction to general relativity, this book discusses the different concepts and tools necessary for the fully consistent numerical simulation of relativistic astrophysical systems, with strong and dynamical gravitational fields. https://cerncourier.com/wp-content/uploads/2022/08/41WmzcrGX9L._SX331_BO1204203200_.jpg
Quantum Field Theory of Non-equilibrium States https://cerncourier.com/a/quantum-field-theory-of-non-equilibrium-states/ Mon, 20 Oct 2008 08:09:50 +0000 https://preview-courier.web.cern.ch/?p=105032 This textbook presents quantum field theoretical applications to systems out of equilibrium.

The post Quantum Field Theory of Non-equilibrium States appeared first on CERN Courier.

]]>
by Jørgen Rammer, Cambridge University Press. Hardback ISBN 9780521874991 £45 ($85). E-book format ISBN 9780511292620, $68.


This textbook presents quantum field theoretical applications to systems out of equilibrium. It introduces the real-time approach to non-equilibrium statistical mechanics and the quantum field theory of non-equilibrium states in general. It offers two ways of learning how to study non-equilibrium states of many-body systems: the mathematical canonical way and an intuitive way using Feynman diagrams. The latter provides an easy introduction to the powerful functional methods of field theory, and the use of Feynman diagrams to study classical stochastic dynamics is considered in detail. The developed real-time technique is applied to study numerous phenomena in many-body systems, and the book includes many exercises to aid self-study.

The post Quantum Field Theory of Non-equilibrium States appeared first on CERN Courier.

]]>
Review This textbook presents quantum field theoretical applications to systems out of equilibrium. https://cerncourier.com/wp-content/uploads/2022/08/31WOV88NE-L._SX346_BO1204203200_.jpg
Elements of String Cosmology https://cerncourier.com/a/elements-of-string-cosmology/ Mon, 20 Oct 2008 08:09:49 +0000 https://preview-courier.web.cern.ch/?p=105030 The first book dedicated to string cosmology, this contains a pedagogical introduction to the basic notions of the subject.

The post Elements of String Cosmology appeared first on CERN Courier.

]]>
By Maurizio Gasperini, Cambridge University Press. Hardback ISBN 9780521868754 £45. E-book format ISBN 9780511332296 $68.


The standard cosmological picture of our universe emerging from a Big Bang leaves open many fundamental questions, which string theory, a unified theory of all forces of nature, should be able to answer. The first book dedicated to string cosmology, this contains a pedagogical introduction to the basic notions of the subject. It describes the new possible scenarios suggested by string theory for the primordial evolution of our universe and discusses the main phenomenological consequences of these scenarios, stressing their differences from each other, and comparing them to the more conventional models of inflation. It is self-contained, and so can be read by astrophysicists with no knowledge of string theory, and high-energy physicists with little understanding of cosmology. Detailed and explicit derivations of all the results presented provide a deeper appreciation of the subject.

The post Elements of String Cosmology appeared first on CERN Courier.

]]>
Review The first book dedicated to string cosmology, this contains a pedagogical introduction to the basic notions of the subject. https://cerncourier.com/wp-content/uploads/2022/08/41XcPvk5gJL.jpg
Searching for the Superworld: A Volume in Honor of Antonino Zichichi on the Occasion of the Sixth Centenary Celebrations of the University of Turin, Italy https://cerncourier.com/a/searching-for-the-superworld-a-volume-in-honor-of-antonino-zichichi-on-the-occasion-of-the-sixth-centenary-celebrations-of-the-university-of-turin-italy/ Mon, 20 Oct 2008 08:09:49 +0000 https://preview-courier.web.cern.ch/?p=105031 These papers represent a must-have collection, not only for their originality, but also for their complete analysis of expected scenarios on the basis of today’s knowledge of physics.

The post Searching for the Superworld: A Volume in Honor of Antonino Zichichi on the Occasion of the Sixth Centenary Celebrations of the University of Turin, Italy appeared first on CERN Courier.

]]>
By Sergio Ferrara and Rudolf M Mössbauer, World Scientific Series in 20th Century Physics, Volume 39. Hardback ISBN 9789812700186 £69 ($128).


The “superworld” is a subject of formidable interest for the immediate future of subnuclear physics to which Antonino Zichichi has contributed with a series of important papers of phenomenological and theoretical nature. These papers represent a must-have collection, not only for their originality, but also for their complete analysis of expected scenarios on the basis of today’s knowledge of physics. The contributions are divided into two parts. The first deals with the problem of the convergence of the three fundamental forces of nature measured by the gauge couplings, with the onset of the energy threshold for the production of the lightest supersymmetric particles and with the existence of a gap between the string scale and the GUT scale. The second deals with the study of a theoretical model capable of including supersymmetry with the minimum number of parameters (possibly one), and agreeing with all the conditions established by string theories – this turns out to be a “one-parameter no-scale supergravity” model whose experimental consequences are investigated for present and future facilities aimed at the discovery of the first example of the superparticle.

The post Searching for the Superworld: A Volume in Honor of Antonino Zichichi on the Occasion of the Sixth Centenary Celebrations of the University of Turin, Italy appeared first on CERN Courier.

]]>
Review These papers represent a must-have collection, not only for their originality, but also for their complete analysis of expected scenarios on the basis of today’s knowledge of physics. https://cerncourier.com/wp-content/uploads/2022/08/9789812700186_p0_v2_s1200x630.jpg
Subatomic Physics (third edition) https://cerncourier.com/a/subatomic-physics-third-edition/ Wed, 13 Aug 2008 09:33:20 +0000 https://preview-courier.web.cern.ch/?p=105108 This is the third and fully updated edition of a classic textbook, which provides an up-to-date and lucid introduction to both particle and nuclear physics.

The post Subatomic Physics (third edition) appeared first on CERN Courier.

]]>
by Ernest M Henley & Alejandro Garcia, World Scientific. Hardback ISBN 9789812700568 £56 ($98). Paperback ISBN 9789812700575 £33 ($58).


This is the third and fully updated edition of a classic textbook, which provides an up-to-date and lucid introduction to both particle and nuclear physics. Topics are introduced with key experiments and their background, encouraging students to think and allowing them to do back-of-the-envelope calculations in a diversity of situations. Suitable for experimental and theoretical physics students at the senior undergraduate and beginning graduate levels, the book covers earlier important experiments and concepts as well as topics of current interest, with extensive use of photographs and figures to convey concepts and experimental data.

The post Subatomic Physics (third edition) appeared first on CERN Courier.

]]>
Review This is the third and fully updated edition of a classic textbook, which provides an up-to-date and lucid introduction to both particle and nuclear physics. https://cerncourier.com/wp-content/uploads/2022/08/71PB6x6c1PL.jpg
Lisa Randall: dreams of warped space-time https://cerncourier.com/a/lisa-randall-dreams-of-warped-space-time/ https://cerncourier.com/a/lisa-randall-dreams-of-warped-space-time/#respond Tue, 08 Jul 2008 10:48:21 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/lisa-randall-dreams-of-warped-space-time/ Antonella Del Rosso speaks to straight-talking physicist Lisa Randall

The post Lisa Randall: dreams of warped space-time appeared first on CERN Courier.

]]>
Lisa Randall is the first female tenured theoretical physicist at Harvard University. This alone would probably be enough to raise the interest of most science journalists, who are all too often confronted with the endless search for a female face who would look good in their newspapers and make science somehow more human to non-scientific readers. Search her name in Google and read articles about her, then read her most recent book, and you realize that she is also one of the small band of physicists who can write popular science books. Then meet her, as I did at CERN, and you discover a no-nonsense person who finds it “normal” to deal with extra dimensions and parallel universes, as well as hidden gravitons and quantum gravity.

Randall has visited CERN many times, staying for several months in 1992–1993, when she worked on B physics and also on ideas in supersymmetry and supersymmetry breaking. These ideas have since evolved, and she is now one of the world’s experts in the theory of extra dimensions, one of the solutions proposed for the puzzling question of quantum gravity. According to these theories, our universe could have extra dimensions beyond the four that we experience – three of space and one of time.


The idea of an extra dimension is simple to state, but how can we picture extra dimensions in our three-dimensional minds? As Randall concedes, explaining the extra dimensions is possible primarily through analogies, such as Edwin Abbott’s Flatland. If you lived on a two-dimensional surface and could see only two dimensions, what would a three-dimensional object become for you? “In order to answer, you would have to explore your object in your two-dimensional view,” she explains. “The slice would be two dimensional but the object would still be three dimensional.” This is to say that, although extra dimensions are difficult to imagine in our limited three-dimensional world, we can nevertheless explore them.

Warping in a universe with extra dimensions would be an amazing discovery, but does Randall expect to find any evidence? The LHC, she explains, could hold the key. “The LHC will allow us to explore an energy scale never reached before – the TeV scale. We know there are questions about this particular scale. We know the simple Higgs theory is incomplete, so there should be something else around. That’s why people think it should be supersymmetry or extra dimensions, something just explaining why the Higgs boson is as light as it is,” she explains. Randall works in particular on the idea of warped geometry. If this is true, experiments at the LHC should see particles that travel in extra dimensions, the mass of which is around the tera-electron-volt scale that the LHC is studying.

One fascinating area of modern physics linked to extra dimensions is that of quantum gravity. Gravity is the best known among the forces that we experience every day, yet there is no theory that can describe it at the quantum level. Gravity also still holds secrets experimentally, because its force-carrying particle, the graviton, remains hidden from view, but Randall’s theories of extra dimensions could shed light here, too.

Could the graviton be found in the additional dimensions, and therefore in the proton–proton collisions at the LHC? “We don’t know for sure,” says Randall, “but the Kaluza–Klein partner of the graviton – the partner of the graviton that travels in extra dimensions – might be accessible.” It seems that even for the theorists leading the field, the theory is a little tricky to understand. “You have one graviton that doesn’t have any mass,” she explains, “and it acts just as a graviton is supposed to act in four dimensions. And you have another graviton that has momentum in the extra dimensions: it will look like a massive graviton according to four-dimensional physics. The particle will have momentum in the fifth dimension and this is the part that we will be able to see.”
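
As a rough orientation (an illustrative textbook formula, not part of the interview), the simplest case of one flat extra dimension compactified on a circle of radius R shows how momentum in the fifth dimension masquerades as mass in four dimensions:

\[ m_n^2 = m_0^2 + \frac{n^2}{R^2}, \qquad n = 0, 1, 2, \dots \]

In the warped geometries that Randall studies the Kaluza–Klein graviton masses are instead set by the warped-down scale, which is why they could plausibly lie near the tera-electron-volt energies probed by the LHC.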


The quantum effects of gravity have also led theorists to talk of the possibility that black holes could be formed at the LHC, but Randall remains sceptical. “I don’t really think we will find black holes at the LHC,” she says. “I think you’d have to get to even higher energy.” It is more likely in her opinion that experiments will see signs of quantum gravity emerging from a low-energy quantum gravity scale in higher dimensions. However, she admits: “If we really were able to have enough energy to see a black hole, it would be exciting. A black hole that you could study would be very interesting.”

Interesting, indeed, but also scary, because black holes have always been described as “matter-eaters”. However, there is nothing to fear. Massive black holes can only be created in the universe by the collapse of massive stars. Such black holes contain enormous amounts of gravitational energy, which pulls in surrounding matter. Given the collision energy at the LHC, only microscopic and rapidly evaporating black holes could be produced in the collisions. Even if this does occur, the black holes will not be harmful: cosmic rays with energies much higher than at the LHC would already have produced many more black holes in their collisions with Earth and other astrophysical objects. The state of our universe is therefore the most powerful proof that there will be no danger from these high-energy collisions, which occur continuously on Earth.

So much for black holes, but I am still full of curiosity about Randall. What, for example, originally sparked her interest in physics? “I actually liked math first more than physics,” she says, “because when I was younger that is what you got introduced to first. I loved applying math a little bit more to the real world – at least what I hope is the real world.” Now, as a leading woman in a male-dominated research field, and as the author of a popular book, Warped Passages, she is the focus of media attention. She finds some of this surprising but notes that it’s not just attention to her but to the field in general. One of the motivations she had for writing her book was that people are excited about the LHC. She saw the chance to give them the opportunity to find out more about what it will do. “These are difficult concepts to express. You could give an easy explanation or you could try to do it more carefully in a book. One of the very rewarding things is that a lot of people who have read my book have said they can’t wait for the LHC; they can’t wait to see what they are going to find. So it is exciting when you give a lecture and thousands of people are there – it’s exciting because you know that so many people are interested.” On the other hand, she finds some of the specific types of reporting disturbing, because it shows how far society still has to go: “We haven’t reached the point where it’s usual for women to be in the field.”

In addition to her work on black holes, gravity and so on, Randall is currently working on ideas of how to look for different models at the LHC, and how to look for heavier objects, such as the graviton, that might decay into energetic top quarks. She is also trying to explore alternative theories. “I’m not sure how far we’ll go in things like supersymmetry,” she says, “I’m playing around with models and ways to search for it at the LHC.”

Yes, physics is about playing around with ideas – ideas that nobody has ever had before but that have to be tested experimentally. The LHC will shed light on some of the current mysteries, and Randall, who like many others has played around with ideas for years, can’t wait for this machine to produce the experimental answers.

• For Lisa Randall’s lectures at CERN in March 2008 on “Warped Extra-Dimensional Opportunities and Signatures”, see http://indico.cern.ch/conferenceDisplay.py?confId=28978, http://indico.cern.ch/conferenceDisplay.py?confId=28979 and http://indico.cern.ch/conferenceDisplay.py?confId=28980.

The post Lisa Randall: dreams of warped space-time appeared first on CERN Courier.

]]>
https://cerncourier.com/a/lisa-randall-dreams-of-warped-space-time/feed/ 0 Feature Antonella Del Rosso speaks to straight-talking physicist Lisa Randall https://cerncourier.com/wp-content/uploads/2008/07/CCElis1_07-08.jpg
QCD: string theory meets collider physics https://cerncourier.com/a/qcd-string-theory-meets-collider-physics/ https://cerncourier.com/a/qcd-string-theory-meets-collider-physics/#respond Thu, 13 Mar 2008 13:31:37 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/qcd-string-theory-meets-collider-physics/ Two fields of physics met at the 2007 DESY theory workshop

The post QCD: string theory meets collider physics appeared first on CERN Courier.

]]>
With the title Quantum Chromodynamics – String Theory meets Collider Physics, the 2007 DESY theory workshop brought together a distinguished list of speakers to present and discuss recent advances and novel ideas in both fields. Among them was Juan Maldacena from the Institute for Advanced Study, Princeton, pioneer of the interrelationship between gauge theory and string theory, who also gave the Heinrich Hertz lecture for the general public.


From a dynamical point of view, quantum chromodynamics (QCD), the theory of strong interactions, represents the most difficult sector of the Standard Model. Mastering the complexities of strong interactions is essential for a successful search for new physics at the LHC. In addition, the relevance of the QCD phase transition for the early evolution of our universe has ignited an intense interest in heavy-ion collisions, both at RHIC in Brookhaven and at the LHC at CERN. The QCD community is thus deeply engaged in investigations to further our understanding of QCD, to reach the highest accuracy in its theoretical predictions and to advance existing computational tools.


String theory, initially considered a promising theoretical model for strong interactions, was long believed incapable of capturing, in detail, the correct high-energy behaviour. In 1997, however, Maldacena overcame a prominent obstacle for applications of string theory to gauge physics. He proposed describing strongly coupled four-dimensional (supersymmetric) gauge theories through closed strings in a carefully chosen five-dimensional background. In fact, equivalences (dualities in modern parlance) between gauge and string theories emerge, provided that the strings propagate in a five-dimensional space of constant negative curvature. Such a geometry is called an anti-de Sitter (AdS) space, and the duality involving strings in an AdS background became known as the AdS/CFT correspondence, where CFT denotes conformal field theory. If the duality turns out to be true, string-theory techniques can give access to strongly coupled gauge physics, a regime that so far only lattice gauge theory has been able to explore. Though a string theory dual to real QCD has still to be found, AdS/CFT dualities are beginning to bring string theory closer to the “real world” of particle physics.
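
For concreteness (a standard textbook expression, not spelled out at the workshop), the five-dimensional AdS geometry that appears in the correspondence can be written in Poincaré coordinates as

\[ ds^2 = \frac{R^2}{z^2}\left(\eta_{\mu\nu}\,dx^\mu dx^\nu + dz^2\right), \]

where R is the curvature radius, x^μ are the four-dimensional coordinates on which the gauge theory lives and z > 0 is the extra “holographic” direction; the boundary z → 0 is where the dual gauge theory is defined.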

With the duality conjecture as its focus, the DESY workshop covered the full spectrum of research topics that have entered this interdisciplinary endeavour. Topics ranged from the role of QCD in the evaluation of experimental data and in Monte Carlo simulations to string theory calculations in AdS spaces.

To begin with the more practical side, QCD clearly dominates the daily analysis of data from RHIC, HERA at DESY, and Fermilab’s Tevatron. Tom LeCompte of Argonne presented results from the Tevatron, and Uta Klein of Liverpool looked at what we have learned from HERA. The results relating to parton densities will be of utmost importance for measurements at the LHC, not least in the kinematic region of small x, which was among the highlights of HERA physics. Diffraction – one of the puzzles for the HERA community – continues to demand attention at the LHC, in particular as a clean channel for the discovery of new physics, as Brian Cox of the University of Manchester explained.

Monte Carlo simulations represent an indispensable tool for analysing experimental data, and existing models need steady improvement as we approach the new energy regime at the LHC. Gösta Gustafson of Lund and Stefan Gieseke of Karlsruhe described the progress that is being made in this respect. Topics of particular current interest include a careful treatment of multiple parton interactions and the implementation of next-to-leading-order (NLO) QCD matrix elements in Monte Carlo programs.

At present, lattice calculations still offer the most reliable framework for studies of QCD beyond the weak coupling limit. Among other issues, the workshop addressed the calculation of low-energy parameters such as hadron masses and decay constants. In this context, Federico Farchioni of Münster noted that the limit of small quark masses calls for careful attention, and Philipp Hägler of the Technische Universität München discussed developments in calculating hadron structure from the lattice. Another important direction concerns the QCD phase structure and, in particular, accurate estimates of the phase-transition temperature, Tc, as Akira Ukawa of Tsukuba explained. Lattice gauge theories also allow the investigation of connections with string theory. Michael Teper of Oxford showed how, once the dependence of gauge theory on the number of colours, Nc, is sufficiently well controlled, it may be possible to determine the energy spectrum of closed strings in the limit of large ‘t Hooft coupling.

QCD perturbation theory

NLO and next-to-NLO calculations in QCD perturbation theory are needed to derive precise expressions for cross-sections – they are crucial in describing experimental data at the existing colliders, and indispensable input for the discrimination of new physics from mere QCD background at the LHC. The necessary computations require a detailed understanding of perturbative QCD, as Werner Vogelsang from Brookhaven National Laboratory discussed. For example, the theoretical foundation of kt factorization and of unintegrated parton densities, along with their use in hadron–hadron collisions, is attracting much attention. For higher-order QCD calculations, Alexander Mitov of DESY, Zeuthen, described how advanced algorithms are being developed and applied.

Higher-order computations in QCD are becoming one of the most prominent examples of an extremely profitable bridge between gauge and string theories. Multiparton final states at the LHC have sparked interest in perturbative gauge theory computations of scattering amplitudes that involve a large number of incoming and/or outgoing partons. At the same time there is an urgent need for higher-loop results, which, in view of the rapidly growing number of Feynman diagrams, seem to be out of reach for more conventional approaches. Recent investigations in this direction have unravelled new structures, such as in the perturbative expansion of multigluon amplitudes.

In a few special cases, such as four-gluon amplitudes in N = 4 supersymmetric Yang–Mills theory, these investigations have led to highly non-trivial conjectures for all loop expressions. This was the topic of talks by David Dunbar of Swansea and Lance Dixon of Stanford. According to the AdS/CFT duality, the strong coupling behaviour of these amplitudes should be calculable within string theory. Indeed, Maldacena described how the relevant string-theory computation of four-gluon amplitudes has been performed, yielding results that agree with the gauge theoretic prediction. On the gauge theory side, a conjecture for a larger number of gluons has also been formulated. Maldacena noted that this is currently contested both by string theoretic arguments and more refined gauge theory calculations.

The expressions for four-gluon amplitudes contain a certain universal function, the so-called cusp anomalous dimension, which can again be computed at weak (gauge theory) and strong (supergravity) coupling. Gleb Arutyunov of Utrecht showed how this particular quantity is also being investigated using modern techniques of integrable systems. Remarkably, as Niklas Beisert of the Albert Einstein Institute in Golm explained, a formula for the cusp anomalous dimension in N = 4 super-Yang–Mills theory has recently been proposed that interpolates correctly between the known weak and strong coupling expansions. In addition, Vladimir Braun of Regensburg and Lev Lipatov of Hamburg and St Petersburg described how integrability features in the high-energy regime of QCD, both in the short distance and the small-x limit. The integrable structures have immediate applications to data analysis. Yuri Kovchegov of Ohio also pointed out that low-x physics in QCD, with all the complexities appearing in the NLO corrections, might possess close connections with the supersymmetric relatives of QCD. The higher order generalization of the Balitsky–Fadin–Kuraev–Lipatov pomeron, which is expected to correspond to the graviton, is of particular interest. In this way, studies of the high-energy regime seem to carry the seeds for new relations to string theory.

Another close contact between string theory and QCD appears at temperatures near and above the QCD phase transition. Heavy-ion experiments that probe this kinematic region are currently taking place at RHIC and will soon be carried out at the LHC. CERN’s Urs Wiedemann introduced the topic, and John Harris of Yale presented results and discussed their interpretation. The analysis of RHIC data requires somewhat unusual theoretical concepts, including, for example, QCD hydrodynamics. As in any other system of fluid mechanics, viscosity is an important parameter used to characterize quark–gluon plasmas, but its measured value cannot be explained through perturbative QCD. This suggests that the quark–gluon plasma at RHIC is strongly coupled, so string theory should be able to predict properties such as the plasma’s viscosity through the AdS/CFT correspondence. David Mateos of Santa Barbara and Hong Liu of Boston showed that the string theoretic computation of viscosity and other quantities is indeed possible, based on investigations of gravity in a thermal black-hole background. It leads to values that are intriguingly close to experimental data.
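
The best-known quantitative statement of this kind (a standard result quoted here for context, not a new claim from the workshop) is the Kovtun–Son–Starinets value of the ratio of shear viscosity to entropy density for a large class of strongly coupled gauge theories with gravity duals,

\[ \frac{\eta}{s} = \frac{\hbar}{4\pi k_B}, \]

which is far below weak-coupling estimates and strikingly close to the small values inferred from RHIC data.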

String theory is often perceived as an abstract theoretical framework, far away from the physics of the real world and experimental verification. When considered as a theory of strongly coupled gauge physics, however, it is beginning to slip into a new role – one that offers novel views of qualitative features of gauge theory and, in some cases, even quantitative predictions. The QCD community, on the other hand, is beginning to realize that its own tremendous efforts may profit from the novel alliance with string theory. The participants of the 2007 DESY Theory workshop witnessed this recent shift, through lively discussions and numerous excellent talks that successfully bridged the two communities.

The post QCD: string theory meets collider physics appeared first on CERN Courier.

]]>
https://cerncourier.com/a/qcd-string-theory-meets-collider-physics/feed/ 0 Feature Two fields of physics met at the 2007 DESY theory workshop https://cerncourier.com/wp-content/uploads/2008/03/CCqcd1_03_08-feature.jpg
Classical Charged Particles (third edition) https://cerncourier.com/a/classical-charged-particles-third-edition/ Thu, 13 Mar 2008 09:33:19 +0000 https://preview-courier.web.cern.ch/?p=105101 Originally written in 1964, this text is a study of the classical theory of charged particles.

The post Classical Charged Particles (third edition) appeared first on CERN Courier.

]]>
by Fritz Rohrlich, World Scientific. Hardback ISBN 9789812700049 £33 ($58).


Originally written in 1964, this text is a study of the classical theory of charged particles. Many applications treat electrons as point particles, but there is nevertheless a widespread belief that the theory is beset with various difficulties, such as an infinite electrostatic self-energy and an equation of motion that allows physically meaningless solutions. The classical theory of charged particles has meanwhile been largely ignored and left incomplete. Despite the efforts of great physicists such as Lorentz, Poincaré and Dirac, it is usually regarded as a “lost cause”. Thanks to more recent progress, however, the author has been able to resolve the various problems and to complete this unfinished theory successfully.

The post Classical Charged Particles (third edition) appeared first on CERN Courier.

]]>
Review Originally written in 1964, this text is a study of the classical theory of charged particles. https://cerncourier.com/wp-content/uploads/2022/08/51LpaOUbewS._SX328_BO1204203200_.jpg
Superstrings reveal the interior structure of a black hole https://cerncourier.com/a/superstrings-reveal-the-interior-structure-of-a-black-hole/ https://cerncourier.com/a/superstrings-reveal-the-interior-structure-of-a-black-hole/#respond Fri, 15 Feb 2008 13:52:59 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/superstrings-reveal-the-interior-structure-of-a-black-hole/ A research group at KEK has succeeded in calculating the state inside a black hole using computer simulations based on superstring theory.

The post Superstrings reveal the interior structure of a black hole appeared first on CERN Courier.

]]>
A research group at KEK has succeeded in calculating the state inside a black hole using computer simulations based on superstring theory. The calculations confirmed for the first time that the temperature dependence of the energy inside a black hole agrees with the power-law behaviour expected from calculations based on Stephen Hawking’s theory of black-hole radiation. The result demonstrates that the behaviour of elementary particles as a collection of strings in superstring theory can explain thermodynamical properties of black holes.


In 1974, Stephen Hawking at Cambridge showed theoretically that black holes are not entirely black. A black hole in fact emits light and particles from its surface, so that it shrinks little by little. Since then, physicists have suspected that black holes should have a certain interior structure, but they have been unable to describe the state inside a black hole using general relativity, as the curvature of space–time becomes so large towards the centre of the hole that quantum effects make the theory no longer applicable. Superstring theory, however, offers the possibility of bringing together general relativity and quantum mechanics in a consistent manner, so many theoretical physicists have been investigating whether this theory can describe the interior of a black hole.
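
The temperature that Hawking associated with this radiation (a standard formula, included here for orientation) is inversely proportional to the black hole’s mass,

\[ T_H = \frac{\hbar c^3}{8\pi G M k_B}, \]

so a black hole radiates more strongly, and evaporates faster, as it loses mass.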

Jun Nishimura and colleagues at KEK established a method that efficiently treats the oscillations of elementary strings according to their frequency. They used the Hitachi SR11000 model K1 supercomputer installed at KEK in March 2006 to calculate the thermodynamical behaviour of the collection of strings inside a black hole. The results showed that as the temperature decreased, the simulation reproduced the behaviour of a black hole as predicted by Hawking’s theory (figure 1).


This demonstrates that the mysterious thermodynamical properties of black holes can be explained by a collection of strings fluctuating inside. The result also indicates that superstring theory will develop further to play an important role in solving problems such as the evaporation of black holes and the state of the early universe.

The post Superstrings reveal the interior structure of a black hole appeared first on CERN Courier.

]]>
https://cerncourier.com/a/superstrings-reveal-the-interior-structure-of-a-black-hole/feed/ 0 News A research group at KEK has succeeded in calculating the state inside a black hole using computer simulations based on superstring theory. https://cerncourier.com/wp-content/uploads/2008/02/CCnew6_02_08.jpg
From BCS to the LHC https://cerncourier.com/a/from-bcs-to-the-lhc/ Mon, 21 Jan 2008 09:54:15 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/from-bcs-to-the-lhc/ Steven Weinberg reflects on spontaneous symmetry breaking - an idea particle physicists learnt from Bardeen, Cooper and Schrieffer's theory of superconductivity.

The post From BCS to the LHC appeared first on CERN Courier.

]]>
It was a little odd for me, a physicist whose work has been mainly on the theory of elementary particles, to be invited to speak at a meeting of condensed-matter physicists celebrating a great achievement in their field. It is not only that there is a difference in the subjects that we explore. There are deep differences in our aims, in the kinds of satisfaction that we hope to get from our work.

Condensed-matter physicists are often motivated to deal with phenomena because the phenomena themselves are intrinsically so interesting. Who would not be fascinated by weird things, such as superconductivity, superfluidity, or the quantum Hall effect? On the other hand, I don’t think that elementary-particle physicists are generally very excited by the phenomena they study. The particles themselves are practically featureless, every electron looking tediously just like every other electron.

Another aim of condensed-matter physics is to make discoveries that are useful. In contrast, although elementary-particle physicists like to point to the technological spin-offs from elementary-particle experimentation, and these are real, this is not the reason that we want these experiments to be done, and the knowledge gained by these experiments has no foreseeable practical applications.

Most of us do elementary-particle physics neither because of the intrinsic interestingness of the phenomena that we study, nor because of the practical importance of what we learn, but because we are pursuing a reductionist vision. All of the properties of ordinary matter are what they are because of the principles of atomic and nuclear physics, which are what they are because of the rules of the Standard Model of elementary particles, which are what they are because…well, we don’t know, this is the reductionist frontier, which we are currently exploring.

I think that the single most important thing accomplished by the theory of John Bardeen, Leon Cooper, and Robert Schrieffer (BCS) was to show that superconductivity is not part of the reductionist frontier (Bardeen et al. 1957). Before BCS this was not so clear. For instance, in 1933 Walter Meissner raised the question of whether electric currents in superconductors are carried by the known charged particles, electrons and ions. The great thing that Bardeen, Cooper, and Schrieffer showed was that no new particles or forces had to be introduced to understand superconductivity. According to a book on superconductivity that Cooper showed me, many physicists were even disappointed that “superconductivity should, on the atomistic scale, be revealed as nothing more than a footling small interaction between electrons and lattice vibrations”. (Mendelssohn 1966).


The claim of elementary-particle physicists to be leading the exploration of the reductionist frontier has at times produced resentment among condensed-matter physicists. (This was not helped by a distinguished particle theorist, who was fond of referring to condensed-matter physics as “squalid state physics”.) This resentment surfaced during the debate over the funding of the Superconducting Super Collider (SSC). I remember that Phil Anderson and I testified in the same Senate committee hearing on the issue, he against the SSC and I for it. His testimony was so scrupulously honest that I think it helped the SSC more than it hurt it. What really did hurt was a statement opposing the SSC by a condensed-matter physicist who happened at the time to be the president of the American Physical Society. As everyone knows, the SSC project was cancelled, and now we are waiting for the LHC at CERN to get us moving ahead again in elementary-particle physics.

During the SSC debate, Anderson and other condensed-matter physicists repeatedly made the point that the knowledge gained in elementary-particle physics would be unlikely to help them to understand emergent phenomena like superconductivity. This is certainly true, but I think beside the point, because that is not why we are studying elementary particles; our aim is to push back the reductive frontier, to get closer to whatever simple and general theory accounts for everything in nature. It could be said equally that the knowledge gained by condensed-matter physics is unlikely to give us any direct help in constructing more fundamental theories of nature.

So what business does a particle physicist like me have at a celebration of the BCS theory? (I have written just one paper about superconductivity, a paper of monumental unimportance, which was treated by the condensed-matter community with the indifference it deserved.) Condensed-matter physics and particle physics are relevant to each other, despite everything I have said. This is because, although the knowledge gained in elementary-particle physics is not likely to be useful to condensed-matter physicists, or vice versa, experience shows that the ideas developed in one field can prove very useful in the other. Sometimes these ideas become transformed in translation, so that they even pick up a renewed value to the field in which they were first conceived.

The example that concerns me is an idea that elementary-particle physicists learnt from condensed-matter theory – specifically from the BCS theory. It is the idea of spontaneous symmetry breaking.

Spontaneous symmetry breaking

In particle physics we are particularly interested in the symmetries of the laws of nature. One of these symmetries is invariance of the laws of nature under the symmetry group of three-dimensional rotations, or in other words, invariance of the laws that we discover under changes in the orientation of our measuring apparatus.

When a physical system does not exhibit all the symmetries of the laws by which it is governed, we say that these symmetries are spontaneously broken. A very familiar example is spontaneous magnetization. The laws governing the atoms in a magnet are perfectly invariant under three-dimensional rotations, but at temperatures below a critical value, the spins of these atoms spontaneously line up in some direction, producing a magnetic field. In this case, and as often happens, a subgroup is left invariant: the two-dimensional group of rotations around the direction of magnetization.

Now to the point. A superconductor of any kind is nothing more or less than a material in which a particular symmetry of the laws of nature, electromagnetic gauge invariance, is spontaneously broken. This is true of high-temperature superconductors, as well as the more familiar superconductors studied by BCS. The symmetry group here is the group of two-dimensional rotations. These rotations act on a two-dimensional vector, whose two components are the real and imaginary parts of the electron field, the quantum mechanical operator that in quantum field theories of matter destroys electrons. The rotation angle of the broken symmetry group can vary with location in the superconductor, and then the symmetry transformations also affect the electromagnetic potentials, a point to which I will return.
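
In equations (a minimal sketch with a convenient choice of units and charge convention, not taken from the talk), the symmetry acts on the electron field ψ and the electromagnetic potential A_μ as

\[ \psi(x) \to e^{i\alpha(x)}\,\psi(x), \qquad A_\mu(x) \to A_\mu(x) + \frac{1}{e}\,\partial_\mu \alpha(x), \]

so that the pair (Re ψ, Im ψ) is rotated through the angle α(x), which may vary from point to point in the superconductor.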


The symmetry breaking in a superconductor leaves unbroken a rotation by 180°, which simply changes the sign of the electron field. In consequence of this spontaneous symmetry breaking, products of any even number of electron fields have non-vanishing expectation values in a superconductor, though a single electron field does not. All of the dramatic exact properties of superconductors – zero electrical resistance, the expelling of magnetic fields from superconductors known as the Meissner effect, the quantization of magnetic flux through a thick superconducting ring, and the Josephson formula for the frequency of the AC current at a junction between two superconductors with different voltages – follow from the assumption that electromagnetic gauge invariance is broken in this way, with no need to inquire into the mechanism by which the symmetry is broken.
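
Two of these exact consequences can be stated compactly (standard formulas, quoted here for reference): the magnetic flux through a thick superconducting ring and the Josephson frequency are

\[ \Phi = n\,\frac{h}{2e} \;\; (n \in \mathbb{Z}), \qquad \nu = \frac{2eV}{h}, \]

where the factor 2e reflects the unbroken 180° subgroup, that is, the fact that the condensate carries twice the electron charge.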

Condensed-matter physicists often trace these phenomena to the appearance of an “order parameter”, the non-vanishing mean value of the product of two electron fields, but I think this is misleading. There is nothing special about two electron fields; one might just as well take the order parameter as the product of three electron fields and the complex conjugate of another electron field. The important thing is the broken symmetry, and the unbroken subgroup.

It may then come as a surprise that spontaneous symmetry breaking is mentioned nowhere in the seminal paper of Bardeen, Cooper and Schrieffer. Their paper describes a mechanism by which electromagnetic gauge invariance is in fact broken, but they derived the properties of superconductors from their dynamical model, not from the mere fact of broken symmetry. I am not saying that Bardeen, Cooper, and Schrieffer did not know of this spontaneous symmetry breaking. Indeed, there was already a large literature on the apparent violation of gauge invariance in phenomenological theories of superconductivity, the fact that the electric current produced by an electromagnetic field in a superconductor depends on a quantity known as the vector potential, which is not gauge invariant. But their attention was focused on the details of the dynamics rather than the symmetry breaking.

This is not just a matter of style. As BCS themselves made clear, their dynamical model was based on an approximation, that a pair of electrons interact only when the magnitude of their momenta is very close to a certain value, known as the Fermi surface. This leaves a question: How can you understand the exact properties of superconductors, like exactly zero resistance and exact flux quantization, on the basis of an approximate dynamical theory? It is only the argument from exact symmetry principles that can fully explain the remarkable exact properties of superconductors.

Though spontaneous symmetry breaking was not emphasized in the BCS paper, the recognition of this phenomenon produced a revolution in elementary-particle physics. The reason is that (with certain qualification, to which I will return), whenever a symmetry is spontaneously broken, there must exist excitations of the system with a frequency that vanishes in the limit of large wavelength. In elementary-particle physics, this means a particle of zero mass.
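
The link between the two statements is the relativistic dispersion relation (a one-line reminder, not part of the original argument as written here):

\[ E(p) = \sqrt{p^2 c^2 + m^2 c^4} \;\to\; mc^2 \quad \text{as } p \to 0, \]

so an excitation whose frequency vanishes in the long-wavelength limit corresponds, in a relativistic theory, to a particle of zero mass.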

The first clue to this general result was a remark in a 1960 paper by Yoichiro Nambu, that just such collective excitations in superconductors play a crucial role in reconciling the apparent failure of gauge invariance in a superconductor with the exact gauge invariance of the underlying theory governing matter and electromagnetism. Nambu speculated that these collective excitations are a necessary consequence of this exact gauge invariance.


A little later, Nambu put this idea to good use in particle physics. In nuclear beta decay an electron and neutrino (or their antiparticles) are created by currents of two different kinds flowing in the nucleus, known as vector and axial vector currents. It was known that the vector current was conserved, in the same sense as the ordinary electric current. Could the axial current also be conserved?

The conservation of a current is usually a symptom of some symmetry of the underlying theory, and holds whether or not the symmetry is spontaneously broken. For the ordinary electric current, this symmetry is electromagnetic gauge invariance. Likewise, the vector current in beta decay is conserved because of the isotopic spin symmetry of nuclear physics. One could easily imagine several different symmetries, of a sort known as chiral symmetries, that would entail a conserved axial vector current. However, it seemed that any such chiral symmetries would imply either that the nucleon mass is zero, which is certainly not true, or that there must exist a triplet of massless strongly interacting particles of zero spin and negative parity, which isn’t true either. These two possibilities simply correspond to the two possibilities that the symmetry, whatever it is, either is not, or is, spontaneously broken, not just in some material like a superconductor, but even in empty space.

Nambu proposed that there is indeed such a symmetry, and that it is spontaneously broken in empty space; but the symmetry, in addition to being spontaneously broken, is not exact to begin with, so the particle of zero spin and negative parity required by the symmetry breaking is not massless, only much lighter than other strongly interacting particles. This light particle, he recognized, is nothing but the pion, the lightest and first discovered of all the mesons. In a subsequent paper with Giovanni Jona-Lasinio, Nambu presented an illustrative theory in which, with some drastic approximations, a suitable chiral symmetry was found to be spontaneously broken, and in consequence the light pion appeared as a bound state of a nucleon and an antinucleon.

So far, there was no proof that broken exact symmetries always entail exactly massless particles, just a number of examples of approximate calculations in specific theories. In 1961 Jeffrey Goldstone gave some more examples of this sort, and a hand-waving proof that this was a general result. Such massless particles are today known as Goldstone bosons, or Nambu–Goldstone bosons. Soon after, Goldstone, Abdus Salam and I made this into a rigorous and apparently quite general theorem.

Cosmological fluctuations

This theorem has applications in many branches of physics. One is cosmology. You may know that today the observation of fluctuations in the cosmic microwave background are being used to set constraints on the nature of the exponential expansion, known as inflation, that is widely believed to have preceded the radiation-dominated Big Bang. But there is a problem here. In between the end of inflation and the time that the microwave background that we observe was emitted, there intervened a number of events that are not at all understood: the heating of the universe after inflation, the production of baryons, the decoupling of cold dark matter, and so on. So how is it possible to learn anything about inflation by studying radiation that was emitted long after inflation, when we don’t understand what happened in between? The reason that we can get away with this is that the cosmological fluctuations now being studied are of a type, known as adiabatic, that can be regarded as the Goldstone excitations required by a symmetry, related to general co-ordinate invariance, that is spontaneously broken by the space–time geometry. The physical wavelengths of these cosmological fluctuations were stretched out by inflation so much that they were very large during the epochs when things were happening that we don’t understand, so they then had zero frequency, which means that the amplitude of these fluctuations was not changing, so that the value of the amplitude relatively close to the present tells us what it was during inflation.
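
The technical statement behind this (a standard result of cosmological perturbation theory, added here for clarity) is that for adiabatic perturbations the curvature perturbation ζ is conserved once a mode’s wavelength is stretched far outside the horizon,

\[ \dot{\zeta} \simeq 0 \qquad \text{for } k \ll aH, \]

so whatever happened during reheating, baryogenesis and dark-matter decoupling, the amplitude seen in the microwave background still records its value at the end of inflation.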


But in particle physics, this theorem was at first seen as a disappointing result. There was a crazy idea going around, which I have to admit that at first I shared, that somehow the phenomenon of spontaneous symmetry breaking would explain why the symmetries being discovered in strong-interaction physics were not exact. Werner Heisenberg continued to believe this into the 1970s, when everyone else had learned better.

The prediction of new massless particles, which were ruled out experimentally, seemed in the early 1960s to close off this hope. But it was a false hope anyway. Except under special circumstances, a spontaneously broken symmetry does not look at all like an approximate unbroken symmetry; it manifests itself in the masslessness of spin-zero bosons, and in details of their interactions. Today we understand approximate symmetries such as isospin and chiral invariance as consequences of the fact that some quark masses, for some unknown reason, happen to be relatively small.

Though based on a false hope, this disappointment had an important consequence. Peter Higgs, Robert Brout and François Englert, and Gerald Guralnik, Dick Hagen and Tom Kibble were all led to look for, and then found, an exception to the theorem of Goldstone, Salam and me. The exception applies to theories in which the underlying physics is invariant under local symmetries, symmetries whose transformations, like electromagnetic gauge transformations, can vary from place to place in space and time. (This is in contrast with the chiral symmetry associated with the axial vector current of beta decay, which applies only when the symmetry transformations are the same throughout space–time.) For each local symmetry there must exist a vector field, like the electromagnetic field, whose quanta would be massless if the symmetry was not spontaneously broken. The quanta of each such field are particles with helicity (the component of angular momentum in the direction of motion) equal in natural units to +1 or –1. But if the symmetry is spontaneously broken, these two helicity states join up with the helicity-zero state of the Goldstone boson to form the three helicity states of a massive particle of spin one. Thus, as shown by Higgs, Brout and Englert, and Guralnik, Hagen and Kibble, when a local symmetry is spontaneously broken, neither the vector particles with which the symmetry is associated nor the Nambu–Goldstone particles produced by the symmetry breaking have zero mass.
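The degree-of-freedom bookkeeping can be made explicit in the simplest (Abelian) toy model; the following is an illustrative sketch in standard notation, not a piece of the electroweak theory itself:

\[
  \mathcal{L} = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu} + |D_\mu\phi|^2 - V(\phi),
  \qquad D_\mu = \partial_\mu - i e A_\mu .
\]

If the potential gives the scalar a vacuum expectation value \(\langle\phi\rangle = v/\sqrt{2}\), the kinetic term \(|D_\mu\phi|^2\) contains \(\tfrac{1}{2}e^2 v^2 A_\mu A^\mu\), i.e. a vector mass \(m_A = e v\): the two helicity states of the massless gauge boson and the single would-be Goldstone boson reassemble into the three polarizations of a massive spin-one particle.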

This was actually argued earlier by Anderson, on the basis of the example provided by the BCS theory. But the BCS theory is non-relativistic, and the Lorentz invariance that is characteristic of special relativity had played a crucial role in the theorem of Goldstone, Salam and me, so Anderson’s argument was generally ignored by particle theorists. In fact, Anderson was right: the reason for the exception noted by Higgs et al. is that it is not possible to quantize a theory with a local symmetry in a way that preserves both manifest Lorentz invariance and the usual rules of quantum mechanics, including the requirement that probabilities be positive. In fact, there are two ways to quantize theories with local symmetries: one way that preserves positive probabilities but loses manifest Lorentz invariance, and another that preserves manifest Lorentz invariance but seems to lose positive probabilities, so in fact these theories actually do respect both Lorentz invariance and positive probabilities; they just don’t respect our theorem.

Effective field theories

The appearance of mass for the quanta of the vector bosons in a theory with local symmetry re-opened an old proposal of Chen Ning Yang and Robert Mills, that the strong interactions might be produced by the vector bosons associated with some sort of local symmetry, more complicated than the familiar electromagnetic gauge invariance. This possibility was specially emphasized by Brout and Englert. It took a few years for this idea to mature into a specific theory, which then turned out not to be a theory of strong interactions.

Perhaps the delay was because the earlier idea of Nambu, that the pion was the nearly massless boson associated with an approximate chiral symmetry that is not a local symmetry, was looking better and better. I was very much involved in this work, and would love to go into the details, but that would take me too far from BCS. I’ll just say that, from the effort to understand processes involving any number of low-energy pions beyond the lowest order of perturbation theory, we became comfortable with the use of effective field theories in particle physics. The mathematical techniques developed in this work in particle physics were then used by Joseph Polchinski and others to justify the approximations made by BCS in their work on superconductivity.

The story of the physical application of spontaneously broken local symmetries has often been told, by me and others, and I don’t want to take much time on it here, but I can’t leave it out altogether because I want to make a point about it that will take me back to the BCS theory. Briefly, in 1967 I went back to the idea of a theory of strong interactions based on a spontaneously broken local symmetry group, and right away, I ran into a problem: the subgroup consisting of ordinary isospin transformations is not spontaneously broken, so there would be a massless vector particle associated with these transformations with the spin and charges of the ρ meson. This, of course, was in gross disagreement with observation; the ρ meson is neither massless nor particularly light.


Then it occurred to me that I was working on the wrong problem. What I should have been working on were the weak nuclear interactions, like beta decay. There was just one natural choice for an appropriate local symmetry, and when I looked back at the literature I found that the symmetry group I had decided on was one that had already been proposed in 1961 by Sheldon Glashow, though not in the context of an exact spontaneously broken local symmetry. (I found later that the same group had also been considered by Salam and John Ward.) Even though it was now exact, the symmetry when spontaneously broken would yield massive vector particles, the charged W particles that had been the subject of theoretical speculation for decades, and a neutral particle, which I called the Z particle, to mediate a “neutral current” weak interaction, which had not yet been observed. The same symmetry breaking also gives mass to the electron and other leptons, and in a simple extension of the theory, to the quarks. This symmetry group contained electromagnetic gauge invariance, and since this subgroup is clearly not spontaneously broken (except in superconductors), the theory requires a massless vector particle, but it is not the ρ meson, it is the photon, the quantum of light. This theory, which became known as the electroweak theory, was also proposed independently in 1968 by Salam.
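In the now-standard notation (a summary, with g and g′ the SU(2) and U(1) couplings, θ_W the weak mixing angle and v ≈ 246 GeV the vacuum expectation value – numbers and symbols not quoted in the talk), the symmetry breaking gives

\[
  m_W = \tfrac{1}{2} g v, \qquad
  m_Z = \tfrac{1}{2}\sqrt{g^2 + g'^2}\,v = \frac{m_W}{\cos\theta_W}, \qquad
  m_\gamma = 0 ,
\]

and it is the unbroken electromagnetic subgroup that keeps the photon massless.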

The mathematical consistency of the theory, which Salam and I had suggested but not proved, was shown in 1971 by Gerard ‘t Hooft; neutral current weak interactions were found in 1973; and the W and Z particles were discovered at CERN a decade later. Their detailed properties are just those expected according to the electroweak theory.

There was (and still is) one outstanding issue: just how is the local electroweak symmetry broken? In the BCS theory, the spontaneous breakdown of electromagnetic gauge invariance arises because of attractive forces between electrons near the Fermi surface. These forces don’t have to be strong; the symmetry is broken however weak these forces may be. But this feature occurs only because of the existence of a Fermi surface, so in this respect the BCS theory is a misleading guide for particle physics. In the absence of a Fermi surface, dynamical spontaneous symmetry breakdown requires the action of strong forces. There are no forces acting on the known quarks and leptons that are anywhere strong enough to produce the observed breakdown of the local electroweak symmetry dynamically, so Salam and I did not assume a dynamical symmetry breakdown; instead we introduced elementary scalar fields into the theory, whose vacuum expectation values in the classical approximation would break the symmetry.
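The simplest realization of this choice – sketched here in textbook form, with the usual caveat that it merely stands in for whatever breaks the symmetry in nature – is a scalar potential of the "Mexican hat" type:

\[
  V(\phi) = -\mu^2\, \phi^\dagger\phi + \lambda\, (\phi^\dagger\phi)^2 ,
  \qquad
  \langle \phi^\dagger\phi \rangle = \frac{\mu^2}{2\lambda} \equiv \frac{v^2}{2} ,
\]

so that even arbitrarily weak gauge and Yukawa couplings to φ inherit a symmetry-breaking scale v set by the potential alone – in contrast to the BCS case, where the scale is generated dynamically at the Fermi surface.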

This has an important consequence. The only elementary scalar quanta in the theory that are eliminated by spontaneous symmetry breaking are those that become the helicity-zero states of the W and Z vector particles. The other elementary scalars appear as physical particles, now generically known as Higgs bosons. It is the Higgs boson predicted by the electroweak theory of Salam and me that will be the primary target of the new LHC accelerator, to be completed at CERN sometime in 2008.

But there is another possibility, suggested independently in the late 1970s by Leonard Susskind and me. The electroweak symmetry might be broken dynamically after all, as in the BCS theory. For this to be possible, it is necessary to introduce new extra-strong forces, known as technicolour forces, that act on new particles, other than the known quarks and leptons. With these assumptions, it is easy to get the right masses for the W and Z particles and large masses for all the new particles, but there are serious difficulties in giving masses to the ordinary quarks and leptons. Still, it is possible that experiments at the LHC will not find Higgs bosons, but instead will find a great variety of heavy new particles associated with technicolour. Either way, the LHC is likely to settle the question of how the electroweak symmetry is broken.

It would have been nice if we could have settled this question by calculation alone, without the need for the LHC, in the way that Bardeen, Cooper and Schrieffer were able to find how electromagnetic gauge invariance is broken in a superconductor by applying the known principles of electromagnetism. But that is just the price we in particle physics have to pay for working in a field whose underlying principles are not yet known.

• This article is based on the talk given by Steven Weinberg at BCS@50, held on 10–13 October 2007 at the University of Illinois at Urbana–Champaign to celebrate the 50th anniversary of the BCS paper. For more about the conference see www.conferences.uiuc.edu/bcs50/.

The post From BCS to the LHC appeared first on CERN Courier.

]]>
Feature Steven Weinberg reflects on spontaneous symmetry breaking - an idea particle physicists learnt from Bardeen, Cooper and Schrieffer's theory of superconductivity. https://cerncourier.com/wp-content/uploads/2008/01/Mexican_hat_potential_polar.jpg
Physics in the multiverse https://cerncourier.com/a/physics-in-the-multiverse/ Tue, 20 Nov 2007 14:25:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/physics-in-the-multiverse/ Aurélien Barrau on why multiple universes should be taken seriously.

The post Physics in the multiverse appeared first on CERN Courier.

]]>
Is our entire universe a tiny island within an infinitely vast and infinitely diversified meta-world? This could be either one of the most important revolutions in the history of cosmogonies or merely a misleading statement that reflects our lack of understanding of the most fundamental laws of physics.


A self-reproducing universe. This computer-generated simulation shows exponentially large domains, each with different laws of physics (associated with different colours). Peaks are new “Big Bangs”, with heights corresponding to the energy density.
Image credit: simulations by Andrei and Dimitri Linde.

The idea in itself is far from new: from Anaximander to David Lewis, philosophers have exhaustively considered this eventuality. What is especially interesting today is that it emerges, almost naturally, from some of our best – but often most speculative – physical theories. The multiverse is no longer a model; it is a consequence of our models. It offers an obvious understanding of the strangeness of the physical state of our universe. The proposal is attractive and credible, but it requires a profound rethinking of current physics.

At first glance, the multiverse seems to lie outside of science because it cannot be observed. How, following the prescription of Karl Popper, can a theory be falsifiable if we cannot observe its predictions? This way of thinking is not really correct for the multiverse for several reasons. First, predictions can be made in the multiverse: it leads only to statistical results, but this is also true for any physical theory within our universe, owing both to fundamental quantum fluctuations and to measurement uncertainties. Secondly, it has never been necessary to check all of the predictions of a theory to consider it as legitimate science. General relativity, for example, has been extensively tested in the visible world and this allows us to use it within black holes even though it is not possible to go there to check. Finally, the critical rationalism of Popper is not the final word in the philosophy of science. Sociologists, aestheticians and epistemologists have shown that there are other demarcation criteria to consider. History reminds us that the definition of science can only come from within and from the praxis: no active area of intellectual creation can be strictly delimited from outside. If scientists need to change the borders of their own field of research, it would be hard to justify a philosophical prescription preventing them from doing so. It is the same with art: nearly all artistic innovations of the 20th century have transgressed the definition of art as would have been given by a 19th-century aesthetician. Just as with science and scientists, art is internally defined by artists.

For all of these reasons, it is worth considering seriously the possibility that we live in a multiverse. This could allow understanding of the two problems of complexity and naturalness. The fact that the laws and couplings of physics appear to be fine-tuned to such an extent that life can exist and most fundamental quantities assume extremely “improbable” values would appear obvious if our entire universe were just a tiny part of a huge multiverse where different regions exhibit different laws. In this view, we are living in one of the “anthropically favoured” regions. This anthropic selection has strictly teleological and no theological dimension and absolutely no link with any kind of “intelligent design”. It is nothing other than the obvious generalization of the selection effect that already has to be taken into account within our own universe. When dealing with a sample, it is impossible to avoid wondering if it accurately represents the full set, and this question must of course be asked when considering our universe within the multiverse.

The multiverse is not a theory. It appears as a consequence of some theories, and these have other predictions that can be tested within our own universe. There are many different kinds of possible multiverses, depending on the particular theories, some of them even being possibly interwoven.


The most elementary multiverse is simply the infinite space predicted by general relativity – at least for flat and hyperbolic geometries. An infinite number of Hubble volumes should fill this meta-world. In such a situation, everything that is possible (i.e. compatible with the laws of physics as we know them) should occur. This is true because an event with a non-vanishing probability has to happen somewhere if space is infinite. The structure of the laws of physics and the values of fundamental parameters cannot be explained by this multiverse, but many specific circumstances can be understood by anthropic selections. Some places are, for example, less homogeneous than our Hubble volume, so we cannot live there because they are less life-friendly than our universe, where the primordial fluctuations are perfectly adapted as the seeds for structure formation.

General relativity also faces the multiverse issue when dealing with black holes. The maximal analytic extension of the Schwarzschild geometry, as exhibited by conformal Penrose–Carter diagrams, shows that another universe could be seen from within a black hole. This interesting feature is well known to disappear when the collapse is considered dynamically. The situation is, however, more interesting for charged or rotating black holes, where an infinite set of universes with attractive and repulsive gravity appear in the conformal diagram. The wormholes that possibly connect these universes are extremely unstable, but this does not alter the fact that this solution reveals other universes (or other parts of our own universe, depending on the topology), whether accessible or not. This multiverse is, however, extremely speculative as it could be just a mathematical ghost. Furthermore, nothing allows us to understand explicitly how it formed.

A much more interesting pluriverse is associated with the interior of black holes when quantum corrections to general relativity are taken into account. Bounces should replace singularities in most quantum gravity approaches, and this leads to an expanding region of space–time inside the black hole that can be considered as a universe. In this model, our own universe would have been created by such a process and should also have a large number of child universes, thanks to its numerous stellar and supermassive black holes. This could lead to a kind of cosmological natural selection in which the laws of physics tend to maximize the number of black holes (just because such universes generate more universes of the same kind). It also allows for several possible observational tests that could refute the theory and does not rely on the use of any anthropic argument. However, it is not clear how the constants of physics could be inherited from the parent universe by the child universe with small random variations and the detailed model associated with this scenario does not yet exist.

One of the richest multiverses is associated with the fascinating meeting of inflationary cosmology and string theory. On the one hand, eternal inflation can be understood by considering a massive scalar field. The field will have quantum fluctuations, which will, in half of the regions, increase its value; in the other half, the fluctuations will decrease the value of the field. In the half where the field jumps up, the extra energy density will cause the universe to expand faster than in the half where the field jumps down. After some time, more than half of the regions will have the higher value of the field simply because they expand faster than the low-field regions. The volume-averaged value of the field will therefore rise and there will always be regions in which the field is high: the inflation becomes eternal. The regions in which the scalar field fluctuates downward will branch off from the eternally inflating tree and exit inflation.
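The criterion behind this picture can be stated compactly in standard slow-roll notation (an indicative sketch, not taken from the article): in one Hubble time the field random-walks by a quantum step of order H/2π while rolling classically by |φ̇|/H, so inflation becomes self-reproducing wherever

\[
  \frac{H}{2\pi} \;\gtrsim\; \frac{|\dot\phi|}{H}
  \qquad\Longleftrightarrow\qquad
  \frac{H^2}{2\pi |\dot\phi|} \;\gtrsim\; 1 ,
\]

i.e. wherever the quantum jumps dominate over the classical roll.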


A tri-dimensional representation of a quadri-dimensional Calabi–Yau manifold. This describes the geometry of the extra “internal” dimensions of M-theory and relates to one particular (string-inspired) multiverse scenario.
Image credit: simulation by Jean-François Colonna, CMAP/École Polytechnique.

On the other hand, string theory has recently faced a third change of paradigm. After the revolutions of supersymmetry and duality, we now have the “landscape”. This metaphoric word refers to the large number (maybe 10^500) of possible false vacua of the theory. The known laws of physics would just correspond to a specific island among many others. The huge number of possibilities arises from different choices of Calabi–Yau manifolds and different values of generalized magnetic fluxes over different homology cycles. Among other enigmas, the incredibly strange value of the cosmological constant (why are the first 119 decimals of the “natural” value exactly compensated by some mysterious phenomena, but not the 120th?) would simply appear as an anthropic selection effect within a multiverse where nearly every possible value is realized somewhere. At this stage, every bubble-universe is associated with one realization of the laws of physics and itself contains an infinite space where all contingent phenomena take place somewhere. Because the bubbles are causally disconnected forever (owing to the fast “space creation” by inflation) it will not be possible to travel and discover new laws of physics.

This multiverse – if true – would force a profound change of our deep understanding of physics. The laws reappear as kinds of phenomena; the ontological primer of our universe would have to be abandoned. At other places in the multiverse, there would be other laws, other constants, other numbers of dimensions; our world would be just a tiny sample. It could be, following Copernicus, Darwin and Freud, the fourth narcissistic injury.

Quantum mechanics was probably among the first branches of physics leading to the idea of a multiverse. In some situations, it inevitably predicts superposition. To avoid the existence of macroscopic Schrödinger cats simultaneously living and dying, Bohr introduced a reduction postulate. This has two considerable drawbacks: first, it leads to an extremely intricate philosophical interpretation where the correspondence between the mathematics underlying the physical theory and the real world is no longer isomorphic (at least not at any time), and, second, it violates unitarity. No known physical phenomenon – not even the evaporation of black holes in its modern descriptions – does this.


These are good reasons for considering seriously the many-worlds interpretation of Hugh Everett. Every possible outcome of every event defines or exists in its own history or universe, via quantum decoherence instead of wave-function collapse. In other words, there is a world where the cat is dead and another one where it is alive. This is simply a way of trusting strictly the fundamental equations of quantum mechanics. The worlds are not spatially separated, but exist more as kinds of “parallel” universes. This tantalizing interpretation solves some paradoxes of quantum mechanics but remains vague about how to determine when splitting of universes happens. This multiverse is complex and, depending on the very quantum nature of phenomena leading to other kinds of multiverses, it could lead to higher or lower levels of diversity.

More speculative multiverses can also be imagined, associated with a kind of platonic mathematical democracy or with nominalist relativism. In any case, it is important to underline that the multiverse is not a hypothesis invented to answer a specific question. It is simply a consequence of a theory usually built for another purpose. Interestingly, this consequence also solves many complexity and naturalness problems. In most cases, it even seems that the existence of many worlds is closer to Ockham’s razor (the principle of simplicity) than the ad hoc assumptions that would have to be added to models to avoid the existence of other universes.

Given a model, for example the string-inflation paradigm, is it possible to make predictions in the multiverse? In principle, it is, at least in a Bayesian approach. The probability of observing vacuum i (and the associated laws of physics) is simply P_i = P_i^prior f_i, where P_i^prior is determined by the geography of the landscape of string theory and the dynamics of eternal inflation, and the selection factor f_i characterizes the chances for an observer to evolve in vacuum i. This distribution gives the probability for a randomly selected observer to be in a given vacuum. Clearly, predictions can only be made probabilistically, but this is already true in standard physics. The fact that we can observe only one sample (our own universe) does not change the method qualitatively and still allows the refuting of models at given confidence levels. The key points here are the well known peculiarities of cosmology, even with only one universe: the observer is embedded within the system described; the initial conditions are critical; the experiment is “locally” irreproducible; the energies involved have not been experimentally probed on Earth; and the arrow of time must be conceptually reversed.
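As a purely illustrative toy – the vacuum labels and numbers below are invented, and real calculations founder on exactly the measure and f_i problems discussed next – the Bayesian bookkeeping amounts to weighting each vacuum’s prior by its anthropic factor and normalizing:

# Toy illustration only: hypothetical vacua, hypothetical numbers.
priors = {"vacuum_A": 0.70, "vacuum_B": 0.25, "vacuum_C": 0.05}      # P_i^prior from the measure
anthropic = {"vacuum_A": 1e-6, "vacuum_B": 1e-2, "vacuum_C": 3e-1}   # selection factors f_i

weights = {v: priors[v] * anthropic[v] for v in priors}              # P_i proportional to P_i^prior * f_i
total = sum(weights.values())
posterior = {v: w / total for v, w in weights.items()}

for vacuum, p in sorted(posterior.items(), key=lambda item: -item[1]):
    print(f"{vacuum}: P = {p:.3f}")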

However, this statistical approach to testing the multiverse suffers from severe technical shortcomings. First, while it seems natural to identify the prior probability with the fraction of volume occupied by a given vacuum, the result depends sensitively on the choice of a space-like hypersurface on which the distribution is to be evaluated. This is the so-called “measure problem” in the multiverse. Second, it is impossible to give any sensible estimate of f_i. This would require an understanding of what life is – and even of what consciousness is – and that simply remains out of reach for the time being. Except in some favourable cases – for example when all the universes of the multiverse present a given characteristic that is incompatible with our universe – it is hard to refute explicitly a model in the multiverse. But difficult in practice does not mean intrinsically impossible. The multiverse remains within the realm of Popperian science. It is not qualitatively different from other proposals associated with usual ways of doing physics. Clearly, new mathematical tools and far more accurate predictions in the landscape (which is basically totally unknown) are needed for falsifiability to be more than an abstract principle in this context. Moreover, falsifiability is just one criterion among many possible ones and it should probably not be over-determined.


When facing the question of the incredible fine-tuning required for the fundamental parameters of physics to allow the emergence of complexity, there are a few possible ways of thinking. If one does not want to use God or rely on unbelievable luck that led to extremely specific initial conditions, there are mainly two remaining possible hypotheses. The first would be to consider that since complexity – and in particular, life – is an adaptive process, it would have emerged in nearly any kind of universe. This is a tantalizing answer, but our own universe shows that life requires extremely specific conditions to exist. It is hard to imagine life in a universe without chemistry, maybe without bound states or with other numbers of dimensions. The second idea is to accept the existence of many universes with different laws where we naturally find ourselves in one of those compatible with complexity. The multiverse was not imagined to answer this specific question but appears “spontaneously” in serious physical theories, so it can be considered as the simplest explanation of the puzzling issue of naturalness. This of course does not prove the model to be correct, but it should be emphasized that there is absolutely no “pre-Copernican” anthropocentrism in this thought process.

It could well be that the whole idea of multiple universes is misleading. It could well be that the discovery of the most fundamental laws of physics will make those parallel worlds totally obsolete in a few years. It could well be that with the multiverse, science is just entering a “no through road”. Prudence is mandatory when physics tells us about invisible spaces. But it could also very well be that we are facing a deep change of paradigm that revolutionizes our understanding of nature and opens new fields of possible scientific thought. Because they lie on the border of science, these models are dangerous, but they offer the extraordinary possibility of constructive interference with other kinds of human knowledge. The multiverse is a risky thought – but, then again, let’s not forget that discovering new worlds has always been risky.

The post Physics in the multiverse appeared first on CERN Courier.

]]>
Feature Aurélien Barrau on why multiple universes should be taken seriously. https://cerncourier.com/wp-content/uploads/2007/11/CCnbb_10_07.jpg
Exotic lead nuclei get into shape at ISOLDE https://cerncourier.com/a/exotic-lead-nuclei-get-into-shape-at-isolde/ https://cerncourier.com/a/exotic-lead-nuclei-get-into-shape-at-isolde/#respond Wed, 19 Sep 2007 12:30:23 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/exotic-lead-nuclei-get-into-shape-at-isolde/ Lead nuclei with too few neutrons turn out to be mainly spherical.

The post Exotic lead nuclei get into shape at ISOLDE appeared first on CERN Courier.

]]>

In nature, relatively few nuclei have a spherical shape in their ground state. Examples are 16O, 40Ca, 48Ca and 208Pb, which are “doubly magic”, with numbers of both protons and neutrons corresponding to closed shells in the nuclear shell model. By moving away from the closed shells and increasing the number of valence nucleons, both protons and neutrons, these nuclei can eventually acquire a permanent deformation in their ground state. Experiments reveal that sometimes – due to the complex interplay of single-particle and collective degrees of freedom – both a spherical and deformed shape occur in the same nucleus at low excitation energies. In the region around lead, for example, physicists in the 1970s first observed this “shape co-existence”, using optical spectroscopy at the ISOLDE facility at CERN (Bonn et al. 1972 and Dabkiewicz et al. 1979). Since then, an extensive amount of data has been collected throughout the chart of nuclei (Wood et al. 1992 and Julin et al. 2001).

Some of the best-known examples of shape co-existence are found in neutron-deficient lead nuclei (atomic number or number of protons, Z = 82). The uniqueness of this region is mainly due to three effects. First, the energy gap of 3.9 MeV above the Z = 82 closed proton shell forces the nuclei to adopt a spherical shape in their ground state. However, the energy difference is small enough for a second effect to occur: the creation of “extra” valence proton particles and holes as a result of proton-pair excitation across the gap. Third, a very large neutron valence space between the shell closures with the number of neutrons N = 82 and 126 results in a large number of possible valence neutrons as nuclei approach the neutron mid-shell at N = 104. The strong deformation-driving interaction between the “extra” valence protons and the valence neutrons produces unusually low-lying, deformed oblate (disc-like) and prolate (cigar-like) states in the vicinity of N = 104, where the number of valence neutrons is maximal (Wood et al. 1992). In some cases, the deformation-driving effect is so strong that the deformed state becomes the ground state, as happens near N = 104 in the light isotopes of mercury (Z = 80) and platinum (Z = 78).

Atomic spectroscopy provides direct and model-independent information on the properties of nuclear ground and isomeric states via a determination of hyperfine structure and the isotope shift. These are small effects on atomic energy levels due to the nuclear moments, masses, sizes and shapes of nuclear isotopes, allowing the spins, moments and changes in charge-radii of nuclei to be deduced. In particular, the changes in charge radii determined from the isotope shifts by optical spectroscopy in long isotopic chains have revealed collective nuclear properties clearly.
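In the usual parametrization (standard atomic-physics notation, not specific to this experiment), the isotope shift of an optical line between isotopes of mass m_A and m_A′ separates into a mass shift and a field shift:

\[
  \delta\nu^{A,A'} \;=\; K_{\rm MS}\,\frac{m_{A'}-m_{A}}{m_{A}\,m_{A'}}
  \;+\; F\,\delta\langle r^2\rangle^{A,A'} ,
\]

where K_MS and the electronic factor F are properties of the atomic transition. In an element as heavy as lead the field-shift term dominates, which is what makes the extraction of δ<r2> from the measured shifts possible.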

Figure 1 shows changes of mean-square charge radii (δ<r2>) of lead, mercury and platinum isotopes as a function of the number of neutrons. All the data for the nuclides furthest from stability were determined at ISOLDE by a variety of techniques (Otten 1989 and Kluge and Nörtershäuser 2003). Nuclear-radiation-detected optical pumping and laser fluorescence spectroscopy were used in the 1970s, collinear spectroscopy in the 1980s, and resonance ionization mass spectroscopy from the late 1980s onwards. Now laser spectroscopy in the laser ion source is used, as described below.

Figure 1 shows how the measured δ<r2> for platinum isotopes develop a distinct deviation from the smoothly decreasing trend expected from the spherical-droplet model. For mercury, a sudden and dramatic change in δ<r2> known as “shape staggering”, occurs between 187Hg and 185Hg (N = 107 and 105 respectively). A similar change occurs between the isomeric (I = 13/2) and ground (I = 1/2) states in 185Hg, in this case, “shape isomerism” or “shape co-existence” (Bonn et al. 1972 and Dabkiewicz et al. 1979). These effects are interpreted as a change from weakly deformed oblate to strongly deformed prolate shapes. The  neutron-deficient lead isotopes are a particularly interesting example of shape co-existence. Theoretical calculations have long suggested the co-existence in these nuclei of three different shapes: spherical, prolate and oblate – hence triple co-existence. Recent particle (α, β) and in-beam studies have found strong evidence for this phenomenon in some of the isotopes from 182Pb to 208Pb.

One of the most spectacular examples is the mid-shell nucleus 186Pb, as indicated in figure 2. Here, studies of the α-decay of the parent nucleus 190Po have revealed a triplet of low-lying (E* < 650 keV) 0+ states (Andreyev et al. 2000). These were assigned to co-existing spherical, oblate and prolate shapes, with the spherical state being the ground state. Subsequent in-beam studies identified excited bands built on top of these states. An important question arises, however, concerning the degree of mixing between different configurations. As the excited 0+ states decrease their energy when approaching N = 104 (186Pb), their mixing with the 0+ ground state could increase substantially, an effect that could possibly be seen in the value of the charge radii.


Therefore, the aim of experiment IS483 at ISOLDE was to measure for the first time the isotope shifts in the atomic spectra of the very neutron-deficient nuclei in the region 182Pb to 190Pb, deducing the mean-square charge radii in order to probe the ground state directly (De Witte et al. 2007 and Andreyev et al. 2002). However, the expected production rates were far too low (e.g. 1 ion/s for 182Pb) for the laser spectroscopy techniques used previously at ISOLDE. Instead, an extremely sensitive spectroscopic technique was employed: resonance ionization spectroscopy in the ion source, first developed at the Petersburg Nuclear Physics Institute in Gatchina for the investigation of rare-earth isotopes (Alkhazov et al. 1992).

The radioactive lead isotopes are produced at ISOLDE in a proton-induced spallation reaction, using protons at 1.4 GeV on a thick (50 g/cm2) target of uranium carbide (UCx). The reaction products diffuse out of the target toward the ionizer tube, which is heated to around 2050 °C. In the tube, a three-step laser ionization process selectively ionizes the lead isotopes. To determine the isotope shift of the appropriate optical spectral line, the laser for the first excitation step is set to a narrow linewidth of 1.2 GHz and its frequency is scanned over the resonance. After ionization and extraction, the radioactive ions are accelerated to 60 keV, mass separated and subsequently implanted in a carbon foil mounted on a rotating wheel at the focal plane of ISOLDE. A circular silicon detector (150 mm2 × 300 μm) placed behind the foil measures the α-radiation during a fixed implantation time, after which the laser frequency is changed and the implantation-measurement cycle repeated again. The implanted lead ions are counted via their characteristic α-decay.
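The last step of each scan – turning α counts per frequency step into a resonance centroid – can be illustrated with a minimal sketch. This is not the collaboration’s analysis code; the frequencies, counts and lineshape below are simplified and hypothetical.

import numpy as np
from scipy.optimize import curve_fit

def resonance(nu, amplitude, centroid, width, background):
    # Gaussian line on a flat background -- a simplification of the real lineshape
    return background + amplitude * np.exp(-0.5 * ((nu - centroid) / width) ** 2)

# Hypothetical scan: laser detuning (GHz) and alpha counts per implantation cycle
detuning = np.linspace(-6.0, 6.0, 25)
true_counts = resonance(detuning, amplitude=40.0, centroid=1.3, width=1.2, background=2.0)
counts = np.random.default_rng(0).poisson(true_counts)   # counting statistics

popt, pcov = curve_fit(resonance, detuning, counts, p0=[30.0, 0.0, 1.0, 1.0])
centroid, centroid_err = popt[1], np.sqrt(pcov[1, 1])
print(f"fitted centroid: {centroid:.2f} +/- {centroid_err:.2f} GHz")
# The isotope shift follows from the difference of centroids between two isotopes.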


Figure 3 shows the intensity of the α-lines as a function of laser frequency for a sequence of nuclei (with even N) from 188Pb to 182Pb. This reveals the optical isotope shift, which allows us to deduce the values of δ<r2> shown in figure 1. Similarly, the experiment also measured isotopes with an odd number of neutrons, 183,185,187Pb, all of them produced in the ground and isomeric states. Note that the “isomer separation” could be obtained by tuning the laser frequency to some specific values at which only one of the isomers is selectively ionized in the cavity and subsequently extracted and analysed.

Figure 1 compares the deduced values of δ<r2> with the predictions of the spherical-droplet model. The deviation from these predictions increases when moving away from the Z = 82 closed proton shell of lead. The large deviation observed for the ground state of the odd-mass mercury isotopes and the odd- and even-mass platinum isotopes around N = 104 has been interpreted as a result of the onset of strong prolate deformation. In the case of lead, from 190Pb downwards, the δ<r2> data show a distinct deviation from the spherical-droplet model. This suggests modest ground-state deformation, but comparisons of the data with model calculations show that δ<r2> is sensitive to correlations in the ground-state wave functions and that the lead isotopes essentially stay spherical in their ground state at – and even beyond – the N = 104 mid-shell region.

This experiment has shown that the extreme sensitivity of the combined in-source laser spectroscopy and α-detection allows us to explore the heavy-mass regions far from stability with isotopes produced at a rate of only a few ions a second (182Pb). An important development would be: to use the isomer shift in the case of odd-mass-number isotopes to ionize nuclei selectively in their ground or isomeric state; to post-accelerate these with the REX-ISOLDE facility; and use the isomerically pure beams of the 13/2+ and 3/2– isomers to investigate, for example, the influence of different spin states of the same incident particle on the reaction mechanism.

The post Exotic lead nuclei get into shape at ISOLDE appeared first on CERN Courier.

]]>
https://cerncourier.com/a/exotic-lead-nuclei-get-into-shape-at-isolde/feed/ 0 Feature Lead nuclei with too few neutrons turn out to be mainly spherical. https://cerncourier.com/wp-content/uploads/2007/09/CCrad01_08_07.jpg
Advanced Quantum Theory (3rd edition)  https://cerncourier.com/a/advanced-quantum-theory-3rd-edition/ Wed, 19 Sep 2007 11:44:02 +0000 https://preview-courier.web.cern.ch/?p=105153 This book looks at the techniques that are used in theoretical elementary-particle physics that are extended to other branches of modern physics.

The post Advanced Quantum Theory (3rd edition)  appeared first on CERN Courier.

]]>
By Michael D Scadron, World Scientific Publishing. Hardback ISBN 9789812700506 £51 ($88).


This book looks at the techniques that are used in theoretical elementary-particle physics that are extended to other branches of modern physics. The initial application is to non-relativistic scattering graphs encountered in atomic, solid-state and nuclear physics. Then, focusing on relativistic Feynman diagrams and their construction in lowest order, the book also covers relativistic quantum theory based on group theoretical language, scattering theory and finite parts of higher order graphs. Aimed at students and professors of physics, it should also aid the non-specialist in mastering the principles and calculation tools that probe the quantum nature of the fundamental forces.

The post Advanced Quantum Theory (3rd edition)  appeared first on CERN Courier.

]]>
Review This book looks at the techniques that are used in theoretical elementary-particle physics that are extended to other branches of modern physics. https://cerncourier.com/wp-content/uploads/2022/08/41qyw-qbfL._SX336_BO1204203200_.jpg
Polarized hyperons probe dynamics of quark spin https://cerncourier.com/a/polarized-hyperons-probe-dynamics-of-quark-spin/ https://cerncourier.com/a/polarized-hyperons-probe-dynamics-of-quark-spin/#respond Mon, 20 Aug 2007 11:14:56 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/polarized-hyperons-probe-dynamics-of-quark-spin/ Jefferson Laboratory researchers discuss two new models of spin transfer.

The post Polarized hyperons probe dynamics of quark spin appeared first on CERN Courier.

]]>
A continuing mystery in nuclear and particle physics is the large polarization observed in the production of Λ hyperons in high-energy, proton–proton interactions. These effects were first reported in the 1970s in reactions at incident proton momenta of several hundred GeV/c, where experiments measured surprisingly strong hyperon polarizations of around 30% (Heller 1997). Although the phenomenology of these reactions is now well known, the inability to distinguish between various competing theoretical models has hampered the field (Zuo-Tang and Boros 2000).


Two new measurements from the US Department of Energy’s Jefferson Lab in Virginia are now challenging existing ideas on quark spin dynamics through studies of beam-recoil spin transfer in the electro- and photoproduction of K+Λ final states from an unpolarized proton target. Analyses of the two experiments in Hall B at Jefferson Lab using the CLAS spectrometer (figure 1) have provided extensive results of spin transfer from the polarized incident photon (real or virtual) to the final state Λ hyperon.

The results indicate that the Λ polarization is predominantly in the direction of the spin of the incoming photon, independent of the centre-of-mass energy or the production angle of the K+. Moreover, the photoproduction data show that, even where the transferred Λ polarization component along the photon direction is less than unity, the total magnitude of the polarization vector is equal to unity. Since these observations are not required by the kinematics of the reaction (except at extreme forward and backward angles) there must be some underlying dynamical origin.


Both analyses have proposed simple quark-based models to explain the phenomenology, however they differ fundamentally in their description of the spin transfer mechanism. In the electroproduction analysis a simple model has been proposed from data using a 2.567 GeV longitudinally polarized electron beam (Carman et al. 2003). In this case a circularly polarized virtual photon (emitted by the polarized electron) strikes an oppositely polarized u quark inside the proton (figure 2a). The spin of the struck quark flips in direction according to helicity conservation and recoils from its neighbours, stretching a flux-tube of gluonic matter between them. When the stored energy in the flux-tube is sufficient, the tube is “broken” by the production of a strange quark–antiquark pair (the hadronization process).

In this simple model, the observed direction of the Λ polarization can be explained if it is assumed that the quark pair is produced with two spins in opposite directions – anti-aligned – with the spin of the s quark aligned opposite to the final u quark spin. The resulting Λ spin, which is essentially the same as the s quark spin, is predominantly in the direction of the spin of the incident virtual photon. The spin anti-alignment of the ss̄ pair is unexpected, because according to the popular 3P0 model, the quark–antiquark pair should be produced with vacuum quantum numbers (J = 0, S = 1, L = 1, i.e. Jπ = 0+), which means that their spins should be aligned two-thirds of the time (Barnes 2002). This could imply that this model for hadronization may not be as widely applicable as previously thought.
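One way to see where the two-thirds figure comes from (a simple counting argument in standard spin notation, assuming the three spin-triplet substates are populated equally) is to write out the S = 1 states of the quark–antiquark pair:

\[
  |1,+1\rangle = |\!\uparrow\uparrow\rangle, \qquad
  |1,0\rangle = \tfrac{1}{\sqrt{2}}\left(|\!\uparrow\downarrow\rangle + |\!\downarrow\uparrow\rangle\right), \qquad
  |1,-1\rangle = |\!\downarrow\downarrow\rangle .
\]

In two of the three substates the quark and antiquark spins point the same way along the quantization axis, hence spins “aligned two-thirds of the time”.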

The new photoproduction analysis, with data using a circularly polarized real photon beam in the 0.5–2.5 GeV range, introduces a different model that can also explain the Λ polarization data. In this hypothesis, shown in figure 2b, the strange quark–antiquark pair is created in a 3S1 configuration (J = 1, S = 1, L = 0, i.e. Jπ = 1–). Here, following the principle of vector-meson dominance, the real photon fluctuates into a virtual φ meson that carries the polarization of the incident photon. Therefore, the quark spins are in the direction of the spin of the photon before the hadronization interaction.

The s quark of the pair merges with the unpolarized di-quark within the target proton to form the Λ baryon. The s̄ quark merges with the remnant u quark of the proton to form a spinless K+ meson. In this model, the strong force, which rearranges the s and s̄ quarks into the Λ and K+, respectively, can precess the spin of the s quark away from the beam direction, but the s quark, and therefore the Λ, remains 100% polarized. This provides a natural explanation for the observed unit magnitude of the Λ polarization vector seen for the first time in the measurements by CLAS.

The model interpretations presented from the two viewpoints do not necessarily contradict each other. Both assume that the mechanism of spin transfer to the Λ hyperon involves a spectator Jπ = 0+ di-quark system. The difference is in the role of the third quark. Neither model specifies a dynamical mechanism for the process, namely the detailed mechanism for quark-pair creation in the first case or for quark spin precession in the second. If we take the gluonic degrees of freedom into consideration, the model proposed in the electroproduction paper (Carman et al. 2003) can be realized in terms of a possible mechanism in which a colourless Jπ = 0– two-gluon subsystem is emitted from the spectator di-quark system and produces the ss̄ pair (figure 2a). This is in conflict with the 3P0 model, which requires a Jπ = 0+ exchange. To the same order of gluon coupling, the model interpretation proposed by the photoproduction analysis (Schumacher 2007) is the quark-exchange mechanism, which is again mediated by a two-gluon current. The amplitudes corresponding to these models may both be present in the production, in principle, and contribute at different levels depending on the reaction kinematics.

Extending these studies to the K*+Λ exclusive final state should be revealing. In the electroproduction model, the spin of the u quark is unchanged when switching from a pseudoscalar K+ to a vector K*+. If the ss̄ quark pair is produced with anti-aligned spins, the spin direction of the Λ should flip. On the other hand, in the photoproduction model the u quark in the kaon is only a spectator. Changing its spin direction – changing the K+ to a K*+ – should not change the Λ spin direction. Thus, there are ways to disentangle the relative contributions and to understand better the reaction mechanism and dynamics underlying the associated strangeness-production reaction. Analyses at CLAS are underway to extract the polarization transfer to the hyperon in the K*+Λ final state.

Beyond the studies of hyperon production, understanding the dynamics in a process of this sort can shed light on quark–gluon dynamics in a domain thought to be dominated by traditional meson and baryon degrees of freedom. These issues are relevant for a better understanding of strong interactions and hadroproduction in general, owing to the non-perturbative nature of QCD at these energies. We eagerly await further experimental studies and new theoretical efforts to understand which multi-gluonic degrees of freedom dominate in quark pair creation and their role in strangeness production, as well as the appropriate mechanism(s) for the dynamics of spin transfer in hyperon production.

The post Polarized hyperons probe dynamics of quark spin appeared first on CERN Courier.

]]>
https://cerncourier.com/a/polarized-hyperons-probe-dynamics-of-quark-spin/feed/ 0 Feature Jefferson Laboratory researchers discuss two new models of spin transfer. https://cerncourier.com/wp-content/uploads/2007/08/CChyp1_07_07.jpg
NSCL discovers the heaviest known silicon isotope to date https://cerncourier.com/a/nscl-discovers-the-heaviest-known-silicon-isotope-to-date/ https://cerncourier.com/a/nscl-discovers-the-heaviest-known-silicon-isotope-to-date/#respond Mon, 20 Aug 2007 06:59:56 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/nscl-discovers-the-heaviest-known-silicon-isotope-to-date/ Researchers at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University have produced the heaviest silicon isotope ever observed.

The post NSCL discovers the heaviest known silicon isotope to date appeared first on CERN Courier.

]]>
Researchers at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University have produced the heaviest silicon isotope ever observed. The recent identification of 44Si expands the chart of known isotopes and lays the groundwork for the future study of rare, neutron-rich nuclei.


Beyond a certain range of combinations of protons and neutrons, nuclei cannot form at all, and any additional nucleons immediately leave the nucleus because they are no longer bound. Pursuit of this limit, known as the drip line, has proved to be a scientific and technical challenge – particularly when it comes to neutron-rich nuclei. While the proton drip line has been mapped out for much of the chart of nuclei, the neutron drip line is known only up to oxygen (Z = 8). Producing isotopes at or near the neutron drip line remains a long-standing goal in experimental nuclear physics. For example, 43Si was detected for the first time at Japan’s Institute of Physical and Chemical Research (RIKEN) in 2002 (Notani et al. 2002). That same year, researchers at the GANIL laboratory in France detected the neutron-rich isotopes 34Ne and 37Na (Lukyanov et al. 2002).
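The drip line can be stated quantitatively in terms of separation energies (standard nuclear-physics notation, added here for orientation): the one-neutron separation energy

\[
  S_n(Z,N) \;=\; B(Z,N) \;-\; B(Z,N-1)
\]

is the energy needed to remove the last neutron from a nucleus with binding energy B(Z,N); the neutron drip line is reached where S_n (or its two-neutron analogue S_2n) drops to zero, so that an extra neutron is simply not bound.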

In the 44Si experiment conducted at the NSCL Coupled Cyclotron Facility in January, a primary beam of 48Ca was accelerated to 142 MeV/u and directed at a tungsten target. Downstream from the target, the beam was filtered through NSCL’s A1900 fragment separator. Eventually, some 20 different isotopes (including three nuclei of 44Si) hit a set of detectors that could identify each ion as it arrived (Tarasov et al. 2007).


The study was intended to document the yield of the N = 28 isotones that lie between 48Ca (the nuclei in the beam) and 40Mg to extrapolate the expected yields in this region. 40Mg is yet to be observed, and according to some theories should be on the drip line. Knocking out only protons from 48Ca could create these nuclei, although this is a difficult feat because of the larger number of neutrons in the beam nuclei. The production of 44Si is therefore an even greater feat, given that the collision must also transfer two neutrons from the tungsten target to the beam nucleus as it speeds past. The observation of 44Si in the A1900 fragment separator stretches the limits of its single-stage separation. The excessive number of particles that come along with the rare nuclei can swamp the detectors used to identify the beam in the separator. The next-generation technique will use two-stage separation, delivering fewer particles to the detectors as more are filtered out travelling down the beamline.

Researchers are developing new two-stage separators that could run experiments with higher initial beam intensities, which offer a better chance of generating the sought-after, near-dripline nuclei. Preliminary testing on a new two-stage separator at NSCL has delivered promising results. Also, a new device has just been constructed at RIKEN in Japan, and one is planned for GSI in Germany. Nuclear scientists at NSCL hope that two-stage separation will help uncover the next generation of rare isotopes.

The post NSCL discovers the heaviest known silicon isotope to date appeared first on CERN Courier.

]]>
https://cerncourier.com/a/nscl-discovers-the-heaviest-known-silicon-isotope-to-date/feed/ 0 News Researchers at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University have produced the heaviest silicon isotope ever observed. https://cerncourier.com/wp-content/uploads/2007/08/CCnew5_07_07.jpg
Theory ties strings round jet suppression https://cerncourier.com/a/theory-ties-strings-round-jet-suppression/ https://cerncourier.com/a/theory-ties-strings-round-jet-suppression/#respond Mon, 30 Apr 2007 22:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/theory-ties-strings-round-jet-suppression/ The properties of quark–gluon plasma (QGP), where the quarks and gluons are no longer confined within hadrons, lead to intriguing effects that have already been studied in heavy-ion collisions at CERN's Super Proton Synchrotron (SPS) and at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven.

The post Theory ties strings round jet suppression appeared first on CERN Courier.

]]>
The properties of quark–gluon plasma (QGP), where the quarks and gluons are no longer confined within hadrons, lead to intriguing effects that have already been studied in heavy-ion collisions at CERN’s Super Proton Synchrotron (SPS) and at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. However, the hot, dense medium produced momentarily in the collisions is a challenging environment for calculations in quantum chromodynamics (QCD), the theory that describes the strong interactions between the quarks and gluons. Though the medium is hot enough for the hadrons to “melt” into the QGP state, its temperature is still relatively low and the couplings between the quarks and gluons remain too strong to allow the use of perturbative QCD, which relies on the couplings being weak at high energies.

To get to grips with the strong couplings, a small group of theorists has taken inspiration from string theory. Hong Liu and Krishna Rajagopal of MIT and Urs Wiedemann of CERN make use of “gauge-gravity duality”, in which a gauge theory and a gravitational theory provide alternative descriptions of the same physical system. They map more complex calculations in a strongly coupled gauge theory onto a simpler problem in a dual gravitational string theory, and have looked at two intriguing effects observed in heavy-ion collisions – “jet quenching” and the suppression of the production of J/Ψ mesons.


Strictly speaking, they are not working directly with QCD as no one yet knows which string theory is dual to QCD. They work instead with a duality that works for a large class of gauge theories that behave similarly to QCD at high temperature. They then conjecture that the effects they find should also hold for QCD, and have made some predictions that can be tested at RHIC and at CERN’s LHC.

Jet quenching is one of the most dramatic pieces of evidence for the strong-coupling nature of the quark–gluon matter produced at RHIC. Here highly energetic quarks and gluons produced in the collisions interact with the matter so strongly that they are stopped within much less than a nuclear diameter, “quenching” the jet of hadrons that would normally materialize from the liberated quark or gluon. Previous attempts using perturbative techniques to calculate the parameter that characterizes this effect produced values an order of magnitude too small. Now, using the dual technique, Liu and colleagues have calculated a quenching parameter that is consistent with the data from RHIC and that, for the first time, has the right order of magnitude (Liu et al. 2006).
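The parameter in question is the jet-quenching or transport coefficient, conventionally defined (in notation that is standard in the field rather than taken from this article) as the mean squared transverse momentum a hard parton picks up per unit path length in the medium:

\[
  \hat q \;\equiv\; \frac{\langle p_\perp^2 \rangle}{L} ,
\]

so a larger q̂ means a more opaque medium and stronger suppression of the outgoing jet.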

In a second calculation, the theorists have turned their attention to the problem of J/Ψ suppression. Screening effects in QGP are sufficient to reduce the attraction between a c and a c̄ in the plasma to the extent that they are less likely to bind together to form a J/Ψ. This should lead to a reduction in the number of J/Ψ mesons produced in energetic heavy-ion collisions relative to proton–proton or proton–nucleus collisions. Previous calculations of this effect have depended on the non-perturbative approaches of lattice QCD. However, in lattice QCD the J/Ψ mesons are produced at rest, whereas in reality they will move at high velocities; from the viewpoint of the mesons, they will be in a “wind” of hot QGP.

With the aid of the dual approach, Liu and colleagues have calculated the screening effect of such a hot wind, and how it depends on velocity (Liu et al. 2007). Assuming that the same effect holds in QCD, the calculations indicate that additional suppression should occur for J/Ψ mesons with higher values of transverse momentum. This should be observable in future in high-luminosity runs at RHIC, and at the LHC, where the temperatures of the QGP may even be high enough to give suppression of the heavier Υ mesons.

The post Theory ties strings round jet suppression appeared first on CERN Courier.

]]>
https://cerncourier.com/a/theory-ties-strings-round-jet-suppression/feed/ 0 News The properties of quark–gluon plasma (QGP), where the quarks and gluons are no longer confined within hadrons, lead to intriguing effects that have already been studied in heavy-ion collisions at CERN's Super Proton Synchrotron (SPS) and at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. https://cerncourier.com/wp-content/uploads/2007/04/CCnew4_05_07-feature.jpg
Physicists gather for an extravaganza of beauty https://cerncourier.com/a/physicists-gather-for-an-extravaganza-of-beauty/ https://cerncourier.com/a/physicists-gather-for-an-extravaganza-of-beauty/#respond Tue, 30 Jan 2007 00:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/physicists-gather-for-an-extravaganza-of-beauty/ Lively discussions and precise measurements dominated Beauty 2006, the latest international conference on B-physics. Neville Harnew and Guy Wilkinson report.

The post Physicists gather for an extravaganza of beauty appeared first on CERN Courier.

]]>
The 11th International Conference on B-Physics at Hadron Machines (Beauty 2006) took place on 25–29 September 2006 at the University of Oxford. This was the latest in a series of meetings dating back to the 1993 conference held at Liblice Castle in the Czech Republic. The aim is to review results in B-physics and CP violation and to explore the physics potential of current and future-generation experiments, especially those at hadron colliders. As the last conference in the series before the start-up of the LHC, Beauty 2006 was a timely opportunity to review the status of the field, and to exchange ideas for future measurements.

CCEphy1_01-07

More than 80 participants attended the conference, ranging from senior experts in the heavy-flavour field to young students. The sessions were held in the physics department, with lively discussions afterwards. There were fruitful exchanges between the physicists from operating facilities and those from future experiments (LHCb, ATLAS and CMS), with valuable input from theorists.

The conference reviewed measurements of the unitarity triangle, which is the geometrical representation of quark couplings and CP violation in the Standard Model. The aim is to find a breakdown in the triangle through inconsistencies in the measurements of its sides and its angles, α, β and γ (φ2, φ1 and φ3), as determined through CP-violating asymmetries and related phenomena.

The statistics and the quality of the data from the first-generation asymmetric-energy e+e− B-factories are immensely impressive. The BaBar and Belle experiments, at PEPII and KEKB respectively, passed a significant milestone when they reached a combined integrated luminosity of 1000 fb⁻¹ (1 ab⁻¹), with 10⁹ bb̄ pairs now produced at the Υ(4S). The experiments are approved to continue until 2008 and should double their data-sets.
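
As a quick cross-check of these numbers, the sketch below converts the combined integrated luminosity into a pair count, assuming a BB̄ production cross-section of roughly 1.1 nb at the Υ(4S) (an illustrative value, not quoted in the article).

```python
# Rough luminosity-to-event-count conversion (illustrative cross-section assumed)
sigma_bb_nb = 1.1          # e+e- -> Y(4S) -> BBbar cross-section, nb (assumed)
lumi_inv_ab = 1.0          # combined integrated luminosity, ab^-1 (from the text)

lumi_inv_nb = lumi_inv_ab * 1e9          # 1 ab^-1 = 10^9 nb^-1
n_pairs = sigma_bb_nb * lumi_inv_nb      # expected number of BBbar pairs

print(f"~{n_pairs:.1e} BBbar pairs")     # ~1e9, as quoted for the two B-factories
```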

CCEphy2_01-07

The B-factories have studied with high precision the so-called golden mode of B-physics, the decay B0→J/ΨKs. The CP-asymmetry in this decay accesses sin 2β with negligible theoretical uncertainty and the measured world-average value in this and related channels is now 0.675 ± 0.026. The four-fold ambiguity in the value of β can be reduced to two by measuring cos 2β in channels such as B0→D*D*Ks. The results now strongly disfavour two of the solutions, although higher statistics and further theoretical effort are necessary to verify this interpretation.

A possible hint of physics beyond the Standard Model may appear in the measurement of sin 2β in b→s “penguin” decays (e.g. B0→φKs). There is a 2.6σ discrepancy in the value averaged over a number of channels, namely sin 2β = 0.52 ± 0.05, when compared with the charmonium measurement. More data are needed to resolve this discrepancy, and we eagerly await further studies of these penguin processes at the LHC.

CCEphy3_01-07

BaBar and Belle have also produced important results related to the angles α and γ. The γ measurements are particularly interesting as it had generally been assumed that this parameter was beyond the scope of the B-factories. The angle is measured through the interference of tree-level B± → D(*)K± and B± → D̄(*)K± decay amplitudes. This strategy is intrinsically clean, and leads to a combined result for γ of 60 (+38/−24)°. The errors are still large and a precise measurement of γ is impossible at the B-factories. However, the LHCb experiment at CERN will improve the error on γ to less than 5°, with measurements contributing from the Bu, Bd and Bs sectors.

Year of the Tevatron

Despite the great successes of the B-factories, Beauty 2006 focused on B-physics at hadron machines, and 2006 has been the “Year of the Tevatron”. The CDF and D0 experiments at Fermilab’s Tevatron have not only demonstrated the proof-of-principle of B-physics at hadron machines, but have also made measurements that are highly competitive and complementary to those of the B-factories, in particular through the unique access that hadron machines have to the Bs sector. The results indicate the future at the LHC, where there should be 100 times more statistics.

The highlight of the conference was the first 5σ observation of Bs oscillations, presented by the CDF collaboration. They reported the mass difference between the mass eigenstates, Δms, as 17.77 ± 0.10 (stat) ± 0.07 (syst) ps⁻¹, in agreement with Standard Model expectations. Data from hadronic channels, such as B0s→Dsπ, have greatly enhanced the statistical power of the analysis, which relies on the precision vertex detector. The measurement of Δms and Δmd allows the ratio of Cabibbo–Kobayashi–Maskawa (CKM) matrix elements |Vtd|/|Vts| to be extracted with around 5% systematic uncertainty (with input from lattice theory), which fixes the third side of the unitarity triangle with the same precision.
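
To make the extraction concrete, here is a minimal sketch of how the ratio of CKM elements follows from the two mixing frequencies via the standard relation Δms/Δmd = (mBs/mBd) ξ² |Vts/Vtd|², where ξ is the ratio of decay constants and bag parameters supplied by lattice QCD. Apart from Δms, taken from the result above, all inputs (Δmd, the meson masses and ξ) are illustrative values assumed for this sketch, so the output indicates only the size and precision of the extraction, not the published number.

```python
import math

# A minimal sketch of the |Vtd|/|Vts| extraction from the two mixing frequencies.
# All inputs except dm_s (quoted in the text) are illustrative values assumed here.
dm_s   = 17.77    # Bs mixing frequency, ps^-1 (from the CDF result above)
dm_d   = 0.507    # Bd mixing frequency, ps^-1 (assumed world-average value)
m_Bs   = 5.367    # Bs meson mass, GeV (assumed)
m_Bd   = 5.279    # Bd meson mass, GeV (assumed)
xi     = 1.21     # lattice ratio f_Bs*sqrt(B_Bs) / (f_Bd*sqrt(B_Bd)) (assumed)
xi_err = 0.05     # rough uncertainty on xi; it dominates the systematics

# Standard Model relation: dm_s/dm_d = (m_Bs/m_Bd) * xi^2 * |Vts/Vtd|^2
ratio = xi * math.sqrt((dm_d / dm_s) * (m_Bs / m_Bd))
err   = ratio * (xi_err / xi)

print(f"|Vtd|/|Vts| ~ {ratio:.3f} +/- {err:.3f}")   # ~0.21 with a ~4-5% uncertainty
```

Because ξ enters linearly, its few-percent lattice uncertainty translates directly into the roughly 5% systematic error mentioned above.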

The study of rare processes dominated by loop effects provides an important window on new physics, since they could receive significant contributions from the exchange of new heavy particles. The Tevatron experiments are intensively searching for the very rare decay Bs→μμ, which is expected to have a branching ratio of order 10⁻⁹ in the Standard Model, but is significantly enhanced in many supersymmetric extensions. The Tevatron is currently sensitive at the 10⁻⁷ level and is striving to improve this reach. The LHC experiments will explore down to the Standard Model value.

Towards the LHC

With the start-up of the LHC, B-physics will enter a new phase. Preparations for the experiments are now well advanced, as are the B-triggers necessary to enrich the sample in signal decays. Talks at the conference described the status of the detectors and their first running scenarios. The LHC pilot run scheduled for late 2007 will yield minimal physics-quality data, but will be invaluable for commissioning, calibrating and aligning the detectors. Researchers will accumulate the first real statistics for physics measurements in summer 2008. Key goals in the first two years of operation will be the first measurement of CP violation in the Bs system; a measurement of the Bs mixing phase in Bs→J/Ψφ approaching the Standard Model value (around 2°); the likely first observation of the decay Bs→μμ; studies of the B angular distributions sensitive to new physics in the channels Bu,d→K*μμ; and precise measurements of the angles α and γ. LHCb will cover a wide span of measurements, whereas ATLAS and CMS will focus on channels that can be selected with a (di-)muon trigger.

Participants at the conference made a strong science case for continued B-physics measurements beyond the baseline LHC programme, to elucidate the flavour structure of any new physics discovered. On the timescale of 2013, the LHCb collaboration is considering the possibility of upgrading the experiment to increase the operational luminosity to 10 times the present design, to accumulate around 100 fb⁻¹ over five years. In addition there are two proposals on a similar timescale for asymmetric e+e− “Super Flavour Factories” at luminosities of around 10³⁶ cm⁻²s⁻¹ – SuperKEKB and a linear-collider-based design (ILC-SFF) – each giving some 50 ab⁻¹ of data by around 2018. The LHCb upgrade and the e+e− flavour factories largely complement each other in their physics goals.

Social activities enabled discussions outside of the conference room. Keble College provided accommodation and hosted the banquet at which Peter Schlein, the founder of the conference series and chair for the first 10 meetings, was thanked for his efforts over the years and his pioneering contributions to B-physics at hadron machines.

The conference was extremely lively: B-physics continues to flourish and has an exciting future ahead. The B-factories and the Tevatron have led the way, but there is still much to learn. Heavy flavour results from ATLAS, CMS and, in particular, LHCb seem certain to be a highlight of the LHC era.

The post Physicists gather for an extravaganza of beauty appeared first on CERN Courier.

]]>
https://cerncourier.com/a/physicists-gather-for-an-extravaganza-of-beauty/feed/ 0 Feature Lively discussions and precise measurements dominated Beauty 2006, the latest international conference on B-physics. Neville Harnew and Guy Wilkinson report. https://cerncourier.com/wp-content/uploads/2007/01/CCEphy1_01-07-feature.jpg
Rochester conference goes back to Russia https://cerncourier.com/a/rochester-conference-goes-back-to-russia/ https://cerncourier.com/a/rochester-conference-goes-back-to-russia/#respond Wed, 06 Dec 2006 00:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/rochester-conference-goes-back-to-russia/ Moscow hosted this year's major summer conference, which presented the latest news across a broad range of topics. Gennady Kozlov and Simon Eidelman report.

The post Rochester conference goes back to Russia appeared first on CERN Courier.

]]>
In summer 1976, the International Conference on High Energy Physics (ICHEP), known traditionally as the Rochester conference, was held in Tbilisi, the last time it would take place within the USSR. Thirty years later, the Rochester conference returned to Russia, when around a thousand physicists from 53 countries attended ICHEP’06, held on 26 July – 2 August in the Russian Academy of Sciences in Moscow. The extensive scientific programme contained the customary mixture of plenary reports, parallel sessions and poster presentations. For six days, participants discussed key issues in high-energy physics, ranging from astrophysics and cosmology, through the physics of heavy-ions, rare decays and hadron spectroscopy, to theoretical scenarios and experimental searches beyond the Standard Model. Topics also included Grid technology for data processing, new accelerators and particle detectors, and mathematical aspects of quantum field theory and string theory.

CCEmos1_12-06

In his opening speech, the co-chair of the conference, Victor Matveev, emphasized that the entire community of Russian high-energy physicists was honoured to host the major international conference of 2006. The participants were also greeted by the director of the Budker Institute of Nuclear Physics (BINP) and co-chair of the conference, Alexander Skrinsky, and deputy rector of the Lomonosov Moscow State University, Vladimir Belokurov. The vice-chair of the organizing committee, and director of the Joint Institute for Nuclear Research (JINR), Alexei Sissakian then spoke about the structure of ICHEP’06 and its scientific programme.

Duality, QCD and heavy-ions

On the theory side, the progress in so-called “practical theory” is evident, primarily in the sophisticated calculations in quantum chromodynamics (QCD) presented by Giuseppe Marchesini of Milano-Bicocca University and Zvi Bern of the University of California, Los Angeles. Gerritt Schierholz from DESY, Adriano Di Giacomo of Pisa University and Valentin Zakharov of the Institute for Theoretical and Experimental Physics (ITEP), Moscow, described the remarkable harmony achieved between analytical calculations and the results obtained on the lattice using dynamical quarks.

The theoretical discussions emphasized the concept and use of gravity-gauge duality in a framework generalizing the anti-de Sitter space/conformal field theory correspondence. This duality is a conjectured relationship between confining gauge theories in four dimensions on the one hand, and gravity and string theory in five and more dimensions on the other. DESY’s Volker Schomerus described how, when applied to QCD, this approach reproduces numerous non-perturbative features of strong interactions, from the low-energy hadron spectrum through Regge trajectories and radial excitations to quark counting rules. On the experimental side, Pavel Pakhlov of ITEP Moscow, Antonio Vairo of Milano University and Alexandre Zaitsev of the Institute for High Energy Physics (IHEP) Protvino, reported on the numerous candidates for exotic hadronic states, both with light quarks only and with heavy quarks and/or gluons, that have been confirmed or newly reported by teams from the VES experiment in Protvino, BES II in Beijing, E852 at Brookhaven, CLEOc at Cornell, Belle at KEK, and BaBar at SLAC. These exotic states have still to be interpreted theoretically, within either gravity/gauge duality or more traditional approaches.

CCEmos2_12-06

The Relativistic Heavy Ion Collider (RHIC) at the Brookhaven National Laboratory is intensively studying a relatively new area of QCD – the properties of matter at high temperatures and high particle densities. Timothy Hallman from Brookhaven, Larisa Bravina of the Skobeltsyn Institute of Nuclear Physics (SINP) Moscow University, Nu Xu of Lawrence Berkeley National Laboratory (LBNL), and Oleg Rogachevsky of JINR, among others, presented numerous experimental results, some of which were reported for the first time. These results suggest, quite surprisingly, as Xin-Nian Wang of LBNL explained, that collisions of highly energetic ions at RHIC result in the formation of strongly coupled quark–gluon matter, rather than weakly interacting quark–gluon “gas”. Here, too, gravity/gauge duality can reflect the most remarkable properties such as the low viscosity of quark–gluon “fluid”, jet quenching and so on.

Karel Safarik from CERN and Lyudmila Sarycheva of SINP described how QCD will be probed at even higher temperatures at the Large Hadron Collider (LHC) at CERN. Sissakian and Alexander Sorin of JINR reported on plans at the JINR Nuclotron for complementary studies of matter at lower temperatures but high baryon number densities; there are also plans at GSI, Darmstadt. Most likely, matter at these extreme conditions will exhibit new surprising properties in addition to those observed at RHIC.

Quarks and leptons

With the B-factories and Tevatron operating, this conference witnessed impressive progress in flavour physics, including B meson decays, processes with CP violation, b → s and b → d transitions and so on, which featured in the review talks by KEK’s Yasuhiro Okada and Masashi Hazumi and Robert Kowalewski from Victoria University. The discovery of Bs oscillations at the Tevatron was one of the highlights of the year. Doug Glenzinski of Fermilab reported on these results from the CDF collaboration, which reveal a mass difference between the mass eigenstates equal to 17.31 ps⁻¹ (central value). All data on flavour physics, including CP violation and Bs oscillations, are now well described by the Standard Model and Cabibbo–Kobayashi–Maskawa theory. Thus, the Standard Model once again has passed a series of highly non-trivial tests, this time in the heavy-quark sector.

CCEmos3_12-06

Dugan O’Neil of Simon Fraser University and Florencia Canelli from Fermilab were among those presenting precision measurements of the masses of the heaviest known particles, which are still an important aspect of experimental high-energy physics. New results presented at the conference were based mainly on data from the CDF and D0 collaborations at the Tevatron. The top quark became lighter than it had been at the Beijing Conference in 2004 (CERN Courier January/February 2005 p37): now its mass is 171.4±2.1 GeV. Measurements of the W-boson mass are also more accurate. Making use of these data, the Electroweak Working Group has produced a new fit for the mass of the Standard Model Higgs boson, mh = 85 +39/−28 GeV, which is somewhat lower than before. According to this fit, the upper limit on the Higgs boson mass is 166 GeV, as Darien Wood of Northeastern University explained. Yuri Tikhonov from BINP presented recent high-precision measurements of the mass of the τ lepton at Belle and at the KEDR detector at BINP, which have confirmed lepton universality in the Standard Model.

Beyond the Standard Model

The conference paid considerable attention to the search for new physics. Numerous possible forms of physics beyond the Standard Model are even more strongly constrained than before, including supersymmetry; extra space–time dimensions; effective contact interactions in the quark and lepton sectors; additional heavy gauge bosons; excited states of quarks and leptons; and leptoquarks. This was emphasized in various talks by Elisabetta Gallo of INFN Florence, Roger Barlow of Manchester University, Herbert Greenlee of Fermilab, Stephane Willocq of Massachusetts University and others. Yet most of the community is confident that new physics is within the reach of the LHC. Indeed, more theoretical scenarios for tera-electron-volt-scale physics beyond the Standard Model were presented at the conference, in talks for example by Rohini Godbole of the Indian Institute of Science, Alexander Belyaev of Michigan University, Pierre Savard of Toronto University and TRIUMF, Sergei Shmatov and Dmitri Kazakov of JINR, and Satya Nandi of Oklahoma University. Notable exceptions were Holger Bech Nielsen of the Niels Bohr Institute, who argued that even the Higgs boson might never be discovered (for a not necessarily scientific reason), and Mikhail Shaposhnikov of Lausanne University and the Institute for Nuclear Research (INR) Moscow, who defended his “nuMSM” model, which accounts for all existing data in particle physics and cosmology at the expense of extreme fine-tuning.

CCEmos4_12-06

CERN’s Fabiola Gianotti raised much interest by discussing the tactics for early running at the LHC, reflecting the community’s thirst for new physics and the high expectations for the LHC. More generally, there was a sense of expectation as this was the last Rochester meeting before the start-up of the LHC.

The properties of neutrinos continue to be among the top issues in high-energy physics. Geoff Pearce of the Rutherford Appleton Laboratory presented the first data from a new player, the MINOS collaboration, which support the pattern of the oscillations of muon neutrinos observed by the Super-Kamiokande and KEK-to-Kamioka (K2K) experiments. Other collaborations presented refined analyses of their data in talks by Kiyoshi Nakamura of KamLAND and Tohoku University, Yasuo Takeuchi of Super-Kamiokande and Tokyo University, Kevin Graham of the Sudbury Neutrino Observatory and Carleton University, Valery Gorbachev of the Russian American Gallium Experiment and INR Moscow, and Yuri Kudenko of K2K and INR. These agree overall on oscillations of both electron and muon neutrinos, with evidence for oscillations of muon neutrinos into tau neutrinos confirmed by the Super-Kamiokande experiment. Also, the KamLAND experiment has confirmed and enhanced the case for geo-neutrinos. The dominant oscillation parameters are now measured with a precision of 10–20%, except for the smallest mixing angle θ13 and a possible CP-violating phase, as Regina Rameika of Fermilab, Ferruccio Feruglio of Padova University and Kunio Inoue of Tohoku University explained. Interestingly, the range of neutrino masses 0.01 eV < mν < 0.3 eV, suggested by neutrino oscillation experiments, as well as by cosmology and direct searches, is in the right ballpark for leptogenesis – a mechanism for the generation of the matter–antimatter asymmetry in the universe.

Astroparticle physics is another area of continuing interest. Anatoli Serebrov of Petersburg Nuclear Physics Institute presented a new measurement of the neutron lifetime, which makes a significant contribution to the calculation of the abundance of primordial helium-4 in the universe. Techniques for the direct and indirect detection of dark-matter particles are rapidly developing, with indications for positive signals from DAMA and EGRET still persisting, as described by Alessandro Bettini of INFN Padova and by Kazakov. In cosmic-ray physics, the Greisen–Zatsepin–Kuzmin cut-off in the spectrum of ultra-high-energy cosmic rays is still an issue. Giorgio Matthiae of Rome University “Tor Vergata” presented the first data from the Pierre Auger Observatory. Masahiro Teshima of the Max Planck Institute, Munich, and Gordon Thomson of Rutgers University presented the new analyses by the AGASA and HiRes collaborations, respectively. As a result, as Yoshiyuki Takahashi of Alabama University explained, the discrepancy between different experiments is now reduced.

Traditionally, the Rochester conferences discuss future accelerators for high-energy physics and new developments in particle detection, and receive reports from the International Committee for Future Accelerators (ICFA) and the Commission on Particles and Fields (C11) of the International Union of Pure and Applied Physics (IUPAP). This was particularly timely in Moscow in view of the upcoming start-up of the LHC. At present, the scientific community is discussing a new megaproject – the large linear electron–positron collider with an energy of 0.5–1.0 TeV, known as the International Linear Collider (ILC). Together with the LHC, the ILC will be a unique tool for studying fundamental properties of matter and the universe. The talks by Skrinsky, DESY’s Albrecht Wagner and Rolf Heuer, and CERN’s Lyn Evans discussed the prospects for the project, including the contribution from Russia. Gregor Herten of Freiburg University, who heads the IUPAP Commission (C11), said that fundamental science is very important in Russia, and that the research conducted by Russian scientists is highly esteemed around the world.

Valery Rubakov of INR Moscow closed the conference with a summary talk emphasizing both the current confusion of some theorists regarding new physics and the impact of the LHC on the entire field and beyond. The hope is that, with results from the LHC, at least some of the numerous questions raised in Moscow will be answered at the next Rochester conference, to be held in summer 2008 in Philadelphia.

The ICHEP’06 conference was jointly organized by the Russian Academy of Sciences, the Russian Federation (RF) Ministry of Education and Science, the RF Federal Agency on Science and Innovation, the RF Federal Agency on Atomic Energy, the Lomonosov Moscow State University and JINR, the main coordinator of the meeting. It was financially supported by IUPAP, the Russian Foundation for Basic Research, RAS, JINR and the RF Federal Agency on Science and Innovation.

• The authors are indebted to Valery Rubakov for his help in preparing this article.

The post Rochester conference goes back to Russia appeared first on CERN Courier.

]]>
https://cerncourier.com/a/rochester-conference-goes-back-to-russia/feed/ 0 Feature Moscow hosted this year's major summer conference, which presented the latest news across a broad range of topics. Gennady Kozlov and Simon Eidelman report. https://cerncourier.com/wp-content/uploads/2006/12/CCEmos1_12-06.jpg
Metallic water becomes even more accessible https://cerncourier.com/a/metallic-water-becomes-even-more-accessible/ https://cerncourier.com/a/metallic-water-becomes-even-more-accessible/#respond Wed, 04 Oct 2006 22:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/metallic-water-becomes-even-more-accessible/ Water is well known for its astonishing range of unusual properties, and now Thomas Mattsson and Michael Desjarlais of Sandia National Laboratories in New Mexico have suggested yet another one.

The post Metallic water becomes even more accessible appeared first on CERN Courier.

]]>
Water is well known for its astonishing range of unusual properties, and now Thomas Mattsson and Michael Desjarlais of Sandia National Laboratories in New Mexico have suggested yet another one. They found that water should have a metallic phase at temperatures of 4000 K and pressures of 100 GPa, which are a good deal more accessible than earlier calculations had indicated.

CCEmet1_10-06

The two researchers used density functional theory to calculate from first principles the ionic and electronic conductivity of water across a temperature range of 2000–70,000 K and a density range of 1–3.7 g/cm³. Their calculations showed that as the pressure increases, molecular water turns into an ionic liquid, which at higher temperatures is electronically conducting, in particular above 4000 K and 100 GPa. This is in contrast to previous studies that indicated a transition to a metallic fluid above 7000 K and 250 GPa. Interestingly, this metallic phase is predicted to lie just next to insulating “superionic” ice, in which the oxygen atoms are locked into place but all the hydrogen atoms are free to move around.

Suitable conditions for metallic water should exist on the giant gas planets. In particular, the line of constant entropy (isentrope) on the planet Neptune is expected to lie in the region of temperature and pressure suggested by these studies for the metallic liquid phase.

The post Metallic water becomes even more accessible appeared first on CERN Courier.

]]>
https://cerncourier.com/a/metallic-water-becomes-even-more-accessible/feed/ 0 News Water is well known for its astonishing range of unusual properties, and now Thomas Mattsson and Michael Desjarlais of Sandia National Laboratories in New Mexico have suggested yet another one. https://cerncourier.com/wp-content/uploads/2006/10/CCEmet1_10-06.jpg
Can experiment access Planck-scale physics? https://cerncourier.com/a/can-experiment-access-planck-scale-physics/ https://cerncourier.com/a/can-experiment-access-planck-scale-physics/#respond Wed, 04 Oct 2006 22:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/can-experiment-access-planck-scale-physics/ A gravitational analogue of Brownian motion could now make it possible to investigate Planck-scale physics using the latest quantum technology.

The post Can experiment access Planck-scale physics? appeared first on CERN Courier.

]]>
Physics on the large scale is based on Einstein’s theory of general relativity, which interprets gravity as the curvature of space–time. Despite its tremendous success as an isolated theory of gravity, general relativity has proved problematic in its integration with physics as a whole, and in particular with the physics of the very small, which is governed by quantum mechanics. There can be no unification of physics that does not include both general relativity and quantum mechanics. Superstring theory and its recent extension to the more general theory of branes is a popular candidate for a unified theory, but the links with experiment are very tenuous. The approach known as loop quantum gravity attempts to quantize general relativity without unification, and has so far received no obvious experimental verification. The lack of experimental guidance has made the issue extremely hard to pin down.

CCEcan1_10-06

One hundred years ago, when Max Planck introduced the constant named after him, he also introduced the Planck scales, which combined his constant with the velocity of light and Isaac Newton’s gravitational constant to give the fundamental Planck time of around 10⁻⁴³ s, the Planck length of around 10⁻³⁵ m and the Planck mass of around 10⁻⁸ kg. Experiments on quantum gravity require access to these scales, but direct access using accelerators would require machines that reach an energy of 10¹⁹ GeV, well beyond the reach of any experiments currently conceivable.
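
For reference, the quoted orders of magnitude follow directly from combining Planck’s constant with c and G; the short sketch below, using standard SI values for the constants, reproduces them, including the ~10¹⁹ GeV Planck energy that an accelerator would need to reach.

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J s
G    = 6.67430e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)              # Planck time, s
l_planck = math.sqrt(hbar * G / c**3)              # Planck length, m
m_planck = math.sqrt(hbar * c / G)                 # Planck mass, kg
e_planck_gev = m_planck * c**2 / 1.602176634e-10   # Planck energy, GeV

print(f"Planck time   ~ {t_planck:.1e} s")        # ~5.4e-44 s
print(f"Planck length ~ {l_planck:.1e} m")        # ~1.6e-35 m
print(f"Planck mass   ~ {m_planck:.1e} kg")       # ~2.2e-8 kg
print(f"Planck energy ~ {e_planck_gev:.1e} GeV")  # ~1.2e19 GeV
```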

For almost a century it has been widely perceived that the lack of experimental evidence for quantum gravity presents a major barrier to a breakthrough. One possible way of investigating physics at the Planck scale, however, is to use the kind of approach developed by Albert Einstein in his study of thermal fluctuations of small particles through Brownian motion, where he showed that the visible motion provided a window onto the invisible world of molecules and atoms. The idea is to access the Planck scale by observing decoherence in matter waves caused by quantum fluctuations, as first proposed using neutrons more than 20 years ago by CERN’s John Ellis and colleagues (Ellis et al. 1984). Since then, ultra-cold atom technologies have advanced considerably, and armed with the sensitivity of modern atomic matter-wave interferometry we are now in a position to consider using “macroscopic” instruments to access the Planck scales, a possibility that William Power and Ian Percival outlined more recently (Power and Percival 2000).

Our recent work represents a new approach to gravitationally produced decoherence near the Planck scale (Wang et al. 2006). It has been made possible by the recent discovery by one of us of the conformal structure – the scaling property of geometry – of canonical gravity, one of the earliest important approaches to quantum gravity. This leads to a theoretical framework in which the conformal field interacts with gravity waves at zero-point energy using a conformally decomposed Hamiltonian formulation of general relativity (Wang 2005). Working in this framework, we have found that the effects of ground-state gravitons on the geometry of space–time can lead to observable effects by causing quantum matter waves to lose coherence.

The basic scenario is that near the Planck scale, ground-state gravitons constantly stretch and squash the geometry of space–time causing conformal fluctuations in space–time. This process is analogous to the Brownian motion of a pollen particle interacting with ambient molecules of much smaller sizes. It means that information on gravitons near the Planck scale can be extracted by observing the conformal fluctuations of space–time, which can be done by analysing their blurring effects on coherent matter waves.

The curvature of space–time produces changes in proper time, the time measured by moving clocks. For sufficiently short time intervals, near the Planck time, proper time fluctuates strongly owing to quantum fluctuations. For longer time intervals, proper time is dominated by a steady drift due to smooth space–time. Proper time is therefore made up of the quantum fluctuations plus the steady drift. The boundary separating the shorter time-scale fluctuations from the longer time-scale drifts is marked by a cut-off time, τcut-off, which defines the borderline between semi-classical and fully quantum regimes of gravity. It is given by τcut-off = λTPlanck, for quantum-gravity theories, where TPlanck is the Planck time, and λ is a theory-dependent parameter determined by the amplitude of zero-point gravitational fluctuations. A lower limit on λ is given by noting that the quantum-to-classical transition should occur at length scales λLPlanck that are greater than the Planck length LPlanck by a few orders of magnitude, so we can expect λ > 10².

A matter-wave interferometer can be used to measure quantum decoherence due to fluctuations in space–time, and hence provide experimental guidance to the value of λ. In an atom interferometer an atomic wavepacket is split into two wavepackets, which follow different paths before recombining (see “Atom interferometer”). The phase change of each wavepacket is proportional to the proper time along its path, resulting in an interference pattern when the wavepackets recombine. The detection of the decoherence due to space–time fluctuations on the Planck scale would provide experimental access to quantum-gravity effects analogous to accessing to atomic scales provided by Brownian motion.

In our analysis we found an equation that gives λ in terms of measurable quantities (see “equation”).

CCEcan2_10-06

M is the mass of the quantum particle; T is the separation time before the two wavepackets recombine; and Δ denotes the loss of contrast of the matter wave and is a measure of the decoherence (Wang et al. 2006). Existing matter-wave experiments set limits on the size of λ, their sensitivity depending on both Δ and M. Results from caesium atom interferometers (Chu et al. 1997) and from a fullerene C70 molecule interferometer (Hackermueller et al. 2004), with its larger value of M, both set a lower bound for λ of the order of 10⁴, well within the theoretical limit of λ > 10². This suggests that the sensitivities of advanced matter-wave interferometers may well be approaching the fundamental level due to quantum space–time fluctuations. Investigating Planck-scale physics using matter-wave interferometry may therefore become a reality in the near future.

Further improved measurements will confirm and refine this bound on λ, pushing it to higher values. An atom interferometer in space, such as the proposed HYPER mission, could provide such improvements. However, the lower bound of λ calculated using current experimental data is already within the expected range. This is a very good sign and strongly suggests that the measured decoherence effects are converging towards the fundamental decoherence due to quantum gravity. Therefore, a space mission flying an atom-wave interferometer with significantly improved accuracy will be able to investigate Planck-scale physics.

As well as causing quantum matter waves to lose coherence at small scales, the conformal gravitational field is responsible for cosmic acceleration linked to inflation and the problem of the cosmological constant. Our formula, which relates the measured decoherence of matter waves to space–time fluctuations, is “minimum” in the sense that ground-state matter fields have not been taken on board. Their inclusion may further increase the estimated conformal fluctuations and result in an improved “form factor” in our formula. In this sense, the implications go beyond quantum gravity to more generic physics at the Planck scale. Furthermore, it opens up new perspectives of the interplay between the conformal dynamics of space–time and vacuum energy due to gravitons, as well as elementary particles. (A well known example of vacuum energy is provided by the Casimir effect.) These may have important consequences on cosmological problems such as inflation and dark energy.

The post Can experiment access Planck-scale physics? appeared first on CERN Courier.

]]>
https://cerncourier.com/a/can-experiment-access-planck-scale-physics/feed/ 0 Feature A gravitational analogue of Brownian motion could now make it possible to investigate Planck-scale physics using the latest quantum technology. https://cerncourier.com/wp-content/uploads/2006/10/CCEcan1_10-06.gif
Representing Electrons: A Biographical Approach to Theoretical Entities https://cerncourier.com/a/representing-electrons-a-biographical-approach-to-theoretical-entities/ Wed, 04 Oct 2006 07:39:09 +0000 https://preview-courier.web.cern.ch/?p=105256 Both a history and a metahistory, this book focuses on the development of various theoretical representations of electrons from the late 1890s until 1925, and the methodological problems associated with writing about unobservable scientific entities.

The post Representing Electrons: A Biographical Approach to Theoretical Entities appeared first on CERN Courier.

]]>
by Theodore Arabatzis, The University of Chicago Press. Hardback ISBN 0226024202, £44.50 ($70). Paperback ISBN 0226024210, £18 ($28).

51TvtGOub6S._SX331_BO1,204,203,200_

Both a history and a metahistory, this book focuses on the development of various theoretical representations of electrons from the late 1890s until 1925, and the methodological problems associated with writing about unobservable scientific entities. Here, the electron – or rather its representation – is used as a historical actor in a novel biographical approach. Arabatzis illustrates the emergence and gradual consolidation of its representation in physics, its career throughout old quantum theory, and its appropriation and reinterpretation by chemists. Furthermore, he argues that the considerable variance in the representation of the electron does not undermine its stable identity or existence. The book should appeal to historians, philosophers of science and scientists alike.

The post Representing Electrons: A Biographical Approach to Theoretical Entities appeared first on CERN Courier.

]]>
Review Both a history and a metahistory, this book focuses on the development of various theoretical representations of electrons from the late 1890s until 1925, and the methodological problems associated with writing about unobservable scientific entities. https://cerncourier.com/wp-content/uploads/2022/08/51TvtGOub6S._SX331_BO1204203200_.jpg
Brane-Localized Gravity https://cerncourier.com/a/brane-localized-gravity/ Wed, 06 Sep 2006 08:04:35 +0000 https://preview-courier.web.cern.ch/?p=105285 In this book the author provides a detailed introduction to the brane-localized gravity of Randall and Sundrum, in which gravitational signals can localize around our four-dimensional world in the event that it is a brane embedded in an infinitely sized, higher dimensional anti-de Sitter bulk space.

The post Brane-Localized Gravity appeared first on CERN Courier.

]]>
by Philip D Mannheim, World Scientific. Hardback ISBN 9812565612, £33 ($58).

315YGSx+eAL

In this book the author provides a detailed introduction to the brane-localized gravity of Randall and Sundrum, in which gravitational signals can localize around our four-dimensional world in the event that it is a brane embedded in an infinitely sized, higher dimensional anti-de Sitter bulk space. Mannheim pays particular attention to issues that are not ordinarily covered in brane-world literature, such as the completeness of tensor gravitational fluctuation modes, and the causality of brane-world propagators. This self-contained development of the material that is needed for brane-world studies also contains a significant amount of previously unpublished material.

The post Brane-Localized Gravity appeared first on CERN Courier.

]]>
Review In this book the author provides a detailed introduction to the brane-localized gravity of Randall and Sundrum, in which gravitational signals can localize around our four-dimensional world in the event that it is a brane embedded in an infinitely sized, higher dimensional anti-de Sitter bulk space. https://cerncourier.com/wp-content/uploads/2022/08/315YGSxeAL.jpg
The Cosmic Landscape. String Theory and the Illusion of Intelligent Design https://cerncourier.com/a/the-cosmic-landscape-string-theory-and-the-illusion-of-intelligent-design/ Mon, 24 Jul 2006 08:27:23 +0000 https://preview-courier.web.cern.ch/?p=105303 Luis Alvarez-Gaume reviews in 2006 The Cosmic Landscape. String Theory and the Illusion of Intelligent Design.

The post The Cosmic Landscape. String Theory and the Illusion of Intelligent Design appeared first on CERN Courier.

]]>
by Leonard Susskind, Little Brown and Company. Hardback ISBN 0316155799, $24.95.

CCEboo4_07-06

In some theoretical physics institutes, uttering the words “cosmic landscape” may give you the feeling of walking into a lion’s den. Leonard Susskind courageously takes upon himself the task of educating the general public on a very controversial subject – the scientific view on the notion of intelligent design. The ancestral questions of “Why are we here?”, “Why is the universe hospitable to life as we know it?” and “What is the meaning of the universe?”, are earnestly addressed from an original point of view.

Darwin taught us that, according to the theory of evolution, our existence in itself has no special meaning; we are the consequence of random mutation and selection, or survival of the fittest. This is a baffling turn of the Copernican screw, which puts us even farther away from the centre of the universe. We live in the age of bacteria and we are nothing but part of the tail in the distribution of possible living organisms here on Earth.

A possible counter to this reasoning is the notion of a benevolent intelligence that designed the laws of nature so that our existence would be possible. According to Susskind, this is a mirage. Using current versions of string theory and cosmology he provides yet another turn of the Copernican screw. A good aphorism for this book can be found on p347 – the basic organizing principle of biology and cosmology is “a landscape of possibilities populated by a megaverse of actualities”. This may sound arcane, but the book gives a consistent picture based on recent scientific results that support this view. This is no paradigm shift, but an intellectual earthquake.

The author masterfully avoids the temptation to give a detailed account of our understanding of particle physics and cosmology. Instead, he provides an impressionistic, but more than adequate, description of the theories that have inspired us over the past 30 years, some verified experimentally (such as the Standard Model) and some more speculative (such as string theory). A more accurate description may have kept many readers away from the book, yet enough information is given to grasp the gist of the argument.

The main theme is the understanding of the cosmological constant – Albert Einstein’s brainchild, which he later called the biggest blunder of his life – the numerical value of which has been measured by recent astronomical observations. The smallness of the universal repulsion force represented by this constant simply boggles the imagination. In natural units (Planckian units, as explained in the book) it is a zero, followed by 119 zeroes after the decimal point and then a one. Fine-tuning at this level, a full 120 orders of magnitude, cannot be explained by any symmetry or any other known argument; it is something to make strong men quail.

We can appeal to the anthropic principle, but this is often taken as synonymous with the theory of intelligent design. Susskind avoids this temptation by turning to our best bet yet to unify, or rather make compatible, quantum mechanics and general relativity – string theory. Work from Bousso, Polchinski and others implies that string theory contains a bewildering variety of possible ground states for the universe. In recent counts, the number is a one followed by 500 zeroes – a nearly unimaginably big number – and most of these universes are not hospitable to bacteria or us. However, the number is so big that it could perfectly accommodate some pockets where life as we know it is possible. No need then to fine-tune; the range of possibilities is so large that all we need is a procedure efficient enough to turn possibilities into actualities.

This is the megaverse provided by eternal inflation. The laws of physics allow for a universe far bigger than we have imagined so far and as it evolves it creates different branches, which among other properties contain different laws of physics, sometimes those that allow our existence.

This is radical, hard to swallow, and against all the myths that the properties of our observed and observable universe can be calculated by an ultimate theory from very few inputs – but it is remarkably consistent.

The topics analysed in this book are deep – it deals with many of the questions that humans have posed for millennia. It is refreshing to find a hard-nosed scientist coming out to address such controversial questions in the public glare, without fearing the religious or philosophical groups (or even worse, his colleagues), who for quite some time have monopolized the discussion.
Despite the difficult questions raised, whenever unfamiliar concepts are introduced one finds humorous asides drawn from the author’s personal experience, which let you recover your breath. Some will find the arguments convincing, some will find them irritating, but few will remain indifferent.

The post The Cosmic Landscape. String Theory and the Illusion of Intelligent Design appeared first on CERN Courier.

]]>
Review Luis Alvarez-Gaume reviews in 2006 The Cosmic Landscape. String Theory and the Illusion of Intelligent Design. https://cerncourier.com/wp-content/uploads/2022/08/CCEboo4_07-06-feature.jpg
Modern Supersymmetry: Dynamics and Duality https://cerncourier.com/a/modern-supersymmetry-dynamics-and-duality/ Tue, 06 Jun 2006 08:33:32 +0000 https://preview-courier.web.cern.ch/?p=105324 Alongside an overview of important recent developments in supersymmetry the book covers topics of interest to both formal and phenomenological theorists.

The post Modern Supersymmetry: Dynamics and Duality appeared first on CERN Courier.

]]>
by John Terning, Oxford University Press. Hardback ISBN 0198567634 £55.

9a6b5527e173bbe272afadf802ba8a712473bb2d-00-00

The book begins with a brief review of supersymmetry, the construction of the minimal supersymmetric Standard Model, and approaches to supersymmetry breaking. It also reviews general non-perturbative methods that led to holomorphy and the Affleck-Dine-Seiberg superpotential as powerful tools for analysing supersymmetric theories. Seiberg duality is discussed with example applications, paying special attention to its use in understanding dynamical supersymmetry breaking. Alongside an overview of important recent developments in supersymmetry, the book covers topics of interest to both formal and phenomenological theorists.

The post Modern Supersymmetry: Dynamics and Duality appeared first on CERN Courier.

]]>
Review Alongside an overview of important recent developments in supersymmetry the book covers topics of interest to both formal and phenomenological theorists. https://cerncourier.com/wp-content/uploads/2022/08/9a6b5527e173bbe272afadf802ba8a712473bb2d-00-00.jpg
Relativity on a mountain https://cerncourier.com/a/relativity-on-a-mountain/ https://cerncourier.com/a/relativity-on-a-mountain/#respond Tue, 02 May 2006 22:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/relativity-on-a-mountain/ Alan Walker describes how a schoolgirl from Scotland used steel and scintillator to test Einstein's special theory of relativity.

The post Relativity on a mountain appeared first on CERN Courier.

]]>
CCErel1_05-06

Studying cosmic rays from mountain tops has a grand tradition with famous observatories in dramatic surroundings, such as the Jungfraujoch in Switzerland, the Pic du Midi in France and Mount Chacaltaya in Bolivia. Last year one of Scotland’s highest mountains, Cairn Gorm, temporarily joined the elite club when Ingrid Burt from Beeslack High School in Penicuik, near Edinburgh, set up a high-school cosmic-ray project. Rather than measuring cosmic-ray showers, as many schools projects are now doing, Burt set out during World Year of Physics to test Albert Einstein’s special theory of relativity.

Burt spent six weeks funded by a Nuffield Bursary looking at the feasibility of doing such an experiment with a small muon detector, using a UK mountain. Peter Reid from the Scottish Science and Technology Road Show and I, of the Particle Physics Group at the University of Edinburgh, guided her studies. The study and the subsequent experiment were undertaken in conjunction with the Particle Physics for Scottish Schools outreach project, which also provided the detector. The aim was to verify time dilation by comparing the number of cosmic-ray muons detected near the top of the mountain at 1097 m with the number arriving in the university 76 m above sea level.

The amount of dilation depends on the particle’s velocity, so a key part of the experiment was to ensure that the muons detected in the physics department at close to sea-level had the same speed when they passed the altitude of Cairn Gorm as the muons actually detected on the mountain. To do this we used steel sheets to slow the muons until they stopped in a thick scintillator detector and subsequently decayed. The “signal” was thus a pulse in a thin counter from a muon entering the apparatus followed within 20 μs by a delayed pulse from the exiting electron created in the muon’s decay.

For a student in Edinburgh, Cairn Gorm was a clear choice for an experiment at altitude. It is only 227 km from Edinburgh and, like the Jungfraujoch, has a mountain railway – an important criterion where heavy equipment is involved. To this end, Burt asked CairnGorm Mountain Ltd, the company that operates the funicular railway, for help in transporting the 400 kg of steel and other apparatus.

We took the first measurements at the Ptarmigan Top Station of the Cairn Gorm funicular railway, where we needed 49.3 cm of steel to slow the muons so that they would stop and decay in the scintillator. Given that we can calculate the energy losses in both materials, this accurately measures the velocity of the muons as they enter the top of the steel at this altitude before they stop and decay in the scintillator.

The energy lost as a muon of this velocity passes through the atmosphere can also be accurately calculated, so we compensated for this loss between the Cairn Gorm and university sites by removing 21 cm of steel – equivalent to the slowing power of the intervening 1021 m of atmosphere – and ran the experiment at the university with 28.3 cm of steel. This meant that the muons detected at both experimental sites had the same energies and speeds. As muons travel down from Cairn Gorm to the university, they change velocity and their numbers reduce according to the exponential decay law. The number of muons detected each minute decreases as they travel downwards, and the reduction depends solely on the time elapsed in the muons' rest frame. Without the effect of time dilation, the reduction in this experiment would be a factor of about 4; taking time dilation into account gives a reduction factor of 1.3.
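
To see where these two numbers come from, the sketch below applies the exponential decay law with and without time dilation over the 1021 m drop. The mean Lorentz factor of the selected muons is an assumed, illustrative value (it is not stated in the article, and in reality the muons slow gradually in the atmosphere), so the output is an order-of-magnitude check rather than the project's actual calculation: with these inputs it gives a reduction of roughly 4–5 without dilation and about 1.3 with it, broadly consistent with the figures quoted above.

```python
import math

# A minimal sketch of the expected reduction factors over the 1021 m drop.
tau_mu = 2.197e-6   # muon lifetime at rest, s
height = 1021.0     # altitude difference between the two sites, m
c      = 2.998e8    # speed of light, m/s
gamma  = 5.5        # assumed mean Lorentz factor of the selected muons (illustrative)
beta   = math.sqrt(1.0 - 1.0 / gamma**2)

t_lab    = height / (beta * c)   # transit time in the lab frame
t_proper = t_lab / gamma         # elapsed time in the muon rest frame

no_dilation   = math.exp(t_lab / tau_mu)      # reduction factor if clocks were universal
with_dilation = math.exp(t_proper / tau_mu)   # reduction factor with special relativity

print(f"Reduction without time dilation: ~{no_dilation:.1f}")    # roughly 4-5
print(f"Reduction with time dilation:    ~{with_dilation:.2f}")  # ~1.3, as observed
```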

For 10 days in October 2005, visitors arriving at the top of the Cairn Gorm funicular had the chance to see the experiment in action as it counted stopping muons there – at a rate of 1.3 a minute. This meant that if Einstein was right, we should detect 1 a minute at the university, and it was no surprise to do so.

Burt is now refining these calculations to estimate the errors in the predictions, but we do not expect these to render the results invalid. It would be interesting to repeat the experiment with a greater height difference, for example at CERN and at the top of Jungfraujoch railway.

• Ingrid Burt was a gold finalist at the British Association 2006 Crest Science Fair, and won a week at the London International Science Fair in August.

The post Relativity on a mountain appeared first on CERN Courier.

]]>
https://cerncourier.com/a/relativity-on-a-mountain/feed/ 0 Feature Alan Walker describes how a schoolgirl from Scotland used steel and scintillator to test Einstein's special theory of relativity. https://cerncourier.com/wp-content/uploads/2006/05/CCErel1_05-06.jpg
Lattice Gauge Theories: An Introduction, 3rd edition https://cerncourier.com/a/lattice-gauge-theories-an-introduction-3rd-edition/ Tue, 02 May 2006 08:40:59 +0000 https://preview-courier.web.cern.ch/?p=105341 This broad introduction to lattice gauge field theories, in particular quantum chromodynamics, serves as a textbook for advanced graduate students, and also provides the reader with the necessary analytical and numerical techniques to carry out research.

The post Lattice Gauge Theories: An Introduction, 3rd edition appeared first on CERN Courier.

]]>
by Heinz J Rothe, World Scientific. Hardback ISBN 9812560629 £51 ($84). Paperback ISBN 9812561684 £29 ($48).

51JC2SC5MGL

This broad introduction to lattice gauge field theories, in particular quantum chromodynamics, serves as a textbook for advanced graduate students, and also provides the reader with the necessary analytical and numerical techniques to carry out research. Although the analytic calculations can be demanding, they are discussed in sufficient detail that the reader can fill in the missing steps. The book also introduces problems currently under investigation and emphasizes numerical results from pioneering work.

The post Lattice Gauge Theories: An Introduction, 3rd edition appeared first on CERN Courier.

]]>
Review This broad introduction to lattice gauge field theories, in particular quantum chromodynamics, serves as a textbook for advanced graduate students, and also provides the reader with the necessary analytical and numerical techniques to carry out research. https://cerncourier.com/wp-content/uploads/2022/08/51JC2SC5MGL.jpg
Chern-Simons Theory, Matrix Models, and Topological Strings https://cerncourier.com/a/chern-simons-theory-matrix-models-and-topological-strings/ Tue, 02 May 2006 08:40:58 +0000 https://preview-courier.web.cern.ch/?p=105339 This book gives the first coherent presentation of this and other related topics.

The post Chern-Simons Theory, Matrix Models, and Topological Strings appeared first on CERN Courier.

]]>
by Marcos Mariño, Oxford University Press. Hardback ISBN 0198568495 £49.50.

41QNroIfzTL._SX313_BO1,204,203,200_

One of the most important examples of string theory/gauge theory correspondence relates Chern-Simons theory – a topological gauge theory in three dimensions that describes knot and three-manifold invariants – to topological string theory. This book gives the first coherent presentation of this and other related topics. After an introduction to matrix models and Chern-Simons theory, it describes the topological string theories that correspond to these gauge theories and develops the mathematical implications of this duality for the enumerative geometry of Calabi-Yau manifolds and knot theory. It will be useful reading for graduate students and researchers in both mathematics and physics.

The post Chern-Simons Theory, Matrix Models, and Topological Strings appeared first on CERN Courier.

]]>
Review This book gives the first coherent presentation of this and other related topics. https://cerncourier.com/wp-content/uploads/2022/08/41QNroIfzTL._SX313_BO1204203200_.jpg
Progress in String Theory: TASI 2003 Lecture Notes https://cerncourier.com/a/progress-in-string-theory-tasi-2003-lecture-notes/ Wed, 01 Mar 2006 09:29:21 +0000 https://preview-courier.web.cern.ch/?p=105390 Intended mainly for advanced graduate students in theoretical physics, this comprehensive volume covers recent advances in string theory and field theory dualities.

The post Progress in String Theory: TASI 2003 Lecture Notes appeared first on CERN Courier.

]]>
by Juan M Maldacena (ed.), World Scientific. Hardback ISBN 9812564063, £62 ($108).

41moAhmHQtL

Intended mainly for advanced graduate students in theoretical physics, this comprehensive volume covers recent advances in string theory and field theory dualities. It is based on the annual lectures given at the School of the Theoretical Advanced Study Institute (2003), a traditional event that brings together graduate students in high-energy physics for an intensive course given by leaders in their fields.

The post Progress in String Theory: TASI 2003 Lecture Notes appeared first on CERN Courier.

]]>
Review Intended mainly for advanced graduate students in theoretical physics, this comprehensive volume covers recent advances in string theory and field theory dualities. https://cerncourier.com/wp-content/uploads/2022/08/41moAhmHQtL.jpg
Particles in Portugal: new high-energy physics results https://cerncourier.com/a/particles-in-portugal-new-high-energy-physics-results/ https://cerncourier.com/a/particles-in-portugal-new-high-energy-physics-results/#respond Wed, 08 Feb 2006 00:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/particles-in-portugal-new-high-energy-physics-results/ Europe's premier particle-physics conference took place in 2005 on the banks of the River Tagus, near Lisbon. Per Osland and Jorma Tuominiemi report.

The post Particles in Portugal: new high-energy physics results appeared first on CERN Courier.

]]>
The 2005 European Physical Society (EPS) Conference on High Energy Physics (HEP) took place in Lisbon on 21-27 July at the Cultural Centre of Belém, beautifully situated on the right bank of the Tagus river, 10 km west of downtown Lisbon. Held in alternate years, the EPS HEP conference starts with three days of parallel talks, followed by a day off, and then three days of plenary sessions. The format thus differs from that of the Lepton-Photon conferences, which are organized in the same year, and allows the participation of more “grass-root” and young speakers.

CCEhep1_01-06

In 2005 a total of 17 sessions yielded a wealth of detailed results from both experiment and theory, including new results from astroparticle physics. One of the highlights was provided by Barry Barish, newly appointed director of the Global Design Effort for the International Linear Collider (ILC). The EPS and the European Committee for Future Accelerators organized a particularly popular “Lab directors’ session”, which presented status and future plans.

The opening ceremony was honoured by the presence of Mariano Gago, Portuguese Minister for Science, Technology and Universities, who, as an experimental high-energy physicist, was also a member of the local organizing committee. As usual, the plenary sessions started with the prize awards. The EPS 2005 High Energy Particle Physics Prize was presented jointly to Heinrich Wahl of CERN and to the NA31 collaboration, with other prizes awarded to Mathieu de Naurois, Matias Zaldarriaga, Dave Barney and Peter Kalmus (CERN Courier, September 2005, p43). The next highlight was the invited talk by David Gross of Santa Barbara/KITP, Nobel Laureate in 2004 and EPS Prize winner in 2003. He checked off the list of predictions he had made in the summary talk of the 1993 Cornell Lepton-Photon conference, the majority of which had been confirmed.

Sijbrand de Jong of Nijmegen/NIKHEF and Tim Greenshaw of Liverpool started the main business of the plenary session with talks on tests of the electroweak and quantum chromodynamic sectors of the Standard Model, respectively. The new (lower) mass for the top quark from Fermilab, of 172.7±2.9 GeV, as presented by Koji Sata of Tsukuba in the parallel sessions, gives an upper limit on the Higgs mass of 219 GeV at 95% confidence level. Greenshaw discussed how HERA continues to play a major role in precision studies in quantum chromodynamics (QCD) of the proton, now mapped down to 10⁻¹⁸ m, or a thousandth of its radius. Such results will be very valuable for the analysis of data from the Large Hadron Collider (LHC). New results on the spin structure of the proton were also reported.

Riccardo Rattazzi of CERN and Pisa then talked on physics beyond the Standard Model and was followed by Fermilab’s Chris Quigg, who reviewed hadronic physics and exotics. Rattazzi presented an interesting “LEP paradox”: the hierarchy problem, with a presumed light Higgs particle, requires new physics at a low scale, whereas there are no signs of it in the data from CERN’s Large Electron-Positron collider. He also reviewed the anthropic approach to the hierarchy problem: we inhabit one of very many possible universes. This many-vacua hypothesis is also referred to as “the landscape”, and might have implications for supersymmetry. Quigg reviewed several new states discovered by the CLEO collaboration at Cornell and at the B-factories, and reminded us that the pentaquark states are still controversial.

Near- and more-distant-future possibilities were reviewed by Günther Dissertori of ETH Zurich in his talk on “LHC Expectations (Machine, Detectors and Physics)” and by Klaus Desch of Freiburg in “Physics and Experiments – Linear Collider”. Dissertori gave an overview of all the complex instrumentation in the process of being completed for both the LHC and its four major detectors. The first beams are planned for the summer of 2007, with a pilot proton run scheduled for November 2007. All detectors are expected to be ready to exploit LHC collisions starting on “Day 1”. Desch presented the ILC project and highlights of the precision measurements it will provide in electroweak physics, in particular, in the Higgs sector.

More theoretical considerations were offered by CERN’s Gabriele Veneziano and Yaron Oz of Tel Aviv, who spoke on cosmology (including neutrino mass limits) and string theory, respectively. Veneziano reviewed current understanding, according to which the total energy content of the universe is split into 5% baryons, 25% dark matter and 70% dark energy. The question of what dark energy is was compared with the problem that faced Max Planck when he realized that the total power emitted by a classical black body is infinite. Interesting speculations on alternative interpretations of cosmic acceleration were also discussed. Precision measurements in cosmology have an impact on high-energy physics: they provide an upper bound on neutrino masses, indicate preferred regions in the parameter space of minimal supergravity grand unification, and suggest self-interacting dark matter. Oz reviewed the beauties of strings and their two major challenges: to explain the big bang singularity, and the structure and parameters of the Standard Model. So far, neither is explained, but the consistencies are impressive.

The recently discovered connection between string theory and QCD was described by SLAC’s Lance Dixon. An important problem being solved is how to optimize the calculation of multiparticle processes (which might be backgrounds to new physics processes). By ingeniously exploiting the symmetries of the theory, one is able to go beyond the method of Feynman diagrams in terms of efficiency. Roughly speaking, this amounts to first representing four-vectors by spinors, and then Fourier-transforming the left-handed but not the right-handed spinors.

Getting results

Christine Davies of Glasgow presented new results on non-perturbative field theory, in particular in lattice QCD (LQCD). She reported on the very impressive recent advances in LQCD, where high-precision unquenched results are now available to confront the physics of the Cabibbo-Kobayashi-Maskawa (CKM) matrix with only 10% errors on the decay matrix elements. This has been made possible by breakthroughs in the theoretical understanding of the approximations, together with faster computers.

Josh Klein of Austin and Federico Sanchez of Barcelona reviewed neutrino physics results and prospects, respectively. Neutrino physics has become precision physics, and now oscillations, rather than just flux reductions, are beginning to emerge in data from the KamLAND and Super-Kamiokande II experiments in Japan. Sanchez discussed rich plans for the future, with two main questions to tackle. Is the neutrino mass of Majorana or Dirac origin? How can the small angle θ13 and the CP-violating phase δ be constrained, or, preferably, measured? The plans include the Karlsruhe Tritium Neutrino experiment, which will study the kinematics of tritium decay, as well as the GERDA experiment in the Gran Sasso National Laboratory (LNGS), the Neutrino Mediterranean Observatory and the Enriched Xenon Observatory, which will all look for neutrinoless double beta decay. The Main Injector Neutrino Oscillation Search, the Oscillation Project with Emulsion Tracking Apparatus (OPERA) in the LNGS, and the Tokai to Kamioka (T2K) long-baseline neutrino experiments will all study the phenomenon of “atmospheric” neutrino oscillations under controlled conditions, and the Double CHOOZ experiment will further constrain the small angle θ13. A new idea is to exploit beams of unstable nuclei, which would provide monochromatic neutrinos. Meanwhile, the CERN Neutrinos to Gran Sasso project will start taking data in 2006, with a neutrino beam from CERN to the OPERA detector.

Flavour physics was the topic for both Gustavo Branco of Centro de Física Teórica das Partículas in Lisbon, in “Flavour Physics – Theory (Leptons and Quarks)”, and Marie-Hélène Schune of LAL/Orsay, who talked about CP violation and heavy flavours. At the B-factories, the Belle detector is collecting a lot of luminosity, and after a long shutdown, BaBar is back in operation. Many detailed results on CP violation in B-decays were presented at the meeting. The BaBar and Belle results on β or φ1 are now in agreement, and the CKM mechanism works very well, leaving little room for new physics, although the precision is also steadily improving.

Looking to the skies

Astrophysics was covered by three speakers, with Thomas Lohse of Berlin talking about cosmic rays (gammas, hadrons, neutrinos), Alessandro Bettini of Padova presenting dark matter searches, and Yanbei Chen from the Max-Planck Institute for Gravitational Physics reviewing work on gravitational waves. What and where are the sources of high-energy cosmic rays? How do they work? Are the particles accelerated, or are they the decay products of new physics at large mass scales? The Pierre Auger Observatory is beginning to collect data in the region of the Greisen-Zatsepin-Kuzmin cut-off, while neutrino detectors search for “coincidences” (repeated events from the same direction).

The HESS telescopes and other detectors have discovered tera-electron-volt gamma rays from the sky! The origin is unknown, but they are correlated with X-ray intensities. The galactic centre is one such tera-electron-volt gamma-ray point source. It has also been discovered that supernova shells accelerate particles (electrons or hadrons?) up to at least 100 TeV. The searches for weakly interacting massive particles, on the other hand, remain inconclusive. Other experiments are still unable to confirm or refute the observation of an annual modulation seen by the DAMA project at the LNGS.

A major instrument in the search for gravitational waves is the Laser Interferometer Gravitational-Wave Observatory, a ground-based laser interferometer that is sensitive in the region from 10 Hz to 10 kHz. The sources include pulsars, and one hopes to detect a signal after the planned upgrade. The Laser Interferometer Space Antenna will be launched in 2015, and will be sensitive to lower frequencies, in the range 0.01 mHz to 0.1 Hz, as might come from super-massive black-hole binaries.

Paula Bordalo of the Laboratório de Instrumentação e Física Experimental de Partículas in Lisbon presented an experimental overview of ultra-relativistic heavy-ion physics. Photon probes are important for the study of the new state of matter observed, as they do not interact strongly and carry information about the early stage of the collision. There is also a related virtual photon or dilepton signal that shows some interesting features. The new state being explored is possibly a colour glass condensate, which behaves more like a low-viscosity liquid than a gas.

Alexander Skrinsky of the Budker Institute of Nuclear Physics reviewed the status and prospects of accelerators for high-energy physics, covering machines in operation as well as new facilities under construction or planned. Superconductivity is widely used and is being further developed for accelerating structures and for magnets. One important line of development is oriented towards higher luminosity and higher quality beams, including longitudinal polarization and monochromization techniques. There are studies aiming at shorter and more intense bunches, suppression of instabilities involving fast digital bunch-to-bunch feedbacks and minimization of electron-cloud effects. Rapid progress is being made on energy-recovery linacs, recyclers and free-electron lasers, which are being studied for future synchrotron light sources. Higher power proton beams and megawatt targets are being developed and several promising options for neutrino factories are under study. Plasma wake-field acceleration appears to be still in an early stage of development, although it has the potential to achieve very high acceleration gradients.

Grid references

Turning to computing, DESY’s Mathias Kasemann described the status of the Grid projects in high-energy physics. The big experiments running today – CDF, D0, BaBar and ZEUS – are already using distributed computing resources and are migrating their software and production systems to the existing Grid tools. The LHC experiments are building a vast hierarchical computing system with well defined computing models. The LHC Computing Grid (LCG) collaboration has been set up to provide the resources for this huge and complex project. The LCG system is being developed with connections to the Enabling Grids for E-science (EGEE) project and the Nordic Data Grid Facility in Europe and to the Open Science Grid in the US. Basic Grid services have been defined and first implementations are already available and tested. Kasemann’s personal prediction was that the data analysis of the LHC experiments will not be late because of problems in Grid computing.

On the detector front, CERN's Fabio Sauli reported on new developments presented at the conference. Interesting progress has been achieved in fabricating the radiation-hard solid-state detectors needed for the LHC and other high-radiation-level applications. One way is through material engineering: choosing radiation-resistant materials, such as oxygenated silicon or silicon processed with the Czochralski method, or using thin epitaxial detectors. Other solutions have been developed by device engineering, and these include pixel detectors, monolithic active pixels or three-dimensional silicon structures. For high-rate tracking and triggering, micropattern gas detectors, such as gas-electron multipliers, have provided versatile solutions in several experiments. For calorimetry, new materials such as lead tungstate crystals have been adopted in LHC experiments. New scintillation materials with large light yield, fast decay time and high density have also been tested.

Boris Kayser from Fermilab closed the conference with an eloquent summary. On the day off, various excursions to charming medieval villages and ancient monasteries all converged on the city of Mafra, where the conference participants met Portuguese students and teachers in a baroque palace dating from 1717. There was also a visit to a precious library created by Franciscan friars, with 36,000 prize volumes (the “arXiv” of its time!) and where bats control the insect numbers (visitors were told). Gaspar Barreira and his colleagues handled the local organization masterfully, and the many excellent fish restaurants nearby provided a relaxed setting for informal discussions.

• The next EPS-HEP conference, in July 2007, will take place in Manchester, UK.

Warped Passages: Unravelling the Universe’s Hidden Dimensions https://cerncourier.com/a/warped-passages-unravelling-the-universes-hidden-dimensions/ Fri, 25 Nov 2005 09:44:15 +0000 https://preview-courier.web.cern.ch/?p=105411 James Gillies reviews in 2005 Warped Passages: Unravelling the Universe's Hidden Dimensions.

by Lisa Randall, Allen Lane, Penguin Books. Hardback ISBN 0713996994, £25.
(In the US, HarperCollins, ISBN 0060531088, $27.95.)

They say you should never judge a book by its cover, which is advice worth considering if you’re thinking of buying Lisa Randall’s Warped Passages. The violent pink with the title scrawled graffiti-like across it (in the Penguin edition) makes the book jump off the shelf, screaming “I’m no ordinary popular-science book.” Don’t be put off. Randall does break the mould, but not by filling the book with graffiti. She delivers a bold journey from the origins of 20th-century science to the frontiers of today’s theoretical physics. It’s bold because, despite her protestations that the book is about physics and not personalities, it turns out to be a very personal journey in the company of one of the field’s most cited practitioners.

This is most true at the beginning, where Randall tells us a little about who she is and why she has devoted her life’s work to the science of extra dimensions. She begins with the words: “When I was a young girl, I loved the play and intellectual games in math problems or in books like Alice in Wonderland.” Thereafter, she affords us a glimpse of who she is through her choice of musical snippets at the beginning of each chapter, and the Alice-inspired story of Ike, Athena and Dieter, which unfolds throughout the book, one episode per chapter. The result is that the reader gets not only a competent review of a difficult subject, but also a feeling for what drives someone at the cutting edge of science.

I have to confess that I read the story of Ike, Athena and Dieter from cover to cover before embarking on the book proper, and having done so would recommend that course of action. Should physics cease to be a fruitful career, Randall could perhaps turn her hand to fiction. Coupled with the What to Remember and What’s New sections at the end of each chapter, the story gives a pretty good overview of what the book is about.

The personality that emerges as the book progresses is not the kind of physicist who would be lost for words at a party if asked what she does. As well as being, according to her publisher, the world’s most influential physicist thanks to the citations-index-topping paper she published with Raman Sundrum in 1999, Randall is also a woman with a life. She has broad interests, she is cultured and she climbs mountains in her spare time. In short, she’s the sort of role model science needs.

Clearly conscious of the “no equations” school of science communication, she tries early on to put the reader at ease by promising that the descriptions will never be too complicated. Inevitably she cannot keep this promise throughout, and there are places where even the most dedicated amateur scientists will be baffled, but that is more the fault of the subject than of the author. If Niels Bohr thought that quantum mechanics was profoundly shocking, what would he have made of hidden dimensions? In places, Randall goes so far in trying to make things easy that the tone verges on the patronizing, and in others, she hides difficult stuff in a “math notes” section at the end of the book. On balance, however, she has done a good job of making a difficult subject accessible.

Bohr is on record as saying to a young physicist, “We are all agreed that your theory is crazy. The question which divides us is whether it is crazy enough to have a chance of being correct.” Could the same be true of extra dimensions? If you do not already have an opinion, this book will certainly help you to make up your mind. Don’t let the cover, or the publisher’s hype, put you off.

Symmetry and the Beautiful Universe https://cerncourier.com/a/symmetry-and-the-beautiful-universe/ Fri, 25 Nov 2005 09:42:54 +0000 https://preview-courier.web.cern.ch/?p=105413 Peggie Rimmer reviews in 2005 Symmetry and the Beautiful Universe.

by Leon M Lederman and Christopher T Hill, Prometheus Books. Hardback ISBN 1591022428, $29.

A tribute to mathematical genius Emmy Noether (1882-1935) is long overdue. Noether's theorem, which neatly linked symmetries in physical laws to conserved quantities, heralded the most important conceptual breakthrough of modern physics, and yet her name is rarely found in books on the subject. Symmetry and the Beautiful Universe attempts to right that wrong.

This popular-science book is presented as being accessible to “lay readers” and “the serious student of nature”. So is it? Well, any treatise on symmetry begs for pictures but we find very few until near the end, and often we get the proverbial thousand words instead. Also there are more mathematical equations than appear at first sight, as some are embedded in the text. So, I suspect that the going would be easier for the serious student than for lay readers.

The range of topics and styles is humongous, from cartoon character Professor Peabody with angular momentum worthy of a dervish (smoking a pipe), to Feynman diagrams for first-order quantum corrections in electron-electron scattering. The short biography of Noether is good and her theorem is well praised, although the chapter devoted to explaining it is rather long-winded.
More than once the reader is first given an esoteric example of some process or other and only later a more familiar example; momentum conservation starts with radioactive neutron decay and goes on to colliding billiard balls. Then there are “gedanken” experiments. These are familiar devices to scientists but will a lay reader believe that space is isotropic because a hypothetical experiment is said to show that it is? And sometimes the book is mystifyingly US-centric. What are EPA rules? And why is Kansas special?

However, the undeniable enthusiasm of the authors for their subject, indeed for almost any subject, shines brightly throughout. Even leaving aside the 60 or so pages of notes and appendix, the book brims over with facts, figures and fun fictions, often straying far from the subject of symmetry. I estimate that a smart cut-and-paste editor could produce three good books out of the material on offer, each at a quite different level. Find your own.

Reviewing a book that has one Nobel laureate as an author and two among the constellation of stars glowingly quoted on the dust jacket is a daunting task. I was once told that “astounding” conveys an acceptable amalgam of the polite and the honest when one is overwhelmed. This book is astounding.

From Fields to Strings: Circumnavigating Theoretical Physics (Ian Kogan Memorial Collection) https://cerncourier.com/a/from-fields-to-strings-circumnavigating-theoretical-physics-ian-kogan-memorial-collection/ Wed, 02 Nov 2005 10:01:45 +0000 https://preview-courier.web.cern.ch/?p=105439 Luis Alvarez-Gaume reviews in 2005 From Fields to Strings: Circumnavigating Theoretical Physics (Ian Kogan Memorial Collection).

by Misha Shifman, Arkady Vainshtein and John Wheater (eds), World Scientific. Hardback ISBN 9812389555 (three-volume set), £146 ($240).

On the morning of 6 June 2003, Ian Kogan's heart stopped beating. It was the untimely departure of an outstanding physicist and a warm human being. Ian had an eclectic knowledge of theoretical physics, as one can readily appreciate by perusing the list of his publications at the end of the third volume of this memorial collection.

The editors of these three volumes had an excellent idea: the best tribute that could be offered to Ian’s memory was a snapshot of theoretical physics as he left it. The response of the community was overwhelming. The submitted articles and reviews provide a thorough overview of the subjects of current interest in theoretical high-energy physics and all its neighbouring subjects, including mathematics, condensed-matter physics, astrophysics and cosmology. Other subjects of Ian’s interest, not related to physics, will have to be left to a separate collection.

The series starts with some personal recollections from Ian's family and close friends. It then develops into a closely knit tapestry of subjects including, among many other things, quantum chromodynamics, general field theory, condensed-matter physics, the quantum Hall effect, the state of unification of the fundamental forces, extra dimensions, string theory, black holes, cosmology and plenty of “unorthodox physics”, just the way Ian liked it.

These books provide a good place to become acquainted with many of the new ideas and methods used recently in theoretical physics. The collection is also a great document that will allow future historians to understand, at first hand, what physicists thought of their subject at the turn of the 21st century. There is much to learn from, and much to profit by, in this trilogy. Circumnavigating theoretical physics is indeed fun. It is unfortunate, however, that it had to be gathered in such sad circumstances.

50 Years of Yang-Mills Theory https://cerncourier.com/a/50-years-of-yang-mills-theory/ Wed, 02 Nov 2005 10:01:44 +0000 https://preview-courier.web.cern.ch/?p=105437 Ian Aitchison reviews in 2005 50 Years of Yang-Mills Theory.

by Gerardus ‘t Hooft (ed), World Scientific. Hardback ISBN 9812389342, £51 ($84). Paperback ISBN 9812560076, £21 ($34).

Anniversary volumes usually mark a significant birthday of an individual, or perhaps an institution. But this fascinating compilation celebrates the golden jubilee of a theory – namely, the type of non-Abelian quantum gauge field theory first published by Chen Ning Yang and Robert L Mills in 1954, and now established as a central concept in the Standard Model of particle physics. It was a brilliant idea (by the editor, Gerardus ‘t Hooft, I assume) to signal the 50th birthday of Yang-Mills theory by gathering together a wide range of articles by leading experts on many aspects of the subject. The result is a most handsome tribute of both historical and current interest, and a substantial addition to the existing literature.

There are 19 contributions, only two of which have been published elsewhere. They are grouped into 16 sections (“Quantizing Gauge Fields”, “Ghosts for Physicists”, “Renormalization” and so on), each accompanied by brief but illuminating comments from the editor. The style of the contributions ranges from an equation-free essay by Frank Wilczek, to a paper by Raymond Stora on gauge-fixing and Koszul complexes. Somewhere in between lie, for example, François Englert’s review of “Breaking the Symmetry”, and Stephen Adler’s exemplary account of “Anomalies to All Orders”.

One recurrent theme is how unfashionable quantum field theory was in the 1950s and 1960s. As ‘t Hooft puts it: “In 1954, most of those investigators who did still adhere to quantum field theory were either stubborn, or ignorant, or both. In 1967 Faddeev and Popov not only had difficulties getting their work published in Western journals; they found it equally difficult to get their work published in the USSR, because of Landau’s ban on quantized field theories in the leading Soviet journals.” One of the most interesting papers in the book is the 1972 English translation of their 1967 “Kiev Report”, produced via an initiative of Martinus Veltman and Benjamin Lee. It is more detailed than their famous 1967 paper in Physics Letters, and includes a discussion of the gravitational field.

Alvaro De Rújula inimitably brings to life the strong interactions between theorists and experimentalists in the heady days of 1973-1978. He includes a candid snap of Howard Georgi and Sheldon Glashow, circa 1975, which made me wish there were more such shots of the leading players from that era. De Rújula’s is the only contribution to address the experimental situation, despite the editor’s admission that the lasting impact of Yang-Mills theory depended on “numerous meticulous experimental tests and searches”. But, after all, this is a volume celebrating the birthday of a theory.

Many contributors look to the future, as well as the past. These include Alexander Polyakov on “Confinement and Liberation”, Peter Hasenfratz on “Chiral Symmetry and the Lattice”, and Edward Witten on “Gauge/String Duality for Weak Coupling”.

I have only had space enough to (I hope) whet the reader’s appetite. This unusual and elegant festschrift is a treat for theorists – and, as a bonus, you get a full-colour representation on the cover of a 17-instanton solution of the Yang-Mills field equations (designed by the editor).

Uppsala 2005: leptons, photons and a lot more https://cerncourier.com/a/uppsala-2005-leptons-photons-and-a-lot-more/ https://cerncourier.com/a/uppsala-2005-leptons-photons-and-a-lot-more/#respond Wed, 02 Nov 2005 00:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/uppsala-2005-leptons-photons-and-a-lot-more/ The biennial Lepton-Photon conference was held in Uppsala on 30 June - 5 July. The talks erected the impressive edifice known as the Standard Model and showed that experimental ingenuity has not yet shaken its foundations. Francis Halzen summarizes.

Twenty-five years ago at the Rochester meeting held in Madison, Leon Lederman said, “The experimentalists do not have enough money and the theorists are overconfident.” Nobody could have anticipated then that experiments would establish the Standard Model as a gauge theory with a precision of one in 1000, pushing any interference from possible new physics to energy scales beyond 10 TeV. The theorists can modestly claim that they have taken revenge for Lederman’s remark. However, as the Lepton-Photon 2005 meeting underlined, there is no feeling that we are now dotting the i’s and crossing the t’s of a mature theory. All the big questions remain unanswered; worse still, the theory has its own demise built into its radiative corrections.

The electroweak challenge

The most evident of unanswered questions is why are the weak interactions weak? In 1934 Enrico Fermi provided an answer with a theory that prescribed a quantitative relation between the fine-structure constant, α, and the weak coupling, G ∼ α/M_W², where M_W can be found from the rate of muon decay to be around 100 GeV (once parity violation and neutral currents, which Fermi did not know about, are taken into account). Fermi could certainly not have anticipated that his early phenomenology would develop into a renormalizable gauge theory that allows us to calculate the radiative corrections to his formula. Besides regular higher-order diagrams, loops associated with the top quark and the Higgs boson also contribute, and are consistent with observations.
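
At tree level this dimensional argument can be made explicit. Using today's values for the couplings (a rough orientation rather than anything Fermi could have written down),

\[ \frac{G_F}{\sqrt{2}} = \frac{g^2}{8M_W^2} = \frac{\pi\alpha}{2M_W^2\sin^2\theta_W} \quad\Rightarrow\quad M_W \simeq \left(\frac{\pi\alpha}{\sqrt{2}\,G_F\sin^2\theta_W}\right)^{1/2} \approx 80~\mathrm{GeV}, \]

with G_F ≈ 1.17 × 10⁻⁵ GeV⁻², α ≈ 1/137 and sin²θ_W ≈ 0.23 – the 100 GeV scale quoted above.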

One of my favourite physicists once referred to the Higgs as the “ugly” particle. Indeed, if one calculates the radiative corrections to the mass appearing in the Higgs potential, in the very gauge theory that withstood the onslaught of precision experiments at CERN's Large Electron-Positron collider, the SLAC linear collider and Fermilab's Tevatron, the correction grows quadratically with the cutoff. Some new physics is needed to tame this divergent behaviour at an energy scale, Λ, of less than a few tera-electron-volts by the most conservative of estimates. There is an optimistic interpretation: just as Fermi anticipated particle physics at 100 GeV in 1934, the electroweak gauge theory requires new physics at 2-3 TeV, to be revealed by the Large Hadron Collider (LHC) at CERN and, possibly, the Tevatron.

Dark clouds have built up on this sunny horizon, however, because some electroweak precision measurements match the Standard Model predictions with too high a precision, pushing Λ to around 10 TeV. Some theorists have panicked and proposed that the factor multiplying the unruly quadratic correction, 2M_W² + M_Z² + M_h² − 4M_t², must vanish exactly. This has been dubbed the Veltman condition. It “solves” the problem because the observations can accommodate scales as large as 10 TeV, possibly even higher, once the dominant contribution is eliminated.
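
Taken at face value, the Veltman condition would fix the Higgs mass in terms of the measured top and gauge-boson masses. A rough one-loop estimate, ignoring the lighter fermions and the running of the couplings, gives

\[ M_h^2 = 4M_t^2 - 2M_W^2 - M_Z^2 \approx 4(173~\mathrm{GeV})^2 - 2(80.4~\mathrm{GeV})^2 - (91.2~\mathrm{GeV})^2 \quad\Rightarrow\quad M_h \approx 314~\mathrm{GeV}. \]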

If the Veltman condition does happen to be satisfied, it would leave particle physics with an ugly fine-tuning problem reminiscent of the cosmological constant; but this is very unlikely. The LHC must reveal the “Higgs” physics already observed via radiative corrections, or at least discover the physics that implements the Veltman condition, which must still appear at 2-3 TeV although higher scales can be rationalized for other tests of the theory. Supersymmetry is a textbook example. Even though it elegantly controls the quadratic divergence by the cancellation of boson and fermion contributions, it is already fine-tuned at a scale of 2-3 TeV. There has been an explosion of creativity to resolve the challenge in other ways; the good news is that all involve new physics in the form of scalars, new gauge bosons, non-standard interactions, and so on.

Alternatively, we may be guessing the future while holding too small a deck of cards, and the LHC will open a new world that we did not anticipate. The hope then is that particle physics will return to its early traditions where experiment leads theory, as it should, and where innovative techniques introduce new accelerators and detection methods that allow us to observe with an open mind and without a plan.

CP violation and neutrino mass

Another grand unresolved question concerns baryogenesis: why are we here? At some early time in the evolution of the universe quarks and antiquarks annihilated into light, except for just one quark in 10¹⁰ that failed to find a partner and became us. We are here because baryogenesis managed to accommodate Andrei Sakharov's three conditions, one of which dictates CP violation. Precision data on CP violation in neutral kaons have been accumulated over 40 years, and the measurements can, without exception, be accommodated by the Standard Model with three families of quarks. History has repeated itself for B-mesons, but in only three years, owing to the magnificent performance of the experiments at the B-factories – Belle at KEK and BaBar at SLAC. Direct CP violation has been established in the decay B_d → Kπ with a significance in excess of 5σ. Unfortunately, this result and a wealth of data contributed by the CLEO collaboration at Cornell, DAFNE at Frascati and the Beijing Spectrometer (BES) fail to reveal evidence for new physics. Given the rapid progress and the better theoretical understanding of the expectations in the Standard Model relative to the kaon system, the hope is that improved data will pierce the Standard Model's resistant armour. Where theory is concerned, it is worth noting that the lattice now does calculations that are confirmed by experiment.

A third important question concerns neutrino mass. A string of fundamental experimental measurements has driven progress in neutrino physics. Supporting evidence from reactor and accelerator experiments, including first data from the reborn Super-Kamiokande detector, has confirmed the discovery of oscillations in solar and atmospheric neutrinos. High-precision data from the pioneering experiments now trickle in more slowly, although evidence for the oscillatory behaviour in L/E of the muon neutrinos in the atmospheric-neutrino beam has become very convincing.

Nevertheless, the future of neutrino physics is undoubtedly bright. Construction at Karlsruhe of the KATRIN spectrometer, which by studying the kinematics of tritium decay will be sensitive to an electron-neutrino mass as low as 0.2 eV, is in progress, and a wealth of ideas on double beta decay and long-baseline experiments is approaching reality. These experiments will have to answer the great “known unknowns” of neutrino physics: their absolute mass and hierarchy, the value of the third small mixing angle and its associated CP-violating phase, and whether neutrinos are really Majorana particles. Discovering neutrinoless double beta decay would settle the last question, yield critical information on the absolute-mass scale and, possibly, resolve the hierarchy problem. In the meantime we will keep wondering whether small neutrino masses are our first glimpse of grand unified theories via the seesaw mechanism, or represent a new Yukawa scale tantalizingly connected to lepton conservation and, possibly, the cosmological constant.

Information on neutrino mass has also emerged from an unexpected direction – cosmology. The structure of the universe is dictated by the physics of cold dark matter and the galaxies we see today are the remnants of relatively small overdensities in its nearly uniform distribution in the very early universe. Overdensity means overpressure that drives an acoustic wave into the other components that make up the universe, i.e. the hot gas of nuclei and photons and the neutrinos. These acoustic waves are seen today in the temperature fluctuations of the microwave background, as well as in the distribution of galaxies in the sky. With a contribution to the universe’s matter similar to that of light, neutrinos play a secondary, but identifiable role. Because of their large mean-free paths, the neutrinos prevent the smaller structures in the cold dark matter from fully developing and this effect is visible in the observed distribution of galaxies.

Simulations of structure formation with varying amounts of matter in the neutrino component, i.e. varying neutrino mass, can be matched to a variety of observations of today’s sky, including measurements of galaxy-galaxy correlations and temperature fluctuations on the surface of last scattering. The results suggest a neutrino mass of no more than 1 eV, summed over the three neutrino flavours – a range compatible with the one deduced from oscillations.
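
The link between the summed neutrino mass and the cosmological matter budget is the standard relation

\[ \Omega_\nu h^2 = \frac{\sum m_\nu}{93~\mathrm{eV}}, \]

so Σm_ν = 1 eV corresponds to Ω_ν h² ≈ 0.011, roughly 2% of the critical density for h ≈ 0.7 – a small admixture, but enough to suppress the growth of structure on the scales probed by galaxy surveys.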

The imprint on the surface of last scattering of the acoustic waves driven into the hot gas of nuclei and photons also reveals a value for the relative abundance of baryons to photons of 6.5 +0.4/−0.3 × 10⁻¹⁰ (from the Wilkinson Microwave Anisotropy Probe). Nearly 60 years ago, George Gamow realized that a universe born as hot plasma must consist mostly of hydrogen and helium, with small amounts of deuterium and lithium added. The detailed balance depends on basic nuclear physics, as well as the relative abundance of baryons to photons: the state-of-the-art result of this exercise yields 4.7 +1.0/−0.8 × 10⁻¹⁰. The agreement of the two observations is stunning, not just because of their precision, but because of the concordance of two results derived from totally unrelated ways of probing the early universe.
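
For orientation, the baryon-to-photon ratio η quoted here is related, approximately, to the baryon density used in cosmological fits by

\[ \eta \equiv \frac{n_b}{n_\gamma} \approx 2.7\times10^{-8}\,\Omega_b h^2, \]

so η ≈ 6 × 10⁻¹⁰ corresponds to Ω_b h² ≈ 0.022 – baryons amount to only a few per cent of the critical density.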

The physics of partons

Physics at the high-energy frontier is the physics of partons, probing the question of what the proton really is. At the LHC, it will be gluons that produce the Higgs boson, and in the highest-energy experiments, neutrinos interact with sea-quarks in the detector. We can master this physics with unforeseen precision because of a decade of steadily improving measurements of the nucleon’s structure at HERA, DESY’s electron-proton collider. These now include experiments using targets of polarized protons and neutrons.

HERA is our nucleon microscope, tunable by the wavelength and the fluctuation time of the virtual photon exchanged in the electron-proton collision. With the wavelengths achievable, the proton has now been probed with a resolution of one thousandth of its 1 fm size. In these interactions, the fluctuations of the virtual photons survive over distances ct ∼ 1/x, where x is the momentum fraction carried by the struck parton. In this way, HERA now studies the production of chains of gluons as long as 10 fm, an order of magnitude larger than, and probably totally insensitive to, the proton target. These are novel structures, the understanding of which has been challenging for quantum chromodynamics (QCD).
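
One common way to arrive at the 10 fm figure is through the coherence (Ioffe) length of the photon fluctuation, which in the proton rest frame is roughly

\[ l \sim \frac{1}{2m_p x} \approx \frac{0.1~\mathrm{fm}}{x}, \]

so partons with momentum fraction x ∼ 10⁻² already correspond to gluonic configurations some 10 fm long.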

Theorists analyse HERA data with calculations performed to next-to-next-to-leading order in the strong coupling, and at this level of precision must include the photon as a parton inside the proton. The resulting electromagnetic structure functions violate isospin and differentiate a u quark in a proton from a d quark in a neutron because of the different electric charge of the quark. Interestingly, the inclusion of these effects modifies the extraction of the Weinberg angle from data from the NuTeV experiment at Fermilab, bridging roughly half of the discrepancy between NuTeV’s result and the value in the Particle Data Book. Added to already anticipated intrinsic isospin violations associated with sea-quarks, the NuTeV anomaly may be on its way out.

While history has proven that theorists had the right to be confident in 1980 at the time of Lederman’s remark, they have not faded into the background. Despite the dominance of experimental results at the conference, they provided some highlights of their own. Developing QCD calculations to the level at which the photon structure of the proton becomes a factor is a tour de force, and there were other such highlights at this meeting. Progress in higher-order QCD computations of hard processes is mind-boggling and valuable, sometimes essential, for interpreting LHC experiments. Discussions at the conference of strings, supersymmetry and additional dimensions were very much focused on the capability of experiments to confirm or debunk these concepts.

Towards the highest energies

Theory and experiment joined forces in the ongoing attempts to read the information supplied by the rapidly accumulating data from the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. Rather than the anticipated quark-gluon plasma, the data suggest the formation of a strongly interacting fluid with a very low ratio of viscosity to entropy density. Similar fluids of cold ⁶Li atoms have been created in atomic traps. Interestingly, theorists are exploiting Juan Maldacena's connection between four-dimensional gauge theory and 10-dimensional string theory to model just such a thermodynamic system. The model is of a 10D rotating black hole with Bekenstein-Hawking entropy, which accommodates the low viscosities observed. This should give notice that very-high-energy collisions of nuclei may prove more interesting than anticipated from “QCD-inspired” logarithmic extrapolations of accelerator data. Such physics is relevant to analysing cosmic-ray experiments.

A century has passed since cosmic rays were discovered, yet we do not know how and where they are accelerated. Solving this mystery is very challenging, as can be seen by simple dimensional analysis. A magnetic field B extending over a region of size R can accelerate a particle of electric charge q to an energy E < ΓqvBR, with velocity v ≈ c, and no higher (where Γ is a possible boost factor between the frame of the accelerator and ourselves). This is the Hillas formula. Note that it applies to our man-made accelerators, where kilogauss fields over several kilometres yield 1 TeV, because the accelerators reach efficiencies that can come close to the dimensional limit.
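
In convenient units, writing q = Ze and taking v ≈ c and Γ ≈ 1, the limit reads roughly

\[ E_{\max} \lesssim ZecBR \approx 0.3~\mathrm{TeV}\times Z\left(\frac{B}{1~\mathrm{kG}}\right)\left(\frac{R}{10~\mathrm{km}}\right), \]

consistent with the tera-electron-volt scale quoted above for man-made machines.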

Opportunity for particle acceleration to the highest energies in the cosmos is limited to dense regions where exceptional gravitational forces create relativistic particle flows, such as the dense cores of exploding stars, inflows on supermassive black holes at the centres of active galaxies, and so on. Given the weak magnetic field (microgauss) of our galaxy, no structures seem large or massive enough to yield the energies of the highest-energy cosmic rays, implying instead extragalactic objects. Common speculations include nearby active galactic nuclei powered by black holes of 1 billion solar masses, or the gamma-ray-burst-producing collapse of a supermassive star into a black hole.

The problem for astrophysics is that in order to reach the highest energies observed, the natural accelerators must have efficiencies approaching 10% to operate close to the dimensional limit. This is so daunting a concept that many believe that cosmic rays are not the beams of cosmic accelerators but the decay products of remnants from the early universe, for instance topological defects associated with a grand unified theory phase transition near 10²⁴ eV.

There is a realistic hope that this long-standing puzzle will be resolved soon by ambitious experiments: air-shower arrays of 10,000 km², arrays of air Cherenkov detectors, and kilometre-scale neutrino observatories. While no definitive breakthroughs were reported at the conference, preliminary data forecast rapid progress and imminent results in all three areas.

The air-shower array of the Pierre Auger Observatory is confronting the problem of low statistics at the highest energies by instrumenting a huge collection area covering 3000 km² on an elevated plain in western Argentina. The completed detector will observe several thousand events a year above 10 EeV and tens above 100 EeV, with the exact numbers depending on the detailed shape of the observed spectrum.

The end of the cosmic-ray spectrum is a matter of speculation given the somewhat conflicting results from existing experiments. Above a threshold of 50 EeV cosmic rays interact with cosmic microwave photons and lose energy to pions before reaching our detectors. This is the origin of the Greisen-Zatsepin-Kuzmin cutoff that limits the sources to our supercluster of galaxies. This feature in the spectrum is seen by the High Resolution Fly's Eye (HiRes) in the US at the 5σ level but is totally absent from the data from the Akeno Giant Air Shower Array (AGASA) in Japan.
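
The 50 EeV scale follows from simple kinematics. For a proton colliding head-on with a cosmic microwave background photon of energy E_γ, pion photoproduction requires roughly

\[ E_p \gtrsim \frac{m_\pi(2m_p + m_\pi)}{4E_\gamma} \approx 7\times10^{19}~\mathrm{eV}\left(\frac{10^{-3}~\mathrm{eV}}{E_\gamma}\right), \]

and folding in the full blackbody spectrum and the Δ-resonance cross-section brings the effective attenuation threshold down to a few times 10¹⁹ eV.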

At this meeting the Auger collaboration presented the first results from the partially deployed array, with an exposure similar to that of the final AGASA data. The data confirm the existence of events above 100 EeV, but there is no evidence for the anisotropy in arrival directions claimed by the AGASA collaboration. Importantly, the Auger data reveal a systematic discrepancy between the energy measurements made using the independent fluorescent and Cherenkov detector components. Reconciling the measurements requires that very-high-energy showers develop deeper in the atmosphere than anticipated by the particle-physics simulations used to analyse previous experiments. The performance of the detector foreshadows a qualitative improvement of the observations in the near future.

Cosmic accelerators are also cosmic-beam dumps producing secondary beams of photons and neutrinos. The AMANDA neutrino telescope at the South Pole, now in its fifth year of operation, has steadily improved its performance and has increased its sensitivity by more than an order of magnitude since reporting its first results in 2000. It has reached a sensitivity roughly equal to the neutrino flux anticipated to accompany the highest-energy cosmic rays, dubbed the Waxman-Bahcall bound. Expansion into the IceCube kilometre-scale neutrino observatory is in progress. Companion experiments in the deep Mediterranean are moving from R&D to construction with the goal of eventually building a detector the size of IceCube.

However, it is the HESS array of four air Cherenkov gamma-ray telescopes deployed under the southern skies of Namibia that delivered the particle-astrophysics highlights at the conference. This is the first instrument capable of imaging astronomical sources in gamma rays at tera-electron-volt energies, and it has detected sources with no counterparts in other wavelengths. Its images of young galactic supernova remnants show filament structures of high magnetic fields that are capable of accelerating protons to the energies, and with the energy balance, required to explain the galactic cosmic rays. Although the smoking gun for cosmic-ray acceleration is still missing, the evidence is tantalizingly close.

• The next Lepton-Photon conference will take place in Daegu, Korea, in 2007.

An Introduction to Black Holes, Information and the String Theory Revolution: The Holographic Universe https://cerncourier.com/a/an-introduction-to-black-holes-information-and-the-string-theory-revolution-the-holographic-universe/ Wed, 28 Sep 2005 07:13:35 +0000 https://preview-courier.web.cern.ch/?p=105451 Luis Alvarez-Gaume reviews in 2005 An Introduction to Black Holes, Information and the String Theory Revolution: The Holographic Universe.

by Leonard Susskind and James Lindesay, World Scientific. Hardback ISBN 9812560831, £17 ($28). Paperback ISBN 9812561315, £9 ($14).

Black holes have captured the imagination of the public and of professional astronomers for quite some time. The astrophysical phenomena associated with them are truly spectacular. They seem to be ubiquitous in the centres of galaxies, and they are believed to be the power engines behind quasars. There is little doubt of their existence as astronomical objects, but this very existence poses deep and unresolved paradoxes in the context of quantum mechanics when one tries to understand the quantum properties of the gravitational field.

For many readers, the title of this book may sound odd because the contents have little to do with the astrophysical or observational properties of black holes. If you look for nice pictures of galaxy centres and gamma-ray bursts, you will find none. If, however, you are looking for the deep paradoxes in our understanding of quantum-field theory in nontrivial gravitational environments, and the riddles encountered when trying to harness the gravitational force within the quantum framework, then you will find plenty.

At the end of the 19th century, Max Planck was confronted with serious paradoxes and apparent contradictions between statistical thermodynamics and Maxwell’s electromagnetic theory. The resolution of the puzzle brought the quantum revolution. When Albert Einstein asked himself what someone would observe when travelling at the same speed as a light beam, the answer revealed a fundamental contradiction between Newtonian mechanics and electromagnetic theory.

The resolution of these problems led to the relativity revolution, first with special and then general relativity. Sometimes experiment itself is not the only way towards progress in our understanding of nature. Conceptual paradoxes often provide the way to a deeper view of the world.
In the 1960s, largely due to Roger Penrose and Stephen Hawking, it became understood that under very general conditions, very massive objects would undergo gravitational collapse. The end state would be a singularity of infinite curvature in space-time shrouded by an event horizon – the last light surface that did not manage to leave the region. The horizon is a profoundly non-local property of a black hole that cannot be detected by local measurements of an unaware, infalling observer.

Classically, black holes were supposed to be black. However, in the early 1970s Jacob Bekenstein and Hawking showed that black holes must necessarily have very unsettling properties. As Bekenstein argued, if the second law of thermodynamics is supposed to hold, then an intrinsic entropy must be assigned to a black hole. Since entropy measures the logarithm of the number of available states for a given equilibrium state, it is logical to ask what these states are and where they came from. The entropy in this case is proportional to the area of the black-hole horizon measured in Planck units (a Planck unit of length is 10⁻³³ cm). This is vastly different from the behaviour of ordinary quantum-field theoretic systems.

Meanwhile, Hawking showed that if one considers the presence of a black hole in the context of quantum-field theory, it radiates thermally with a temperature inversely proportional to its mass, so the hole is not black after all. If the radiation is truly thermal, this raises a fundamental paradox, as Hawking realized. Imagine that we generate a gravitational collapse from an initial state that is a pure state quantum-mechanically. Since thermal radiation cannot encode quantum correlations, once the black hole fully evaporates it carries with it all the subtle correlations contained in a pure quantum state. Hence the very process of evaporation leads to the loss of quantum coherence and unitary time evolution, two basic features of quantum-mechanical laws.
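
In equations, the two statements above correspond to the Bekenstein-Hawking entropy and the Hawking temperature,

\[ S_{\mathrm{BH}} = \frac{k_B c^3 A}{4\hbar G} = k_B\,\frac{A}{4l_P^2}, \qquad T_H = \frac{\hbar c^3}{8\pi G M k_B}, \]

where l_P = (ħG/c³)^{1/2} ≈ 1.6 × 10⁻³³ cm; for a black hole of one solar mass, T_H is a mere 6 × 10⁻⁸ K.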

These puzzles were formulated nearly 30 years ago and they still haunt the theory community. It was, nevertheless, realized that resolving these puzzles requires deep changes in our understanding of both quantum mechanics and general relativity, and also a profound modification of the sacrosanct principle of locality in quantum-field theory.

This book is precisely dedicated to explaining what we have learned about these puzzles and their proposed solutions. Assuming that some of the basic features of quantum mechanics (such as unitary evolution) and general relativity (such as the consistency of different observers’ observations, no matter how different they may be) do indeed hold, the authors analyse the conceptual changes that are required to accommodate strange phenomena such as black-hole evaporation.

In the process, they masterfully present a whole host of subjects including quantum-field theory in curved spaces; the Unruh effect and states; the Rindler vacua; the black-hole complementarity principle; holography; the Maldacena conjecture and the role of string theory in the whole affair; the notion of information in quantum systems; the no-cloning theorem for quantum states; and the general concept of entropy bounds.

A remarkable feature of this book is that relatively little specialized knowledge is required from the reader; a cursory acquaintance with quantum mechanics and relativity is sufficient. This is impressive, given that the authors cover some of the hottest topics in current research.

The technical demands are low, but conceptually the book is truly challenging. It makes us think about many ideas we take for granted and shakes the foundations of our understanding of basic physics. It provides a rollercoaster ride into the treacherous and largely uncharted land of quantum gravity. This book is highly recommended for those interested in these fascinating topics.

The authors end with the sentence: “At the time of the writing of this book there are no good ideas about the quantum world behind the horizon. Nor for that matter is there any good idea of how to connect the new paradigm of quantum gravity to cosmology. Hopefully our next book will have more to say about this.” We hope so too.

Double dose of magic proves key to element production https://cerncourier.com/a/double-dose-of-magic-proves-key-to-element-production/ https://cerncourier.com/a/double-dose-of-magic-proves-key-to-element-production/#respond Mon, 06 Jun 2005 22:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/double-dose-of-magic-proves-key-to-element-production/ Researchers at Michigan State University's National Superconducting Cyclotron Laboratory (NSCL) have reported the first measurement of the half-life of nickel-78.

Researchers at Michigan State University's National Superconducting Cyclotron Laboratory (NSCL) have reported the first measurement of the half-life of nickel-78 (⁷⁸Ni). With completely filled proton and neutron shells, ⁷⁸Ni is doubly magic and also neutron-rich, and is an important nucleus for understanding heavy-element nucleosynthesis.

Doubly magic nuclei are of fundamental interest to nuclear physics, as their simplified structure makes it feasible for them to be modelled. In addition, neutron-rich nuclei play an important role in the astrophysical rapid neutron-capture process, or “r process”. The r process is responsible for the origin of about half of the elements heavier than iron in the universe, yet its exact mechanism is still unknown. ⁷⁸Ni is the only doubly magic nucleus that provides an important “waiting point” in the path of the r process, where the reaction sequence halts to wait for the decay of the nucleus.

There are 10 doubly magic nuclei (excluding super-heavy ones), and only four of these are far from stability: ⁴⁸Ni, ⁷⁸Ni, ¹⁰⁰Sn and ¹³²Sn. Of these, (neutron-poor) ⁴⁸Ni and (neutron-rich) ⁷⁸Ni are the last ones whose properties had yet to be experimentally measured. Now the results from NSCL demonstrate that experiments with ⁷⁸Ni are finally feasible.

In this experiment, a secondary beam comprising a mix of several neutron-rich nuclei near ⁷⁸Ni was produced by the fragmentation of an ⁸⁶Kr³⁴⁺ primary beam with an energy of 140 MeV per nucleon on a beryllium target at the NSCL Coupled Cyclotron Facility. A total of 11 ⁷⁸Ni events were identified over a total beam-time of 104 h. The half-life obtained, 110 (+100, −60) ms, is lower than models predict. The measurement provides a first constraint for nuclear models and valuable experimental input to the understanding of the r process.
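
Extracting a half-life from just 11 decays is a small-statistics problem, which is why the quoted uncertainty is large and asymmetric. The sketch below is not the NSCL analysis (which also has to model backgrounds, daughter activities and detection efficiency); it only illustrates the generic technique of an unbinned maximum-likelihood fit of an exponential decay law to a handful of measured decay times, with invented numbers standing in for the data.

```python
import numpy as np

# Invented decay times in ms, standing in for a handful of observed events;
# the real measurement also has to model backgrounds and daughter decays.
t = np.array([35., 60., 80., 95., 120., 150., 170., 210., 260., 340., 480.])

def log_like(T12):
    """Unbinned log-likelihood for exponential decay with half-life T12 (ms)."""
    lam = np.log(2.0) / T12
    return np.sum(np.log(lam) - lam * t)

T12_grid = np.linspace(20.0, 1000.0, 5000)
ll = np.array([log_like(T) for T in T12_grid])
best = T12_grid[np.argmax(ll)]

# Approximate 1-sigma interval: half-lives whose log-likelihood lies
# within 0.5 of the maximum
inside = T12_grid[ll > ll.max() - 0.5]
print(f"T1/2 = {best:.0f} (+{inside.max()-best:.0f}, -{best-inside.min():.0f}) ms")
```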

The post Double dose of magic proves key to element production appeared first on CERN Courier.

]]>
https://cerncourier.com/a/double-dose-of-magic-proves-key-to-element-production/feed/ 0 News Researchers at Michigan State University's National Superconducting Cyclotron Laboratory (NSCL) have reported the first measurement of the half-life of nickel-78.
Model suggests dark energy is an illusion https://cerncourier.com/a/model-suggests-dark-energy-is-an-illusion/ https://cerncourier.com/a/model-suggests-dark-energy-is-an-illusion/#respond Thu, 05 May 2005 22:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/model-suggests-dark-energy-is-an-illusion/ Arguably the most fascinating question in modern cosmology is why the universe is expanding at an accelerating rate.

The post Model suggests dark energy is an illusion appeared first on CERN Courier.

]]>
Arguably the most fascinating question in modern cosmology is why the universe is expanding at an accelerating rate. An original solution to this puzzle has been put forward by four theoretical physicists: Edward Kolb of Fermilab, Sabino Matarrese of the University of Padova, Alessio Notari of the University of Montreal, and Antonio Riotto of the Italian National Institute for Research in Nuclear and Subnuclear Physics (INFN)/Padova. Their study has been submitted to the journal Physical Review Letters.

In 1998, observations of distant supernovae provided detailed information about the expansion rate of the universe, demonstrating that it is accelerating. This can be interpreted as evidence of “dark energy”, a new component of the universe, representing some 70% of its total mass. (Of the rest, about 25% appears to be another mysterious component, dark matter, while only about 5% consists of the ordinary “baryonic” matter.) Other explanations include a modification of gravity at large distances and more exotic ideas, such as the presence of a dynamic scalar field referred to as “quintessence”.

Although the hypothesis of dark energy is fascinating and more appealing than the other explanations, it faces a serious problem. Attempts to calculate the amount of dark energy give answers much larger than its measured magnitude: more than 100 orders of magnitude larger, in fact.

Kolb and colleagues offer an alternative explanation, which they say is rather conservative. They propose no new ingredient for the universe; instead, their explanation is firmly rooted in inflation, an essential concept of modern cosmology, according to which the universe experienced an incredibly rapid expansion at a very early stage.

The new explanation, which the researchers refer to as the Super-Hubble Cold Dark Matter (SHCDM) model, considers what would happen if there were cosmological perturbations with very long wavelengths (“super-Hubble”) larger than the size of the observable universe. They show that a local observer would infer an expansion history of the universe that would depend on the time evolution of the perturbations, which in certain cases would lead to the observation of accelerated expansion. The origin of the long-wavelength perturbations is inflation, as, effectively, the visible universe is only a tiny part of the pre-inflation-era universe. The accelerating universe is therefore simply an impression due to our inability to see the full picture.

Of course, observation is the ultimate arbiter between theories. The SHCDM model predicts a different relationship between luminosity-distance and redshift than the dark-energy models do. While the two models are indistinguishable within current experimental precision, more precise cosmological observations in the future should be able to distinguish between them.

The post Model suggests dark energy is an illusion appeared first on CERN Courier.

]]>
https://cerncourier.com/a/model-suggests-dark-energy-is-an-illusion/feed/ 0 News Arguably the most fascinating question in modern cosmology is why the universe is expanding at an accelerating rate. https://cerncourier.com/wp-content/uploads/2005/05/CCEnew3_05-05.jpg
Physics in the Italian Alps https://cerncourier.com/a/physics-in-the-italian-alps/ https://cerncourier.com/a/physics-in-the-italian-alps/#respond Thu, 05 May 2005 22:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/physics-in-the-italian-alps/ In February, 120 physicists travelled to the mountain village of La Thuile in Italy to discuss results and perspectives in particle physics. Michael Koratzinos reports.

The post Physics in the Italian Alps appeared first on CERN Courier.

]]>
Now in its 19th year, the Rencontres de Physique de la Vallée d’Aoste is known for being a vibrant winter conference, where presentations of new results and in-depth discussions are interlaced with time for skiing. Taking place in La Thuile, a village on the Italian side of Mont Blanc, it consistently attracts a balanced mix of young researchers and seasoned regulars from both theoretical and experimental high-energy physics. The 2005 meeting, which took place from 27 February to 5 March, was no exception.

As well as the standard sessions on particle physics, cosmology and astrophysics typical for such a conference, the organizers always try to include a round-table session on a topical subject, as well as a session on a wider-interest topic that tackles the impact of science on society. This year, the first of these sessions was Physics and the Feasibility of High-Intensity, Medium-Energy Accelerators, and the second was The Energy Problem.

Dark energy, WIMPs and cannon balls

An increasing number of experiments are trying to answer questions in high-energy physics by taking to the skies, making the distinction between particle physics and astronomy more fuzzy. The first session of the conference presented an impressive array of experiments and results, ranging from gravitational-wave detection to gamma-ray astronomy. The team working on the Laser Interferometer Gravitational-Wave Observatory (LIGO), with two fully functioning antennas 3000 km apart, now understands the systematics and has begun the fourth period of data-taking with improved sensitivity.

In gamma-ray astronomy, ground-based detectors – which detect the Cherenkov light emitted when gamma-ray-induced particle showers traverse the atmosphere – are constantly improving. The High Energy Stereoscopic System (HESS) in Namibia became fully operational in 2004 with a threshold of 100 GeV, while new detectors with thresholds as low as 20 GeV are in the pipeline. Satellite-based gamma-ray detectors have also provided some excitement, with the Energetic Gamma Ray Experiment Telescope (EGRET) observing an excess of diffuse gamma rays above 1 GeV, uniformly distributed over all directions in the sky.

This excess could be interpreted as due to the annihilation of neutralinos. The neutralino is the supersymmetric candidate of choice as a weakly interacting massive particle (WIMP) – a popular option for the dark matter of the universe. This prompted Dmitri Kazakov of the Institute for Theoretical and Experimental Physics (ITEP), Moscow, to state that “dark matter is the supersymmetric partner of the cosmic microwave background”, since neutralinos can be thought of as spin-½ photons.

The Gamma-Ray Large Area Space Telescope (GLAST) satellite, launching in 2007, will offer an important improvement in gamma-ray astronomy, with sensitivity to 10,000 gamma-ray sources compared with EGRET’s 200.

The DAMA/NaI collaboration raised some eyebrows. It reported an annual modulation of 6.3σ significance in data observed over seven years in its nuclear-recoil experiment at the Gran Sasso National Laboratory, which stopped taking data in 2002. This modulation could be interpreted as due to a WIMP component in the galactic halo, which is seen from Earth as a “wind” with different speeds, depending on the annual cycle. The collaboration’s study of possible backgrounds has not identified any process that could mimic such a signal, but other experiments have not observed a similar effect. The new set-up, DAMA/LIBRA, which is more than twice as big and started taking data in 2003, might shed some light.
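
The signature being sought has a simple expected form: for WIMPs in the galactic halo, the Earth's orbital motion adds to or subtracts from the Sun's velocity through the halo over the course of the year, so the counting rate in a given energy window should modulate as (a schematic of the expected signal shape, not DAMA's full analysis)

\[
S(t) = S_0 + S_m \cos\!\left[\frac{2\pi}{T}\,(t - t_0)\right],
\qquad T = 1~\mathrm{yr}, \quad t_0 \approx 2~\mathrm{June},
\]

where S_0 is the constant part of the rate and S_m the modulation amplitude; the quoted 6.3σ refers to the measured amplitude being incompatible with zero.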

Another way of looking for WIMPs is through their annihilations that produce antimatter. Antimatter in the universe is not produced in large quantities by standard processes, so any excess of antimatter seen would be exciting news for WIMP searchers. The Payload for Antimatter Matter Exploration and Light-Nuclei Astrophysics (PAMELA) satellite, due to be launched later this year, will provide valuable data on antiproton and positron spectra.

Alvaro De Rújula of CERN, using traditional (and increasingly rare) coloured transparencies written by hand, gave an account of his theory of gamma-ray bursts (GRBs), which has now developed into a theory of cosmic rays. Central to the theory are the cosmic “cannon balls”, objects ejected from supernovae with a density of one particle per cubic centimetre, and with a mass similar to that of the planet Mercury but a radius similar to that of the orbit of Mars. These cannon balls, moving through the interstellar medium at high speeds (with initial γ factors of the order of 1000), not only explain GRBs and their afterglows in a simple way, but also explain all features of cosmic-ray spectra and composition, at least semi-quantitatively, without the need to resort to fanciful new physics. What the theory does not attempt to explain, however, is how cannon balls are accelerated in the first place.

Dark energy was reviewed by Antonio Masiero of the University of Padova. Masiero pointed out that theories that do not associate dark energy with the cosmological constant do exist. One can assume, for instance, that general relativity does not hold over very long distances, or that there is some dynamical explanation, like an evolving scalar field that has not yet reached its state of minimum energy (known as a quintessence scalar field), or even that dark energy is tracking neutrinos. With the latter assumption, he came to the interesting conclusion that the mass of the neutrinos depends on their density, and therefore that neutrino mass changes with time. The cosmological constant or vacuum-energy approach, however, offers the less exotic explanation of dark energy.

Finally, Andreas Eckart of the University of Cologne reviewed our knowledge of black holes, with emphasis on the massive black hole at the centre of our own galaxy, Sagittarius A*. He played an impressive time sequence of observations taken over 10 years of the vicinity of this black hole, showing star orbits curving around it.

The golden age of neutrino experiments

The neutrino session began with Guido Altarelli of CERN, who reviewed the subject in some depth. Although impressive progress has been made during the past decade, there are unmeasured parameters that the new generation of experiments must address. The Antarctic Muon and Neutrino Detector Array (AMANDA), which uses the clean ice of the South Pole for neutrino detection, reported no signal from its search for neutrino point-sources in the sky, but the collaboration is already excited about its sequel, IceCube.

The Sudbury Neutrino Observatory (SNO) collaboration has added salt to its apparatus, to increase the detection efficiency by nearly a factor of three compared with the earlier runs. Analysis yields slightly smaller errors on Δm13 than K2K (KEK to Kamiokande), the long-baseline experiment in Japan, which reported on the end of data-taking. K2K is now handing over to the Main Injector Neutrino Oscillation Search (MINOS) in the US, which had recorded the first events in its near detector just in time for the conference. MINOS is similar in conception to K2K, but has a magnetic field in its fiducial volume – the first time in such an underground detector – and it will need three years of data-taking to provide competitive results.

The director of the Gran Sasso National Laboratory, Eugenio Coccia, gave a status report on the activities of the laboratory, which is undergoing an important safety and infrastructure upgrade following a chemical leak. The laboratory is the host of a multitude of experiments on neutrino and dark-matter physics. These include the Imaging Cosmic And Rare Underground Signals (ICARUS) and Oscillation Project With Emulsion Tracking Apparatus (OPERA) experiments for the future CERN Neutrinos to Gran Sasso (CNGS) project, and Borexino, which is the only experiment other than KamLAND in Japan that can measure low-energy solar neutrinos. The laboratory also houses neutrinoless double-beta-decay experiments.

Strong, weak and electroweak matters

In the session on quantum chromodynamics, Michael Danilov of ITEP had the unenviable task of reviewing the numerous experiments that have looked for pentaquarks. In recent years, there have been 17 reports of a pentaquark signal and 17 null results. Danilov justified his sceptical approach by pointing out various problems with the observed signals. The small width of the Θ⁺ is very unusual for strong decays. Moreover, this state has not been seen at the Large Electron Positron (LEP) collider, although this fact can be circumvented by assuming that the production cross-section falls with energy. However, the Belle experiment at KEK does not see the signal either, weakening the cross-section argument. The Θc is seen by the H1 experiment at HERA, but not by ZEUS or by the Collider Detector at Fermilab (CDF). Finally, many experiments have not seen the Ξ signal. Although Danilov thinks that the statistical significance of the reported signals has been overestimated, it is still too large to be a statistical fluctuation. The question will only be settled by high-statistics experiments coming soon.

Amarjit Soni of Brookhaven summarized our knowledge of charge-parity (CP) violation by emphasizing the success of the B-factories, the fact that the Cabibbo-Kobayashi-Maskawa paradigm is confirmed, and that we now know how to determine the unitarity triangle angles α and γ, as well as the previously known angle β.

The electroweak session began with a report on new results from LEP, which shows no sign of having said its final word yet. The running of α_QED has been the subject of a new analysis of Bhabha events at LEP. The results from the OPAL experiment, recently submitted for publication, give the strongest direct evidence for the running of α_QED ever achieved in a single experiment, with a significance above 5σ. Regarding the W mass, the combined LEP error now stands at 42 MeV, whereas at the Tevatron, Run II is being analysed and the error from CDF with 200 pb⁻¹ of data (a third of the collected data) is already smaller than that of their published Run I result. The Tevatron collaborations expect to achieve a 30-40 MeV error on the W mass with 2 fb⁻¹ of data. The search is on for the Higgs particle at Fermilab, with a new evaluation of the Tevatron’s reach: for a low-mass Standard Model Higgs, the integrated luminosity needed for a discovery (5σ) is 8 fb⁻¹, evidence (3σ) needs 3 fb⁻¹, while exclusion up to 130 GeV needs 4 fb⁻¹.
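
These luminosity targets are roughly consistent with the usual rule of thumb for a background-dominated search, in which the significance grows as the square root of the integrated luminosity:

\[
\frac{S}{\sqrt{B}} \propto \sqrt{L_{\mathrm{int}}}
\quad\Longrightarrow\quad
\frac{L_{5\sigma}}{L_{3\sigma}} \approx \left(\frac{5}{3}\right)^{2} \approx 2.8,
\]

which is in line with the quoted 3 fb⁻¹ for evidence versus 8 fb⁻¹ for discovery, to the precision of such estimates.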

From high intensity to future physics

The round-table discussion on physics and the feasibility of high-intensity, medium-energy accelerators was chaired by Giorgio Chiarelli of the University of Pisa, and after a short introduction he asked the panel members for their views. Pantaleo Raimondi of Frascati gave an overview of B and φ factories and Gino Isidori, also of Frascati, pointed to a series of interesting measurements that can be performed by a possible upgrade to the Double Annular Ring For Nice Experiments (DAFNE) set-up at Frascati, where the time schedule would be a key point.

Francesco Forti of Pisa discussed the possibility of a “super B-factory”. He noted that by 2009, 1 ab⁻¹-worth of B-physics data will be available around the world, and to have a real impact any new machine would need to provide integrated luminosities of the order of 50 ab⁻¹. Roland Garoby of CERN talked about a future high-intensity proton beam at CERN, where the need for a powerful proton driver, a necessary building block of future projects, has been identified. Finally, Franco Cervelli of Pisa reviewed the high-intensity frontier, including prospects for the physics of quantum chromodynamics, kaons, the muon dipole moment and neutrinos. A lively debate followed.

In the interesting science and society session on alternative energy sources, Durante Borda of the Instituto Superior Tecnico of Lisbon gave a detailed account of ITER, the prototype nuclear-fusion reactor that is expected to be the first of its kind to generate more energy than it consumes. ITER is designed to fuse deuterium (obtained from water) with tritium obtained in situ from lithium bombarded with neutrons, thereby creating helium and releasing energy (carried mainly by neutrons) that is captured as heat through heat exchangers. It is hoped that this ambitious project, with its many engineering challenges, will pave the way for commercial fusion-power plants.

This talk was followed by presentations on geothermal, solar, hydroelectric and wind energy, covering a wide spectrum of renewable energy resources. It was clear from the presentations that the problem of future energy production is complicated, and a clear winner has yet to emerge from these alternative energy sources.

In the session on physics beyond the Standard Model, Andrea Romanino of CERN did not make many friends among the community working towards the Large Hadron Collider (LHC) at CERN. He stated that “split supersymmetry” – a variation of supersymmetry (SUSY) that ignores the naturalness criterion – pushes the SUSY scale (and any SUSY particles) beyond reach of the LHC, although within reach of a future multi-tera-electron-volt collider.

Fabiola Gianotti of CERN appeared undeterred. She closed the session and the conference by giving a taste of the first data-taking period of the LHC to come. She reminded the audience that for Standard Model processes at least, one typical day at the LHC (at a luminosity of 10³³ cm⁻² s⁻¹) is equivalent to 10 years at previous machines.
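
The comparison rests on a simple conversion from instantaneous to integrated luminosity, sketched below: at 10³³ cm⁻² s⁻¹, a single day of running corresponds to an integrated luminosity of order 100 pb⁻¹, of the same order as typical multi-year data sets at earlier colliders (the larger cross-sections at LHC energies add further to the gain in event yields).

```python
# Rough conversion of the quoted instantaneous luminosity into an
# integrated luminosity for one day of running.
lumi = 1e33                 # cm^-2 s^-1
seconds_per_day = 86400.0
integrated = lumi * seconds_per_day   # cm^-2 accumulated in one day

# 1 pb = 1e-36 cm^2, so 1 pb^-1 corresponds to 1e36 cm^-2
print(integrated / 1e36, "pb^-1 per day")   # about 86 pb^-1
```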

• The conference series is organized by Giorgio Bellettini and Giorgio Chiarelli of the University of Pisa and Mario Greco of the University of Rome.

The post Physics in the Italian Alps appeared first on CERN Courier.

]]>
https://cerncourier.com/a/physics-in-the-italian-alps/feed/ 0 Feature In February, 120 physicists travelled to the mountain village of La Thuile in Italy to discuss results and perspectives in particle physics. Michael Koratzinos reports. https://cerncourier.com/wp-content/uploads/2005/05/CCEalp1_05-05-feature.jpg
Exploiting the synergy between great and small https://cerncourier.com/a/exploiting-the-synergy-between-great-and-small/ https://cerncourier.com/a/exploiting-the-synergy-between-great-and-small/#respond Tue, 29 Mar 2005 22:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/exploiting-the-synergy-between-great-and-small/ Cosmological discoveries shed new light on the nature of fundamental particles, and vice versa. This synergy between the greatest and the smallest components of the universe was the subject of the 2004 DESY Theory Workshop, held in Hamburg.

The post Exploiting the synergy between great and small appeared first on CERN Courier.

]]>
There are a number of astrophysical phenomena, notably in connection with cosmology and ultrahigh-energy cosmic rays, that open a new window onto particle physics and lead to a better microscopic understanding of matter, space and time. On the other hand, particle physics is often exploited to great depths for an ultimate understanding of astrophysical phenomena, in particular the structure and evolution of the universe. These frontier-physics issues attracted a record number of 188 participants to Hamburg for the latest annual DESY Theory Workshop, held on 28 September – 1 October 2004 and organized by Georg Raffelt.

The workshop started with the traditional day of introductory lectures aimed at young physicists, which covered the main topics of the later plenary sessions. Most of the participants jumped at this opportunity. At the end of the day, they had learned much about Big Bang cosmology, including the thermal history of the universe; about the evolution of small fluctuations in the early universe, and their imprints on the cosmic microwave background (CMB) radiation and the large-scale distribution of matter; and about how these initial fluctuations may emerge during an inflationary era of the universe. They were also up to date in ultrahigh-energy cosmic-ray physics. Thus the ground was laid for the workshop proper.

Highlighting the dark

In recent years, significant advances have been made in observational cosmology, as several plenary talks emphasized. Observations of large-scale gravity, deep-field galaxy counts and Type Ia supernovae favour a universe that is currently about 70% dark energy – accounting for the observed accelerating expansion of the universe – and about 30% dark matter. The position of the first Doppler peak in recent measurements of the CMB radiation, by for example the Wilkinson Microwave Anisotropy Probe (WMAP) satellite, strongly suggests that the universe is spatially flat. These values for the cosmological parameters, together with today’s Hubble expansion rate, are collectively known as the “concordance” model of cosmology, for they fit a wide assortment of cosmological data. Indeed, we have entered the era of precision cosmology, with the precision set to continue increasing in the coming decade as a result of further observational efforts. It is now the turn of theoretical particle physicists to explain these cosmological findings, in particular why the dominant contribution to the energy density of the present universe is dark and what it is made of microscopically.

Dark matter

Successful Big Bang nucleosynthesis requires that about 5% of the energy content of the universe is in the form of ordinary baryonic matter. But what about the remaining non-baryonic dark matter?

This 25% cannot be accounted for in the Standard Model of particle physics: the only Standard Model candidates for dark matter, the light neutrinos, were relativistic at the time of recombination and therefore cannot explain structure formation on small galactic scales. Studies of the formation of structure – as observed today by the Sloan Digital Sky Survey, for example – from primordial density perturbations measured in the CMB radiation yield an upper bound of about 2% on the energy fraction in massive neutrinos. This translates into an upper bound of around 1 eV for the sum of the neutrino masses. Observations, by means of the forthcoming Planck satellite, of distortions in the temperature and polarization of the CMB will improve the sensitivity in the sum of neutrino masses by an order of magnitude to 0.1 eV. This is comparable to the sensitivity of the future Karlsruhe Tritium Neutrino Experiment (KATRIN), which measures the neutrino mass via the tritium beta-decay endpoint spectrum, and of the planned second-generation experiments on neutrinoless double beta-decay.
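
The step from an energy fraction to a mass bound uses the standard relation between the neutrino density parameter and the sum of the neutrino masses; taking a dimensionless Hubble parameter of h ≈ 0.7 as an illustrative value,

\[
\Omega_\nu h^2 = \frac{\sum_i m_{\nu_i}}{93~\mathrm{eV}}
\quad\Longrightarrow\quad
\sum_i m_{\nu_i} \approx 93~\mathrm{eV} \times 0.02 \times (0.7)^2 \approx 0.9~\mathrm{eV},
\]

consistent with the quoted bound of around 1 eV.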

In theories beyond the Standard Model, there is no lack of candidates for the dominant component of dark matter. Notable viable candidates are the lightest supersymmetric partners of the known elementary particles, which arise in supersymmetric extensions of the Standard Model: the neutralinos, which are spin-½ partners of the photon, the Z-boson and the neutral Higgs bosons, and the gravitinos, which are spin-3/2 partners of the graviton. Showing that one of these particles accounts for the bulk of dark matter would not only answer a key question in cosmology, but would also shed new light on the fundamental forces and particles of nature.

While ongoing astronomical observations will measure the quantity and location of dark matter to greater accuracy, the ultimate determination of its nature will almost certainly rely on the direct detection of dark-matter particles through their interactions in detectors on Earth. Second-generation experiments such as the Cryogenic Dark Matter Search II (CDMS II) and the Cryogenic Rare Event Search with Superconducting Thermometers II (CRESST II), which are currently being assembled, will provide a serious probe of the neutralino as a dark-matter candidate.

Complementary, but indirect, information can be obtained from searches for neutrinos and gamma rays from neutralino-antineutralino annihilation, coming from the direction of particularly dense regions of dark matter, for example in the central regions of our galaxy, the Sun or the Earth. Ultimately, however, the proof of the existence of dark matter and the determination of its particle nature will have to come from searches at accelerators, notably CERN’s Large Hadron Collider (LHC). Even the gravitino, which is quite resistant to detection in direct and indirect dark-matter searches because it interacts only very feebly through the gravitational force, can be probed at the LHC.

Dark energy

In contrast with dark matter, dark energy has so far no explanation in particle physics. Apart from the observed accelerated expansion, the fact that we seem to be living at a special time in cosmic history, when dark energy appears only recently to have begun to dominate dark and other forms of matter, is also puzzling. Explanations put forth for dark energy range from the energy of the quantum vacuum to the influence of unseen space dimensions. Popular explanations invoke an evolving scalar field, often called “quintessence”, with an energy density varying in time in such a way that it is relevant today. Such an evolution may also be linked to a time variation of fundamental constants – a hot topic in view of recent indications of shifts in the frequencies of atomic transitions in quasar absorption systems, which hint that the electromagnetic fine-structure constant was smaller 7-11 billion years ago than it is today.

Depending on the nature of dark energy, the universe could continue to accelerate, begin to slow down or even recollapse. If this cosmic speed-up continues, the sky will become essentially devoid of visible galaxies in only 150 billion years. Until we understand dark energy, we cannot comprehend the destiny of the universe. Determining its nature may well lead to important progress in our understanding of space, time and matter.

The first order of business is to establish further evidence for dark energy and to discern its properties. The gravitational effects of dark energy are determined by its equation of state, i.e. the ratio of its pressure to its energy density. The more negative its pressure, the more repulsive the gravity of the dark energy. The dark energy influences the expansion rate of the universe, which in turn governs the rate at which structure grows, and the correlation between redshift and distance. Over the next two decades, high-redshift supernovae, counts of galaxy clusters, weak-gravitational lensing and the microwave background will all provide complementary information about the existence and properties of dark energy.
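
The statement about negative pressure can be made precise with the acceleration equation of Friedmann cosmology: for a dominant component with equation of state p = wρ (in units with c = 1),

\[
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p) = -\frac{4\pi G}{3}\,\rho\,(1 + 3w),
\qquad \ddot a > 0 \iff w < -\tfrac{1}{3},
\]

so accelerated expansion requires w < −1/3, with w = −1 corresponding to a cosmological constant.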

Inflationary ideas

The inflationary paradigm that the very early universe underwent a huge and rapid expansion is a bold attempt to extend the Big Bang model back to the first moments of the universe. It uses some of the most fundamental ideas in particle physics, in particular the notion of a vacuum energy, to answer many of the basic questions of cosmology, such as “Why is the observed universe spatially flat?” and “What is the origin of the tiny fluctuations seen in the CMB?”.

The exact cause of inflation is still unknown. Thermalization at the end of the inflationary epoch leads to a loss of details about the initial conditions. There is, however, a notable exception: inflation leaves a telltale signature of gravitational waves, which can be used to test the theory and distinguish between different models of inflation. The strength of the gravitational-wave signal is a direct indicator of what caused inflation. Direct detection of the gravitational radiation from inflation might be possible in the future with very-long-baseline, space-based, laser-interferometer gravitational-wave detectors. A promising shorter-term approach is to search for the signature of these gravitational waves in the polarized radiation from the CMB.

Matter Matters

The ordinary baryonic matter of which we are made is the tiny residue of the annihilation of matter and antimatter that emerged from the earliest universe in not-quite-equal amounts. This tiny imbalance may arise dynamically from a symmetric initial state if baryon number is not conserved in interactions that violate the conservation of C (C = charge conjugation) and the combination CP (P = parity), which produce more baryons than antibaryons in an expanding universe.

There are a few dozen viable scenarios for baryogenesis, all of which invoke physics beyond the Standard Model to a greater or lesser extent. A particularly attractive scenario is leptogenesis, according to which neutrinos play a central role in the origin of the baryon asymmetry. Leptogenesis predicts that the out-of-equilibrium, lepton-number violating decays of heavy Majorana neutrinos, whose exchange is also responsible for the smallness of the masses of the known light neutrinos, generate a lepton asymmetry in the early universe that is transferred into a baryon asymmetry by means of non-perturbative electroweak baryon- and lepton-number violating processes. Leptogenesis works nicely within the currently allowed window for the masses of the known light neutrinos.
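
The connection between the heavy Majorana neutrinos invoked here and the lightness of the known neutrinos is the seesaw relation: for a Dirac mass m_D of the order of the other fermion masses and a heavy Majorana scale M_R,

\[
m_\nu \simeq \frac{m_D^2}{M_R}
\qquad \text{(in matrix form } m_\nu \simeq -\,m_D\, M_R^{-1}\, m_D^{T}\text{)},
\]

so that, for example, m_D ~ 100 GeV and M_R ~ 10¹⁴ GeV give light-neutrino masses in the sub-eV range, roughly where the oscillation data place them.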

Heavenly accelerators

The Earth’s atmosphere is continuously bombarded by cosmic particles. Ground-based observatories have measured them in the form of extensive air showers with energies up to 3 × 10²⁰ eV, corresponding to centre-of-mass energies of 750 TeV, far beyond the reach of any accelerator here on Earth. We do not yet know the sources of these particles and thus cannot understand how they are produced.
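
The quoted centre-of-mass energy follows from fixed-target kinematics for a cosmic-ray proton striking a nucleon at rest in the atmosphere (a rough estimate that ignores nuclear effects):

\[
\sqrt{s} \simeq \sqrt{2\,m_p c^2\, E_{\mathrm{lab}}}
\simeq \sqrt{2 \times 0.94~\mathrm{GeV} \times 3\times 10^{11}~\mathrm{GeV}}
\approx 750~\mathrm{TeV}.
\]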

Astrophysical candidates for high-energy sources include active galaxies and gamma-ray bursts. Alternatively, a completely new constituent of the universe could be involved, such as a topological defect or a long-lived superheavy dark-matter particle, both associated with the physics of grand unification. Only by observing many more of these particles, including the associated gamma rays, neutrinos and perhaps gravitational waves, will we be able to distinguish these possibilities.

Identifying the sources of ultrahigh-energy cosmic rays requires several kinds of large-scale experiments, such as the Pierre Auger Observatory, currently under construction, to collect large enough data samples and determine the particle directions and energies precisely. Dedicated neutrino telescopes of cubic-kilometre size in deep water or ice, such as IceCube at the South Pole, can be used to search for cosmic sources of high-energy neutrinos. An extension of their sensitivity to the ultrahigh-energy regime above 10¹⁷ eV will offer possibilities to infer information about physics in neutrino-nucleon scattering beyond the reach of the LHC.

The post Exploiting the synergy between great and small appeared first on CERN Courier.

]]>
https://cerncourier.com/a/exploiting-the-synergy-between-great-and-small/feed/ 0 Feature Cosmological discoveries shed new light on the nature of fundamental particles, and vice versa. This synergy between the greatest and the smallest components of the universe was the subject of the 2004 DESY Theory Workshop, held in Hamburg. https://cerncourier.com/wp-content/uploads/2005/03/CCEexp1_04-05.jpg
Particles meet cosmology and strings in Boston https://cerncourier.com/a/particles-meet-cosmology-and-strings-in-boston/ https://cerncourier.com/a/particles-meet-cosmology-and-strings-in-boston/#respond Tue, 01 Mar 2005 00:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/particles-meet-cosmology-and-strings-in-boston/ PASCOS 2004 is the latest in the symposium series that brings together disciplines from the frontier areas of modern physics.

The post Particles meet cosmology and strings in Boston appeared first on CERN Courier.

]]>
PASCOS 2004 is the latest in the symposium series that brings together disciplines from the frontier areas of modern physics.

The Tenth International Symposium on Particles, Strings and Cosmology took place at Northeastern University, Boston, on 16-22 August 2004. Two days of the symposium, 18-19 August, were devoted to the Pran Nath Fest in celebration of the 65th birthday of Matthews University Distinguished Professor Pran Nath. The PASCOS symposium is the largest interdisciplinary gathering on the interface of the three disciplines of cosmology, particle physics and string theory, which have become increasingly entwined in recent years.

Topics at PASCOS 2004 included the large-scale structure of the universe, cosmic strings, inflationary models, unification scenarios based on supersymmetry and extra dimensions, M-theory and brane models, and string cosmology. Experimental talks discussed data from the Wilkinson Microwave Anisotropy Probe (WMAP), neutrino physics, the direct and the indirect detection of dark matter, B-physics and data from the CDF and D0 detectors at Fermilab’s Tevatron.

Cosmology and quantum gravity

The issue of dark matter in the universe and prospects for the future were reviewed by Joseph Silk of Oxford and Margaret Geller of the Harvard-Smithsonian Center for Astrophysics. Geller observed that, while the cosmic microwave background combined with large redshift surveys suggests a matter density of Ω_m ~ 0.3 in units of the critical density, direct dynamical measurements combined with estimates of the luminosity density indicate Ω_m = 0.1-0.2. She suggested that the apparent discrepancy may result from variations in the dark-matter fraction with mass and scale. She also suggested that gravitational lensing maps combined with large redshift surveys promise to measure the dark-matter distribution in the universe. The microwave background can also provide clues to inflation in the early universe. Eva Silverstein from SLAC discussed a new mechanism for inflation that results from a strong back-reaction on rolling scalar-field dynamics near regions with extra-light states. She claimed that this leads to a distinctive non-Gaussian signature in the cosmic microwave background, which can distinguish this mechanism from traditional slow-roll inflation.

Cosmology and particle physics connected again in a talk at the Nath Fest by Steven Weinberg of the University of Texas, Austin. He spoke on the analogy between perturbations to the Friedmann-Robertson-Walker cosmology and the Goldstone bosons of particle physics in his talk “Goldstone Bosons Through the Ages”. Ali Chamseddine of the Center for Advanced Mathematical Sciences, American University of Beirut, showed that consistency problems on the action for massive coloured gravitons can be resolved by employing spontaneous symmetry-breaking to give masses to gravitons.

In his talk on quantum gravity, Lee Smolin of Perimeter Institute described rigorous results and the possibility of testing them experimentally. He discussed possible violations of the Greisen-Zatsepin-Kuzmin (GZK) bound on the upper energies of cosmic rays, which may be observed by the Pierre Auger Observatory, and possible variations of the speed of light with energy, which would be observable by the GLAST gamma-ray observatory. Dark energy in the universe formed part of the talk by Gregory Tarlé of Michigan reviewing the SNAP (Supernova Acceleration Probe) satellite observatory.

Supersymmetry and strings

Strings featured at the symposium on both the cosmic and the fundamental particle scales. In a talk on cosmic strings, Alexander Vilenkin of Tufts presented their current status in view of recent developments in string cosmology. At the opposite end of the scale, other speakers discussed string- and brane-based models in particle physics. Mary K Gaillard of the University of California, Berkeley, presented results from studies of effective Lagrangian theories that arise from compactification of the weakly coupled heterotic string. Models based on D-branes and their implications were discussed by Mirjam Cvetic of Pennsylvania, while Richard Arnowitt from Texas A&M examined the gravitational forces felt by point particles on two 3-branes (the Planck brane and the tera-electron-volt brane) bounding a 5D anti de Sitter (AdS) space with S¹/Z₂ symmetry.

Nima Arkani-Hamed of Harvard and Michael Dine of the University of California, Santa Cruz, discussed string-based landscape scenarios from two different perspectives: whether the landscape does or does not predict low-energy supersymmetry. Arkani-Hamed argued for a high scale for supersymmetry or split supersymmetry, while Dine said that, under rather mild assumptions, the landscape seems to favour a low and possibly even a very low scale for supersymmetry breaking. In considering the possibility for inflation in string theory, Boris Kors from MIT discussed a Stückelberg extension of both the Standard Model and the Minimal Supersymmetric Standard Model, recently introduced in collaboration with Pran Nath. In this extension, the vector bosons become massive without the spontaneous symmetry breaking induced by the condensation of Higgs scalar fields. Furthermore, such an extension implies the existence of a sharp Z′ boson and may lead to a new lightest supersymmetric particle composed mainly of Stückelberg fermions. In this case, the signals of supersymmetry will change in a significant way and the Stückelberg fermion may become the new candidate for dark matter.

Experiment and phenomenology

A number of talks dealt with supersymmetry phenomenology, specifically with regard to searches for supersymmetry at particle colliders and in dark matter. Howard Baer of Florida State described the possibilities for direct and indirect detection of supersymmetric dark matter, as well as searches at colliders, within the minimal supergravity grand unification (mSUGRA) paradigm. Searches at colliders were also discussed by Xerxes Tata of Hawaii, this time in the light of data from WMAP and other experimental constraints on weakly interacting massive particles (WIMPs). On the experimental side, Rupak Mahapatra of the University of California, Santa Barbara, reported on the world’s lowest exclusion limits on the coherent WIMP-nucleon scalar cross-section for WIMP masses above 13 GeV/c² based on data from the Cryogenic Dark-Matter Search experiment at the Soudan Underground Laboratory. These results rule out a significant part of the parameter space of supersymmetric models.

David Cline of UCLA presented the current ZEPLIN II programme for the direct detection of dark matter as a prototype of large liquid-xenon detectors. He then described ZEPLIN IV and other 1 t liquid xenon detectors, and discussed the limiting backgrounds for such detectors in exploring the full range of the SUSY parameter space. Stefano Lacaprara of INFN, Padua, looked at the prospects for dark-matter searches at the Large Hadron Collider, and Rita Bernabei from INFN Rome reviewed the observation of dark-matter signals using the low-background NaI(Tl) detector of the DAMA dark-matter project in the Gran Sasso Laboratory.

Neutrinos and other particles

Several speakers at the symposium emphasized the promising future for neutrino physics and astrophysics. Vernon Barger from Wisconsin gave an in-depth presentation about the status and future prospects of precision neutrino physics. Haim Goldberg of Northeastern discussed galactic and extra-galactic neutrino sources, and Sandip Pakvasa from Hawaii showed how high-energy astrophysical neutrinos can provide information about neutrino lifetimes and mass hierarchies. Tom Weiler of Vanderbilt reviewed the particle physics and astrophysics information encoded in the energy spectrum, arrival directions and the flavour content of such cosmic neutrinos.

The detection of high-energy neutrinos was discussed by Stefan Schlenstedt of DESY-Zeuthen, who gave an update on the AMANDA experiment at the South Pole and the construction of the IceCube experiment for the observation of high-energy neutrinos. Luis Anchordoqui of Northeastern University gave an overview of the current status of the Pierre Auger Observatory being built to detect the highest-energy cosmic rays.

At lower energies, there are new measurements of the solar neutrino spectrum at the Sudbury Neutrino Observatory, using salt to enhance the detection of neutral currents. These were presented by José Maneira of Queen’s University, who also described the prospects for using strings of ³He proportional counters to increase the sensitivity by a factor of two. Nikolai Tolich from Stanford presented the improved measurement from KamLAND of Δm² versus sin²2θ for neutrino oscillations, while Ion Stancu of Alabama covered the status of the MiniBooNE neutrino oscillation experiment. Hans Volker Klapdor-Kleingrothaus of MPI-Heidelberg discussed the evidence for neutrinoless double-beta-decay using data from the Heidelberg-Moscow experiment, which shows a signal at the 4.2 σ level, and discussed its consequences for particle physics.
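
The Δm² and sin²2θ quoted by KamLAND are the parameters of the standard two-flavour survival probability for reactor antineutrinos (a simplified two-flavour expression that neglects matter effects), so the experiment constrains the oscillation amplitude and frequency at the same time:

\[
P(\bar\nu_e \to \bar\nu_e) = 1 - \sin^2 2\theta\;
\sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right).
\]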

Other aspects of particle physics were not neglected. Shiro Suzuki from Saga University presented new results from the Belle experiment at KEK on the measurement of time-dependent charge-parity (CP) violation in b→s penguin processes. These yield an average value 2.4 σ away from the Standard Model value.

Continuing with B-physics, Stefano Passaggio of INFN Genova reported the direct observation of CP violation at BaBar in B→K+π at a confidence level of 4.2 σ. Results from DESY’s HERA collider and prospects for HERA II were reviewed by Chiara Genta of INFN Florence, while electroweak results from LEP2, the upgraded Large Electron Positron collider at CERN, were summarized by Roberto Chierici of CERN. Markus Schumacher from Bonn presented results of searches for new physics by the LEP experiments. Recent results from the D0 experiment at Fermilab were presented by Pushpalatha Bhat from Fermilab and Nick Hadley of Maryland. Those from CDF were presented by Un-ki Yang of Chicago and Dmitri Tsybychev from SUNY, Stonybrook. Ernst Sichtermann of Lawrence Berkeley National Laboratory gave the latest status of the muon g-2 experiment at Brookhaven, and William Marciano of Brookhaven reviewed the theoretical implications of the g-2 results.

Other talks dealt with a range of interdisciplinary topics. In his status report on using lattice quantum chromodynamics (QCD) in the calculation of light quark masses and the CP-violation parameter BK, Rajan Gupta of Los Alamos was able to weave in some early history of the lattice gauge calculations from his time at Northeastern University in the early 1980s. Roman Jackiw of MIT discussed the consequences of a vanishing Cotton tensor, which ensures that the 3D gravitational Chern-Simons term is stationary. He showed that this condition leads to kink solutions and that the effective theory is a new type of dilaton gravity.

• PASCOS 2005 will be held in the 1600-year-old Korean town of Gyeong-Ju.

The post Particles meet cosmology and strings in Boston appeared first on CERN Courier.

]]>
https://cerncourier.com/a/particles-meet-cosmology-and-strings-in-boston/feed/ 0 Feature PASCOS 2004 is the latest in the symposium series that brings together disciplines from the frontier areas of modern physics. https://cerncourier.com/wp-content/uploads/2005/03/CCEpar1_03-05-feature.jpg
The Future of Theoretical Physics and Cosmology: Celebrating Stephen Hawking’s 60th Birthday https://cerncourier.com/a/the-future-of-theoretical-physics-and-cosmology-celebrating-stephen-hawkings-60th-birthday/ Fri, 12 Nov 2004 11:38:25 +0000 https://preview-courier.web.cern.ch/?p=105499 Stephen Hawking's 60th birthday was celebrated in Cambridge, UK, with a meeting attended by many well-known theoretical physicists. This volume is based on lectures given at the meeting.

The post The Future of Theoretical Physics and Cosmology: Celebrating Stephen Hawking’s 60th Birthday appeared first on CERN Courier.

]]>
by G W Gibbons, E P S Shellard and S J Rankin (eds), Cambridge University Press. Hardback ISBN 0521820812, £40 ($60).

Stephen Hawking’s 60th birthday was celebrated in Cambridge, UK, with a meeting attended by many well-known theoretical physicists. This volume is based on lectures given at the meeting. It begins with talks by Martin Rees, James Hartle, Roger Penrose, Kip Thorne and Hawking himself given at a public symposium that formed part of the conference. Subsequent chapters cover advanced presentations on space-time singularities, black holes, Hawking radiation, quantum gravity, M-theory, cosmology and quantum cosmology.

The post The Future of Theoretical Physics and Cosmology: Celebrating Stephen Hawking’s 60th Birthday appeared first on CERN Courier.

]]>
Review Stephen Hawking's 60th birthday was celebrated in Cambridge, UK, with a meeting attended by many well-known theoretical physicists. This volume is based on lectures given at the meeting. https://cerncourier.com/wp-content/uploads/2022/08/9780521820813-feature.jpg
Finding the real world in a foam bath https://cerncourier.com/a/finding-the-real-world-in-a-foam-bath/ https://cerncourier.com/a/finding-the-real-world-in-a-foam-bath/#respond Thu, 11 Nov 2004 00:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/finding-the-real-world-in-a-foam-bath/ A new approach to simulating quantum geometry suggests that starting with a random froth, one might expect a world of three dimensions of space and one of time to appear naturally at large scales.

The post Finding the real world in a foam bath appeared first on CERN Courier.

]]>
A new approach to simulating quantum geometry suggests that starting with a random froth, one might expect a world of three dimensions of space and one of time to appear naturally at large scales. J Ambjørn of the Niels Bohr Institute in Copenhagen, J Jurkiewicz of Jagellonian University in Krakow, and R Loll of Utrecht University have added one crucial ingredient to the randomness – causality, or a speed limit set by the speed of light – and this turns out to be enough to yield a world much like the one we live in. The authors comment that to their knowledge this is the “first example of a theory of quantum gravity that generates a quantum space-time with such properties dynamically.”

The post Finding the real world in a foam bath appeared first on CERN Courier.

]]>
https://cerncourier.com/a/finding-the-real-world-in-a-foam-bath/feed/ 0 News A new approach to simulating quantum geometry suggests that starting with a random froth, one might expect a world of three dimensions of space and one of time to appear naturally at large scales.
Singular Null Hypersurfaces in General Relativity https://cerncourier.com/a/singular-null-hypersurfaces-in-general-relativity/ Sun, 03 Oct 2004 11:55:47 +0000 https://preview-courier.web.cern.ch/?p=105519 This book presents a comprehensive view of the mathematical theory of impulsive light-like signals in general relativity.

The post Singular Null Hypersurfaces in General Relativity appeared first on CERN Courier.

]]>
by C Barrabès and P A Hogan, World Scientific. Paperback ISBN 9812387374, £36 ($48).

This book presents a comprehensive view of the mathematical theory of impulsive light-like signals in general relativity. Such signals can result from cataclysmic astrophysical events, and as the sub-title of “Light-like signals from astrophysical events” suggests, the topic has applications in relativistic astrophysics and cosmology, as well as in alternative theories of gravity deduced from string theory.

The post Singular Null Hypersurfaces in General Relativity appeared first on CERN Courier.

]]>
Review This book presents a comprehensive view of the mathematical theory of impulsive light-like signals in general relativity. https://cerncourier.com/wp-content/uploads/2022/08/41Zo7M0AmVL._SX338_BO1204203200_.jpg
Theory and experiment peer across the frontier https://cerncourier.com/a/theory-and-experiment-peer-across-the-frontier/ https://cerncourier.com/a/theory-and-experiment-peer-across-the-frontier/#respond Mon, 07 Jun 2004 22:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/theory-and-experiment-peer-across-the-frontier/ The fourth conference in the "Beyond" series presented a clear overview of - and beyond - the current frontiers of particle physics, astrophysics and cosmology.

The post Theory and experiment peer across the frontier appeared first on CERN Courier.

]]>
New developments in extensions of the Standard Model, through supergravity, superstrings and extra dimensions, were among the highlights of “Beyond the Desert 03 – Accelerator, Non-accelerator and Space Approaches”, which was held last year in Castle Ringberg in Tegernsee, Germany. Supergravity had recently celebrated its 20th birthday and two of its “inventors” – Pran Nath and Richard Arnowitt – were among the participants at the conference.

Nath, of Northeastern University, Boston, summarized the developments of minimal supergravity grand unification (mSUGRA) and its extensions since the formulation of these models in 1982, while Arnowitt, from Texas A&M, highlighted the connection to dark matter and the value of g-2 of the muon. Focusing on quantum gravity, Alon Faraggi of Oxford argued that the experimental data of the past decade suggest that the quantum-gravity vacuum should possess two key ingredients – the existence of three generations and their embedding into SO(10) representations. He explained that the Z₂ × Z₂ orbifold of the heterotic string provides examples of vacua that accommodate these properties. He also showed that three generations require a non-perturbative breaking of the grand unification gauge group, and in this context examined the issue of mass and mixing in the neutrino versus the quark systems.

Fundamental physics, including fundamental symmetries, formed another important aspect of the meeting. Peter Herczeg from Los Alamos reviewed CPT-invariant, and CP- and P-violating electron-quark interactions in extensions of the Standard Model. Turning to fundamental constants, Harald Fritzsch of Munich discussed astrophysical indications that the fine structure constant has undergone a small time variation during the cosmological evolution, within the framework of the Standard Model and grand unification. The case where the variation is caused by a time variation of the unification scale is particularly interesting.

Interferometry

The potential of neutron interferometry for tests of fundamental physics was outlined by Helmut Rauch of Vienna. Recent experiments in neutron interferometry, based on post-selection methods, have renewed the discussion about quantum non-locality and the quantum measuring process. It has been shown that interference phenomena can be revived when the overall interference pattern has lost its contrast. This indicates a persistent coupling in phase space, even in cases of spatially separated Schrödinger-cat-like situations.

Interesting developments in general relativity and aspects of special relativity were also discussed at the conference. Mayeul Arminjon of Grenoble presented a new “scalar ether theory” of gravitation. One of the motivations for trying such an alternative approach is to solve problems that occur in general relativity and in most extensions of it – namely the existence of singularities and the interpretation of the gauge condition. Arminjon showed that this scalar theory fits nicely with observations on binary pulsars. Lorenzo Iorio of Bari reported on new perspectives in testing the general relativistic Lense-Thirring effect. Turning to experiment, the present status of the search for gravitational waves was outlined by Peter Aufmuth of Hannover. Only astrophysical events, such as supernovae, or compact objects, for example, black holes and neutron stars, produce detectable gravitational wave amplitudes. The current generation of resonant-mass antennas and laser interferometers has reached the sensitivity necessary to detect gravitational waves from sources in the Milky Way. Within a few years the next generation of detectors will open the field of gravitational astronomy.

Cosmological connections

Talks about the early universe included cosmological, quantum-gravitational and other possible violations of CPT symmetry. Nick Mavromatos of King’s College, London, discussed the various ways in which CPT symmetry may be violated, and reviewed their phenomenology in current or near-future experimental facilities, both terrestrial and astrophysical. First he outlined violations of CPT symmetry due to the impossibility of defining a scattering matrix as a consequence of the existence of microscopic or macroscopic space-time boundaries, such as Planck-scale black-hole event horizons or cosmological horizons due to the presence of a positive cosmological constant in the universe. Second he discussed CPT violation due to the breaking of Lorentz symmetry, which may characterize certain approaches to quantum gravity. He stressed that although most of the Lorentz-violating cases of CPT breaking are already excluded by experiment, there are some (stringy) models that can evade these constraints.

Trans-Planckian physics was discussed by Ulf Danielsson of Uppsala, who outlined how the cosmic microwave background radiation might probe physics at or near the Planck scale. Danielsson reviewed a potential modulation of the power spectrum of primordial density fluctuations generated through trans-Planckian (maybe stringy) effects during inflation.

Margarida Rebelo of Lisbon discussed CP violation in the leptonic sector at both low and high energies in the framework of the “seesaw” mechanism. She pointed out that leptogenesis is a possible and likely explanation for the observed baryon asymmetry of the universe. It seems to be one of the most promising scenarios, in view of the fact that several other alternative proposals are on the verge of being ruled out. The leptogenesis scenario implies constraints on both light and heavy neutrino masses, which, as she showed, are consistent with the present value obtained from the double beta decay of ⁷⁶Ge.

Cosmoparticle physics was another major theme of the conference. Maxim Khlopov of Rome and Moscow gave a broad overview of the topic, calling it the “Challenge for the Millennium”, and results linking particle-physics experiments with cosmological problems, and vice versa, were among the experimental highlights.

The existence of dark matter in the universe has for many years been an intriguing problem. Rita Bernabei of Rome presented the final results of the DAMA dark-matter experiment, which confirm their first indications for the observation of cold dark matter at a 6 σ level. Measurements of the cosmic microwave background by the Wilkinson Microwave Anisotropy Probe (WMAP), which are revealing the proportions of dark matter – and dark energy – in the universe, were presented by Eiichiro Komatsu of Princeton. Neutrino parameters are also deducible from this experiment, as well as from current large-scale galaxy surveys, as Steen Hannestad of Odense described. However, the cosmic microwave background experiments cannot at present differentiate between the different neutrino-mass scenarios.

Neutrino highlights

Moving on to ground-based studies of neutrino properties, Hans Volker Klapdor-Kleingrothaus of MPI Heidelberg presented results from the Heidelberg-Moscow double beta decay experiment in the Gran Sasso Laboratory for the period 1990-2003. With three additional years of data included in the analysis, the evidence for neutrinoless double beta decay has now improved to the 4.2 σ level. For 10 years this experiment has been the most sensitive double beta decay experiment worldwide, and with the statistics now reached it has essentially already achieved what was expected scientifically from the larger GENIUS project proposed in 1997. The conclusion from this result is that total lepton number is not conserved (neutrino oscillations reveal only the violation of family lepton number), which has fundamental consequences for the early universe. Furthermore, according to the Schechter-Valle theorem, the existence of neutrinoless double beta decay implies that the neutrino is a Majorana particle. (The announcement of the start of the GENIUS Test Facility in Gran Sasso in May 2003 is now of most interest in the context of the search for dark matter: its goal is to confirm the DAMA result by looking for the seasonal modulation signal.)
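Translating such a half-life into a neutrino mass relies on the standard factorization of the neutrinoless double beta decay rate, sketched here in its usual schematic form (the precise nuclear matrix elements are model dependent and are not taken from the talk):

\[ \left[T_{1/2}^{0\nu}\right]^{-1} = G^{0\nu}\,\bigl|M^{0\nu}\bigr|^{2}\,\frac{\langle m_{\beta\beta}\rangle^{2}}{m_e^{2}} , \qquad \langle m_{\beta\beta}\rangle = \Bigl|\sum_i U_{ei}^{2}\,m_i\Bigr| , \]

where G0ν is a calculable phase-space factor and M0ν the nuclear matrix element; the spread among matrix-element calculations dominates the uncertainty in the extracted effective Majorana mass.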


On the theoretical side, Mariana Kirchbach of San Luis Potosi in Mexico stressed the importance of double beta decay for fixing the absolute scale of the neutrino mass spectrum. She argued that for Majorana neutrinos the mass might lead to unexpected results in single beta decay: in this scenario a sensitive tritium decay experiment should see no mass effect, while the neutrinoless double beta decay rate would retain its dependence on the neutrino mass. Ernest Ma of Riverside outlined how rather precise knowledge of the neutrino oscillation parameters, i.e. the correct form of the 3 × 3 neutrino mass matrix, may be obtained from symmetry principles. He showed that these symmetries predict three nearly degenerate Majorana neutrinos with masses in the 0.2 eV range. This theoretical result is of great interest in view of the results from double beta decay, WMAP and other observations.

Contributions to fundamental physics, obtained using Penning traps, were outlined by one of the pioneers of the field, Ingmar Bergstrom of Stockholm. A Penning trap is a storage device in which frequency measurements can be used to determine the mass of electrons and ions, as well as g-factors of electrons and positrons, with extremely high accuracy. Bergstrom has recently measured, for example, the Q value of the double beta decay of 76Ge with unprecedented precision.
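The principle is simply that of the cyclotron motion of a stored particle: for charge q and mass m in a magnetic field B (the textbook relation, given here for orientation),

\[ \omega_c = \frac{qB}{m} , \]

so comparing the cyclotron frequencies of two ion species stored in the same field gives their mass ratio with a precision limited essentially by how well the frequencies can be measured.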

Other experimental highlights on neutrinos included the results obtained for solar neutrinos by the Sudbury Neutrino Observatory (SNO). As George Ewan of Kingston, Canada, described, SNO now has strong evidence at a 5.3 σ level, and independently of the details of solar models, that neutrinos change flavour on their way from the Sun to the Earth. These results, together with those of other neutrino experiments, among them the Japanese 250 km long-baseline experiment that was presented by Takashi Kobayashi of KEK, mean that our knowledge of neutrino properties has improved considerably over the past few years. In this context, Oliver Manuel of Missouri gave a highly interesting, non-mainstream view of the structure of the solar core.

Supernova and relic neutrinos were the topic of another session. Irina Vladimirovna Krivosheina of Heidelberg and Nizhny Novgorod, a member of the Baksan group – one of the three groups that observed neutrinos from the supernova SN1987A – gave a retrospective view of this exciting event, with some insider details of its discovery. Mark Vagins of Irvine and Shinichiro Ando of Tokyo discussed further the observation of relic and supernova neutrinos, one of the future tasks of the Super-Kamiokande experiment in Japan.

Accelerator approaches

Turning to the physics of nuclei, results on superheavy elements have reached an exciting level. Dieter Ackermann showed that elements 107-112 have been synthesized and unambiguously identified at GSI, Darmstadt. The observation of elements 112, 116 and 118 by the Oganessian group at Dubna was also announced by Vladimir Utyonkov. At the interface between nuclear physics and particle physics, the status of the search for a phase transition between hadronic matter and a quark-gluon plasma at Brookhaven’s Relativistic Heavy Ion Collider was outlined by Raimond Snellings of Amsterdam, and compared with measurements at CERN’s Super Proton Synchrotron.


Several sessions were devoted to the search for new physics with colliders. The final analyses of the searches for Higgs bosons, R-parity violation, leptoquarks and exotic couplings at CERN and Fermilab, presented by Rosy Nikolaidou of CEA Saclay, Silvia Costantini of Rome "La Sapienza", Stefan Soeldner-Rembold of Manchester and others, show no indication of physics beyond the Standard Model. This reinforces the observation that the only new physics to emerge recently has come from underground experiments.

Particles from space

Nearly a century after the discovery of cosmic rays, their origins are still unknown. Eckart Lorenz of Munich reviewed the status and perspectives of ground-based gamma-ray astronomy, where new telescopes under construction, such as MAGIC, should lead to a big step in sensitivity. At gamma-ray energies of around 10-30 GeV the universe becomes essentially transparent, so gamma-emitting objects out to redshifts of more than three should become visible – that is, back to a time when star and galaxy formation was particularly vigorous. New projects such as MAGIC will close the gap between satellite-borne instruments and previous ground-based telescopes. Exciting results from the CANGAROO experiment, an array of four imaging Cherenkov telescopes in Australia, were presented by Ken'ichi Tsuchiya of Tokyo. The team has observed TeV gamma rays from the supernova remnant SN1006 and from new types of object, such as the normal spiral galaxy NGC253, which shows starburst activity. This is the first detection of gamma rays from an extragalactic object other than an active galactic nucleus, and the most extended structure detected so far at these energies.
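The transparency threshold mentioned above follows from the kinematics of pair production on background light; the numbers below are standard ones, added only for orientation. A gamma ray of energy Eγ can be absorbed on a background photon of energy ε only if, for a head-on collision,

\[ E_\gamma\,\varepsilon \gtrsim (m_e c^{2})^{2} \simeq 0.26\ \mathrm{MeV}^{2} , \]

so a 30 GeV photon needs far-ultraviolet background light (ε ≳ 9 eV) to produce an e⁺e⁻ pair, and the universe is correspondingly transparent to it out to large redshifts.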

The Auger Observatory is under construction and will look for cosmic rays at the highest energies. It will be the largest cosmic-ray detector ever built, covering 3000 square kilometres in both the southern and northern hemispheres in its final configuration. Johannes Bluemer of Karlsruhe described the present status of the construction at the southern site in Argentina, which began in 1999.

The highest cosmic-ray energies, beyond the Greisen-Zatsepin-Kuzmin limit, find an interesting theoretical explanation in the Z-burst scenario, in which a large fraction of the cosmic rays are decay products of Z bosons produced in the scattering of ultra-high-energy neutrinos on cosmological relic neutrinos. This was discussed by Daniel Fargion of Rome and Sandor Katz of DESY and Budapest. Interestingly, for this explanation to work properly they find that neutrinos should have a mass in the range 0.1-1 eV, which is consistent with the result of the Heidelberg-Moscow experiment.
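The quoted mass range follows from simple resonance kinematics (the numerical values below are the standard ones, not taken from the talks): an ultra-high-energy neutrino annihilates on a relic neutrino, essentially at rest, when the centre-of-mass energy hits the Z pole, i.e. at

\[ E_{\rm res} = \frac{M_Z^{2}}{2\,m_\nu} \simeq 4\times10^{21}\ \mathrm{eV}\,\left(\frac{1\ \mathrm{eV}}{m_\nu}\right) , \]

so masses in the 0.1-1 eV range place the resonance at 10²¹-10²² eV, whose decay products can populate the region above the Greisen-Zatsepin-Kuzmin cut-off.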

Hunting for antimatter

The search for antimatter (and dark matter) with the Alpha Magnetic Spectrometer, which is planned to be installed on the International Space Station in 2005/2006 for a three-year mission, was discussed by Frank Raupach of Aachen. The existence of large domains of antimatter in the universe is still an open question. The observed uniformity of the cosmic microwave background indicates that no voids separate putative matter and antimatter regions, so if such domains existed, annihilation at their boundaries would be inevitable and the resulting diffuse gamma-ray spectrum should be observable.

Returning to neutrinos, but this time from space, Christian Spiering of Zeuthen gave an overview of results from AMANDA, the neutrino telescope at the South Pole, and Jan-Arys Dzhilkibaev reviewed the status and perspectives of the Baikal Neutrino Project. Finally, Yoshitaka Kuno from Osaka outlined the goals of future neutrino and muon factories. A neutrino factory would have great potential for examining the mass hierarchy of neutrinos, the matter effects, and CP violation in the neutrino sector. A rich physics programme would also be possible with a high-intensity muon beam at a muon factory, ranging from searches for muon processes that violate lepton flavour (such as µ to e conversion) and the muon electric dipole moment to further precision measurements of the muon magnetic moment (g-2). Lepton flavour violation in the charged sector will be studied also by the muon to electron conversion experiment, MECO, presented by Michael Herbert of Irvine.

In summary, the lively and highly stimulating atmosphere during this Beyond meeting reflected a splendid scientific future for particle physics. The proceedings of Beyond 03 are now available as a book, Beyond the Desert 2003, Springer Proceedings in Physics, vol 92.

The W and Z at LEP https://cerncourier.com/a/the-w-and-z-at-lep/ https://cerncourier.com/a/the-w-and-z-at-lep/#respond Mon, 03 May 2004 22:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/the-w-and-z-at-lep/ The Large Electron Positron collider made significant contributions to the process of establishing the Standard Model as the basis for matter and forces, and also built a platform for physics scenarios beyond the model.


The Standard Model of particle physics is arguably one of the greatest achievements in physics in the 20th century. Within this framework the electroweak interactions, as introduced by Sheldon Glashow, Abdus Salam and Steven Weinberg, are formulated as an SU(2) x U(1) gauge field theory with the masses of the fundamental particles generated by the Higgs mechanism. Both of the first two crucial steps in establishing experimentally the electroweak part of the Standard Model occurred at CERN. These were the discovery of neutral currents in neutrino scattering by the Gargamelle collaboration in 1973, and only a decade later the discovery by the UA1 and UA2 collaborations of the W and Z gauge bosons in proton-antiproton collisions at the converted Super Proton Synchrotron.


Establishing the theory at the quantum level was the next logical step, following the pioneering theoretical work of Gerard ‘t Hooft and Martinus Veltman. Such experimental proof is a necessary requirement for a theory describing phenomena in the microscopic world. At the same time, performing experimental analyses with high precision also opens windows to new physics phenomena at much higher energy scales, which can be accessed indirectly through virtual effects. These goals were achieved at the Large Electron Positron (LEP) collider.


LEP also provided indirect evidence for the fourth step in this process, establishing the Higgs mechanism for generating mass. However, the final word on this must await experimentation in the near future at the Large Hadron Collider.

The beginnings of LEP

Before LEP started operating in 1989, the state of the electroweak sector could be described by a small set of characteristic parameters. The masses of the W and Z bosons had been measured to an accuracy of a few hundred MeV, and the electroweak mixing parameter sin²θW had been determined at the percent level. This accuracy allowed the top-quark mass to be predicted at 130 ± 50 GeV, but no bound could be derived on the Higgs mass.

The idea of building such an e⁺e⁻ collider in the energy region up to 200 GeV was put forward soon after the first highly successful operation of smaller machines in the early 1970s at energies of a few GeV. The physics potential of such a high-energy facility was outlined in a seminal CERN yellow report (figure 1).


LEP finally started operation in 1989, equipped with four universal detectors, ALEPH, DELPHI, L3 and OPAL. The machine operated in two phases. In the first phase, between 1989 and 1995, 18 million Z bosons were collected, while in the second phase, from 1996 to 2000, some 80,000 W bosons were generated at energies gradually climbing from the W-pair threshold to the maximum of 209 GeV. The machine performance was excellent at all the energy steps.

Phase I: Z physics

The Z boson in the Glashow-Salam-Weinberg model is a mixture of the neutral isospin SU(2) and the hypercharge U(1) gauge fields, with the mixing parameterized by sin²θW. The Z boson interacts with vector and axial-vector currents of matter. The Z-matter couplings, including the mixing angle, are affected by radiative corrections so that high-precision analyses allow both tests at the quantum level and extrapolations to new scales of virtual particles.

The properties of the Z boson and the underlying electroweak theory were studied at LEP by measuring the overall formation cross-section, the forward-backward asymmetries of the leptons and quarks, and the polarization of tau leptons. Outstandingly clear events were observed in each of the four detectors (see figure 2). As a result, the experimental analyses of the Z line-shape (see figure 3) of the decay branching ratios and the asymmetries were performed with a precision unprecedented in high-energy experiments (see equation 1 for all Z data, including SLD).


Thus, the electroweak sector of the Standard Model successfully passed the examination at the per-mille level, as highlighted by global analysis of the electroweak mixing parameter sin²θW. This is truly in the realm where quantum theory is the proper framework for formulating the laws of nature. Figure 4 shows the observables that were precisely measured at LEP. The picture is uniform in all the observables, with deviations from the average line a little above and below 2σ only in the forward-backward asymmetry of the b-quark jets, and the left-right polarization asymmetry measured at the Stanford Linear Collider facility.

However, beyond this most stringent test of the electroweak theory itself, Z physics at LEP allowed important conclusions to be drawn on several other aspects of the Standard Model and potential physics beyond.


The first of these concerned the three families of leptons in the Standard Model. The number of light neutrinos could be determined by comparing the Z width as measured in the Breit-Wigner line-shape with the visible lepton and quark-decay channels. The ensuing difference determines the number of light neutrino species to be three: Nν = 2.985 ± 0.008. Thus, LEP put the lid on the Standard Model with three families of matter particles.
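The arithmetic behind this result is simple. The invisible width is what remains of the total Z width after subtracting the hadronic and the three charged-leptonic partial widths, and dividing it by the Standard Model width for a single neutrino species counts the families. With approximate present-day values for the partial widths (inserted here only as an illustration, not taken from the article):

\[ N_\nu = \frac{\Gamma_Z - \Gamma_{\rm had} - 3\,\Gamma_{\ell\ell}}{\Gamma_{\nu\bar\nu}^{\rm SM}} \simeq \frac{(2495 - 1744 - 3\times 84)\ \mathrm{MeV}}{167\ \mathrm{MeV}} \approx 3.0 . \]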

The physics of the top quark was another real success story at LEP. Not only could the existence of this heaviest of all quarks be predicted from LEP data, but the mass could also be pre-determined with amazing accuracy from the analysis of quantum corrections – a textbook example of the fruitful co-operation of theory and experiment. By analysing rate and angular asymmetries in Z decays to b-quark jets at LEP and complementing this set with production rates at the lower energy collider PETRA, the isospin of the b-quark could be uniquely determined (figure 5). From the quantum number I3L = -1/2, the existence of an isospin +1/2 partner to the bottom quark could be derived conclusively – in other words, the top quark.

There was more than this, however. Virtual top quarks affect the masses and the couplings of the electroweak gauge bosons, in particular the relation between the Fermi coupling GF of beta decay, the Sommerfeld fine-structure constant α, the electroweak mixing sin²θW and the mass of the Z boson, MZ. This correction is parameterized in the ρ parameter and increases quadratically in the top-quark mass: Δρ ~ GF mt². This led to the prediction of mt = 173 +12/−13 +18/−20 GeV, before top quarks were established at the Tevatron and their mass confirmed by direct observation. This was truly a triumph of high-precision experimentation at LEP coupled with theoretical high-precision calculations at the quantum level of the Standard Model.
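The quadratic sensitivity quoted above comes from the standard one-loop expression for the top-quark contribution to the ρ parameter, reproduced here in its textbook form for orientation:

\[ \Delta\rho_{\rm top} = \frac{3\,G_F\,m_t^{2}}{8\sqrt{2}\,\pi^{2}} \approx 0.0096\ \left(\frac{m_t}{175\ \mathrm{GeV}}\right)^{2} , \]

a shift at the one-per-cent level, which the per-mille precision of the Z-pole measurements could resolve long before the top quark was produced directly.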

Beyond the electroweak sector

Z physics at LEP has also contributed to our knowledge of quantum chromodynamics (QCD), the theory of strong interactions in the complete SU(3) x SU(2) x U(1) Standard Model. As was already apparent from the study of PETRA jets at DESY, the clean environment of electron-positron collisions enables these machines to be used as precision tools for studying QCD. At LEP several remarkable observations contributed to putting QCD on a firm experimental basis.


Firstly, with the measurement of the QCD coupling αs = 0.1183 ± 0.0027 at the scale MZ and the jet analysis of the running from low energies at PETRA to high energies at LEP, the validity of asymptotic freedom could be demonstrated in a wonderful way (see figure 6a). Secondly, the observation of the three-gluon self-coupling in four-jet final states of Z-boson decays enabled QCD to be established as a non-abelian gauge theory (see figure 6b). With the measured value CA = 3.02 ± 0.55, the strength of the three-gluon coupling agrees with the predicted value CA = 3 for non-abelian SU(3), and is far from the value of zero in any abelian “QED type” field theory without self-coupling of the gauge bosons. Thirdly, in the same way as couplings run, quark masses change when weighed at different energy scales, induced by the retarded motion of the surrounding gluon cloud. This effect was observed in a unique way by measuring the b-quark mass at the Z scale (see figure 6c).
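The running displayed in figure 6a is governed, at leading order, by the familiar one-loop renormalization-group expression (quoted here only for orientation; the LEP analyses of course use higher orders):

\[ \alpha_s(Q^{2}) = \frac{\alpha_s(M_Z^{2})}{1 + b_0\,\alpha_s(M_Z^{2})\,\ln\!\bigl(Q^{2}/M_Z^{2}\bigr)} , \qquad b_0 = \frac{33 - 2 n_f}{12\pi} , \]

with nf the number of active quark flavours; the positive b0 of a non-abelian theory is what drives asymptotic freedom.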

There is one further triumph of the Z-physics programme. When extrapolating the three couplings associated with the gauge symmetries SU(3) x SU(2) x U(1) in the Standard Model to high energies, they approach each other but do not really meet at the same point. This is different if the particle spectrum of the Standard Model is extended by supersymmetric partners. Independently of the mass values, so long as they are in the TeV region, the new degrees of freedom provided by supersymmetry make the couplings converge to an accuracy close to 2% (see figure 7). This opens up the exciting vista that the electromagnetic, weak and strong forces of the Standard Model may be unified at an energy scale close to 10¹⁶ GeV, while at the same time giving support to supersymmetry, a symmetry that may be intimately related to gravity, the fourth of the forces we observe in nature.
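The convergence can be made quantitative with the one-loop evolution of the three inverse couplings. The coefficients below are the standard Standard Model and MSSM values in the usual GUT normalization of the hypercharge, given as an illustration rather than taken from the LEP analyses themselves:

\[ \alpha_i^{-1}(Q) = \alpha_i^{-1}(M_Z) - \frac{b_i}{2\pi}\,\ln\frac{Q}{M_Z} , \qquad b_i^{\rm SM} = \Bigl(\tfrac{41}{10},\,-\tfrac{19}{6},\,-7\Bigr) , \quad b_i^{\rm MSSM} = \Bigl(\tfrac{33}{5},\,1,\,-3\Bigr) . \]

With the MSSM coefficients the three lines meet near 10¹⁶ GeV to within a couple of per cent; with the Standard Model spectrum alone they miss one another.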


Phase II: W physics

Gauge field theories appear to be the theoretical framework within which the three fundamental particle forces can be understood. The gauge symmetry theory was introduced by Hermann Weyl as the basic symmetry principle of quantum electrodynamics; the scheme was later generalized by C N Yang and R L Mills to non-abelian gauge symmetries, before being recognized as the basis of the (electro) weak and strong interactions.

One of the central tasks of the LEP experiments at energies beyond the W-pair threshold was the analysis of the electroweak three-gauge-boson couplings, predicted in form and magnitude by the gauge symmetry. A first glimpse was also caught of the corresponding four-boson couplings.


Charged W⁺W⁻ pairs are produced in e⁺e⁻ collisions by three different mechanisms: neutrino exchange, and photon- and Z-boson exchanges. From the steep increase of the excitation curve near the threshold and from the reconstruction of the W bosons in the leptonic and hadronic decay modes, the mass MW and the width ΓW can be determined with high precision (see equation 2).

This value of the directly measured W mass is in excellent agreement with the value extracted indirectly from radiative corrections.

Any of the three production mechanisms for W⁺W⁻ pairs, if evaluated separately, leads to a cross-section that rises indefinitely with energy. However, the amplitudes interfere destructively as a result of the gauge symmetry, and the final cross-section is damped for large energies. The prediction of gauge cancellations is clearly borne out by the LEP data (see figure 8), thus confirming the crucial impact of gauge symmetries on the dynamics of the electroweak Standard Model sector in a most impressive way.


The role of the gauge symmetries can be quantified by measuring the static electroweak parameters of the charged W bosons, i.e. the monopole charges (gW), the magnetic dipole moments (µW) and the electric quadrupole moments (qW) of the W bosons coupled to the photon and Z boson. For the photon coupling gW = e, µW = 2 × e/(2MW) and qW = −e/MW², and analogously for the Z coupling. These predictions have been confirmed experimentally within a margin of a few percent.
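These static properties are conventionally extracted by fitting the general C- and P-conserving triple-gauge-boson vertex. In the customary parametrization (given here for reference, with the Standard Model values κγ = 1 and λγ = 0):

\[ \mu_W = \frac{e}{2 M_W}\bigl(1 + \kappa_\gamma + \lambda_\gamma\bigr) , \qquad q_W = -\frac{e}{M_W^{2}}\bigl(\kappa_\gamma - \lambda_\gamma\bigr) , \]

so the LEP limits on the anomalous couplings κγ − 1 and λγ translate directly into the few-per-cent statements quoted above.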

Studying the quartic (four-boson) couplings requires three-boson final states. Some first analyses of W⁺W⁻γ final states bounded any anomalies to less than a few percent.

Hunting the Higgs


The fourth step in establishing the Standard Model experimentally – the search for the Higgs particle – could not be completed by LEP. Nevertheless, two important results could be reported by the experiments. The first of these was to estimate the mass of the Higgs when acting as a virtual particle. By emitting and reabsorbing a virtual Higgs boson, the masses of electroweak bosons are slightly shifted. In parallel to the top quark, this effect can be included in the ρ parameter. With Δρ ~ GF MW² log(MH²/MW²), the effect is however only logarithmic in the Higgs mass, so that the sensitivity is reduced considerably. Nevertheless, from the celebrated "blue-band plot", a most probable value of about 100 GeV in the Standard Model, though with large error, is indicated by evaluating the entire set of established precision data (see figure 9). An upper bound close to 200 GeV has been found in the analysis shown in equation 3a.
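The contrast with the top quark is worth making explicit: because the Higgs boson enters only logarithmically, doubling its mass barely moves the prediction. Taking MW ≈ 80.4 GeV (a standard value, used here only to illustrate the point),

\[ \ln\frac{M_H^{2}}{M_W^{2}} \approx 0.4 \ \ (M_H = 100\ \mathrm{GeV}) \qquad \text{versus} \qquad 1.8 \ \ (M_H = 200\ \mathrm{GeV}) , \]

which is why the precision data constrain the Higgs mass only to within a factor of about two.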


Thus, in the framework of the Standard Model and a large class of possible extensions, LEP data point to a Higgs mass in the moderately small, intermediate mass range. This is corroborated by individual analyses of all the observables, except the forward-backward asymmetry of b-jets. (This indirect evidence for a light Higgs sector is complemented by indirect counter-evidence against a large class of models constructed for generating mechanisms of electroweak symmetry breaking by new strong interactions.)

The direct search for the real production of the Higgs particle at LEP through the "Higgs-strahlung" process, e⁺e⁻ → ZH, set a stringent lower limit on the mass of the particle in the Standard Model (see equation 3b).
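The reach of this search is fixed essentially by kinematics: in e⁺e⁻ → ZH the Higgs boson and the Z must share the collision energy, so with the maximum LEP energy quoted earlier,

\[ M_H \lesssim \sqrt{s} - M_Z \approx 209 - 91 \approx 118\ \mathrm{GeV} , \]

which explains why the final limit sits just below 115 GeV once realistic luminosity and backgrounds are folded in.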


However, we have been left with a 1.7σ effect for Higgs masses in excess of 115 GeV, fuelled by the four-jet channel in one experiment. “This deviation, although of low significance, is compatible with a Standard Model Higgs boson in this mass range, while also being in agreement with the background hypothesis.” (LEP Higgs Working Group.)


LEP’s legacy

Based on the high-precision measurements by the four experiments, ALEPH, DELPHI, L3 and OPAL, and in coherent action with a complex corpus of theoretical analyses, LEP achieved an impressive set of fundamental results, the traces of which will be imprinted in the history of physics. LEP firmly established essential elements of the Standard Model at the quantum level. It provided indirect evidence for the existence of a light Higgs boson of the type required by the Standard Model. The extrapolations of the three gauge couplings measured at LEP point to the grand unification of the individual gauge interactions at a high-energy scale – compatible with the supersymmetric extension of the Standard Model in the TeV range.


In addition, the precision analyses performed at LEP probed the many physics scenarios beyond the Standard Model, constraining their parameters in the ranges between the upper LEP energy to the TeV and multi-TeV scales. These studies have led to a large number of bounds on masses of supersymmetric particles, masses and mixings of novel heavy gauge bosons, scales of extra space-time dimensions, radii of leptons and quarks, and many other examples.

•The figures and the experimental numbers are from the four LEP experiments, the LEP Electroweak Working Group, the LEP Higgs Working Group, G Altarelli, S Bethke, D Haidt, W Porod, D Schaile and R Seuster.

This article is based on a talk given by Peter Zerwas at the symposium held at CERN in September 2003 entitled “1973: neutral currents, 1983: W± and Z0 bosons. The anniversary of CERN’s discoveries and a look into the future.” The full proceedings will be published as volume 34 issue 1 of The European Physical Journal C. Hardback ISBN: 3540207503.

Space goes quantum at Stony Brook https://cerncourier.com/a/space-goes-quantum-at-stony-brook/ https://cerncourier.com/a/space-goes-quantum-at-stony-brook/#respond Mon, 03 May 2004 22:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/space-goes-quantum-at-stony-brook/ Does a melting crystal provide the key to developing a quantum description of gravity? Advances at the first Simons Workshop point to a connection.


Like players on a stage, most forces act in a fixed, pre-existing space, but in Einstein’s classical theory of general relativity gravity is the dynamic shape of space. When classical forces enter the quantum arena the stage plays a new and more visible role: in its usual formulation quantum theory demands a ground state. The ground state is a fixed vacuum, above which excitations – which we can calculate and often measure to astonishing accuracy – propagate and interact. Because the vacuum is fixed it is not a surprise that for decades there was no convincing way to apply quantum principles to gravity.

Attempts were made to use the techniques of quantum field theory, which successfully quantizes light, to quantize Einstein’s theory. In this approach, just as light is described by a particle, the photon, gravity is described by a particle, the graviton. This is already a compromise as the graviton is a quantum ripple in a pre-assumed space that is not quantized. More drastically, the quantum field theory for the graviton fails because what should be small quantum corrections overwhelm the classical approximation, giving uncontrollable infinite modifications of the theory.
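The dimensional argument behind this failure is simple (the numbers are the standard ones, not tied to any particular calculation mentioned here): Newton's constant carries the dimensions of an inverse mass squared, so each additional loop brings a factor of the energy squared over the Planck mass squared,

\[ M_{\rm Pl} = G_N^{-1/2} \approx 1.2\times10^{19}\ \mathrm{GeV} , \qquad \text{corrections at $n$ loops} \sim \left(\frac{E}{M_{\rm Pl}}\right)^{2n} , \]

harmless at accessible energies but requiring an ever-growing number of counterterms, and hence new input, as the energy approaches the Planck scale.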

String theory solved part of this problem. In string theory the graviton is a string vibrating in one of its possible patterns, or “modes”. As the string moves through space it splits and rejoins itself (figure 1). These bifurcations and recombinations are the “stringy” quantum corrections, which are milder than those in quantum field theory and give rise to a quantum theory of gravity that is well defined and finite as a systematic expansion in the number of splittings. But the strings still move in space rather than being a part of it; they are quantized, while space itself remains stubbornly classical. Furthermore, it is not known if this expansion around Einstein’s classical theory can be summed to give a completely defined quantum theory.

Even before string theory John Wheeler suggested that at the Planck scale, the distance where the quantum corrections to gravity become large, the topology and geometry of space-time are unavoidably subject to quantum fluctuations (Wheeler 1964). This idea of a space-time quantum “foam” was explored by Stephen Hawking but has remained mysterious (Hawking 1978). New developments from an unexpected direction, however, have now given hints of an underlying, fundamentally quantum theory of strings that realizes these ideas: a mapping has been found to the theory of how crystals melt. In this picture classical geometry corresponds to a macroscopic crystal and quantum geometry to its underlying microscopic, atomic description.

These ideas grew out of discussions between a string theorist, Cumrun Vafa of Harvard, and two mathematicians, Nikolai Reshetikhin of UC Berkeley and Andrei Okounkov of the Institute for Advanced Study. They were brought together at the first of a series of interdisciplinary workshops held at the C N Yang Institute for Theoretical Physics and supported by the Simons Foundation at Stony Brook University in the summer of 2003.

The full theory of strings is still not understood well enough to formulate the problem of strings moving in a quantum space in complete generality. Instead, Okounkov, Reshetikhin and Vafa studied a part (or “sector”) of string theory called “topological string theory”. For concreteness, they focused on topological strings moving in a special class of spaces known as Calabi-Yau (CY) spaces.

Topological string theory is a simplification of the full theory of strings in which the motion of strings does not depend on the details of the space through which they move. As such it is mathematically more tractable than the full theory. At the same time, CY spaces are very interesting in string theory. In part this is because they are candidates for the as yet unobserved six dimensions that complement our familiar three dimensions of space and one of time. A widely discussed string-theory scenario is that the familiar four-dimensional space-time and a six-dimensional CY space combine to make up the 10-dimensional space-time that is required for the self-consistency of string theory.


Many interesting properties of topological string theory on CY spaces have already become known through the work of Vafa and collaborators over the past few years. In particular they have shown that quantum corrections can be computed. These corrections are given by the relative likelihoods that strings split and join as they move in the space. The potential for one string to split or join is measured by a number called the “string coupling”. This is actually a measure of the force between strings – in string theory forces are generated between two strings when one splits and one of its parts joins with the other. The larger the string coupling the more likely it is that this will happen and the stronger the force. These calculations are fine as long as the string coupling is small, but they become unmanageable when the coupling gets too large.

The crystal connection

At last summer’s Simons Workshop, Okounkov, Reshetikhin and Vafa realized that a formula describing the topological string splitting on a CY space also has a completely different interpretation involving a crystal composed of a regular array of idealized atoms (Okounkov et al. 2003). When they identified the temperature of the crystal with the inverse of the string coupling, the likelihood of an atom leaving the lattice became the same as that of a string splitting. Once this connection is made, the same formula that describes the splitting of the strings describes the melting of the crystal (figure 2).
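In the simplest setting, topological strings on flat three-dimensional complex space, the dictionary can be written in one line (a schematic statement of the result, suppressing the details needed for more general Calabi-Yau spaces). The crystal partition function counts three-dimensional stacking configurations, the plane partitions π, and its generating function, the MacMahon function, reproduces the all-genus topological string amplitude once the Boltzmann weight is identified with the string coupling:

\[ Z_{\rm crystal}(q) = \sum_{\pi} q^{|\pi|} = \prod_{n=1}^{\infty}\bigl(1 - q^{n}\bigr)^{-n} = Z_{\rm top} , \qquad q = e^{-g_s} . \]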

At high temperatures the idealized crystal melts into a smooth surface with a well-defined shape. This surface is a two-dimensional portrait of a CY space, called a “projection” of the space (figure 3). At these temperatures the string coupling is small and topological strings can be described in terms of the calculable quantum corrections. However, as the string coupling and hence the force between strings increases, the strings split so often it is unclear how to compute their behaviour in string theory. But increasing string coupling means decreasing temperature, and at low temperatures the crystal theory comes to the rescue. The crystal becomes simple at low temperatures, with most atoms fixed in their positions in the lattice. This means that the smooth surface of the melted crystal is replaced by the discrete structure of the lattice. The CY space naturally becomes discrete.

This led Okounkov, Reshetikhin and Vafa to conclude that topological string theory and crystal theory are “dual” descriptions of a single underlying system valid for the whole range of weak and strong string coupling, or equivalently, high and low temperatures, respectively. In particular when the string coupling is small, quantum fluctuations appear only at scales much smaller than the natural size of the strings themselves, and the picture of smooth strings remains self-consistent.


The new picture that emerges from this duality is that of a “quantum” CY geometry. To understand what this means it is worth recalling that in a classical space of any kind each point is specified by a set of numbers, or co-ordinates. Examples of co-ordinates are the longitude and latitude of the Earth’s surface. In the quantum CY space the co-ordinates are no longer simple numbers to be specified at will. Rather they obey the Heisenberg uncertainty principle, which relates the position and momentum of a quantum particle. For the quantum CY spaces of Okounkov, Reshetikhin and Vafa’s dual description of topological string theory, the long-standing dream of replacing a smooth classical space with a discrete quantum substructure is thus realized. In this system the emergence of a classical geometry out of a quantum system can be clearly controlled and understood. As is shown in further work by Vafa et al., this gives an explicit and controllable picture of the Wheeler-Hawking notion of topological fluctuations – or “foam” – in space-time (Iqbal et al. 2003). The fluctuations of topology and geometry actually become the deep origin of strings. They extend rather than reduce the predictive power of the quantum theory of gravity.

Of course many challenges remain before a full theory of this kind can be realized. Chief among these is the extension of the picture from topological strings to full string theory. A possible path has been identified, however, suggesting that in string theory, as in Einstein’s gravity, the distinction between forces and the space in which they act melts away.

The Global Approach to Quantum Field Theory https://cerncourier.com/a/the-global-approach-to-quantum-field-theory/ Wed, 31 Mar 2004 09:43:06 +0000 https://preview-courier.web.cern.ch/?p=105605 Luis Alvarez-Gaume reviews in 2004 The Global Approach to Quantum Field Theory.

by Bryce DeWitt, Oxford University Press (vols I and II). Hardback ISBN 0198510934, £115 ($230).


It is difficult to describe or even summarize the huge amount of information contained in this two-volume set. Quantum field theory (QFT) is the most basic language in which to express the fundamental laws of nature. It is a difficult language to learn, not only because of its technical intricacies but also because it contains so many conceptual riddles, even more so when the theory is considered in the presence of a gravitational background. The applied field-theory techniques used in concrete computations of cross-sections and decay rates are scarce in this book, probably because they are adequately explained in many other texts. The driving force of these volumes is to provide, from the beginning, a manifestly relativistically invariant construction of QFT.

Early in the book we come across objects such as Jacobi fields, Peierls brackets (as a replacement of Poisson brackets), the measurement problem, Schwinger’s variational principle and the Feynman path integral, which form the basis of many things to come. One advantage of the global approach is that it can be formulated in the presence of gravitational fields. There are various loose ends in regular expositions of QFT that are clearly tied in the book, and one can find plenty of jewels throughout: for instance a thorough analysis of the measurement problem in quantum mechanics and QFT, something that is hard to find elsewhere. The treatment of symmetries is rather unique. DeWitt introduces local (gauge) symmetries early on; global symmetries follow at the end as a residue or bonus. This is a very modern point of view that is spelt out fully in the book. In the Standard Model, for example, the global symmetry (B-L, baryon minus lepton number) appears only after we consider the most general renormalizable Lagrangian consistent with the underlying gauge symmetries. In most modern approaches to the unification of fundamental forces, global symmetries are quite accidental. String theory is an extreme example where all symmetries are related to gauge symmetries.

There are many difficult and elaborate technical areas of QFT that are very well explained in the book, such as heat-kernel expansions, the quantization of gauge theories, quantization in the presence of gravity and so on. There are also some conceptually difficult and profound questions that DeWitt addresses head on with authority and clarity, including the measurement problem mentioned previously and the Everett interpretation of quantum mechanics and its implications for quantum cosmology. There is also a cogent and impressive study of QFT in the presence of black holes, their Hawking emission, the final-state problem for quantum black holes, and much else besides.

The book’s presentation is very impressive. Conceptual problems are elegantly exhibited and there is an inner coherent logic of exposition that could only come from someone who had long and deeply reflected on the subject, and made important contributions to it. It should be said, however, that the book is not for the faint hearted. The level is consistently high throughout its 1042 pages. Nonetheless it does provide a deep, uncompromising review of the subject, with both its bright and dark sides clearly exposed. One can read towards the end of the preface: “The book is in no sense a reference book in quantum field theory and its applications to particle physics…”. I agree with the second statement but strongly disagree with the first.

Fundamentals in Hadronic Atom Theory https://cerncourier.com/a/fundamentals-in-hadronic-atom-theory/ Tue, 09 Dec 2003 12:38:08 +0000 https://preview-courier.web.cern.ch/?p=105666 This is the first book to describe the theory of hadronic atoms and the unique laboratory they provide for studying hadronic interactions at threshold.

by A Deloff, World Scientific. Hardback ISBN 9812383719, £46 ($68).


This is the first book to describe the theory of hadronic atoms and the unique laboratory they provide for studying hadronic interactions at threshold. With an emphasis on recent developments, it is aimed at advanced students and researchers in nuclear, atomic and elementary particle physics.
