Astrophysics and cosmology Archives – CERN Courier
Reporting on international high-energy physics

Neutron stars as fundamental physics labs

Neutron stars are truly remarkable systems. They pack between one and two times the mass of the Sun into a radius of about 10 kilometres. Teetering on the edge of gravitational collapse into a black hole, they exhibit some of the strongest gravitational forces in the universe. They feature densities in excess of those of atomic nuclei, and due to these high densities they produce weakly interacting particles such as neutrinos. Fifty experts on nuclear physics, particle physics and astrophysics met at CERN from 9 to 13 June to discuss how to use these extreme environments as precise laboratories for fundamental physics.

Perhaps the most intriguing open question surrounding neutron stars is what is actually inside them. Clearly they are primarily composed of neutrons, but many theories suggest that other forms of matter should appear in the highest density regions near the centre of the star, including free quarks, hyperons and kaon or pion condensates. Diverse data can constrain these hypotheses, including astronomical inferences of the masses and radii of neutron stars, observations of the mergers of neutron stars by LIGO, and baryon production patterns and correlations in heavy-ion collisions at the LHC. Theoretical consistency is critical here. Several talks highlighted the importance of low-energy nuclear data for understanding the behaviour of nuclear matter at low densities, while emphasising that at very high densities and energies any description should fall within the realm of QCD – a theory that beautifully describes the dynamics of quarks and gluons at the LHC.

Another key question for neutron stars is how fast they cool. This depends critically on their composition. Quarks, hyperons, nuclear resonances, pions or muons would each lead to different channels to cool the neutron star. Measurements of the temperatures and ages of neutron stars might thereby be used to learn about their composition.


The workshop revealed that research into neutron stars has progressed so rapidly in recent years that it now allows key tests of fundamental physics, including searches for particles beyond the Standard Model such as the axion: a very light and weakly coupled dark-matter candidate initially postulated to explain the “strong CP problem” of why strong interactions are identical for particles and antiparticles. The workshop allowed particle theorists to appreciate the various possible uncertainties in their theoretical predictions and to propagate them into new channels that may allow sharper tests of axions and other weakly interacting particles. An intriguing question that the workshop left open is whether the canonical QCD axion could condense inside neutron stars.

While many uncertainties remain, the workshop revealed that the field is open and exciting, and that upcoming observations of neutron stars, including neutron-star mergers or the next galactic supernova, hold unique opportunities to understand fundamental questions from the nature of dark matter to the strong CP problem.

The battle of the Big Bang

Review: Battle of the Big Bang provides an entertaining update on the collective obsessions and controlled schizophrenias in cosmology, writes Will Kinney.

As Arthur Koestler wrote in his seminal 1959 work The Sleepwalkers, “The history of cosmic theories … may without exaggeration be called a history of collective obsessions and controlled schizophrenias; and the manner in which some of the most important individual discoveries were arrived at, reminds one more of a sleepwalker’s performance than an electronic brain’s.” Koestler’s trenchant observation about the state of cosmology in the first half of the 20th century is perhaps even more true of cosmology in the first half of the 21st, and Battle of the Big Bang: The New Tales of Our Cosmic Origins provides an entertaining – and often refreshingly irreverent – update on the state of current collective obsessions and controlled schizophrenias in cosmology’s effort to understand the origin of the universe. The product of a collaboration between a working cosmologist (Afshordi) and a science communicator (Halper), Battle of the Big Bang tells the story of our modern efforts to comprehend the nature of the first moments of time, back to the moment of the Big Bang and even before.

Rogues gallery

The book combines lucid explanations of a rogues’ gallery of modern cosmological theories – some astonishingly successful, others less so – with anecdotes culled from Halper’s numerous interviews with key players in the game. These stories of the real people behind the theories add humanistic depth to the science, and the balance between Halper’s engaging storytelling and Afshordi’s steady-handed illumination of often esoteric scientific ideas is mostly a winning combination; the book is readable, without sacrificing too much scientific depth. In this respect, Battle of the Big Bang is reminiscent of Dennis Overbye’s 1991 Lonely Hearts of the Cosmos. As with Overbye’s account of the famous conference-banquet fist fight between Rocky Kolb and Gary Steigman, there is no shortage here of renowned scientists behaving like children, and the “mean girls of cosmology” angle makes for an entertaining read. The story of University of North Carolina professor Paul Frampton getting catfished by cocaine smugglers posing as model Denise Milani and ending up in an Argentine prison, for example, is not one you see coming.

Battle of the Big Bang: The New Tales of Our Cosmic Origins

A central conflict propelling the narrative is the longstanding feud between Andrei Linde and Alan Guth, both originators of the theory of cosmological inflation, and Paul Steinhardt, also an originator, who later transformed into an apostate and bitter critic of the theory he helped establish.

Inflation – a hypothesised period of exponential cosmic expansion by more than 26 orders of magnitude that set the initial conditions for the hot Big Bang – is the gorilla in the room, a hugely successful theory that over the past several decades has racked up win after win when confronted by modern precision cosmology. Inflation is rightly considered by most cosmologists to be a central part of the “standard” cosmology, and its status as a leading theory inevitably makes it a target of critics like Steinhardt, who argue that inflation’s inherent flexibility means that it is not a scientific theory at all. Inflation is introduced early in the book, and for the remainder, Afshordi and Halper ably lead the reader through a wild mosaic of alternative theories to inflation: multiverses, bouncing universes, new universes birthed from within black holes, extra dimensions, varying light speed and “mirror” universes with reversed time all make appearances, a dizzying inventory of our most recent collective obsessions and schizophrenias.

In the later chapters, Afshordi describes some of his own efforts to formulate an alternative to inflation, and it is here that the book is at its strongest; the voice of a master of the craft confronting his own unconscious assumptions and biases makes for compelling reading. I have known Niayesh as a friend and colleague for more than 20 years. He is a fearlessly creative theorist with deep technical skill, but he has the heart of a rebel and a poet, and I found myself wishing that the book gave his unique voice more room to shine, instead of burying it beneath too many mundane pop-science tropes; the book could have used more of the science and less of the “science communication”. At times the pop-culture references come so thick that the reader feels as if he is having to shake them off his leg.

Compelling arguments

Anyone who reads science blogs or follows science on social media is aware of the voices, some of them from within mainstream science and many from further out on the fringe, arguing that modern theoretical physics suffers from a rigid orthodoxy that serves to crowd out worthy alternative ideas to understand problems such as dark matter, dark energy and the unification of gravity with quantum mechanics. This has been the subject of several books such as Lee Smolin’s The Trouble with Physics and Peter Woit’s Not Even Wrong. A real value in Battle of the Big Bang is to provide a compelling counterargument to that pessimistic narrative. In reality, ambitious scientists like nothing better than overturning a standard paradigm, and theorists have put the standard model of cosmology in the cross hairs with the gusto of assassins gunning for John Wick. Despite – or perhaps because of – its focus on conflict, this book ultimately paints a picture of a vital and healthy scientific process, a kind of controlled chaos, ripe with wild ideas, full of the clash of egos and littered with the ashes of failed shots at glory.

What the book is not is a reliable scholarly work on the history of science. Not only was the manuscript rather haphazardly copy-edited (the renowned Mount Palomar telescope, for example, is not “two hundred foot”, but in fact 200 inches), but the historical details are sometimes smoothed over to fit a coherent narrative rather than presented in their actual messy accuracy. While I do not doubt the anecdote of David Spergel saying “we’re dead”, referring to cosmic strings when data from the COBE satellite was first released, it was not COBE that killed cosmic strings. The blurry vision of COBE could accommodate either strings or inflation as the source of fluctuations in the cosmic microwave background (CMB), and it took a clearer view to make the distinction. The final nail in the coffin came from BOOMERanG nearly a decade later, with the observation of the second acoustic peak in the CMB. And it was not, as claimed here, BOOMERanG that provided the first evidence for a flat geometry to the cosmos; that happened a few years earlier, with the Saskatoon and CAT experiments.


The book makes a point of the premature death of Dave Wilkinson, when in fact he died at age 67, not (as is implied in the text) in his 50s. Wilkinson – who was my freshman physics professor – was a great scientist and a gifted teacher, and it is appropriate to memorialise him, but he had a long and productive career.

Besides these points of detail, there are some more significant omissions. The book relates the story of how the Ukrainian physicist Alex Vilenkin, blacklisted from physics and working as a zookeeper in Kharkiv, escaped the Soviet Union. Vilenkin moved to SUNY Buffalo, where I am currently a professor, because he had mistaken Mendel Sachs, a condensed matter theorist, for Ray Sachs, who originally predicted fluctuations in the CMB. It’s a funny story, and although the authors note that Vilenkin was blacklisted for refusing to be an informant for the KGB, they omit the central context that he was Jewish, one of many Jews banished from academic life by Soviet authorities who escaped the stifling anti-Semitism of the Soviet Union for scientific freedom in the West. This history resonates today in light of efforts by some scientists to boycott Israeli institutes and even blacklist Israeli colleagues. Unlike the minutiae of CMB physics, this matters, and Battle of the Big Bang should have been more careful to tell the whole story.

Exceptional flare tests blazar emission models

Active galactic nuclei (AGNs) are extremely energetic regions at the centres of galaxies, powered by accretion onto a supermassive black hole. Some AGNs launch plasma outflows moving near light speed. Blazars are a subclass of AGNs whose jets are pointed almost directly at Earth, making them appear exceptionally bright across the electro­magnetic spectrum. A new analysis of an exceptional flare of BL Lacertae by NASA’s Imaging X-ray Polarimetry Explorer (IXPE) has now shed light on their emission mechanisms.

The spectral energy distribution of blazars generally has two broad peaks. The low-energy peak from radio to X-rays is well explained by synchrotron radiation from relativistic electrons spiralling in magnetic fields, but the origin of the higher-energy peak from X-rays to γ-rays is a longstanding point of contention, with two classes of models, dubbed hadronic and leptonic, vying to explain it. Polarisation measurements offer a key diagnostic tool, as the two models predict distinct polarisation signatures.
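As context for why polarisation discriminates between the models (a standard synchrotron-theory result, not stated in the article): electrons with a power-law energy spectrum of index p radiating in a perfectly uniform magnetic field yield a maximum linear polarisation of

$$\Pi_{\max} = \frac{p+1}{p+7/3} \approx 70{-}75\% \quad \text{for } p \approx 2{-}3,$$

and any field disorder or averaging over distinct emission regions only lowers this value.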

Model signatures

In hadronic models, high-energy emission is produced by protons, either through synchrotron radiation or via photo-hadronic interactions that generate secondary particles. Hadronic models predict that X-ray polarisation should be as high as that in the optical and millimetre bands, even in complex jet structures.

Leptonic models are powered by inverse Compton scattering, wherein relativistic electrons “upscatter” low-energy photons, boosting them to higher energies with low polarisation. Leptonic models can be further subdivided by the source of the inverse-Compton-scattered photons. If initially generated by synchrotron radiation in the AGN (synchrotron self-Compton, SSC), modest polarisation (~50%) is expected due to the inherent polarisation of synchrotron photons, with further reductions if the emission comes from inhomogeneous or multiple emitting regions. If initially generated by external sources (external Compton, EC), isotropic photon fields from the surrounding structures are expected to average out their polarisation.

IXPE launched on 9 December 2021, seeking to resolve such questions. It is designed to have 100-fold better sensitivity to the polarisation of X-rays in astrophysical sources than the last major X-ray polarimeter, which was launched half a century ago (CERN Courier July/August 2022 p10). In November 2023, it participated in a coordinated multiwavelength campaign spanning the radio, millimetre, optical and X-ray bands that targeted the blazar BL Lacertae, whose X-ray emission arises mostly from the high-energy component, with its low-energy synchrotron component mainly at infrared energies. The campaign captured an exceptional flare, providing a rare opportunity to test competing emission models.

Optical telescopes recorded a peak optical polarisation of 47.5 ± 0.4%, the highest ever measured in a blazar. The short-mm (1.3 mm) polarisation also rose to about 10%, with both bands showing similar trends in polarisation angle. IXPE measured no significant polarisation in the 2 to 8 keV X-ray band, placing a 3σ upper limit of 7.4%.

The striking contrast between the high polarisation in optical and mm bands, and a strict upper limit in X-rays, effectively rules out all single-zone and multi-region hadronic models. Had these processes dominated, the X-ray polarisation would have been comparable to the optical. Instead, the observations strongly support a leptonic origin, specifically the SSC model with a stratified or multi-zone jet structure that naturally explains the low X-ray polarisation.


A key feature of the flare was the rapid rise and fall of optical polarisation. Initially, it was low, of order 5%, and aligned with the jet direction, suggesting the dominance of poloidal or turbulent fields. A sharp increase to nearly 50%, while retaining alignment, indicates the sudden injection of a compact, toroidally dominated magnetic structure.

The authors of the analysis propose a “magnetic spring” model wherein a tightly wound toroidal field structure is injected into the jet, temporarily ordering the magnetic field and raising the optical polarisation. As the structure travels outward, it relaxes, likely through kink instabilities, causing the polarisation to decline over about two weeks. This resembles an elastic system, briefly stretched and then returning to equilibrium.

A magnetic spring would also explain the multiwavelength flaring. The injection boosted the total magnetic field strength, triggering an unprecedented mm-band flare powered by low-energy electrons with long cooling times. The modest rise in mm-wavelength polarisation suggests emission from a large, turbulent region. Meanwhile, optical flaring was suppressed due to the rapid synchrotron cooling of high-energy electrons, consistent with the observed softening of the optical spectrum. No significant γ-ray enhancement was observed, as these photons originate from the same rapidly cooling electron population.

Turning point

These findings mark a turning point in high-energy astrophysics. The data definitively favour leptonic emission mechanisms in BL Lacertae during this flare, ruling out efficient proton acceleration and thus any associated high-energy neutrino or cosmic-ray production. The ability of the jet to sustain nearly 50% polarisation across parsec scales implies a highly ordered, possibly helical magnetic field extending far from the supermassive black hole.

The results cement polarimetry as a definitive tool in identifying the origin of blazar emission. The dedicated Compton Spectrometer and Imager (COSI) γ-ray polarimeter, due to be launched by NASA in 2027, is set to complement IXPE at even higher energies. Coordinated campaigns will be crucial for probing jet composition and plasma processes in AGNs, helping us understand the most extreme environments in the universe.

Advances in very-high-energy astrophysics

Advances in Very High Energy Astrophysics: The Science Program of the Third Generation IACTs for Exploring Cosmic Gamma Rays

Imaging atmospheric Cherenkov telescopes (IACTs) are designed to detect very-high-energy gamma rays, enabling the study of a range of both galactic and extragalactic gamma-ray sources. By capturing Cherenkov light from gamma-ray-induced air showers, IACTs help trace the origins of cosmic rays and probe fundamental physics, including questions surrounding dark matter and Lorentz invariance. Since the first gamma-ray source detection by the Whipple telescope in 1989, the field has rapidly advanced through instruments like HESS, MAGIC and VERITAS. Building on these successes, the Cherenkov Telescope Array Observatory (CTAO) represents the next generation of IACTs, with greatly improved sensitivity and energy coverage. The northern CTAO site on La Palma is already collecting data, and major infrastructure development is now underway at the southern site in Chile, where telescope construction is set to begin soon.

With CTAO telescope construction about to begin, Advances in Very High Energy Astrophysics, edited by Reshmi Mukherjee of Barnard College and Roberta Zanin of the University of Barcelona, is very timely. World-leading experts tackle the almost impossible task of summarising the progress made by the third-generation IACTs: HESS, MAGIC and VERITAS.

The range of topics covered is vast, spanning the last 20 years of progress in IACT instrumentation, data-analysis techniques, all aspects of high-energy astrophysics, cosmic-ray astrophysics and gamma-ray cosmology. The authors are necessarily selective, so the depth in each area is limited, but I believe that the essential concepts are properly introduced and the most important highlights captured. The primary focus of the book lies in discussions surrounding gamma-ray astronomy and high-energy physics, cosmic rays and ongoing research into dark matter.

It appears, however, that the individual chapters were all written independently of each other by different authors, leading to some duplications. Source classes and high-energy radiation mechanisms are introduced multiple times, sometimes with different terminology and notation in the different chapters, which could lead to confusion for novices in the field. But though internal coordination could have been improved, a positive aspect of this independence is that each chapter is self-contained and can be read on its own. I recommend the book to emerging researchers looking for a broad overview of this rapidly evolving field.

Discovering the neutrino sky

Feature: Lu Lu looks forward to the next two decades of neutrino astrophysics, exploring the remarkable detector concepts needed to probe ultra-high energies from 1 EeV to 1 ZeV.

Lake Baikal, the Mediterranean Sea and the deep, clean ice at the South Pole: trackers. The atmosphere: a calorimeter. Mountains and even the Moon: targets. These will be the tools of the neutrino astrophysicist in the next two decades. Potentially observable energies dwarf those of the particle physicist doing repeatable experiments, rising up to 1 ZeV (10²¹ eV) for some detector concepts.

The natural accelerators of the neutrino astrophysicist are also humbling. Consider, for instance, the extraordinary relativistic jets emerging from the supermassive black hole in Messier 87 – an accelerator that stretches for about 5000 light years, or roughly 315 million times the distance from the Earth to the Sun.
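The quoted ratio is a straightforward unit conversion (a quick check, not in the original):

$$\frac{5000\ \text{ly} \times 9.46\times10^{15}\ \text{m/ly}}{1.50\times10^{11}\ \text{m/au}} \approx 3.2\times10^{8}\ \text{au},$$

i.e. roughly 315 million Earth–Sun distances.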

Alongside gravitational waves, high-energy neutrinos have opened up a new chapter in astronomy. They point to the most extreme events in the cosmos. They can escape from regions where high-energy photons are attenuated by gas and dust, such as NGC 1068, the first steady neutrino emitter to be discovered (see “The neutrino sky” figure). Their energies can rise orders of magnitude above 1 PeV (10¹⁵ eV), where the universe becomes opaque to photons due to pair production with the cosmic microwave background. Unlike charged cosmic rays, they are not deflected by magnetic fields, preserving their original direction.


High-energy neutrinos therefore offer a unique window into some of the most profound questions in modern physics. Are there new particles beyond the Standard Model at the highest energies? What acceleration mechanisms allow nature to propel them to such extraordinary energies? And is dark matter implicated in these extreme events? With the observation of a 220 (+570/−110) PeV neutrino confounding the limits set by prior observatories and opening up the era of ultra-high-energy neutrino astronomy (CERN Courier March/April 2025 p7), the time is ripe for a new generation of neutrino detectors on an even grander scale (see “Thinking big” table).

A cubic-kilometre ice cube

Detecting high-energy neutrinos is a serious challenge. Though the neutrino–nucleon cross section increases a little less than linearly with neutrino energy, the flux of cosmic neutrinos drops as the inverse square of the energy or faster, reducing the event rate by nearly an order of magnitude per decade. A cubic-kilometre-scale detector is required to measure cosmic neutrinos beyond 100 TeV, and the Earth starts to become opaque as energies rise beyond a PeV or so, when the odds of a neutrino being absorbed as it passes through the planet are roughly even, depending on the direction of the event.
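To see roughly where the order-of-magnitude-per-decade loss comes from, take illustrative spectral indices (the exact values depend on the source population, and are an assumption here): with a neutrino flux Φ ∝ E⁻²·⁵ and a neutrino–nucleon cross section growing as σ ∝ E⁰·³⁶, the event rate per decade of energy scales as

$$\frac{dN}{d\log E} \propto E\,\Phi(E)\,\sigma(E) \propto E^{\,1-2.5+0.36} \approx E^{-1.1},$$

close to a factor of ten fewer events for each decade in energy.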

Thinking big

The journey of cosmic neutrino detection began off the coast of the Hawaiian Islands in the 1980s, led by John Learned of the University of Hawaii at Mānoa. The DUMAND (Deep Underwater Muon And Neutrino Detector) project sought to use both an array of optical sensors to measure Cherenkov light and acoustic detectors to measure the pressure waves generated by energetic particle cascades in water. It was ultimately cancelled in 1995 due to engineering difficulties related to deep-sea installation, data transmission over long underwater distances and sensor reliability under high pressure.

The next generation of cubic-kilometre-scale neutrino detectors built on DUMAND’s experience. The IceCube Neutrino Observatory has pioneered neutrino astronomy at the South Pole since 2011, probing energies from 10 GeV to 100 PeV, and is now being joined by experiments under construction such as KM3NeT in the Mediterranean Sea, which observed the 220 PeV candidate, and Baikal–GVD in Lake Baikal, the deepest lake on Earth. All three experiments watch for the deep inelastic scattering of high-energy neutrinos, using optical sensors to detect Cherenkov photons emitted by secondary particles.

Exascale from above

A decade of data-taking from IceCube has been fruitful. The Milky Way has been observed in neutrinos for the first time. A neutrino candidate event has been observed that is consistent with the Glashow resonance – the resonant production in the ice of a real W boson by a 6.3 PeV electron–antineutrino – confirming a longstanding prediction from 1960. Neutrino emission has been observed from supermassive black holes in NGC 1068 and TXS 0506+056. A diffuse neutrino flux has been discovered beyond 10 TeV. Neutrino mixing parameters have been measured. And flavour ratios have been constrained: due to the averaging of neutrino oscillations over cosmological distances, significant deviations from a 1:1:1 ratio of electron, muon and tau neutrinos could imply new physics such as the violation of Lorentz invariance, non-standard neutrino interactions or neutrino decay.
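The 6.3 PeV figure quoted for the Glashow resonance follows directly from kinematics: a W boson is produced on resonance when the centre-of-mass energy of an electron–antineutrino striking an atomic electron at rest equals the W mass,

$$E_\nu = \frac{m_W^2}{2m_e} = \frac{(80.4\ \text{GeV})^2}{2 \times 0.511\ \text{MeV}} \approx 6.3\ \text{PeV}.$$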

The sensitivity and global coverage of water-Cherenkov neutrino observatories is set to increase still further. The Pacific Ocean Neutrino Experiment (P-ONE) aims to establish a cubic-kilometre-scale deep-sea neutrino telescope off the coast of Canada; IceCube will expand the volume of its optical array by a factor eight; and the TRIDENT and HUNT experiments, currently being prototyped in the South China Sea, may offer the largest detector volumes of all. These detectors will improve sky coverage, enhance angular resolution, and increase statistical precision in the study of neutrino sources from 1 TeV to 10 PeV and above.

Breaking into the exascale calls for new thinking.

Into the exascale

Optical Cherenkov detectors have been exceptionally successful in establishing neutrino astronomy; however, the attenuation of optical photons in water and ice limits the horizontal spacing of photodetectors to a few hundred metres at most, constraining the scalability of the technology. To achieve sensitivity to ultra-high energies measured in EeV (10¹⁸ eV), an instrumented area of order 100 km² would be required. Constructing an optical-based detector on such a scale is impractical.

Earth skimming

One solution is to exchange the tracking volume of IceCube and its siblings with a larger detector that uses the atmosphere as a calorimeter: the deposited energy is sampled on the Earth’s surface.

The Pierre Auger Observatory in Argentina epitomises this approach. If IceCube is presently the world’s largest detector by volume, the Pierre Auger Observatory is the world’s largest detector by area. Over an area of 3000 km², 1660 water Cherenkov detectors and 24 fluorescence telescopes sample the particle showers generated when cosmic rays with energies beyond 10 EeV strike the atmosphere, producing billions of secondary particles. Among the showers it detects are surely events caused by ultra-high-energy neutrinos, but how might they be identified?

Out on a limb

One of the most promising approaches is to filter events based on where the air shower reaches its maximum development in the atmosphere. Cosmic rays tend to interact after traversing much less atmosphere than neutrinos, since the weakly interacting neutrinos have a much smaller cross-section than the hadronically interacting cosmic rays. In some cases, tau neutrinos can even skim the Earth’s atmospheric edge or “limb” as seen from space, interacting to produce a strongly boosted tau lepton that emerges from the rock (unlike an electron) to produce an upward-going air shower when it decays tens of kilometres later – though not so much later (unlike a muon) that it has escaped the atmosphere entirely. This signature is not possible for charged cosmic rays. So far, Auger has detected no neutrino candidate events of either topology, imposing stringent upper limits on the ultra-high-energy neutrino flux that are compatible with limits set by IceCube. The AugerPrime upgrade, soon expected to be fully operational, will equip each surface detector with scintillator panels and improved electronics.

Pole position

Experiments in space are being developed to detect these rare showers with an even larger instrumentation volume. POEMMA (Probe of Extreme Multi-Messenger Astrophysics) is a proposed satellite mission designed to monitor the Earth’s atmosphere from orbit. Two satellites equipped with fluorescence and Cherenkov detectors will search for ultraviolet photons produced by extensive air showers (see “Exascale from above” figure). EUSO-SPB2 (Extreme Universe Space Observatory on a Super Pressure Balloon 2) will test the same detection methods from the vantage point of high-atmosphere balloons. These instruments can help distinguish cosmic rays from neutrinos by identifying shallow showers and up-going events.

Another way to detect ultra-high-energy neutrinos is by using mountains and valleys as natural neutrino targets. This Earth-skimming technique also primarily relies on tau neutrinos, as the tau leptons produced via deep inelastic scattering in the rock can emerge from Earth’s crust and decay within the atmosphere to generate detectable particle showers in the air.

The Giant Radio Array for Neutrino Detection (GRAND) aims to detect radio signals from these tau-induced air showers using a large array of radio antennas spread over thousands of square kilometres (see “Earth skimming” figure). GRAND is planned to be deployed in multiple remote, mountainous locations, with the first site in western China, followed by others in South America and Africa. The Tau Air-Shower Mountain-Based Observatory (TAMBO) has been proposed to be deployed on the face of the Colca Canyon in the Peruvian Andes, where an array of scintillators will detect the electromagnetic signals from tau-induced air showers.

Another proposed strategy that builds upon the Earth-skimming principle is the Trinity experiment, which employs an array of Cherenkov telescopes to observe nearby mountains. Ground-based air Cherenkov detectors are known for their excellent angular resolution, allowing for precise pointing to trace back to the origin of the high-energy primary particles. Trinity is a proposed system of 18 wide-field Cherenkov telescopes optimised for detecting neutrinos in the 10 PeV–1000 PeV energy range from the direction of nearby mountains – an approach validated by experiments such as Ashra–NTA, deployed on Hawaii’s Big Island utilising the natural topography of the Mauna Loa, Mauna Kea and Hualālai volcanoes.

Diffuse neutrino landscape

All these ultra-high-energy experiments detect particle showers as they develop in the atmosphere, whether from above, below or skimming the surface. But “Askaryan” detectors operate deep within the ice of the Earth’s poles, where both the neutrino interaction and detection occur.

In 1962 the Soviet physicist Gurgen Askaryan reasoned that electromagnetic showers must build up a net negative charge excess as they develop, due to the Compton scattering of photons off atomic electrons and the ionisation of atoms by charged particles in the shower. As the charged shower propagates faster than the phase velocity of light in the medium, it should emit radiation in a manner analogous to Cherenkov light. However, there are key differences: Cherenkov radiation is typically incoherent and emitted by individual charged particles, while Askaryan radiation is coherent, being produced by a macroscopic buildup of charge, and is significantly stronger at radio frequencies. The Askaryan effect was experimentally confirmed at SLAC in 2001.
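The practical consequence of coherence can be stated in one line: at wavelengths longer than the shower dimensions, the field amplitudes of the N excess charges add in phase, so the radiated power scales as

$$P_{\text{coh}} \propto N^2 \qquad \text{rather than} \qquad P_{\text{incoh}} \propto N,$$

an enormous enhancement at radio frequencies for the large charge excess of an ultra-high-energy shower.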

Optimised arrays

Because the attenuation length of radio waves is an order of magnitude longer than for optical photons, it becomes feasible to build much sparser arrays of radio antennas to detect the Askaryan signals than the compact optical arrays used in deep ice Cherenkov detectors. Such detectors are optimised to cover thousands of square kilometres, with typical energy thresholds beyond 100 PeV.

The Radio Neutrino Observatory in Greenland (RNO-G) is a next-generation in-ice radio detector currently under construction on the ~3 km-thick ice sheet above central Greenland, operating at frequencies in the 150–700 MHz range. RNO-G will consist of a sparse array of 35 autonomous radio detector stations, each separated by 1.25 km, making it the first large-scale radio neutrino array in the northern hemisphere.

Moon skimming

In the southern hemisphere, the proposed IceCube-Gen2 will complement the aforementioned eightfold expanded optical array with a radio component covering a remarkable 500 km². The cold Antarctic ice provides an optimal medium for radio detection, with radio attenuation lengths of roughly 2 km facilitating cost-efficient instrumentation of the large volumes needed to measure the low ultra-high-energy neutrino flux. The radio array will combine in-ice omnidirectional antennas 150 m below the surface with high-gain antennas at a depth of 15 m and upward-facing antennas on the surface to veto the cosmic-ray background.

The IceCube-Gen2 radio array will have the sensitivity to probe features of the spectrum of astrophysical neutrinos beyond the PeV scale, addressing the tension between upper limits from Auger and IceCube, and KM3NeT’s 220 (+570/−110) PeV neutrino candidate – the sole ultra-high-energy neutrino yet observed. Extrapolating an isotropic and diffuse flux, IceCube should have detected 75 events in the 72–2600 PeV energy range over its operational period. However, no events have been observed above 70 PeV.


If the detected KM3NeT event has a neutrino energy of around 100 PeV, it could originate from the same astrophysical sources responsible for accelerating ultra-high-energy cosmic rays. In this case, interactions between accelerated protons and ambient photons from starlight or synchrotron radiation would produce pions that decay into ultra-high-energy neutrinos. Alternatively, if its true energy is closer to 1 EeV, it is more likely cosmogenic: arising from the Greisen–Zatsepin–Kuzmin process, in which ultra-high-energy cosmic rays interact with cosmic microwave background photons, producing a Δ-resonance that decays into pions and ultimately neutrinos. IceCube-Gen2 will resolve the spectral shape from PeV to 10 EeV and differentiate between these two possible production mechanisms (see “Diffuse neutrino landscape” figure).
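The energy scale separating the two interpretations can be estimated from GZK kinematics (a rough estimate, not from the article): photoproduction of the Δ(1232) on a CMB photon of typical energy ε ≈ 6 × 10⁻⁴ eV requires a proton energy of order

$$E_p \gtrsim \frac{m_\Delta^2 - m_p^2}{4\varepsilon} = \frac{(1232^2 - 938^2)\ \text{MeV}^2}{4 \times 6\times10^{-4}\ \text{eV}} \sim 3\times10^{20}\ \text{eV},$$

with each cosmogenic neutrino carrying only a few per cent of the parent proton’s energy – naturally landing in the EeV range.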

Moonshots

Remarkably, the Radar Echo Telescope (RET) is exploring the use of radar to actively probe the ice for transient signals. Unlike Askaryan-based detectors, which passively listen for radio pulses generated by charge imbalances in particle cascades, RET’s concept is to beam a radar signal and watch for reflections off the ionisation caused by particle showers. SLAC’s T576 experiment demonstrated the concept in the lab in 2022 by observing a radar echo from a beam of high-energy electrons scattering off a plastic target. RET has now been deployed in Greenland, where it seeks echoes from down-going cosmic rays as a proof of concept.

Full-sky coverage

Perhaps the most ambitious way to observe ultra-high-energy neutrinos foresees using the Moon as a target. When neutrinos with energies above 100 EeV interact near the rim of the Moon, they can induce particle cascades that generate coherent Askaryan radio emission which could be detectable on Earth (see “Moon skimming” figure). Observations could be conducted from Earth-based radio telescopes or from satellites orbiting the Moon to improve detection sensitivity. Lunar Askaryan detectors could potentially be sensitive to neutrinos up to 1 ZeV (10²¹ eV). No confirmed detections have been reported so far.

Neutrino network

Proposed neutrino observatories are distributed across the globe – a necessary requirement for full sky coverage, given the Earth is not transparent to ultra-high-energy neutrinos (see “Full-sky coverage” figure). A network of neutrino telescopes ensures that transient astrophysical events can always be observed as the Earth rotates. This is particularly important for time-domain multi-messenger astronomy, enabling coordinated observations with gravitational wave detectors and electromagnetic counterparts. The ability to track neutrino signals in real time will be key to identifying the most extreme cosmic accelerators and probing fundamental physics at ultra-high energies.

DESI hints at evolving dark energy

The dynamics of the universe depend on a delicate balance between gravitational attraction from matter and the repulsive effect of dark energy. A universe containing only matter would eventually slow down its expansion due to gravitational forces and possibly recollapse. However, observations of Type Ia supernovae in the late 1990s revealed that our universe’s expansion is in fact accelerating, requiring the introduction of dark energy. The standard cosmological model, called the Lambda Cold Dark Matter (ΛCDM) model, provides an elegant and robust explanation of cosmological observations by including normal matter, cold dark matter (CDM) and dark energy. It is the foundation of our current understanding of the universe.

Cosmological constant

In ΛCDM, Λ refers to the cosmological constant – a parameter introduced by Albert Einstein to counter the effect of gravity in his pursuit of a static universe. With the knowledge that the universe is accelerating, Λ is now used to quantify this acceleration. An important parameter that describes dark energy, and therefore influences the evolution of the universe, is its equation-of-state parameter, w. This value relates the pressure dark energy exerts on the universe, p, to its energy density, ρ, via p = wρ. Within ΛCDM, w is –1 and ρ is constant – a combination that has to date explained observations well. However, new results by the Dark Energy Spectroscopic Instrument (DESI) put these assumptions under increasing stress.
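The connection between w and the evolution of the dark-energy density follows from the continuity equation in an expanding universe with scale factor a: for constant w,

$$\dot{\rho} + 3H(1+w)\rho = 0 \;\;\Rightarrow\;\; \rho \propto a^{-3(1+w)},$$

so w = –1 gives a constant density (the cosmological constant), while w < –1 would make the density grow as the universe expands.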

These new results are part of the second data release (DR2) from DESI. Mounted on the Nicholas U Mayall 4-metre telescope at Kitt Peak National Observatory in Arizona, DESI is optimised to measure the spectra of a large number of objects in the sky simultaneously. Joint observations are possible thanks to 5000 optical fibres controlled by robots that continuously optimise the focal plane of the detector. Combined with a highly efficient processing pipeline, this allows DESI to compile a catalogue of object distances based on their velocity-induced shifts in wavelength, or redshifts. For its first data release, DESI used 6 million such redshifts, allowing it to show that w was several sigma away from its expected value of –1 (CERN Courier May/June 2024 p11). For DR2, 14 million measurements are used, enough to provide strong hints of w changing with time.

The first studies of the expansion rate of the universe were based on redshift measurements of local objects, such as supernovae. As the objects are relatively close, they provide data on the acceleration at small redshifts. An alternative method is to use the cosmic microwave background (CMB), which allows for measurements of the evolution of the early universe through complex imprints left on the current distribution of the CMB. The significantly smaller expansion rate measured through the CMB compared to local measurements resulted in a “Hubble tension”, prompting novel measurements to resolve or explain the observed difference (CERN Courier March/April 2025 p28). One such attempt comes from DESI, which aims to provide a detailed 3D map of the universe focusing on the distance between galaxies to measure the expansion (see “3D map” figure).

Tension with ΛCDM

The 3D map produced by DESI can be used to study the evolution of the universe as it holds imprints from small fluctuations in the density of the early universe. These density fluctuations have been studied through their imprint on the CMB; however, they also left imprints in the distribution of baryonic matter up until the epoch of recombination. The variations in baryonic density grew over time into the varying densities of galaxies and other large-scale structures that are observed today.

The regions originally containing higher baryon densities are now those with larger densities of galaxies. Exactly how the matter-density fluctuations evolved into variations in galaxy densities throughout the universe depends on a range of parameters from the ΛCDM model, including w. The detailed map of the universe produced by DESI, which contains a range of objects with redshifts up to 2.5, can therefore be fitted against the ΛCDM model.

Among other studies, the latest data from DESI were combined with CMB observations and fitted to the ΛCDM model. This worked relatively well, although it requires a lower matter-density parameter than found from CMB data alone. However, the resulting cosmological parameters provide a poor match to the supernova measurements. Similarly, fitting the ΛCDM model using the supernova data results in poor agreement with both the DESI and CMB data, thereby putting some strain on the ΛCDM model. Things don’t get significantly better when adding some freedom in these analyses by allowing w to differ from –1.


An adaptation of the ΛCDM model that results in agreement with all three datasets requires w to evolve with redshift, or time. The implications for the acceleration of the universe based on these results are shown in the “Tension with ΛCDM” figure, which shows the deceleration parameter q of the expansion of the universe as a function of redshift; q < 0 implies an accelerating universe. In the ΛCDM model, acceleration increases with time, as redshift approaches 0. DESI data suggest that the acceleration of the universe started earlier, but is currently weaker than that predicted by ΛCDM.
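For reference, since the article uses q without defining it: the deceleration parameter is

$$q \equiv -\frac{\ddot{a}\,a}{\dot{a}^2} = \frac{1}{2}\sum_i \Omega_i(z)\,[1 + 3w_i],$$

where the second equality holds for a flat universe; acceleration (q < 0) sets in once a component with w < –1/3 dominates.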

Although this model matches the data well, a theoretical explanation is difficult. In particular, the data implies that w(z) was below –1, which translates into an energy density that increases with the expansion; however, the energy density seems to have peaked at a redshift of 0.45 and is now decreasing.
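A minimal numerical sketch of this behaviour (not the DESI analysis itself) is shown below, assuming a flat universe and the common Chevallier–Polarski–Linder (CPL) parametrisation w(z) = w0 + wa·z/(1+z); the values w0 = –0.7 and wa = –1.0 are illustrative stand-ins for the reported trend, not DESI’s published fits.

```python
import numpy as np

OMEGA_M = 0.31  # present-day matter fraction; flat universe assumed

def w_cpl(z, w0, wa):
    """CPL equation of state: w(z) = w0 + wa * z / (1 + z)."""
    return w0 + wa * z / (1.0 + z)

def rho_de(z, w0, wa):
    """Dark-energy density relative to today, from the continuity equation.

    For CPL the integral exp(3 * int_0^z (1+w)/(1+z') dz') has the closed form
    (1+z)^(3(1+w0+wa)) * exp(-3 wa z / (1+z)).
    """
    return (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * z / (1.0 + z))

def q(z, w0, wa):
    """Deceleration parameter q(z) = 0.5 * sum_i Omega_i(z) * (1 + 3 w_i)
    for matter (w = 0) plus dark energy with w(z), in a flat universe."""
    m = OMEGA_M * (1.0 + z) ** 3
    de = (1.0 - OMEGA_M) * rho_de(z, w0, wa)
    return 0.5 * (m + (1.0 + 3.0 * w_cpl(z, w0, wa)) * de) / (m + de)

print("  z    q_LCDM    q_CPL")
for zi in np.linspace(0.0, 2.5, 6):
    # LambdaCDM is the special case w0 = -1, wa = 0
    print(f"{zi:4.1f}  {q(zi, -1.0, 0.0):8.3f}  {q(zi, -0.7, -1.0):8.3f}")
```

Running it shows q(z) crossing zero earlier than in ΛCDM while remaining less negative today, mirroring the behaviour described above.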

Overall, the new data release provides significant evidence of a deviation from the ΛCDM model. The exact significance depends on the specific analysis and which data sets are combined, however, all such studies provide similar results. As no 5σ discrepancy is found yet, there is no reason to discard ΛCDM, though this could change with another two years of DESI data coming up, along with data from the European Euclid mission, Vera C Rubin Observatory, and the Nancy Grace Roman Space Telescope. Each will provide new insights into the expansion for various redshift periods.

Gravitational remnants in the sky

Astrophysical gravitational waves have revolutionised astronomy; the eventual detection of cosmological gravitons promises to open an otherwise inaccessible window into the universe’s earliest moments. Such a discovery would offer profound insights into the hidden corners of the early universe and physics beyond the Standard Model. Relic Gravitons, by Massimo Giovannini of INFN Milan Bicocca, offers a timely and authoritative guide to the most exciting frontiers in modern cosmology and particle physics.

Giovannini is an esteemed scholar and household name in the fields of theoretical cosmology and early-universe physics. He has written influential research papers, reviews and books on cosmology, providing detailed discussions on several aspects of the early universe. He also authored 2008’s A Primer on the Physics of the Cosmic Microwave Background – a book most cosmologists are very familiar with.

In Relic Gravitons, Giovannini provides a comprehensive exploration of recent developments in the field, striking a remarkable balance between clarity, physical intuition and rigorous mathematical formalism. As such, it serves as an excellent reference – equally valuable for both junior researchers and seasoned experts seeking depth and insight into theoretical cosmology and particle physics.

Relic Gravitons opens with an overview of cosmological gravitons, offering a broad perspective on gravitational waves across different scales and cosmological epochs, while drawing parallels with the electromagnetic spectrum. This graceful introduction sets the stage for a well-contextualised and structured discussion.

Gravitational rainbow

Relic gravitational waves from the early universe span 30 orders of magnitude, from attohertz to gigahertz. Their wavelengths are constrained from above by the Hubble radius, setting a lower frequency bound of 10⁻¹⁸ Hz. At the lowest frequencies, measurements of the cosmic microwave background (CMB) provide the most sensitive probe of gravitational waves. In the nanohertz range, pulsar timing arrays serve as powerful astrophysical detectors. At intermediate frequencies, laser and atomic interferometers are actively probing the spectrum. At higher frequencies, only wide-band interferometers such as LIGO and Virgo currently operate, primarily within the audio band spanning from a few hertz to several kilohertz.
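The lower bound quoted is just the present Hubble rate expressed as a frequency: a graviton whose wavelength equals the Hubble radius R_H = c/H₀ has

$$f_{\min} \simeq \frac{c}{R_H} = H_0 \approx 2\times10^{-18}\ \text{Hz},$$

using H₀ ≈ 67 km/s/Mpc ≈ 2.2 × 10⁻¹⁸ s⁻¹.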

Relic Gravitons

The theoretical foundation begins with a clear and accessible introduction to tensor modes in flat spacetime, followed by spherical harmonics and polarisations. With these basics in place, tensor modes in curved spacetime are also explored, before progressing to effective action, the quantum mechanics of relic gravitons and effective energy density. This structured progression builds a solid framework for phenomenological applications.

The second part of the book covers the signals of the concordance paradigm, including discussions of Sakharov oscillations and of short, intermediate and long wavelengths, before the technical interludes of the next section. Here, Giovannini emphasises that, because the evolution of the comoving Hubble radius is uncertain, the spectral energy density and other observables require approximate methods. The chapter expands to include conventional results using the Wentzel–Kramers–Brillouin approach, which is particularly useful when early-universe dynamics deviate from standard inflation.

Phenomenological implications are discussed in the final section, starting with the analysis of the lowest-frequency domain. Giovannini then examines the intermediate and high-frequency ranges. The concordance paradigm suggests that large-scale inhomogeneities originate from quantum mechanics, where travelling waves transform into standing waves. The penultimate chapter addresses the hot topic of the “quantumness” of relic gravitons, before diving into the conclusion. The book finishes with five appendices covering a variety of useful topics, from notation to background material on general relativity and cosmic perturbations.

Relic Gravitons is a must-read for anyone intrigued by the gravitational-wave background, and an invaluable resource for those interested in its unique potential to explore the unknown corners of particle physics and cosmology.

Particle Cosmology and Astrophysics


In 1989, Rocky Kolb and Mike Turner published The Early Universe – a seminal book that offered a comprehensive introduction to the then-nascent field of particle cosmology, laying the groundwork for a generation of physicists to explore the connections between the smallest and largest scales of the universe. Since then, the interfaces between particle physics, astrophysics and cosmology have expanded enormously, fuelled by an avalanche of new data from ground-based and space-borne observatories.

In Particle Cosmology and Astrophysics, Dan Hooper follows in their footsteps, providing a much-needed update that captures the rapid developments of the past three decades. Hooper, now a professor at the University of Wisconsin–Madison, addresses the growing need for a text that introduces the fundamental concepts and synthesises the vast array of recent discoveries that have shaped our current understanding of the universe.

Hooper’s textbook opens with 75 pages of “preliminaries”, covering general relativity, cosmology, the Standard Model of particle physics, thermodynamics and high-energy processes in astrophysics. Each of these disciplines is typically introduced in a full semester of dedicated study, supported by comprehensive texts. For example, students seeking a deeper understanding of high-energy phenomena are likely to benefit from consulting Longair’s High Energy Astrophysics or Sigl’s Astroparticle Physics. Similarly, those wishing to advance their knowledge in particle physics will find that more detailed treatments are available in Griffiths’ Introduction to Elementary Particles or Peskin and Schroeder’s An Introduction to Quantum Field Theory, to mention just a few textbooks recommended by the author.


By distilling these complex subjects into just enough foundational content, Hooper makes the field accessible to those who have been exposed to only a fraction of the standard coursework. His approach provides an essential stepping stone, enabling students to embark on research in particle cosmology and astrophysics with a well calibrated introduction while still encouraging further study through more specialised texts.

Part II, “Cosmology”, follows a similarly pragmatic approach, providing an updated treatment that parallels Kolb and Turner while incorporating a range of topics that have, in the intervening years, become central to modern cosmology. The text now covers areas such as cosmic microwave background (CMB) anisotropies, the evidence for dark matter and its potential particle candidates, the inflationary paradigm, and the evidence and possible nature of dark energy.

Hooper doesn’t shy away from complex subjects, even when they resist simple expositions. The discussion on CMB anisotropies serves as a case in point: anyone who has attempted to condense this complex topic into a few graduate lectures is aware of the challenge in maintaining both depth and clarity. Instead of attempting an exhaustive technical introduction, Hooper offers a qualitative description of the evolution of density perturbations and how one extracts cosmological parameters from CMB observations. This approach, while not substituting for the comprehensive analysis found in texts such as Dodelson’s Modern Cosmology or Baumann’s Cosmology, provides students with a valuable overview that successfully charts the broad landscape of modern cosmology and illustrates the interconnectedness of its many subdisciplines.

Part III, “Particle Astrophysics”, contains a selection of topics that largely reflect the scientific interests of the author, a renowned expert in the field of dark matter. Some colleagues might raise an eyebrow at the book devoting 10 pages each to entire fields such as cosmic rays, gamma rays and neutrino astrophysics, and 50 pages to dark-matter candidates and searches. Others might argue that a book titled Particle Cosmology and Astrophysics is incomplete without detailing the experimental techniques behind the extraordinary advances witnessed in these fields and without at least a short introduction to the booming field of gravitational-wave astronomy. But the truth is that, in the author’s own words, particle cosmology and astrophysics have become “exceptionally multidisciplinary,” and it is impossible in a single textbook to do complete justice to domains that intersect nearly all branches of physics and astronomy. I would also contend that it is not only acceptable but indeed welcome for authors to align the content of their work with their own scientific interests, as this contributes to the diversity of textbooks and offers more choice to lecturers who wish to supplement a standard curriculum with innovative, interdisciplinary perspectives.

Ultimately, I recommend the book as a welcome addition to the literature and an excellent introductory textbook for graduate students and junior scientists entering the field.

The Hubble tension https://cerncourier.com/a/the-hubble-tension/ Wed, 26 Mar 2025 15:22:42 +0000 https://cerncourier.com/?p=112638 Vivian Poulin asks if the tension between a direct measurement of the Hubble constant and constraints from the early universe could be resolved by new physics.

Just like particle physics, cosmology has its own standard model. It is equally powerful in prediction, and brings its own mysteries and profound implications. The first of these was the realisation, in 1917, that a homogeneous and isotropic universe must be expanding. This led Einstein to modify his general theory of relativity by introducing a cosmological constant (Λ) to counteract gravity and achieve a static universe – an act he reportedly labelled his greatest blunder once Edwin Hubble provided observational proof of the universe’s expansion in 1929. Sixty-nine years later, Saul Perlmutter, Adam Riess and Brian Schmidt went further. Their observations of Type Ia supernovae (SN Ia) showed that the universe’s expansion was accelerating. Λ was revived as “dark energy”, now estimated to account for 68% of the total energy density of the universe.

On large scales the dominant motion of galaxies is the Hubble flow, the expansion of the fabric of space itself

The second dominant component of the model emerged not from theory but from 50 years of astrophysical sleuthing. From the “missing mass problem” in the Coma galaxy cluster in the 1930s to anomalous galaxy-rotation curves in the 1970s, evidence built up that additional gravitational heft was needed to explain the formation of the large-scale structure of galaxies that we observe today. The 1980s therefore saw the proposal of cold dark matter (CDM), now estimated to account for 27% of the energy density of the universe, and actively sought by diverse experiments across the globe and in space.

Dark energy and CDM supplement the remaining 5% of normal matter to form the ΛCDM model. ΛCDM is a remarkable six-parameter framework that models 13.8 billion years of cosmic evolution from quantum fluctuations during an initial phase of “inflation” – a hypothesised expansion of the universe by 26 to 30 orders of magnitude in roughly 10⁻³⁶ seconds at the beginning of time. ΛCDM successfully models cosmic microwave background (CMB) anisotropies, the large-scale structure of the universe, and the redshifts and distances of SN Ia. It achieves this despite big open questions: the nature of dark matter, the nature of dark energy and the mechanism for inflation.

The Hubble tension

Cosmologists are eager to guide beyond-ΛCDM model-building efforts by testing its end-to-end predictions, and the model now seems to be failing the most important: predicting the expansion rate of the universe.

One of the main predictions of ΛCDM is the average energy density of the universe today. This determines its current expansion rate, otherwise known as the Hubble constant (H0). The most precise ΛCDM prediction comes from a fit to CMB data from ESA’s Planck satellite (operational 2009 to 2013), which yields H0 = 67.4 ± 0.5 km/s/Mpc. This can be tested against direct measurements in our local universe, revealing a surprising discrepancy (see “The Hubble tension” figure).

At sufficiently large distances, the dominant motion of galaxies is the Hubble flow – the expansion of the fabric of space itself. Directly measuring the expansion rate of the universe calls for fitting the increase in the recession velocity of galaxies deep within the Hubble flow as a function of distance. The gradient is H0.
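In sketch form, the measurement reduces to a straight-line fit: velocity against distance, with H0 as the gradient. The snippet below is a minimal illustration with hypothetical galaxies – the distances and redshifts are invented for the example, not data from any survey:

```python
# Minimal sketch: estimating H0 as the slope of recession velocity vs distance.
# The galaxies below are hypothetical; real analyses fit calibrated SN Ia
# distances against spectroscopic redshifts deep in the Hubble flow.
import numpy as np

c = 299_792.458  # speed of light in km/s

distances_mpc = np.array([120.0, 250.0, 410.0, 560.0, 730.0])   # hypothetical
redshifts = np.array([0.0272, 0.0570, 0.0935, 0.1280, 0.1672])  # hypothetical

velocities = c * redshifts  # low-z approximation: v ≈ cz

# Least-squares fit of v = H0 * d; the gradient is the Hubble constant.
H0, _ = np.polyfit(distances_mpc, velocities, 1)
print(f"H0 ≈ {H0:.1f} km/s/Mpc")
```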

Receding supernovae

While high-precision spectroscopy allows recession velocity to be precisely measured using the redshifts (z) of atomic spectra, it is more difficult to measure the distance to astrophysical objects. Geometrical methods such as parallax are imprecise at large distances, but “standard candles” with somewhat predictable luminosities such as cepheids and SN Ia allow distance to be inferred using the inverse-square law. Cepheids are pulsating post-main-sequence stars whose radius and observed luminosity oscillate over a period of one to 100 days, driven by the ionisation and recombination of helium in their outer layers, which increases opacity and traps heat; their period increases with their true luminosity. Before going supernova, SN Ia were white dwarf stars in binary systems; when the white dwarf accretes enough mass from its companion star, runaway carbon fusion produces a nearly standardised peak luminosity for a period of one to two weeks. Only SN Ia are bright enough to be observed deep within the Hubble flow, as required for precise measurements of H0. When cepheids are observable in the same galaxies, they can be used to calibrate the supernovae.
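In sketch form, the inverse-square law enters through the distance modulus, as below; the absolute magnitude is an illustrative round number for an SN Ia peak, not the SH0ES calibration:

```python
# Sketch of the standard-candle principle: with a calibrated absolute
# magnitude M, the apparent magnitude m gives the distance via the
# inverse-square law in magnitude form (the "distance modulus").
import math

def luminosity_distance_pc(m_apparent: float, M_absolute: float) -> float:
    """Invert m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((m_apparent - M_absolute + 5) / 5)

M_SNIA = -19.3   # assumed peak absolute magnitude, illustrative only
m_obs = 19.2     # hypothetical observed peak apparent magnitude

d_pc = luminosity_distance_pc(m_obs, M_SNIA)
print(f"distance ≈ {d_pc / 1e6:.0f} Mpc")  # ≈ 500 Mpc, deep in the Hubble flow
```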

Distance ladder

At present, the main driver of the Hubble tension is a 2022 measurement of H0 by the SH0ES (Supernova H0 for the Equation of State) team led by Adam Riess. As the SN Ia luminosity is not known from first principles, SH0ES built a “distance ladder” to calibrate the luminosity of 42 SN Ia within 37 host galaxies. The SN Ia are calibrated against intermediate-distance cepheids, and the cepheids are calibrated against four nearby “geometric anchors” whose distance is known through a geometric method (see “Distance ladder” figure). The geometric anchors are: Milky Way parallaxes from ESA’s Gaia mission; detached eclipsing binaries in the Large and Small Magellanic Clouds (LMC and SMC); and the “megamaser” galaxy host NGC4258, where water molecules in the accretion disk of a supermassive black hole emit Doppler-shifting microwave maser photons.

The great strength of the SH0ES programme is its use of NASA and ESA’s Hubble Space Telescope (HST, 1990–) at all three rungs of the distance ladder, bypassing the need for cross-calibration between instruments. SN Ia can be calibrated out to 40 Mpc. As a result, in 2022 SH0ES used measurements of 300 or so high-z SN Ia deep within the Hubble flow to measure H0 = 73.04 ± 1.04 km/s/Mpc. This is in more than 5σ tension with Planck’s ΛCDM prediction of 67.4 ± 0.5 km/s/Mpc.

Baryon acoustic oscillation

The sound horizon

The value of H0 obtained from fitting Planck CMB data has been shown to be robust in two key ways.

First, Planck data can be bypassed by combining CMB data from NASA’s WMAP probe (2001–2010) with observations by ground-based telescopes. WMAP in combination with the Atacama Cosmology Telescope (ACT, 2007–2022) yields H0 = 67.6 ± 1.1 km/s/Mpc. WMAP in combination with the South Pole Telescope (SPT, 2007–) yields H0 = 68.2 ± 1.1 km/s/Mpc. Second, and more intriguingly, CMB data can be bypassed altogether.

In the early universe, Compton scattering between photons and electrons was so prevalent that the universe behaved as a plasma. Quantum fluctuations from the era of inflation propagated like sound waves until the era of recombination, when the universe had cooled sufficiently for protons and electrons to combine into neutral atoms, allowing CMB photons to escape the plasma. This propagation of inflationary perturbations left a characteristic scale known as the sound horizon in both the acoustic peaks of the CMB and in “baryon acoustic oscillations” (BAOs) seen in the large-scale structure of galaxy surveys (see “Baryon acoustic oscillation” figure). The sound horizon is the distance travelled by sound waves in the primordial plasma.

While the SH0ES measurement relies on standard candles, ΛCDM predictions rely instead on using the sound horizon as a “standard ruler” against which to compare the apparent size of BAOs at different redshifts, and thereby deduce the expansion rate of the universe. Under ΛCDM, the only two free parameters entering the computation of the sound horizon are the baryon density and the dark-matter density. Planck evaluates both by studying the CMB, but they can be obtained independently of the CMB by combining BAO measurements of the dark-matter density with Big Bang nucleosynthesis (BBN) measurements of the baryon density (see “Sound horizon” figure). The latest measurement by the Dark Energy Spectroscopic Instrument in Arizona (DESI, 2021–) yields H0 = 68.53 ± 0.80 km/s/Mpc, in 3.4σ tension with SH0ES and fully independent of Planck.
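The standard-ruler logic can be made concrete numerically. The sketch below evaluates the sound-horizon integral under ΛCDM; the density parameters and drag-epoch redshift are illustrative Planck-like inputs rather than fit results:

```python
# Sketch: the sound horizon r_s as a standard ruler. Under LCDM it depends
# only on the baryon and matter densities; the omega values (= Omega h^2)
# and drag-epoch redshift below are illustrative Planck-like inputs.
import numpy as np
from scipy.integrate import quad

c = 299_792.458     # speed of light, km/s
omega_b = 0.0224    # baryon density (assumed)
omega_m = 0.143     # total matter density (assumed)
omega_g = 2.47e-5   # photon density (assumed)
omega_r = 4.15e-5   # photons + massless neutrinos (assumed)
z_drag = 1060       # end of the baryon drag epoch (assumed)

def hubble(z):
    """Expansion rate in km/s/Mpc; Lambda is negligible at these redshifts."""
    return 100 * np.sqrt(omega_r * (1 + z) ** 4 + omega_m * (1 + z) ** 3)

def sound_speed(z):
    """Sound speed of the photon-baryon plasma."""
    R = 3 * omega_b / (4 * omega_g * (1 + z))
    return c / np.sqrt(3 * (1 + R))

# r_s = integral of c_s(z)/H(z) from the drag epoch to infinity.
r_s, _ = quad(lambda z: sound_speed(z) / hubble(z), z_drag, np.inf)
print(f"r_s ≈ {r_s:.0f} Mpc")   # roughly 147 Mpc with these inputs
```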

Sound horizon

The next few years will be crucial for understanding the Hubble tension, and may decide the fate of the ΛCDM model. ACT, SPT and the Simons Observatory in Chile (2024–) will release new CMB data. DESI, the Euclid space telescope (2023–) and the forthcoming LSST wide-field optical survey in Chile will release new galaxy surveys. “Standard siren” measurements from gravitational waves with electromagnetic counterparts may also contribute to the debate, although the original excitement has dampened with a lack of new events after GW170817. More accurate measurements of the ages of the oldest objects may also provide an important new test. If H0 increases, the age of the universe decreases: the SH0ES measurement favours an age below 13.1 billion years at 2σ significance.
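The age argument follows directly from the Friedmann equation. A rough sketch for flat ΛCDM, with an assumed matter density Ω_m = 0.31, brackets the tension:

```python
# Sketch: the age of a flat LCDM universe, t0 = integral of dz/[(1+z) H(z)].
# Omega_m = 0.31 is an assumed, Planck-like value.
import numpy as np
from scipy.integrate import quad

KM_PER_MPC = 3.0857e19   # kilometres per megaparsec
S_PER_GYR = 3.156e16     # seconds per gigayear

def age_gyr(h0_km_s_mpc, omega_m=0.31):
    h0 = h0_km_s_mpc / KM_PER_MPC   # convert km/s/Mpc to 1/s
    integrand = lambda z: 1.0 / ((1 + z) * np.sqrt(
        omega_m * (1 + z) ** 3 + 1 - omega_m))
    integral, _ = quad(integrand, 0, np.inf)
    return integral / h0 / S_PER_GYR

print(f"H0 = 67.4  -> t0 ≈ {age_gyr(67.4):.1f} Gyr")    # ≈ 13.8 Gyr
print(f"H0 = 73.04 -> t0 ≈ {age_gyr(73.04):.1f} Gyr")   # ≈ 12.7 Gyr
```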

The SH0ES measurement is also being checked directly. A key approach is to test the three-step calibration by seeking alternative intermediate standard candles besides cepheids. One candidate is the peak-luminosity “tip” of the red giant branch (TRGB) caused by the sudden start of helium fusion in low-mass stars. The TRGB is bright enough to be seen in distant galaxies that host SN Ia, though at distances smaller than that of cepheids.

Settling the debate

In 2019 the Carnegie–Chicago Hubble Program (CCHP) led by Wendy Freedman and Barry Madore calibrated SN Ia using the TRGB within the LMC and NGC4258 to determine H0 = 69.8 ± 0.8 (stat) ± 1.7 (syst) km/s/Mpc. An independent reanalysis including authors from the SH0ES collaboration later reported H0 = 71.5 ± 1.8 (stat + syst) km/s/Mpc. The difference in the results suggests that updated measurements with the James Webb Space Telescope (JWST) may settle the debate.

James Webb Space Telescope

Launched into space on 25 December 2021, JWST is perfectly adapted to improve measurements of the expansion rate of the universe thanks to its improved capabilities in the near-infrared band, where the impact of dust is reduced (see “Improved resolution” figure). Its four-times-better spatial resolution has already been used to re-observe a subsample of the 37 host galaxies home to the 42 SN Ia studied by SH0ES, as well as the geometric anchor NGC4258.

So far, all observations suggest good agreement with the previous observations by HST. SH0ES used JWST observations to obtain up to a factor of 2.5 reduction in the dispersion of the period–luminosity relation for cepheids, with no indication of a bias in the HST measurements. Most importantly, they were able to exclude, at 8σ significance, the confusion of cepheids with other stars as the cause of the Hubble tension.

Meanwhile, the CCHP team provided new measurements based on three distance indicators: cepheids, the TRGB and a new “population-based” method using the J-region of the asymptotic giant branch (JAGB) of carbon-rich stars, for which the magnitude of the mode of the luminosity function can serve as a distance indicator (see the last three rows of “The Hubble tension” figure).

Galaxies used to measure the Hubble constant

The new CCHP results suggest that cepheids may show a bias compared to JAGB and TRGB, though this conclusion was rapidly challenged by SH0ES, who identified a missing source of uncertainty and argued that the size of the sample of SN Ia within hosts with primary distance indicators is too small to provide competitive constraints: they claim that sample variations of order 2.5 km/s/Mpc could explain why the JAGB and TRGB yield a lower value. Agreement may be reached when JWST has observed a larger sample of galaxies – across both teams, 19 of the 37 hosts calibrated by SH0ES have been remeasured so far, plus the geometric anchor NGC 4258 (see “The usual suspects” figure).

At this stage, no single systematic error seems likely to fully explain the Hubble tension, and the problem is more severe than it appears. When calibrated, SN Ia and BAOs constrain not only H0, but the entire redshift range out to z ~ 1. This imposes strong constraints on any new physics introduced in the late universe. For example, recent DESI results suggest that the dynamics of dark energy at late times may not be exactly that of a cosmological constant, but the behaviour needed to reconcile Planck and SH0ES is strongly excluded.

Comparison of JWST and HST views

Rather than focusing on the value of the expansion rate, most proposals now focus on altering the calibration of either SN Ia or BAOs. For example, an unknown systematic error could alter the luminosity of SN Ia in our local vicinity, but we have no indication that their magnitude changes with redshift, and this solution appears to be very constrained.

The most promising solution appears to be that some new physics may have altered the value of the sound horizon in the early universe. As the sound horizon is used to calibrate both the CMB and BAOs, reducing it by 10 Mpc could match the value of H0 favoured by SH0ES (see “Sound horizon” figure). This can be achieved either by increasing the redshift of recombination or the energy density in the pre-recombination universe, giving the sound waves less time to propagate.

The best motivated models invoke additional relativistic species in the early universe such as a sterile neutrino or a new type of “dark radiation”. Another intriguing possibility is that dark energy played a role in the pre-recombination universe, boosting the expansion rate at just the right time. The wide variety and high precision of the data make it hard to find a simple mechanism that is not strongly constrained or finely tuned, but existing models have some of the right features. Future data will be decisive in testing them.

Boost for compact fast radio bursts https://cerncourier.com/a/boost-for-compact-fast-radio-bursts/ Wed, 26 Mar 2025 14:21:58 +0000 https://cerncourier.com/?p=112596 New results from the CHIME telescope support the hypothesis that fast radio bursts originate in close proximity to the turbulent magnetosphere of a central engine.

Fast radio bursts (FRBs) are short but powerful bursts of radio waves that are believed to be emitted by dense astrophysical objects such as neutron stars or black holes. They were discovered by Duncan Lorimer and his student David Narkevic in 2007 while studying archival data from the Parkes radio telescope in Australia. Since then, more than a thousand FRBs have been detected, located both within and beyond the Milky Way. These bursts usually last only a few milliseconds but can release enormous amounts of energy – an FRB detected in 2022 gave off more energy in a millisecond than the Sun does in 30 years. The exact mechanism underlying their creation, however, remains a mystery.
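That comparison is easy to check at the order-of-magnitude level, as in the sketch below; the burst luminosity is a hypothetical round number chosen to match a millisecond event of that energy, not a measured value:

```python
# Order-of-magnitude check of the comparison in the text: energy radiated
# by the Sun in 30 years versus a millisecond-duration radio burst.
L_SUN = 3.828e26                     # solar luminosity, watts
E_sun_30yr = L_SUN * 30 * 3.156e7    # joules emitted by the Sun in 30 years
print(f"Sun, 30 years: {E_sun_30yr:.1e} J")   # ≈ 3.6e35 J

L_frb = 3.6e38                       # assumed isotropic burst luminosity, W
E_frb = L_frb * 1e-3                 # energy released in one millisecond
print(f"FRB, 1 ms:     {E_frb:.1e} J")        # comparable to the Sun's 30-yr output
```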

Inhomogeneities caused by the presence of gas and dust in the interstellar medium scatter the radio waves coming from an FRB. This creates a stochastic interference pattern on the signal, called scintillation – a phenomenon akin to the twinkling of stars. In a recent study, astronomer Kenzie Nimmo and her colleagues used scintillation data from FRB 20221022A to constrain the size of its emission region. FRB 20221022A is a 2.5 millisecond burst from a galaxy about 200 million light-years away. It was detected on 22 October 2022 by the Canadian Hydrogen Intensity Mapping Experiment Fast Radio Burst project (CHIME/FRB).

The CHIME telescope is currently the world’s leading FRB detector, discovering an average of three new FRBs every day. It consists of four stationary 20 m-wide and 100 m-long semi-cylindrical paraboloidal reflectors with a focal length of 5 m (see “Right on CHIME” figure). The 256 dual-polarisation feeds suspended along the axis of each reflector give it a field of view of more than 200 square degrees. With a wide bandwidth, high sensitivity and a high-performance correlator to pinpoint where in the sky signals are coming from, CHIME is an excellent instrument for the detection of FRBs. The antenna receives radio waves in the frequency range of 400 to 800 MHz.

Two main classes of models compete to explain the emission mechanisms of FRBs. Near-field models hypothesise that emission occurs in close proximity to the turbulent magnetosphere of a central engine, while far-away models hypothesise that emission occurs in relativistic shocks that propagate out to large radial distances. Nimmo and her team measured two distinct scintillation scales in the frequency spectrum of FRB 20221022A: one originating from its host galaxy or local environment, and another from a scattering site within the Milky Way. By using these scattering sites as astrophysical lenses, they were able to constrain the size of the FRB’s emission region to better than 30,000 km. This emission size contradicted expectations from far-away models. It is more consistent with an emission process occurring within or just beyond the magnetosphere of a central compact object – the first clear evidence for the near-field class of models.
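In data-analysis terms, the scintillation scales are frequency decorrelation bandwidths read off the autocorrelation of the burst spectrum. The toy sketch below illustrates the method on synthetic data; the channel width, ripple scale and half-power criterion are illustrative choices, not those of the published analysis:

```python
# Sketch: scintillation appears as stochastic ripples in a burst's spectrum.
# A characteristic frequency scale can be estimated from the half-width of
# the spectrum's autocorrelation function (ACF). Synthetic data for clarity;
# the real analysis models two distinct scales and instrumental effects.
import numpy as np

rng = np.random.default_rng(1)
n_chan, df = 4096, 0.0244            # channels and channel width in MHz

# Toy scintillated spectrum: white noise smoothed over a ~1 MHz scale.
kernel = int(1.0 / df)               # 1 MHz expressed in channels
noise = rng.normal(size=n_chan + kernel - 1)
ripple = np.convolve(noise, np.ones(kernel) / kernel, mode="valid")
spectrum = (1 + ripple - ripple.mean()) ** 2

# ACF of the mean-subtracted spectrum, normalised to the zero-lag value.
x = spectrum - spectrum.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]
acf /= acf[0]

half = np.argmax(acf < 0.5)          # first lag where the ACF drops below 1/2
print(f"decorrelation bandwidth ≈ {half * df:.2f} MHz")
```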

Additionally, FRB 20221022A’s detection paper notes a striking change in the burst’s polarisation angle – an “S-shaped” swing covering about 130° – over a mere 2.5 milliseconds. The authors interpret this as the emission beam physically sweeping across our line of sight, much like a lighthouse beam passing by an observer, and conclude that it hints at a magnetospheric origin of the emission, as highly magnetised regions can twist or shape how radio waves are emitted. The scintillation studies by Nimmo et al. independently support this conclusion, narrowing the possible sources and mechanisms that power FRBs. Moreover, they highlight the potential of the scintillation technique to explore the emission mechanisms of FRBs and to understand their environments.

The field of FRB physics looks set to grow by leaps and bounds. CHIME can already identify host galaxies for FRBs, but an “outrigger” programme using similar detectors geographically displaced from the main telescope at the Dominion Radio Astrophysical Observatory near Penticton, British Columbia, aims to strengthen its localisation capabilities to a precision of tens of milliarcseconds. CHIME recently finished deploying its third outrigger telescope in northern California.

CERN and ESA: a decade of innovation https://cerncourier.com/a/cern-and-esa-a-decade-of-innovation/ Mon, 27 Jan 2025 07:59:01 +0000 https://cerncourier.com/?p=112108 Enrico Chesta, Véronique Ferlet-Cavrois and Markus Brugger highlight seven ways CERN and ESA are working together to further fundamental exploration and innovation in space technologies.

Sky maps

Particle accelerators and spacecraft both operate in harsh radiation environments, extreme temperatures and high vacuum. Each must process large amounts of data quickly and autonomously. Much can be gained from cooperation between scientists and engineers in each field.

Ten years ago, the European Space Agency (ESA) and CERN signed a bilateral cooperation agreement to share expertise and facilities. The goal was to expand the limits of human knowledge and keep Europe at the leading edge of progress, innovation and growth. A decade on, CERN and ESA have collaborated on projects ranging from cosmology and planetary exploration to Earth observation and human spaceflight, supporting new space-tech ventures and developing electronic systems, radiation-monitoring instruments and irradiation facilities.

1. Mapping the universe

The Euclid space telescope is exploring the dark universe by mapping the large-scale structure of billions of galaxies out to 10 billion light-years across more than a third of the sky. With tens of petabytes expected in its final data set – already a substantial reduction of the 850 billion bits of compressed images Euclid processes each day – it will generate more data than any other ESA mission by far.

With many CERN cosmologists involved in testing theories of beyond-the-Standard-Model physics, Euclid first became a CERN-recognised experiment in 2015. CERN also contributes to the development of Euclid’s “science ground segment” (SGS), which processes raw data received from the Euclid spacecraft into usable scientific products such as galaxy catalogues and dark-matter maps. CERN’s virtual-machine file system (CernVM-FS) has been integrated into the SGS to allow continuous software deployment across Euclid’s nine data centres and on developers’ laptops.

The telescope was launched in July 2023 and began observations in February 2024. The first piece of its great map of the universe was released in October 2024, showing millions of stars and galaxies from observations and covering 132 square degrees of the southern sky (see “Sky map” figure). Based on just two weeks of observations, it accounts for just 1% of the project’s six-year survey, which will be the largest cosmic map ever made.

Future CERN–ESA collaborations on cosmology, astrophysics and multimessenger astronomy are likely to include the Laser Interferometer Space Antenna (LISA) and the NewAthena X-ray observatory. LISA will be the first space-based observatory to study gravitational waves. NewAthena will study the most energetic phenomena in the universe. Both projects are expected to be ready to launch about 10 years from now.

2. Planetary exploration

Though planetary exploration is conceptually far from fundamental physics, its technical demands require similar expertise. A good example is the Jupiter Icy Moons Explorer (JUICE) mission, which will make detailed observations of the gas giant and its three large ocean-bearing moons Ganymede, Callisto and Europa.

Jupiter’s magnetic field is a million times greater in volume than Earth’s magnetosphere, trapping large fluxes of highly energetic electrons and protons. Before JUICE, the direct and indirect impact of high-energy electrons on modern electronic devices, and in particular their ability to cause “single event effects”, had never been studied. Two test campaigns took place in the VESPER facility, which is part of the CERN Linear Electron Accelerator for Research (CLEAR) project. Components were tested with tuneable beam energies between 60 and 200 MeV, and average fluxes of roughly 10⁸ electrons per square centimetre per second, mirroring expected radiation levels in the Jovian system.

JUICE radiation-monitor measurements

JUICE was successfully launched in April 2023, starting an epic eight-year journey to Jupiter including several flyby manoeuvres that will be used to commission the onboard instruments (see “Flyby” figure). JUICE should reach Jupiter in July 2031. It remains to be seen whether test results obtained at CERN have successfully de-risked the mission.

Another interesting example of cooperation on planetary exploration is the Mars Sample Return mission, which must operate in low temperatures during eclipse phases. CERN supported the main industrial partner, Thales Alenia Space, in qualifying the orbiter’s thermal-protection systems in cryogenic conditions.

3. Earth observation

Earth observation from orbit has applications ranging from environmental monitoring to weather forecasting. CERN and ESA collaborate both on developing the advanced technologies required by these applications and ensuring they can operate in the harsh radiation environment of space.

In 2017 and 2018, ESA teams came to CERN’s North Area with several partner companies to test the performance of radiation monitors, field-programmable gate arrays (FPGAs) and electronics chips in ultra-high-energy ion beams at the Super Proton Synchrotron. The tests mimicked the ultra-high-energy part of the galactic cosmic-ray spectrum, whose effects had never previously been measured on the ground beyond 10 GeV/nucleon. In 2017, ESA’s standard radiation-environment monitor and several FPGAs and multiprocessor chips were tested with xenon ions. In 2018, the highlight of the campaign was the testing of Intel’s Myriad-2 artificial intelligence (AI) chip with lead ions (see “Space AI” figure). Following its radiation characterisation and qualification, in 2020 the chip embarked on the φ-sat-1 mission to autonomously detect clouds using images from a hyperspectral camera.

Myriad 2 chip testing

More recently, CERN joined Edge SpAIce – an EU project that will monitor ecosystems and track plastic pollution in the oceans from onboard the Balkan-1 satellite. The project will use CERN’s high-level synthesis for machine learning (hls4ml) AI technology to run inference models on an FPGA that will be launched in 2025.

Looking further ahead, ESA’s φ-lab and CERN’s Quantum Technology Initiative are sponsoring two PhD programmes to study the potential of quantum machine learning, generative models and time-series processing to advance Earth observation. Applications may accelerate the task of extracting features from images to monitor natural disasters, deforestation and the impact of environmental effects on the lifecycle of crops.

4. Dosimetry for human spaceflight

In space, nothing is more important than astronauts’ safety and wellbeing. To this end, in August 2021 ESA astronaut Thomas Pesquet activated the LUMINA experiment inside the International Space Station (ISS), as part of the ALPHA mission (see “Space dosimetry” figure). Developed under the coordination of the French Space Agency and the Laboratoire Hubert Curien at the Université Jean-Monnet-Saint-Étienne and iXblue, LUMINA uses two several-kilometre-long phosphorus-doped optical fibres as active dosimeters to measure ionising radiation aboard the ISS.

ESA astronaut Thomas Pesquet

When exposed to radiation, optical fibres experience a partial loss of transmitted power. Using a reference control channel, the radiation-induced attenuation can be accurately measured and related to the total ionising dose, with the sensitivity of the device primarily governed by the length of the fibre. Having studied optical-fibre-based technologies for many years, CERN helped optimise the architecture of the dosimeters and performed irradiation tests to calibrate the instrument, which will operate on the ISS for a period of up to five years.
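In sketch form, the dosimetry works by inverting a calibration curve. The power-law form and coefficients below are generic assumptions for illustration, not the LUMINA calibration itself:

```python
# Sketch: converting radiation-induced attenuation (RIA) in a doped fibre to
# an estimate of total ionising dose. The power-law calibration is a generic
# assumption; the real coefficients come from irradiation campaigns such as
# those performed at CERN.
def dose_from_attenuation(ria_db_per_km: float, a: float = 2.0, b: float = 0.9) -> float:
    """Invert an assumed calibration RIA = a * dose**b (dose in gray)."""
    return (ria_db_per_km / a) ** (1.0 / b)

measured_ria = 5.0   # hypothetical attenuation excess, dB/km
print(f"estimated dose ≈ {dose_from_attenuation(measured_ria):.1f} Gy")
```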

LUMINA complements dosimetry measurements performed on the ISS using CERN’s Timepix technology – an offshoot of the hybrid-pixel-detector technology developed for the LHC experiments (CERN Courier September/October 2024 p37). Timepix dosimeters have been integrated in multiple NASA payloads since 2012.

5. Radiation-hardness assurance

It’s no mean feat to ensure that CERN’s accelerator infrastructure functions in increasingly challenging radiation environments. Similar challenges are found in space. Damage can be caused by accumulating ionising doses, single-event effects (SEEs) or so-called displacement damage dose, which dislodges atoms within a material’s crystal lattice rather than ionising them. Radiation-hardness assurance (RHA) reduces radiation-induced failures in space through environment simulations, part selection and testing, radiation-tolerant design, worst-case analysis and shielding definition.

Since its creation in 2008, CERN’s Radiation to Electronics project has amplified the work of many equipment and service groups in modelling, mitigating and testing the effect of radiation on electronics. A decade later, joint test campaigns with ESA demonstrated the value of CERN’s facilities and expertise to RHA for spaceflight. This led to the signing of a joint protocol on radiation environments, technologies and facilities in 2019, which also included radiation detectors and radiation-tolerant systems, and components and simulation tools.

CHARM facility

Among CERN’s facilities is CHARM: the CERN high-energy-accelerator mixed-field facility, which offers an innovative approach to low-cost RHA. CHARM’s radiation field is generated by the interaction between a 24 GeV/c beam from the Proton Synchrotron and a metallic target. CHARM offers a uniquely wide spectrum of radiation types and energies, the possibility to adjust the environment using mobile shielding, and enough space to test a medium-sized satellite in full operating conditions.

Radiation testing is particularly challenging for the new generation of rapidly developed and often privately funded “new space” projects, which frequently make use of commercial and off-the-shelf (COTS) components. Here, RHA relies on testing and mitigation rather than radiation hardening by design. For “flip chip” configurations, which have their active circuitry facing inward toward the substrate, and dense three-dimensional structures that cannot be directly exposed without compromising their performance, heavy-ion beams accelerated to between 10 and 100 MeV/nucleon are the only way to induce SEE in the sensitive semiconductor volumes of the devices.

To enable testing of highly integrated electronic components, ESA supported studies to develop the CHARM Heavy Ions for Micro-Electronics Reliability Assurance facility – CHIMERA for short (see “CHIMERA” figure). ESA has sponsored key feasibility activities such as: tuning the ion flux in a large dynamic range; tuning the beam size for board-level testing; and reducing beam energy to maximise the frequency of SEE while maintaining a penetration depth of a few millimetres in silicon.

6. In-orbit demonstrators

Weighing 1 kg and measuring just 10 cm on each side – a nanosatellite standard – the CELESTA satellite was designed to study the effects of cosmic radiation on electronics (see “CubeSat” figure). Initiated in partnership with the University of Montpellier and ESA, and launched in July 2022, CELESTA was CERN’s first in-orbit technology demonstrator.

Radiation-testing model of the CELESTA satellite

As well as providing the first opportunity for CHARM to test a full satellite, CELESTA offered the opportunity to flight-qualify SpaceRadMon, which counts single-event upsets (SEUs) and single-event latchups (SELs) in static random-access memory while using a field-effect transistor for dose monitoring. (SEUs are temporary errors caused by a high-energy particle flipping a bit, and SELs are short circuits induced by high-energy particles.) More than 30 students contributed to the mission development, partly in the framework of ESA’s Fly Your Satellite Programme. Built from COTS components calibrated in CHARM, SpaceRadMon has since been adopted by other ESA missions such as Trisat and GENA-OT, and could be used in the future as a low-cost predictive maintenance tool to reduce space debris and improve space sustainability.
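As a sketch of what such ground calibration provides, the key quantity is a per-bit upset cross section, which converts an expected in-orbit flux into a predicted upset rate. All numbers below are hypothetical:

```python
# Sketch: the figure of merit from a SpaceRadMon-style test campaign is the
# per-bit single-event-upset cross section, sigma = upsets / (fluence * bits).
upsets = 120            # SEUs counted during a test run (hypothetical)
fluence = 2.0e11        # particles/cm^2 delivered (hypothetical)
n_bits = 16 * 2**20     # a 16-Mbit SRAM under test (hypothetical)

sigma_bit = upsets / (fluence * n_bits)           # cm^2 per bit
print(f"sigma ≈ {sigma_bit:.2e} cm²/bit")

# In-orbit prediction: multiply by the expected flux for the orbit.
orbit_flux = 1.0e2      # particles/cm^2/s (hypothetical)
rate_per_day = sigma_bit * orbit_flux * n_bits * 86400
print(f"expected ≈ {rate_per_day:.3f} upsets/day")
```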

The maiden flight of the Vega-C launcher placed CELESTA on an atypical quasi-circular medium-Earth orbit in the middle of the inner Van Allen proton belt at roughly 6000 km. Two months of flight data sufficed to validate the performance of the payload and the ground-testing procedure in CHARM, though CELESTA will fly for thousands of years in a region of space where debris is not a problem due to the harsh radiation environment.

The CELESTA approach has since been adopted by industrial partners to develop radiation-tolerant cameras, radios and on-board computers.

7. Stimulating the space economy

Space technology is a fast-growing industry replete with opportunities for public–private cooperation. The global space economy will be worth $1.8 trillion by 2035, according to the World Economic Forum – up from $630 billion in 2023 and growing at double the projected rate for global GDP.

Whether spun off from space exploration or particle physics, ESA and CERN look to support start-up companies and high-tech ventures in bringing to market technologies with positive societal and economic impacts (see “Spin offs” figure). The use of CERN’s Timepix technology in space missions is a prime example. Private company Advacam collaborated with the Czech Technical University to provide a Timepix-based radiation-monitoring payload called SATRAM to ESA’s Proba-V mission to map land cover and vegetation growth across the entire planet every two days.

The Hannover Messe fair

Advacam is now testing a pixel-detector instrument on JoeySat – an ESA-sponsored technology demonstrator for OneWeb’s next-generation constellation of satellites designed to expand global connectivity. Advacam is also working with ESA on radiation monitors for Space Rider and NASA’s Lunar Gateway. Space Rider is a reusable spacecraft whose maiden voyage is scheduled for the coming years, and Lunar Gateway is a planned space station in lunar orbit that could act as a staging post for Mars exploration.

Another promising example is SigmaLabs – a Polish startup founded by CERN alumni specialising in radiation detectors and predictive-maintenance R&D for space applications. SigmaLabs was recently selected by ESA and the Polish Space Agency to provide one of the experiments expected to fly on Axiom Mission 4 – a private spaceflight to the ISS in 2025 that will include Polish astronaut and CERN engineer Sławosz Uznański (CERN Courier May/June 2024 p55). The experiment will assess the scalability and versatility of the SpaceRadMon radiation-monitoring technology initially developed at CERN for the LHC and flight tested on the CELESTA CubeSat.

In radiation-hardness assurance, the CHIMERA facility is associated with the High-Energy Accelerators for Radiation Testing and Shielding (HEARTS) programme sponsored by the European Commission. Its 2024 pilot user run is already stimulating private innovation, with high-energy heavy ions used to perform business-critical research on electronic components for a dozen aerospace companies.

Chinese space station gears up for astrophysics https://cerncourier.com/a/chinese-space-station-gears-up-for-astrophysics/ Mon, 27 Jan 2025 07:16:33 +0000 https://cerncourier.com/?p=112214 China’s Tiangong space station represents one of the biggest projects in space exploration in recent decades.

Completed in 2022, China’s Tiangong space station represents one of the biggest projects in space exploration in recent decades. Like the International Space Station, its ability to provide large amounts of power, support heavy payloads and access powerful communication and computing facilities gives it many advantages over typical satellite platforms. As such, both Chinese and international collaborations have been developing a number of science missions ranging from optical astronomy to the detection of cosmic rays with PeV energies.

For optical astronomy, the space station will be accompanied by the Xuntian telescope, which can be translated to “survey the heavens”. Xuntian is currently planned to be launched in mid-2025 to fly alongside Tiangong, thereby allowing for regular maintenance. Although its spatial resolution will be similar to that of the Hubble Space Telescope, Xuntian’s field of view will be about 300 times larger, allowing the observation of many objects at the same time. In addition to producing impressive images similar to those sent by Hubble, the instrument will be important for cosmological studies where large statistics for astronomical objects are typically required to study their evolution.

Another instrument that will observe large portions of the sky is LyRIC (Lyman UV Radiation from Interstellar medium and Circum-galactic medium). After being placed on the space station in the coming years, LyRIC will probe the poorly studied far-ultraviolet regime that contains emission lines from neutral hydrogen and other elements. While difficult to measure, this allows studies of baryonic matter in the universe, which can be used to answer important questions such as why only about half of the total baryons in the standard “ΛCDM” cosmological model can be accounted for.

At slightly higher energies, the Diffuse X-ray Explorer (DIXE) aims to use a novel type of X-ray detector to reach an energy resolution better than 1% in the 0.1 to 10 keV energy range. It achieves this using cryogenic transition-edge sensors (TESs), which exploit the rapid change in resistance that occurs during a superconducting phase transition. In this regime, the resistivity of the material is highly dependent on its temperature, allowing the detection of minuscule temperature increases resulting from X-rays being absorbed by the material. Positioned to scan the sky above the Tiangong space station, DIXE will be able, among other things, to measure the velocity of material that appears to have been emitted by the Milky Way during an active stage of its central black hole. Its high energy resolution will allow Doppler shifts of the order of several eV to be measured, requiring the TES detectors to operate at 50 mK. Achieving such temperatures demands a cooling system of 640 W – a power level that is difficult to achieve on a satellite, but relatively easy to acquire on a space station. As such, DIXE will be one of the first detectors using this new technology when it launches in 2025, leading the way for missions such as the European ATHENA mission that plans to use it starting in 2037.
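The eV-scale requirement can be checked with one line of kinematics: to first order v/c = ΔE/E, so a shift of a few eV on a keV line corresponds to velocities of order 1000 km/s. The line energy and shift below are illustrative values, not DIXE specifications:

```python
# Sketch: non-relativistic Doppler shift of an X-ray line.
c = 299_792.458        # speed of light, km/s
E_line = 1000.0        # rest-frame line energy, eV (hypothetical)
dE = 5.0               # measured energy shift, eV (hypothetical)

v = c * dE / E_line    # first-order Doppler relation v/c = dE/E
print(f"v ≈ {v:.0f} km/s")   # ≈ 1500 km/s
```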

Although not as large or mature as the International Space Station, Tiangong’s capacity to host cutting-edge astrophysics missions is catching up

POLAR-2 was accepted as an international payload on the Chinese space station through the United Nations Office for Outer Space Affairs and has since become a CERN-recognised experiment. The mission started as a Swiss, German, Polish and Chinese collaboration building on the success of POLAR, which flew on the space station’s predecessor Tiangong-2. Like its earlier incarnation, POLAR-2 measures the polarisation of high-energy X-rays or gamma rays to provide insights into, for example, the magnetic fields that produced the emission. As one of the most sensitive gamma-ray detectors in the sky, POLAR-2 can also play an important role in alerting other instruments when a bright gamma-ray transient, such as a gamma-ray burst, appears. The importance of such alerts has resulted in the expansion of POLAR-2 to include an accompanying imaging spectrometer, which will provide detailed spectral and location information on any gamma-ray transient. Also now foreseen for this second payload is an additional wide-field-of-view X-ray polarimeter. The international team developing the three instruments, which are scheduled to be launched in 2027, is led by the Institute of High Energy Physics in Beijing.

For studying the universe using even higher energy emissions, the space station will host the High Energy cosmic-Radiation Detection Facility (HERD). HERD is designed to study both cosmic rays and gamma rays at energies beyond those accessible to instruments like AMS-02, CALET (CERN Courier July/August 2024 p24) and DAMPE. It aims to achieve this, in part, by simply being larger, resulting in a mass that is currently only possible to support on a space station. The HERD calorimeter will be 55 radiation lengths long and consist of several tonnes of scintillating cubic LYSO crystals. The instrument will also use high-precision silicon trackers, which in combination with the deep calorimeter, will provide a better angular resolution and a geometrical acceptance 30 times larger than the present AMS-02 (which is due to be upgraded next year). This will allow HERD to probe the cosmic-ray spectrum up to PeV energies, filling in the energy gap between current space missions and ground-based detectors. HERD started out as an international mission with a large European contribution; however, delays on the European side regarding participation, in combination with a launch requirement of 2027, mean that it is currently foreseen to be a fully Chinese mission.

Although not as large or mature as the International Space Station, Tiangong’s capacity to host cutting-edge astrophysics missions is catching up. As well as providing researchers with a pristine view of the electromagnetic universe, instruments such as HERD will enable vital cross-checks of data from AMS-02 and other unique experiments in space.

A rich harvest of results in Prague https://cerncourier.com/a/a-rich-harvest-of-results-in-prague/ Wed, 20 Nov 2024 13:34:58 +0000 https://cern-courier.web.cern.ch/?p=111420 The 42nd international conference on high-energy physics reported progress across all areas of high-energy physics.

The 42nd international conference on high-energy physics (ICHEP) attracted almost 1400 participants to Prague in July. Expectations were high, with the field on the threshold of a defining moment, and ICHEP did not disappoint. A wealth of new results showed significant progress across all areas of high-energy physics.

With the long shutdown on the horizon, the third run of the LHC is progressing in earnest. Its high-availability operation and mastery of operational risks were highly praised. Run 3 data is of immense importance as it will be the dataset that experiments will work with for the next decade. With the newly collected data at 13.6 TeV, the LHC experiments showed new measurements of Higgs and di-electroweak-boson production, though of course most of the LHC results were based on the Run 2 (2015 to 2018) dataset, which is by now impeccably well calibrated and understood. This also allowed ATLAS and CMS to bring in-depth improvements to reconstruction algorithms.

AI algorithms

A highlight of the conference was the improvements brought by state-of-the-art artificial-intelligence algorithms such as graph neural networks, both at the trigger and reconstruction level. A striking example of this is the ATLAS and CMS flavour-tagging algorithms, which have improved their rejection of light jets by a factor of up to four. This has important consequences. Two outstanding examples are: di-Higgs-boson production, which is fundamental for the measurement of the Higgs boson self-coupling (CERN Courier July/August 2024 p7); and the Higgs boson’s Yukawa coupling to charm quarks. Di-Higgs-boson production should be independently observable by both general-purpose experiments at the HL-LHC, and an observation of the Higgs boson’s coupling to charm quarks is getting closer to being within reach.

The LHC experiments continue to push the limits of precision at hadron colliders. CMS and LHCb presented new measurements of the weak mixing angle. The per-mille precision reached is close to that of LEP and SLD measurements (CERN Courier September/October 2024 p29). ATLAS presented the most precise measurement to date (0.8%) of the strong coupling constant extracted from the measurement of the transverse momentum differential cross section of Drell–Yan Z-boson production. LHCb provided a comprehensive analysis of the B⁰ → K*⁰μ⁺μ⁻ angular distributions, which had previously presented discrepancies at the level of 3σ. Taking into account long-distance contributions significantly weakens the tension down to 2.1σ.

Pioneering the highest luminosities ever reached at colliders (setting a record at 4.7 × 10³⁴ cm⁻² s⁻¹), SuperKEKB has been facing challenging conditions with repeated sudden beam losses. This is currently an obstacle to further progress to higher luminosities. Possible causes have been identified and are currently under investigation. Meanwhile, with the already substantial data set collected so far, the Belle II experiment has produced a host of new results. In addition to improved CKM angle measurements (alongside LHCb), in particular of the γ angle, Belle II (alongside BaBar) presented interesting new insights into the long-standing |Vcb| and |Vub| inclusive-versus-exclusive measurements puzzle (CERN Courier July/August 2024 p30), with new exclusive |Vcb| measurements that significantly reduce the previous 3σ tension.

Maurizio Pierini

ATLAS and CMS furthered their systematic journey in the search for new phenomena to leave no stone unturned at the energy frontier, with 20 new results presented at the conference. This landmark outcome of the LHC puts further pressure on the naturalness paradigm.

A highlight of the conference was the overall progress in neutrino physics. Accelerator-based experiments NOvA and T2K presented a first combined measurement of the mass difference, neutrino mixing and CP parameters. Neutrino telescopes IceCube with DeepCore and KM3NeT with ORCA (Oscillation Research with Cosmics in the Abyss) also presented results with impressive precision. Neutrino physics is now at the dawn of a bright new era of precision with the next-generation accelerator-based long-baseline experiments DUNE and Hyper-Kamiokande, the upgrade of DeepCore, the completion of ORCA and the medium-baseline JUNO experiment. These experiments will bring definitive conclusions on the measurement of the CP phase in the neutrino sector and the neutrino mass hierarchy – two of the outstanding goals in the field.

The KATRIN experiment presented a new upper limit on the effective electron–anti-neutrino mass of 0.45 eV, well en route towards their ultimate sensitivity of 0.2 eV. Neutrinoless double-beta-decay search experiments KamLAND-Zen and LEGEND-200 presented limits on the effective neutrino mass of approximately 100 meV; the sensitivity of the next-generation experiments LEGEND-1T, KamLAND-Zen-1T and nEXO should reach 20 meV and either fully exclude the inverted ordering hypothesis or discover this long-sought process. Progress on the reactor neutrino anomaly was reported, with recent fission data suggesting that the fluxes are overestimated, thus weakening the significance of the anti-neutrino deficits.

Neutrinos were also a highlight for direct-dark-matter experiments as XENON announced the observation of nuclear recoil events from ⁸B solar-neutrino coherent elastic scattering on nuclei, thus signalling that experiments are now reaching the neutrino fog. The conference also highlighted the considerable progress across the board on the roadmap laid out by Kathryn Zurek at the conference to search for dark matter in an extraordinarily large range of possibilities, spanning 89 orders of magnitude in mass from 10⁻²³ eV to 10⁵⁷ GeV. The roadmap includes cosmological and astrophysical observations, broad searches at the energy and intensity frontier, direct searches at low masses to cover relic-abundance-motivated scenarios, building a suite of axion searches, and pursuing indirect-detection experiments.

Lia Merminga and Fabiola Gianotti

Neutrinos also made the headlines in multi-messenger astrophysics with the announcement by the KM3NeT ARCA (Astroparticle Research with Cosmics in the Abyss) collaboration of a muon-neutrino event that could be the most energetic ever found. The muon produced in the neutrino interaction is compatible with an energy of approximately 100 PeV, opening a fascinating window on astrophysical processes at energies well beyond the reach of colliders. The conference showed that we are now well within the era of multi-messenger astrophysics, via beautiful neutrino, gamma-ray and gravitational-wave results.

The conference saw new bridges across fields being built. The birth of collider-neutrino physics, with the beautiful results from FASERν and SND, fills a missing gap in neutrino–nucleon cross sections between accelerator neutrinos and neutrino astronomy. ALICE and LHCb presented new results on ³He production that complement the AMS results. Astrophysical ³He could signal the annihilation of dark matter. ALICE also presented a broad, comprehensive review of the progress in understanding strongly interacting matter at extreme energy densities.

The highlight in the field of observational cosmology was the recent data from DESI, the Dark Energy Spectroscopic Instrument in operation since 2021, which bring splendid new data on baryon acoustic oscillation measurements. These precious new data agree with previous indirect measurements of the Hubble constant, keeping the tension with direct measurements in excess of 2.5σ. In combination with CMB measurements, the DESI measurements also set an upper limit on the sum of neutrino masses at 0.072 eV, in tension with the inverted ordering of neutrino masses hypothesis. This limit is dependent on the cosmological model.

In everyone’s mind at the conference, and indeed across the domain of high-energy physics, it is clear that the field is at a defining moment in its history: we will soon have to decide what new flagship project to build. To this end, the conference organised a thrilling panel discussion featuring the directors of all the major laboratories in the world. “We need to continue to be bold and ambitious and dream big,” said Fermilab’s Lia Merminga, summarising the spirit of the discussion.

“As we have seen at this conference, the field is extremely vibrant and exciting,” said CERN’s Fabiola Gianotti at the conclusion of the panel. In these defining times for the future of our field, ICHEP 2024 was an important success. The progress in all areas is remarkable and manifest through the outstanding number of beautiful new results shown at the conference.

Hypertriton and ‘little bang’ nucleosynthesis https://cerncourier.com/a/hypertriton-and-little-bang-nucleosynthesis/ Wed, 20 Nov 2024 10:55:45 +0000 https://cern-courier.web.cern.ch/?p=111434 The ALICE collaboration investigated the nucleosynthesis mechanism by measuring hypertriton production in heavy-ion collisions.

ALICE figure 1

According to the cosmological standard model, the first generation of nuclei was produced during the cooling of the hot mixture of quarks and gluons that was created shortly following the Big Bang. Relativistic heavy-ion collisions create a quark–gluon plasma (QGP) on a small scale, producing a “little bang”. In such collisions, the nucleosynthesis mechanism at play differs from that of the Big Bang owing to the rapid cool-down of the fireball. Recently, the nucleosynthesis mechanism in heavy-ion collisions has been investigated via the measurement of hypertriton production by the ALICE collaboration.

The hypertriton, which consists of a proton, a neutron and a Λ hyperon, can be considered to be a loosely bound deuteron-Λ molecule (see “Inside pentaquarks and tetraquarks“). In this picture, the energy required to separate the Λ from the deuteron (BΛ) is about 100 keV, significantly lower than the binding energy of ordinary nuclei. This makes hypertriton production a sensitive probe of the properties of the fireball.

In heavy-ion collisions, the formation of nuclei can be explained by two main classes of models. The statistical hadronisation model (SHM) assumes that particles are produced from a system in thermal equilibrium. In this model, the production rate of nuclei depends only on their mass, quantum numbers and the temperature and volume of the system. On the other hand, in coalescence models, nuclei are formed from nucleons that are close together in phase space. In these models, the production rate of nuclei is also sensitive to their nuclear structure and size.

For an ordinary nucleus like the deuteron, coalescence and SHM predict similar production rates in all colliding systems, but for a loosely bound molecule such as the hypertriton, the predictions of the two models differ significantly. In order to identify the mechanism of nuclear production, the ALICE collaboration used the ratio between the production rates of hypertriton and helium-3 – also known as a yield ratio – as an observable.

ALICE measured hypertriton production as a function of charged-particle multiplicity density using Pb–Pb collisions collected at a centre-of-mass energy of 5.02 TeV per nucleon pair during LHC Run 2. Figure 1 shows the yield ratio of hypertriton to ³He across different multiplicity intervals. The data points (red) exhibit a clear deviation from the SHM (dashed orange line), but are well-described by the coalescence model (blue band), supporting the conclusion that hypertriton formation at the LHC is driven by the coalescence mechanism.

The ongoing LHC Run 3 is expected to improve the precision of these measurements across all collision systems, allowing us to probe the internal structure of hypertriton and even heavier hypernuclei, whose properties remain largely unknown. This will provide insights into the interactions between ordinary nucleons and hyperons, which are essential for understanding the internal composition of neutron stars.

A pevatron at the galactic centre https://cerncourier.com/a/a-pevatron-at-the-galactic-centre/ Mon, 16 Sep 2024 14:05:06 +0000 https://preview-courier.web.cern.ch/?p=111115 Recently, researchers at the High-Altitude Water Cherenkov observatory in Mexico reported the observation of ultra-high energy (> 100 TeV) gamma rays from the central region of the galaxy.

Best-fit HAWC spectral energy distribution

The measured all-particle energy spectrum for cosmic rays (CRs) is famously described by a steeply falling power law. The spectrum is almost featureless from energies of around 30 GeV to 3 PeV, where a break (also known as the “knee”) is encountered, after which the spectrum becomes steeper. It is believed that CRs with energies below the knee have galactic origins. This is supported by the observation of diffuse gamma rays from the galactic disk in the GeV range (a predominant mechanism for the production of gamma rays is via the decay of neutral pions created when relativistic protons interact with the ambient gas). The knee could be explained by either the maximum energy that galactic sources can accelerate CR particles to, or the escape of CR particles from the galaxy if they are energetic enough to overcome the confinement of galactic magnetic fields. Both scenarios, however, assume the presence of astrophysical sources within the galaxy that could accelerate CR particles up to PeV energies. For decades, scientists have therefore been on the hunt for such sources, reasonably called “pevatrons”.

Recently, researchers at the High-Altitude Water Cherenkov (HAWC) observatory in Mexico reported the observation of ultra-high energy (> 100 TeV) gamma rays from the central region of the galaxy. Using nearly seven years of data, the team found that a point source, HAWC J1746-2856, with a simple power-law spectrum and no signs of a cutoff from 6 to 114 TeV best describes the observed gamma-ray flux. A total of 98 events were observed at energies above 100 TeV.

To analyse the spatial distribution of the observed gamma rays, the researchers plotted a significance map of the galactic centre. On this map, they also plotted the point-like supernova remnant SNR G0.9+0.1 and an unidentified extended source, HESS J1745-303, both located 1° away from the galactic centre. While supernova remnants have long been a favoured candidate for galactic pevatrons, HAWC did not observe any excess at either of these source positions. There are, however, two other interesting point sources in this region: Sgr A* (HESS J1745-290), the supermassive black hole in the galactic centre; and HESS J1746-285, an unidentified source that is spatially coincident with the galactic radio arc. Imaging atmospheric Cherenkov telescopes such as HESS, VERITAS and MAGIC have measured the gamma-ray emission from these sources up to energies of about 20 TeV, but HAWC’s angular resolution at such energies is about six times coarser, so it cannot resolve them.

To remove the contamination of the flux by these sources, the authors assumed that their spectra extend across the full HAWC energy range and then estimated the corresponding event counts by convolving the best-fit models reported by HESS with HAWC’s instrument-response functions. The resulting HAWC spectral energy distribution, after subtracting these sources (see figure), is compatible with the diffuse-emission data points from HESS while maintaining a power-law behaviour, with no sign of a cutoff, extending up to at least 114 TeV. This is the first detection of gamma rays at energies > 100 TeV from the galactic centre, providing convincing evidence for the presence of a pevatron.
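The subtraction step rests on forward folding: propagating an assumed source spectrum through the detector response to predict counts. The sketch below illustrates the idea with a toy Gaussian (in log-energy) response matrix; every name and number here (spectral index, resolution, exposure) is an illustrative placeholder, not an actual HAWC or HESS analysis input.

```python
import numpy as np

# Toy forward-folding sketch: push an assumed power-law spectrum through
# a simple energy-response matrix to predict detected counts.

E_true = np.logspace(0, 2.2, 200)   # true energy grid [TeV]
E_reco = np.logspace(0, 2.2, 200)   # reconstructed energy grid [TeV]

def power_law(E, norm=1e-12, index=2.3, E0=1.0):
    """Differential flux dN/dE, in arbitrary units."""
    return norm * (E / E0) ** (-index)

def response_matrix(E_t, E_r, resolution=0.3):
    """Gaussian smearing in log-energy, row-normalised to conserve counts."""
    lnEt = np.log(E_t)[:, None]
    lnEr = np.log(E_r)[None, :]
    R = np.exp(-0.5 * ((lnEr - lnEt) / resolution) ** 2)
    return R / R.sum(axis=1, keepdims=True)

R = response_matrix(E_true, E_reco)
exposure = 1e15                      # effective area x live time, placeholder
counts_true = power_law(E_true) * np.gradient(E_true) * exposure
counts_reco = counts_true @ R        # expected counts vs reconstructed energy
```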

Furthermore, the diffuse emission is spatially correlated with the morphology of the central molecular zone (CMZ) – a region in the innermost 500 pc of the galaxy consisting of enormous molecular clouds totalling around 60 million solar masses. Such a correlation supports a hadronic scenario for the origin of the cosmic rays, in which gamma rays are produced via the interaction of relativistic protons with the ambient gas. In the leptonic scenario, electrons with energies above 100 TeV produce gamma rays via inverse Compton scattering, but such electrons suffer severe radiative losses; for a magnetic field strength of 100 μG, the maximum distance they can traverse is much smaller than the CMZ. In the hadronic case, by contrast, the escape time for protons is orders of magnitude shorter than their cooling time (via inelastic collisions producing π⁰s). A stronger magnetic field could confine them for longer but, as the authors argue, the escape time would still be much smaller than the age of the galaxy, pointing to a young source that is quasi-continuously injecting and accelerating protons into the CMZ.
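A back-of-envelope estimate makes the leptonic objection concrete. The snippet below is our own illustrative calculation, using standard constants and counting only synchrotron losses in the quoted 100 μG field, to bound how far a 100 TeV electron can travel before losing its energy:

```python
import numpy as np

# Rough sanity check (cgs units): how far can a 100 TeV electron travel
# in a 100 microgauss field before synchrotron losses drain its energy?

sigma_T = 6.652e-25          # Thomson cross-section [cm^2]
c = 2.998e10                 # speed of light [cm/s]
m_e_c2 = 8.187e-7            # electron rest energy [erg]
pc = 3.086e18                # parsec [cm]

E = 100 * 1.602              # 100 TeV in erg (1 TeV = 1.602 erg)
B = 100e-6                   # magnetic field [G]
U_B = B**2 / (8 * np.pi)     # magnetic energy density [erg/cm^3]

gamma = E / m_e_c2
dEdt = (4 / 3) * sigma_T * c * gamma**2 * U_B   # loss rate [erg/s]
t_cool = E / dEdt                               # cooling time [s]
d_max = c * t_cool / pc                         # free-streaming upper bound [pc]

print(f"t_cool ~ {t_cool / 3.156e7:.0f} yr, d_max < {d_max:.1f} pc")
# -> roughly a dozen years and ~4 pc: far below the ~500 pc CMZ,
#    which is why the leptonic scenario struggles, as argued above.
```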

The study also computes the energy density of cosmic-ray protons with energies above 100 TeV to be 8.1 × 10⁻³ eV/cm³. This is higher than the local value of 1 × 10⁻³ eV/cm³ measured by the Alpha Magnetic Spectrometer in 2015, indicating the presence of freshly accelerated protons in the energy range 0.1–1 PeV. The study could not identify the source itself, but with better modelling of the CMZ, and the improved performance of upcoming observatories such as CTAO and SWGO, candidate sites in the galactic centre are expected to be probed with much higher resolution.

In defiance of cosmic-ray power laws https://cerncourier.com/a/in-defiance-of-cosmic-ray-power-laws/ Fri, 05 Jul 2024 08:44:45 +0000 https://preview-courier.web.cern.ch/?p=110771 From its pristine vantage point on the International Space Station, the Calorimetric Electron Telescope, CALET, has uncovered anomalies in the spectra of protons and electrons below the cosmic-ray knee.

Figure: The Calorimetric Electron Telescope

In a series of daring balloon flights in 1912, Victor Hess discovered radiation that intensified with altitude, implying extra-terrestrial origins. A century later, experiments with cosmic rays have reached low-Earth orbit, but physicists are still puzzled. Cosmic-ray spectra are difficult to explain using conventional models of galactic acceleration and propagation. Hypotheses for their sources range from supernova remnants, active galactic nuclei and pulsars to physics beyond the Standard Model. The study of cosmic rays in the 1940s and 1950s gave rise to particle physics as we know it. Could these cosmic messengers be about to unlock new secrets, potentially clarifying the nature of dark matter?

The cosmic-ray spectrum extends well into the EeV regime, far beyond what can be reached by particle colliders. For many decades, the spectrum was assumed to be broken into intervals, each following a power law, as Fermi’s acceleration mechanism predicts. The junctures between intervals include: a steepening at about 3 × 10⁶ GeV known as the knee; a flattening at about 4 × 10⁹ GeV known as the ankle; and a further steepening at the supposed end of the spectrum somewhere above 10¹⁰ GeV (10 EeV).

Figure: The Calorimetric Electron Telescope detector

While the cosmic-ray population at EeV energies may include contributions from extra-galactic cosmic rays, and the end of the spectrum may be determined by collisions with relic cosmic-microwave-background photons – the Greisen–Zatsepin–Kuzmin cutoff – the knee is still controversial as the relative abundance of protons and other nuclei is largely unknown. What’s more, recent direct measurements by space-borne instruments have discovered “spectral curvatures” below the knee. These significant deviations from a pure power law range from a few hundred GeV to a few tens of TeV. Intriguing anomalies in the spectra of cosmic-ray electrons and positrons have also been observed below the knee.

Electron origins

The Calorimetric Electron Telescope (CALET; see “Calorimetric telescope” figure) on board the International Space Station (ISS) provides the highest-energy direct measurements of the spectrum of cosmic-ray electrons and positrons. Its goal is to observe discrete sources of high-energy particle acceleration in the local region of our galaxy. Led by the Japan Aerospace Exploration Agency, with the participation of the Italian Space Agency and NASA, CALET was launched from the Tanegashima Space Center in August 2015, becoming the second high-energy experiment operating on the ISS following the deployment of AMS-02 in 2011. During 2017 a third experiment, ISS-CREAM, joined AMS-02 and CALET, but its observation time ended prematurely.

Figure: A candidate electron event in CALET

As a result of radiative losses in space, high-energy cosmic-ray electrons are expected to originate just a few thousand light-years away, relatively close to Earth. CALET’s homogeneous calorimeter (fully active, with no absorbers) is optimised to reconstruct such particles (see “Energetic electron” figure). With the exception of the highest energies, anisotropies in their arrival direction are typically small due to deflections by turbulent interstellar magnetic fields.

Energy spectra also contain crucial information as to where and how cosmic-ray electrons are accelerated. And they could provide possible signatures of dark matter. For example, the presence of a peak in the spectrum could be a sign of dark-matter decay, or dark-matter annihilation into an electron–positron pair, with a detected electron or positron in the final state.

Direct measurements of the energy spectra of charged cosmic rays have recently achieved unprecedented precision thanks to long-term observations of electrons and positrons of cosmic origin, as well as of individual elements from hydrogen to nickel, and even beyond. Space-borne instruments such as CALET directly identify cosmic nuclei by measuring their electric charge. Ground-based experiments must do so indirectly by observing the showers they generate in the atmosphere, incurring large systematic uncertainties. Either way, hadronic cosmic rays can be assumed to be fully stripped of atomic electrons in their high-temperature regions of origin.

A rich phenomenology

The past decade has seen the discovery of unexpected features in the differential energy spectra of both leptonic and hadronic cosmic rays. The observation by PAMELA and AMS of an excess of positrons above 10 GeV has generated widespread interest and still calls for an unambiguous explanation (CERN Courier December 2016 p26). Possibilities include pair production in pulsars, in addition to the well known interactions with the interstellar gas, and the annihilation of dark matter into electron–positron pairs.

Figure: Combined electron and positron flux measurements as a function of kinetic energy

Regarding cosmic-ray nuclei, significant deviations of the fluxes from pure power-law spectra have been observed by several instruments in flight, including by CREAM on balloon launches from Antarctica, by PAMELA and DAMPE aboard satellites in low-Earth orbit, and by AMS-02 and CALET on the ISS. Direct measurements have also shown that the energy spectra of “primary” cosmic rays differ from those of “secondary” cosmic rays created by collisions of primaries with the interstellar medium. This rich phenomenology, which encodes information on cosmic-ray acceleration processes and the history of their propagation in the galaxy, is the subject of multiple theoretical models.

An unexpected discovery by PAMELA, anticipated by CREAM and later measured with greater precision by AMS-02, DAMPE and CALET, was the observation of a flattening of the differential energy spectra of protons and helium. Starting from energies of a few hundred GeV, the proton flux shows a smooth and progressive hardening (a flattening of the spectral slope) that continues up to around 10 TeV, above which a completely different regime is established. A turning point was the subsequent discovery by CALET and DAMPE of an unexpected softening of the proton and helium fluxes above about 10 TeV/Z, where the atomic number Z is one for protons and two for helium. The presence of a second break challenges the conventional “standard model” of cosmic-ray spectra and calls for a further extension of the observed energy range, currently limited to a few hundred TeV.
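The common scaling with Z suggests that the breaks line up in magnetic rigidity rather than in energy – a standard way to phrase it, sketched here schematically:

```latex
% Magnetic rigidity (p = momentum, Z = charge number). For relativistic
% nuclei E ~ ZR (R in volts, E in eV), so a break at fixed rigidity
% appears at an energy proportional to Z in each species:
R \;=\; \frac{pc}{Ze}, \qquad E \;\simeq\; Z\,R \quad (\text{ultra-relativistic limit})
```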

At present, only two experiments in low-Earth orbit have an energy reach beyond 100 TeV: CALET and DAMPE. They rely on a purely calorimetric measurement of the energy, while space-borne magnetic spectrometers are limited to a maximum magnetic “rigidity” – a particle’s momentum divided by its charge – of a few teravolts. Since the end of PAMELA’s operations in 2016, AMS-02 has been the only instrument in orbit able to discriminate the sign of the charge. This allows separate measurements of the high-energy spectra of positrons and antiprotons – an important input to the observation of final states containing antiparticles for dark-matter searches. AMS-02 is also now preparing for an upgrade: an additional silicon tracker layer will be deployed at the top of the instrument to significantly increase its acceptance and energy reach (CERN Courier March/April 2024 p7).

Pioneering observations

CALET was designed to extend the energy reach beyond the rigidity limit of present space-borne spectrometers, enabling measurements of electrons up to 20 TeV and measurements of hadrons up to 1 PeV. As an all-calorimetric instrument with no magnetic field, its main science goal is to perform precision measurements of the detailed shape of the inclusive spectra of electrons and positrons.

Figure: The Vela Pulsar

Thanks to its advanced imaging calorimeter, CALET can measure the kinetic energy of incident particles well into TeV energies, maintaining excellent proton–electron discrimination throughout. CALET’s homogeneous calorimeter has a total thickness of 30 radiation lengths, allowing for a full containment of electron showers. It is preceded by a high-granularity pre-shower detector with imaging capabilities that provide a redundant measurement of charge via multiple energy-loss measurements. The calibration of the two instruments is the key to controlling the energy scale, motivating beam tests at CERN before launch.

A first important deviation from a scale-invariant power-law spectrum was found for electrons near 1 TeV. Here, CALET and DAMPE observed a significant flux reduction, as expected from the large radiative losses of electrons during their travel in space. CALET has now published a high-statistics update up to 7.5 TeV, reporting the presence of candidate electrons above the 1 TeV spectral break (see “Electron break” figure).

This unexplored region may hold some surprises. For example, the detection of even higher energy electrons, such as the 12 TeV candidate recently found by CALET, may indicate the contribution of young and nearby sources such as the Vela supernova remnant, which is known to host a pulsar (see “Pulsar home” image).

A second unexpected finding is the observation of a significant reduction in the proton flux around 10 TeV. This bump and dip were also observed by DAMPE and anticipated by CREAM, albeit with low statistics (see “Proton bump” figure). A precise measurement of the flux has allowed CALET to fit the spectrum with a double-broken power law: after a spectral hardening starting at a few hundred GeV, which is also observed by AMS-02 and PAMELA, and which progressively increases above 500 GeV, a steep softening takes place above 10 TeV.

Figure: Proton flux measurements as a function of the kinetic energy
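A double-broken power law of the kind used for such fits can be written compactly with smooth break terms. The sketch below is a generic parametrisation with placeholder numbers that echo the narrative (hardening near 500 GeV, softening near 10 TeV); they are not the published CALET best-fit values.

```python
import numpy as np

# Generic smoothly double-broken power law, of the kind used to fit the
# proton spectrum.

def double_broken_pl(E, norm, g0, g1, g2, Eb1, Eb2, s=5.0):
    """dN/dE with index g0 below Eb1, g1 between the breaks, g2 above Eb2.

    The parameter s sets how sharp each break is."""
    f = norm * E ** (-g0)
    f *= (1 + (E / Eb1) ** s) ** ((g0 - g1) / s)   # hardening at Eb1
    f *= (1 + (E / Eb2) ** s) ** ((g1 - g2) / s)   # softening at Eb2
    return f

E = np.logspace(2, 5, 100)   # kinetic energy [GeV]
flux = double_broken_pl(E, norm=1.0,               # arbitrary normalisation
                        g0=2.8, g1=2.6, g2=2.9,
                        Eb1=500.0, Eb2=1.0e4)
```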

A similar bump and dip have been observed in the helium flux. These spectral features may result from a single physical process that generates a bump in the cosmic-ray spectrum. Theoretical models include an anomalous diffusive regime near the acceleration sources, the dominance of one or more nearby supernova remnants, the gradual release of cosmic rays from the source, and the presence of additional sources.

CALET is also a powerful hunter of heavier cosmic rays. Measurements of the spectra of boron, carbon and oxygen ions have been extended in energy reach and precision, providing evidence of a progressive spectral hardening for most of the primary elements above a few hundred GeV per nucleon. The boron-to-carbon flux ratio is an important input for understanding cosmic-ray propagation. This is because diffusion through the interstellar medium causes an additional softening of the flux of secondary cosmic rays such as boron with respect to primary cosmic rays such as carbon (see “Break in B/C?” figure). The collaboration also recently published the first high-resolution flux measurement of nickel (Z = 28), revealing the element to have a very similar spectrum to iron, suggesting similar acceleration and propagation behaviour.
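In simple diffusion models this secondary-to-primary softening takes a compact form – a textbook scaling, quoted here for orientation rather than taken from the CALET fit:

```latex
% Secondary-to-primary ratio in a diffusive-propagation picture,
% where delta is the diffusion index (typically ~0.3-0.6):
\frac{\Phi_{\mathrm{B}}(E)}{\Phi_{\mathrm{C}}(E)} \;\propto\; E^{-\delta}
```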

CALET is also studying the spectra of sub-iron elements, which are poorly known above 10 GeV per nucleon, and ultra-heavy galactic cosmic rays such as zinc (Z = 30), which are quite rare. CALET studies abundances up to Z = 40 using a special trigger with a large acceptance, so far revealing an excellent match with previous measurements from ACE-CRIS (a satellite-based detector), SuperTIGER (a balloon-borne detector) and HEAO-3 (a satellite-based detector decommissioned in the 1980s). Ultra-heavy galactic cosmic rays provide insights into cosmic-ray production and acceleration in some of the most energetic processes in our galaxy, such as supernovae and binary-neutron-star mergers.

Gravitational-wave counterparts

In addition to charged particles, CALET can detect gamma rays with energies between 1 GeV and 10 TeV, and study the diffuse photon background as well as individual sources. To study electromagnetic transients related to complex phenomena such as gamma-ray bursts and neutron-star mergers, CALET is equipped with a dedicated burst monitor covering the energy range 7 keV to 20 MeV, which to date has detected more than 300 gamma-ray bursts, around 10% of them short bursts. The search for electromagnetic counterparts to gravitational waves proceeds around the clock by following alerts from LIGO, Virgo and KAGRA. No X-ray or gamma-ray counterparts to gravitational waves have been detected so far.

Figure: CALET measurements of the boron to carbon flux ratio

On the low-energy side of cosmic-ray spectra, CALET has contributed a thorough study of the effect of solar activity on galactic cosmic rays, revealing a charge-sign dependence on the polarity of the Sun’s magnetic field that arises from the different paths taken by electrons and protons in the heliosphere. The instrument’s large-area charge detector has also proven ideal for space-weather studies of relativistic electron precipitation from the Van Allen belts in Earth’s magnetosphere.

The spectacular recent experimental advances in cosmic-ray research, and the powerful theoretical efforts they are driving, are moving us closer to a solution of the century-old puzzle of cosmic rays. With more than four billion cosmic rays observed so far, and a planned extension of the mission to the nominal end of ISS operations in 2030, CALET is expected to continue its campaign of direct measurements in space, contributing sharper and perhaps unexpected pictures of their complex phenomenology.

Super-massive black holes quickly repoint their jets https://cerncourier.com/a/super-massive-black-holes-quickly-repoint-their-jets/ Fri, 05 Jul 2024 08:40:02 +0000 https://preview-courier.web.cern.ch/?p=110852 Francesco Ubertosi of the University of Bologna and co-workers studied a sample of about 60 clusters observed using the Very Long Baseline Array and the Chandra X-ray telescope.

Figure: Two galaxy clusters observed by the Chandra X-ray Observatory

With masses up to 10¹⁵ times that of the Sun, galaxy clusters are the largest concentrations of matter in the universe. Within these objects, the space between the galaxies is filled with a gravitationally bound hot plasma. Given time, this plasma accretes onto the galaxies, cools down and eventually forms stars. However, observations indicate that the rate of star formation is slower than expected, suggesting that processes are at play that prevent the gas from accreting. Violent bursts and jets from the super-massive black holes at the centres of galaxy clusters are thought to quench star formation. A new study indicates that these jets rapidly change their directions.

Super-massive black holes sit at the centres of all galaxies, including our own, and can undergo periods of activity during which powerful jets are emitted along their spin axes. In the case of galaxy clusters, these bursts can be spotted in real time through their radio emission, while their histories can be traced using X-ray observations. As the jets are emitted, they crash into the intra-cluster plasma, sweeping up material and leaving behind bubbles, or cavities, in the plasma. Because the plasma emits in the X-ray band, these bubbles reveal themselves as voids when viewed with X-ray detectors. After their creation, they continue to move through the plasma and remain visible long after the original jet has disappeared (see image).

Francesco Ubertosi of the University of Bologna and co-workers studied a sample of about 60 clusters observed using the Very Long Baseline Array, which produces highly detailed radio images, and the Chandra X-ray telescope. The team studied the angle between the cavities and the current radio jet and found that in most clusters the two are aligned, indicating that the current jet points in the same direction as those responsible for the cavities produced in the past. However, around one third of the studied objects show significant angles, some as large as 90°.

This study therefore shows that the source of the jet, the super-massive black hole, appears to be able to reorient itself over time. More importantly, by dating the cavities the team showed that this can happen on time scales of just one million years. To get an idea of the rapidity of this change, consider that the solar system takes 225 million years to orbit the super-massive black hole at the centre of the Milky Way, while Earth takes 365 days for one revolution around the Sun. If the Milky Way’s super-massive black hole altered its spin axis on a timescale of one million years, it would therefore be as if the Sun changed its spin axis in a matter of days.
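The scaling behind that analogy is a one-line calculation (our own check of the numbers quoted above):

```latex
% Reorientation time expressed in "Earth-orbit" units:
1\ \text{yr} \times \frac{1\ \text{Myr}}{225\ \text{Myr}} \;\approx\; 1.6\ \text{days}
```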

These observations raise the question of how the re-orientation of jets from super-massive black holes takes place. The authors find that the results are unlikely to be due to projection effects, or perturbations that significantly shift the position of the cavities. Instead, the most plausible explanation is that the spin axes of the super-massive black hole tilt significantly, likely affected by complex accretion flows. The results therefore reveal important information about the accretion dynamics of super-massive black holes. They also offer important insights into how stars form in these clusters, as the reorientation would further suppress star formation.

High time for holographic cosmology https://cerncourier.com/a/high-time-for-holographic-cosmology/ Fri, 05 Jul 2024 08:19:41 +0000 https://preview-courier.web.cern.ch/?p=110889 On the Origin of Time is an intellectually thrilling book and a worthy sequel to Stephen Hawking’s bestsellers, writes Wolfgang Lerche.

On the Origin of Time is an intellectually thrilling book and a worthy sequel to Stephen Hawking’s bestsellers. Thomas Hertog, who was a student and collaborator of Hawking, suggests that it may be viewed as the next book the famous scientist would have written if he were still alive. While addressing fundamental questions about the origin of the cosmos, Hertog sprinkles the text with anecdotes from his interactions with Hawking, easing up on the otherwise intense barrage of ideas and concepts. But despite its relaxed and popular style, the book will be most useful for physicists with a basic education in relativity and quantum theory.

Expanding universes

The book starts with an exhaustive journey through the history of cosmology. It reviews the ancient idea of an eternal mathematical universe, passes through the ages of Copernicus and Newton, and then enters the modern era of Einstein’s universe. Hertog thoroughly explores static and expanding universes, Hoyle’s steady-state cosmos, Hartle and Hawking’s no-boundary universe, Guth’s inflationary universe and Linde’s multiverse with eternal inflation. Everything culminates in the proposal for holographic quantum cosmology that the author developed together with the late Hawking.

What makes the book especially interesting is its philosophical reflections on the historical evolution of various underlying scientific paradigms. For example, the ancient Greeks developed the Platonic view that the workings of the world should be governed by eternal mathematical laws. This laid the groundwork for the reductionistic worldview that many scientists – especially particle physicists – subscribe to today.

Hertog argues that this way of thinking is flawed, especially when confronted with a Big Bang followed by a burst of inflation. Given the supremely fine-tuned structure of our universe, as is necessitated by the existence of atoms, galaxies and ultimately us, how could the universe “know” back at the time of the Big Bang that this fine-tuned world would emerge after inflation and phase transitions?

Figure: On the Origin of Time: Stephen Hawking’s Final Theory

The quest to scientifically understand this apparent intelligent design has led to physical scenarios such as eternal inflation, which produces an infinite collection of pocket universes with their own laws. These ideas blend the anthropic principle – that only a life-friendly universe can be observed – into the narrative of a multiverse.

However, for anthropic reasoning to make sense, one needs to specify what a typical observer would be, observes Hertog, because otherwise the statement is circular. Instead, he argues that one should interpret the history of the universe as an evolutionary process. Not only would physical objects continuously evolve, but also the laws that govern them, thereby building up an enormous chain of frozen accidents analogous to the evolutionary tree of biological species on Earth.

This represents a major paradigm shift as it introduces a retrospective element: one can only understand evolution by looking at it backwards in time. Deterministic and causal explanations apply only at a crude, coarse-grained level, while the precise way that structures and laws play out is governed by accumulated accidents. Essentially the question “how did everything start?” is superseded by the question “how did our universe become as it is today?” This may be seen as adopting a top-down view (into the past) instead of a bottom-up view (from the past).

Hawking criticised traditional cosmology for hiding certain assumptions, in particular the separation of the fundamental laws from initial boundary conditions and from the role of the observer. Instead, one should view the universe, at its most fundamental level, as a quantum superposition of many possible spacetimes, of which the observer is an intrinsic part.

From this Everettian viewpoint, wavefunctions behave like separate branches of reality. A measurement is like a fork in the road, where history divides into different outcomes. This line of thought has significant consequences. The author presents an illuminating analogy with the delayed-choice double-slit experiment, first conceived by John Archibald Wheeler. Here the measurement that determines whether an electron behaves as a particle or a wave is delayed until after the electron has already passed the slits. This demonstrates that the act of observation introduces a retroactive component which, in a sense, creates the past history of the electron.

The fifth dimension 

Further ingredients are needed to transform this collection of ideas to a concrete proposal, argues Hertog. In short, these are quantum entanglement and holography. Holography has been recognised as a key property of quantum gravity, following Maldacena’s work on quantum black holes. It posits that all the information about the interior of a black hole is encoded at its horizon, which acts like a holographic screen. Inside, a fictitious fifth dimension emerges that plays the role of an energy scale.

In Hawking and Hertog’s holographic quantum universe, one considers a Euclidean universe where the role of the holographic screen is played by the surface of our observations. The main idea is that the emergent dimension is time itself! In essence, the observed universe, with all its complexity, is like a holographic screen whose quantum bits encode its past history. Moving from the screen to the interior is equivalent to going back in time, from a highly entangled complex universe to a gradually less structured universe with fading physical laws and less entangled qubits. Eventually no entangled qubits remain. This is the origin of time as well as of the physical laws. Such a holographic universe would be the polar opposite of a Platonic universe with eternal laws.

Could these ideas be tested? Hertog argues that an observable imprint in the spectrum of primordial gravitational waves could be discovered in the future. For now, On the Origin of Time is delightful food for thought.

The next 10 years in astroparticle theory https://cerncourier.com/a/the-next-10-years-in-astroparticle-theory/ Fri, 05 Jul 2024 08:16:30 +0000 https://preview-courier.web.cern.ch/?p=110857 Newly appointed EuCAPT director Silvia Pascoli sets out her vision for disentangling fundamental questions involving dark matter, the baryon asymmetry, neutrinos, cosmic rays, gravitational waves, dark energy and other cosmic relics.

Figure: Pulsar timing arrays

Astroparticle physics connects the extremely small with the extremely large. At the interface of particle physics, cosmology and astronomy, the field ties particles and interactions to the hot Big Bang cosmological model. This synergy allows us to go far beyond the limitations of terrestrial probes in our quest to understand nature at its most fundamental level. A typical example is neutrino masses, where cosmological constraints from large-scale structure formation are far stronger than current bounds from terrestrial experiments. Astroparticle theory (APT) has accelerated quickly in the past 10 years. And this looks certain to continue in the next 10.

Today, neutrino masses, dark matter and the baryon asymmetry of the universe are the only evidence we have of physics beyond the Standard Model (BSM) of particle physics. Astroparticle theorists study how to extend the theory towards a new Standard Model – and the cosmological consequences of doing so.

New insights

For a long time, work on dark matter focused on TeV-scale models parallel to searches at the LHC and in ultra-low-noise detectors. The scope has now broadened to a much larger range of masses and models, from ultralight dark matter and axions to sub-GeV dark matter and WIMPs. Theoretical developments have gone hand-in-hand with new experimental opportunities. In the next 10 years, much larger detectors are planned for WIMP searches aiming towards the neutrino floor. Pioneering experimental efforts, even borrowing techniques from atomic and condensed-matter physics, test dark matter with much lower masses, providing new insights into what dark matter may be made of.

Neutrinos provide a complementary window on BSM physics. It is just over 25 years since the discovery of neutrino oscillation provided evidence that neutrinos have mass – a fact that cannot be accounted for in the SM (CERN Courier May/June 2024 p29). But the origin of neutrino masses remains a mystery. In the coming decade, neutrinoless double-beta decay experiments and new large experiments, such as JUNO, DUNE (see “A gold mine for neutrino physics“) and Hyper-Kamiokande, will provide a much clearer picture, determining the mass ordering and potentially discovering the neutrino’s nature and whether it violates CP symmetry. These results may, via leptogenesis, be related to the origin of the matter–antimatter asymmetry of the universe.

Recently, there has been renewed interest in models with scales accessible to current particle-physics experiments. These will exploit the powerful beams and capable detectors of the current and future experimental neutrino programme, and collider-based searches for heavy neutral leptons with MeV-to-TeV masses.

Overall, while the multi-TeV scale should continue to be a key focus for both particle and astroparticle physics experiments, I strongly welcome the theoretical and experimental efforts to broaden the reach in mass scales to efficiently hunt for any hint of what the new physics BSM may be.

Figure: Silvia Pascoli

Astroparticle physics also studies the particles that arrive on Earth from all around our universe. They come from extreme astrophysical environments, such as supernovae and active galactic nuclei, where they may be generated and accelerated to the highest energies. Thanks to their detection we can study the processes that fuel these astrophysical objects and gain an insight into their evolution (see “In defiance of cosmic-ray power laws“).

The discovery of gravitational waves (GWs) just a few years ago has shed new light on this field. Together with gamma rays, cosmic rays and the high-energy neutrinos detected at IceCube, the field of multi-messenger astronomy is in full bloom. In the coming years it will get a boost from the results of new, large experiments such as KM3NeT, the Einstein Telescope, LISA and the Cherenkov Telescope Array – as well as from many new theoretical developments, such as advanced particle-theory techniques for GW predictions.

In the field of GWs, last year’s results from pulsar timing arrays indicate the presence of a stochastic background of GWs. What is its origin? Is it of astrophysical nature or does it come from some dramatic event in the early universe, such as a strong first-order phase transition? In this latter case, we would be getting a glimpse of the universe when it was just born, opening up a new perspective on fundamental particles and interactions. Could it be that we have seen a new GeV-scale dark sector at work? It is too early to tell. But this is very exciting.

CERN teams up with ET on civil engineering https://cerncourier.com/a/cern-teams-up-with-et-on-civil-engineering/ Fri, 05 Jul 2024 07:30:21 +0000 https://preview-courier.web.cern.ch/?p=110847 The Einstein Telescope requires a new underground infrastructure in the form of a triangle with 10 km-long arms.

The Einstein Telescope (ET), a proposed third-generation gravitational-wave observatory in Europe with a much higher sensitivity than existing facilities, requires a new underground infrastructure in the form of a triangle with 10 km-long arms. At each corner a large cavern will host complex mirror assemblies that detect relative displacements as small as 10⁻²² m caused by momentary stretches and contractions of space–time. Access to the underground structure, which needs to be at a depth of between 200 and 300 m to mitigate environmental and seismic noise, will be provided by either vertical shafts or inclined tunnels. Currently there are two candidate sites for the ET: the Meuse–Rhine Euroregion and the Sardinia region in Italy, each with their own geology and environment.

CERN is already sharing its expertise in vacuum, materials, manufacturing and surface treatments with the gravitational-wave community. Beginning in 2022, a collaboration between CERN, Nikhef and INFN is exploring practical solutions for the ET vacuum tubes which, with a diameter of 1 to 1.2 m, would represent the largest ultrahigh vacuum systems ever built (CERN Courier September/October 2023 p45).

In September 2023, the ET study entered a further agreement with CERN to support the preparation of a site-independent technical design report. With civil-engineering costs representing a significant proportion of the overall implementation budget, detailed studies are needed to ensure a cost-efficient design and construction methodology. Supported financially by INFN, Nikhef and IFAE, CERN will provide technical assistance on how to optimise the tunnel placement, for example via software tools to generate geological profiles. Construction methodology, management of excavated materials, carbon footprint, environmental impact, and project cost and schedule are other key aspects. CERN will also provide recommendations during the technical review of the associated documents that feed into the site selection.

“We are advising the ET study on how we managed similar design studies for colliders such as CLIC, ILC, the FCC and the HL-LHC upgrade,” explains John Osborne of CERN’s site and civil-engineering department. “CERN is acting as an impartial third party in the site-selection process.”

A decision on the most suitable ET site is expected in 2027, with construction beginning a few years later. “The collaboration with CERN represents an element of extreme value in the preparation phase of the ET project,” says ET civil-engineering team leader Maria Marsella. “CERN’s involvement will help to design the best infrastructure at any selected sites and to train the future generation of engineers who will have to face the construction of such a large underground research facility.”

The inventive pursuit of UHF gravitational waves https://cerncourier.com/a/the-inventive-pursuit-of-uhf-gravitational-waves/ Sat, 04 May 2024 15:52:54 +0000 https://preview-courier.web.cern.ch/?p=110672 Since their first direct detection in 2015, gravitational waves have become pivotal in our quest to understand the universe.

Since their first direct detection in 2015, gravitational waves (GWs) have become pivotal in our quest to understand the universe. The ultra-high-frequency (UHF) band offers a window to discover new physics beyond the Standard Model (CERN Courier March/April 2022 p22). Unleashing this potential requires theoretical work to investigate possible GW sources and experiments with far greater sensitivities than those achieved today.

A workshop at CERN from 4 to 8 December 2023 leveraged impressive experimental progress in a range of fields. Attended by nearly 100 international scientists – a noteworthy increase from the 40 experts who attended the first workshop at ICTP Trieste in 2019 – the workshop showcased the field’s expanded research interest and collaborative efforts. Concretely, about 10 novel detector concepts have been developed since the first workshop.

One can look for GWs in a few different ways: observing changes in the space between detector components, exciting vibrations in detectors, and converting GWs into electromagnetic radiation in strong magnetic fields. Substantial progress has been made in all three experimental directions.

Levitating concepts

The leading concepts for the first approach involve optically levitated sensors such as high-aspect-ratio sodium–yttrium–fluoride prisms, and semi-levitated sensors such as thin silicon or silicon–nitride nanomembranes in long optical resonators. These technologies are currently under study by various groups in the Levitated Sensor Detectors collaboration and at DESY.

For the second approach, the main focus is on millimetre-scale quartz cavities similar to those used in precision clocks. A network of such detectors, known as GOLDEN, is being planned, involving collaborations among UC Davis, University College London and Northwestern University. Superconducting radio-frequency cavities also present a promising technology. A joint effort between Fermilab and DESY is leveraging the existing MAGO prototype to gain insights and design further optimised cavities.

Regarding the third approach, a prominent example is optical high-precision interferometry, combined with a series of accelerator dipole magnets similar to those used in the light-shining-through-a-wall axion-search experiment, ALPS II (Any Light Particle Search II) or the axion helioscope CAST and its planned successor IAXO. In fact, ALPS II is anticipated to commence a dedicated GW search in 2028. Additionally, other notable concepts inspired by axion dark-matter searches involve toroidal magnets, exemplified by experiments like ABRACADABRA, or solenoidal magnets such as BASE or MADMAX.

All three approaches stand to benefit from burgeoning advances in quantum sensing, which promise to enhance sensitivities by orders of magnitude. In this landscape, axion dark-matter searches and UHF GW detection are poised to work in close collaboration, leveraging quantum sensing to achieve unprecedented results. Concepts that demonstrate synergies with axion-physics searches are crucial at this stage, and can be facilitated by incremental investments. Such collaboration builds awareness within the scientific community and presents UHF searches as an additional, compelling science case for their construction.

Cross-disciplinary research is also crucial to understand cosmological sources and constraints on UHF GWs. For the former, our understanding of primordial black holes has significantly matured, transitioning from preliminary estimates to a robust framework. Additional sources, such as parabolic encounters and exotic compact objects, are also gaining clarity. For the latter, the workshop highlighted how strong magnetic fields in the universe, such as those in extragalactic voids and planetary magnetospheres, can help set limits on the conversion between electromagnetic and gravitational waves.

Despite much progress, the sensitivity needed to detect UHF GWs remains a visionary goal, requiring the constant pursuit of inventive new ideas. To aid this, the community is taking steps to be more inclusive. The living review produced after the first workshop (arXiv:2011.12414) will be revised to be more accessible for people outside our community, breaking down detector concepts into fundamental building blocks for easier understanding. Plans are also underway to establish a comprehensive research repository and standardise data formats. These initiatives are crucial for fostering a culture of open innovation and expanding the potential for future breakthroughs in UHF GW research. Finally, a new, fully customisable and flexible GW plotter including the UHF frequency range is being developed to benefit the entire GW community.

The journey towards detecting UHF GWs is just beginning. While current sensitivities are not yet sufficient, the community’s commitment to developing innovative ideas is unwavering. With the collective efforts of a dedicated scientific community, the next leap in gravitational-wave research is on the horizon. Limits exist to be surpassed!

First DESI results shine a light on Hubble tension https://cerncourier.com/a/first-desi-results-shine-a-light-on-hubble-tension/ Fri, 03 May 2024 12:44:08 +0000 https://preview-courier.web.cern.ch/?p=110628 The Dark Energy Spectroscopic Instrument has catalogued millions of objects in its bid to study the evolution of the universe.

The expansion of the universe has been a well-established fact of physics for almost a century. By the turn of the millennium, measurements of the rate of this expansion, referred to as the Hubble constant (H0), had converged to a value of around 70 km s⁻¹ Mpc⁻¹. However, more recent measurements have given rise to a tension: whereas those derived from the cosmic microwave background (CMB) cluster around a value of 67 km s⁻¹ Mpc⁻¹, direct measurements using a local distance ladder (such as those based on Cepheids) mostly prefer larger values around 73 km s⁻¹ Mpc⁻¹. This disagreement between early- and late-universe measurements stands at the 4–5σ level, calling for novel measurements.

One such source of new information is large galaxy surveys, such as the one currently being performed by the Dark Energy Spectroscopic Instrument (DESI). This Arizona-based instrument uses 5000 individual robotic positioners in its focal plane, allowing it to measure the spectra of 5000 galaxies at the same time. The goal of the survey is to provide a detailed 3D map that can be used to study the evolution of the universe by focusing on the distances between galaxies. During its first year of observation, the results of which have now been released, DESI has catalogued millions of objects.

Primordial imprints

Small fluctuations in the density of the early universe left signatures not only in the CMB, as measured for example by the Planck probe, but also in the distribution of baryonic matter. Each over-dense region contained dark matter, baryonic matter and photons. The gravitational pull of the dark matter on the baryons was countered by radiation pressure from the photons: baryons were dragged outwards by photon pressure until the two species decoupled during the recombination era. The original location of each over-density is thus surrounded by a shell of baryonic matter at a characteristic distance referred to as the sound horizon. The sound horizon at the moment of decoupling, denoted rd, left an imprint that has since evolved into the density fluctuations that seeded large-scale structures in the universe.
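For reference, the sound horizon is the comoving distance a sound wave can travel in the photon–baryon fluid before decoupling – the standard definition, with c_s the sound speed and z_d the redshift of the drag epoch:

```latex
% Comoving sound horizon at the drag epoch:
r_d \;=\; \int_{z_d}^{\infty} \frac{c_s(z)}{H(z)}\,\mathrm{d}z
```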

Figure: Constraints on the Hubble constant assuming the flat ΛCDM model

This imprint, and how it has evolved over the last 13 billion years, depends on a number of parameters in the standard ΛCDM model of cosmology. Measuring the baryon distribution therefore allows many of the ΛCDM parameters to be constrained. Since the DESI data measure the combination of H0 and rd, a direct measurement of H0 is not possible. However, by using additional data for the sound horizon, taken from CMB measurements and Big Bang nucleosynthesis theory, the team finds values of H0 that cluster around 67.5 km s⁻¹ Mpc⁻¹ (see “Hubble tension” figure). This is consistent with early-universe measurements and differs by more than 3σ from late-universe measurements.

Although these new results do not directly resolve the Hubble tension, they do hint at one potential solution: the need to revise the ΛCDM model. The measurements also allow constraints to be placed on the acceleration of the universe, which depends on the dark-energy equation of state, w. While this is naturally assumed to be constant at w = –1, the DESI first-year results better match a time-evolving equation of state. Although highly dependent on the analysis, the DESI data so far provide results that differ from ΛCDM predictions by more than 2.5σ. The data from the remaining four years of the survey are therefore highly anticipated as these will show whether a change to the standard cosmological model is required.
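A common way to allow a time-evolving equation of state – and, we assume, representative of the kind of extension explored in such analyses – is the two-parameter CPL form, where a is the cosmic scale factor:

```latex
% CPL parametrisation of an evolving dark-energy equation of state;
% w0 = -1 and wa = 0 recover the cosmological constant of LambdaCDM:
w(a) \;=\; w_0 + w_a\,(1 - a)
```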

Sabbatical in space https://cerncourier.com/a/sabbatical-in-space/ Fri, 03 May 2024 12:30:56 +0000 https://preview-courier.web.cern.ch/?p=110635 CERN engineer Sławosz Uznański is one of 17 astronauts selected by ESA from among more than 22,000 applicants.

Sławosz Uznański had to bide his time. Since its foundation in 1975, the European Space Agency (ESA) had only opened four selection rounds for new astronauts. When a fresh opportunity arose in 2021, Uznański’s colleagues in CERN’s electric power converters group were supportive of his ambitions to take an extended sabbatical in space. Now confirmed as one of 17 astronauts selected from among more than 22,000 applicants, Uznański is in training for future missions to the International Space Station (ISS).

His new colleagues are a diverse bunch, including geologists, medical doctors, astrophysicists, biologists, biotechnologists, jet-fighter pilots and helicopter pilots. His own background is as a physicist and systems engineer. Following academic work studying the effect of radiation on semiconductors, Uznański spent 12 years at CERN working on powering existing infrastructure and future projects such as the Future Circular Collider. He is most proud of leading reliability-engineering efforts and helping to design and deploy a new radiation-tolerant power-converter control system across the entire LHC accelerator complex.

Preparing for orbit

For now, Uznański’s astronaut training is mostly theoretical, covering the ISS’s orbit-trajectory control, thermal control, communications, data handling, guidance, navigation and power generation – the last an area where he has deep expertise. But lift-off may not be far away, and one of his reserve-astronaut colleagues, Marcus Wandt, has already flown to the ISS.

“I had the chance, in January, to see him launch from Cape Canaveral. And then, thanks to my operational experience at CERN, being in the control room, I came back directly to Columbus Control Center in Munich. Throughout the whole mission, I was in the control room, to support the mission and learn what I might live through one day.”

Rather than expertise or physical fitness, Uznański sees curiosity as the golden thread for astronauts – not least because they have to be able to perform any type of experiment that is assigned to them. As a Polish astronaut, he will have responsibility for the scientific experiments that are intended to accompany his country’s first mission to the ISS, most likely in late 2024 or early 2025. Among 66 proposals from Polish institutes, a dozen or more are currently being considered to fly.

The experiments are as diverse as the astronauts’ professional backgrounds. One will non-invasively monitor astronauts’ brain activity to help develop human–machine interfaces for artificial limbs. Another – a radiation monitor developed at CERN – plays on the fact that shielded high-energy physics environments have a similar radiation environment to the ISS in low-earth orbit. Uznański hopes that this technology can be commercialised and become another example of the opportunities out there for budding space entrepreneurs.

“I think we are in a fascinating moment for space exploration,” he explains, pointing to the boom in the commercial sector since 2014. “Space technology has gotten really democratised and commercialised. And I think it opens up possibilities for all types of engineers who build systems with great ideas and great science.”

Open science is a hot topic here. It’s increasingly possible to access venture capital to develop related technologies, notes Uznański, and the challenge is to ensure that the science is used in an open manner. “There is a big overlap between CERN culture and ESA culture in this respect. CERN is extremely open in terms of technologies and I very much identify myself with that.”

However societies choose to shape the future of open science in space, the two organisations are already partnering on several projects devoted to the pure curiosity that is dear to Uznański’s heart. These range from Euclid’s study of dark energy (CERN Courier May/June 2023 p7) to the ongoing study of cosmic rays by the Alpha Magnetic Spectrometer (AMS). With AMS due for an upgrade in 2026 (CERN Courier March/April 2024 p7), he cannot help but hope to be on that flight.

“If the opportunity arises, it’s a clear yes from me.”

Advances in cosmology https://cerncourier.com/a/advances-in-cosmology/ Mon, 15 Apr 2024 12:28:24 +0000 https://preview-courier.web.cern.ch/?p=110469 The papers assembled in this volume range in subject matter from dark-matter searches and gravitational waves to artistic and philosophical considerations.

Figure: Advances in Cosmology

On the 30th anniversary of the discovery of weak neutral currents, the architects of the Standard Model of strong and electroweak interactions met in the CERN main auditorium on 16 September 2003 to debate the future of high-energy physics. During the panel discussion, Steven Weinberg repeatedly propounded the idea that cosmology is part of the future of high-energy physics, since cosmology “is now a science” as opposed to a mere theoretical framework characterised by diverging schools of thought. Twenty years later, this viewpoint may serve as a summary of the collection of articles in Advances in Cosmology.

The papers assembled in this volume encompass the themes that are today associated with the broad domain of cosmology. After a swift theoretical section, the contributions range from dark-matter searches (both at the LHC and in space) to gravitational waves and optical astronomy. The last two sections even explore the boundaries between cosmology, philosophy and artistic intuition. Indeed, as former CERN Director-General Rolf Heuer correctly puts it in his thoughtful foreword, the birth of quantum mechanics was also a philosophical enterprise: both Wolfgang Pauli and Werner Heisenberg never denied their Platonic inspiration and reading Timaeus (the famous Plato dialogue dealing with the origin and purpose of the universe) was essential for physicists of that generation to develop their notion of symmetry (see, for instance, Heisenberg’s 1969 book Physics and Beyond).

In around 370 pages, the editors of Advances in Cosmology manage to squeeze in more than two millennia of developments ranging from Pythagoras to the LHC, and for this reason the various contributions clearly follow different registers. Interested readers will not only find specific technical accounts but also the wisdom of science communicators and even artists. This is why the complementary parts of the monograph share the same common goals, even if they are not part of the same logical line of thinking.

Advances in Cosmology appeals to those who cherish an inclusive and eclectic approach to cosmology and, more generally, to modern science. While in the mid 1930s Edwin Hubble qualified the frontier of astronomy as the “realm of the nebulae”, modern cosmology combines the microscopic phenomena of quantum mechanics with the macroscopic effects of general relativity. As this monograph concretely demonstrates, the boundaries between particle phenomenology and the universe’s sciences are progressively fading away. Will the next 20 years witness only major theoretical and experimental breakthroughs, or more radical changes of paradigm? From the diverse contributions collected in this book, we could say, a posteriori, that scientific revolutions are never isolated as they need environmental selection rules that come from cultural, technological and even religious boundary conditions that cannot be artificially manufactured. This is why paradigm shifts are often difficult to predict and only recognised well after their appearance.

Potent accelerators in microquasar jets https://cerncourier.com/a/potent-accelerators-in-microquasar-jets/ Wed, 27 Mar 2024 18:41:45 +0000 https://preview-courier.web.cern.ch/?p=110353 H.E.S.S. opens exciting possibilities in the search for galactic cosmic-ray sources at PeV energies and extragalactic ones at EeV energies.

Supernova remnants (SNRs) are excellent candidates for the production of galactic cosmic rays. Still, as we approach the “knee” region in the cosmic-ray spectrum (in the few-PeV regime), other astrophysical sources may contribute. A recent study by the High Energy Stereoscopic System (H.E.S.S.) observatory in Namibia sheds light on one such source, called SS 433, a microquasar located nearly 18,000 light-years away. It is a binary system formed by a compact object, such as a neutron star or a stellar-mass black hole, and a companion star, where the former is continuously accreting matter from the latter and emitting relativistic jets perpendicular to the accretion plane.

The jets of SS 433 are oriented perpendicular to our line of sight and constantly distort the SNR shell (called W50, or the Manatee Nebula) that was created during the black-hole formation. Radio observations reveal the precessing motion of the jets out to 0.3 light-years from the black hole, beyond which they disappear. At approximately 81 light-years from the black hole, they reappear as collimated large-scale structures in the X- and gamma-ray bands, termed “outer jets”. These jets are a fascinating probe into particle-acceleration sites, as interactions between jets and their environments can lead to the acceleration of particles that produce gamma rays.

Excellent resolution

The H.E.S.S. collaboration collected and analysed more than 200 hours of data from SS 433 to investigate the acceleration and propagation of electrons in its outer jets. As an array of imaging atmospheric-Cherenkov telescopes, H.E.S.S. offers excellent energy and angular resolution. The gamma-ray image showed two emission regions along the outer jets, which overlap with previously observed X-ray sources. To study the energy dependence of the emission, the data were divided into three energy bands, revealing that the highest-energy emission is concentrated closer to the central source, i.e. at the base of the outer jets. A proposed explanation for the observations is that electrons are accelerated to TeV energies, generate high-energy gamma rays via inverse Compton scattering, and subsequently lose energy as they propagate outwards to generate the observed X-rays.

Monte Carlo simulations modelled the morphology of the gamma-ray emission and revealed a significant deceleration in the velocity of the outer jets at their bases, indicating a possible shock region. With a lower limit on the cut-off energy for electron injection into this region, the acceleration energies were found to be above 200 TeV at 68% confidence level. Protons and heavier nuclei can also be accelerated in these regions and reach much higher energies, since they suffer weaker energy losses and carry a higher total energy than electrons.

These jets are a fascinating probe into particle-acceleration sites

SS 433 is, unfortunately, ruled out as a contributor to the cosmic-ray flux observed on Earth. Taking the age of the system to be 30,000 years and proton energies of 1 PeV, the diffusion distance traversed by a cosmic-ray particle is much smaller than even the lowest estimates of the distance to SS 433. Even assuming a significantly larger galactic diffusion coefficient, or an age 40 times greater, the scenario remains incompatible with other measurements and with the highest estimates of the nebula’s age. While proton acceleration does occur in the outer jets of SS 433, these particles play no part in the cosmic-ray flux measured on Earth.
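A back-of-the-envelope version of that argument, assuming a textbook-style Kolmogorov-like galactic diffusion coefficient (an illustrative value, not the one used in the H.E.S.S. analysis):

```python
# Order-of-magnitude check: how far can a 1 PeV proton diffuse in the
# lifetime of SS 433? The diffusion coefficient is an assumed
# Kolmogorov-like galactic value, not the H.E.S.S. analysis input.
import math

KPC_CM = 3.086e21                 # centimetres per kiloparsec
YEAR_S = 3.156e7                  # seconds per year

age_s = 3.0e4 * YEAR_S            # assumed system age: 30,000 years
E_GeV = 1.0e6                     # 1 PeV proton
D = 3.0e28 * (E_GeV / 3.0) ** (1.0 / 3.0)    # cm^2/s, D(E) ~ E^(1/3)

diff_len_kpc = math.sqrt(2.0 * D * age_s) / KPC_CM
dist_ss433_kpc = 18000.0 / 3262.0            # 18,000 light-years in kpc

print(f"diffusion length ~ {diff_len_kpc:.1f} kpc")      # ~0.6 kpc
print(f"distance to SS 433 ~ {dist_ss433_kpc:.1f} kpc")  # ~5.5 kpc
```

Under these assumptions the diffusion length falls nearly an order of magnitude short of the distance to SS 433, which is the essence of the exclusion.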

This study, by revealing the energy-dependent morphology of a galactic microquasar and constraining jet velocities at large distances, firmly establishes shocks in microquasar jets as potent particle-acceleration sites and offers valuable insights for future modelling of these astrophysical structures. It opens up exciting possibilities in the search for galactic cosmic-ray sources at PeV energies and extragalactic ones at EeV energies.

AMS upgrade seeks to solve cosmic conundrum https://cerncourier.com/a/ams-upgrade-seeks-to-solve-cosmic-conundrum/ Wed, 27 Mar 2024 18:37:37 +0000 https://preview-courier.web.cern.ch/?p=110328 A major upgrade to the AMS-02 tracking system planned for 2026 will bring key information relating to a mysterious excess of cosmic rays at high energies.

New tracker layer and prototype PDS radiator

Since being delivered to the International Space Station (ISS) by Space Shuttle Endeavour in 2011, the Alpha Magnetic Spectrometer (AMS-02) has recorded more than 200 billion cosmic-ray events with energies extending into the multi-TeV range. Although AMS was never designed to be serviceable, a major intervention on the 7.5 tonne detector in 2019/2020, during which astronauts replaced a failing cooling system, extended its lifetime significantly (CERN Courier March/April 2020 p9). Now, the international collaboration is preparing a new mission to upgrade the detector itself, by adding an additional tracker layer and associated thermal radiators. If all goes to plan, the upgrade will allow physicists to gather key data relating to a mysterious excess of cosmic rays at high energies.

Precise dataset

The increasingly precise AMS-02 dataset reveals numerous unexplained features in cosmic-ray spectra (CERN Courier December 2016 p26). In particular, a high-energy excess in the relative positron flux does not follow the single power-law behaviour expected from standard cosmic-ray interactions with the interstellar medium. While known astrophysical sources such as pulsars cannot yet be ruled out, the spectrum fits well to dark-matter models. If the excess events are indeed due to the annihilation of dark-matter particles, a smoking gun would be a high-energy cut-off in the spectrum. By increasing the AMS acceptance by 300%, the addition of a new tracker layer is the only way that the experiment can gather the necessary data to test this hypothesis before the scheduled decommissioning of the ISS in 2030.

“By 2030 AMS will extend the energy range of the positron flux measurement from 1.4 to 2 TeV and reduce the error by a factor of two compared to current data,” says AMS spokesperson Sam Ting of MIT. “This will allow us to measure the anisotropy accurately to permit a separation between dark matter and pulsars at 99.93% confidence.”
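To see where a factor of two in statistical precision can come from, here is a minimal counting-statistics sketch (illustrative only; it ignores systematics and the dataset already collected):

```python
# Minimal counting-statistics sketch: the relative error on a flux
# measurement scales as 1/sqrt(N), with N proportional to
# acceptance x exposure. Numbers are illustrative, not AMS internals.
import math

def rel_err(acceptance: float, exposure: float) -> float:
    return 1.0 / math.sqrt(acceptance * exposure)

# Same observing period, acceptance quadrupled (a 300% increase):
improvement = rel_err(1.0, 1.0) / rel_err(4.0, 1.0)
print(f"statistical error reduced by a factor {improvement:.0f}")  # 2
```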

Led by MIT, and assembled and tested at CERN/ESA with NASA support, AMS is a unique particle-physics experiment in space. It consists of a transition radiation detector to identify electrons and positrons, a permanent magnet together with nine silicon-tracker layers to measure momentum and identify different particle species, two banks of time-of-flight counters, veto counters, a ring-imaging Cherenkov counter and an electromagnetic calorimeter.

AMS extravehicular activities training

The additional tracker layer, 2.6 m in diameter, 30 cm thick and weighing 250 kg, will be installed on the top-most part of the detector. The tracking sensors will populate the opposite faces of an ultralight carbon plane specifically developed for AMS to fulfil thermoelastic stability requirements, surrounded by an octagonal carbon frame that also provides the main structural interface during launch. The powering and readout electronics for the new layer will generate additional heat that is rejected to space by radiators at its periphery. Two new radiators will therefore be integrated into the detector prior to the installation of the layer, while a third, much larger power-distribution radiator (PDS) will also be installed to recover the performance of one of the AMS main radiators, which has suffered degradation and radiation damage after 13 years in low-Earth orbit. In January, a prototype of the PDS, manufactured and supported by aerospace company AIDC in Taiwan, was delivered to CERN for tests.

First steps for the upgrade took place in 2021, and the US Department of Energy together with NASA approved the mission in March 2023. The testing of components and construction of prototypes at institutes around the world is proceeding quickly in view of a planned launch in February 2026. The silicon strips, 8 m² of which will cover both faces of the layer, were produced by Hamamatsu and are being assembled into “ladders” of different lengths at IHEP in Beijing. These are then shipped to INFN Perugia in Italy, where they are joined together to form a quarter plane. Once fully characterised, the eight quarters will be installed at CERN on both faces of the mechanical plane and integrated with electronics, thermal hardware and the necessary brackets. Crucial for the new tracker layer to survive the harsh launch environment, and to keep the sensors, once in orbit, within five microns of their positions as measured on the ground, are the large carbon plane and shielding cupolas developed at CERN, as well as the NASA brackets that will attach the layer module to AMS. This hardware represents a major R&D programme in its own right.

By 2030 AMS will extend the energy range of the positron flux measurement from 1.4 to 2 TeV and reduce the error by a factor of two

Following the first qualification model in late 2023, consisting of a quarter of the entire assembled layer, AMS engineers are now working towards a full-size model that will take the system closer to flight. The main tests to simulate the environment that the layer will experience during launch and once in orbit are vibration and thermal-vacuum tests, to be performed in Italy (INFN Perugia) and Germany (IABG), while the sensors’ positions in the layer will be fully mapped at CERN and then tested with beams from the SPS, explains AMS chief engineer Corrado Gargiulo of CERN: “Everything is going very, very fast. This is a requirement, otherwise we arrive too late at the ISS for the upgrade to make sense.”

The new module is being designed to fit snugly into the nose of a SpaceX Dragon spacecraft. Once safely delivered to the ISS, a robotic arm will dispatch the module to AMS, where astronauts will, through a series of extravehicular activities (EVAs), perform the final mounting. Training for the delicate EVAs is well underway at NASA’s Johnson Space Center. Nearby, at the Neutral Buoyancy Laboratory, the astronauts are trained in a large swimming pool on how to attach the different components under the watchful eyes of safety and NASA divers, among them Gargiulo (see “Space choreography” images). As with the EVAs required to replace the cooling system, a number of custom-built tools and detailed procedures have to be developed and tested.

“If the previous ones were considered high-risk surgery, the EVAs for the new upgrade are unprecedented for the several different locations where astronauts will be required to work in much tighter and less accessible spaces,” explains Ken Bollweg, NASA manager of AMS, who is leading the operations aspect.

Webb sheds light on oldest black holes https://cerncourier.com/a/webb-sheds-light-on-oldest-black-holes/ Thu, 11 Jan 2024 16:26:37 +0000 https://preview-courier.web.cern.ch/?p=109892 The oldest black hole found by the JWST and Chandra telescopes hints at the seeds of supermassive black-hole formation.

JWST image of distant galaxies

While it is believed that each galaxy, including our own, contains a supermassive black hole (SMBH) at its centre, much remains unknown about the origin of these extreme objects. The seeds for SMBHs are thought to have existed as early as 200 million years after the Big Bang, after which they accreted mass for 13 billion years to turn into black holes with masses of up to tens of billions of solar masses. But what were the seeds of these massive black holes? Some theories state that they were formed after the collapse of the first generation of stars, thereby making them tens to hundreds of solar masses, while other theories attribute their origin to the collapse of massive gas clouds that could produce seeds with masses of 10⁴–10⁵ solar masses.

The recent joint detection of an SMBH dating from 500 million years after the Big Bang by the James Webb Space Telescope (JWST) and the Chandra X-ray Observatory provides new insights into this debate. The JWST, sensitive to highly redshifted emission from the early universe, observed a gravitationally lensed area to provide images of some of the oldest galaxies. One such galaxy, called UHZ1, has a redshift corresponding to 13.2 billion years ago, or 500 million years after the Big Bang. Apart from its age, the observations allow an estimate of its stellar mass, while the SMBH expected to be at its centre remains hidden at these wavelengths. This is where Chandra, which is sensitive in the 0.2 to 10 keV energy range, came in.

Observations by Chandra of the area of the cluster lens, Abell 2744, which magnifies UHZ1, show an excess at energies of 2–7 keV. The measured emission spectrum and luminosity correspond to those of an accreting black hole with a mass of 10⁷ to 10⁸ solar masses, which is about half of the total mass of the galaxy. This can be compared to our own galaxy, where the SMBH is estimated to make up only 0.1% of the total mass.

Such a mass can be explained by a seed black hole of 10⁴ to 10⁵ solar masses accreting matter for 300 million years. A small seed is more difficult to explain, however, because such a source would have to continuously accrete matter at twice its Eddington limit (the luminosity at which the radiation pressure generated by accretion balances the object’s gravitational pull on the surrounding matter). Although super-Eddington accretion is possible, since the limit assumes, for example, spherical emission of the radiation, which is not necessarily correct, the sustained accretion rates required for light seeds are difficult to explain. 
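As a rough illustration of the heavy-seed arithmetic, here is a sketch assuming Eddington-limited growth with a standard radiative efficiency of 0.1, for which the e-folding (Salpeter) time is about 50 million years (assumed values, not the paper’s model):

```python
# Sketch: Eddington-limited growth of a black-hole seed (assumed
# radiative efficiency eps = 0.1; illustrative, not the paper's fit).
import math

def grown_mass(m_seed: float, t_myr: float, eps: float = 0.1) -> float:
    """Mass after t_myr of Eddington-limited accretion.
    e-folding (Salpeter) time: ~450 Myr * eps / (1 - eps)."""
    t_salpeter = 450.0 * eps / (1.0 - eps)   # ~50 Myr for eps = 0.1
    return m_seed * math.exp(t_myr / t_salpeter)

for m0 in (1e4, 1e5):                        # heavy-seed range, in solar masses
    print(f"{m0:.0e} M_sun -> {grown_mass(m0, 300.0):.1e} M_sun")
# ~4e6 and ~4e7 M_sun: a heavy seed comfortably reaches the inferred
# 1e7-1e8 M_sun in 300 Myr, whereas a light (~100 M_sun) seed would
# need sustained super-Eddington accretion.
```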

The measurements of a single early galaxy already provide strong hints regarding the origin of SMBHs. As JWST continues to observe the early universe, more such sources will likely be revealed. This will allow us to better understand the masses of the seeds, as well as how they grew over a period of 13 billion years.

Supernovae probe neutrino self-interactions https://cerncourier.com/a/supernovae-probe-neutrino-self-interactions/ Fri, 03 Nov 2023 12:32:26 +0000 https://preview-courier.web.cern.ch/?p=109618 The Standard Model predicts feeble self-interactions among neutrinos, but probing them remains beyond the reach of present-day laboratories on Earth.

Towards the end of the lifetime of a very massive star (more than eight solar masses), the nuclear fusion processes in its core are no longer sufficient to balance the constantly increasing pull of gravitational forces. This eventually causes the core to collapse, with the release of an enormous amount of matter and energy via shockwaves. Nearly 99% of such a core-collapse supernova’s energy is released in the form of neutrinos, usually leaving behind a compact proto-neutron star with a mass of about 1.5 solar masses and a radius of about 10 km. For more massive remnant cores (above three solar masses), a black hole is formed instead. The near-zero mass and electrical neutrality of neutrinos make their detection particularly challenging: when the famous 1987 supernova SN1987a occurred 168,000 light-years from Earth, the IMB observatory in the US detected just eight neutrinos, BNO in Russia detected five and Kamiokande II in Japan detected 11 (CERN Courier March/April 2021 p12).

Besides telling us about the astrophysical processes inside a core-collapse supernova, such neutrino detections might also tell us more about the particles themselves. The Standard Model (SM) predicts feeble self-interactions among neutrinos (νSI), but probing them remains beyond the reach of present-day laboratories on Earth. As outlined in a white paper published earlier this year by Jeffrey Berryman and co-workers, νSI (mediated, for example, by a new scalar or vector boson) enter many beyond-the-SM theories that attempt to explain the generation of neutrino masses and the origin of dark matter. Core-collapse supernovae are one of the probes that can be used to explore such interactions, since the extreme conditions in these catastrophic events make it more likely for νSI to occur and therefore affect the behaviour of the emitted neutrinos.

Recently, Po-Wen Chang and colleagues at Ohio State University explored this possibility by considering the formation of a tightly coupled neutrino fluid that expands under relativistic hydrodynamics, thereby having an effect on neutrino pulses detected on Earth. The team derives solutions to the relativistic hydrodynamic equations for two cases: a “burst outflow” and a “wind outflow”. A burst outflow of a uniform neutrino fluid occurs when it undergoes free expansion in vacuum, while a wind outflow corresponds to steady-state solutions of the hydrodynamic equations. In their current work, the authors focus on the former.

In a scenario without νSI, the neutrinos escape and form a shell of thickness about 10⁵ times the radius of the proto-neutron star that freely travels away at the speed of light. On the other hand, in a scenario with νSI, the neutrinos don’t move freely immediately after escaping the proto-neutron star and instead undergo increased neutrino elastic scattering. As a result, the neutrino shell continues expanding radially until it reaches the point where the density becomes low enough for the neutrinos to decouple and begin free-flowing. The thickness of the shell at this instant depends on the strength of the νSI interactions and is expected to be much larger than that in the no-νSI case. This, in turn, would translate to longer neutrino signals in detectors on Earth.
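The quoted shell thickness follows from simple light-travel arithmetic; a quick check, assuming a roughly 10 s neutrino burst (as observed for SN1987a) and a 10 km proto-neutron star:

```python
# Light-travel check of the shell thickness in the no-nuSI case
# (assumed inputs: a ~10 s neutrino burst and a 10 km proto-neutron star).
C_KM_S = 3.0e5                    # speed of light in km/s
burst_s = 10.0                    # approximate SN1987a signal duration
r_pns_km = 10.0                   # proto-neutron star radius

thickness_km = C_KM_S * burst_s   # ~3e6 km
print(f"thickness / radius ~ {thickness_km / r_pns_km:.0e}")  # ~3e+05
```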

The effects of neutrino self-interactions on SN1987a are starting to become clearer

Data from SN1987a, where the neutrino signal lasted for about 10 s, broadly agree with the no-νSI scenario and were used to set limits on very strong self-interactions. On the other hand, if νSI were present in the form of a burst outflow, the proposed model gives very robust predictions, with an estimated sensitivity of 3 s on the signal duration. Additionally, the authors argue that the steady-state wind-outflow case might be more likely to occur, a dedicated treatment of which has been left for future work.

For the first time since its observation 36 years ago, the effects of νSI on SN1987a are starting to become clearer. Further advances in this direction are much anticipated, so that when the next supernova occurs it could help clear the fog that surrounds our current understanding of neutrinos.

ESO’s Extremely Large Telescope halfway to completion https://cerncourier.com/a/esos-extremely-large-telescope-halfway-to-completion/ Thu, 24 Aug 2023 09:12:12 +0000 https://preview-courier.web.cern.ch/?p=109078 The Extremely Large Telescope has passed its construction mid-point atop Cerro Armazones in the Atacama Desert.

The construction of the world’s largest optical telescope, the Extremely Large Telescope (ELT), has reached its mid-point, announced the European Southern Observatory (ESO) on 11 July. Originally expected to see first light in the early 2020s, the telescope will now start operations in 2028, owing to delays inherent in building such a large and complex instrument as well as to the COVID-19 pandemic.

The base and frame of the ELT’s dome structure on Cerro Armazones in the Chilean Atacama Desert have now been set. Meanwhile, at European sites, the mirrors for the ELT’s five-mirror optical system are being manufactured. More than 70% of the supports and blanks for the main mirror – which at 39 m across will be the biggest primary mirror ever built – are complete, and mirrors two and three are cast and now in the process of being polished.

Along with six laser guiding sources that will act as reference stars, mirrors four and five form part of a sophisticated adaptive-optics system to correct for atmospheric disturbances. The ELT will observe the universe in the near-infrared and visible regions to track down Earth-like exoplanets, investigate faint objects in the solar system and study the first stars and galaxies. It will also explore black holes, the dark universe and test fundamental constants (CERN Courier November/December 2019 p25).
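For a sense of the angular scales at stake, here is a back-of-the-envelope diffraction-limit estimate for a 39 m aperture (illustrative numbers only; the delivered resolution depends on the adaptive-optics correction):

```python
# Back-of-the-envelope diffraction limit, theta ~ 1.22 * lambda / D,
# for a 39 m aperture (illustrative; real performance depends on the
# adaptive-optics correction).
import math

D_M = 39.0                                   # primary mirror diameter, m
RAD_TO_MAS = math.degrees(1.0) * 3600e3      # radians -> milliarcseconds

for lam_um in (0.5, 1.0, 2.2):               # visible to near-infrared
    theta_mas = 1.22 * (lam_um * 1e-6) / D_M * RAD_TO_MAS
    print(f"{lam_um} um -> {theta_mas:.1f} mas")   # ~3, ~6, ~14 mas
```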

Time dilation finally observed in quasars https://cerncourier.com/a/time-dilation-finally-observed-in-quasars/ Thu, 24 Aug 2023 08:56:14 +0000 https://preview-courier.web.cern.ch/?p=109084 Data from the Sloan Digital Sky Survey and PanSTARRS-1 resolve one of the main problems with the standard cosmological model.

A quasar in the very early universe

Within astronomy and cosmology, the idea that the universe is continuously expanding is a cornerstone of the standard cosmological model. For example, when measuring the distance of astronomical objects one often uses their redshift, which is induced by their velocity with respect to us due to the expansion. The expansion itself has, however, never been directly measured, i.e. no measurement exists that shows the increasing redshift with time of a single object. Although not far beyond the current capabilities of astrophysics, such a measurement is unlikely to be performed soon. Rather, evidence for it is based on correlations within populations of astrophysical objects. However, not all studies agree with this standard assumption.

One population study that supports the standard model concerns type Ia supernovae, specifically the observed correlation between their duration and distance. Such a correlation is predicted to be the result of time dilation induced by the higher recession velocity of more distant objects. Supporting this picture, gamma-ray bursts occurring at larger distances appear, on average, to last longer than those that occur nearby. However, similar studies of quasars thus far did not show any dependence of their variability timescales on distance, thereby contradicting special relativity and leading to an array of alternative hypotheses.

Detailed studies

Quasars are active galaxies containing a supermassive black hole surrounded by a relativistic accretion disk. Due to their brightness they can be observed at redshifts up to about z = 8 and, based on special relativity, should show variability timescales stretched by a factor (1 + z) – a factor of nine at z = 8 – compared with nearby objects. As previous studies did not observe such time dilation, alternative theories were proposed, including some that cast doubt on the extragalactic nature of quasars. A new, detailed study now removes the need for such theories.

These results do not provide hints of new physics but rather resolve one of the main problems with the standard cosmological model

In order to observe time dilation one requires a standard clock. Supernovae are ideal for this purpose because these explosions are all nearly identical, allowing their duration to be used to measure time dilation. For quasars the issue is more complicated, as the variability of their brightness appears almost random. However, the variability can be modelled using a so-called damped random walk (DRW), a random process combined with an exponential damping component. This model does not allow the brightness of a quasar to be predicted, but it contains a characteristic timescale in the damping term that should correlate with redshift due to time dilation.
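To make the idea concrete, here is a minimal sketch of a DRW (an Ornstein–Uhlenbeck process) whose characteristic timescale is stretched by (1 + z); the parameters are illustrative, not those fitted in the study:

```python
# Minimal damped-random-walk (Ornstein-Uhlenbeck) light-curve sketch:
# the observed characteristic timescale is stretched by (1 + z).
# Parameters are illustrative, not the study's fitted values.
import numpy as np

rng = np.random.default_rng(1)

def drw(tau, sigma=0.1, dt=1.0, n=8000):
    """Discretised OU process: dx = -(x / tau) dt + sigma dW."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = x[i - 1] * (1.0 - dt / tau) + sigma * np.sqrt(dt) * rng.normal()
    return x

tau_rest, z = 50.0, 3.0              # rest-frame timescale (days), redshift
lc = drw(tau_rest * (1.0 + z))       # observed timescale = (1 + z) * tau

# The DRW autocorrelation decays as exp(-lag / tau): estimate tau from
# the first lag where the autocorrelation falls below 1/e.
centred = lc - lc.mean()
acf = np.correlate(centred, centred, mode="full")[len(lc) - 1:]
acf /= acf[0]
tau_est = int(np.argmax(acf < np.exp(-1.0)))
print(f"recovered timescale ~ {tau_est} days (expect roughly 200)")
```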

This idea has now been tested by Geraint Lewis and Brenden Brewer of the universities of Sydney and Auckland, respectively. The pair studied 190 quasars with redshifts up to z = 4, observed over a 20-year period by the Sloan Digital Sky Survey and PanSTARRS-1, and applied a Bayesian analysis to look for a correlation between the DRW parameters and redshift. The data were found to best match a universe in which the DRW parameters scale according to (1 + z)^n with n = 1.28 ± 0.29, making them compatible with n = 1, the value expected from standard physics. This contradicts previous measurements, something the authors attribute to the smaller quasar samples used in earlier studies. The complex nature of quasars and the large variability in their population require long observations of a sizeable sample to make the time-dilation effect visible. 

These new results, made possible by the large amounts of data now available from large observatories, do not provide hints of new physics but rather resolve one of the main problems with the standard cosmological model.

Gravitational waves: a golden era https://cerncourier.com/a/gravitational-waves-a-golden-era/ Wed, 23 Aug 2023 09:20:34 +0000 https://preview-courier.web.cern.ch/?p=109127 Azadeh Maleknejad and Fabrizio Rompineve explain why precision measurements of the gravitational-wave spectrum are essential to explore particle physics beyond the reach of colliders.

An array of pulsars

The existence of dark matter in the universe is one of the most important puzzles in fundamental physics. It is inferred solely by means of its gravitational effects, such as on stellar motions in galaxies or on the expansion history of the universe. Meanwhile, non-gravitational interactions between dark matter and the known particles described by the Standard Model have not been detected, despite strenuous and advanced experimental efforts.

Such a situation suggests that new particles and fields, possibly similar to those of the Standard Model, may have been similarly present across the entire cosmological history of our universe, but with only very tiny interactions with visible matter. This intriguing idea is often referred to as the paradigm of dark sectors and is made even more compelling by the lack of new particles seen at the LHC and laboratory experiments so far.

Dark universe

Cosmological observations, above all those of the cosmic microwave background (CMB), currently represent the main tool to test such a paradigm. The primary example is that of dark radiation, i.e. putative new dark particles that, unlike dark matter, behave as relativistic species at the energy scales probed by the CMB. The most recent data collected by the Planck satellite constrain such dark particles to make up at most around 30% of the energy of a single neutrino species at the recombination epoch (when atoms formed and the universe became transparent, around 380,000 years after the Big Bang).

While such observations represent a significant advance, the early universe was characterised by temperatures in the MeV range and above (enabling nucleosynthesis), possibly as large as 10¹⁶ GeV. Some of these temperatures correspond to energy scales that cannot be probed via the CMB, nor directly with current or prospective particle colliders. Even if new particles had significant interactions with SM particles at such high temperatures, any electromagnetic radiation in the hot universe was continuously scattered off matter (electrons), making it impossible for any light from such early epochs to reach our detectors today. The question then arises: is there another channel to probe the existence of dark sectors in the early universe? 

We are entering a golden era of GW observations across the frequency spectrum

For more than a century, a different signature of gravitational interactions has been known to be possible: waves, analogous to those of the electromagnetic field, carrying fluctuations of gravitational fields. The experimental effort to detect gravitational waves (GWs) had a first amazing success in 2015, when waves generated by the merger of two black holes were first detected by the LIGO and Virgo interferometers in the US and Italy.

Now, the GW community is on the cusp of another incredible milestone: the detection of a GW background, generated by all sources of GWs across the history of our universe. Recently, based on more than a decade of observations, several networks of radio telescopes called pulsar timing arrays (PTAs) – NANOGrav in North America, EPTA in Europe, PPTA in Australia and CPTA in China – produced tentative evidence for such a stochastic GW background based on the influence of GWs on pulsars (see “Hints of low-frequency gravitational waves found” and “Clocking gravity” image). Together with next-generation interferometer-based GW detectors such as LISA and the Einstein Telescope, and new theoretical ideas from particle physics, the observations suggest that we are entering an exciting new era of observational cosmology that connects the smallest and largest scales. 

Particle physics and the GW background

Once produced, GWs interact only very weakly with any other component of the universe, even at the high temperatures present at the earliest times. Therefore, whereas photons can tell us about the state of the universe at recombination, the GW background is potentially a direct probe of high-energy processes in the very early universe. Unlike GWs that reach Earth from the locations of binary systems of compact objects, the GW background is expected to be mostly isotropic in the sky, very much like the CMB. Furthermore, rather than being a transient signal, it should persist in the sensitivity bands of GW detectors, similar to a noise component but with peculiarities that are expected to make a detection possible. 

Colliding spherical pressure waves

As early as 1918, Einstein quantified the power emitted in GWs by a generic source. Compared to electromagnetic radiation, which is sourced by the dipole moment of a charge distribution, the power emitted in GWs is proportional to the square of the third time derivative of the quadrupole moment of the mass–energy distribution of the source. Therefore, the two essential conditions for a source to emit GWs are that it should be sufficiently far from spherical symmetry and that its distribution should change sufficiently quickly with time.
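For reference, a standard form of the quadrupole formula (conventions differ between textbooks; here Q_ij is the traceless mass quadrupole moment and the angle brackets denote a time average):

```latex
P_{\mathrm{GW}} = \frac{G}{5c^{5}}
\left\langle \dddot{Q}_{ij}\, \dddot{Q}^{\,ij} \right\rangle ,
\qquad
Q_{ij} = \int \rho(\mathbf{x}) \left( x_{i} x_{j}
  - \tfrac{1}{3}\, \delta_{ij}\, |\mathbf{x}|^{2} \right) \mathrm{d}^{3}x
```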

What possible particle-physics sources would satisfy these conditions? One of the most thoroughly studied phenomena as a source of GWs is the occurrence of a phase transition, typically associated with the breaking of a fundamental symmetry. Specifically, only those phase transitions that proceed via the nucleation, expansion and collision of cosmic bubbles (analogous to the phase transition of liquid water to vapour) can generate a significant amount of GWs (see “Ringing out” image). Inside any such bubble the universe is already in the broken-symmetry phase, whereas beyond the bubble walls the symmetry is still unbroken. Eventually, the state of lowest energy inside the bubbles prevails via their rapid expansion and collisions, which fill up the universe. Even though such bubbles may initially be highly spherical, once they collide the energy distribution is far from being so, while their rapid expansion provides a time variation.  

The occurrence of two phase transitions is in fact predicted by the Standard Model (SM): one related to the spontaneous breaking of the electroweak SU(2) × U(1) symmetry, the other associated with colour confinement and thus the formation of hadronic states. However, dedicated analytical and numerical studies in the 1990s and 2000s concluded that the SM phase transitions are not expected to be of first order in the early universe. Rather, they are expected to proceed smoothly, without any violent release of energy to source GWs. 

Sensitivity of current and future GW observatories

This leads to a striking conclusion: a detection of the GW background would provide evidence for physics beyond the SM – that is, if its origin can be attributed to processes occurring in the early universe. This caveat is crucial, since astrophysical processes in the late universe also contribute to a stochastic GW background. 

In order to claim a particle-physics interpretation for any stochastic GW background, it is thus necessary to appropriately account for astrophysical sources and characterise the expected (spectral) shape of the GW signal from early-universe sources of interest. These tasks are being undertaken by a diverse community of cosmologists, particle physicists and astrophysicists at research institutions all around the world, including in the cosmology group in the CERN TH department.

Precise probing

For particle physicists and cosmologists, it is customary to express the strength of a given stochastic GW signal in terms of the fraction of the energy (density) of the universe today carried by those GWs. The CMB already constrains this “relic abundance” to be less than roughly 10% of ordinary radiation, or about one millionth of that of the dominant component of the universe today, dark energy. Remarkably, current GW detectors are already able to probe stochastic GWs that produce only one billionth of the energy density of the universe.
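To connect this relic-abundance language to detector strain, one can use the standard relation Ω_GW(f) = (2π²/3H₀²) f² h_c²(f); a quick illustrative inversion (the H₀ value and example numbers are assumptions):

```python
# Characteristic strain corresponding to a given GW energy fraction,
# using the standard relation Omega_GW(f) = (2 pi^2 / 3 H0^2) f^2 h_c^2
# (H0 and the example numbers are assumptions for illustration).
import math

H0 = 2.2e-18                      # Hubble rate today (~67 km/s/Mpc) in 1/s

def h_c(omega_gw: float, f_hz: float) -> float:
    return math.sqrt(3.0 * H0**2 * omega_gw / (2.0 * math.pi**2)) / f_hz

print(f"{h_c(1e-9, 1e-8):.1e}")   # PTA band: ~3e-15
print(f"{h_c(1e-9, 25.0):.1e}")   # LIGO band: ~1e-24
```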

Generally, the stochastic GW signal from a given source extends over a broad frequency range. The spectrum from many early-universe sources typically peaks at a frequency linked to the expansion rate at the time the source was active, redshifted to today. Under standard assumptions, the early universe was dominated by radiation and the peak frequency of the GW signal increases linearly with the temperature. For instance, the GW frequency range in which LIGO/Virgo/KAGRA are most sensitive (10–100 Hz) corresponds to sources that were active when the universe was as hot as 10⁸ GeV – several orders of magnitude above the energies explored at the LHC. The other currently operating GW observatories, PTAs, are sensitive to GWs of much smaller frequencies, around 10⁻⁹–10⁻⁷ Hz, which correspond to temperatures around 10 MeV to 1 GeV (see “Broadband” figure). These are the temperatures at which the QCD phase transition occurred. While, as mentioned above, a signal from the latter is not expected, dark sectors may be active at those temperatures and source a GW signal. In the near (and long-term) future, it is conceivable that new GW observatories will allow us to probe the stochastic GW background across the entire range of frequencies from nHz to 100 Hz. 
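A compact way to see this temperature-to-frequency map (a sketch using the standard radiation-domination estimate, with the emission-time frequency taken to be of order the Hubble rate; the g* values are assumed):

```python
# Redshifted peak frequency for a GW source active when the universe had
# temperature T, assuming the emitted frequency is of order the Hubble
# rate H(T) during radiation domination (standard estimate; g* assumed).
def f_peak_today_hz(T_GeV: float, g_star: float = 100.0) -> float:
    # f0 ~ 1.65e-7 Hz * (T / GeV) * (g* / 100)^(1/6)
    return 1.65e-7 * T_GeV * (g_star / 100.0) ** (1.0 / 6.0)

print(f"{f_peak_today_hz(1e8):.1f} Hz")   # T ~ 1e8 GeV -> ~16 Hz (LIGO band)
print(f"{f_peak_today_hz(0.1):.2e} Hz")   # T ~ 100 MeV -> ~1.6e-8 Hz (PTA band)
```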

Laser-interferometer GW detectors on Earth and in space

Together with bubble collisions, another source of peaked GW spectra due to symmetry breaking in the early universe is the annihilation of topological defects, such as domain walls separating different regions of the universe (in this case the corresponding symmetry is a discrete symmetry). Violent (so-called resonant) decays of new particles, as predicted by some early-universe scenarios, may also strongly contribute to the GW background (albeit possibly only at very large frequencies, beyond the sensitivity reach of current and forecasted detectors). Yet another discoverable phenomenon is the collapse of large energy (density) fluctuations in the early universe, such as is predicted to occur in scenarios where the dark matter is made of primordial black holes.

On the other hand, particle-physics sources can also be characterised by very broad GW spectra without large peaks. The most important such source is the inflationary mechanism: during this putative phase of exponential expansion of the universe, GWs would be produced from quantum fluctuations of space–time, stretched by inflation and continuously re-entering the Hubble horizon (i.e. the causally connected part of the universe at any given time) throughout the cosmological evolution. The amount of such primordial GWs is expected to be small. Nonetheless, a broad class of inflationary models predicts GWs with frequencies and amplitudes such that they can be discovered by future measurements of the CMB. In fact, it is precisely via these measurements that Planck and BICEP/Keck Array have been able to strongly constrain the simplest models of inflation. The GWs that can be discovered via the CMB would have very small frequencies (around 10⁻¹⁷ Hz, corresponding to ~eV temperatures). The full spectrum would nonetheless extend to large frequencies, only with such a small amplitude that detection by GW observatories would be unfeasible (except perhaps for the futuristic Big Bang Observer – a proposed successor to the Laser Interferometer Space Antenna, LISA, currently being prepared by the European Space Agency). 

Feeling blue

Certain classes of inflationary models could also lead to “blue-tilted” (i.e. rising with frequency) spectra, which may then be observable at GW observatories. For instance, this can occur in models where the inflaton is a so-called axion field (a generalisation of the predicted Peccei–Quinn axion in QCD). Such scenarios naturally produce gauge fields during inflation, which can themselves act as sources of GWs, with possible peculiar properties such as circular polarisation and non-gaussianities. A final phenomenon that would generate a very broad GW spectrum, unrelated to inflation, is the existence of cosmic strings. These one-dimensional defects can originate, for instance, from the breaking of a global (or gauge) rotation symmetry and persist through cosmological history, analogous to cracks that appear in an ice crystal after a phase transition from water.

Astrophysical contributions to the stochastic GW background are certainly expected from binary black-hole systems. At the frequencies relevant for LIGO/Virgo/KAGRA, such background would be due to black holes with masses of tens of solar masses, whereas in the PTA sensitivity range the background is sourced by binaries of supermassive black holes (with masses up to millions of solar masses), such as those that are believed to exist at the centres of galaxies. The current PTA indications of a stochastic GW background require detailed analyses to understand whether the signal is due to a particle physics or an astrophysics source. A smoking gun for the latter origin would be the observation of significant anisotropies in the signal, as it would come from regions where more binary black holes are clustered. 

Polarised microwave emission from the CMB

We are entering a golden era of GW observations across the frequency spectrum, and thus in exploring particle physics beyond the reach of colliders and astrophysical phenomena at unprecedented energies. The first direct detection of GWs by LIGO in September 2015 was one of the greatest scientific achievements of the 21st century. The first generation of laser interferometric detectors (GEO600, LIGO, Virgo and TAMA) did not detect any signal and only constrained the gravitational-wave emission from several sources. The second generation (Advanced LIGO and Advanced Virgo) made the first direct detection and has observed almost 100 GW signals to date. The underground Kamioka Gravitational Wave Detector (KAGRA) in Japan joined the LIGO–VIRGO observations in 2020. As of 2021, the LIGO–Virgo–KAGRA collaboration is working to establish the International Gravitational Wave Network, to facilitate coordination among ground-based GW observatories across the globe. In the near future, LIGO India (IndIGO) will also join the network of terrestrial detectors. 

Despite being sensitive to changes in the arm length of the order of 10⁻¹⁸ m, the LIGO, Virgo and KAGRA detectors are not sensitive enough for precise astronomical studies of GW sources. This has motivated the new generation of detectors. The Einstein Telescope (ET) is a proposed design concept for a European third-generation GW detector underground, which will be 10 times more sensitive than the current advanced instruments (see “Joined-up thinking in vacuum science”). On Earth, however, gravitational waves with frequencies lower than 1 Hz are inaccessible due to terrestrial gravity gradient noise and limitations to the size of the device. Space-based detectors, on the other hand, can access frequencies as low as 10⁻⁴ Hz. Several space-based GW observatories are proposed that will ultimately form a network of laser interferometers in space. They include LISA (planned to launch around 2035), the Deci-hertz Interferometer Gravitational Wave Observatory (DECIGO) led by the Japan Aerospace Exploration Agency and two Chinese detectors, TianQin and Taiji (see “In synch” figure).

Precision detection of the gravitational-wave spectrum is essential to explore particle physics beyond the reach of particle colliders

A new kid on the block, atom interferometry, offers a complementary approach to laser interferometry for the detection of GWs. Two atom interferometers coherently manipulated by the same light field can be used as a differential phase meter tracking the distance traversed by the light field. Several terrestrial cold-atom experiments are under preparation, such as MIGA, ZAIGA and MAGIS, or being proposed, such as ELGAR and AION. These experiments will provide measurements in the mid-frequency range between 10⁻²–1 Hz. Moreover, a space-based cold-atom GW detector called the Atomic Experiment for Dark Matter and Gravity Exploration (AEDGE) is expected to probe GWs in a much broader frequency range (10⁻⁷–10 Hz) compared to LISA.

Astrometry provides yet another powerful way to explore GWs that is not accessible to other probes, i.e. ultra-low frequencies of 10 nHz or less. Here, the passage of a GW over the Earth-star system induces a deflection in the apparent position of a star, which makes it possible to turn astrometric data into a nHz GW observatory. Finally, CMB missions have a key role to play in searching for possible imprints on the polarisation of CMB photons caused by a stochastic background of primordial GWs (see “Acoustic imprints” image). The wavelength of such primordial GWs can be as large as the size of our horizon today, associated with frequencies as low as 10⁻¹⁷ Hz. Whereas current CMB missions allow upper bounds on GWs, future missions such as the ground-based CMB-S4 (CERN Courier March/April 2022 p34) and space-based LiteBIRD observatories will improve this measurement to either detect primordial GWs or place yet stronger upper bounds on their existence.

Outlook 

Precision detection of the gravitational-wave spectrum is essential to explore particle physics beyond the reach of particle colliders, as well as for understanding astrophysical phenomena in extreme regimes. Several projects are planned and proposed to detect GWs across more than 20 decades of frequency. Such a wealth of data will provide a great opportunity to explore the universe in new ways during the next decades and open a wide window on possible physics beyond the SM.

Hints of low-frequency gravitational waves found https://cerncourier.com/a/hints-of-low-frequency-gravitational-waves-found/ Wed, 23 Aug 2023 08:34:22 +0000 https://preview-courier.web.cern.ch/?p=109066 Pulsar timing arrays have spotted the first evidence for a stochastic gravitational-wave background.

Since their direct discovery in 2015 by the LIGO and Virgo detectors, gravitational waves (GWs) have opened a new view on extreme cosmic events such as the merging of black holes. These events typically generate gravitational waves with frequencies of a few tens to a few thousand hertz, within reach of ground-based detectors. But the universe is also expected to be pervaded by low-frequency GWs in the nHz range, produced by the superposition of astrophysical sources and possibly by high-energy processes at the very earliest times (see “Gravitational waves: a golden era”). 

Announced in late June, news that pulsar timing arrays (PTAs), which infer the presence of GWs via detailed measurements of the radio emission from pulsars, had seen the first evidence for such a stochastic GW background was therefore met with delight by particle physicists and cosmologists alike. “For me it feels that the first gravitational wave observed by LIGO is like seeing a star for the first time, and now it’s like seeing the cosmic microwave background for the first time,” says CERN theorist Valerie Domcke.

Clocking signals

Whereas the laser interferometers LIGO and Virgo detect relative length changes in two perpendicular arms, PTAs clock the highly periodic signals from millisecond pulsars (rapidly rotating neutron stars whose radio beams sweep past Earth). A passing GW perturbs spacetime and induces a small delay in the observed arrival time of the pulses. By observing a large sample of pulsars over a long period and correlating the signals, PTAs effectively turn the galaxy into a low-frequency GW observatory. The challenge is to pick out the characteristic signature of this stochastic background, which is expected to induce “red noise” (meaning there should be greater power at lower fluctuation frequencies) in the differences between the measured arrival times of the pulsars and the timing-model predictions. 

The smoking gun of a nHz GW detection is a measurement of the so-called Hellings–Downs (HD) curve predicted by general relativity. This curve gives the expected arrival-time correlations as a function of angular separation for pairs of pulsars, which vary because the quadrupolar nature of GWs introduces directionally dependent changes. 
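In one commonly used normalisation, the HD curve has a simple closed form; a minimal sketch (normalisation conventions differ between papers):

```python
# The Hellings-Downs correlation for a pulsar pair separated by angle
# theta, in a common normalisation where it starts at +0.5.
import math

def hellings_downs(theta_rad: float) -> float:
    x = (1.0 - math.cos(theta_rad)) / 2.0
    if x <= 0.0:          # coincident directions: x -> 0, x ln x -> 0
        return 0.5
    return 0.5 + 1.5 * x * math.log(x) - x / 4.0

for deg in (0, 49, 82, 180):
    print(f"{deg:3d} deg -> {hellings_downs(math.radians(deg)):+.3f}")
# +0.500 at 0 deg, ~0 near 49 deg, a negative dip near 82 deg and
# +0.250 at 180 deg: the quadrupolar shape PTAs search for.
```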

Following its first hints of these elusive correlations in 2020, the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) has released the results of its 15-year dataset. Based on observations of 68 millisecond-pulsars distributed over half the galaxy (21 more than in the last release) by the Arecibo Observatory, the Green Bank Telescope and the Very Large Array, the team finds 4σ evidence for HD correlations in both frequentist and Bayesian analyses.

We are opening a new window in the GW universe, where we can observe unique sources and phenomena

A similar signal is seen by the independent European PTA, and the results are also supported by data from the Parkes PTA and others. “Once the partner collaborations of the International Pulsar Timing Array (which includes NANOGrav, the European, Parkes and Indian PTAs) combine these newest datasets, this may put us over the 5σ threshold,” says NANOGrav spokesperson Stephen Taylor. “We expect that it will take us about a year to 18 months to finalise.”

It will take longer to decipher the precise origin of the low-frequency PTA signals. If the background is anisotropic, astrophysical sources such as supermassive black-hole binaries would be the likely origin and one could therefore learn about their environment, population and how galaxies merge. Phase transitions or other cosmological sources tend to lead to an isotropic background. Since the shape of the GW spectrum encodes information about the source, with more data it should become possible to disentangle the signatures of the two potential sources. PTAs and current, as well as next-generation, GW detectors such as LISA and the Einstein Telescope complement each other as they cover different frequency ranges. For instance, LISA could detect the same supermassive black-hole binaries as PTAs but at different times during and after their merger. 

“We are opening a new window in the gravitational-wave universe in the nanohertz regime, where we can observe unique sources and phenomena,” says European PTA collaborator Caterina Tiburzi of the Cagliari Observatory in Sardinia.

Joined-up thinking in vacuum science https://cerncourier.com/a/joined-up-thinking-in-vacuum-science/ Mon, 17 Jul 2023 13:46:47 +0000 https://preview-courier.web.cern.ch/?p=108928 CERN is home to an international effort to develop the next generation of gravitational-wave telescopes. 

The first detection of gravitational waves in 2015 stands as a confirmation of Einstein’s prediction in his general theory of relativity and represents one of the most significant milestones in contemporary physics. Not only that, direct observation of gravitational ripples in the fabric of space-time opened up a new window on the universe that enables astronomers to study cataclysmic events such as black-hole collisions, supernovae and the merging of neutron stars. The hope is that the emerging cosmological data sets will, over time, yield unique insights to address fundamental problems in physics and astrophysics – the distribution of matter in the early universe, for example, and the search for dark matter and dark energy.

By contrast, an altogether more down-to-earth agenda – Beampipes for Gravitational Wave Telescopes 2023 – provided the backdrop for a three-day workshop held at CERN at the end of March. Focused on enabling technologies for current and future gravitational-wave observatories – specifically, their ultrahigh-vacuum (UHV) beampipe requirements – the workshop attracted a cross-disciplinary audience of 85 specialists drawn from the particle-accelerator and gravitational-wave communities alongside industry experts spanning steel production, pipe manufacturing and vacuum technologies (CERN Courier July/August 2023 p18). 

If location is everything, Geneva ticks all the boxes in this regard. With more than 125 km of beampipes and liquid-helium transfer lines, CERN is home to one of the world’s largest vacuum systems – and certainly the longest and most sophisticated in terms of particle accelerators. All of which ensured a series of workshop outcomes shaped by openness, encouragement and collaboration, with CERN’s technology and engineering departments proactively sharing their expertise in vacuum science, materials processing, advanced manufacturing and surface treatment with counterparts in the gravitational-wave community. 

Measurement science

To put all that knowledge-share into context, however, it’s necessary to revisit the basics of gravitational-wave metrology. The principal way to detect gravitational waves is to use a laser interferometer comprising two perpendicular arms, each several kilometres long and arranged in an L shape. At the intersection of the L, the laser beams in the two branches interact, whereupon the resulting interference signal is captured by photodetectors. When a gravitational wave passes through Earth, it induces differential length changes in the interferometer arms – such that the laser beams traversing the two arms experience dissimilar path lengths, resulting in a phase shift and corresponding alterations in their interference pattern. 

Better by design: the Einstein Telescope beampipes

Beampipe studies

The baseline for the Einstein Telescope’s beampipe design studies is the Virgo gravitational-wave experiment. The latter’s beampipe – which is made of austenitic stainless steel (AISI 304L) – consists of a 4 mm thick wall reinforced with stiffener rings and equipped with an expansion bellows (to absorb shock and vibration).

While steel remains the material of choice for the Einstein Telescope beampipe, other grades beyond AISI 304L are under consideration. Ferritic steels, for example, can contribute to a significant cost reduction per unit mass compared to austenitic stainless steel, which contains nickel. Ferrite also has a body-centred-cubic crystallographic structure that results in lower residual hydrogen levels versus face-centred-cubic austenite – a feature that eliminates the need for expensive solid-state degassing treatments when pumping down to UHV. 

Options currently on the table include the cheapest ferritic steels, known as “mild steels”, which are used in gas pipelines after undergoing corrosion treatment, as well as ferritic stainless steels containing more than 12% chromium by weight. While initial results with the latter show real promise, plastic deformation of welded joints remains an open topic, while the magnetic properties of these materials must also be considered to prevent anomalous transmission of electromagnetic signals and induced mechanical vibrations.

Along a related coordinate, CERN is developing an alternative solution with respect to the “baseline design” that involves corrugated walls with a thickness of 1.3 mm, eliminating the need for bellows and reinforcements. Double-wall pipe designs are also in the mix – either with an insulation vacuum or thermal insulators between the two walls. 

Beyond the beampipe material, studies are exploring the integration of optical baffles, which intermittently reduce the pipe aperture to block scattered photons. Various aspects such as positioning, material, surface treatment and installation are under review, while the transfer of vibrations from the tunnel structure to the baffle represents another line of enquiry. 

With this in mind, the design of the beampipe support system aims to minimise the transmission of vibrations to the baffles and reduce the frequency of the first vibration eigenmode within a range where the Einstein Telescope is expected to be less sensitive. Defining the vibration transfer function from the tunnel’s near-environment to the beampipe is another key objective, as are the vibration levels induced by airflow in the tunnel (around the beampipe) and stray electromagnetic fields from beampipe instrumentation.

Another thorny challenge is integration of the beampipes into the Einstein Telescope tunnel. Since the beampipes will be made up of approximately 15 m-long units, welding in the tunnel will be mandatory. CERN’s experience in welding cryogenic transfer lines and magnet junctions in the LHC tunnel will be useful in this regard, with automatic welding and cutting machines being one possible option to streamline deployment. 

Also under scrutiny is the logistics chain from raw material to final installation. Several options are being evaluated, including manufacturing and treating the beampipes on-site to reduce storage needs and align production with the pace of installation. While this solution would reduce the shipping costs of road and maritime transport, it would require specialised production personnel and dedicated infrastructure at the Einstein Telescope site.

Finally, the manufacturing and treatment processes of the beampipes will have a significant impact on cost and vacuum performance – most notably with respect to dust control, an essential consideration to prevent excessive light scattering due to falling particles and changes in baffle reflectivity. Dust issues are common in particle accelerators and the lessons learned at CERN and other facilities may well be transferable to the Einstein Telescope initiative. 

These are no ordinary interferometers, though. The instruments operate at the outer limits of measurement science and are capable of tracking changes in length down to a few tens of zeptometres (1 zm = 10⁻²¹ m) – a length scale tens of thousands of times smaller than the diameter of a proton. This achievement is the result of extraordinary progress in optical technologies over recent decades – advances in laser stability and mirror design, for example – as well as the ongoing quest to minimise sources of noise arising from seismic vibrations and quantum effects. 

With the latter in mind, the interferometer laser beams must also propagate through vacuum chambers to avoid potential scattering of the light by gas molecules. The residual gas present within these chambers introduces spatial and temporal fluctuations in the refractive index of the medium through which the laser beam propagates – primarily caused by statistical variations in gas density. 

As such, the coherence of the laser beam can be compromised as it traverses regions characterised by a non-uniform refractive index, resulting in phase distortions. To mitigate the detrimental effects of coherence degradation, it is therefore essential to maintain hydrogen levels at pressures lower than 10⁻⁹ mbar, while even stricter UHV requirements are in place for heavier molecules (depending on their polarisability and thermal speed).
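
A quick ideal-gas estimate shows how much residual gas is still present even at these pressures; the temperature below is an assumed room-temperature value and the sketch is illustrative rather than part of any design calculation.

```python
# Number density of residual gas at UHV, from the ideal-gas law n = P/(kT).
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 293.0            # assumed room temperature, K
P_mbar = 1e-9        # hydrogen pressure requirement quoted above, mbar
P = P_mbar * 100.0   # convert to Pa (1 mbar = 100 Pa)

n = P / (k_B * T)
print(f"{n:.1e} molecules per cubic metre")  # ~2.5e13 m^-3
# Tiny by everyday standards, yet over a kilometres-long path the beam
# still samples statistical fluctuations in this residual gas density.
```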

Now and next

Right now, there are four gravitational-wave telescopes in operation: LIGO (across two sites in the US), Virgo in Italy, KAGRA in Japan, and GEO600 in Germany (while India has recently approved the construction of a new gravitational-wave observatory in the western state of Maharashtra). Coordination is a defining feature of this collective endeavour, with the exchange of data among the respective experiments crucial for eliminating local interference and accurately pinpointing the detection of cosmic events.

Meanwhile, the research community is already planning for the next generation of gravitational-wave telescopes. The primary objective: to expand the portion of the universe that can be comprehensively mapped and, ultimately, to detect the primordial gravitational waves generated by the Big Bang. In terms of implementation, this will demand experiments with longer interferometer arms accompanied by significant reductions in noise levels (necessitating, for example, the implementation of cryogenic cooling techniques for the mirrors). 

The beampipe for the ALICE experiment

Two leading proposals are on the table: the Einstein Telescope in Europe and the Cosmic Explorer in the US. The latter proposes 40 km-long interferometer arms with a 1.2 m diameter beampipe, configured in the traditional L shape and across two different sites (as per LIGO). Conversely, the former proposes six interferometers with 60° opening angles in an underground tunnel laid out in an equilateral triangle configuration (10 km-long sides, 1 m beampipe diameter and a high- and low-frequency detector at each vertex). 

For comparison, the current LIGO and Virgo installations feature arm lengths of 4 km and 3 km, respectively. The total length of the vacuum vessel is projected to be 120 km for the Einstein Telescope and 160 km for the Cosmic Explorer. In short: both programmes will require the most extensive and ambitious UHV systems ever constructed. 

Extreme vacuum 

At a granular level, the vacuum requirements for the Einstein Telescope and Cosmic Explorer assume that the noise induced by residual gas is significantly lower than the allowable noise budget of the gravitational interferometers themselves. This comparison is typically made in terms of amplitude spectral density. A similar approach is employed in particle accelerators, where an adequately low residual gas density is imperative to minimise any impacts on beam lifetimes (which are predominantly constrained by other unavoidable factors such as beam-beam interactions and collimation). 

The specification for the Einstein Telescope states that the contribution of residual gas density to the overall noise budget must not exceed 10%, which necessitates that the hydrogen partial pressure be maintained in the low 10⁻¹⁰ mbar range. Achieving such pressures is commonplace in leading-edge particle-accelerator facilities and, as it turns out, not far beyond the limits of current gravitational-wave experiments. The problem, though, comes when mapping current vacuum technologies to next-generation experiments like the Einstein Telescope. 
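
The basic sizing relation behind such specifications is the balance between outgassing and pumping, P = Q/S. The sketch below uses assumed, generic numbers (an outgassing rate typical of well-baked stainless steel, a hypothetical pump spacing and speed) purely to illustrate why metre-diameter, kilometres-long tubes turn the low 10⁻¹⁰ mbar target into a cost question rather than a physics one.

```python
import math

# All values assumed for illustration; none are Einstein Telescope specs.
q = 1e-13          # H2 outgassing rate of baked stainless steel, mbar·l/(s·cm^2)
D = 1.0            # tube diameter, m
L_section = 100.0  # tube length served by one pump, m
S = 2000.0         # effective pumping speed of that pump, l/s

area_cm2 = math.pi * D * L_section * 1e4   # inner wall area in cm^2
Q = q * area_cm2                           # total gas load, mbar·l/s
P = Q / S                                  # equilibrium pressure, mbar

print(f"gas load {Q:.1e} mbar·l/s -> pressure {P:.1e} mbar")
# ~1.6e-10 mbar: on target only with aggressive bakeout and closely
# spaced pumps, which is exactly what drives cost over ~100 km of pipe.
```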

In such a scenario, the vacuum system would represent one of the biggest capital equipment costs – on a par, in fact, with the civil engineering works (the main cost-sink). As a result, one of the principal tasks facing the project teams is the co-development – in collaboration with industry – of scalable vacuum solutions that will enable the cost-effective construction of these advanced experiments without compromising on UHV performance and reliability. 

Follow the money

It’s worth noting that the upward trajectory of capital/operational costs versus length of the experimental beampipe is a challenge that’s common to both next-generation particle accelerators and gravitational-wave telescopes – and one that makes cost reduction mandatory when it comes to the core vacuum technologies that underpin these large-scale facilities. In the case of the proposed Future Circular Collider at CERN, for instance, a vacuum vessel exceeding 90 km in length would be necessary. 

Of course, while operational and maintenance costs must be prioritised in the initial design phase, the emphasis on cost reduction touches all aspects of project planning and, thereafter, requires meticulous optimisation across all stages of production – encompassing materials selection, manufacturing processes, material treatments, transport, logistics, equipment installation and commissioning. Systems integration is also paramount, especially at the interfaces between the vacuum vessel’s technical systems and adjacent infrastructure (for example, surface buildings, underground tunnels and caverns). Key to success in every case is a well-structured project that brings together experts with diverse competencies as part of an ongoing “collective conversation” with their counterparts in the physics community and industrial supply chain.

Welding services

Within this framework, CERN's specialist expertise in managing large-scale infrastructure projects such as the HL-LHC can help to secure the success of future gravitational-wave initiatives. Beyond CERN's capabilities in vacuum-system design and optimisation, other areas of shared interest between the respective communities include civil engineering, underground safety and data management, to name a few. 

Furthermore, such considerations align well with the latest update of the European strategy for particle physics – which explicitly prioritises the synergies between particle and astroparticle physics – and are reflected operationally through a collaboration agreement (signed in 2020) between CERN and the lead partners on the Einstein Telescope feasibility study: Nikhef in the Netherlands and INFN in Italy. 

In this way, CERN is engaged directly as a contributing partner on the beampipe studies for the Einstein Telescope (see “Better by design: the Einstein Telescope beampipes”). The three-year project, which kicked off in September 2022, will deliver the main technical design report for the telescope’s beampipes. CERN’s contribution is structured in eight work packages, from design and materials choice to logistics and installation, including surface treatments and vacuum systems. 

CERN teams are engaged directly on the beampipe studies for the Einstein Telescope

The beampipe pilot sector will also be installed at CERN, in a building previously used for testing cryogenic helium transfer lines for the LHC. Several measurements are planned for 2025, including tests relating to installation, alignment, in-situ welding, leak detection and achievable vacuum levels. Other lines of enquiry will assess the efficiency of the bakeout process, which involves the injection of electrical current directly into the beampipe walls (heating them in the 100–150 °C range) to minimise subsequent outgassing levels under vacuum.
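
As a rough feel for what injecting current into the walls entails, the following sketch estimates the ohmic-heating current for a generic stainless-steel tube; the resistivity, geometry and target power are all assumed values, not parameters of the pilot sector.

```python
import math

# Assumed, generic values -- not pilot-sector parameters.
rho = 7.2e-7     # electrical resistivity of stainless steel, ohm·m
D = 1.0          # tube diameter, m
t = 0.004        # wall thickness, m
P_per_m = 200.0  # assumed heating power needed per metre of tube, W/m

A = math.pi * D * t               # conducting cross-section of the wall, m^2
R_per_m = rho / A                 # resistance per metre of pipe, ohm/m
I = math.sqrt(P_per_m / R_per_m)  # from P = I^2 * R, per metre of tube

print(f"R = {R_per_m:.1e} ohm/m -> bakeout current ~ {I:.0f} A")
# Order of a couple of thousand amps: direct-current bakeout is a
# high-current, low-voltage exercise in power distribution.
```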

Since installation of the beampipe pilot sector is time-limited, and details around the manufacturing and treatment of the vacuum chambers are still to be clarified, the engagement of industry partners at this early design stage is essential – an approach, moreover, that seeks to replicate the collaborative working models pursued as standard within the particle-accelerator community. While there's a lot of ground to cover in the next two years, the optimism and can-do mindset of all participants at Beampipes for Gravitational Wave Telescopes 2023 bodes well.

A soft spot for heavy metal https://cerncourier.com/a/a-soft-spot-for-heavy-metal/ Wed, 05 Jul 2023 10:21:03 +0000 https://preview-courier.web.cern.ch/?p=108754 Together with colleagues from CERN’s Vacuum, Surfaces and Coatings and Mechanical and Materials Engineering groups, Audrey Vichard is working on R&D for the Einstein Telescope.

The post A soft spot for heavy metal appeared first on CERN Courier.

Welding is the technique of fusing two materials, often metals, by heating them to their melting points, creating a seamless union. Mastery of the materials involved, meticulous caution and a remarkably steady hand are integral elements of a proficient welder's skillset, as is the ability to adapt to different situations, such as mechanised or manual welding. Audrey Vichard's role as a welding engineer in CERN's Mechanical and Materials Engineering (MME) group encompasses comprehensive technical guidance on welding: she evaluates methodologies, improves welding processes, develops innovative solutions, and ensures compliance with global standards and procedures. This combination of tasks allows for the effective execution of complex projects for CERN's accelerators and experiments. "It's a kind of art," says Audrey. "Years of training are required to achieve high-quality welds." 

Audrey is one of the newest additions to the MME group, which provides specific engineering solutions combining mechanical design, fabrication and material sciences for accelerator components and physics detectors to the CERN community. She joined the forming and welding section as a fellow in January 2023, having previously studied metallurgy in the engineering school at Polytech Nantes in France. “While in school, I did an internship in Toulon, where they build submarines for the army. I was in a group with a welder, who passed on his passion for welding to me – especially when applied in demanding applications.”

Extreme conditions

What sets welding at CERN apart are the variety of materials used and the environments the finished parts have to withstand: radioactivity, anything from high pressure to ultra-high vacuum, and cryogenic temperatures. Stainless steel is the most frequently used material, says Audrey, but rarer ones like niobium also come into play. "You don't really find niobium for welding outside CERN – it is very specific, so it's interesting and challenging to study niobium welds. To keep the purity of this material in particular, we have to apply a special vacuum welding process using an electron beam." The same is true for titanium, a material of choice for its low density and excellent mechanical properties, which is currently under study for the next-generation HL-LHC beam dump. Whether it's steel, titanium, copper, niobium or aluminium, each material has a unique metallurgical behaviour that greatly influences the welding process. To meet the strict operating conditions over the lifetime of the components, the welding parameters are developed accordingly, and rigorous quality control and traceability are essential.

“Although it is the job of the physicists at CERN to come up with the innovative machines they need to push knowledge further, it is an interesting exchange to learn from each other, juggling between ideal objects and industrial realities,” explains Audrey. “It is a matter of adaptation. The physicists come here and explain what they need and then we see if it’s feasible with our machines. If not, we can adapt the design or material, and the physicists are usually quite open to the change.”

Touring the main CERN workshop – which was one of CERN’s first buildings and has been in service since 1957 – Audrey is one of the few women present. “We are a handful of women graduating as International Welding Engineers (IWE). I am proud to be part of the greater scientific community and to promote my job in this domain, historically dominated by men.”

The physicists come here and explain what they need and then we see if it’s feasible with our machines

In the main workshop at CERN, Audrey is, along with her colleagues, a member of the welding experts' team. "My daily task is to support welding activities for current fabrication projects CERN-wide. On a typical day, I can go from performing visual inspections of welds in the workshop to overseeing welding quality, participating in large R&D projects and, as a welding expert, advising the CERN community according to the most recent standards in areas such as the framework of the pressure equipment directive."

Together with colleagues from CERN’s vacuum, surfaces and coatings group (TE-VSC), and MME, Audrey is currently working on R&D for the Einstein Telescope – a proposed next-generation gravitational-wave observatory in Europe. It is part of a new collaboration between CERN, Nikhef and the INFN to design the telescope’s colossal vacuum system – the largest ever attempted (see CERN shares beampipe know-how for gravitational-wave observatories). To undertake this task, the collaboration is initially investigating different materials to find the best candidate combining ultra-high vacuum compatibility, weldability and cost efficiency. So far, one fully prototyped beampipe has been finished using stainless steel and another is in production with common steel; the third is yet to be done. The next main step will then be to go from the current 3 m-long prototype to a 50 m version, which will take about a year and a half. Audrey’s task is to work with the welders to optimise the welding parameters and ultimately provide a robust industrial solution to manufacture this giant vacuum chamber. “The design is unusual; it has not been used in any industrial application, at least not at this quality. I am very excited to work on the Einstein Telescope. Gravitational waves have always interested me, and it is great to be part of the next big experiment at such an early stage.”

DAMPE confirms cosmic-ray complexity https://cerncourier.com/a/dampe-confirms-cosmic-ray-complexity/ Wed, 05 Jul 2023 08:59:04 +0000 https://preview-courier.web.cern.ch/?p=108804 The space-based Chinese–European Dark Matter Particle Explorer (DAMPE) reports detailed insights into the various spectral "breaks" in cosmic-ray spectra.

The post DAMPE confirms cosmic-ray complexity appeared first on CERN Courier.

Energy spectra measured by DAMPE

The exact origin of the high-energy cosmic rays that bombard Earth remains one of the most important open questions in astrophysics. Since their discovery more than a century ago, a multitude of potential sources, both galactic and extra-galactic, have been proposed. Supernova remnants and pulsars are examples of proposed galactic sources, theorised to be responsible for cosmic rays with energies below the PeV range, while blazars and gamma-ray bursts are two of many candidates for the cosmic-ray flux at higher energies. 

The origin of astrophysical photons can be inferred from their arrival direction. For cosmic rays this is not as straightforward, because galactic and extra-galactic magnetic fields deflect them en route. To identify the origin of cosmic rays, researchers therefore rely almost entirely on information embedded in their energy spectra. Assuming only acceleration within the shock regions of extreme astrophysical objects, the galactic cosmic-ray spectrum should follow a simple, single power law with an index between –2.6 and –2.7. However, thanks to measurements by a range of dedicated instruments including AMS, ATIC, CALET, CREAM and HAWC, we know the spectrum to be more complex. Furthermore, different types of cosmic rays, such as protons and the nuclei of helium or oxygen, have all been shown to exhibit different spectral features, with breaks at different energies.

New measurements by the space-based Chinese–European Dark Matter Particle Explorer (DAMPE) provide detailed insights into the various spectral breaks in the combined proton and helium spectra. Clear hints of spectral breaks had already been shown by various balloon- and space-based experiments at low energies (below about 1 TeV), and by ground-based air-shower detectors at high energies (above a TeV). However, in the region where space-based measurements start to suffer from a lack of statistics, ground-based instruments suffer from low sensitivity, resulting in relatively large uncertainties. Furthermore, the completely different ways in which space- and ground-based instruments measure energy (directly in the former, via air-shower reconstruction in the latter) made it important to obtain measurements that clearly connect the two. DAMPE has now produced detailed spectra in the 46 GeV to 316 TeV energy range, thereby filling most of the gap. The results confirm both a spectral hardening around 100 GeV and a subsequent spectral softening around 10 TeV, which connects well with a second spectral bump previously observed by ARGO-YBJ+WFCT at an energy of several hundred TeV (see figure).
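
The spectral shapes at stake are usually modelled with smoothly broken power laws. The snippet below is a generic sketch of that functional form; the normalisation, indices and break energy are placeholders rather than DAMPE's fitted values.

```python
import numpy as np

def smooth_bpl(E, phi0, gamma1, gamma2, E_b, s=5.0):
    """dN/dE with spectral index gamma1 below the break E_b and gamma2
    above it; s controls how sharp the transition is."""
    return phi0 * (E / E_b)**gamma1 * (1.0 + (E / E_b)**s)**((gamma2 - gamma1) / s)

E = np.logspace(np.log10(46.0), np.log10(3.16e5), 300)  # GeV, the range quoted above
dNdE = smooth_bpl(E, phi0=1e-4, gamma1=-2.6, gamma2=-2.9, E_b=1e4)  # softening near 10 TeV

# Plotting E**2.7 * dNdE against E is the usual way to make such
# hardenings and softenings visible by eye.
```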

The complex spectral features of high-energy cosmic rays can be explained in various ways. One possibility is through the presence of different types of cosmic-ray sources in our galaxy; one population produces cosmic rays with energies up to PeV, while a second only produces cosmic rays with energies up to tens of TeV, for example. A second possibility is that the spectral features are a result of a nearby single source from which we observe the cosmic rays directly before they become diffused in the galactic magnetic field. Examples of such a nearby source could be the Geminga pulsar, or the young supernova remnant Vela.

In the near future, novel data and analysis methods will likely allow researchers to distinguish between these two theories. One important source of this data is the LHAASO experiment in China, which is currently taking detailed measurements of cosmic rays in the 100 TeV to EeV range. Furthermore, thanks to ever-increasing statistics, the anisotropy of the arrival direction of the cosmic rays will also become a method to compare different models, in particular to identify nearby sources. The important link between direct and indirect measurements presented in this work thereby paves the way to connecting the large amounts of upcoming data to the theories on the origins of cosmic rays. 

CERN shares beampipe know-how for gravitational-wave observatories https://cerncourier.com/a/cern-shares-beampipe-know-how-for-gravitational-wave-observatories/ Fri, 12 May 2023 14:18:25 +0000 https://preview-courier.web.cern.ch/?p=108551 Participants of a recent CERN workshop discussed vacuum technologies for next-generation gravitational-wave observatories such as the Einstein Telescope.

The post CERN shares beampipe know-how for gravitational-wave observatories appeared first on CERN Courier.

The direct detection of gravitational waves in 2015 opened a new window to the universe, allowing researchers to study the cosmos by merging data from multiple sources. There are currently four gravitational-wave telescopes (GWTs) in operation: LIGO at two sites in the US, Virgo in Italy, KAGRA in Japan, and GEO600 in Germany. Discussions are ongoing to establish an additional site in India. The detection of gravitational waves is based on Michelson laser interferometry with Fabry–Perot cavities, which reveals the expansion and contraction of space at the level of ten-thousandths of the size of an atomic nucleus, i.e. 10⁻¹⁹ m. Despite the extremely low strain that needs to be detected, an average of one gravitational wave is measured per week by studying and minimising all possible noise sources, including seismic vibration and residual-gas scattering. The latter is reduced by placing the interferometer in a pipe where ultrahigh vacuum is generated. In the case of Virgo, the pressure inside the two perpendicular 3 km-long arms of the interferometer is lower than 10⁻⁹ mbar.

While current facilities are being operated and upgraded, the gravitational-wave community is also focusing on a new generation of GWTs that will provide even better sensitivity. This would be achieved by longer interferometer arms, together with a drastic reduction of noise that might require cryogenic cooling of the mirrors. The two leading studies are the Einstein Telescope (ET) in Europe and the Cosmic Explorer (CE) in the US. The total length of the vacuum vessels envisaged for the ET and CE interferometers is 120 km and 160 km, respectively, with a tube diameter of 1 to 1.2 m. The required operational pressures are typical of those needed for modern accelerators (i.e. in the region of 10⁻¹⁰ mbar for hydrogen and even lower for other gas species). The next generation of GWTs would therefore represent the largest ultrahigh-vacuum systems ever built.

The next generation of gravitational-wave telescopes would represent the largest ultrahigh vacuum systems ever built.

Producing these pressures is not difficult, as the vacuum systems of present GWT interferometers already achieve a comparable degree of vacuum. Instead, the challenge is cost. Indeed, if previous-generation solutions were adopted, the vacuum pipe system would amount to half of the estimated cost of CE and not far from one-third of that of ET, which is dominated by underground civil engineering. Reducing the cost of vacuum systems requires the development of different technical approaches with respect to previous-generation facilities. Developing cheaper technologies is also a key subject for future accelerators, and a synergy in terms of manufacturing methods, surface treatments and installation procedures is already visible.

Within an official framework between CERN and the lead institutes of the ET study – Nikhef in the Netherlands and INFN in Italy – CERN's TE-VSC and EN-MME groups are sharing their expertise in vacuum, materials, manufacturing and surface treatments with the gravitational-wave community. The activity started in September 2022 and is expected to conclude at the end of 2025 with a technical design report and a full test of a vacuum-vessel pilot sector. During the workshop "Beampipes for Gravitational Wave Telescopes 2023", held at CERN from 27 to 29 March, 85 specialists from different communities encompassing accelerator and gravitational-wave technologies and from companies that focus on steel production, pipe manufacturing and vacuum equipment gathered to discuss the latest progress. The event followed a similar one hosted by LIGO Livingston in 2019, which gave important directions for research topics.

Plotting a course
In a series of introductory contributions, the basic theoretical elements regarding vacuum requirements and the status of CE and ET studies were presented, highlighting initiatives in vacuum and material technologies undertaken in Europe and the US. The detailed description of current GWT vacuum systems provided a starting point for the presentations of ongoing developments. To conduct an effective cost analysis and reduction, the entire process must be taken into account — including raw material production and treatment, manufacturing, surface treatment, logistics, installation, and commissioning in the tunnel. Additionally, the interfaces with the experimental areas and other services such as civil engineering, electrical distribution and ventilation are essential to assess the impact of technological choices for the vacuum pipes.

The selection criteria for the structural materials of the pipe were discussed, with steel currently being the material of choice. Ferritic steels would contribute to a significant cost reduction compared to the austenitic steel currently used in accelerators, because they do not contain nickel. Furthermore, thanks to their body-centred-cubic crystallographic structure, ferritic steels have a much lower content of residual hydrogen – the first enemy in the attainment of ultrahigh vacuum – and thus do not require expensive solid-state degassing treatments. The cheapest ferritic steels are "mild steels", which are common materials in gas pipelines after treatment against corrosion. Ferritic stainless steels, which contain more than 12% chromium by weight, are also being studied for GWT applications. While first results are encouraging, the magnetic properties of these materials must be considered to avoid anomalous transmission of electromagnetic signals and induced mechanical vibrations.

Four solutions regarding the design and manufacturing of the pipes and their support system were discussed at the March workshop. The baseline is a 3 to 4 mm-thick tube similar to the ones operational in Virgo and LIGO, with some modifications to cope with the new tunnel environment and stricter sensitivity requirements. Another option is a 1 to 1.5 mm-thick corrugated vessel that does not require reinforcement and expansion bellows. Additionally, designs based on double-wall pipes were discussed, with the inner wall being thin and easy to heat and the external wall performing the structural role. An insulation vacuum would be generated between the two walls, without the cleanliness and pressure requirements imposed on the laser-beam vacuum. The forces acting on the inner wall during pressure transients would be minimised by opening axial-movement valves, which are not yet fully designed. Finally, a gas-pipeline solution was also considered, based on a half-inch-thick wall made of mild steel. The main advantage of this solution is its relatively low cost, as it is a standard approach used in the oil and gas industry. However, corrosion protection and ultrahigh-vacuum needs would require surface treatment on both sides of the pipe walls; these treatments are currently under consideration.

For all types of design, the integration of optical baffles (which provide an intermittent reduction of the pipe aperture to block scattered photons) is a matter of intense study, with options for position, material, surface treatment and installation reported. The transfer of vibrations from the tunnel structure to the baffles is another hot topic.

Manufacturing the pipes directly from metal coils and treating their surfaces can be carried out either at supplier facilities or at the installation site. The former approach would reduce the cost of infrastructure and manpower, while the latter would reduce transport costs and add a degree of freedom to the global logistics, since storage areas would be minimised. The study of in-situ production was taken to its limit in a conceptual study of a process that could deliver pipes of any desired length directly in the underground areas: the metal coil arrives in the tunnel, where a dedicated machine unrolls it and welds the metal sheet into a continuous pipe.

These topics will undergo further development in the coming months, and the results will be incorporated into a comprehensive technical design report, which will include a detailed cost optimisation and be validated in a pilot sector at CERN. With just under two and a half years of the project remaining, success will demand substantial effort and resolute motivation. The enthusiasm and collaborative approach demonstrated by all participants at the workshop are therefore highly encouraging.

X-ray source could reveal new class of supernovae https://cerncourier.com/a/x-ray-source-could-reveal-new-class-of-supernovae/ Mon, 24 Apr 2023 14:12:31 +0000 https://preview-courier.web.cern.ch/?p=108241 Data from eROSITA and XMM Newton prove the existence of white dwarfs that accumulate helium from a companion star at a steady rate.

The post X-ray source could reveal new class of supernovae appeared first on CERN Courier.

Large Magellanic Cloud

Type Ia supernovae play an important role in the universe, both as the main source of iron and as one of the principal tools for astronomers to measure cosmic-distance scales. They are also important for astroparticle physics, for example allowing the properties of the neutrino to be probed in an extreme environment.

Type Ia supernovae make ideal cosmic rulers because they all look very similar, with roughly equal luminosity and emission characteristics. Therefore, when a cosmic explosion that matches the properties of a Type Ia supernova is detected, its luminosity can be directly used to measure the distance to its host galaxy. Despite this importance, the details surrounding the progenitors of these events are still not fully understood. Furthermore, a group of outliers, now known as Type Iax events, has recently been identified, indicating there might be more than one path towards a Type Ia explosion.

The reason typical Type Ia events all have roughly equal luminosity lies in their progenitors. The general explanation for these events involves a binary system with at least one white dwarf: a very dense old star, consisting mostly of oxygen and carbon, in which fusion has ceased and which is prevented from collapsing into a neutron star or black hole only by electron-degeneracy pressure. As the white dwarf accumulates matter from a nearby companion, its mass increases towards a precise critical limit – the Chandrasekhar mass – at which an uncontrolled thermonuclear explosion starts, unbinding the star and producing the supernova.
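
For context, the critical limit can be evaluated in the standard textbook approximation, with μ_e the mean molecular weight per electron (a quick illustration, not part of the study reported here):

```python
# Chandrasekhar mass in the standard approximation M_Ch ~ 5.83/mu_e^2 M_sun,
# where mu_e is the mean molecular weight per electron (2 for C/O matter).
M_sun = 1.989e30   # kg
mu_e = 2.0
M_ch = 5.83 / mu_e**2 * M_sun

print(f"M_Ch ~ {M_ch / M_sun:.2f} solar masses")   # ~1.46
```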

This peculiar binary system provides strong hints of a new type of progenitor that can explain up to 30% of all Type Ia supernova events

As several X-ray sources identified in the 1990s by the ROSAT mission turned out to be white dwarfs with hydrogen burning on their surfaces, the matter accumulated by the white dwarf was long thought to be hydrogen from a companion star. The flaw in this model, however, is that Type Ia supernovae show no signs of any hydrogen. On the other hand, helium has been seen, particularly in the outlier Type Iax supernova events. These Iax events, which are predicted to make up 30% of all Type Ia events, can be explained by a white dwarf accumulating helium from a companion star that has already shed all of its hydrogen. If the helium accumulates on the surface in a stable way, without intermediate explosions from violent helium ignition, it reaches a mass at which it ignites violently on the surface. This in turn triggers the ignition of the core and could explain the Type Iax events. Evidence of helium-accumulating white dwarfs had, however, not been found.

Now, a group led by researchers from the Max Planck Institute for Extraterrestrial Physics (MPE) has used both optical data and X-ray data from the eROSITA and XMM Newton missions to find the first clear evidence of such a progenitor system. The group found an object, known as [HP99] 159, located in the Large Magellanic Cloud, which shows all the characteristics of a white dwarf surrounded by an accretion disk of helium. Using historical X-ray data reaching back 50 years, the team also showed that the brightness of the source is relatively stable, indicating that it is accumulating helium at a steady rate – despite that rate being lower than theoretically predicted for stable burning. This suggests the system is working its way towards ignition in the future.

The discovery of this new X-ray source therefore proves the existence of white dwarfs that accumulate helium from a companion star at a steady rate, thereby allowing them to reach the conditions to produce a supernova. This peculiar binary system already provides strong hints of a new type of progenitor that could explain up to 30% of all Type Ia events. Follow-up measurements will provide further insight into the complex physics at play in the thermonuclear explosions that produce these events, while [HP99] 159's characteristics can be used to find similar sources.

Euclid to link the largest and smallest scales https://cerncourier.com/a/euclid-to-link-the-largest-and-smallest-scales/ Mon, 24 Apr 2023 13:00:18 +0000 https://preview-courier.web.cern.ch/?p=108222 The science goals of the European Space Agency's newest explorer, Euclid, align closely with some of the biggest mysteries in particle physics.

The post Euclid to link the largest and smallest scales appeared first on CERN Courier.

Euclid payload module

Untangling the evolution of the universe, in particular the nature of dark energy and dark matter, is a central challenge of modern physics. An ambitious new mission from the European Space Agency (ESA) called Euclid is preparing to investigate the expansion history of the universe and the growth of cosmic structures over the last 10 billion years, covering the entire period over which dark energy is thought to have played a significant role in the accelerating expansion. The 2 tonne, 4.5 m-tall and 3.1 m-diameter probe is undergoing final tests in Cannes, France, after which it will be shipped to Cape Canaveral in Florida and inserted into the fairing of a SpaceX Falcon 9 rocket, with launch scheduled for July. 

Let there be light

Euclid, which was selected by ESA for implementation in 2012 with a budget of about €600 million, has four main objectives. The first is to investigate whether dark energy is real, or whether the apparent acceleration of the universe is caused by a breakdown of general relativity on the largest scales. Second, if dark energy is real, Euclid will investigate whether it is a constant energy spread across space or a new force of nature that evolves with the expansion of the universe. A third objective is to investigate the nature of dark matter, the mass of neutrinos and whether there exist other, so-far undetected fast-moving particle species, and a fourth is to investigate the statistics and properties of the early universe that seeded large-scale structures. To meet these goals, the six-year Euclid mission will use a three-mirror system to direct light from up to a billion galaxies across more than a third of the sky towards a visible imager for photometry and a near-infrared spectrophotometer.

So far, the best constraints on the geometry and expansion history of the universe come from cosmic-microwave background (CMB) surveys. Yet these missions are not the best tracers of curvature, neutrino masses and the expansion history, nor the best probes for identifying possible exotic subcomponents of dark matter. For this, large galaxy-clustering surveys are required, and Euclid will use three methods. The first is redshift-space distortions, which combine how fast galaxies recede from us due to the expansion of the universe with how fast they move towards a region of strong gravitational pull along our line of sight; measuring these deformations in galactic positions enables the growth rate of structures, as well as gravity itself, to be investigated. The second is baryonic acoustic oscillations (BAOs), which arose when the universe was a plasma of baryons and photons and set a characteristic scale related to the sound horizon at recombination. After recombination, photons decoupled from visible matter while baryons were pulled in by gravity and started to form bigger structures, with the BAO scale imprinted in galaxy distributions. BAOs thus serve as a ruler to trace the expansion rate of the universe. The third method, weak gravitational lensing, occurs when light from a background source is bent around a massive foreground object such as a galaxy cluster, from which the distribution of dark matter can be inferred. 
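
As an illustration of the standard-ruler idea (our sketch, with assumed round-number cosmological parameters rather than Euclid results), one can integrate a flat ΛCDM expansion history to predict the angle the BAO scale subtends at a given redshift:

```python
import numpy as np

# Assumed, round-number parameters for a flat LambdaCDM universe.
H0 = 70.0            # Hubble constant, km/s/Mpc
Om, Ol = 0.3, 0.7    # matter and dark-energy density parameters
c = 299792.458       # speed of light, km/s
r_d = 147.0          # sound horizon at the drag epoch, Mpc (approximate)

def comoving_distance(z, steps=10000):
    """Comoving distance in Mpc, integrating c dz / H(z)."""
    zs = np.linspace(0.0, z, steps)
    Ez = np.sqrt(Om * (1.0 + zs)**3 + Ol)   # H(z)/H0 for flat LCDM
    return (c / H0) * np.trapz(1.0 / Ez, zs)

z = 1.0
theta_bao = r_d / comoving_distance(z)      # small-angle approximation
print(f"BAO angular scale at z = {z}: {np.degrees(theta_bao):.1f} degrees")
```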

As the breadth and precision of cosmological measurements increase, so do the links with particle physics. CERN and the Euclid Consortium (which consists of more than 2000 scientists from 300 institutes in 13 European countries, the US, Canada and Japan) signed a memorandum of understanding in 2016, after Euclid gained CERN recognised-experiment status in 2015. The collaboration was motivated by technical synergies for the mission's Science Ground Segment (SGS), which will process about 850 Gbit of compressed data per day – the largest data volume of any ESA mission to date. CERN is contributing critical software tools and related support activities, explains CERN aerospace and environmental applications coordinator Enrico Chesta: "CernVM-FS, developed by the EP-SFT team to assist high-energy physics collaborations to deploy software on the distributed computing infrastructure used to run data-processing applications, has been integrated into Euclid SGS and will be used for software continuous deployment among the nine Euclid science data centres." 

Competitive survey

Euclid’s main scientific objectives also align closely with CERN’s physics challenges. A 2019 CERN-TH/Euclid workshop identified overlapping areas of interest and options for scientific visitor programmes, with topics of potential interest including N-body CMB simulations, redshift space distortions with relativistic effects, model selection of modified gravity, and dark-energy and neutrino-mass estimation from cosmic voids. Over the coming years, Euclid will provide researchers with data against which they can test different cosmological models. “Galaxy surveys have been happening for decades and have grown in scale, but we didn’t hear much about it because the CMB was, until now, more accurate,” says theorist Marko Simonović of CERN. “With Euclid there will be a competitive survey that is big enough to be comparable to CMB data. It is exciting to see what Euclid, and other new missions such as DESI, will tell us about cosmology. And maybe we will even discover something new.”

TeV photons challenge standard explanations https://cerncourier.com/a/tev-photons-challenge-standard-explanations/ Wed, 01 Mar 2023 14:30:56 +0000 https://preview-courier.web.cern.ch/?p=107948 The brightest gamma-ray burst ever detected appears to have produced an emission that is difficult to explain.

The post TeV photons challenge standard explanations appeared first on CERN Courier.

GRB 221009A

Gamma-ray bursts (GRBs) are the result of the most violent explosions in the universe. They are named for their bright burst of high-energy emission, mostly in the keV to MeV region, which can last from milliseconds to hundreds of seconds and is followed by an afterglow that covers the full electromagnetic spectrum. The extreme nature and important role in the universe of these extragalactic events – for example in the production of heavy elements, potential cosmic-ray acceleration or even mass-extinction events on Earth-like planets – make them one of the most studied astrophysical phenomena. 

Since their discovery in 1967, detailed studies of thousands of GRBs show that they are the result of cataclysmic events, such as neutron-star binary mergers. The observed gamma-ray emission is produced (through a yet-unidentified mechanism) within relativistic jets that decelerate when they strike interstellar matter, resulting in the observed afterglow. 

But interest in GRBs goes beyond astrophysics. Due to the huge energies involved, they are also a unique lab to study the laws of physics at their extremes. This once again became clear on 9 October 2022, when a GRB was detected that was not only the brightest ever but also appeared to have produced an emission that is difficult to explain using standard physics.

Eye-catching emission

“GRB 221009A” immediately caught the eye of the multi-messenger community, its gamma-ray emission being so bright that it saturated many observatories. As a result, it was also observed by a wide range of detectors covering the electromagnetic spectrum, including at energies exceeding 10 TeV. Two separate ground-based experiments – the Large High Altitude Air Shower Observatory (LHAASO) in China and the Carpet-2 air-shower array in Russia – claimed detections of photons with an energy of 18 TeV and 251 TeV, respectively. This is significantly higher, by an order of magnitude, than the previous record for TeV emission from GRBs reported by the MAGIC and HESS telescopes in 2019 (CERN Courier January/February 2020 p10). Adding further intrigue, such high-energy emission from GRBs should not be able to reach Earth at all.

For photons with energies exceeding several TeV, electron–positron pair production with optical photons starts to become possible. Although at these energies the process sits only just above its kinematic threshold of about 2.6 TeV (for typical background photon energies), this is compensated by the billions of light years of space, filled with optical light, that the TeV photons need to traverse before reaching us. Despite uncertainties in the density of this so-called extragalactic background light, a rough calculation using the distance of GRB 221009A (z = 0.151) suggests that the probability for an 18 TeV photon to reach Earth is around 10⁻⁸. 
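
Two one-line estimates, with assumed round numbers, capture why these detections are so puzzling:

```python
import math

m_e_c2 = 0.511e6   # electron rest energy, eV

# (1) Kinematic threshold: pair production gamma + gamma -> e+ e- requires
# E * eps > (m_e c^2)^2 for a head-on collision with a background photon
# of energy eps (0.1 eV assumed, a typical infrared background photon).
eps = 0.1
E_threshold = m_e_c2**2 / eps
print(f"threshold: {E_threshold / 1e12:.1f} TeV")   # ~2.6 TeV

# (2) A survival probability of ~1e-8 corresponds to an optical depth
# tau = -ln(P) of about 18 over the path from z = 0.151.
P_survive = 1e-8
print(f"implied optical depth: {-math.log(P_survive):.1f}")
```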

Clearly we need to wait for the detailed analyses by LHAASO and Carpet-2 to confirm the measurements 

The reported measurements have thus far only been provided through alerts shared among the multi-messenger community, while detailed data analyses are still ongoing. Their significance, however, led to tens of beyond-the-Standard-Model (BSM) explanations being posted on the arXiv preprint server within days of the alert. While each differs in the specific mechanism hypothesised, the overall idea is similar: instead of being produced directly in the GRB, the photons are posited to be a secondary product of BSM particles produced during or close to the GRB. Examples range from light scalar particles or right-handed neutrinos produced in the GRB and decaying within our galaxy, to photons that converted into axions close to the GRB and turned back into photons in the galactic magnetic field before reaching Earth.

Clearly the community needs to wait for the detailed analyses by the LHAASO and Carpet-2 collaborations to confirm the measurements. The published energy resolution of LHAASO keeps open the possibility that their results can be explained with Standard Model physics, while the 251 TeV emission from Carpet-2 is more difficult to attribute to known systematic effects. That result could, however, be explained by secondary particles from an ultra-high-energy cosmic ray (UHECR) produced in the GRB – which, although it would not represent new physics, would still confirm GRBs as a source of UHECRs for the first time. Analysis results from both collaborations are therefore highly anticipated.

Statistics meets gamma-ray astronomy https://cerncourier.com/a/statistics-meets-gamma-ray-astronomy/ Thu, 19 Jan 2023 11:09:58 +0000 https://preview-courier.web.cern.ch/?p=107716 As a subfield of astroparticle physics, gamma-ray astronomy investigates many questions rooted in particle physics in an astrophysical context. A prominent example is the search for self-annihilating weakly interacting massive particles (WIMPs) in the Milky Way as a signature of dark matter. Another long-standing problem is finding out where in the universe the cosmic-ray particles […]

The post Statistics meets gamma-ray astronomy appeared first on CERN Courier.

As a subfield of astroparticle physics, gamma-ray astronomy investigates many questions rooted in particle physics in an astrophysical context. A prominent example is the search for self-annihilating weakly interacting massive particles (WIMPs) in the Milky Way as a signature of dark matter. Another long-standing problem is finding out where in the universe the cosmic-ray particles detected on Earth are accelerated to PeV energies and beyond.

With the imminent commissioning of the Cherenkov Telescope Array (CTA), which will comprise more than 100 telescopes located in the northern and southern hemispheres, gamma-ray astronomy is about to enter a new era. This was taken as an opportunity to discuss the statistical methods used to analyse data from Cherenkov telescopes at a dedicated PHYSTAT workshop hosted by the University of Berlin. More than 300 participants, including several statisticians, registered for PHYSTAT-Gamma from 28 to 30 September to discuss concrete statistical problems, find synergies between fields and set the employed methods in a broader context.
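
A canonical example of the statistical machinery in question, included here as background rather than as a workshop result, is the Li & Ma (1983) significance used throughout Cherenkov astronomy to test whether the counts in an "on-source" region exceed the background estimated from "off" regions:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983, eq. 17) significance of an on-source excess.
    n_on: counts in the source region; n_off: counts in background
    regions; alpha: ratio of on to off exposures."""
    total = n_on + n_off
    term_on = n_on * math.log((1.0 + alpha) / alpha * n_on / total)
    term_off = n_off * math.log((1.0 + alpha) * n_off / total)
    return math.sqrt(2.0 * (term_on + term_off))

# Hypothetical counts: 130 on-source events against 500 off-source events
# with a 0.2 exposure ratio (i.e. an expected background of 100).
print(f"{li_ma_significance(130, 500, 0.2):.1f} sigma")
```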

Three main topics were addressed at the meeting across 13 talks and multiple discussion sessions: the statistical analysis of data from gamma-ray observatories in a multi-wavelength context, connecting statisticians and gamma-ray astronomers, and matching astrophysical sources across different wavelengths. Many concrete physical questions in gamma-ray astronomy must be answered in an astrophysical context that becomes visible only by observing across the electromagnetic spectrum, so a mutual understanding of the statistical methods and systematic errors involved is needed. Josh Speagle (University of Toronto) warned of a potential "datapocalypse" given the heterogeneity and volume of soon-to-be-expected astronomical data. Similarities between analyses in X-ray and gamma-ray astronomy gave hope for reducing the data heterogeneity. Further cause for optimism arose from new approaches for combining data from different observatories.

The second day of PHYSTAT-Gamma focused on building connections between statisticians and gamma-ray astronomers. Eric Feigelson (Penn State) gave an overview of astrostatistics, followed by deeper discussions of Bayesian methods in astronomy by Tom Loredo (Cornell) and techniques for fitting astrophysical models to data with bootstrap methods by Jogesh Babu (Penn State). The session concluded with an overview of statistical methods for the analysis of astronomical time series by Jeff Scargle (NASA).

The final day centred on the problem of how to match astrophysical sources across different wavelengths. CTA is expected to detect gamma rays from more than 1000 sources, and identifying the correct counterparts at other wavelengths will be essential to study the astrophysical context of the gamma-ray emission. Applying Bayesian methods, Tamas Budavari (Johns Hopkins) discussed the current state of the problem from a statistical point of view, followed by in-depth talks and discussions among experts from X-ray, gamma-ray and radio astronomy.
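
A minimal sketch of the statistical core of the cross-matching problem (our illustration, not a method presented at the meeting): score each candidate counterpart with a Gaussian positional likelihood, which can then feed a Bayesian or likelihood-ratio association.

```python
import math

def positional_likelihood(sep, sigma):
    """Gaussian likelihood of a true counterpart at angular separation
    sep, given a combined positional uncertainty sigma (both in arcsec)."""
    return math.exp(-0.5 * (sep / sigma)**2) / (2.0 * math.pi * sigma**2)

# Gamma-ray positions are coarse, so sigma is dominated by the gamma-ray
# localisation error (60 arcsec assumed here).
for sep in (10.0, 60.0, 180.0):
    print(f"separation {sep:5.0f} arcsec -> likelihood "
          f"{positional_likelihood(sep, 60.0):.2e}")
```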

Recurring topics across all sessions were the treatment of systematic errors and the formats for exchanging data between experiments. Currently, technical considerations appear to dominate the definition of data formats in astronomy. However, as Fisher famously showed with the introduction of sufficiency, statistical aspects can help to find useful representations of data, and they might also be considered in the definition of future data formats.

PHYSTAT-Gamma was only a first attempt to discuss the statistical aspects of gamma-ray astronomy. For example, the LHCf experiment at CERN will help to improve predictions of the gamma-ray flux that is expected from astrophysical hadron colliders and measured by gamma-ray observatories like CTA. However, modelling uncertainties from particle physics must be treated appropriately to improve the constraints on astrophysical processes. Discussion of this and many further topics is planned for follow-up meetings.

Neutrinos reveal active galaxy’s inner depths https://cerncourier.com/a/neutrinos-reveal-active-galaxys-inner-depths/ Tue, 10 Jan 2023 12:02:24 +0000 https://preview-courier.web.cern.ch/?p=107569 IceCube’s measurements take researchers a step closer to understanding the origin of high-energy cosmic rays.

The post Neutrinos reveal active galaxy’s inner depths appeared first on CERN Courier.

Highly energetic cosmic rays reach Earth from all directions and at all times, yet it has been challenging to conclusively identify their sources. Being charged, cosmic rays are easily deflected by interstellar magnetic fields during their propagation and thereby lose any information about where they originated. Highly energetic photons and neutrinos, on the other hand, travel undeflected. Observations of high-energy photons and neutrinos therefore provide crucial clues towards unravelling the mystery of cosmic-ray sources and accelerators.

Four years ago, the IceCube collaboration announced the identification of the blazar TXS 0506+056 as a source of high-energy cosmic neutrinos, the first of its kind (CERN Courier September 2018 p7). This was one of the early examples of multi-messenger astronomy wherein a high-energy neutrino event detected by IceCube, which was coincident in direction and time with a gamma-ray flare from the blazar, prompted an investigation into this object as a potential astrophysical neutrino source.

Point source

In the following years, IceCube made a full-sky scan for point-like neutrino sources and, in 2020, the collaboration found an excess coincident with the Seyfert II galaxy NGC1068 that was inconsistent with a background-only hypothesis. However, with a statistical significance of only 2.9σ, it was insufficient to claim a detection. In November 2022, after a more detailed analysis with a longer live-time and improved methodologies, the collaboration confirmed NGC1068 to be a point source of high-energy neutrinos at a significance of 4.2σ.

IceCube's measurements usher in a new era of neutrino astronomy

Messier 77, also known as the Squid Galaxy or NGC1068, is located 47 million light years away in the constellation Cetus and was discovered in 1780. Today, we know it to be an active galaxy: at its centre lies an active galactic nucleus (AGN), a luminous and compact region powered by a supermassive black hole (SMBH) surrounded by an accretion disk. Specifically, it is a Seyfert II galaxy: an active galaxy viewed edge-on, with the line of sight passing through the accretion region that obscures the centre.

The latest search used data from the fully completed IceCube detector. Several calibration and alignment improvements were also made to the data-acquisition procedure, and an advanced event-reconstruction algorithm was deployed. The search was conducted in the Northern Hemisphere of the sky, i.e. by detecting neutrinos from "below", so that Earth could screen out background atmospheric muons.

Three different searches were carried out to locate possible point-like neutrino sources. The first involved scanning the sky for a statistically significant excess over background, while the other two used a catalogue of 110 sources that was developed in the 2020 study, the difference between the two being the statistical methods used. The results showed an excess of 79 (+22/−20) muon–neutrino events, with the main contribution coming from neutrinos in the energy range of 1.5 to 15 TeV, while the all-flavour flux is expected to be a factor of three higher. All the events contributing to the excess were well reconstructed within the detector, with no signs of anomalies, and the results were found not to be dominated by just one or a few individual events. The results were also in line with phenomenological models that predict the production of neutrinos and gamma rays in sources such as NGC1068.

IceCube’s measurements usher in a new era of neutrino astronomy and take researchers a step closer to understanding not only the origin of high-energy cosmic rays but also the immense power of massive black holes, such as the one residing inside NGC1068.

An Infinity of Worlds https://cerncourier.com/a/an-infinity-of-worlds/ Tue, 08 Nov 2022 13:52:34 +0000 https://preview-courier.web.cern.ch/?p=107090 Kinney’s presentation of cosmology serves as a useful introduction for the layperson, but also for physics students and even physicists working in different fields.

The post An Infinity of Worlds appeared first on CERN Courier.

An Infinity of Worlds

Cosmology, along with quantum mechanics, is probably among the most misunderstood physics topics for the layperson. Many misconceptions exist, for instance whether the universe had a beginning or not, what the cosmic expansion is, or even what exactly is meant by the term “Big Bang”. Will Kinney’s book An Infinity of Worlds: Cosmic Inflation and the Beginning of the Universe clarifies and corrects these misconceptions in the most accessible way.

Kinney’s main aim is to introduce cosmic inflation – a period of exponential expansion conjectured to have taken place in the very early universe – to a general audience. He starts by discussing the Standard Model of cosmology and how we know that it is correct. This is done most successfully and in a very succinct way. In only 24 pages, the book clarifies all the relevant concepts about what it means for the universe to expand, its thermal history and what a modern cosmologist means by the term Big Bang.

The book continues with an accessible discussion about the motivation for inflation. There are plenty of comments about the current evidence for the theory, its testability and future directions, along with discussions about the multiverse, quantum gravity, the anthropic principle and how all these combine together.

A clear understanding

There are two main points that the author manages to successfully induce the reader to reflect on. The first is the extreme success of the cosmic microwave background (CMB) as a tool to understand cosmology: its black-body spectrum established the Big Bang; its analysis demonstrated the flatness of the universe and its dark contents and motivated inflation; its fluctuations play a large part in our understanding of structure formation in the universe; and, along with the polarisation of the CMB, photons provide a window into the dynamics of inflation. Kinney notes that there are also plenty of features that have not been measured, which are especially important for inflation, such as the B-modes of the CMB and primordial gravitational waves, meaning that CMB-related observations have a long way to go.

The second main point is the importance of a clear understanding of what we know and what we do not know in cosmology. The Big Bang, which is essentially the statement that the universe started as a hot plasma of particles and cooled as it expanded, is a fact. The evidence, which goes well beyond the observation of cosmic expansion, is explained very well in Kinney’s book. Beyond that there are many unknowns. Despite the excellent motivation for and the significant observational successes of inflationary models, they are yet to be experimentally verified. It is probably safe to assume, along with the author, that we will know in the future whether inflation happened or not. Even if we establish that it did and understand its mechanism, it is not clear what we can learn beyond that. Most inflationary models make statements about elements, such as the inflationary multiverse, that in principle cannot be observed.

Steven Weinberg once commented that we did not have to wait to see the dark side of the moon to conclude that it exists. Whether this analogy can be extended successfully to include inflation or string theory is definitely debatable. What is certain, however, is that there will be no shortage of interesting topics and discussions in the years to come about cosmology and fundamental physics in general. Kinney’s book can serve as a useful introduction for the general public, but also for physics students and even physicists working in different fields. As such, this book is a valuable contribution to both science education and dissemination.

Probing the Milky Way’s violent history https://cerncourier.com/a/probing-the-milky-ways-violent-history/ Mon, 07 Nov 2022 11:15:35 +0000 https://preview-courier.web.cern.ch/?p=107062 Data collected by the Hubble and Green Bank telescopes bring a fresh perspective on the origin of the galaxy's enormous "Fermi bubbles".

Fermi bubbles

Active galactic nuclei (AGN) are among the most studied astrophysical objects. Known to be the brightest persistent sources of photons from the radio to the gamma-ray spectrum, they are also thought to be responsible for high-energy cosmic rays and neutrinos. As such, they play an important role in the universe and its evolution.

AGNs are galaxies in which the supermassive black hole at their centre is accreting matter, thereby producing violent jets responsible for the observed emissions. While our galaxy has a supermassive black hole at its centre, it is currently not accreting matter and therefore the nucleus of the Milky Way is not active. Strong hints of past activity were, however, discovered using the Fermi–LAT satellite in 2010. In particular, the data showed two giant gamma-ray-emitting bubbles – now known as the Fermi bubbles – extending from the galactic centre and covering almost half of the sky (see image). The exact origin of the giant plasma lobes remains to be understood. However, their position and bipolar nature point towards an origin in the Milky Way's centre several million years ago, likely during a period of high activity in the galactic nucleus.

A new study led by Trisha Ashley from the Space Telescope Science Institute, Baltimore, brings a fresh perspective on the origin of these structures. Her team focused on the chemical composition of gas clouds inside the bubbles using UV absorption data collected by the Hubble Space Telescope and Green Bank Telescope. Based on their location and movement, these high-velocity clouds had been assumed to originate in the disk of the Milky Way before being swept up as the bubbles were emitted from the galactic centre. However, measurements of the clouds’ elemental makeup cast doubt on this assumption.

UV surprise 

Gas clouds from the galactic disk should have a chemical composition (referred to by astronomers as metallicity) similar to that of the clouds that once collapsed into stars like the Sun. In the galactic disk, the abundance of elements heavier than hydrogen (high metallicity) is expected to be higher thanks to several generations of stars responsible for the production of such elements, whereas in the galactic halo the metallicity is expected to be lower due to a lack of stellar evolution. To measure the chemical composition of the gas clouds, Ashley and her team looked at the UV spectra from sources behind them to see the induced absorption lines. To their surprise, they found not only clouds with high metallicity but also those with a lower metallicity, matching that of galactic halo gas, thereby implying a different origin for these clouds. Suggestions that the second class of clouds results from high-metallicity clouds accumulating low-metallicity gas are unlikely to hold, as the time it would take to absorb these gases is significantly longer than the age of the Fermi bubbles. Instead, it appears that while the bubbles did drag along gas clouds from the galactic plane, they also swept up existing halo gas clouds as they expanded outwards.

These results imply that events such as those which produced the Fermi bubbles play an important role in gas accumulation in the galactic plane. They remove gas from the galactic disk while, in parallel, pushing back gas flowing into the disk from the halo. As less gas reaches the disk, star formation is suppressed, and as such, these events play an important role in galaxy evolution. Since studying small-scale details such as gas clouds in other galaxies is impossible, these results provide a unique insight into our own galaxy as well as into galaxy evolution in general.

Webb opens new era in observational astrophysics https://cerncourier.com/a/webb-opens-new-era-in-observational-astrophysics/ Mon, 05 Sep 2022 08:43:12 +0000 https://preview-courier.web.cern.ch/?p=105927 The keenly awaited first science-grade images from the James Webb Space Telescope did not disappoint.

JWST

The keenly awaited first science-grade images from the James Webb Space Telescope were released on 12 July – and they did not disappoint. Thanks to Webb's unprecedented 6.5 m mirror, together with its four main instruments (NIRCam, NIRSpec, NIRISS and MIRI), the $10 billion observatory marks a new dawn for observational astrophysics.

The past six months since Webb's launch from French Guiana have been devoted to commissioning, including alignment and calibration of the mirrors and bringing temperatures to cryogenic levels to minimise noise from heat radiated from the equipment (CERN Courier March/April 2022 p7). Unlike the Hubble Space Telescope, Webb does not look at ultraviolet or visible light but is primarily sensitive to near- and mid-infrared wavelengths. This enables it to look at the farthest galaxies and stars, as early as a few hundred million years after the Big Bang.

Wealth of information

Pictured here are some of Webb's early-release images. The first deep-field image (top) covers the same area of the sky as a grain of sand held at arm's length, and is swarming with galaxies. At the centre is a cluster called SMACS 0723, whose combined mass is so high that its gravitational field bends the light of objects that lie behind it (resulting in arc-like features), revealing galaxies that existed when the universe was less than a billion years old. The image was taken using NIRCam and is a combination of images at different wavelengths. The spectrographs, NIRSpec and NIRISS, will provide a wealth of information on the composition of stars, galaxies and their clusters, offering a rare peek into the earliest stages of their formation and evolution.

Stephan's Quintet (bottom left) is a visual grouping of five galaxies that was first discovered in 1877 and remains one of the most studied compact galaxy groups. The actual grouping involves only four galaxies, which are predicted to eventually merge. The non-member, NGC 7320, which lies about 40 million light years from Earth rather than 290 million for the actual group, is seen on the left, with vast regions of active star formation in its numerous spiral arms.

A third stunning image, the Southern Ring nebula (bottom right), shows a dying star. With its reservoirs of light elements already exhausted, it starts using up any available heavier elements to sustain itself – a complex and violent process that results in large amounts of material being ejected from the star in intervals, visible as shells.

These images are just a taste, yet not all Webb data will be so visually spectacular. By extending Hubble’s observations of distant supernovae and other standard candles, for example, the telescope should enable the local rate of expansion to be determined more precisely, possibly shedding light on the nature of dark energy. By measuring the motion and gravitational lensing of early objects, it will also survey the distribution of dark matter, and might even hint at what it’s made of. Using transmission spectroscopy, Webb will also reveal exoplanets in unprecedented detail, learn about their chemical compositions and search for signatures of habitability. 

Counting down to LISA https://cerncourier.com/a/counting-down-to-lisa/ Mon, 05 Sep 2022 08:28:15 +0000 https://preview-courier.web.cern.ch/?p=106024 Stefano Vitale describes the status of LISA as the space-based gravitational-wave observatory moves into its final design phase.

Stefano Vitale

What is LISA? 

LISA (Laser Interferometer Space Antenna) is a giant Michelson interferometer comprising three spacecraft that form an equilateral triangle with sides of about 2.5 million km. You can think of one satellite as the central building of a terrestrial observatory like Virgo or LIGO, and the other two as the end stations of the two interferometer arms. Mirrors at the two ends of each arm are replaced by a pair of free-falling test masses, the relative distance between which is measured by a laser interferometer. When a gravitational wave (GW) passes, it alternately stretches one arm and squeezes the other, causing these distances to oscillate by an almost imperceptible amount (just a few nm). The nature and position of the GW sources are encoded in the time evolution of this distortion. Unlike terrestrial observatories, which keep their arms locked in a fixed position, LISA must keep track of the satellite positions by counting the millions of wavelengths by which their separation changes each second. All interferometer signals are combined on the ground and a sophisticated analysis is used to determine the differential distance changes between the test masses.
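
As a rough illustration of the scale involved – a hedged sketch, with the strain value an assumption chosen to be consistent with the few-nanometre figure above rather than a LISA specification – the arm-length oscillation follows from dL = h·L/2:

```python
# Back-of-the-envelope: arm-length change for a gravitational wave of
# strain h crossing a 2.5-million-km interferometer arm.
# The strain value is an illustrative assumption, not a LISA number.

L_ARM = 2.5e9   # arm length in metres (2.5 million km)
h = 1e-18       # assumed dimensionless strain of a strong passing wave

# A plus-polarised wave of strain h stretches one arm and squeezes the
# other by dL = h * L / 2 at the extremes of its oscillation.
dL = h * L_ARM / 2
print(f"arm-length oscillation amplitude: {dL:.2e} m ({dL * 1e9:.2f} nm)")
```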

What will LISA tell us that ground-based observatories can’t?

Most GW sources, such as the merger of two black holes detected for the first time by LIGO and Virgo in 2015, consist of binary systems; as the two compact companions spiral into each other, they generate GWs. In these extreme binary mergers, the frequency of the GWs decreases both with increasing mass of the objects and with increasing time before their final merger. GWs with frequencies down to about a few Hz, corresponding to objects with masses up to a few thousand solar masses, are detectable from the ground. Below that, however, Earth's gravity is too noisy. To access milli-Hertz and sub-milli-Hertz frequencies we need to go to space. This low-frequency regime is the realm of supermassive objects with millions of solar masses located in galactic centres, and also where tens of thousands of compact objects in our galaxy, including some of the Virgo/LIGO black holes, emit their signals for years and centuries as they peacefully rotate around each other before entering the final few seconds of their collapse. The LISA mission will therefore be highly complementary to existing and future ground-based observatories such as the Einstein Telescope. Theorists are excited about the physics that can be probed by multiband GW astronomy.

When and how did you get involved in LISA?

LISA was an idea by Pete Bender and colleagues in the 1980s. It was first proposed to the European Space Agency (ESA) in 1993 as a medium-sized mission, an envelope into which it could not possibly fit. Nevertheless, ESA got excited by the idea and studies immediately began toward a larger mission. I became aware of the project around that time, immediately fell in love with it and, in 1995, joined the team of enthusiastic scientists, led by Karsten Danzmann. At the time it was not clear that a detection of GWs from the ground was possible, whereas unless general relativity was dead wrong, LISA would certainly detect binary systems in our galaxy. It soon became clear that such a daring project needed a technology precursor, to prove the feasibility of test-mass freefall. This built on my field of expertise, and I became principal investigator, with Karsten as a co-principal investigator, of LISA Pathfinder.

LISA Pathfinder

What were the key findings of LISA Pathfinder? 

Pathfinder essentially squeezed one of LISA’s arms from millions of kilometres to half a metre and placed it into a single spacecraft: two test masses in a near-perfect gravitational freefall with their relative distance tracked by a laser interferometer. It launched in December 2015 and exceeded all expectations. We were able to control and measure the relative motion of the test masses with unprecedented accuracy using innovative technologies comprising capacitive sensors, optical metrology and a micro-Newton thruster system, among others. By reducing and eliminating all sources of disturbance, Pathfinder observed the most perfect freefall ever created: the test masses were almost motionless with respect to each other, with a relative acceleration less than a millionth of a billionth of Earth’s gravitational acceleration. 

What is LISA’s status today?

LISA is in its final study phase (“B1”) and marching toward adoption, possibly late next year, after which ESA will release the large industrial contracts to build the mission. Following Pathfinder, many necessary technologies are in a high state of maturity: the test masses will be the same, with only minor adjustments, and we also demonstrated a pm-resolution interferometer to detect the motion of the test masses inside the spacecraft – something we need in LISA, too. What we could not test in Pathfinder is the million-kilometre-long pm-resolution interferometer, which is very challenging. Whereas LIGO’s 4 km-long arms allow you to send laser light back and forth between the mirrors and reach kW powers, LISA will have a 1 W laser: if you try to reflect it off a small test-mass 2.5 million km away, you get back just 20 photons per second! The instrument therefore needs a transponder scheme: one spacecraft sends light to another, which collects and measures the frequency to see if there is a shift due to a passing GW. You do this with all six test masses (two per spacecraft), combining the signals in one heck of an analysis to make a “synthetic” LIGO. Since this is mostly a case of optics, you don’t need zero-g space tests, and based on laboratory evidence we are confident it will work. Although LISA is no longer a technology-research project, it will take a few more years to iron out some of the small problems and build the actual flight hardware, so there is no shortage of papers or PhD theses to be written. 
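
The 20-photons-per-second figure depends on the mission's actual apertures and wavelength; the sketch below uses invented but plausible numbers simply to show why the round-trip return off a small test mass is hopelessly weak, and hence why the transponder scheme is needed:

```python
# Illustrative link budget for bouncing a 1 W laser off a distant test
# mass. Aperture, wavelength and test-mass size are assumptions for the
# sketch, not LISA design values; only the scaling matters here.

P0 = 1.0          # transmitted laser power (W)
lam = 1.064e-6    # assumed infrared laser wavelength (m)
L = 2.5e9         # one-way distance (m)
D_tel = 0.3       # assumed telescope aperture (m)
d_tm = 0.05       # assumed test-mass size (m)

E_photon = 6.626e-34 * 2.998e8 / lam   # photon energy h*c/lambda (J)

def spot_radius(aperture_m, distance_m):
    """Diffraction-limited beam radius ~ wavelength * distance / aperture."""
    return lam * distance_m / aperture_m

# Outbound leg: fraction of the diverged beam hitting the test mass
frac_out = (d_tm / (2 * spot_radius(D_tel, L))) ** 2
# Return leg: the small test mass re-diverges the light even faster
frac_back = (D_tel / (2 * spot_radius(d_tm, L))) ** 2

P_return = P0 * frac_out * frac_back
print(f"round-trip received power: {P_return:.1e} W")
print(f"photon rate: {P_return / E_photon:.1e} photons per second")
```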

How is the LISA consortium organised?

ESA’s science missions are often a collaboration in which ESA builds, launches and operates the satellite and its member states – via their universities and industries – contribute all or part of the scientific instruments, such as a telescope or a camera. NASA is a major partner with responsibilities that include the lasers, the device to discharge the test masses as they get charged up by cosmic rays, and the telescope to exchange laser beams among the satellites. Germany, which holds the consortium’s leadership role, also shares responsibility for a large part of the interferometry with the UK. Italy leads the development of the test-mass system; France the science data centre and the sophisticated ground testing of LISA optics; and Spain the science-diagnostics development. Critical hardware components are also contributed by Switzerland, the Netherlands, Belgium, the Czech Republic, Denmark and Poland, while scientists worldwide contribute to various aspects of the preparation of mission operation, data analysis and science utilisation. The LISA consortium has around 1500 members. 

What is the estimated cost of the mission, and what is industry’s role?

A very crude estimate of the sum of ESA, NASA and member-state contributions may add up to something below two billion dollars. One of the main drivers of ESA’s scientific programme is to maintain the technological level of European aerospace, so the involvement of industry, in close cooperation with scientific institutes, is crucial. After having passed the adoption phase, ESA will grant contracts to prime industrial contractors who take responsibility for the mission. To foster industrial competition during the study phase, ESA has awarded contracts to two independent contractors, in our case Airbus and Thales Alenia. In addition, international partners and member-state contributions often, if not always, involve industry.

What scientific and technological synergies exist with other fields?

LISA will look for deviations from general relativity, in particular the case where compact objects fall into a supermassive black hole. In terms of their importance, deviations in general relativity are a very close cousin of deviations from the Standard Model of particle physics. Which will come first we don’t know, but LISA is certainly an outstanding laboratory for fundamental gravitational physics. Then there are expectations for cosmology, such as tracing the history of black-hole formation or maybe detecting stochastic backgrounds of GWs, such as “cusps” predicted in string theory. Wherever you push the frontiers to investigate the universe at large, you push the frontiers of fundamental interactions – so it’s not surprising that one of our best cosmologists now works at CERN! Technologically speaking, we just started a collaboration with CERN’s vacuum group. In LISA we have a tiny vacuum volume in the region where the test masses are located, and it is full of components and cables. It was a big challenge for Pathfinder, but for LISA we definitely need to understand more. The CERN vacuum group is really interested in understanding this, so we are very happy with this new collaboration. As with LIGO, Advanced Virgo and the Einstein Telescope, LISA is a CERN-recognised experiment.

There is no other space mission with as many papers published about its science expectations before it even leaves the ground

What’s the secret to maintaining the momentum in a complex, long-term global project in fundamental physics? 

The LISA mission is so fascinating that it is "self-selling". Scientists liked it, engineers liked it, industry liked it, space agencies liked it. Obviously Pathfinder helped a lot – it meant that even in the darkest moments we knew we were "real". But in the meantime, our theory colleagues did so much work. As far as I know, there is no other space mission with as many papers published about its science expectations before it even leaves the ground. It's not just that the science is inspiring, but the fact that you can calculate things. The instrumentation is also so fascinating that students want to do it. With Pathfinder, we faced many difficulties. We were naïve in thinking that we could take this thing that we built in the lab and turn it into an industrial project. Of course we needed to grow and learn, but because we loved the project so much, we never ever gave up. One needs this mind-set and resilience to make big scientific projects work.

When do you envision launch? 

Currently it’s planned for the mid-2030s. This is a bit in the future at my age, but I am grateful to have seen the launch of LISA Pathfinder and I am happy to think that many of my young colleagues will see it, and share the same emotions we did with Pathfinder, as a new era in GW astronomy opens up.

X-ray polarisation probes extreme physics https://cerncourier.com/a/x-ray-polarisation-probes-extreme-physics/ Thu, 30 Jun 2022 13:37:39 +0000 https://preview-courier.web.cern.ch/?p=102010 IXPE’s first results provide novel insights into the properties of neutron stars.

Accretion disk around magnetar 4U 0142+61

X-ray astronomy has been around for more than 50 years and remains responsible for a wealth of discoveries. Astronomical breakthroughs have been the result of detailed measurements of the X-ray arrival time, direction and energy. But the fourth measurable parameter of X-rays, their polarisation, remains largely unexplored. Following the first rough measurements of a handful of objects in the 1970s by Martin Weisskopf and co-workers, there was a hiatus in X-ray polarimetry due to the complexity of the detection mechanism. In recent years, in parallel with the emergence of gamma-ray polarimetry, interest in the field has returned. Indeed, after some initial measurements using the Chinese–Italian PolarLight Cubesat launched in October 2018, X-ray polarimetry has reached full maturity with the launch of the first large-scale dedicated observatory in December 2021: the Imaging X-ray Polarimetry Explorer (IXPE), a joint project by NASA and the Italian Space Agency, led by Weisskopf.

The IXPE mission uses gas pixel detectors to measure the polarisation for a range of astronomical sources in the 2–8 keV energy range. Incoming X-rays are absorbed in a gas, which results in the emission of a photoelectron, the azimuthal emission direction of which is correlated with the polarisation vector of the incoming photon. Tracking the path of the electron therefore allows the polarisation to be inferred. Accurately measuring the emission direction of the low-energy photoelectron, especially in a space-based detector, has been one of the main IXPE challenges and required decades of detector development.
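
The principle can be illustrated with a short toy Monte Carlo. The modulation factor and input polarisation below are assumed values, not IXPE calibration numbers; the angles are drawn from the expected 1 + μp·cos(2(φ − φ₀)) distribution and the polarisation degree and angle are recovered from the second harmonics:

```python
# Toy reconstruction of X-ray polarisation from photoelectron track angles.
import numpy as np

rng = np.random.default_rng(42)
mu = 0.3           # assumed detector modulation factor
p_true = 0.4       # assumed true polarisation degree
phi0 = 0.7         # assumed polarisation angle (rad)
n_events = 200_000

# Rejection-sample azimuthal angles from 1 + mu*p*cos(2*(phi - phi0))
phi = rng.uniform(0, 2 * np.pi, 4 * n_events)
accept = rng.uniform(0, 1 + mu * p_true, phi.size) < (
    1 + mu * p_true * np.cos(2 * (phi - phi0)))
phi = phi[accept][:n_events]

# Stokes-style estimators recover degree and angle from the 2*phi moments
q = 2 * np.mean(np.cos(2 * phi))
u = 2 * np.mean(np.sin(2 * phi))
print(f"reconstructed degree: {np.hypot(q, u) / mu:.3f} (true {p_true})")
print(f"reconstructed angle:  {0.5 * np.arctan2(u, q):.3f} rad (true {phi0})")
```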

X-ray polarimetry has reached full maturity with the launch of the first large-scale dedicated observatory

IXPE has already observed a range of sources. Its first public results, posted on arXiv on 18 May, concern a magnetar, a highly magnetic neutron star, called 4U 0142+61, which rotates around its axis in about 8 s and has a magnetic field of 10¹⁰ T. IXPE's first ever measurement of polarised emission from a magnetar in the X-ray region shows this extreme object to have an energy-integrated polarisation degree of 12%, while in the thermal (2–4 keV) range this is about 12%, and as high as 41% for emission at higher energies (5.5–8 keV). The polarisation angles of the two emission components are orthogonal.

The results appear to agree best with a model where the thermal emission stems from a condensed iron atmosphere: the higher energy emission would be a result of some thermal photons being up-scattered to higher energies when interacting with charged particles following the magnetic field lines. However, since other models link the emission to a gaseous atmosphere heated by a constant bombardment of particles, measurements of additional magnetars are needed.

Fundamental physics

Apart from providing novel insights into neutron-star properties, time-resolved studies of the emission during the rotation period hint at more fundamental physics at play. The spectral profile of 4U 0142+61 was found to be rather constant during the rotation, indicating that the emission does not come from hot-spots, such as the poles, but rather from a large area on the surface. As the magnetic field over such a large area would, however, be expected to vary significantly, so would the polarisation angle of the emitted X-rays. As a result, the net polarisation seen on Earth would largely be blurred out, resulting in a much lower polarisation degree than is observed.

An intriguing explanation for this, note the authors, is vacuum birefringence – an effect predicted to be important in the presence of extreme magnetic fields, but which has never been observed. While the polarisation angle of the emission varies with the location on the star from which it is emitted, it is altered as the photons travel through the strong magnetic field, in which virtual electron–positron pairs affect their propagation. Only when the magnetic field becomes weak enough, at around 100 times the radius of the star, does the polarisation angle get frozen. Since this angle is aligned with the magnetic field, which at that distance is much smoother, the emission travelling towards Earth is realigned, allowing for a net polarisation.

Although the polarisation degrees measured by IXPE are not high enough to definitively prove vacuum birefringence, the results give a clear hint. Furthermore, the measurements of 4U 0142+61 are only the first of many performed by the IXPE team. Throughout the coming months, detailed measurements of galactic objects such as the Crab Nebula, as well as extra-galactic sources, are predicted to be released. Among these objects there will be other magnetars, the X-ray emission from which will soon bring further understanding of these extreme objects and potentially confirm the existence of vacuum birefringence.

Flying high with silicon photomultipliers https://cerncourier.com/a/flying-high-with-silicon-photomultipliers/ Wed, 25 May 2022 07:45:59 +0000 https://preview-courier.web.cern.ch/?p=100404 Silicon photomultipliers offer many advantages over traditional tube devices, but further R&D is needed to understand their performance under radiation damage.

The ever-maturing technology of silicon photomultipliers (SiPMs) has a range of advantages over traditional photomultiplier tubes (PMTs). As such, SiPMs are quickly replacing PMTs in a range of physics experiments. The technology is already included in the LHCb SciFi tracker and is foreseen to be used in CMS' HGCAL, as well as in detectors at proposed future colliders. For these applications the important advantages of SiPMs over PMTs are their higher photo-detection efficiencies (by roughly a factor of two), their lower operating voltage (30–70 V compared to kilovolts) and their small size, which allows them to be integrated in compact calorimeters. For space-based instruments – such as the POLAR-2 gamma-ray mission, which aims to use 6400 SiPM channels (see image) – a further advantage is the lack of a glass window, which gives SiPMs the mechanical robustness required during launch. There is, however, a disadvantage with SiPMs: dark current, which flows even when the device is not illuminated and is greatly aggravated after exposure to radiation.

In order to strengthen the community and make progress on this technological issue, a dedicated workshop was held at CERN in a hybrid format from 25 to 29 April. Organised by the University of Geneva and funded by the Swiss National Science Foundation, the event attracted around 100 experts from academia and industry. The participants included experts in silicon radiation damage from the University of Hamburg, who showed both the complexity of the problem and the need for further studies. Although the degradation of semiconductor devices in a radiation field is normally linearly correlated with the non-ionising energy loss, the concept used to predict radiation damage in silicon, this correlation appears to be violated for SiPMs. Instead, dedicated measurements for different types of SiPMs in a variety of radiation fields are required to understand the types of damage and their consequences for the SiPMs' performance. Several such measurements, performed using both proton and neutron beams, were presented at the April workshop, and plans were made to coordinate such efforts in the future, for example by testing one type of SiPM at different facilities, followed by identical analyses of the irradiated samples. In addition, an online platform to discuss upcoming results was established.

The lack of a glass window gives SiPMs the mechanical robustness required during launch

Radiation damage manifests itself mainly in the form of an increased dark current. As presented at the workshop, this increase can trigger a vicious cycle: the increased current causes self-heating, which further increases the highly temperature-dependent dark current. These issues are of great importance for future space missions as they influence the power budget, causing the scientific performance to degrade over time. Data from the first SiPM-based in-orbit detectors, such as the SIRI mission by the US Naval Research Lab, the Chinese-led GECAM and GRID detectors and the Japanese–Czech GRBAlpha payload, were presented. It is clear that although SiPMs have advantages over PMTs, radiation, which is highly dependent on the satellite's orbit, can cause a significant degradation in performance that limits low-Earth-orbit missions to several years in space. Based on these results, a future Moon mission has decided against the use of SiPMs and reverted to PMTs.
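
A minimal sketch of this feedback loop is shown below; the doubling of dark current per 10 °C and the thermal resistance are illustrative assumptions, not measured properties of any particular SiPM:

```python
# Iterate the dark-current self-heating cycle to its equilibrium.
I_DARK_20C = 1e-3   # assumed post-irradiation dark current at 20 degC (A)
V_BIAS = 55.0       # assumed operating voltage (V)
R_TH = 50.0         # assumed thermal resistance to ambient (degC per W)
T_AMBIENT = 20.0    # ambient temperature (degC)

def dark_current(temp_c):
    """Dark current assumed to double for every 10 degC of warming."""
    return I_DARK_20C * 2 ** ((temp_c - 20.0) / 10.0)

temp = T_AMBIENT
for _ in range(20):                      # current -> power -> temperature
    power = V_BIAS * dark_current(temp)  # self-heating (W)
    temp = T_AMBIENT + R_TH * power      # new equilibrium temperature

print(f"settles at {temp:.2f} degC with "
      f"{dark_current(temp) * 1e3:.2f} mA dark current (nominal 1.00 mA)")
```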

Solutions to radiation damage in SiPMs were also discussed at length. These mainly involve speeding up the annealing of the damage by exposing SiPMs to elevated temperatures for short periods. Additionally, cooling the SiPM during data taking will not only decrease the dark current directly, but could also reduce the radiation damage itself, although further research on this topic is required.

Overall, the workshop indicated that significant further studies are required to predict the impact of radiation damage on future experiments.

Thermonuclear explosions fuel cosmic rays https://cerncourier.com/a/thermonuclear-explosions-fuel-cosmic-rays/ Mon, 02 May 2022 09:20:47 +0000 https://preview-courier.web.cern.ch/?p=98929 Gamma-ray measurements by H.E.S.S. provide a new way to test models of the origin of cosmic rays.

The RS Ophiuchi outburst

Normally, RS Ophiuchi is a faint astronomical object at a distance of about 5000 light years from Earth. Once every 15 years or so, however, it brightens dramatically to the point that it becomes visible to the naked eye, only to disappear again within several days. This object, classified as a recurrent nova, is not a single star but rather a binary system consisting of a white dwarf and a red giant. Due to its proximity to its massive companion, the white dwarf slowly accumulates matter, forming a thin atmosphere-like layer on its surface. Over time, this atmosphere becomes denser and heats up until it reaches a critical temperature of around 20 million K. The thermonuclear explosion initiated at this temperature rapidly spreads across the dwarf's surface, causing all the remaining material to be blown away. This process, which in the case of RS Ophiuchi recurs every 9 to 26 years, makes the object visible in the optical region. However, the process has also been theorised to be capable of producing cosmic rays.

Bipolar shape

The first recorded explosion on RS Ophiuchi occurred in 1898, though it was identified only retrospectively after Williamina Fleming discovered the object in optical images in 1905. A more recent explosion in 2006 was observed in detail by Hubble, while the last one occurred in August 2021. Hubble's 2006 images show a shock wave propagating from the object. The shock, which is originally radially symmetric, gets distorted by the gas present in the orbital plane of the binary system. This gas slows down the shock in the orbital plane, leading to a final bipolar shape capable of accelerating electrons and hadrons to high energies. These accelerated charged particles can reach Earth in the form of cosmic rays, but due to the influence of magnetic fields it is not possible to directly trace these back to the source. The high-energy gamma rays produced by some of these cosmic rays, on the other hand, do point directly to the source. Gamma rays formed in this way during the 2021 explosion have recently been used by the H.E.S.S. collaboration to test cosmic-ray acceleration models.

After the initial detection of the brightening of the source in optical wavelengths, the ground-based H.E.S.S. facility in Namibia pointed its five telescopes (which are sensitive to the Cherenkov light emitted as TeV gamma rays induce showers in the atmosphere) to the source. In parallel, the space-based Fermi–LAT telescope, which directly detects gamma rays in the ~100 MeV to ~500 GeV energy range, observed the target for a duration of several weeks. The emission measured by both telescopes as a function of time shows the maximum energy flux as measured by Fermi–LAT peaking about one day after the peak in optical brightness. For H.E.S.S., which covered the 250 GeV to 2.5 TeV energy range, the peak only occurred three days after the optical peak, indicating a significant hardening of the emission spectrum with time.

Hadronic origin

These results match what would be expected from a hadronic origin of these gamma rays. The shock wave produced by the thermonuclear explosion is capable of accelerating charged particles every time they traverse the shock. Magnetic fields, which are in part induced by some of the accelerated hadrons themselves, trap the charged particles in the region, thereby allowing these to traverse the shock many times. Some of the hadrons collide with gas in the surrounding medium to produce showers in which neutral pions are produced, which in turn produce the gamma rays detected on Earth. The maximum energy of these gamma rays is about an order of magnitude lower than that of the hadrons that induced the showers. This implies that one day after the explosion, hadrons had been accelerated up to 1 TeV, producing the photons detected by Fermi–LAT, while it took an additional two days for the source to further accelerate such hadrons up to the 10 TeV required to produce the emission visible to H.E.S.S. These timescales, as well as the measured energies, match with the theoretical predictions for sources with the same size and energy as RS Ophiuchi.

The results show clear agreement with theoretical predictions of hadronic production of gamma rays by recurrent novae

The results, published in Science by the H.E.S.S. collaboration, show clear agreement with theoretical predictions of hadronic production of gamma rays by recurrent novae. The alternative theory of a leptonic origin of the gamma rays is more difficult to fit due to the relatively large fraction of the shock energy that would need to be converted into electron acceleration. The measurements form an almost direct way to test models of the origin of cosmic rays and thereby add several important pieces to the puzzle of cosmic-ray origins.

LHCb constrains cosmic antimatter production https://cerncourier.com/a/lhcb-constrains-cosmic-antimatter-production/ Mon, 02 May 2022 08:26:17 +0000 https://preview-courier.web.cern.ch/?p=99260 Studies of proton-gas collisions at the LHC enable a deeper understanding of the antiproton flux in cosmic rays.

LHCb figure 1

During their 10 million-year-long journey through the Milky Way, high-energy cosmic rays can collide with particles in the interstellar medium, the ultra-rarefied gas filling our galaxy and mostly composed of hydrogen and helium. Such rare encounters are believed to produce most of the small number of antiprotons, about one per 10,000 protons, that are observed in high-energy cosmic rays. But this cosmic antimatter could also originate from unconventional sources, such as dark-matter annihilation, motivating detailed investigations of antiparticles in space. This effort is currently led by the AMS-02 experiment on the International Space Station, which has reported results with unprecedented accuracy.

The interpretation of these precise cosmic antiproton data calls for a better understanding of the antiproton production mechanism in proton-gas collisions. Here, experiments at accelerators come to the rescue. The LHCb experiment has the unique capability of injecting gas into the vacuum of the LHC accelerator. By injecting helium, cosmic collisions are replicated in the detector and their products can be studied in detail. LHCb already provided a first key input into the understanding of cosmic antimatter by measuring the amount of antiprotons produced at the proton–helium collision vertex itself. In a new study, this measurement has been extended by including the significant fraction (about one third) of antiprotons resulting from the decays of antihyperons such as the Λ̄, which contain a strange antiquark also produced in the collisions.

These antiprotons are displaced from the collision point in the detector, as the antihyperons can fly several metres through the detector before decaying. Different antihyperon states and decay chains are possible, all contributing to the cosmic antiproton flux. To count them, the LHCb team exploited two key features of its detector: the ability to distinguish antiprotons from other charged particles via two ring-imaging Cherenkov (RICH) detectors, and the outstanding resolution of the LHCb vertex locator. Thanks to the latter, when checking the compatibility of the identified antiproton tracks with the collision vertex, three classes of antiprotons can be clearly resolved (figure 1): "prompt" particles originating from the proton–helium collision vertex; detached particles from Λ̄ decays; and more separated particles produced in secondary collisions with the detector material.
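
A toy version of this classification – with invented resolutions and thresholds, not the LHCb selection – separates the three classes using each track's distance of closest approach (impact parameter) to the collision vertex:

```python
# Toy separation of prompt, detached and material-interaction antiprotons.
import numpy as np

rng = np.random.default_rng(1)

# Simulated impact parameters (mm): prompt tracks smeared by the vertex
# resolution, antihyperon daughters displaced by the decay flight distance,
# and tracks from secondary interactions in the material further out still.
prompt = np.abs(rng.normal(0.0, 0.02, 5000))
detached = np.abs(rng.normal(0.0, 0.02, 2000)) + rng.exponential(0.5, 2000)
material = np.abs(rng.normal(0.0, 0.02, 1000)) + rng.uniform(5.0, 30.0, 1000)

def classify(ip_mm):
    """Assign a class from the impact parameter alone (toy thresholds)."""
    if ip_mm < 0.1:
        return "prompt"
    if ip_mm < 5.0:
        return "detached"
    return "material"

for name, sample in [("prompt", prompt), ("detached", detached),
                     ("material", material)]:
    labels = [classify(ip) for ip in sample]
    print(f"{name:9s}: {labels.count(name) / len(labels):.1%} correctly tagged")
```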

The majority of the detached antiprotons are expected to originate from Λ̄ particles produced at the collision point decaying to an antiproton and a positive pion. A second study was thus performed to fully reconstruct these decays by identifying the decay vertex. The results of this complementary approach show that about 75% of the observed detached antiprotons originate from Λ̄ decays, in good agreement with theoretical predictions.

These new results provide an important input for modelling the expected antiproton flux from cosmic collisions. No smoking gun for an exotic source of cosmic antimatter has emerged yet, while the accuracy of this quest would profit from more accelerator inputs. Thus, the LHCb collaboration plans to expand its "space mission" with the new gas target SMOG2. This device could also enable collisions between protons and hydrogen or deuterium targets, further strengthening the ties between the particle and astroparticle physics communities.

Gravitational-wave astronomy turns to AI https://cerncourier.com/a/gravitational-wave-astronomy-turns-to-ai/ Wed, 16 Mar 2022 14:39:23 +0000 https://preview-courier.web.cern.ch/?p=98019 Physicists gathered to discuss machine-learning methods applied to detect signals from gravitational-waves.

New frontiers in gravitational-wave (GW) astronomy were discussed in the charming and culturally vibrant region of Oaxaca, Mexico from 14 to 19 November. Some 37 participants attended the hybrid Banff International Research Station for Mathematical Innovation and Discovery (BIRS) workshop "Detection and Analysis of Gravitational Waves in the Era of Multi-Messenger Astronomy: From Mathematical Modelling to Machine Learning". Topics ranged from numerical relativity to observational astrophysics and computer science, including the latest applications of machine-learning algorithms for the analysis of GW data.

GW observations are a new way to explore the universe’s deepest mysteries. They allow researchers to test gravity in extreme conditions, to get important clues on the mathematical structure and possible extension of general relativity, and to understand the origin of matter and the evolution of the universe. As more GW observations with increased detector sensitivities spur astrophysical and theoretical investigations, the analysis and interpretation of GW data faces new challenges which require close collaboration with all GW researchers. The Oaxaca workshop focused on a topic that is currently receiving a lot of attention: the development of efficient machine-learning (ML) methods and numerical-analysis algorithms for the detection and analysis of GWs. The programme gave participants an overview of new-physics phenomena that could be probed by current or next-generation GW detectors, as well as data-analysis tools that are being developed to search for astrophysical signals in noisy data.

Since their first detections in 2015, the LIGO and Virgo detectors have reached an unprecedented GW sensitivity. They have observed signals from binary black-hole mergers and a handful of signals from binary neutron star and mixed black hole-neutron star systems. In discussing the role that numerical relativity plays in unveiling the GW sky, Pablo Laguna and Deirdre Shoemaker (U. Texas) showed how it can help in understanding the physical signatures of GW events, for example by distinguishing black hole-neutron star binaries from binary black-hole mergers. On the observational side, several talks focused on possible signatures of new physics in future detections. Adam Coogan (U. de Montréal and Mila) and Gianfranco Bertone (U. of Amsterdam, and chair of EuCAPT) discussed dark-matter halos around black holes. Distinctive GW signals could help to determine whether dark matter is made of a cold, collisionless particle via signatures of intermediate mass-ratio inspirals embedded in dark-matter halos. In addition, primordial black holes could be dark-matter candidates.

Bernard Mueller (U. Monash) and Pablo Cerdá-Durán (U. de Valencia) described GW emission from core-collapse supernovae. The range of current detectors is limited to the Milky Way, where the rate of supernovae is about one per century. However, if and when a galactic supernova happens, its GW signature will be within reach of existing detectors. Lorena Magaña Zertuche (U. of Mississippi) talked about the physics of black-hole ringdown – the process whereby gravitational waves are emitted in the aftermath of a binary black-hole merger – which is crucial for understanding astrophysical black holes and testing general relativity. Finally, Leïla Haegel (U. de Paris) described how the detection of GW dispersion would indicate the breaking of Lorentz symmetry: if a GW propagates according to a modified dispersion relation, its frequency modes will propagate at different speeds, changing the phase evolution of the signals with respect to general relativity.

Machine learning
Applications of different flavours of ML algorithms to GW astronomy, ranging from the detection of GWs to their characterisation in detector simulations, were the focus of the rest of the workshop.

ML has seen huge development in recent years and has been used increasingly in many fields of science. In GW astronomy, a variety of supervised, unsupervised and reinforcement ML algorithms, such as deep learning, neural networks, genetic programming and support vector machines, have been developed. They have been used successfully to deal with detector noise, signal processing and data analysis for signal detection, and to reduce the non-astrophysical background of GW searches. These algorithms must be able to deal with large data sets and achieve the high accuracy demanded to model theoretical waveforms and to perform searches at the limit of instrument sensitivities. The next step for the successful use of ML in GW science will be the integration of ML techniques with the more traditional numerical-analysis methods that have been developed for the modelling, real-time detection and analysis of signals.
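
For context, the classical technique that these ML methods complement and are benchmarked against is matched filtering. A minimal sketch – white noise and a toy chirp rather than a physical waveform and real detector noise – is given below:

```python
# Minimal matched filter: slide a template over noisy data, read off SNR.
import numpy as np

rng = np.random.default_rng(0)
fs = 4096                                   # sample rate (Hz)
t = np.arange(0, 0.25, 1 / fs)              # 0.25 s template

# Toy "chirp" whose frequency sweeps upward, loosely like an inspiral
template = np.sin(2 * np.pi * (50 * t + 400 * t**2))

data = rng.normal(0.0, 1.0, 4 * fs)         # 4 s of white detector noise
data[8192:8192 + t.size] += 0.3 * template  # bury a weak signal at t = 2 s

# Normalised overlap of template and data at every offset acts as an SNR
snr = np.correlate(data, template, mode="valid") / np.sqrt(np.sum(template**2))
peak = int(np.argmax(np.abs(snr)))
print(f"peak |SNR| = {abs(snr[peak]):.1f} at sample {peak} (injected at 8192)")
```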

The BIRS workshop provided a broad overview of the latest advances in this field, as well as open questions that need to be solved to apply robust ML techniques to a wide range of problems. These include reliable background estimation, modelling gravitational waveforms in regions of the parameter space not covered by full numerical relativity simulations, and determining populations of GW sources and their properties. Although ML for GW astronomy is in its infancy, there is no doubt that it will play an increasingly important role in the detection and characterisation of GWs, leading to new discoveries.

Exploring the CMB like never before https://cerncourier.com/a/exploring-the-cmb-like-never-before/ Wed, 09 Mar 2022 10:41:37 +0000 https://preview-courier.web.cern.ch/?p=97839 With telescopes at the South Pole and in the Chilean Atacama Desert, the newly endorsed CMB-S4 observatory will exceed the capabilities of earlier experiments by more than an order of magnitude.

To address the major questions in cosmology, the cosmic microwave background (CMB) remains the single most important phenomenon that can be observed. Not this author’s words, but those of the recent US National Academies of Sciences, Engineering, and Medicine report Pathways to Discovery in Astronomy and Astrophysics for the 2020s (Astro2020), which recommended that the US pursue a next-generation ground-based CMB experiment, CMB-S4, to enter operation in around 2030. 

The CMB comprises the photons created in the Big Bang. These photons have therefore experienced the entire history of the universe. Everything that has happened has left an imprint on them in the form of anisotropies in their temperature and polarisation with characteristic amplitudes and angular scales. The early universe was hot enough to be completely ionised, which meant that the CMB photons constantly scattered off free electrons. During this period the primary CMB anisotropies were imprinted, tracing the overall geometry of the universe, the fraction of the energy density in baryons, the number of light-relic particles and the nature of inflation. After about 375,000 years of expansion the universe cooled enough for neutral hydrogen atoms to be stable. With the free electrons rapidly swept up by protons, the CMB photons simply free-streamed in whatever direction they were last moving in. When we observe the CMB today we therefore see a snapshot of this so-called last-scattering surface.

The continued evolution of the universe had two main effects on the CMB photons. First, its ongoing expansion stretched their wavelengths to peak at microwave frequencies today. Second, the growth of structure eventually formed galaxy clusters that change the direction, energy and polarisation of the CMB photons passing through them, both through gravitational lensing by their mass and through inverse Compton scattering off the hot gas that makes up the intra-cluster medium. These secondary anisotropies therefore constrain all of the parameters that this history depends on, from the moment the first stars formed to the number of light-relic particles and the masses of neutrinos.
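
The stretching of the spectrum is easy to check numerically. Using Wien's displacement law in its frequency form, ν_peak = 2.821 k_BT/h, a black body at the roughly 3000 K of last scattering peaks in the near-infrared, while the 2.725 K CMB measured today peaks at microwave frequencies (both temperatures are standard textbook values):

```python
# Wien's law (frequency form): nu_peak = 2.821 * k_B * T / h
K_B, H = 1.380649e-23, 6.62607015e-34   # SI constants

def wien_peak_ghz(temp_k):
    """Black-body peak frequency in GHz for a temperature in kelvin."""
    return 2.821 * K_B * temp_k / H / 1e9

for label, temp in [("last scattering (~3000 K)", 3000.0),
                    ("CMB today (2.725 K)", 2.725)]:
    print(f"{label}: peak at {wien_peak_ghz(temp):,.1f} GHz")
```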

The temperature anisotropies of the CMB

As noted by the Astro2020 report, the history of CMB research is that of continuously improving ground and balloon experiments, punctuated by comprehensive measurements from the major satellite missions COBE, WMAP and Planck. The increasing temperature and polarisation sensitivity and angular resolution of these satellites is evidenced in the depth and resolution of the maps they produced (see “Relic radiation” image”). However, such maps are just our view of the CMB – one particular realisation of a random process. To derive the underlying cosmology that gave rise to them, we need to measure the amplitude of the anisotropies on various angular scales (see “Power spectra” figure). Following the serendipitous discovery of the CMB in 1965, the first measurements of the temperature anisotropy were made by COBE in 1992. The first peak in the temperature power spectrum was measured by the BOOMERanG and MAXIMA balloons in 2000, followed by the E-mode polarisation of the CMB by the DASI experiment in 2002, and the B-mode polarisation by the South Pole Telescope and POLARBEAR experiments in 2015.

CMB-S4, a joint effort supported by the US Department of Energy (DOE) and the National Science Foundation (NSF), will help write the next chapter in this fascinating adventure. Planned to comprise 21 telescopes at the South Pole and in the Chilean Atacama Desert, instrumented with more than 500,000 cryogenically cooled superconducting detectors, it will exceed the capabilities of earlier generations of experiments by more than an order of magnitude and deliver transformative discoveries in fundamental physics, cosmology, astrophysics and astronomy.

The CMB-S4 challenge 

Three major challenges must be addressed to study the CMB at such levels of precision. Firstly, the signals are extraordinarily faint, requiring massive datasets to reduce the statistical uncertainties. Secondly, we have to contend with systematic effects both from imperfect instruments and from the environment, which must be controlled to exquisite precision if they are not to swamp the signals. Finally, the signals are obscured by other sources of microwave emission, especially galactic synchrotron and dust emission. Unlike the CMB, these sources do not have a black-body spectrum, so it is possible to distinguish between CMB and non-CMB sources if observations are made at enough microwave frequencies to break the degeneracy.
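
A stripped-down sketch of this multi-frequency idea – two components, four invented bands and an assumed dust spectral index, far simpler than a real component-separation pipeline – shows how the degeneracy is broken pixel by pixel:

```python
# Separate a flat-spectrum CMB signal from a rising dust foreground.
import numpy as np

freqs = np.array([95.0, 145.0, 220.0, 270.0])  # assumed bands (GHz)

a_cmb = np.ones_like(freqs)          # CMB: flat in thermodynamic units
a_dust = (freqs / 150.0) ** 1.6      # dust: assumed power-law scaling

true_cmb, true_dust = 50.0, 30.0     # amplitudes in one pixel (uK)
rng = np.random.default_rng(3)
maps = true_cmb * a_cmb + true_dust * a_dust + rng.normal(0, 1.0, freqs.size)

# Least-squares fit of the two known scalings to the four band maps
A = np.column_stack([a_cmb, a_dust])
(cmb_hat, dust_hat), *_ = np.linalg.lstsq(A, maps, rcond=None)
print(f"CMB: {cmb_hat:.1f} uK (true 50.0), dust: {dust_hat:.1f} uK (true 30.0)")
```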

Power spectra of the CMB

This third challenge actually proves to be an astrophysical blessing as well as a cosmological curse: CMB observations are also excellent legacy surveys of the millimetre-wave sky, which can be used for a host of other science goals. These range from cataloguing galaxy clusters, to studying the Milky Way, to detecting spatial and temporal transients such as gamma-ray bursts via their afterglows.

Coming together

In 2013 the US CMB community came together in the Snowmass planning process, which informs the deliberations of the decadal Particle Physics Project Prioritization Panel (P5). We realised that achieving the sensitivity needed to make the next leap in CMB science would require an experiment of such magnitude (and therefore cost) that it could only be accomplished as a community-wide endeavour, and that we would therefore need to transition from multiple competing experiments to a single collaborative one. By analogy with the US dark-energy programme, this was designated a “Stage 4” experiment, and hence became known as CMB-S4. 

In 2014 a P5 report made the critical recommendation that the DOE should support CMB science as a core piece of its programme. The following year a National Academies report identified CMB science as one of three strategic priorities for the NSF Office of Polar Programs. In 2017 the DOE, NSF and NASA established a task force to develop a conceptual design for CMB-S4, and in 2019 the DOE took “Critical Decision 0”, identifying the mission need and initiating the CMB-S4 construction project. In 2020 Berkeley Lab was appointed the lead laboratory for the project, with Argonne, Fermilab and SLAC all playing key roles. Finally, late last year, the long-awaited Astro2020 report unconditionally recommended CMB-S4 as a joint NSF and DOE project with an estimated cost of $650 million. With these recommendations in place, the CMB-S4 construction project could begin.

CMB-S4 constraints

From the outset, CMB-S4 was intended to be the first sub-orbital CMB experiment designed to reach specific critical scientific thresholds, rather than simply to maximise the science return under a particular cost cap. Furthermore, as a community-wide collaboration, CMB-S4 will be able to adopt and adapt the best of all previous experiments’ technologies and methodologies – including operating at the site best suited to each science goal. One third of the major questions and discovery areas identified across the six Astro2020 science panels depend on CMB observations.

The critical degrees of freedom in the design of any observation are the sky area, frequency coverage, frequency-dependent depth and angular resolution, and observing cadence. Having reviewed the requirements across the gamut of CMB science, four driving science goals have been identified for CMB-S4. 

For the first time, the entire community is coming together to build an experiment defined by achieving critical science thresholds

The first is to test models of inflation via the primordial gravitational waves they naturally generate. Such gravitational waves are the only known source of a primordial B-mode polarisation signal. The size of these primordial B-modes is quantified by the ratio of the primordial tensor (gravitational-wave) power to the scalar (density) power – the tensor-to-scalar ratio, designated r. For the largest and most popular classes of inflationary models, CMB-S4 will make a 5σ detection of r, while failure to make such a measurement will put an upper limit of r ≤ 0.001 at 95% confidence, setting a rigorous constraint on alternative models (see “Constraining inflation” figure). The large-scale B-mode polarisation signal encoding r is the faintest of all the CMB signals, requiring both the deepest measurement and the widest low-resolution frequency coverage of any CMB-S4 science case.
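
For reference – the convention is standard, though not spelled out above – r is the ratio of the primordial tensor and scalar power-spectrum amplitudes at a pivot scale k∗, and its value fixes the energy scale of inflation (the second relation assumes the measured scalar amplitude A_s ≈ 2.1 × 10⁻⁹):

```latex
r \;\equiv\; \frac{A_t(k_*)}{A_s(k_*)}, \qquad
V^{1/4} \;\simeq\; 1.0\times10^{16}\,\mathrm{GeV}\,\left(\frac{r}{0.01}\right)^{1/4}.
```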

The second goal concerns the dark universe. Dark matter and dark energy make up 95% of the universe’s mass-energy content, and their particular form and composition impact the growth of structure and thus the small-scale CMB anisotropies. The collective influence of the three known light-relic particles (the Standard Model neutrinos) has already been observed in CMB data, but many new light species, such as axion-like particles and sterile neutrinos, are predicted by extensions of the Standard Model. CMB-S4’s goal, and the most challenging measurement in this arena, is to detect any additional light-relic species with freeze-out temperatures up to the QCD phase-transition scale. This corresponds to constraining the uncertainty on the number of light-relic species, N_eff, to ≤ 0.06 at 95% confidence (see “Light relics” figure). Precise measurements of the small-scale temperature and E-mode polarisation signals that encode this physics require the largest sky area of any CMB-S4 science case. In addition, since the sum of the masses of the neutrinos impacts the degree of lensing of the E-mode polarisation into small-scale B-modes, CMB-S4 will be able to constrain this sum around a fiducial value of 58 meV with a 1σ uncertainty ≤ 24 meV (in conjunction with baryon acoustic oscillation measurements) and ≤ 14 meV with better measurements of the optical depth to reionisation.
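
For orientation (standard cosmology, not spelled out in the article): N_eff enters the radiation density as below, with a Standard Model value of about 3.044. A single real scalar relic adds ΔN_eff ≈ 0.05 if it froze out around the QCD transition, falling towards an asymptotic floor of ≈ 0.027 for arbitrarily early freeze-out – which is what the ≤ 0.06 target at 95% confidence is designed to probe.

```latex
\rho_{\mathrm{rad}} \;=\; \rho_\gamma\left[\,1 + \frac{7}{8}\left(\frac{4}{11}\right)^{4/3} N_{\mathrm{eff}}\right],
\qquad N_{\mathrm{eff}}^{\mathrm{SM}} \simeq 3.044 .
```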

Current and anticipated CMB-S4 constraints

The third science goal is to understand the formation and evolution of galaxy clusters, and in particular to probe the early period of galaxy formation at redshifts z > 2. This is enabled by the Sunyaev–Zel’dovich (SZ) effect, whereby CMB photons are up-scattered by the hot, moving gas in the intra-cluster medium. This shifts the CMB photons’ frequency spectrum, resulting in a decrement at frequencies below 217 GHz and an increment at frequencies above, allowing clusters to be identified by matching up the corresponding cold and hot spots. A key feature of the SZ effect is its redshift independence, allowing us to generate complete, flux-limited catalogues of clusters down to the survey sensitivity. The small-scale temperature signals needed for such a catalogue require the highest angular resolution and the widest high-resolution frequency coverage of all the CMB-S4 science cases.
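
The 217 GHz null quoted above follows directly from the standard non-relativistic thermal SZ spectral function; the short sketch below simply locates the null numerically (the y-amplitude and the example bands are illustrative, not CMB-S4 specifics).

```python
import numpy as np
from scipy.optimize import brentq

h, k, T_cmb = 6.626e-34, 1.381e-23, 2.725

def f_sz(nu_ghz):
    """Non-relativistic thermal SZ spectral function: dT/T = f(x) * y."""
    x = h * nu_ghz * 1e9 / (k * T_cmb)
    return x / np.tanh(x / 2.0) - 4.0

# The null where the decrement flips to an increment:
nu_null = brentq(f_sz, 100.0, 400.0)
print(f"SZ null at ~{nu_null:.0f} GHz")          # ~217 GHz
print(f"f(150 GHz) = {f_sz(150.0):+.2f} (decrement)")
print(f"f(280 GHz) = {f_sz(280.0):+.2f} (increment)")
```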

Finally, CMB-S4 aims to explore the mm-wave transient sky, in particular the rate of gamma-ray bursts to help constrain their mechanisms (a few hours to days after the initial event, gamma-ray bursts are observable at longer wavelengths). CMB-S4 will be so sensitive that even its daily maps will be deep enough to detect mm-wave transient phenomena – either spatial from nearby objects moving across our field, or temporal from distant objects exploding in our field. This is the only science goal that places constraints on the survey cadence, specifically on the lag between repeated observations of the same point on the sky. Given its large field of view, CMB-S4 will be an excellent tool for serendipitous discovery of transients but less useful for follow-up observations. The plan is therefore to issue daily alerts for other teams to follow up with targeted observations.

Survey design

While it would be possible to meet all of the CMB-S4 science goals with a single survey, the result – requiring the sensitivity of the inflation survey across the area of the light-relic survey – would be prohibitively expensive. Instead, the requirements have been decoupled into an ultra-deep, small-area survey to meet the inflation goal and a deep, wide-area survey to meet the light-relic goal, the union of these providing a two-tier “wedding cake” survey for the cluster and gamma-ray-burst goals.

Having set the survey requirements, the task was to identify sites at which these observations can most efficiently be made, taking into account the associated cost, schedule and risk. Water vapour is a significant source of noise at microwave frequencies, so the first requirement on any site is that it be high and dry. A handful of locations meet this requirement, and two of them – the South Pole and the high Chilean Atacama Desert – have both exceptional atmospheric conditions and long-standing US CMB programmes. Their positions on Earth also make them ideally suited to CMB-S4’s two-survey strategy: the polar location enables us to observe a small patch of sky continuously, minimising the time needed to reach the required observation depth, and the more equatorial Chilean location enables observations over a large sky area.

CMB-S4 observatory telescopes

Finally, we know that instrumental systematics will be the limiting factor in resolving the extraordinarily faint large-scale B-mode signal. To date, the experiments that have shown the best control of such systematics have used relatively small-aperture (~0.5 m) telescopes. However, the secondary lensing of the much brighter E-mode signal to B-modes, while enabling us to measure the neutrino-mass sum, also obscures the primordial B-mode signal coming from inflation. We therefore need a detailed measurement of this medium- to small-scale lensing signal in order to be able to remove it at the necessary precision. This requires larger, higher-resolution telescopes. The ultra-deep field is therefore itself composed of coincident low- and high-resolution surveys.

A key feature of CMB-S4 is that all of the technologies are already well-proven by the ongoing Stage 3 experiments. These include CMB-S4’s “founding four” experiments, the Atacama Cosmology Telescope (ACT) and POLARBEAR/Simons Array (PB/SA) in Chile, and BICEP/Keck (BK) and the South Pole Telescope (SPT) at the South Pole, which have pairwise-merged into the Simons and South Pole Observatories (SO and SPO). The ACT, PB/SA, BK and SPT are all single-aperture, single-site experiments, while SO and SPO are dual-aperture, single-site observatories. CMB-S4 is therefore the first experiment able to take advantage of both apertures and both sites.

The key difference with CMB-S4 is that it will deploy these technologies on an unprecedented scale. As a result, the primary challenges for CMB-S4 are engineering ones, both in fabricating detector and readout modules in huge numbers and in deploying them in cryostats on telescopes with unprecedented systematics control. The observatory will comprise: 18 small-aperture refractors collectively fielding about 150,000 detectors across eight frequencies for measuring large angular scales; one large-aperture reflector with about 130,000 detectors across seven frequencies for measuring medium-to-small angular scales in the ultra-deep survey from the South Pole; and two large-aperture reflectors collectively fielding about 275,000 detectors across six frequencies for measuring medium-to-small angular scales in the wide-deep survey from Chile (see “Looking up” image). The final configuration maximises the use of available atmospheric windows to control for microwave foregrounds (particularly synchrotron and dust emission at low and high frequencies, respectively), and to meet the frequency-dependent depth and angular-resolution requirements of the surveys. 
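
As a quick tally of the reference design quoted above (approximate figures that will evolve with the project):

```python
# Back-of-envelope tally of the detector counts quoted in the text.
counts = {
    "18 small-aperture refractors (South Pole)": 150_000,
    "1 large-aperture reflector (South Pole)":   130_000,
    "2 large-aperture reflectors (Chile)":       275_000,
}
print(f"total detectors: ~{sum(counts.values()):,}")   # ~555,000
```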

CMB-S4 will be able to adopt and adapt the best of all previous experiments’ technologies and methodologies

Covering the frequency range 20–280 GHz, the detectors employ dichroic pixels at all but one frequency (to maximise the use of the available focal plane) using superconducting transition-edge sensors, which have become the standard in the field. A major effort is already underway to scale up the production and reduce the fabrication variance of the detectors, taking advantage of the DOE national laboratories and industrial partners. Reading out such large numbers of detectors with limited power is a significant challenge, leading CMB-S4 to adopt the conservative but well-proven time-domain multiplexing approach. The detector and readout systems will be assembled into modules that will be cryogenically cooled to 100 mK to reduce instrument noise. Each large-aperture telescope will carry an 85-tube cryostat with a single wafer per optics tube, while each small-aperture telescope will carry a single optics tube with 12 wafers per tube, with three telescopes sharing a common mount.

Prototyping of detector and readout fabrication lines, and building up module assembly and testing capabilities, is expected to begin in earnest this year. At the same time, the telescope designs will be refined and the data acquisition and management subsystems developed. The current schedule sees a staggered commissioning of the telescopes in 2028–2030, and operations running for seven years thereafter.

Shifting paradigms

CMB-S4 represents a paradigm shift for sub-orbital CMB experiments. For the first time, the entire community is coming together to build an experiment defined by achieving critical science thresholds in fundamental physics, cosmology, astrophysics and astronomy, rather than by its cost cap. CMB-S4 will span the entire range of CMB science in a single experiment, take advantage of the best of all worlds in the design of its observation and instrumentation, and make the results available to the entire CMB community. As an extremely sensitive, two-tiered, multi-wavelength, mm-wave survey, it will also play a key role in multi-messenger astrophysics and transient science. Taken together, these measurements will constitute a giant leap in our study of the history of the universe.

Ruins of ancient star system found within our galaxy https://cerncourier.com/a/ruins-of-ancient-star-system-found-within-our-galaxy/ Wed, 09 Mar 2022 09:34:20 +0000 https://preview-courier.web.cern.ch/?p=97741 Despite it being our galactic home, many open questions remain about the origin and evolution of the Milky Way.

Despite it being our galactic home, many open questions remain about the origin and evolution of the Milky Way. To answer such questions, astronomers study individual stars and clusters of stars within our galaxy as well as those in others. Using data from the European Space Agency’s Gaia satellite, which is undertaking the largest and most precise 3D map of our galaxy by surveying an unprecedented one per cent of the Milky Way’s 100 billion or so stars, an international group has discovered a stream of stars spread across the night sky with peculiar characteristics. The stars appear not only to be very old, but also very similar to one another, indicating a common origin.

The discovered stream of stars, called C-19, is spread over tens of thousands of light years and appears to be the remnant of a globular cluster. A globular cluster is a very dense clump of stars with a typical total mass of 10⁴ or 10⁵ solar masses, the centre of which can be so dense that stable planetary systems cannot form due to gravitational disruptions from neighbouring stars. Additionally, such clusters are typically very old. Estimates based on the luminosity of dead cooling remnants (white dwarfs) reveal some to be up to 12.8 billion years old, in stark contrast to neighbouring stars in their host galaxies. The origin and formation of these clusters, and how they end up in their host galaxies, remain poorly understood.

The stars appear not only to be very old, but also very similar to one another, indicating a common origin

One way to discern the age of globular clusters is to study the elemental composition of the stars within them. This is often expressed as the metallicity, which is the ratio of all elements heavier than hydrogen and helium (confusingly referred to as metals in the astronomical community) to these two light elements. Hydrogen and helium were produced during the Big Bang, while anything heavier was produced in the first generation of stars, implying that the first generation of stars had zero metallicity and that the metallicity increases with each generation. Until recently the lowest metallicities of stars in globular clusters were 0.2% that of the Sun. This “lower floor” in metallicity was thought to put constraints on their maximum age and size, with lower-metallicity clusters thought to be unable to survive to this day. The newly discovered stream, however, has metallicities lower than 0.05% that of the Sun, changing this perception.
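
To translate these percentages into the logarithmic [Fe/H] notation used in the astronomical literature (with the simplifying assumption that iron traces the total metal content):

```python
import math

# Convert 'per cent of solar metallicity' into [Fe/H] = log10(Z/Z_sun),
# assuming iron traces the total metal content (a simplification).
def fe_h(fraction_of_solar):
    return math.log10(fraction_of_solar)

print(f"previous floor, 0.2% solar:  [Fe/H] = {fe_h(0.002):.1f}")   # -2.7
print(f"C-19 stream, <0.05% solar:   [Fe/H] < {fe_h(0.0005):.1f}")  # -3.3
```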

Captured clusters

The stars in the recently observed C-19 stream are no longer a dense cluster. Rather, they all follow the same orbit within our galaxy, the plane of which is almost perpendicular to the galactic disc in which we orbit the galactic centre. This shared orbit, together with their very similar metallicity and general chemical content, indicates that the stars once formed a globular cluster that was absorbed by the Milky Way. The orbital dynamics further indicate that the capture occurred when the potential well of the Milky Way was significantly smaller than it is now, implying that our galaxy absorbed the cluster long ago. Since then, the once-dense cluster has been heated and smeared out as it orbited the galactic centre, through interactions with the disc as well as with the putative dark-matter halo.

The discovery, published in Nature, does not directly answer the question of where and how globular clusters were formed. It does however provide us with a nearby laboratory to study issues like cluster and galaxy formation, the merging of such objects and the subsequent destruction of the cluster through interactions with both baryonic matter and, potentially, dark matter. This particular cluster furthermore consists of some of the oldest stars found, and could have formed before the reionisation of the universe, which is thought to have taken place between 150 million and a billion years after the Big Bang. Further information about such ancient objects can be expected soon thanks to the recently launched James Webb Space Telescope. This instrument will be able to see some of the earliest formed galaxies, and can thereby provide additional clues on the origin of the fossils now found within our own galaxy.

Webb prepares to eye dark universe https://cerncourier.com/a/webb-prepares-to-eye-dark-universe/ Thu, 24 Feb 2022 15:49:50 +0000 https://preview-courier.web.cern.ch/?p=97708 In addition to studying galaxy formation, the James Webb Space Telescope will deepen our understanding of dark matter and dark energy.

After 25 years of development, the James Webb Space Telescope (JWST) successfully launched from Europe’s spaceport in French Guiana on the morning of 25 December. Nerves were on edge as the Ariane 5 rocket blasted its $10 billion cargo through the atmosphere, aided by a velocity kick from its equatorial launch site. An equally nail-biting moment came 27 minutes later, when the telescope separated from the launch vehicle and deployed its solar array. In scenes reminiscent of those at CERN on 10 September 2008 when the first protons made their way around the LHC, the JWST command centre erupted in applause. “Go Webb, go!” cheered the ground team as the craft drifted into the darkness.

The result of an international partnership between NASA, ESA and the Canadian Space Agency, Webb took a similar time to design and build as the LHC and cost almost twice as much. Its science goals are also complementary to particle physics. The 6.2 tonne probe’s primary mirror – the largest ever flown in space, with a diameter of 6.5 m compared to 2.4 m for its predecessor, Hubble – will detect light, stretched to the infrared by the expansion of the universe, from the very first galaxies. In addition to shedding new light on the formation of galaxies and planets, Webb will deepen our understanding of dark matter and dark energy. “The promise of Webb is not what we know we will discover,” said NASA administrator Bill Nelson after the launch. “It’s what we don’t yet understand or can’t yet fathom about our universe. I can’t wait to see what it uncovers!”

The promise of Webb is not what we know we will discover. It’s what we don’t yet understand or can’t yet fathom about our universe

Bill Nelson

Five days after launch, Webb successfully unfurled and tensioned its 300 m² sunshield. The craft’s final orbit around the Earth–Sun Lagrange point 2 (L2) keeps the Sun, Earth and Moon all on the same side of the spacecraft, but the sunshield is still needed to keep its four science instruments operating at 34 K. The delicate deployment procedure involved 139 release mechanisms, 70 hinge assemblies, some 400 pulleys and 90 individual cables – each of which was a potential single-point failure. Just over one week later, on 7 and 8 January, the two wings of the primary mirror, which had to be folded in for launch, were opened, involving the final four of a total of 178 release mechanisms. The ground team then began the long procedure of aligning the telescope optics via 126 actuators on the backside of the primary mirror’s 18 hexagonal segments. On 24 January, having completed a 1.51 million-km journey, the observatory successfully inserted itself into its orbit at L2, marking the end of the complex deployment process and the beginning of commissioning activities. The process will take months, with Webb scheduled to return its first science images in the summer.

James Webb

The 1998 discovery of the accelerating expansion of the universe, which implies that around 70% of the universe is made up of an unknown dark energy, stemmed from observations of distant type-Ia supernovae that appeared fainter than expected. While the primary evidence came from ground-based observations, Hubble helped confirm the existence of dark energy via optical and near-infrared observations of supernovae at earlier times. Uniquely, Webb will allow cosmologists to see even farther, from as early as 200 million years after the Big Bang, while also extending the observation and cross-calibration of other standard candles, such as Cepheid variables and red giants, beyond what is currently possible with Hubble. Operating in the infrared rather than the optical regime also means less scattering of light by interstellar dust.

With these capabilities, the JWST should enable the local rate of expansion to be determined to a precision of 1%. This will bring important information to the current tension between the measured expansion rate at early and late times, as quantified by the Hubble constant, and possibly shed light on the nature of dark energy.
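
As a schematic of what a local expansion-rate measurement involves, the sketch below fits v = H₀d to mock calibrated distances; all numbers are invented for illustration and do not represent JWST data or its error budget.

```python
import numpy as np

# Toy distance-ladder estimate of H0: fit v = H0 * d to mock nearby
# supernova hosts. Numbers are illustrative only.
rng = np.random.default_rng(1)
d = rng.uniform(20, 150, 40)               # calibrated distances, Mpc
H0_true = 70.0                             # km/s/Mpc
v = H0_true * d + rng.normal(0, 200, 40)   # peculiar-velocity scatter

H0_fit = np.sum(v * d) / np.sum(d * d)     # least squares through origin
sigma = 200 * np.sqrt(1 / np.sum(d * d))   # statistical error on H0
print(f"H0 = {H0_fit:.1f} +/- {sigma:.1f} km/s/Mpc")  # sub-per-cent here
```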

Launching Webb is a huge celebration of the international collaboration that made this mission possible

Josef Aschbacher

By measuring the motion and gravitational lensing of early objects, Webb will also survey the distribution of dark matter, and might even hint at what it’s made of. “In order to make progress in the identification of dark matter, we need observations that clearly discriminate among the tens of possible explanations that theorists have put forward in the past four decades,” explains Gianfranco Bertone, director of the European Consortium for Astroparticle Theory. “If dark matter is ‘warm’ for example – meaning that it is composed of particles moving at mildly relativistic speeds when first structures are assembled – we should be able to detect its imprint on the number density of small dark-matter halos probed by the JWST. Or, if dark matter is made of primordial black holes, as suggested in the early 1970s by Stephen Hawking, the JWST could detect the faint emission produced by the accretion of gas onto these objects in early epochs.”

On 11 February, Webb returned images of its first star in the form of 18 blurry white dots, the product of the unaligned primary-mirror segments all reflecting light from the same star back at the secondary mirror and into its near-infrared camera. Though underwhelming at first sight, this and similar images are crucial to allow operators to gradually align and focus the hexagonal mirror segments until 18 images become one. After that, Webb will start downlinking science data at a rate of about 60 GB per day.

“Launching Webb is a huge celebration of the international collaboration that made this next-generation mission possible,” said ESA director-general Josef Aschbacher. “We are close to receiving Webb’s new view of the universe and the exciting scientific discoveries that it will make.”

Exploring the early universe with gravitational waves https://cerncourier.com/a/exploring-the-early-universe-with-gravitational-waves/ Tue, 18 Jan 2022 13:16:09 +0000 https://preview-courier.web.cern.ch/?p=97030 Theorists and experimentalists met at CERN in October to discuss new detector concepts and theoretical approaches to search for a cosmological gravitational-wave background.

Seven years after the direct detection of gravitational waves (GWs), particle physicists around the world are preparing for the next milestone in GW astronomy: the search for a cosmological stochastic GW background. Current and planned GW observatories roughly cover 12 orders of magnitude from the nanohertz to kilohertz regimes, in which astrophysical models predict sizable GW signals from the mergers of compact objects such as black holes and neutron stars, as observed by the LIGO/Virgo collaborations. It is also expected that the universe contains a randomly distributed GW background, which is yet to be detected. This could be the result of various known and unknown astrophysical signals that are too weak to be resolved individually, or of hypothetical processes in the very early universe, such as phase transitions at high temperatures. The most promising region to search for the latter is arguably the ultra-high-frequency (UHF) regime encompassing megahertz and gigahertz GWs, which is beyond the reach of current detectors. The detection of such a stochastic GW background could therefore offer a powerful probe of the early universe and of physics beyond the Standard Model.

On 12–15 October a virtual workshop hosted by CERN explored theoretical models and detector concepts targeting the UHF GW regime. Following an initial meeting at ICTP Trieste in 2019 and the publication of a Living Review on UHF GWs, the goal of the workshop was to bring together theorists and experimentalists to discuss feasibility studies and prototypes of existing detector concepts as well as to review more recent proposals.

The wide range of detector concepts discussed demonstrates the rapid evolution of this field and shows the difficulty in choosing the optimal strategy. Tailoring “light shining through wall” experiments for GWs is one promising approach. In the presence of a static magnetic field, general relativity in conjunction with electrodynamics allows GWs to generate electromagnetic radiation at the same frequency, similar to the conversion of the hypothetical axion into photons. In this case, the bounds placed on “axion to photon” couplings, for example as determined by the CAST and OSQAR experiments at CERN or the ALPS experiments at DESY, can be recast as GW bounds. 
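
The correspondence can be made explicit at the back-of-envelope level. For a uniform transverse magnetic field B over a length L (natural units, and up to order-one factors that depend on the set-up), the axion and graviton conversion probabilities take the same form, with the axion–photon coupling g_aγ replaced by the inverse Planck mass – which is what allows the helioscope bounds to be recast:

```latex
P_{a\to\gamma} \;=\; \left(\frac{g_{a\gamma}\,B\,L}{2}\right)^{2}
\qquad\longrightarrow\qquad
P_{g\to\gamma} \;\sim\; \left(\frac{B\,L}{M_{\mathrm{Pl}}}\right)^{2}.
```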

The sheer variety of systems offers a new playground for creative ideas and underlines the cross-disciplinary nature of this field 

Another approach, echoing that of the very first GW searches in the late 1960s, is to detect the mechanical deformation induced by GWs at the base of resonant-bar detectors, which can be implemented in the UHF regime using centimetre-sized bulk acoustic wave devices common in radio-frequency engineering. Resonant microwave cavities are another way to detect interactions between GWs and electromagnetism; they have been explored in the past, for example by the MAGO collaboration at CERN (2004–2007), and proposed as a modified version of the ADMX experiment at the University of Washington. Further proposals include the precise measurement of optically levitated nanoparticles, transitions in Bose–Einstein condensates, mesoscopic quantum systems, cosmological detectors and magnon systems. The sheer variety of systems, the majority of which are much smaller and less costly than long-baseline interferometric detectors, offers a new playground for creative ideas and underlines the cross-disciplinary nature of this field. Working groups set up during the workshop will investigate some of the most promising ideas in more detail within the next months.

Complementing the discussion about detector concepts, theorists presented BSM models that predict violent processes in the early universe, which could source strong GW signals. These arise, for example, in some models of cosmic inflation, during the transition between cosmic inflation and the radiation-dominated universe, or from spontaneous symmetry-breaking processes. Since these processes occur isotropically everywhere in the universe, the expected signal is a diffuse gravitational-wave background. Moreover, some relics of these processes, such as topological defects and primordial black holes, may have survived until the late universe and may still be actively emitting gravitational waves.

The current sensitivity of all proposed and existing detector concepts is several orders of magnitude away from the expected cosmological GW signals. Given that the first laser-interferometer GW detectors built in the 1970s were eight orders of magnitude below the sensitivity of the currently operating LIGO/Virgo/KAGRA observatories, however, there is every reason to think that the search for UHF GWs is the beginning and not the end of a story.  

How the Sun and stars shine https://cerncourier.com/a/how-the-sun-and-stars-shine/ Tue, 21 Dec 2021 13:41:44 +0000 https://preview-courier.web.cern.ch/?p=96594 Precise measurements of solar neutrinos have enabled the Borexino experiment to definitively observe the two main fusion reactions in stars.

Staring at the Sun

Each second, fusion reactions in the Sun’s core fling approximately 60 billion neutrinos onto every square centimetre of the Earth. In the late 1990s, the Borexino experiment at Gran Sasso National Laboratory in Italy was conceived to measure these neutrinos right down to a few tens of keV, where the bulk of the flux lies. The detector’s name means “little Borex” and refers to an earlier idea for a large experiment with a boron-loaded liquid scintillator, which was shelved in favour of the present, smaller and more ambitious detector. Rather than studying rare but high-energy ⁸B neutrinos from a little-followed branch of the proton–proton (pp) fusion chain, Borexino would target the far more numerous but lower energy neutrinos produced in the Sun by electron captures on ⁷Be.
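
The 60 billion figure can be checked on the back of an envelope: each completed pp chain (4p → ⁴He + 2e⁺ + 2ν) liberates about 26.7 MeV and two neutrinos, so the solar luminosity fixes the flux at one astronomical unit. The sketch below neglects the few per cent of the energy carried off by the neutrinos themselves.

```python
import math

# Back-of-envelope check of ~60 billion neutrinos/cm^2/s at Earth.
L_sun = 3.828e26          # solar luminosity, W
Q = 26.7 * 1.602e-13      # J released per completed pp chain
AU = 1.496e13             # Earth-Sun distance, cm

rate = L_sun / Q                            # chain completions per second
flux = 2 * rate / (4 * math.pi * AU**2)     # two neutrinos per chain
print(f"pp-neutrino flux at Earth ~ {flux:.1e} /cm^2/s")   # ~6e10
```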

The fusion reactions generating the Sun’s energy

Three decades after its conception, Borexino has far exceeded this goal thanks to the exceptional radiopurity of the experimental apparatus (see “Detector design” panel). Special care taken in construction and commissioning has achieved a radiopurity about three orders of magnitude better than predicted, and 10 to 12 orders of magnitude below natural radioactivity. This has allowed the collaboration to probe the entire solar-neutrino spectrum, including not only the pp chain, but also the carbon–nitrogen–oxygen (CNO) cycle. This mechanism plays a minor role in the Sun but becomes important for more massive stars, dominating the energy production and the production of elements heavier than helium in the universe at large.

The heart of the Sun

The pp-chain generates 99% of the energy in the Sun: it begins when two protons fuse to produce a deuteron and an electron neutrino – the so-called pp neutrino (see “Chain and cycle” figure). Subsequent reactions produce light elements, such as ³He, ⁴He, ⁷Be, ⁷Li and ⁸B, and more electron neutrinos. In Borexino, the sensitivity to pp neutrinos depends on the amount of ¹⁴C in the liquid scintillator: with an end-point energy of 0.156 MeV, compared with a maximum visible energy for pp neutrinos of 0.264 MeV, the ¹⁴C → ¹⁴N + e⁻ + ν̄ beta decay sets the detection threshold and hence the feasibility of probing pp neutrinos. The Borexino scintillator was therefore made using petroleum from very old and deep geological layers, to ensure a low content of ¹⁴C.

Detector design

Like many particle-physics detectors, Borexino has an onion-like design, with the innermost layers having the highest radiopurity. The detector’s active core consists of 278 tonnes of pseudocumene (C₉H₁₂) scintillator. Into this is dissolved 2,5-diphenyloxazole (PPO) at a concentration of 1.5 grams per litre, which shifts the emission light to 400 nm, where the sensitivity of the photomultipliers peaks. The scintillator is contained within a 125 μm-thick nylon inner vessel (IV) with a 4.5 m radius – made thin to reduce radiation emitted by the nylon itself. In addition, the IV stops radon diffusing towards the core of the detector.

Borexino design

The IV is contained within a 7 m-radius stainless-steel sphere (SSS) that supports 2212 photomultipliers (PMTs) and contains 1000 tonnes of pseudocumene as high-radio-purity shielding liquid against radioactivity from PMTs and the SSS itself. Between the SSS and the IV, a second nylon balloon acts as a barrier preventing radon and its progeny from reaching the scintillator. The SSS is contained in a 2400-tonne tank of highly purified water which, together with Borexino’s underground location, shields the detector from environmental radioactivity. The tank boasts a muon detector to tag particles crossing the detector. 

When a neutrino interacts in the target volume, energy deposited by the decelerating electron is registered by a handful of PMTs. The neutrino’s energy can be obtained from the total charge, and the hit-time distribution is used to infer the location of the event’s vertex. Recoiling electrons are used to tag electron neutrinos, and the combination of a positron annihilation and a neutron capture on hydrogen (an inverse beta decay) is used to tag electron antineutrinos.
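
A toy version of the reconstruction described above – a vertex inferred from hit times, given an effective light speed in the scintillator – might look as follows. The PMT layout, timing jitter and effective speed are invented for the example and are not Borexino parameters.

```python
import numpy as np
from scipy.optimize import minimize

# Toy vertex reconstruction from PMT hit times (illustrative numbers).
c_eff = 19.0   # effective light speed in scintillator, cm/ns (~c/n)

rng = np.random.default_rng(0)
# 50 "PMTs" placed randomly on a 650 cm sphere
phi = rng.uniform(0, 2 * np.pi, 50)
cos_t = rng.uniform(-1, 1, 50)
pmts = 650.0 * np.stack([np.sqrt(1 - cos_t**2) * np.cos(phi),
                         np.sqrt(1 - cos_t**2) * np.sin(phi),
                         cos_t], axis=1)

true_vtx = np.array([50.0, -30.0, 80.0])
t_hit = np.linalg.norm(pmts - true_vtx, axis=1) / c_eff \
        + rng.normal(0, 0.5, 50)        # 0.5 ns timing jitter

def cost(p):
    # Variance of time residuals is insensitive to the unknown event time
    t_exp = np.linalg.norm(pmts - p, axis=1) / c_eff
    return np.var(t_hit - t_exp)

fit = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
print("reconstructed vertex (cm):", np.round(fit.x, 1))
```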

Due to the impossibility of discriminating individual solar-neutrino events from the backgrounds, the greatest challenge has been the reduction of natural radioactivity to unprecedented levels. In the early 1990s, Borexino developed innovative techniques such as under-vacuum distillation, water extraction, ultrafiltration and nitrogen sparging with ultra-high radiopurity nitrogen to reduce radioactive impurities in the scintillator to 10⁻¹⁰ Bq/kg or better. An initial detector called the Counting Test Facility was developed as a means to demonstrate such claims, publishing results for the key uranium, thorium and krypton backgrounds in 1995. Full data taking at Borexino began in 2007.

Since data-taking began in 2007, Borexino has measured, for the first time, all the individual fluxes produced in the pp chain. In 2014 the collaboration made the first definitive observation of pp neutrinos, using a comparison with the predicted energy spectrum. In 2018 the collaboration performed, with the same apparatus, a measurement of all the pp-chain components (pp, ⁷Be, pep and ⁸B neutrinos), demonstrating the large-scale energy-generation mechanism in the Sun for the first time (see “Energy spectrum” figure). This spectral fit allowed the collaboration to directly determine the ratio between the interaction rate of ³He + ³He fusions and that of ³He + ⁴He fusions – a crucial parameter for characterising the pp chain and its energy production.

The simultaneous measurement of pp-chain neutrino fluxes also gave Borexino a unique window onto the famous “vacuum–matter” transition, whereby coherent forward scattering off electrons, mediated by virtual W bosons, modifies neutrino-oscillation probabilities as neutrinos propagate through matter, enhancing the oscillation probability as a function of energy. In 2018 Borexino measured the solar electron–neutrino survival probability, Pₑₑ, in the energy range from a few tens of keV up to 15 MeV (see “Survival probability” figure). This was the first direct observation of the transition from a low-energy vacuum regime (Pₑₑ ≈ 0.55) to a higher energy matter regime where neutrino propagation is dominantly affected by the solar interior (Pₑₑ ≈ 0.32). The transition was measured by Borexino at the level of 98% confidence.
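
A minimal two-flavour sketch, neglecting θ₁₃ (which nudges both numbers a few per cent towards the quoted 0.55 and 0.32), shows where the two regimes come from: in vacuum the oscillations average out, while in the dense solar core the MSW effect drives the survival probability to sin²θ₁₂ ≈ 0.31:

```latex
P_{ee}^{\mathrm{vac}} \;\simeq\; 1 - \tfrac{1}{2}\sin^{2}2\theta_{12} \;\approx\; 0.57,
\qquad
P_{ee}^{\mathrm{MSW}} \;\simeq\; \sin^{2}\theta_{12} \;\approx\; 0.31 .
```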

CNO cycle

A different way to burn hydrogen, the CNO cycle, was hypothesised independently by Carl Friedrich von Weizsäcker and Hans Albrecht Bethe between 1937 and 1939. Here, ¹²C acts as a catalyst, and electron neutrinos are produced by the beta decays of ¹³N and ¹⁵O, with a small contribution from ¹⁷F. The maximum energy of CNO neutrinos is about 1.7 MeV. In addition to making an important contribution to the production of elements heavier than helium, this cycle is important for the nucleosynthesis of ¹⁶O and ¹⁷O. In massive stars it also develops into more complex reactions producing ¹⁸F, ¹⁸O, ¹⁹F, ¹⁸Ne and ²⁰Ne.

Solar neutrinos and residual backgrounds

The sensitivity to CNO neutrinos in Borexino mainly comes from events in the energy range from 0.8 to 1 MeV. In this region, the dominant background comes from ²¹⁰Bi, which is produced in the slow radioactive decay chain ²¹⁰Pb (22 y) → ²¹⁰Bi + e⁻ + ν̄, ²¹⁰Bi (5 d) → ²¹⁰Po + e⁻ + ν̄, ²¹⁰Po (138 d) → ²⁰⁶Pb (stable) + α. The ²¹⁰Bi activity can be inferred from ²¹⁰Po, which can be efficiently tagged using pulse-shape discrimination. However, convective currents in the liquid scintillator bring into the central fiducial mass ²¹⁰Po produced by ²¹⁰Pb, which is most likely embedded in the nylon containment vessel. In order to reduce convection currents, a passive insulation system and a temperature-control system were installed in 2016, significantly reducing the effect of seasonal temperature variations.

Thanks to these and other efforts, in 2020 Borexino rejected the null hypothesis of no CNO reactions by more than five standard deviations, providing the first direct proof of the process. The energy production as a fraction of the solar luminosity was measured to be 1.0 (+0.4/−0.3)%, in agreement with the Solar Standard Model (SSM) prediction of roughly 0.6 ± 0.1% (which assumes the solar surface has a high metallicity – a topic discussed in more detail later). Given that luminosity scales as M⁴ and stellar number density as M⁻²·⁵ for stars between one and 10 solar masses, the CNO cycle is thought to be the most important source of energy in massive hydrogen-burning stars. Borexino has provided the first experimental evidence for this hypothesis.

Probing solar metallicity using CNO neutrinos is of the utmost importance, and Borexino is hard at work on the problem

But, returning to the confines of our solar system, it’s important to remember that the SSM is not a closed book. Borexino’s results are thus far in agreement with its assumption of a protostar that had a uniform composition throughout its entire volume when fusion began (“zero-age homogeneity”). However, thanks to the ability of neutrinos to peek into the heart of the Sun, the experiment now has the potential to explore this assumption and weigh in on one of the most intriguing controversies in astrophysics.

The solar-abundance controversy

As stars evolve, the distribution of elements within them changes thanks to fusion reactions and convection currents. But the composition of the surface is thought to remain very nearly the same as that of the protostar, as it is not hot enough there for fusion to occur. Measuring the abundance of elements on a star’s surface therefore gives an idea of the protostar’s composition and is a powerful way to constrain the SSM. 

Solar-neutrino measurements

Currently, the best method to determine the surface abundance of elements heavier than helium (“metallicity”) uses measurements of photo-absorption lines. Since 2005, improved hydrodynamic calculations (which are needed to model atomic-line formation, and radiative and collisional processes which contribute to excitation and ionisation) indicate a much lower surface metallicity than was previously considered. However, helioseismology observables differ by roughly five standard deviations from SSM predictions that use the new surface metallicity to infer the protostar’s composition, when the sound-speed profile, surface-helium abundance and the depth of the convective envelope are taken into account. Helioseismology implies that the zero-age Sun’s core was richer in metallicity than the present surface composition, suggesting a violation of zero-age homogeneity and a break with the SSM. This is the solar-abundance controversy, which was discovered in 2005.

One possible explanation is that a late “dilution” of the Sun’s convective zone occurred due to a deposition of elements during the formation of the solar system. Were there to have been an accretion of dust and gas from the proto-planetary disc onto the central star during the evolution of the star–planet system, this could have changed the initial metallicity of the surface of the Sun – a hypothesis backed up by recent simulations that show that a metal-poor accretion could produce the present surface metallicity. 

As they are an excellent probe of metallicity, CNO neutrinos have an important role to play in settling the solar-abundance controversy. If Borexino were to measure the Sun’s present core metallicity, and by running simulations backwards prove that its surface metallicity must have been diluted right from its birth, this would violate one of the basic assumptions of the SSM. Probing solar metallicity using CNO neutrinos is, therefore, of the utmost importance, and Borexino is hard at work on the problem. Initial results favour the high-metallicity hypothesis with a significance of 2.1 standard deviations – a tentative first hint from Borexino that zero-age homogeneity may indeed be false.

The ancient question of why and how the Sun and stars shine finally has a comprehensive answer from Borexino, which has succeeded thanks to the detector’s extreme and unprecedented radio-purity – the hard work of hundreds of researchers over almost three decades.

The quantum frontier: cold atoms in space https://cerncourier.com/a/the-quantum-frontier-cold-atoms-in-space/ Thu, 16 Dec 2021 11:22:35 +0000 https://preview-courier.web.cern.ch/?p=96425 September workshop targeted a roadmap for extraterrestrial cold-atom experiments to probe the foundations of physics.

Cold atoms offer exciting prospects for high-precision measurements based on emerging quantum technologies. Terrestrial cold-atom experiments are already widespread, exploring both fundamental phenomena such as quantum phase transitions and applications such as ultra-precise timekeeping. The final quantum frontier is to deploy such systems in space, where the lack of environmental disturbances enables high levels of precision.

This was the subject of a workshop supported by the CERN Quantum Technology Initiative, which attracted more than 300 participants online from 23 to 24 September. Following a 2019 workshop triggered by the European Space Agency (ESA)’s Voyage 2050 call for ideas for future experiments in space, the main goal of this workshop was to begin drafting a roadmap for cold atoms in space.

The workshop opened with a presentation by Mike Cruise (University of Birmingham) on ESA’s vision for cold atom R&D for space: considerable efforts will be required to achieve the technical readiness level needed for space missions, but they hold great promise for both fundamental science and practical applications. Several of the cold-atom teams that contributed white papers to the Voyage 2050 call also presented their proposals.

Atomic clocks

Next came a session on atomic clocks, covering their potential for refining the definitions of SI units, such as the second, and for distributing this new time standard worldwide, as well as potential applications of atomic clocks to geodesy. Next-generation space-based atomic-clock projects for these and other applications are ongoing in China, the US (Deep Space Atomic Clock) and Europe.

This was followed by a session on Earth observation, featuring the prospects for improved gravimetry using atom interferometry and talks on the programmes of ESA and the European Union. Quantum space gravimetry could contribute to studies of climate change, for example, by measuring the densities of water and ice very accurately and with improved geographical precision.

Cold-atom experiments in space offer great opportunities to probe the foundations of physics

For fundamental physics, prospects for space-borne cold-atom experiments include studies of wavefunction collapse and Bell correlations in quantum mechanics, probes of the equivalence principle by experiments like STE-QUEST, and searches for dark matter.

The proposed AEDGE atom interferometer will search for ultralight dark matter and gravitational waves in the deci-Hertz range, where LIGO/Virgo/KAGRA and the future LISA space observatory are relatively insensitive, and will probe models of dark energy. AEDGE gravitational-wave measurements could be sensitive to first-order phase transitions in the early universe, as occur in many extensions of the Standard Model, as well as to cosmic strings, which could be relics of symmetries broken at higher energies than those accessible to colliders.

These examples show that cold-atom experiments in space offer great opportunities to probe the foundations of physics as well as make frontier measurements in astrophysics and cosmology.

Several pathfinder experiments are underway. These include projects for terrestrial atom interferometers on scales from 10 m to 1 km, such as the MAGIS project at Fermilab and the AION project in the UK, which both use strontium, and the MIGA project in France and proposed European infrastructure ELGAR, which both use rubidium. Meanwhile, a future stage of AION could be situated in an access shaft at CERN – a possibility that is currently under study, and which could help pave the way towards AEDGE. Pioneering experiments using Bose-Einstein condensates on research rockets and the International Space Station were also presented.

A strong feature of the workshop was a series of breakout sessions to enable discussions among members of the various participating communities (atomic clocks, Earth observation and fundamental science), as well as a group considering general perspectives, which were summarised in a final session. Reports from the breakout sessions will be integrated into a draft roadmap for the development and deployment of cold atoms in space. This will be set out in a white paper to appear by the end of the year and presented to ESA and other European space and funding agencies.

Space readiness

Achieving space readiness for cold-atom experiments will require significant research and development. Nevertheless, the scale of participation in the workshop and the high level of engagement testifies to the enthusiasm in the cold-atom community and prospective user communities for deploying cold atoms in space. The readiness of the different communities to collaborate in drafting a joint roadmap for the pursuit of common technological and scientific goals was striking.

Star-forming galaxies rule gamma-ray sky https://cerncourier.com/a/star-forming-galaxies-rule-gamma-ray-sky/ Thu, 04 Nov 2021 14:16:01 +0000 https://preview-courier.web.cern.ch/?p=96231 The diffuse photon background that fills the universe does not limit itself to the attention-hogging cosmic microwave background.

Five-year sky map

The diffuse photon background that fills the universe does not limit itself to the attention-hogging cosmic microwave background, but spans a wide spectrum extending up to TeV energies. The origin of the photon emission at X-ray and gamma-ray wavelengths, first discovered in the 1970s, remains poorly understood. Many possible sources have been proposed, ranging from active galactic nuclei to dark-matter annihilation. Thanks to many years of gamma-ray data from the Fermi Large Area Telescope (Fermi-LAT), a group from Australia and Italy has now produced a model that links part of the diffuse emission to star-forming galaxies (SFGs).

As their name implies, SFGs are galaxies in which stars are formed, and therefore also die through supernova events. Such sources, which include our own Milky Way, have gained interest from gamma-ray astronomers during the past decade because several resolvable SFGs have been shown to emit in the 100 MeV to 1 TeV energy range. Given their preponderance, SFGs are thus a prime-suspect source of the diffuse gamma-ray background. 

Clear correlation

The source of gamma rays within SFGs is very likely the interaction between cosmic rays and the interstellar medium (ISM). The cosmic rays, in turn, are thought to be accelerated within the shockwaves of supernova remnants, after which they interact with the ISM to produce a hadronic cascade. The cascade includes neutral pions, which decay into gamma rays. This connection between supernova remnants and gamma rays is strengthened by a clear correlation between the star-formation rate in a galaxy and the gamma-ray flux they emit. Additionally, such sources are theorised to be responsible for the neutrino emission detected by the IceCube observatory over the past few years, which also appears to be highly isotropic.
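
The pion link leaves a distinctive spectral fingerprint. In the π⁰ rest frame each photon carries m_π/2 ≈ 67.5 MeV; boosting a pion of lab energy E_π spreads its photons over a flat range whose geometric mean is always m_π/2 – the well-known “pion bump”. A short kinematic sketch using the standard two-body decay formulae:

```python
import math

# Kinematics of pi0 -> gamma gamma: each photon has m_pi/2 in the pion
# rest frame; in the lab it lands uniformly between the limits below.
m_pi = 134.98  # MeV, neutral-pion mass

def photon_range(E_pi):
    """Min/max lab-frame photon energies from a pi0 of energy E_pi (MeV)."""
    gamma = E_pi / m_pi
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    e0 = m_pi / 2.0
    return gamma * e0 * (1 - beta), gamma * e0 * (1 + beta)

for E in (200.0, 1000.0, 10000.0):   # MeV
    lo, hi = photon_range(E)
    # geometric mean sqrt(lo*hi) is always m_pi/2 ~ 67.5 MeV
    print(f"E_pi = {E:7.0f} MeV -> photons between {lo:6.1f} and {hi:7.1f} MeV")
```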

Gamma-ray background model

Based on additional SFG gamma-ray sources found by Fermi-LAT, which could be used for validation, the Australian/Italian group developed a physical model to study the contribution of SFGs to the cosmic diffuse gamma-ray background. The model used to predict the gamma-ray emission from galaxies starts with the spectra of charged cosmic rays produced in the numerous supernova remnants within a galaxy, and greatly benefits from data collected from several such remnants present in the Milky Way. Subsequently, the production and energies of gamma rays from the interactions of cosmic rays with the ISM are modelled, followed by the gamma rays’ transport to Earth, which includes losses due to interactions with low-energy photons leading to pair production.

The main uncertainty in previous models was the efficiency of a galaxy at transforming the energy of cosmic rays into gamma rays, since it is not possible to use our own galaxy to measure it. The big breakthrough in the new work is a more thorough theoretical modelling of this efficiency, which was first tested extensively using data from resolved SFG sources. After such tests proved successful, the model could be applied to predict the gamma-ray emission properties of galaxies spanning the history of the universe. These predictions indicate that the low-energy part of the spectrum can be largely attributed to galaxies from the so-called cosmic noon: the period when star formation in large galaxies was at its peak, about 10 billion years ago. Nearby galaxies, on the other hand, explain the high-energy part of the spectrum; for old and distant sources the TeV emission is absorbed in the intergalactic medium, as it pair-produces on low-energy photons. Overall, the model predicts not only the spectral shape but also the overall flux (see “Good fit” figure), negating the need for other possible sources such as active galactic nuclei or dark matter.

These new results once again indicate the importance of star-forming regions for astrophysics, after also recently being proposed as a possible source of PeV cosmic rays by LHAASO (CERN Courier July/August 2021 p11). Furthermore, it shows the potential for an expansion to other astrophysical messengers, with the authors stating their ambition to apply the same model to radio-emission and high-energy neutrinos.

Having the right connections is key https://cerncourier.com/a/having-the-right-connections-is-key/ Thu, 04 Nov 2021 14:06:52 +0000 https://preview-courier.web.cern.ch/?p=96265 SKAO director-general Philip Diamond describes how the world's largest radio telescope went from concept to construction.

Philip Diamond

Having led the SKAO for almost a decade, how did it feel to get the green light for construction in June this year?

The project has been a long time in gestation and I have invested much of my professional life in the SKA project. When the day came, I was 95% confident that the SKAO council would give us the green light to proceed, as we were still going through ratification processes in national parliaments. I sent a message to my senior team saying: “This is the most momentous week of my career” because of the collective effort of so many people in the observatory and across the entire partnership over so many years. It was a great feeling, even if we couldn’t celebrate properly because of the pandemic.

What will the SKA telescopes do that previous radio telescopes couldn’t?

The game changer is the sheer size of the facility. Initially, we’re building 131,072 low-frequency antennas in Western Australia (“SKA-Low”) and 197 15 m-class dishes in South Africa (“SKA-Mid”). This will provide us with up to a factor of 10 improvement in our ability to see fainter details in the universe. The long-term SKA vision will increase the sensitivity by a further factor of 10. We’ve got many science areas, but two are going to be unique to us. One is the ability to detect hydrogen all the way back to the epoch of reionisation, also called the “cosmic-dawn”. The frequency range that we cover, combined with the large collecting area and the sensitivity of the two radio telescopes, will allow us to make a “movie” of the universe evolving from a few hundred million years after the Big Bang to the present day. We probably won’t see the first stars but will see the effect of the first stars, and we may see some of the first galaxies and black holes. 

We put a lot of effort into conveying the societal impact of the SKA

The second key science goal is the study of pulsars, especially millisecond pulsars, which emit radio pulses extremely regularly, giving astronomers superb natural clocks in the sky. The SKA will be able to detect every pulsar that can be detected on Earth (at least every pulsar that is pointing in our direction and within the ~70% of the sky visible by the SKA). Pulsars will be used as a proxy to detect and study gravitational waves from extreme phenomena. For instance, when there’s a massive galaxy merger that generates gravitational waves, we will be able to detect the passage of the waves through a change in the pulse arrival times. The SKA telescopes will be a natural extension of existing pulsar-timing arrays, and will be working as a network but also individually.

Another goal is to better understand the influence of dark matter on galaxies and how the universe evolves, and we will also be able to address questions regarding the nature of neutrinos through cosmological studies. 

How big is the expected SKA dataset, and how will it be managed? 

It depends where you look in the data stream, because the digital signal processing systems will be reducing the data volume as much as possible. Raw data coming out of SKA-Low will be 2 Pb per second – dramatically exceeding the entire internet data rate. That data goes from our fibre network into data processing, all on-site, with electronics heavily shielded to protect the telescopes from interference. Coming out from there, it’s about 5 Tb of data per second being transferred to supercomputing facilities off-site, which is pretty much equivalent to the output generated by SKA-Mid in South Africa. From that point the data will flow into supercomputers for on-the-fly calibration and data processing, emerging as “science-ready” data. It all flows into what we call the SKA Regional Centre network, basically supercomputers dotted around the globe, very much like that used in the Worldwide LHC Computing Grid. By piping the data out to a network of regional centres at a rate of 100 Gb per second, we are going to see around 350 Pb per year of science data from each telescope. 
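
As a sanity check on those numbers (with the caveat that the quoted units mix conventions; here “350 Pb” is read as petabytes of science data and the regional-centre links as gigabits per second):

```python
# Average rate needed to move 350 petabytes of science data per year,
# compared with the quoted 100 Gb/s regional-centre links.
seconds_per_year = 365.25 * 24 * 3600
bits_per_year = 350e15 * 8                       # 350 PB in bits
avg_gbps = bits_per_year / seconds_per_year / 1e9
print(f"average rate: ~{avg_gbps:.0f} Gb/s (quoted link: 100 Gb/s)")  # ~89
```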

And you’ve been collaborating with CERN on the SKA data challenge?

Very much so. We signed a memorandum of understanding three years ago, essentially to learn how CERN distributes its data and how its processing systems work. There are things we were able to share too, as the SKA will have to process a larger amount of data than even the High-Luminosity LHC will produce. Recently we have entered into a further, broader collaboration with CERN, GÉANT and PRACE [the Partnership for Advanced Computing in Europe] to look at the collaborative use of supercomputer centres in Europe.

SKAO’s organisational model also appears to have much in common with CERN’s?

If you were to look at the text of our treaty you would see its antecedents in those of CERN and ESO (the European Southern Observatory). We are an intergovernmental organisation with a treaty and a convention signed in Rome in March 2019. Right now, we’ve got seven members who have ratified the convention, which was enough for us to kick off the observatory, and we’ve got countries like France, Spain and Switzerland on the road to accession. Other countries like India, Sweden, Canada and Germany are also following their internal processes and we expect them to join the observatory as full members in the months to come; Japan and South Korea are observers on the SKAO council at this stage. Unlike CERN, we don’t link member contributions directly to gross domestic product (GDP) – one reason being the huge disparity in GDP amongst our member states. We looked at a number of models and none of them were satisfactory, so in the end we invented something that we use as a starting point for negotiation and that’s a proxy for the scientific capacity within countries. It’s actually the number of scientists that an individual country has who are members of the International Astronomical Union. For most of our members it correlates pretty well with GDP.

Is there a sufficient volume of contracts for industries across the participating nations? 

Absolutely. The SKA antennas, dishes and front-ends are essentially evolutions of existing designs. It’s the digital hardware and especially the software where there are huge innovations with the SKA. We have started a contracting process with every country and they’re guaranteed to get at least 70% of their investment in the construction funds back. The SKAO budget for the first 10 years – which includes the construction of the telescopes, the salaries of observatory staff and the start of first operations – is €2 billion. The actual telescope itself costs around €1.2 billion. 

Why did it take 30 years for the SKA project to be approved?

Back in the late 1980s/early 1990s, radio astronomers were looking ahead to the next big questions. The first mention of what we call the SKA was at a conference in Albuquerque, New Mexico, celebrating the 10th anniversary of the Very Large Array, which is still a state-of-the-art radio telescope. A colleague pulled together discussions and wrote a paper proposing the “Hydrogen Array”. It was clear we would need approximately one square kilometre of collecting area, which meant there had to be a lot of innovation in the telescopes to keep things affordable. A lot of the early design work was funded by the European Commission and we formed an international steering committee to coordinate the effort. But it wasn’t until 2011 that the SKA Organisation was formed, allowing us to go out and raise the money, put the organisational structure in place, confirm the locations, formalise the detailed design and then go and build the telescopes. There was a lot of exploration surrounding the details of the intergovernmental organisation – at one point we were discussing joining ESO. 

Building the SKA 10 years earlier would have been extremely difficult, however. One reason is that we would have missed out on the big-data technology and innovation revolution. Another relates to the cost of power in these remote regions: SKA’s Western Australia site is 200 km from the nearest power grid, so we are powering things with photovoltaics and batteries, the cost of which has dropped dramatically in the past five years.

What are the key ingredients for the successful management of large science projects?

One has to have a diplomatic manner. We’ve got 16 countries involved, all the way from China to Canada and in both hemispheres, and you have to work closely with colleagues and diverse people all the way up to ministerial level. Making sure that connections with governments are solid – having the right connections – is key. We also put a lot of effort into conveying the societal impact of the SKA. Just as CERN invented the web, Wi-Fi came out of radio astronomy, as did a lot of medical-imaging technology, and we have been working hard to identify future knowledge-transfer areas.

SKA-MPI

It also would have been much harder if I did not have a radio-astronomy background, because a lot of what I had to do in the early days was to rely on a network of radio-astronomy contacts around the world to sign up for the SKA and to lobby their governments. While I have no immediate plans to step aside, I think 10 or 12 years is a healthy period for a senior role. When the SKAO council begins the search for my successor, I do hope they recognise the need to have at least an astronomer, if not a radio astronomer.

I look at science as an interlinked ecosystem

Finally, it is critical to have the right team, because projects like this are too large to keep in one person’s head. The team I have is the best I’ve ever worked with. It’s a fantastic effort to make all this a reality.

What are the long-term operational plans for the SKA?

The SKA is expected to operate for around 50 years, and our science case is built around this long-term aspiration. In our first phase, whose construction has started and should end in 2028/2029, we will have just under 200 dishes in South Africa, whereas we’d like to have potentially up to 2500 dishes there at the appropriate time. Similarly, in Western Australia we have a goal of up to a million low-frequency antennas, eight times the size of what we’re building now. Fifty years is somewhat arbitrary, and there are not yet any funded plans for such an expansion, but the dishes and antennas themselves will easily last for that time. The electronics are a different matter: the Lovell Telescope, which I can see outside my window here at SKAO HQ, is still an active science instrument after 65 years precisely because the electronics inside are kept state of the art. In terms of its collecting area, it is still the third-largest steerable dish on Earth!

How do you see the future of big science more generally?

If there is a bright side to the COVID-19 pandemic, it is that it has forced governments to recognise how critical science and expert knowledge are to survival, and hopefully that has translated into more realism regarding climate change, for example. I look at science as an interlinked ecosystem: the hard sciences like physics build infrastructures designed to answer fundamental questions and produce technological impact, but they also train science graduates who enter other areas. The SKAO governments recognise the benefits of what South African colleagues call human capital development: scientists and engineers who are inspired by and develop through these big projects will diffuse into industry and impact other areas of society. My experience of the senior civil servants I have come across tells me that they understand this link.

Opinion SKAO director-general Philip Diamond describes how the world's largest radio telescope went from concept to construction. https://cerncourier.com/wp-content/uploads/2021/11/CCNovDec21_INT_Diamond_feature.jpg
BICEP crunches primordial gravitational waves https://cerncourier.com/a/bicep-crunches-primordial-gravitational-waves/ Tue, 12 Oct 2021 17:01:07 +0000 https://preview-courier.web.cern.ch/?p=95554 The new analysis significantly improves the upper bound on the strength of gravitational waves produced during the epoch of inflation.

The BICEP/Keck collaboration has published the strongest constraints to date on primordial gravitational waves, ruling out parameter space for models of inflation in the early universe (Phys. Rev. Lett. 127 151301 (2021)). A conjectured rapid expansion of the universe during the first fraction of a second of its existence, inflation was first proposed in the early 1980s to explain the surprising uniformity of the universe over scales that should not otherwise have been causally connected, and may have left an imprint in the polarisation of the cosmic microwave background (CMB). Despite a high-profile false detection of gravitational-wave-induced “B-modes” by BICEP in 2014, which was soon explained as a mis-modelling of the galactic-dust foreground, the search for primordial gravitational waves remains one of the most promising avenues to study particle physics at extremely high energies, as inflation is thought to require a particle-physics explanation such as the scalar “inflaton” field proposed by Alan Guth.

Certain ‘standard’ types of inflation are now clearly disfavoured

Kai Schmitz

In its latest publication, the BICEP/Keck collaboration has managed to significantly improve the upper bound on the strength of gravitational waves produced during the epoch of inflation. “This is important for theorists because it further constrains the allowed range of viable models of inflation, and certain ‘standard’ types of models are now clearly disfavoured,” explains CERN theorist Kai Schmitz. “It’s also a great experimental achievement because it demonstrates that the sources of systematic uncertainties such as dust emission in our Milky Way are under good control. That’s a good sign for future observations.”

The BICEP/Keck collaboration searches for the imprint of gravitational waves in the polarisation pattern of the CMB, emitted 380,000 years after the Big Bang. Telescopes at the South Pole receive incoming CMB photons and focus them through plastic lenses onto detectors in the focal plane, which are cooled to 300 mK, explains principal investigator Clem Pryke of the University of Minnesota. As the telescopes scan the sky they record the tiny changes in temperature due to the intensity of the incoming microwaves. The detectors are arranged in pairs, with each half sensitive to one of two orthogonal linear-polarisation components. The telescopes take their best data during the six-month-long Antarctic night, during which intrepid “winter-overs” maintain the detectors and upload data via satellite to the US for further analysis.
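
The pair-difference principle can be caricatured in a few lines of Python. The toy sketch below is purely illustrative (it is not the collaboration’s pipeline, and every amplitude in it is invented): summing a detector pair recovers the temperature signal, while differencing isolates the polarisation and cancels the common-mode atmosphere.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
T   = rng.normal(0.0, 100e-6, n)      # temperature anisotropy (invented units)
Q   = rng.normal(0.0, 1e-6, n)        # much fainter polarisation signal
atm = rng.normal(0.0, 1e-3, n)        # unpolarised atmospheric drift (common mode)

det_a = T + Q + atm                   # element sensitive to one polarisation angle
det_b = T - Q + atm                   # its partner, rotated by 90 degrees

temperature  = 0.5 * (det_a + det_b)  # sum: T + atmosphere, polarisation cancels
polarisation = 0.5 * (det_a - det_b)  # difference: pure Q, atmosphere cancels

print(np.allclose(polarisation, Q))   # True: common-mode noise is removed
```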

“The big change since 2014 was to make measurements in multiple frequency bands to allow the removal of the galactic foreground,” says Pryke. “Back then we had data only at 150 GHz and were relying on models and projections of the galactic foreground – models which turned out to be optimistic as far as the dust is concerned. Now we have super-deep maps at 95, 150 and 220 GHz allowing us to accurately remove the dust component.”
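
Why multiple bands matter can be seen with a toy two-component fit, sketched below under assumptions of our own (a power-law dust scaling and invented amplitudes; the real analysis is a full multi-component likelihood over maps):

```python
import numpy as np

bands = np.array([95.0, 150.0, 220.0])      # GHz, as in the current maps
dust_scale = (bands / 353.0) ** 1.5         # assumed power-law dust emissivity

rng = np.random.default_rng(0)
true_cmb, true_dust = 0.0, 5.0              # no primordial B-modes, some dust
obs = true_cmb + true_dust * dust_scale + rng.normal(0.0, 0.05, bands.size)

# Least-squares separation of the CMB and dust amplitudes
A = np.column_stack([np.ones(bands.size), dust_scale])
(cmb_fit, dust_fit), *_ = np.linalg.lstsq(A, obs, rcond=None)
print(f"CMB amplitude: {cmb_fit:+.2f}, dust amplitude: {dust_fit:.2f}")
# With a single band the two components would be perfectly degenerate.
```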

The current analysis uses data recorded by BICEP2, the Keck Array and BICEP3 up to 2018. Since then, the collaboration has installed a new, more capable telescope platform called the BICEP Array, designed to increase sensitivity to primordial gravitational waves by a factor of three, working in collaboration with a large-aperture telescope at the South Pole called SPT3G. With 21 telescopes at the South Pole and in the Chilean Atacama desert, the proposed CMB Stage-4 project plans to improve sensitivity by a further factor of six in the 2030s.

News The new analysis significantly improves the upper bound on the strength of gravitational waves produced during the epoch of inflation. https://cerncourier.com/wp-content/uploads/2021/10/South_pole_spt_dsl.png
Cosmic-ray anisotropy probed across 10 decades in energy https://cerncourier.com/a/cosmic-ray-anisotropy-probed-across-10-decades-in-energy/ Sat, 24 Jul 2021 14:18:45 +0000 https://preview-courier.web.cern.ch/?p=93485 This week, at the 37th International Cosmic Ray Conference, space- and ground-based detectors unveiled new cosmic-ray anisotropy measurements from GeV to tens of EeV.

Spanning 13 decades in energy and more than 26 decades in intensity, cosmic rays are one of the hottest topics in astroparticle physics today. Spectral features such as a “knee” at a few PeV and an “ankle” at a few EeV give insights into their varying origins, but studies of their arrival direction can also provide valuable information. Though magnetic fields mean we cannot normally trace cosmic rays directly back to their point of origin, angular anisotropies provide important independent evidence towards probable sources at different energies. This week, at the 37th International Cosmic Ray Conference (ICRC), a range of space- and ground-based experiments greatly increased our knowledge of cosmic-ray anisotropies, with new results spanning 10 decades in energy, from GeV to tens of EeV.

Vela Supernova Remnant

At sub-TeV energies, spectral features seen by the AMS-02 and CALET detectors on the International Space Station and the Chinese–European DAMPE satellite could potentially be explained by a local galactic source such as a supernova remnant like Vela (see “Spectral” figure). If a nearby source is indeed responsible for a significant fraction of the cosmic rays observed at such energies, it could show up in the arrival direction of these cosmic rays in the form of a dipole feature, despite bending by galactic magnetic fields; however, results from AMS-02 at ICRC showed no evidence of a dipole in the arrival direction of protons or any other light nucleus. This was confirmed by DAMPE, which excluded dipole features with amplitudes above about 0.1% in the 100s of GeV energy range. The search continues, however, with DAMPE, AMS-02 and CALET all set to take further data over the coming years.
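
The statistics driving such sub-percent dipole limits can be illustrated with a toy Monte Carlo (a sketch that ignores detector exposure and acceptance, both of which matter in the real analyses; for unit arrival vectors the best-fit dipole is three times their mean):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000                                 # simulated arrival directions

# Isotropic sky: uniform in cos(theta) and phi
cos_t = rng.uniform(-1.0, 1.0, N)
phi   = rng.uniform(0.0, 2.0 * np.pi, N)
sin_t = np.sqrt(1.0 - cos_t**2)
n = np.column_stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

dipole = 3.0 * n.mean(axis=0)                 # best-fit dipole vector
print(f"spurious dipole amplitude: {np.linalg.norm(dipole):.4f}")
# The statistical floor is ~3/sqrt(N) ~ 0.003 here, so probing a 0.1%
# dipole takes upwards of 10 million events: hence years of data-taking.
```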

Close to the knee, the dipole has a maximum rather than a minimum close to the galactic centre

Moving to higher energies, clear anisotropic dipole excesses have been observed over the last decade by ground-based experiments such as the ARGO-YBJ observatory in China, the HAWC observatory in Mexico and the IceCube observatory at the South Pole – though with different “phases” at different energies. The anisotropy in the TeV to 100s of TeV energy range could point towards a nearby source, though models proposing the structure of the interstellar magnetic field as the true origin of the anisotropy also exist. This feature was further confirmed this year by the LHAASO experiment in China, using a year of data taken while the detector was under construction. The results from LHAASO also confirm a switch in the phase of the anisotropy when moving from 100s of TeV to PeV energies, as reported by IceCube and other experiments in recent years: at PeV energies, close to the knee, the dipole has a maximum rather than a minimum close to the galactic centre. This could indicate an excess of “pevatron” sources near the galactic centre.

Antennae Galaxies

Extragalactic sources

While results up to PeV energies give an insight into sources within our galaxy, it is theorised that the flux starts to be dominated by extragalactic sources somewhere between the knee and the ankle of the cosmic-ray spectrum. Evidence for this was strengthened by new results from the Pierre Auger Observatory in Argentina and the Telescope Array in the US. These two observatories, which observe different hemispheres, find strong evidence for excesses in the cosmic-ray flux in certain regions of the sky at energies exceeding EeV. At energies as high as these, cosmic rays point more clearly back to their origin; if they were produced in our galaxy, clear point-like sources would be expected, and none are observed – evidence that they originate outside of our galaxy. Prime candidates for such sources are so-called starburst galaxies, wherein star formation happens unusually rapidly, during a short period of the galaxy’s evolution (see “Antennae galaxies” figure). As presented at ICRC 2021, the available data were fitted to models in which starburst galaxies are the primary source of EeV cosmic rays. The model fits the anisotropy data with more than 4σ significance relative to the null hypothesis of normal galaxies, indicating that starburst galaxies are likely to be at least one source of EeV cosmic rays.

While some of these features will likely be fully confirmed within the coming years simply by accumulating statistics, new ones are also likely to arise. One example is tighter constraints on the absence of any observed anisotropy at sub-TeV energies using data from space-based missions, while new data from ground-based experiments will start to bridge the measurement gap between PeV and EeV energies. The latter will be especially important in understanding the energy scale at which extragalactic sources start to dominate. To fully exploit the data it will be necessary to compare complex cosmic-ray-propagation simulations with diverse data such as the pevatron sources discovered this year by LHAASO.

News This week, at the 37th International Cosmic Ray Conference, space- and ground-based detectors unveiled new cosmic-ray anisotropy measurements from GeV to tens of EeV. https://cerncourier.com/wp-content/uploads/2021/07/Antennae_galaxies_191.jpg
Astroparticle theory in rude health https://cerncourier.com/a/astroparticle-theory-in-rude-health/ Tue, 13 Jul 2021 10:51:25 +0000 https://preview-courier.web.cern.ch/?p=92948 The European Consortium for Astroparticle theory held its first annual symposium in May, bringing together hundreds of theoretical physicists across Europe.

The EuCAPT census

The European Consortium for Astroparticle Theory (EuCAPT) held its first annual symposium from 5 to 7 May. Hundreds of theoretical physicists from Europe and beyond met online to discuss the present and future of astroparticle physics and cosmology, in a dense and exciting meeting that featured 29 invited presentations, 42 lightning talks by young researchers, and two community-wide brainstorming sessions.

Participants discussed a wide array of topics at the interface between particle physics, astrophysics and cosmology, with particular emphasis on the challenges and opportunities for these fields in the next decade. Rather than focusing on experimental activities and the discoveries they might enable, the sessions were structured around thematic areas and explored the interdisciplinary multi-messenger aspects of each. 

Two sessions were dedicated to cosmology, exploring the early and late universe. As stressed by Geraldine Servant (Hamburg), several unresolved puzzles of particle physics – such as the origin of dark matter, the baryon asymmetry, and inflation – are directly linked to the early universe, and new observational probes may soon shed new light on them.

Julien Lesgourgues (Aachen) showed how the very same puzzles are also linked to the late universe, and cautiously elaborated on a series of possible inconsistencies between physical quantities inferred from early- and late-universe probes, for example the Hubble constant. Those inconsistencies represent both a challenge and an extraordinary opportunity for cosmology, as they might “break” the standard Lambda–cold-dark-matter model of cosmology, and allow us to gain insights into the physics of dark matter, dark energy and gravity.

We are witnessing a proliferation of theoretically well-motivated models

New strategies to go beyond the standard models of particle physics and cosmology were also discussed by Marco Cirelli (LPTHE) and Manfred Lindner (Heidelberg), in the framework of dark-matter searches and neutrino physics, respectively. Progress in both fields is currently not limited by a lack of ideas – we are actually witnessing a proliferation of theoretically well-motivated models – but by the difficulty of identifying experimental strategies to conclusively validate or rule them out. Much of the discussion here concerned prospects for detecting new physics with dedicated experiments and multi-messenger observations. 

Gravitational waves have added a new observational probe in astroparticle physics and cosmology. Alessandra Buonanno (Max Planck Institute for Gravitational Physics) illustrated the exciting prospects for this new field of research, whose potential for discovering new physics is attracting enormous interest from particle and astroparticle theorists. The connection between cosmic rays, gamma rays and high-energy neutrinos was explored in the final outlook by Elena Amato (Arcetri Astrophysical Observatory), who highlighted how progress in theory and observations is leading the community to reconsider some long-held beliefs – such as the idea that supernova remnants are the acceleration sites of cosmic rays up to the so-called “knee” – and stimulating new ideas.

In line with EuCAPT’s mission, the local organisers and the consortium’s steering committee organised a series of community-building activities. Participants stressed the importance of supporting diversity and inclusivity, a continuing high priority for EuCAPT, while a second brainstorming session was devoted to the discussion of the EuCAPT white paper currently being written, which should be published by September. Last but not least, Hannah Banks (Cambridge), Francesca Capel (TU Munich) and Charles Dalang (University of Geneva) received prizes for the best lightning talks, and Niko Sarcevic (Newcastle) was awarded an “outstanding contributor” prize for the help and support she provided for the analysis of the EuCAPT census (pictured).

The next symposium will take place in 2022, hopefully in person, at CERN. 

Meeting report The European Consortium for Astroparticle theory held its first annual symposium in May, bringing together hundreds of theoretical physicists across Europe. https://cerncourier.com/wp-content/uploads/2021/06/CCJulAug21_FN_sky.jpg
Mountain observatory nets PeV gamma rays https://cerncourier.com/a/mountain-observatory-nets-pev-gamma-rays/ Fri, 02 Jul 2021 07:55:48 +0000 https://preview-courier.web.cern.ch/?p=92750 Recent detection from LHAASO provides the first clear evidence of the presence of galactic “pevatrons”.

The universe seen with photons > 100 TeV

Recent years have seen rapid growth in high-energy gamma-ray astronomy, with the first measurement of TeV photons from gamma-ray bursts by the MAGIC telescope and the first detection of gamma rays with energies above 100 TeV by the HAWC observatory.

Now, the Large High Altitude Air Shower Observatory (LHAASO) in China has increased the energy scale at which the universe has been observed by a further order of magnitude. The recent LHAASO detection provides the first clear evidence of the presence of galactic “pevatrons”: sources in the Milky Way capable of accelerating protons and electrons to PeV energies. Although PeV cosmic rays are known to exist, magnetic fields pervading the universe perturb their directions and therefore do not allow their origin to be traced. The gamma rays produced by such cosmic rays, on the other hand, point directly back to their source.

Wide field of view
LHAASO is located in the mountains of the Sichuan province of China and offers a wide field of view to study both high-energy cosmic rays and gamma rays. Once completed, the observatory will contain a water Cherenkov detector with a total area of about 78,000 m², 18 wide-field-of-view Cherenkov telescopes and a 1 km² array of more than 5000 scintillator-based electromagnetic detectors (EDs). Finally, more than 1000 underground water Cherenkov tanks – the muon detectors (MDs) – are placed over the grid to detect muons.

The latter two detector systems, of which only half were finished during data-taking for this study, are used to directly detect the showers produced when high-energy particles interact with the Earth’s atmosphere. The EDs detect the shower profile and incoming angle, using charge and timing information from the detector array, while the MDs are used to distinguish hadronic showers from the electromagnetic showers produced by high-energy gamma rays. Thanks to both its large size and the MDs, LHAASO will ultimately be two orders of magnitude more sensitive at 100 TeV than the HAWC facility in Mexico, previously the most sensitive detector of this type.
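
The muon-counting idea behind that gamma/hadron separation can be sketched in a few lines (all multiplicities and rates below are invented purely for illustration): hadronic showers are muon-rich, photon showers muon-poor, so a simple cut on the muon count rejects the overwhelming cosmic-ray background.

```python
import numpy as np

rng = np.random.default_rng(7)
n_gamma, n_hadron = 1_000, 1_000_000      # hadrons vastly outnumber photons

mu_gamma  = rng.poisson(0.5,  n_gamma)    # photon showers: very few muons
mu_hadron = rng.poisson(10.0, n_hadron)   # hadronic showers: many muons

cut = 5                                   # keep events with fewer than 5 muons
efficiency = (mu_gamma  < cut).mean()     # photons retained
leakage    = (mu_hadron < cut).mean()     # hadrons mistakenly retained
print(f"photon efficiency: {efficiency:.3f}, hadron leakage: {leakage:.1e}")
```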

The measurements reported by the Chinese-led international LHAASO collaboration reveal a total of 12 sources located across the galactic plane (see image above). This distribution is expected, since gamma rays at such energies have a high cross-section for pair production with the cosmic microwave background, and therefore the universe starts to become opaque at energies exceeding tens to hundreds of TeV, leaving only sources within our galaxy visible. Of the 12 presented sources, only the Crab nebula can be directly confirmed. This substantiates pulsar-wind nebulae as sources in which electrons are accelerated beyond PeV energies; these electrons in turn produce the gamma rays through inverse Compton scattering.

Of specific interest is the source responsible for the photon with the highest energy, 1.4 PeV

The origin of the other photons remains unknown, as the observed emission regions contain several possible sources within them. The sizes of the emission regions exceed the angular resolution of LHAASO, however, indicating that emission takes place over large scales. Of specific interest is the source responsible for the photon with the highest energy, 1.4 PeV. This came from a region containing both a supernova remnant and a star-forming cluster, both of which are prime theoretical candidates for hadronic pevatrons.

Tip of the iceberg
More detailed spectrometry, as well as morphological measurements, in which the differences in emission intensity throughout the sources are measured, could allow the sources of >100 TeV gamma rays to be identified in the next one or two years, say the authors. Furthermore, as the current 12 sources were visible using only one year of data from half the detector, it is clear that LHAASO is only seeing the tip of the iceberg when it comes to high-energy gamma rays.

News Recent detection from LHAASO provides the first clear evidence of the presence of galactic “pevatrons”. https://cerncourier.com/wp-content/uploads/2021/07/162566054182328192.png
Exploring the Hubble tension https://cerncourier.com/a/exploring-the-hubble-tension/ Fri, 02 Jul 2021 07:39:50 +0000 https://preview-courier.web.cern.ch/?p=92896 Cosmologist and theoretical physicist Licia Verde discusses the current tension between early- and late-time measurements of the expansion rate of the universe.

Licia Verde

Did you always want to be a cosmologist?

One day, around the time I started properly reading, somebody gave me a book about the sky, and I found it fascinating to think about what’s beyond the clouds and beyond where the planes and the birds fly. I didn’t know that you could actually make a living doing this kind of thing. At that age, you don’t know what a cosmologist is, unless you happen to meet one and ask what they do. You are just fascinated by questions like “how does it work?” and “how do you know?”.

Was there a point at which you decided to focus on theory?

Not really, and I still think I’m somewhat in-between, in the sense that I like to interpret data and am plugged-in to observational collaborations. I try to make connections to what the data mean in light of theory. You could say that I am a theoretical experimentalist. I made a point to actually go and serve at a telescope a couple of times, but you wouldn’t want to trust me in handling all of the nitty-gritty detail, or to move the instrument around. 

What are your research interests?

I have several different research projects, spanning large-scale structure, dark energy, inflation and the cosmic microwave background. But there is a common philosophy: I like to ask how much can we learn about the universe in a way that is as robust as possible, where robust means as close as possible to the truth, even if we have to accept large error bars. In cosmology, everything we interpret is always in light of a theory, and theories are always at some level “spherical cows” – they are approximations. So, imagine we are missing something: how do I know I am missing it? It sounds vague, but I think the field of cosmology is ready to ask these questions because we are swimming in data, drowning in data, or soon will be, and the statistical error bars are shrinking. 

This explains your current interest in the Hubble constant. What do you define as the Hubble tension? 

Yes, indeed. When I was a PhD student, knowing the Hubble constant at the 40–50% level was great. Now, we are declaring a crisis in cosmology because there is a discrepancy at the very-few-percent level. The Hubble tension is certainly one of the most intriguing problems in cosmology today. Local measurements of the current expansion rate of the universe, for example based on supernovae as standard candles, which do not rely heavily on assumptions about cosmological models, give values that cluster around 73 km s⁻¹ Mpc⁻¹. Then there is another, indirect route to measuring what we believe is the same quantity but only within a model, the lambda-cold-dark-matter (ΛCDM) model, which is looking at the baby universe via the cosmic microwave background (CMB). When we look at the CMB, we don’t measure recession velocities, but we interpret a parameter within the model as the expansion rate of the universe. The ΛCDM model is extremely successful, but the value of the Hubble constant using this method comes out at around 67 km s⁻¹ Mpc⁻¹, and the discrepancy with local measurements is now 4σ or more.
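
In round numbers the significance works out as follows; the error bars in this sketch are illustrative stand-ins for those of the leading distance-ladder and CMB analyses:

```python
# Naive significance of the gap between two independent measurements
h_local, err_local = 73.0, 1.0   # km/s/Mpc, distance ladder (illustrative)
h_cmb,   err_cmb   = 67.4, 0.5   # km/s/Mpc, CMB within LambdaCDM (illustrative)

sigma = abs(h_local - h_cmb) / (err_local**2 + err_cmb**2) ** 0.5
print(f"tension: {sigma:.1f} sigma")   # ~5 sigma with these inputs
```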

What are the implications if this tension cannot be explained by systematic errors or some other misunderstanding of the data?

The Hubble constant is the only cosmological parameter in the ΛCDM universe that can be measured both directly locally and from classical cosmological observations such as the CMB, baryon acoustic oscillations, supernovae and big-bang nucleosynthesis. It’s also easy to understand what it is, and the error bars are becoming small enough that it is really becoming make-or-break for the ΛCDM model. The Hubble tension made everybody wake up. But before we throw the model out of the window, we need something more.

How much faith do you put in the ΛCDM model compared to, say, the Standard Model of particle physics?

It is a model that has only six parameters, most constrained at the percent level, which explains most of the observations that we have of the universe. In the case of Λ, which quantifies what we call dark energy, we have many orders of magnitude between theory and experiment to understand, and for dark matter we are yet to find a candidate particle. Otherwise, it does connect to fundamental physics and has been extremely successful. For 20 years we have been riding a wave of confirmation of the ΛCDM model, so we need to ask ourselves: if we are going to throw it out, what do we substitute it with? The first thing is to take small steps away from the model, say by adding one parameter. For a while, you could say that maybe there is something like an effective neutrino species that might fix it, but a solution like this doesn’t quite fit the CMB data any more. I think the community may be split 50/50 between being almost ready to throw the model out and keeping working with it, because we have nothing better to use. 

It is really becoming make-or-break for the ΛCDM model

Could it be that general relativity (GR) needs to be modified? 

Perhaps, but where do we modify it? People have tried to tweak GR at early times, but it messes around with the observations and creates a bigger problem than we already have. So, let’s say we modify it at intermediate times – we still need it to describe the shape of the expansion history of the universe, which is close to ΛCDM. Or we could modify it locally. We’ve tested GR at the solar-system scale, and the accuracy of GPS is a vivid illustration of its effectiveness at a planetary scale. So, we’d need to modify it very close to where we are, and I don’t know if there are modifications on the market that pass all of the observational tests. It could also be that the cosmological constant changes value as the universe evolves, in which case the form of the expansion history would not be the one of ΛCDM. There is some wiggle room here, but changing Λ within the error bars is not enough to fix the mismatch. Basically, there is such a good agreement between the ΛCDM model and the observations that you can only tinker so much. We’ve tried to put “epicycles” everywhere we could, and so far we haven’t found anything that actually fixes it.

What about possible sources of experimental error?

Systematics are always unknowns that may be there, but the level of sophistication of the analyses suggests that if there was something major then it would have come up. People do a lot of internal consistency checks; therefore, it is becoming increasingly unlikely that it is only due to dumb systematics. The big change over the past two years or so is that you typically now have different data sets that give you the same answer. It doesn’t mean that both can’t be wrong, but it becomes increasingly unlikely. For a while people were saying maybe there is a problem with the CMB data, but now we have removed those data out of the equation completely and there are different lines of evidence that give a local value hovering around 73 km s⁻¹ Mpc⁻¹, although it’s true that the truly independent ones are in the range 70–73 km s⁻¹ Mpc⁻¹. A lot of the data for local measurements have been made public, and although it’s not a very glamorous job to take someone else’s data and re-do the analysis, it’s very important.

Is there a way to categorise the very large number of models vying to explain the Hubble tension?

Values of the Hubble constant

Until very recently, proposed solutions could be categorised as modifying either the early or the late universe. But if this is really the case, then the tension should show up in other observables, specifically the matter density and age of the universe, because it’s a very constrained system. Perhaps there is some global solution, so a little change here and a little in the middle, and a little there … and everything would come together. But that would be rather unsatisfactory because you can’t point your finger at what the problem was. Or maybe it’s something very, very local – then it is not a question of cosmology, but of whether the value of the Hubble constant we measure here is a global value at all. I don’t know how to choose between these possibilities, but the way the observations are going makes me wonder if I should start thinking in that direction. I am trying to be as model-agnostic as possible. Firstly, there are many other people who are thinking in terms of models and they are doing a wonderful job. Secondly, I don’t want to be biased. Instead I am trying to see if I can think one step removed from a particular model or parameterisation, which is very difficult.

What are the prospects for more precise measurements?

For the CMB, we have the CMB-S4 proposal and the Simons Array. These experiments won’t make a huge difference to the precision of the primary temperature-fluctuation measurements, but will be useful to disentangle possible solutions that have been proposed because they will focus on the polarisation of the CMB photons. As for the local measurements, the Dark Energy Spectroscopic Instrument, which started observations in May, will measure baryon acoustic oscillations at the level of galaxies to further nail down the expansion history of the low-redshift universe. However, it will not help at the level of local measurements, which are being pursued instead by the SH0ES collaboration. There is also another programme in Chicago focusing on the so-called tip of the red-giant-branch technique, with more results to come out. Observations of multiple images from strong gravitational lensing is another promising avenue that is very actively pursued, and, if we are lucky, gravitational waves with optical counterparts will bring in another important piece of the puzzle. 

If we are lucky, gravitational waves with optical counterparts will bring in another important piece of the puzzle

How do we measure the Hubble constant from gravitational waves?

It’s a beautiful measurement, as you can get a distance measurement without having to build a cosmic distance ladder, which is the case with the other local measurements that build distances via Cepheids, supernovae, etc. The recession velocity of the GW source comes from the optical counterpart and its redshift. The detection of the GW170817 event enabled researchers to estimate the Hubble constant to be 70 km s⁻¹ Mpc⁻¹, for example, but the uncertainties using this novel method are still very large, in the region of 10%. A particular source of uncertainty comes from the orientation of the gravitational-wave source with respect to Earth, but this will come down as the number of events increases. So this route provides a completely different window on the Hubble tension. Gravitational waves have been dubbed, rather poetically, “standard sirens”. When these determinations of the Hubble constant will become competitive with existing measurements really depends on how many events are out there. Upgrades to LIGO and VIRGO, plus next-generation gravitational-wave observatories, will help in this regard – but what if the measurements end up clustering between or beyond the late- and early-time measurements? Then we really have to scratch our heads!
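
In its simplest form the standard-siren measurement is one line of arithmetic. The numbers in the sketch below are approximate values for GW170817 and its host galaxy NGC 4993; the published analysis marginalises over the source inclination and peculiar velocities:

```python
C_KM_S = 299_792.458      # speed of light in km/s

z_host   = 0.0098         # redshift of the host galaxy (approximate)
d_gw_mpc = 42.0           # luminosity distance from the GW amplitude (approximate)

H0 = C_KM_S * z_host / d_gw_mpc   # Hubble's law: v = H0 * d, with v ~ c * z
print(f"H0 ~ {H0:.0f} km/s/Mpc")  # ~70, with roughly 10% uncertainty per event
```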

How can results from particle physics help? 

Principally, if we learn something about dark matter it could force us to reconsider our entire way of fitting the observations, perhaps in a way that we haven’t thought of, because dark matter may be hot rather than cold, or something else that interacts in completely different ways. Neutrinos are another possibility. There are models where neutrinos don’t behave as the Standard Model predicts yet still fit the CMB observations. Before the Hubble tension came along, the hope was to say that we have this wonderful model of cosmology that fits really well and implies that we live in a maximally boring universe. Then we could have used that to eventually make the connection to particle physics, say, by constraining neutrino masses or the temperature of dark matter. But if we don’t live in a maximally boring universe, we have to be careful about playing this game, because the universe could be much, much more interesting than we assumed.

Opinion Cosmologist and theoretical physicist Licia Verde discusses the current tension between early- and late-time measurements of the expansion rate of the universe. https://cerncourier.com/wp-content/uploads/2021/07/162566054182328192-3.jpeg
Accelerators meet gravitational waves https://cerncourier.com/a/accelerators-meet-gravitational-waves/ Tue, 25 May 2021 10:01:57 +0000 https://preview-courier.web.cern.ch/?p=92380 Gravitational waves crease and stretch the fabric of spacetime as they ripple out across the universe, potentially causing observable effects on beams in storage rings.

Gravitational waves (GWs) crease and stretch the fabric of spacetime as they ripple out across the universe. As they pass through regions where beams circulate in storage rings, they should therefore cause charged-particle orbits to contract and stretch as the particles climb new peaks and plumb new troughs, with potentially observable effects.

SRGW2021

Proposals in this direction have appeared intermittently over the past 50 years, including during and after the construction of LEP and the LHC. Now that the existence of GWs has been established by the LIGO and VIRGO detectors, and as new, even larger storage rings are being proposed in Europe and China, this question has renewed relevance. We are on the cusp of the era of GW astronomy — a young and dynamic domain of research with much to discover, in which particle accelerators could conceivably play a major role.

From 2 February to 31 March this year, a topical virtual workshop titled “Storage Rings and Gravitational Waves” (SRGW2021) shone light on this tantalising possibility. Organised within the European Union’s Horizon 2020 ARIES project, the meeting brought together more than 100 accelerator experts, particle physicists and members of the gravitational-physics community to explore several intriguing proposals.

Theoretically subtle

GWs are extremely feebly interacting. The cooling and expanding universe should have become “transparent” to them early in its history, long before the timescales probed through other known phenomena. Detecting cosmological backgrounds of GWs would therefore provide us with a picture of the universe at earlier times than we can currently access, prior to photon decoupling and Big-Bang nucleosynthesis. It could also shed light on high-energy phenomena, such as high-temperature phase transitions, inflation and new heavy particles that cannot be directly produced in the laboratory.

Gravitational wave sources and sensitivities

In the opening session of the workshop, Jorge Cervantes (ININ Mexico) presented a vivid account of the history of GWs, revealing how subtle they are theoretically. It took about 40 years and a number of conflicting papers to definitively establish their existence. Bangalore S. Sathyaprakash (Penn State and Cardiff) reviewed the main expected sources of GWs: the gravitational collapse of binaries of compact objects such as black holes, neutron stars and white dwarfs; supernovae and other transient phenomena; spinning neutron stars; and stochastic backgrounds with either astrophysical or cosmological origins. The GW frequency range of interest extends from 0.1 nHz to 1 MHz (see figure “Sources and sensitivities”).

The frequency range of interest extends from 0.1 nHz to 1 MHz

Raffaele Flaminio (LAPP Annecy) reviewed the mindboggling precision of VIRGO and LIGO, which can measure motion 10,000 times smaller than the width of an atomic nucleus. Jörg Wenninger (CERN) reported the similarly impressive sensitivity of LEP and the LHC to small effects, such as tides and earthquakes on the other side of the planet. Famously, LEP’s beam-energy resolution was so precise that it detected a diurnal distortion of the 27 km ring at an amplitude of a single millimetre, and the LHC beam-position-monitor system can achieve measurement resolutions on the average circumference approaching the micrometre scale over time intervals of one hour. While impressive given that these machines were designed with completely different goals in mind, this is still far from the precision achieved by LIGO and VIRGO. However, one can strongly enhance the sensitivity to GWs by exploiting resonant effects and the long distances travelled by the particles over their storage times. In one hour, protons at the LHC travel around the ring about 40 million times. In principle, the precision of modern accelerator optics could allow storage rings and accelerator technologies to cover a portion of the enormous GW frequency range of interest.

Resonant responses

Since the invention of the synchrotron, storage rings have been afflicted by difficult-to-control resonance effects which degrade beam quality. When a new ring is commissioned, accelerator physicists work diligently to “tune” the machine’s parameters to avoid such effects. But could accelerator physicists turn the tables and seek to enhance these effects and observe resonances caused by the passage of GWs?

In accelerators and storage rings, charged particles are steered and focused in the two directions transverse to their motion by dipole, quadrupole and higher-order magnets; the resulting transverse oscillations are the “betatron motion” of the beam. The beam is also kept bunched in the longitudinal plane as a result of an energy-dependent path length and oscillating electric fields in radio-frequency (RF) cavities; the resulting longitudinal oscillations are the “synchrotron motion” of the beam. A gravitational wave can resonantly interact either with the transverse betatron motion of a stored beam, at a frequency of several kHz, or with the longitudinal synchrotron motion, at a frequency of tens of hertz.
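
Where those frequencies come from is simple arithmetic, sketched below with typical (not official) LHC numbers:

```python
f_rev = 11_245.0   # LHC revolution frequency in Hz (~11 kHz)
q_x   = 0.31       # fractional part of the horizontal betatron tune (typical)
q_s   = 0.002      # synchrotron tune (order of magnitude)

print(f"turns per hour:           {f_rev * 3600:.2e}")          # ~4e7 turns
print(f"lowest betatron sideband: {q_x * f_rev / 1e3:.1f} kHz") # a few kHz
print(f"synchrotron frequency:    {q_s * f_rev:.0f} Hz")        # tens of Hz
```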

Antenna optics

Katsunobu Oide (KEK and CERN) discussed the transverse betatron resonances that a gravitational wave can excite for a beam circulating in a storage ring. Typical betatron frequencies for the LHC are a few kHz, potentially offering sensitivity to GWs with frequencies of a similar order of magnitude. Starting from a standard 30 km ring, Oide-san proposed special beam-optical insertions with a large beta function, which would serve as “GW antennas” to enhance the resonance strength, resulting in 37.5 km-long optics (see figure “Antenna optics”). Among several parameters, the sensitivity to GWs should depend on the size of the ring. Oide derived a special resonance condition, k_GW R ± 2 = Q_x, with R the ring radius, k_GW the GW wave number and Q_x the horizontal betatron tune.

Suvrat Rao (Hamburg University) presented an analysis of the longitudinal beam response of the LHC. An impinging GW affects the revolution period, in a similar way to the static gravitational-gradient effect due to the presence of Mont Blanc (which alters the revolution time at the level of 10⁻¹⁶ s) and the diurnal effect of the changing locations of the Sun and Moon (10⁻¹⁸ s) — the latter effect being about six orders of magnitude smaller than the tidal effect on the ring circumference.

The longitudinal beam response to a GW should be enhanced for perturbations close to the synchrotron frequency, which, for the LHC, would be in the range 10 to 60 Hz. Raffaele D’Agnolo (IPhT) estimated the sensitivity to the gravitational strain, h, at the synchrotron frequency, without any backgrounds, as h ~ 10⁻¹³, and listed three possible paths to further improve the sensitivity by several orders of magnitude. Rao also highlighted that storage-ring GW detection potentially allows for an Earth-based GW observatory sensitive to millihertz GWs, which could complement space-based laser interferometers such as LISA, planned for launch in 2034. This would improve the sky localisation of GW sources, which is useful for electromagnetic follow-up studies with astronomical telescopes.

Out of the ordinary

More exotic accelerators were also mooted. A “coasting-beam” experiment might have zero restoring voltage and no synchrotron oscillations. Cold “crystalline” beams of stable, ordered 1D, 2D or 3D structures of ions could open up a whole new frequency spectrum, as the phonon spectrum that could be excited by a GW could easily extend up to the MHz range. Witek Krasny (LPNHE) suggested storing beams of unstable or excited particles in the LHC: their decay times and transition rates could be modified by an incident GW. The stored particles could, for example, include the excited partially stripped heavy ions that are the basis of a “gamma factory”.

Finally on the storage-ring front, Andrey Ivanov (TU Vienna) and co-workers discussed the possibly shrinking circumference of a storage ring, such as the 1.4 km light source SPring-8 in Japan, under the influence of the relic GW background.

The Gertsenshtein effect

Delegates at SRGW2021 also proposed completely different ways of using accelerator technology to detect GWs. Sebastian Ellis (IPhT) explained how an SRF cavity might act as a resonant bar or serve as a Gertsenshtein converter, in both cases converting a graviton into a photon in the presence of a strong background magnetic field and yielding a direct electromagnetic signal — similar to axion searches. Related attempts at GW detection using cavities were pioneered in the 1970s by teams in the Soviet Union and Italy, but RF technology has made big strides in quality factors, cooling and insulation since then, and a new series of experiments appears to be well justified.

Another promising approach for GW detection is atomic-beam interferometry. Instead of light interference, as in LIGO and VIRGO, an incident GW would cause interference between carefully prepared beams of cold atoms. This approach is being pursued by the recently approved AION experiment using ultra-cold-strontium atomic clocks over increasingly large path lengths, including the possible use of an LHC access shaft to house a 100-metre device targeting the 0.01 to 1 Hz range. Meanwhile, a space-based version, AEDGE, could be realised with a pair of satellites in medium Earth orbit separated by 4.4×10⁷ m.

Storage rings as sources

Extraordinarily, storage rings could act not only as GW detectors, but also as observable sources of GWs. Pisin Chen (NTU Taiwan) discussed how relativistic charged particles executing circular orbital motion can emit gravitational waves in two channels: “gravitational synchrotron radiation” (GSR) emitted directly by the massive particle, and “resonant conversion” in which, via the Gertsenshtein effect, electromagnetic synchrotron radiation (EMSR) is converted into GWs.

Gravitons could be emitted via “gravitational beamstrahlung”

John Jowett (GSI, retired from CERN) and Fritz Caspers (also retired from CERN) recalled that GSR from beams at the SPS and other colliders had been discussed at CERN as early as the 1980s. It was realised that these beams would be among the most powerful terrestrial sources of gravitational radiation, although the total radiated power would still be many orders of magnitude lower than from regular synchrotron radiation. The dominant frequency of direct GSR is the revolution frequency, 10 kHz, while the dominant frequency of resonant EMSR–GSR conversion is a factor γ³ higher, around 10 THz at the LHC, conceivably allowing the observation of gravitons. If all particles and bunches of a beam were to excite the GW coherently, the space-time metric perturbation has been estimated to be as large as h_GSR ~ 10⁻¹⁸. Gravitons could also be emitted via “gravitational beamstrahlung” during the collision with an opposing beam, perhaps producing the most prominent GW signal at proposed future lepton colliders. At the LHC, argued Caspers, such signals could be detected by a torsion-balance experiment with a very sensitive, resonant mechanical pickup installed close to the beam in one of the arcs. In a phase-lock mode of operation, an effective resolution bandwidth of millihertz or below could be possible, opening the exciting prospect of detecting synthetic sources of GWs.

Towards an accelerator roadmap

The concluding workshop discussion, moderated by John Ellis (King’s College London), focused on the GW-detection proposals considered closest to implementation: resonant betatron oscillations near 10 kHz; changes in the revolution period using “low-energy” coasting ion beams without a longitudinally focusing RF system; “heterodyne” detection using SRF cavities up to 10 MHz; beam-generated GWs at the LHC; and atomic interferometry. These potential components of a future R&D plan cover significant regions of the enormous GW frequency space.

Apart from an informal meeting at CERN in the 1990s, SRGW2021 was the first workshop to link accelerators and GWs and bring together the relevant scientific communities. Lively discussions in this emerging field attest to the promise of employing accelerators in a completely different way to either detect or generate GWs. The subtleties of the particle dynamics when embedded in an oscillating fabric of space and time, and the inherent sensitivity problems in detecting GWs, pose exceptional challenges. The great interest prompted by SRGW2021, and the tantalising preliminary findings from this workshop, call for more thorough investigations into harnessing future storage rings and accelerator technologies for GW physics.

Meeting report Gravitational waves crease and stretch the fabric of spacetime as they ripple out across the universe, potentially causing observable effects on beams in storage rings. https://cerncourier.com/wp-content/uploads/2021/05/SRGW2021_resized-191.jpg
ANAIS challenges DAMA dark-matter claim https://cerncourier.com/a/anais-challenges-dama-dark-matter-claim/ Wed, 24 Mar 2021 14:18:26 +0000 https://preview-courier.web.cern.ch/?p=91906 First results from the ANAIS experiment show no annual modulation, in conflict with longstanding results from the DAMA experiment.

ANAIS shows no modulation

Despite the strong indirect evidence for the existence of dark matter, a plethora of direct searches have not resulted in a positive detection. The exception is the famous set of results from the DAMA/NaI experiment at the Gran Sasso underground laboratory in Italy, first reported in the late 1990s, which show a modulating signal compatible with Earth moving through a region containing weakly interacting massive particles (WIMPs). These results were backed up more recently by measurements from the follow-up DAMA/LIBRA detector. In a 2018 combination of the data, the reported evidence for a dark-matter signal was as high as 13σ.

Now, the Annual modulation with NaI Scintillators (ANAIS) collaboration, which aims to directly reproduce the DAMA results using the same detector concept, has published the results from its first three years of operations. The results, which were presented today at the Rencontres de Moriond, show a clear contradiction with DAMA, indicating that we are still no closer to finding dark matter.

The DAMA results are based on searches for an annual modulation in the rate of WIMP interactions in a detector comprising NaI crystals. First proposed theoretically in 1986 by Andrzej Drukier, Katherine Freese and David Spergel, this modulation results from the annual variation of Earth’s velocity with respect to the dark-matter halo of the galaxy. Around 2 June, the velocities of Earth and the Sun are aligned with respect to the galaxy, whereas half a year later they are anti-aligned, resulting in a modulated WIMP interaction rate in a detector on Earth. Although this method has advantages compared to more direct detection methods, it requires that other potential sources of such a seasonal modulation be ruled out. Despite the significant modulation with the correct phase observed by DAMA, its results were not immediately accepted as a clear signal of dark matter due to the remaining possibility of instrumental effects, seasonal background modulation or artefacts from the analysis.
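
The template behind all of these searches is a simple cosine in time. A minimal sketch, with an invented mean rate and amplitude (the one-year period and the 2 June phase are the physical inputs):

```python
import numpy as np

T_YEAR = 365.25       # modulation period in days
T0     = 152.5        # phase: approximately 2 June, in days from 1 January
R0, SM = 1.0, 0.02    # mean rate and modulation amplitude (invented units)

def expected_rate(t_days):
    """Expected event rate R(t) = R0 + Sm*cos(2*pi*(t - t0)/T)."""
    return R0 + SM * np.cos(2.0 * np.pi * (t_days - T0) / T_YEAR)

t = np.arange(0, 2 * 365)
peak = int(t[np.argmax(expected_rate(t))])
print(f"rate peaks on day {peak} of the year")   # early June, as expected
```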

Over the years the significance of the DAMA results has continued to increase, while other dark-matter searches, in particular with the XENON1T and LUX experiments, found no evidence of WIMPs capable of explaining the DAMA results. The fact that only the final analysis products from DAMA have been made public has also hampered attempts to prove or disprove alternative origins of the modulation. To overcome this, the ANAIS collaboration set out to reproduce the data with an independent detector intentionally similar to DAMA’s, consisting of NaI(Tl) scintillators read out by photomultipliers, placed in the Canfranc Underground Laboratory deep beneath the Pyrenees in northern Spain. With this approach ANAIS can rule out any instrument-induced effects while producing data in a controlled way and studying it in detail with different analysis procedures.

The ANAIS results agree with the first results published by the COSINE-100 collaboration

ANAIS and DAMA signals

The first three years of ANAIS data have now been unblinded, and the results were posted on arXiv on 1 March. None of the analysis methods used shows any sign of a modulation, with a statistical analysis ruling out the DAMA results at 99% confidence. The results therefore narrow down the possible causes of the modulation observed by DAMA to either differences between the detectors or differences in the analysis method. One specific issue raised by the ANAIS collaboration concerns the background-subtraction method: in the DAMA analysis, the mean background rate for each year is subtracted from that full year’s raw data. If the background is not constant during the year, however, this produces an artificial saw-tooth shape which, with limited statistics, can be fitted by a sinusoid. This effect was already pointed out in a 2020 publication by a group from INFN, which showed how a slowly increasing background can produce exactly the modulation observed by DAMA. The ANAIS collaboration describes its background in detail, shows that it is indeed not constant, and provides suggestions for a more robust treatment of the background.
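
The distortion is easy to reproduce numerically. A minimal sketch (all values illustrative): generate a slowly rising background with no signal, subtract each calendar year's mean as in the DAMA procedure, and fit the residual with a one-year sinusoid:

import numpy as np
from scipy.optimize import curve_fit

t = np.arange(6 * 365)                       # six years of daily data
bkg = 1.0 + 2e-4 * t                         # slowly rising background, no signal

# DAMA-style subtraction: remove each calendar year's mean rate
residual = bkg.copy()
for y in range(6):
    year = slice(y * 365, (y + 1) * 365)
    residual[year] -= residual[year].mean()  # leaves a saw-tooth shape

# With limited statistics, the saw-tooth is well fitted by a sinusoid
model = lambda tt, A, phi: A * np.cos(2 * np.pi * tt / 365.0 + phi)
(A, phi), _ = curve_fit(model, t, residual, p0=(0.03, 0.0))
print(f"artificial 'modulation' amplitude: {A:.2e}")  # nonzero despite no signal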

The ANAIS results also agree with the first results published by the COSINE-100 collaboration in 2019 which, again using a NaI-based detector, found no evidence of a yearly modulation. Thanks to the continued efforts of these two groups, and with the ANAIS collaboration planning to make its data public to allow independent analyses, the more than 20-year-old DAMA anomaly looks likely to be settled in the next few years.

The post ANAIS challenges DAMA dark-matter claim appeared first on CERN Courier.

]]>
News First results from the ANAIS experiment show no annual modulation, in conflict with longstanding results from the DAMA experiment. https://cerncourier.com/wp-content/uploads/2021/03/Astro-1.jpg
Lifting the veil on supernova 1987A https://cerncourier.com/a/lifting-the-veil-on-supernova-1987a/ Wed, 24 Feb 2021 14:10:03 +0000 https://preview-courier.web.cern.ch/?p=91375 High-energy X-rays reveal possible missing neutron star formed by the collapse of the famous supernova's progenitor star.

The post Lifting the veil on supernova 1987A appeared first on CERN Courier.

]]>
The dusty core of SN1987A

On 23 February 1987 astronomers around the world saw an extremely bright supernova, now called SN1987A. It was the closest supernova observed in over 300 years and was visible to the naked eye. The event was quickly confirmed to be the result of the collapse of “Sanduleak –69 202”, a blue supergiant star in the Large Magellanic Cloud. As the first nearby supernova of the era of modern astronomy, SN1987A remains one of the most monitored objects in the sky. Apart from confirming several important theories, such as radioactive decay being the source of the observed optical emission, the supernova also raised a number of questions that remain unanswered. The most important: where is the remnant of the progenitor star?

Despite several false detection claims in the past, evidence is mounting that Sanduleak –69 202 collapsed into a neutron star

Despite several false detection claims in the past, evidence is mounting that Sanduleak –69 202 collapsed into a neutron star that is becoming more visible as the dust around it starts to settle. A new analysis by researchers in Italy and Japan based on high-energy X-ray data from the Chandra and NuSTAR space telescopes adds the latest support to this idea.

Even before the optical light from SN1987A was detected, several neutrino detectors around the world saw a burst of neutrinos. The strongest signal was observed by Japan’s Kamiokande II detector, which recorded a total of 12 antineutrinos approximately three hours before the first optical light reached Earth. The detection seemed to confirm theoretical predictions for a star the size of Sanduleak –69 202: namely that it should collapse into a neutron star, emitting large numbers of neutrinos in the process. The optical light arrives later because it is only produced when the shock waves from the collapse reach the surface of the star.

Since the newly formed neutron star would be expected to emit large amounts of energy at various wavelengths, one might assume it would be relatively easy to detect. However, no sign of it was found in follow-up searches over the past three decades, leading to much speculation about the fate of the remnant and its surrounding medium.

The first signs of the stellar remnant of SN1987A came from radio observations by the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile in 2019. A group led by Phil Cigan from Cardiff University in the UK used ALMA data at various frequencies to study the core of SN1987A. Close to the centre they found a bright “blob”, whose emission appeared compatible with radio emission from particles accelerated by a neutron star – a pulsar wind nebula. Although the researchers could not exclude local heating from 44Ti produced during the supernova as the source, the results provided the first hint that the blob houses a young neutron star.

Wind power

Inspired by the ALMA results, Emanuele Greco from the University of Palermo and coworkers studied the same region using X-ray data from Chandra and NuSTAR taken during 2012, 2013 and 2014. They found that the detected soft X-ray emission (0.5–8 keV) was compatible with thermal emission produced as the supernova’s shock waves ploughed into the circumstellar medium. At higher energies (10–20 keV), however, the emission was clearly non-thermal in nature. Describing their findings in a preprint posted in January, the group studied the two possible sources of such emission: synchrotron radiation from a pulsar wind nebula, and synchrotron radiation produced in shock waves in the region. While models for both ideas fit the spectral data, the pulsar wind nebula is favoured because shock emission of this kind would not be expected from such a young remnant.

It appears that after 34 years of searching we will finally understand what happened in SN1987A

The reason this neutron star has escaped previous observations at optical and soft X-ray energies is likely absorption by cold dust produced during the supernova, which still absorbs a large part of the synchrotron emission observed in X-rays, especially at lower energies. The dust is expected to heat up during the coming decades, thereby becoming transparent to lower energy emission. Greco and colleagues predict that, if the emission is indeed induced by a neutron star, it will become visible to Chandra in the soft X-ray regime by 2030.

Although astronomers have only two observational hints that Sanduleak –69 202 did, as theory predicts, collapse into a neutron star, it appears that after 34 years of searching we will finally understand what happened in SN1987A.

The post Lifting the veil on supernova 1987A appeared first on CERN Courier.

]]>
News High-energy X-rays reveal possible missing neutron star formed by the collapse of the famous supernova's progenitor star. https://cerncourier.com/wp-content/uploads/2021/02/CCMarApr21_NA_astrowatch.jpg
Cosmic plasma-wakefield acceleration https://cerncourier.com/a/cosmic-plasma-wakefield-acceleration/ Wed, 27 Jan 2021 08:38:27 +0000 https://preview-courier.web.cern.ch/?p=90800 Recent studies suggest that plasma-wakefield acceleration also occurs naturally, potentially explaining the highest energy cosmic rays.

The post Cosmic plasma-wakefield acceleration appeared first on CERN Courier.

]]>
Cygnus A

The ability to accelerate charged particles using the “wakefields” of plasma density waves offers the promise of high-energy particle accelerators that are more compact than those based on radio-frequency cavities. Proposed in 1979, the idea is to create a wave inside a plasma upon which electrons can “surf” and gain energy over short distances. Although highly complex, wakefield acceleration (WFA) driven by laser pulses or electron beams has been successfully used to accelerate electron beams to tens of GeV within distances of less than a metre, and the AWAKE experiment at CERN is attempting to achieve higher energy gains by using protons as drive beams. Recent studies suggest that WFA may also occur naturally, potentially offering an explanation for some of the highest energy cosmic rays ever observed.

So-called Fermi acceleration, first proposed by Enrico Fermi in 1949, is considered to be the main mechanism responsible for high-energy cosmic rays. In this process, charged particles are accelerated by relativistic shockwaves occurring within jets emitted by black-hole binaries, active galactic nuclei or gamma-ray bursts, to name just a few sources. As a charged particle travels within the jet it gains energy each time it crosses the shock wave, until the magnetic field in the environment can no longer contain it. This process predicts the observed power-law spectrum of cosmic rays quite well, at least up to energies of around 10^19 eV. Beyond this energy, however, Fermi acceleration becomes less efficient as the particles start to lose energy through collisions and/or synchrotron radiation. The existence of ultra-high-energy cosmic rays (UHECRs), which have been observed at energies up to 10^21 eV, indicates that a different acceleration mechanism could be at play in that domain. Thanks to its very high efficiency, WFA could provide such a mechanism.
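
For scale, the reach of such magnetically confined accelerators is usually estimated with the Hillas criterion, E_max ≈ Z e β B R. This back-of-the-envelope bound is standard lore rather than a result of the studies discussed here:

import scipy.constants as sc

def hillas_emax_eV(Z, B_microgauss, R_kpc, beta=1.0):
    """Hillas criterion: acceleration stops once the particle's
    gyroradius outgrows the source, so E_max ~ Z e beta B R c."""
    B = B_microgauss * 1e-10         # microgauss -> tesla
    R = R_kpc * 1e3 * sc.parsec      # kiloparsec -> metres
    return Z * beta * B * R * sc.c   # in eV (the charge e cancels)

# Microgauss fields on kpc scales confine protons only up to ~1e18 eV;
# 1e21 eV events therefore need a much larger B*R, or a more efficient
# mechanism such as wakefield acceleration.
print(f"{hillas_emax_eV(Z=1, B_microgauss=1, R_kpc=1):.1e} eV")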

Although there are clearly no laser beams in astrophysical objects, plasmas that can support waves are found in many astrophysical settings. For example, in theories developed by Toshiki Tajima of the University of California at Irvine (UCI), one of the inventors of WFA technology, waves could be produced by instabilities in the accretion disks around compact objects such as black holes. These accretion disks can periodically transition from a highly magnetised to a weakly magnetised state, emitting electromagnetic waves that can propagate into the disk’s jets in the form of Alfvén waves. As these waves continue to propagate along the jets they transform back into electromagnetic waves that can accelerate electrons on the front of the plasma’s “bow wake” and protons on the back of it.

Clear predictions

The energies that are theoretically achievable in cosmic-ray WFA depend on the mass of the compact object, as do the periodicities with which such waves can be produced. This allows clear predictions to be made for a range of different objects, which can be tested against observational data.

Groups based at UCI and at RIKEN in Japan recently tested these predictions on a range of astrophysical objects spanning 1 to 10^9 solar masses. Although not conclusive, these first comparisons between theory and observation reveal several interesting features that require further investigation. For example, WFA models predict periodic emission of UHECRs – the protons on the back of the bow wake – in coincidence with electromagnetic radiation produced by the electrons on its front. Due to interactions with the intergalactic medium, UHECRs are also expected to produce secondary particles, including neutrinos. WFA could thereby also explain periodic outbursts of neutrinos coinciding with gamma-rays from, for example, blazars, for which evidence was recently found by the IceCube experiment in collaboration with a range of electromagnetic instruments. Additionally, WFA could explain the non-uniformity of the UHECR sky recently reported by the Pierre Auger Observatory (see CERN Courier December 2017 p15), as it allows cosmic rays with energies up to 10^24 eV to be produced within objects that lie within the location of the observed hot-spot.

In concert with future space-based UHECR detectors such as JEM-EUSO and POEMMA, further analysis of existing data should definitively answer the question of whether WFA does indeed occur in space. The clear predictions relating to periodicity, and the coincident emission of neutrinos, gamma-rays and other electromagnetic radiation, make it an ideal subject to study within the multi-messenger frameworks that are currently being set up.

The post Cosmic plasma-wakefield acceleration appeared first on CERN Courier.

]]>
News Recent studies suggest that plasma-wakefield acceleration also occurs naturally, potentially explaining the highest energy cosmic rays. https://cerncourier.com/wp-content/uploads/2021/01/CCJanFeb21_NA_astrowatch.jpg
Pulsars hint at low-frequency gravitational waves https://cerncourier.com/a/pulsars-hint-at-low-frequency-gravitational-waves/ Fri, 23 Oct 2020 15:25:22 +0000 https://preview-courier.web.cern.ch/?p=89431 Observations of millisecond pulsars by the NANOGrav collaboration show early signs of consistency with gravitational waves with a period of decades.

The post Pulsars hint at low-frequency gravitational waves appeared first on CERN Courier.

]]>
NANOGrav uses pulsars to detect potential distortions in space–time

The direct detection of a gravitational wave (GW) in 2015 by the LIGO and Virgo collaborations confirmed the existence of these long-sought events. However, the GW events detected so far cover only a small fraction of the vast GW spectrum, in the kHz regime, and so only probe certain phenomena, such as the mergers of stellar-mass black holes and neutron stars. At the opposite end of the spectrum to LIGO and Virgo are pulsar timing array (PTA) experiments, which search for GWs at nHz frequencies. Such low-frequency signals can originate from supermassive black-hole binaries (SMBHBs), while in more exotic models they could be evidence of cosmic strings, phase transitions or a primordial GW background. The NANOGrav (North American Nanohertz Observatory for Gravitational Waves) collaboration has now found possible first hints of low-frequency GWs.

To detect such rumblings of space–time, which also have minute amplitudes, researchers need to track subtle movements of measurement points spread out over the scale of a galaxy. For this purpose the NANOGrav collaboration uses millisecond pulsars, several tens of which have been detected in our galaxy. Pulsars are rapidly rotating neutron stars that emit cones of electromagnetic radiation from their poles; each time a pole points towards Earth, a short pulse is detected. Not only is the pulse frequency of millisecond pulsars high, making it easier to detect small variations in arrival time, but it is also very stable over periods of many years. Combined with their great distances from Earth, this makes millisecond-pulsar emission sensitive to any small alteration of its travel path, such as those introduced by the distortion of space–time by low-frequency gravitational waves. Such waves would cause the pulses to arrive, for instance, a few nanoseconds early in January and a few nanoseconds late in June. By observing the radio emission of these objects once a week over many years, researchers can search for such effects.

The new results show a clear sign of a common spectrum between the studied pulsars

The problem is that GWs are not the only phenomena that can change the arrival time of the pulses. Changes in the Earth’s atmosphere alter the arrival time, as do changes in the position of the pulsar itself (which is usually part of a rapidly rotating binary system) and the movement of Earth with respect to the source. The complexity of the measurement lies mostly in correcting for all of these effects. The latest NANOGrav results, for example, reduce systematics by incorporating unprecedented precision (of the order of tens of km) in the orbital parameters of Jupiter.

Whereas previous results from NANOGrav and other PTA collaborations only allowed upper limits to be set on the amplitude of the GW background travelling through our galaxy, the new results show a clear sign of a spectrum common to the studied pulsars. Based on 12.5 years of data from a total of 47 pulsars, studied using the ultra-sensitive Arecibo Observatory and Green Bank Telescope, the spectrum of variations in pulse arrival times was found to agree with theoretical predictions of the GW background produced by SMBHBs. The uncertainties remain large, however, admitting alternative interpretations such as cosmic strings, which predict only a slightly different spectral shape. Furthermore, a key ingredient is still missing: a spatial correlation between the variations of different pulsars, which would confirm the quadrupole nature of GWs and provide clear proof of the origin of the signal. Finding this “smoking gun” will require longer observation times, more pulsars and smaller systematic errors – something the NANOGrav team is now working towards.
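
The missing spatial correlation has a definite predicted shape: for an isotropic GW background, general relativity predicts that the correlation between the timing residuals of two pulsars depends only on their angular separation, following the Hellings–Downs curve. A minimal sketch:

import numpy as np

def hellings_downs(theta):
    """Expected correlation between the timing residuals of two pulsars
    separated by angle theta (radians), for an isotropic GW background.
    The quadrupolar shape (positive at small angles, negative near
    90 degrees) is the 'smoking gun' PTAs are searching for."""
    x = (1.0 - np.cos(theta)) / 2.0
    with np.errstate(divide="ignore", invalid="ignore"):
        c = 0.5 - x / 4.0 + 1.5 * x * np.log(x)
    return np.where(x == 0, 0.5, c)   # zero-separation limit (no pulsar term)

print(hellings_downs(np.radians([0.0, 30, 60, 90, 120, 180])))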

While the NANOGrav collaboration remains cautious, several exotic interpretations have already been proposed. The final sentences of their preprint summarise the status of this exciting field well: “The LIGO–Virgo discovery of high-frequency, transient GWs from stellar black-hole binaries appeared meteorically, with incontrovertible statistical significance. By contrast, the PTA discovery of very-low-frequency GWs from SMBHBs will emerge from the gradual and not always monotonic accumulation of evidence and arguments. Still, our GW vista on the unseen universe continues to get brighter”.

The post Pulsars hint at low-frequency gravitational waves appeared first on CERN Courier.

]]>
News Observations of millisecond pulsars by the NANOGrav collaboration show early signs of consistency with gravitational waves with a period of decades. https://cerncourier.com/wp-content/uploads/2020/10/PTAs.gif
Weinberg on astrophysics https://cerncourier.com/a/weinberg-on-astrophysics/ Mon, 21 Sep 2020 12:24:17 +0000 https://preview-courier.web.cern.ch/?p=88692 Steven Weinberg has penned a concise account of the foundations of astrophysics of permanent value, writes our reviewer.

The post Weinberg on astrophysics appeared first on CERN Courier.

]]>
Typical introductions to astrophysics range from 500 to over 1000 pages. This trend is at odds with many of today’s students, who prepare for examinations using search engines and are often put off by ponderous treatises. Steven Weinberg’s new book wisely goes in the opposite direction. The 1979 Nobel laureate, and winner last week of a special Breakthrough Prize in fundamental physics, has written a self-contained and relatively short 214-page account of the foundations of astrophysics, from stars to galaxies. The result is an extremely pleasant read, particularly suitable for students and young practitioners in the field.

Weinberg Lectures on Astrophysics

Instead of building a large treatise, Weinberg prioritises key topics from a set of lectures he taught at the University of Texas at Austin. The book has four parts, dealing with stars, binaries, the interstellar medium and galaxies, respectively. The analysis of stellar structure starts from the study of hydrostatic equilibrium and is complemented by various classic discussions, including the mechanisms of nuclear energy generation and the Hertzsprung–Russell diagram. In view of the striking observations in 2015 by the LIGO and Virgo interferometers, the second part contains a dedicated discussion of the emission of gravitational waves by binary pulsars and coalescing binaries.

As you might expect from the classic style of Weinberg’s monographs, the book provides readers with a kit of analytic tools of permanent value. His approach contrasts with many modern astrophysics and particle-theory texts, where analytical derivations and back-of-the-envelope approximations are often replaced by numerical computations which are mostly performed by computers. By the author’s own admission, however, this book is primarily intended for those who care about the rationale of astrophysical formulas and their applications.

Weinberg’s books always stimulate a wealth of considerations on the mutual interplay of particle physics, astrophysics and cosmology

This monograph is also a fitting occasion to pay tribute to a collection of classic treatises that inspired the current astrophysical literature and remain popular among practitioners in the field. The author reveals in his preface that his interest in stellar structure began many years ago after reading Subrahmanyan Chandrasekhar’s celebrated An Introduction to the Study of Stellar Structure, reprinted by Dover in the late 1950s. Similarly, the discussions of the interstellar medium are inspired by Lyman Spitzer Jr’s equally famous monograph Physical Processes in the Interstellar Medium (1978, J. Wiley & Sons). For the benefit of curious and alert readers, these and other texts are cited in the essential bibliography at the end of each chapter.

Steven Weinberg’s books always stimulate a wealth of reflection on the mutual interplay of particle physics, astrophysics and cosmology, and the problems of dark matter, dark energy, gravitational waves and neutrino masses are today so interlocked that it is quite difficult to say where particle physics stops and astrophysics takes over. Modern science calls for multidisciplinary approaches, and while the frontiers between the different areas are fading away, the potential discovery of new laws of nature will proceed not only from concrete observational efforts but also from the correct interpretation of existing theories. If we want to understand the development of fundamental physics in the coming years, Lectures on Astrophysics will be an inspiring source of reflection and a valuable reference.

The post Weinberg on astrophysics appeared first on CERN Courier.

]]>
Review Steven Weinberg has penned a concise account of the foundations of astrophysics of permanent value, writes our reviewer. https://cerncourier.com/wp-content/uploads/2020/09/Weinberg-image-GW190521_Virgo_1667-191.jpg
Neutrinos confirm rare solar fusion process https://cerncourier.com/a/neutrinos-confirm-rare-solar-fusion-process/ Fri, 28 Aug 2020 08:50:31 +0000 https://preview-courier.web.cern.ch/?p=87987 The Borexino collaboration has used an ingenious analysis to provide the first direct proof of the CNO cycle.

The post Neutrinos confirm rare solar fusion process appeared first on CERN Courier.

]]>
Although the Sun is our closest star, much remains to be learned about its exact nature and how it produces its energy. Two different fusion processes are thought to be at play in the majority of stars: the direct fusion of hydrogen into helium, thought to be responsible for approximately 99% of the Sun’s energy, and the fusion of hydrogen into helium via the six-stage carbon–nitrogen–oxygen (CNO) cycle (see diagram below). Although the CNO cycle was theorised in the 1930s, direct proof of it was missing, so the amount of energy produced through the cycle and the abundance of elements such as carbon and nitrogen in the Sun’s core could only be estimated from models. Now the international Borexino collaboration has directly detected neutrinos produced in the CNO cycle, providing the first direct proof of this process.

The Borexino detector was developed specifically to detect the extremely rare interactions between solar neutrinos and a highly pure liquid scintillator. It comprises 278 tonnes of scintillator held in a nylon balloon deep under the mountains at the Gran Sasso National Laboratory in Italy. In 2012 the experiment detected neutrinos from the main solar fusion process. Now, one year before the end of its scheduled operations, the Borexino team has fully probed solar energy production. The discovery of the CNO process was complicated both by the lower flux of CNO neutrinos compared to those from the main fusion process, and by the close similarity between the signal and one of the main irreducible backgrounds in the detector.

In the six-stage CNO cycle a carbon nucleus absorbs a proton, undergoes a beta decay, absorbs a second and a third proton, decays again, and finally absorbs a fourth proton before splitting into a carbon nucleus and a helium nucleus, releasing around 25 MeV of energy in total. Source: Creative Commons/Borb.

Battling background
Although backgrounds from cosmic rays are minimised, trace amounts of radioactive nuclei that leak into Borexino’s active volume produce a background of the same magnitude as the sought-after signal. The most important background for the CNO analysis is 210Bi, a decay product of 210Pb, trace amounts of which can diffuse into the scintillator from the surface of the nylon balloon. Since the energy spectrum of 210Bi beta-decays resembles that induced by CNO neutrinos, the key to detecting the CNO signal was to measure the 210Bi-induced background directly. This was made possible by delving into the fluid dynamics of the liquid scintillator.

The 210Bi in Borexino’s scintillator decays to 210Po, which undergoes alpha decay with a half-life of 138 days. As the alpha decay is relatively easy to identify, the team used 210Po decays to deduce the number of 210Bi decays in the detector. However, as the different isotopes move around in the liquid, it cannot be guaranteed that the 210Bi distribution matches that of 210Po unless the flow in the detector is well understood. To overcome this, the collaboration had to reduce the flow of the scintillator by stabilising its temperature, both through insulation and direct temperature regulation. Once the 210Po decay distribution inside the detector was found to be stable over times exceeding the half-life, a region of low 210Po activity was identified and used to measure the CNO neutrinos with a well-understood and relatively low background.
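
The logic of the method can be captured in a few lines: if a constant 210Bi decay rate feeds 210Po, the 210Po alpha-decay rate relaxes towards the 210Bi rate on the 138-day half-life timescale, so counting the easily tagged alphas measures the beta background. A sketch with arbitrary illustrative rates:

import numpy as np

T_HALF = 138.0                      # days, 210Po alpha-decay half-life
LAM = np.log(2) / T_HALF

def po210_activity(t_days, bi_rate, a0):
    """210Po activity given a constant feeding 210Bi decay rate:
    the solution of dA/dt = LAM * (bi_rate - A)."""
    return bi_rate + (a0 - bi_rate) * np.exp(-LAM * t_days)

# After a few half-lives the alpha rate equals the 210Bi beta rate,
# so the easily identified alphas tag the CNO-like beta background.
for t in (0, 138, 276, 552):
    print(t, round(po210_activity(t, bi_rate=100.0, a0=20.0), 1))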

This was made possible by delving into the fluid dynamics of the liquid scintillator

The spectral measurements of the CNO cycle exclude a non-detection with a statistical significance of more than five sigma. The measured solar-neutrino flux (7.2+3.0−1.7 counts per day per 100 tonnes of target, at 68% confidence) furthermore agrees with models that predict that 1% of the energy produced in the Sun comes from the CNO process. Additionally, the results shed light on the density of elements other than hydrogen and helium – the metallicity – of the Sun’s core, which in recent years has been debated to potentially differ from that at the solar surface. The Borexino results indicate that the two are likely similar, although confirmation will require more precise measurements with future detectors.

This groundbreaking study, which required not only some of the most precise techniques used in particle physics but also complex fluid-dynamics simulations, confirms predictions made almost a century ago. In doing so it provides a first probe of the processes at the core of the Sun, and thus of other stars. Although the CNO process has now been shown to produce only a small fraction of the Sun’s energy, it is predicted to be the dominant fusion process in heavier, and therefore hotter, stars, making future high-precision studies important for understanding stellar evolution and, in turn, the evolution of the universe.

The post Neutrinos confirm rare solar fusion process appeared first on CERN Courier.

]]>
News The Borexino collaboration has used an ingenious analysis to provide the first direct proof of the CNO cycle. https://cerncourier.com/wp-content/uploads/2020/08/featured-image-CNO.jpg
Researchers grapple with XENON1T excess https://cerncourier.com/a/researchers-grapple-with-xenon1t-excess/ Thu, 02 Jul 2020 15:11:38 +0000 https://preview-courier.web.cern.ch/?p=87667 The excess could be due to a difficult-to-constrain tritium background, solar axions or solar neutrinos with a Majorana nature, says the collaboration.

The post Researchers grapple with XENON1T excess appeared first on CERN Courier.

]]>
An intriguing low-energy excess of background events recorded by the world’s most sensitive WIMP dark-matter experiment has sparked a series of preprints speculating on its underlying cause. On 17 June the XENON collaboration, which searches for dark-matter-induced nuclear recoils in XENON1T – a one-tonne liquid-xenon time-projection chamber (TPC) located underground at the Gran Sasso National Laboratory in Italy – reported an unexpected excess of electronic recoils at energies of a few keV, just above the detection threshold. Though acknowledging that the excess could be due to a difficult-to-constrain tritium background, the collaboration says solar axions and solar neutrinos with a Majorana nature, both of which would signal physics beyond the Standard Model, are credible explanations for the approximately 3σ effect.

Who needs the WIMP if we can have the axion?

Elena Aprile

“Thanks to our unprecedented low event rate in electronic recoils background, and thanks to our large exposure, both in detector mass and time, we could afford to look for signatures of rare and new phenomena expected at the lowest energies where one usually finds lots of background,” says XENON spokesperson Elena Aprile, of Columbia University in New York. “I am especially intrigued by the possibility to detect axions produced in the Sun,” she says. “Who needs the WIMP if we can have the axion?”

The XENON collaboration has been in pursuit of WIMPs, a leading cold-dark-matter candidate, since 2005, with a programme of 10 kg, 100 kg and now 1 tonne liquid-xenon TPCs. Particles scattering in the liquid xenon create both scintillation light and ionisation electrons; the latter drift upwards in an electric field towards a gaseous phase, where electroluminescence amplifies the charge signal into a light signal. Photomultiplier tubes record both the initial scintillation light and the later electroluminescence, revealing the 3D position of each interaction, and the relative magnitude of the two signals allows nuclear and electronic recoils to be differentiated. XENON1T derives its world-leading limit on WIMPs – the strictest 90% confidence limit being a cross-section of 4.1×10^-47 cm² for WIMPs with a mass of 30 GeV – from the very low rate of nuclear recoils observed from February 2017 to February 2018.

XENON1T low-energy electronic recoils

A surprise was in store, however, in the same data set, which also revealed 285 electronic recoils at the lower end of XENON1T’s energy acceptance, from 1 to 7 keV, over an expected background of 232±15. The sole background explanation for the excess that the collaboration has not been able to rule out is a minute concentration of tritium in the liquid xenon. With a half-life of 12.3 years and a relatively low decay energy of 18.6 keV, an unexpected contribution of tritium decays is favoured over XENON1T’s baseline background model at approximately 3σ. “We can measure extremely tiny amounts of various potential background sources, but unfortunately, we are not sensitive to a handful of tritium atoms per kilogram,” explains deputy XENON1T spokesperson Manfred Lindner, of the Max Planck Institute for Nuclear Physics in Heidelberg. Cryogenic distillation plus running the liquid xenon through a getter is expected to remove tritium below the relevant level, he says, but this needs to be cross-checked. The question is whether a minute amount of tritium could somehow remain in the liquid xenon, or whether some makes it from the detector materials into the liquefied xenon in the detector. “I personally think that the observed excess could equally well be a new background or new physics. About 3σ implies of course a certain statistical chance for a fluctuation, but I find it intriguing to have this excess not at some random place, but towards the lower end of the spectrum. This is interesting since many new-physics scenarios generically lead to a 1/E or 1/E^2 enhancement which would be cut off by our detection threshold.”
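
For orientation, a naive counting estimate (the Poisson fluctuation of the expected background combined in quadrature with its quoted uncertainty) gives roughly 2.5σ; the collaboration's ~3σ figures come from full spectral fits rather than from this simple arithmetic:

import math

observed, expected, sigma_bkg = 285, 232, 15
# Poisson fluctuation of the background plus its model uncertainty
sigma = math.sqrt(expected + sigma_bkg**2)
print(f"naive significance: {(observed - expected) / sigma:.1f} sigma")  # ~2.5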

Solar axions

One solution proposed by the collaboration is solar axions. Axions are a consequence of a new U(1) symmetry proposed in 1977 to explain the immeasurably small degree of CP violation in quantum chromodynamics – the so-called strong CP problem – and are also a dark-matter candidate. Though XENON1T is not expected to be sensitive to dark-matter axions, should they exist they would be produced by the Sun at energies consistent with the XENON1T excess. According to this hypothesis, the axions would be detected via the “axioelectric” effect, an axion analogue of the photoelectric effect. Though a good fit phenomenologically, and, like tritium, favoured over the background-only hypothesis at approximately 3σ, the solar-axion explanation is disfavoured by astrophysical constraints: for example, it would lead to significant extra energy loss in stars.

Axion helioscopes such as the CERN Axion Solar Telescope (CAST), which points a prototype LHC dipole magnet at the Sun to convert solar axions into X-ray photons, will help to test the hypothesis. “It is not impossible to have an axion model that shows up in XENON but not in CAST,” says deputy spokesperson Igor Garcia Irastorza of the University of Zaragoza, “but CAST already constrains part of the axion interpretation of the XENON signal.” Its successor, the International Axion Observatory (IAXO), which is set to begin data taking in 2024, will have improved sensitivity. “If the XENON1T signal is indeed an axion, IAXO will find it within the first hours of running,” says Garcia Irastorza.

A second new-physics explanation for XENON1T’s low-energy excess is an enhanced rate of solar neutrinos interacting in the detector. In the Standard Model neutrinos have a negligibly small magnetic moment. Should they be Majorana rather than Dirac fermions – identical to their antiparticles – their magnetic moment would be larger and proportional to their mass, though still undetectable. New physics beyond the Standard Model could, however, enhance the magnetic moment further, leading to a larger interaction cross-section at low energies and an excess of low-energy electronic recoils. XENON1T fits indicate that solar Majorana neutrinos with an enhanced magnetic moment are also favoured over the background-only hypothesis at the level of 3σ.

The absorption of dark photons could explain the observed excess.

Joachim Kopp

The community has quickly chimed in with additional ideas, with around 40 papers appearing on the arXiv preprint server since the result was released. One possibility, notes CERN theorist Joachim Kopp, is a heavy dark-matter particle that annihilates or decays to a second, much lighter, “boosted dark-matter” particle that could scatter off electrons via some new interaction. Another class of proposed dark-matter models, he says, is “inelastic dark matter”, in which dark-matter particles down-scatter in the detector into another dark-matter state just a few keV below the original one, the liberated energy then being seen in the detector. “An explanation I like a lot is in terms of dark photons,” he says. “The Standard Model would be augmented by a new U(1) gauge symmetry whose corresponding gauge boson, the dark photon, would mix with the Standard-Model photon. Dark photons could be abundant in the Universe, possibly even making up all the dark matter. Their absorption in the XENON1T detector could explain the observed excess.”

“The strongest asset we have is our new detector, XENONnT,” says Aprile. Despite COVID-19, the collaboration is on track to take first data before the end of 2020, she says. XENONnT will boast three times the fiducial volume of XENON1T and a factor-of-six reduction in backgrounds, and should be able to verify or refute the signal within a few months of data taking. “An important question is whether the signal has an annual modulation of about 7%, correlated with the distance to the Sun,” notes Lindner. “This would be a strong hint that it could be connected to new physics with solar neutrinos or solar axions.”

The post Researchers grapple with XENON1T excess appeared first on CERN Courier.

]]>
News The excess could be due to a difficult-to-constrain tritium background, solar axions or solar neutrinos with a Majorana nature, says the collaboration. https://cerncourier.com/wp-content/uploads/2020/07/XENON1T-1000.jpg
100 TeV photons test Lorentz invariance https://cerncourier.com/a/100-tev-photons-test-lorentz-invariance/ Tue, 02 Jun 2020 10:05:57 +0000 https://preview-courier.web.cern.ch/?p=87510 HAWC and other high-altitude observatories are pushing the energy of gamma-ray observations into new territory to test fundamental symmetries.

The post 100 TeV photons test Lorentz invariance appeared first on CERN Courier.

]]>
Over the past decades the photon emission from astronomical objects has been measured across 20 orders of magnitude in energy, from radio waves up to TeV gamma rays. This has not only led to many astronomical discoveries but, thanks to the extreme distances and energies involved, has also allowed researchers to test some of the fundamental tenets of physics. For example, the 2017 joint measurement of gravitational waves and gamma rays from a binary neutron-star merger constrained the speed of gravity to match the speed of light to roughly one part in 10^15. Now, the High-Altitude Water Cherenkov (HAWC) collaboration has pushed the energy of gamma-ray observations into new territory, placing constraints on Lorentz-invariance violation (LIV) that are up to two orders of magnitude tighter than before.
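
The size of that constraint follows from simple arithmetic: the gamma rays from the merger arrived about 1.7 s after the gravitational waves, after both had travelled roughly 40 Mpc. Attributing the whole delay to a speed difference gives a bound of a few parts in 10^16; allowing for an intrinsic emission delay weakens it to around one part in 10^15:

import scipy.constants as sc

dt = 1.7                # s, observed gamma-ray delay after the GW signal
D = 40e6 * sc.parsec    # ~40 Mpc travel distance, in metres
# Fractional speed difference if the delay were purely due to propagation
print(f"|v_gw - c|/c < {dt / (D / sc.c):.0e}")  # ~4e-16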

Models incorporating LIV allow modifications to the standard energy–momentum relation dictated by special relativity, predicting phenomenological effects such as photon decay and photon splitting. Even if the probability for a photon to decay through such effects is small, the large distances involved in astrophysical measurements in principle allow experiments to detect it. The most striking implication would be a cutoff in the energy spectrum, above which photons would decay while travelling towards Earth. Simply detecting gamma-ray photons above the expected cutoff therefore places strong constraints on LIV.

HAWC

Increasing the energy limit of the photons with which we observe the universe is, however, challenging. Since the flux of a typical source, such as a neutron star, decreases rapidly (by approximately two orders of magnitude for each order of magnitude increase in energy), ever larger detectors are needed to probe higher energies. Photons with energies of hundreds of GeV can still be detected directly using satellite-based detectors equipped with tracking and calorimetry. However, such instruments, like the US–European Fermi-LAT detector and the Chinese–European DAMPE detector, require a mass of several tonnes, making them expensive and complex to launch. At even higher energies, ground-based detectors, which detect gamma rays through the showers they induce in Earth’s atmosphere, are preferred. While they can be scaled up in size more easily than space-based detectors, the indirect detection and the large background from cosmic rays make such measurements difficult.

It is likely that LIV will be further constrained in the near future, as a range of new high-energy gamma-ray detectors are developed

Recently, significant improvements have been made in ground-based detector technology and data analysis. The Japanese–Chinese Tibet air-shower gamma-ray experiment ASγ, a detector array built at an altitude of 4 km in Yangbajing, added underground water-Cherenkov muon detectors to allow hadronic air showers to be differentiated from photon-induced ones via their different muon content. By additionally improving the data-analysis techniques to remove the isotropic all-sky background more accurately, in 2019 the ASγ team managed to observe a source – the Crab Nebula – at energies above 100 TeV for the first time. This ground-breaking measurement was soon followed by measurements of nine different sources above 56 TeV by the HAWC observatory, located at 4 km altitude in the mountains near Puebla, Mexico.

These new measurements of astrophysical sources, which are likely all pulsar wind nebulae, could not only help answer the question of where the highest-energy (PeV and above) cosmic rays are produced, but also allow new constraints to be placed on LIV. The spectra of the four sources studied by the collaboration show no sign of a cutoff, allowing the HAWC team to constrain the LIV energy scale to lie above 2.2×10^31 eV – an improvement of one to two orders of magnitude over previous limits.
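
The quoted scale can be reproduced with the standard photon-decay threshold: for a linear (n = 1) superluminal modification of the photon dispersion relation, the decay γ → e+e− switches on above a threshold, so observing a stable photon of energy E_γ requires E_LIV ≳ E_γ³/(4m_e²). A sketch, assuming a highest-energy photon of about 285 TeV (an assumed value, chosen to be consistent with the published limit):

m_e = 0.511e6       # electron mass, eV
E_gamma = 285e12    # assumed highest observed photon energy, eV (~285 TeV)

# Linear superluminal LIV: gamma -> e+ e- becomes allowed once the
# effective photon "mass squared" E^3/E_LIV exceeds (2 m_e)^2, so stable
# photons at E_gamma push the LIV scale above E_gamma^3 / (4 m_e^2):
print(f"E_LIV > {E_gamma**3 / (4 * m_e**2):.1e} eV")  # ~2.2e31 eV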

LIV is likely to be constrained further in the near future as a range of new high-energy gamma-ray detectors are developed. Perhaps the most powerful of these is the Large High Altitude Air Shower Observatory (LHAASO), located in the mountains of Sichuan province in China. Construction of the detector array is ongoing, and its first stage commenced data taking in 2018. Once finished, LHAASO will be close to two orders of magnitude more sensitive than HAWC at 100 TeV, capable of pushing photon measurements into the PeV range. Additionally, the limit of direct-detection measurements will be pushed beyond that of Fermi-LAT and DAMPE by the Chinese–European High Energy cosmic-Radiation Detector (HERD), a 1.8-tonne calorimeter surrounded by a tracker, scheduled for launch in 2025 and foreseen to detect photons directly up to 100 TeV.

The post 100 TeV photons test Lorentz invariance appeared first on CERN Courier.

]]>
News HAWC and other high-altitude observatories are pushing the energy of gamma-ray observations into new territory to test fundamental symmetries. https://cerncourier.com/wp-content/uploads/2020/06/HAWC-es8_medium_1.png
Gamma-ray polarisation sharpens multi-messenger astrophysics https://cerncourier.com/a/gamma-ray-polarisation-sharpens-multi-messenger-astrophysics/ Thu, 30 Apr 2020 15:34:08 +0000 https://preview-courier.web.cern.ch/?p=87277 The 2020s should see the start of a new type of astrophysics, reports Merlin Kole.

The post Gamma-ray polarisation sharpens multi-messenger astrophysics appeared first on CERN Courier.

]]>
POLAR polarisation plot

Recent years have seen the dawn of multi-messenger astrophysics. Perhaps the most significant contributor to this new era was the 2017 detection of gravitational waves (GWs) in coincidence with a bright electromagnetic phenomenon, a gamma-ray burst (GRB). GRBs consist of intense bursts of gamma rays which, for periods ranging from hundreds of milliseconds to hundreds of seconds, outshine any other source in the universe. Although the first such event was spotted back in 1967, and typically one GRB is detected every day, the underlying astrophysical processes responsible remain a mystery. The joint GW–electromagnetic detection answered several questions about the nature of GRBs, but many others remain.

Recently, researchers made the first attempts to add gamma-ray polarisation into the mix. If successful, this could enable the next step forward within the multi-messenger field.

So far, three photon parameters – arrival time, direction and energy – have been measured extensively for a range of astrophysical objects. Yet, despite the wealth of information it contains, photon polarisation has been neglected. X-ray and gamma-ray fluxes emitted by charged particles in strong magnetic fields are highly polarised, while those emitted by thermal processes are typically unpolarised. Polarisation measurements therefore allow researchers to identify the dominant emission mechanism of a particular source. GRBs are a prime example, since a consensus on where their gamma rays actually originate is still missing.

Difficult measurements

The reason polarisation has not been measured in great detail is the difficulty of the measurement itself. To determine the polarisation of an incoming photon, the secondary products it creates as it interacts in a detector must be studied. For gamma rays, for example, the azimuthal angle at which the photon scatters in the detector is related to its polarisation vector, so in addition to detecting the photon, researchers need to track its subsequent path. Such measurements must furthermore be performed above the atmosphere on satellites, which significantly constrains the detector design.
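
Concretely, Compton polarimeters histogram this azimuthal scattering angle: photons scatter preferentially perpendicular to their polarisation vector, imprinting a 180°-periodic modulation whose amplitude tracks the polarisation fraction. A minimal sketch of the standard modulation-curve model (all parameters illustrative):

import numpy as np

def modulation_curve(phi, N0, mu, phi0):
    """Azimuthal distribution of Compton-scattered photons: flat for an
    unpolarised beam, cos(2 phi)-modulated otherwise. The polarisation
    fraction is mu divided by the instrument's response mu_100 to a
    fully polarised beam (obtained from calibration or simulation)."""
    return N0 * (1.0 + mu * np.cos(2.0 * (phi - phi0)))

phi_bins = np.radians(np.arange(0, 360, 30))
print(modulation_curve(phi_bins, N0=1000.0, mu=0.3, phi0=np.radians(40)))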

The 2020s should see the start of a new type of astrophysics

Recent progress has shown that, although challenging, polarisation measurements are possible. The most recent example comes from the POLAR mission, a Swiss, Polish and Chinese experiment fully dedicated to measuring the polarisation of GRBs, which was launched into space in 2016 on China’s Tiangong-2 space laboratory and took data from September 2016 to April 2017. The POLAR team recently published its first results. Though these indicate that the emission from GRBs is likely unpolarised, the story appears more complex: the polarisation is low when the full GRB emission is analysed, but within short time intervals there is a strong hint of high polarisation, with a rapidly changing polarisation angle during the GRB event. This rapid evolution of the angle, which is yet to be explained by the theoretical community, smears out the polarisation when the full GRB is analysed. Fully understanding the evolution, which could hint at an evolving magnetic field, will require finer time-binning and more precise measurements, and hence more statistics.

POLAR-2

Two future instruments capable of providing such detailed measurements are currently being developed. The first, POLAR-2, is the follow-up to the POLAR mission and was recently recommended to become a CERN-recognised experiment. POLAR-2 will be an order of magnitude more sensitive than its predecessor (due to larger statistics and lower systematics) and should therefore be able to answer most of the questions raised by the recent POLAR results. The experiment will also play an important role in detecting extremely weak GRBs, such as those expected from GW events. POLAR-2, which will be launched in 2024 to the under-construction China Space Station, could well be followed by a similar but slightly smaller instrument called LEAP, which recently progressed to the final stage of a NASA selection process. If successful, LEAP would join POLAR-2 in orbit in 2025, mounted on the International Space Station.

Apart from dedicated GRB polarimeters, progress is also being made with other upcoming instruments, such as NASA’s Imaging X-ray Polarimetry Explorer and the China–ESA enhanced X-ray Timing and Polarimetry mission, which aim to perform the first detailed polarisation measurements of a range of astrophysical objects in the X-ray band. With the first POLAR measurements published and more expected soon, the 2020s should see the start of a new type of astrophysics, adding yet another parameter to multi-messenger exploration.

The post Gamma-ray polarisation sharpens multi-messenger astrophysics appeared first on CERN Courier.

]]>
News The 2020s should see the start of a new type of astrophysics, reports Merlin Kole. https://cerncourier.com/wp-content/uploads/2020/04/PLOAR-preview.jpg
AMS detector given a new lease of life https://cerncourier.com/a/ams-detector-given-a-new-lease-of-life/ Wed, 25 Mar 2020 11:17:49 +0000 https://preview-courier.web.cern.ch/?p=86531 With its new cooling system in place, the Alpha Magnetic Spectrometer continues its quest to understand the origin of cosmic rays.

The post AMS detector given a new lease of life appeared first on CERN Courier.

]]>
Checking the installation of the Upgraded Tracker Thermal Pump System for AMS

On 25 January, European Space Agency astronaut Luca Parmitano stepped outside a half-million-kilogramme structure travelling at tens of thousands of kilometres per hour, hundreds of kilometres above Earth, and, tethered by a thin cord, ventured into the vacuum of space to check for a leak.

It was the fourth such extravehicular activity (EVA) he’d been on in two months. All things considered, the task ahead was relatively straightforward: to make sure that a newly installed cooling system for the Alpha Magnetic Spectrometer (AMS), the cosmic-ray detector that has been attached to the International Space Station (ISS) since 2011, had been properly plumbed in.

Heart-stopping spacewalks

The first EVA on 15 November saw Parmitano and fellow astronaut, NASA’s Drew Morgan, remove and jettison the AMS debris shield, which is currently still spiralling its way to Earth, to allow access to the experiment’s cooling system. The CO2 pumps, needed to keep the 200,000-channel tracker electronics at a temperature of 10 ± 3 °C, had started to fail in 2014 – which was no surprise, as AMS was initially only supposed to operate for three years. During the second EVA on 22 November, the astronauts cut through the cooling system’s eight stainless-steel lines to isolate and prepare it for removal, and a critical EVA3 on 2 December saw Morgan and Parmitano successfully connect the new pump system, which had been transported to the ISS by an Antares rocket the previous month. Then came a long wait until January to find out if the repair had been successful.

“EVA4 was the heart-stopping EVA because that’s where we did the leak tests on all those tubes,” says Ken Bollweg, NASA’s AMS project manager. The success of the previous EVAs suggested that the connections were going to be fine. But Parmitano arrived at the first tube, attached one of 29 bespoke tools developed specially for the AMS repair, and saw that the instrument had issued a warning signal. “I see red,” he reported to anxious teams at NASA’s Johnson Space Center’s Mission Control Center and the AMS Payload Operations Control Centre (POCC) at CERN’s Prévessin site, from where spokesperson Sam Ting and his colleagues were monitoring proceedings closely. Though not huge, the leak was serious enough not to guarantee that the system would work, jeopardising four years of preparation involving hundreds of astronauts, engineers and scientists. Following procedures put in place to deal with such a situation, Parmitano tightened the connection and waited for about an hour before checking the tube again. A leak was still present. Then, after re-tightening the troublesome connection again, while the team was preparing a risky “jumper” manoeuvre to bypass the leak and make a new connection, he checked a third time: “No red!” Happy faces lit up the POCC.

NASA has learned a lot of new things from this

Ken Bollweg

AMS was never designed to be serviceable, and the repair, unprecedented in complexity for a space intervention, required the avoidance of sharp edges and other hazards in order to bring it back to full operational capacity. The chances of something going wrong were high, says Bollweg. “NASA has learned a lot of new things from this. We really pushed the envelope. It showed that we have the capabilities to do even more than we have done in the past.” EVA4 lasted almost six hours. Five hours and two minutes into it, Parmitano, who returned safely to Earth on 6 February, broke the European record for the most time spent spacewalking (33 hours and nine minutes). It’s not a job for the fainthearted. During a spacewalk in 2013, while Parmitano was wedged into a confined space outside the ISS, a malfunction in his spacesuit caused his helmet to start filling with water and he almost drowned.

“Building and operating AMS in space has been an incredible journey through engineering and physics, but today it is thanks to the NASA group that in AMS we can continue this journey and this is amazing. An enormous thanks to the EVA crew,” said AMS integration engineer Corrado Gargiulo of CERN. The day after EVA4, the POCC team spent about 10 hours refilling the new AMS cooling system with 1.3 kg of CO2 and started to power up the detector. At noon on 27 January, all the detector’s subsystems were sending data back, marking a new chapter for AMS that will see it operate for the lifetime of the ISS.

Into the unknown

The 7.5 tonne AMS apparatus has so far recorded almost 150 billion charged cosmic rays with energies up to the multi-TeV range, and its percent-level results show clear and unexpected behaviour of cosmic-ray events at high energies. A further 10 years of operation will allow AMS to make conclusive statements on the origin of these unexpected observations, says Ting. “NASA is to be congratulated on seeing this difficult project through over a period of many years. AMS has observed unique features in cosmic-ray spectra that defy traditional explanations. We’re entering into a region where nobody has been before.”

AMS has observed unique features in cosmic-ray spectra that defy traditional explanations

Sam Ting

The first major result from AMS came in 2013, when measurements of the cosmic positron fraction (the ratio of the positron flux to the flux of electrons and positrons) up to an energy of 350 GeV showed that the spectrum agrees well with dark-matter models. The following year, AMS published the positron and electron fluxes, which showed that neither can be fitted with the single-power-law assumption underpinning the traditional understanding of cosmic rays. The collaboration has continued to find previously unobserved features in the measured fluxes and flux ratio of electrons and positrons, publishing the results in several high-profile papers during the past couple of years.

Figure 1. The positron spectrum measured by AMS (yellow), showing that low-energy positrons mostly come from cosmic ray collisions (shaded area). Unexpectedly, there is a continuous excess starting at 25 GeV. The spectrum reaches a maximum at around 284 GeV followed by a sharp drop-off with a finite energy cutoff established at 99.99% confidence.
Figure 2. Comparison between 0.6 million antiprotons (blue, right axis) with 1.9 million positrons (yellow, left axis) using the latest AMS data.

Last year, AMS reaffirmed the complex energy dependence exhibited by the positron flux: a significant excess starting from 25 GeV, a sharp drop-off above 284 GeV and a finite energy cutoff at 810 GeV (figure 1). “In the entire energy range the positron flux is well described by the sum of a term associated with the positrons produced in the collision of cosmic rays, which dominates at low energies, and a new source term of positrons, which dominates at high energies,” says Ting. “These experimental data on cosmic-ray positrons show that, at high energies, they predominantly originate either from dark-matter annihilation or from other astrophysical sources.” Although dark-matter models predict such a cutoff, the AMS data cannot yet rule out astrophysical sources, in particular pulsars. Further intrigue comes from the latest, to-be-published AMS result on antiprotons, which, although rare at high energies, exhibit similar functional behaviour to the positron spectrum (figure 2). “This indicates that the excess of positrons may not come from pulsars, due to the similarity of the two spectra and the high mass of antiprotons,” says Ting.
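
Schematically, the two-component description Ting refers to can be written as (a sketch of the functional form along the lines of the AMS publications; the symbols here are placeholders, not the fitted parameter values):

\Phi_{e^+}(E) \;=\; \frac{E^2}{\hat{E}^2}\left[\,C_{\mathrm d}\left(\frac{\hat{E}}{E_1}\right)^{\gamma_{\mathrm d}} \;+\; C_{\mathrm s}\left(\frac{\hat{E}}{E_2}\right)^{\gamma_{\mathrm s}} e^{-\hat{E}/E_{\mathrm s}}\,\right], \qquad \hat{E} = E + \varphi_{e^+},

where \varphi_{e^+} accounts for solar modulation, the first (diffuse) term describes positrons from cosmic-ray collisions and dominates at low energies, and the second (source) term carries the measured cutoff energy E_s of about 810 GeV.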

Thanks to the successful installation of the new AMS cooling system, the expected positron spectrum by 2028, in particular the high-energy data points, should enable an accurate comparison with dark-matter models (figure 3). High-energy (>TeV) events are also expected to provide insights into the origins of cosmic electrons, the latest results on which show that the electron flux exhibits a significant excess starting from 42 GeV.

Figure 3. Comparison between the projected positron spectrum (light blue) and the prediction from a dark-matter model (Phys. Rev. D 88 076013).
Figure 4. The electron spectrum (light blue points) fitted with the sum of two power laws (green curve) in the energy range 0.5–1400 GeV. The two power-law components a and b are represented by the grey and blue areas, respectively. The minute contribution of electrons from cosmic-ray collisions is also shown (green area).

Unlike the positron flux, which has an exponential energy cutoff at 810 GeV, the electron flux shows no cutoff (at least not below 1.9 TeV). Moreover, in the entire energy range the electron flux is well described by the sum of two power-law components (figure 4), providing “clear evidence”, says Ting, that most high-energy electrons originate from different sources than high-energy positrons.

Novelties in nuclei

Unexpected results continue to appear in data from cosmic nuclei, which make up the bulk of cosmic rays travelling through space. Helium, carbon and oxygen nuclei are thought to be mainly produced and accelerated in astrophysical sources and are known as primary cosmic rays, while lithium, beryllium and boron nuclei are produced by collisions of heavier nuclei with the interstellar medium and are known as secondary cosmic rays.

New properties of primary cosmic rays – helium, carbon and oxygen – have been observed in the rigidity range 2 GV to 3 TV; at high energies these three spectra have identical rigidity dependence, all deviating from a single power law above 200 GV. Similar oddities have appeared in measurements of secondary cosmic rays – lithium, beryllium and boron – in the range 1.9 GV to 3.3 TV (figure 5). The lithium and boron fluxes have an identical rigidity dependence above 7 GV, all three fluxes have an identical rigidity dependence above 30 GV and, unexpectedly, above 30 GV the Li/Be flux ratio is approximately equal to two.

Figure 5. The rigidity dependences of the spectra of primary cosmic rays (helium, carbon and oxygen) compared to the spectra of secondary cosmic rays (lithium, beryllium and boron), all scaled to the helium flux at 30 GV.

The ratio of secondary fluxes to primary fluxes is particularly interesting because it directly measures the amount and properties of the interstellar medium. Before AMS, only the B/C ratio was measured, and it was assumed to be proportional to R^Δ, with Δ a constant for R > 60 GV. The latest AMS results on secondary (Li, Be, B) to primary (C, O) flux ratios show that Δ is not a constant, but changes by more than 5σ between the two rigidity ranges, 60 < R < 200 GV and 200 < R < 3300 GV. As with the electron and positron fluxes, none of the current AMS results can be explained by existing theoretical models. By 2028, says Ting, AMS will extend its measurements of cosmic nuclei up to Z = 30 (zinc) with sufficient statistics to get to the bottom of these and other mysteries. “We have measured many particles, electrons, positrons, antiprotons and many nuclei, and they all have distributions and none agree with current theoretical models. So we will begin to create a new field.”
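
Written out, the AMS finding replaces the single constant Δ with a broken power law; Δ_1 and Δ_2 below simply stand for the fitted indices in the two rigidity ranges, with no numerical values implied:

\[
\frac{\Phi_{\mathrm{Li,Be,B}}}{\Phi_{\mathrm{C,O}}} \propto R^{\Delta}, \qquad \Delta = \begin{cases} \Delta_1, & 60 < R < 200~\mathrm{GV} \\ \Delta_2, & 200 < R < 3300~\mathrm{GV} \end{cases} \qquad \text{with}~\Delta_2 \neq \Delta_1~\text{at more than}~5\sigma.
\]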

The post AMS detector given a new lease of life appeared first on CERN Courier.

]]>
Feature With its new cooling system in place, the Alpha Magnetic Spectrometer continues its quest to understand the origin of cosmic rays. https://cerncourier.com/wp-content/uploads/2020/03/CCMarApr_News-AMS_feature.jpg
Success in scientific management https://cerncourier.com/a/success-in-scientific-management/ Fri, 13 Mar 2020 10:59:18 +0000 https://preview-courier.web.cern.ch/?p=86542 Barry Barish speaks to the Courier about his role in turning LIGO into a Nobel Prize-winning machine.

The post Success in scientific management appeared first on CERN Courier.

]]>
Barry Barish

Your co-Nobelists in the discovery of gravitational waves, Kip Thorne and Rainer Weiss, have both recognised your special skills in the management of the LIGO collaboration. When you landed in LIGO in 1994, what was the first thing you changed?

When I arrived in LIGO, there was a lot of dysfunction and people were going after each other. So, the first difficult problem was to make LIGO smaller, not bigger, by moving people out who weren’t going to be able to contribute constructively in the longer term. Then, I started to address what I felt were the technical and management weaknesses. Together with my colleague Gary Sanders, who had worked with me on one of the would-be detectors for the Superconducting Super Collider (SSC) before the project was cancelled, I started looking for the kind of people that were missing in technical areas.

For example, LIGO relies on very advanced lasers but I was convinced that the laser that was being planned for, a gas laser, was not the best choice because lasers were, and still are, a very fast-moving technology and solid-state lasers were more forward-looking. Coming from particle physics, I’m used to not seeing a beam with my own eyes. So I wasn’t disturbed that the most promising lasers at that time emitted light in the infrared, instead of green, and that technology had advanced to where they could be built in industry. People who worked with interferometers were used to “little optics” on lab benches where the lasers were all green and the alignment of mirrors etc was straightforward. I asked three of the most advanced groups in the world who worked on lasers of the type we needed (Hannover in Germany, Adelaide in Australia and Stanford in California) if they’d like to work together with us, and we brought these experts into LIGO to form the core of what we still have today as our laser group.

Project management for forefront science experiments is very different, and it is hard for people to do it well

This story is mirrored in many of the different technical areas in LIGO. Physics expertise and expertise in the use of interferometer techniques were in good supply in LIGO, so the main challenge was to find expertise to develop the difficult forefront technologies that we were going to depend on to reach our ambitious sensitivity goals. We also needed to strengthen the engineering and project-management areas, but that just required recruiting very good people. Later, the collaboration grew a lot, but mostly on the data-analysis side, which today makes up much of our collaboration.

According to Gary Sanders of SLAC, “efficient management of large science facilities requires experience and skills not usually found in the repertoire of research scientists”. Are you a rare exception?

Gary Sanders was a student of Sam Ting, then he went to Los Alamos where he got a lot of good experience doing project work. For myself, I learned what was needed kind of organically as my own research grew into larger and larger projects. Maybe my personality matched the problem, but I also studied the subject. I know how engineers go about building a bridge, for example, and I could pass an exam in project management. But, project management for forefront science experiments is very different, and it is hard for people to do it well. If you build a bridge, you have a boss, and he or she has three or four people who do tasks under his/her supervision, so generally the way a large project is structured is a big hierarchical organisation. Doing a physics research project is almost the opposite. For large engineering projects, once you’ve built the bridge, it’s a bridge, and you don’t change it. When you build a physics experiment, it usually doesn’t do what you want it to do. You begin with one plan and then you decide to change to another, or even while you’re building it you develop better approaches and technologies that will improve the instruments. To do research in physics, experience tells us that we need a flat, rather than vertical, organisational style. So, you can’t build a complicated, expensive ever-evolving research project using just what’s taught in the project-management books, and you can’t do what’s needed to succeed in cost, schedule, performance, etc, in the style found in a typical physics-department research group. You have to employ some sort of hybrid. Whether it’s LIGO or an LHC experiment, you need to have enough discipline to make sure things are done on time, yet you also need the flexibility and encouragement to change things for the better. In LIGO, we judiciously adapted various project-management formalities, and used them by not interfering any more than necessary with what we do in a research environment. Then, the only problem – but admittedly a big one – is to get the researchers, who don’t like any structure, to buy into this approach.

How did your SSC experience help?

It helped with the political part, not the technical part, because I came to realise how difficult the politics and things outside of a project are. I think almost anything I worked on before has been very hard, because of what it was or because of some politics in doing it, but I didn’t have enormous problems that were totally outside my control, as we had in the SSC.

How did you convince the US government to keep funding LIGO, which has been described as the most costly project in the history of the NSF?

It’s a miracle, because not only was LIGO costly, but we didn’t have much to show in terms of science for more than 20 years. We were funded in 1994, and we made the first detection more than 20 years later. I think the miracle wasn’t me, rather we were in a unique situation in the US. Our funding agency, the NSF, has a different mission than any other agency I know about. In the US, physical sciences are funded by three big agencies. One is the DOE, which has a division that does research in various areas with national labs that have their own structures and missions. The other big agency that does physical science is NASA, and they have the challenge of safety in space. The NSF gets less money than the other two agencies, but it has a mission that I would characterise by one word: science. LIGO has so far seen five different NSF directors, but all of them were prominent scientists. Having the director of the funding agency be someone who understood the potential importance of gravitational waves, maybe not in detail, helped make NSF decide both to take such a big risk on LIGO and then continue supporting it until it succeeded. The NSF leadership understands that risk-taking is integral to making big advancements in science.

What was your role in LIGO apart from management?

I concentrated more on the technical side in LIGO than on data analysis. In LIGO, the analysis challenges are more theoretical than they are in particle physics. What we have to do is compare general relativity with what happens in a real physical phenomenon that produces gravitational waves. That involves more of a mixed problem of developing numerical relativity, as well as sophisticated data-analysis pipelines. Another challenge is the huge amount of data because, unlike at CERN, there are no triggers. We just take data all the time, so sorting through it is the analysis problem. Nevertheless, I’ve always felt and still feel that the real challenge for LIGO is that we are limited by how sensitive we can make the detector, not by how well we can do the data analysis.

What are you doing now in LIGO?

Now that I can do anything I want, I am focusing on something I am interested in and that we don’t employ very much, which is artificial intelligence and machine learning (ML). In LIGO there are several problems that could adapt themselves very well to ML with recent advances. So we built a small group of people, mostly much younger than me, to do ML in LIGO. I recently started teaching at the University of California Riverside, and have started working with young faculty in the university’s computer-science department on adapting some techniques in ML to problems in physics. In LIGO, we have a problem in the data that we call “glitches”, which appear when something that happens in the apparatus or outside world appears in the data. We need to get rid of glitches, and we use a lot of human manpower to make the data clean. This is a problem that should adapt itself very well to a ML analysis.
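
As a rough illustration of the kind of spectrogram-based glitch classification Barish describes – emphatically not LIGO’s actual pipeline, whose real efforts (such as the Gravity Spy project) are far more sophisticated – a minimal sketch in Python might look as follows. Every name, shape and label here is hypothetical, and random tensors stand in for labelled spectrograms:

import torch
import torch.nn as nn

N_CLASSES = 3  # hypothetical glitch classes, e.g. "blip", "scattered light", "clean"

class GlitchNet(nn.Module):
    """Tiny CNN over time-frequency (Q-transform-like) images."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 64x64 input -> 16x16 feature maps after two 2x2 poolings
        self.head = nn.Linear(16 * 16 * 16, N_CLASSES)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Random tensors stand in for a labelled batch of 64x64 spectrograms.
x = torch.randn(32, 1, 64, 64)
y = torch.randint(0, N_CLASSES, (32,))

model = GlitchNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(5):  # a few steps, just to show the training loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    print(f"step {step}: loss {loss.item():.3f}")

In practice the training data would come from curated, human-labelled glitch catalogues rather than random noise, and the trained classifier would be used to flag and remove glitch-contaminated stretches of data.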

Now that gravitational waves have joined the era of multi-messenger astronomy, what’s the most exciting thing that can happen next?

For gravitational waves, knowing what discovery you are going to make is almost impossible because it is really a totally new probe of the universe. Nevertheless, there are some known sources that we should be able to see soon, and maybe even will in the present run. So far we’ve seen two sources of gravitational waves: a collision of two black holes and a collision of two neutron stars, but we haven’t yet seen a black hole with a neutron star going around it. They’re particularly interesting scientifically because they contain information about nuclear physics of very compact objects, and because the two objects are very different in mass and that’s very difficult to calculate using numerical relativity. So it’s not just checking off another source that we found, but new areas of gravitational-wave science. Another attractive possibility is to detect a spinning neutron star, a pulsar. This is a continuous signal that is another interesting source which we hope to detect in a short time. Actually, I’m more interested in seeing unanticipated sources where we have no idea what we’re going to see, perhaps phenomena that uniquely happen in gravity alone.

The NSF leadership understands that risk-taking is integral to making big advancements

Will we ever see gravitons?

That’s a really good question because gravitons don’t exist in Einstein’s equations. But that’s not necessarily nature, that’s Einstein’s equations! The biggest problem we have in physics is that we have two fantastic theories. One describes almost anything you can imagine on a large scale, and that’s Einstein’s equations, and the other, which describes almost too well everything you find here at CERN, is the Standard Model, which is based on quantum field theory. Maybe black holes have the feature that they satisfy Einstein’s equations and at the same time conserve quantum numbers and all the things that happen in quantum physics. What we are missing is the experimental clue, whether it’s gravitons or something else that needs to be explained by both these theories. Because theory alone has not been able to bring them together, I think we need experimental information.

Do particle accelerators still have a role in this?

We never know because we don’t know the future, but our best way of understanding what limits our present understanding has been traditional particle accelerators because we have the most control over the particles we’re studying. The unique feature of particle accelerators is that of being able to measure all the parameters of particles that we want. We’ve found the Higgs boson and that’s wonderful, but now we know that the neutrinos also have mass and the Higgs boson possibly doesn’t describe that. We have three families of particles, and a whole set of other very fundamental questions that we have no handle on at all, despite the fact that we have this nice “standard” model. So is it a good reason to go to higher energy or a different kind of accelerator? Absolutely, though it’s a practical question whether it’s doable and affordable.

What’s the current status of gravitational-wave observatories?

We will continue to improve the sensitivity of LIGO and Virgo in incremental steps over the next few years, and LIGO will add a detector in India to give better global coverage. KAGRA in Japan is also expected to come online. But we can already see that next-generation interferometers will be needed to pursue the science in the future. A good design study, called the Einstein Telescope, has been developed in Europe. In the US we are also looking at next-generation detectors and have different ideas, which is healthy at this point. We are not limited by nature, but by our ability to develop the technologies to make more sensitive interferometers. The next generation of detectors will enable us to reach large redshifts and study gravitational-wave cosmology. We all look forward to exploiting this new area of physics, and I am sure important discoveries will emerge.

The post Success in scientific management appeared first on CERN Courier.

]]>
Opinion Barry Barish speaks to the Courier about his role in turning LIGO into a Nobel Prize-winning machine. https://cerncourier.com/wp-content/uploads/2020/02/CCMarApr_Interview_Barish_feature.jpg
Cosmology and the quantum vacuum https://cerncourier.com/a/cosmology-and-the-quantum-vacuum/ Wed, 11 Mar 2020 11:05:07 +0000 https://preview-courier.web.cern.ch/?p=86754 The sixth conference in the series marked Spanish theorist Emilio Elizalde’s 70th birthday.

The post Cosmology and the quantum vacuum appeared first on CERN Courier.

]]>
The sixth Cosmology and the Quantum Vacuum conference attracted about 60 theoreticians to the Institute of Space Sciences in Barcelona from 5 to 7 March. This year the conference marked Spanish theorist Emilio Elizalde’s 70th birthday. He is a well-known specialist in mathematical physics, field theory and gravity, with over 300 publications and three monographs on the Casimir effect and zeta regularisation. He has co-authored remarkable works on viable theories of modified gravity which unify inflation with dark energy.

These meetings bring together researchers who study theoretical cosmology and various aspects of the quantum vacuum such as the Casimir effect. This quantum effect manifests itself as an attractive force between plates that are extremely close to each other. As it is related to the quantum vacuum, it is expected to be important in cosmology as well, giving a kind of effective induced cosmological constant. Manuel Asorey (Zaragoza), Mike Bordag (Leipzig) and Aram Saharian (Erevan) discussed various aspects of the Casimir effect for scalars and for gauge theories. Joseph Buchbinder gave a review of the one-loop effective action in supersymmetric gauge theories. Conformal quantum gravity and quantum electrodynamics in de Sitter space were presented by Enrique Alvarez (Madrid) and Drazen Glavan (Brussels), respectively.

Enrique Gaztanaga argued for two early inflationary periods

Even more attention was paid to theoretical cosmology. The evolution of the early and/or late universe in different theories of modified gravity was discussed by several delegates, with Enrique Gaztanaga (Barcelona) expressing an interesting point of view on the inflationary universe, arguing for two early inflationary periods.

Martiros Khurshudyan and I discussed modified-gravity cosmology with the unification of inflation and dark energy, and wormholes, building on work with Emilio Elizalde. Wormholes are usually associated with exotic matter; in alternative theories of gravity, however, they may instead arise from modifications to the gravitational equations of motion. Iver Brevik (Trondheim) gave an excellent introduction to viscosity in cosmology. Rather exotic wormholes were presented by Sergey Sushkov (Kazan), while black holes in modified gravity were discussed by Gamal Nashed (Cairo). A fluid approach to the dark-energy epoch and the addition of four-forms (antisymmetric tensor fields with four indices) to late-universe evolution were presented by Diego Saez (Valladolid) and Mariam Bouhmadi-Lopez (Bilbao), respectively. Novel aspects of non-standard quintessential inflation were presented by Jaime Haro (Barcelona).

Many interesting talks were given by young participants at this meeting. The exchange of ideas between cosmologists on the one side and quantum-field-theory specialists on the other will surely help in the further development of rigorous approaches to the construction of quantum gravity. It also opens the window onto a much better account of quantum effects in the history of the universe.

The post Cosmology and the quantum vacuum appeared first on CERN Courier.

]]>
Meeting report The sixth conference in the series marked Spanish theorist Emilio Elizalde’s 70th birthday. https://cerncourier.com/wp-content/uploads/2020/03/EkaterinaPozdeeva3.jpg
Scoping out the Einstein Telescope https://cerncourier.com/a/scoping-out-the-einstein-telescope/ Mon, 09 Mar 2020 21:17:51 +0000 https://preview-courier.web.cern.ch/?p=86613 Activities are gathering pace at two sites in Europe where the Einstein Telescope, a proposed next-generation gravitational-wave observatory, may be built.

The post Scoping out the Einstein Telescope appeared first on CERN Courier.

]]>
The layout of the ETpathfinder facility

In a former newspaper printing plant in the southern Dutch town of Maastricht, the future of gravitational-wave detection is taking shape. In a huge hall, known to locals as the “big black box”, construction of a facility called ETpathfinder has just got under way, with the first experiments due to start as soon as next year. ETpathfinder will be a testing ground for the new technologies needed to detect gravitational waves in frequency ranges that the present generation of detectors cannot cover. At the same time, plans are being developed for a full-scale gravitational-wave detector, the Einstein Telescope (ET), in the Dutch–Belgian–German border region. Related activities are taking place 1500 km south in the heart of Sardinia, Italy. In 2023, one of these two sites (shortlisted from a total of six possible European locations) will be chosen as the location of the proposed ET.

In 2015, the Laser Interferometer Gravitational-Wave Observatory (LIGO), which is based at two sites in the US, made the first direct detection of a gravitational wave. The Virgo observatory near Pisa in Italy came online soon afterwards, and the KAGRA observatory in Japan is about to become the third major gravitational-wave observatory in operation. All are L-shaped laser interferometers that detect relative differences in light paths between mirrors spaced far apart (4 km in LIGO; 3 km in Virgo and KAGRA) at the ends of two perpendicular vacuum tubes. A passing gravitational wave changes the relative path lengths by as little as one part in 10^21, which is detectable via the interference between the two light paths. Since 2015, dozens of gravitational waves have been detected from various sources, providing a new window onto the universe. One event has already been linked to astronomical observations in other channels, marking a major step forward in multi-messenger astronomy (CERN Courier December 2017 p17).
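
To put that strain in perspective, the differential arm-length change is simply ΔL = hL; taking LIGO’s 4 km arms as quoted above:

\[
\Delta L = h\,L \approx 10^{-21} \times 4\times10^{3}~\mathrm{m} = 4\times10^{-18}~\mathrm{m},
\]

several hundred times smaller than the diameter of a proton.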

Back in time

The ET would be at least 10 times more sensitive than Advanced LIGO and Advanced Virgo, extending its scope for detections and enabling physicists to look back much further in cosmological time. For this reason, the interferometer has to be built at least 200 m underground in a geologically stable area, its mirrors have to operate in cryogenic conditions to reduce thermal disturbance, and they have to be larger and heavier to allow for a larger and more powerful laser beam. The ET would be a triangular laser interferometer with sides of 10 km and four ultra-high vacuum tubes per tunnel. The triangle configuration is equivalent to three overlapping interferometers with two arms each, allowing sources in the sky to be pinpointed via triangulation from just one location instead of several as needed by existing observatories. First proposed more than a decade ago and estimated to cost close to €2 billion, the ET, if approved, is expected to start looking at the sky sometime in the 2030s.

The surface above the Sar-Grav laboratory in Sardinia

“In the next decade we will implement new technologies in Advanced Virgo and Advanced LIGO, which will enable about a factor-two increase in sensitivity, gaining in detection volume too, but we are reaching the limits of the infrastructure hosting the detectors, and it is clear that at a certain point these will strongly limit the progress you can make by installing new technologies,” explains Michele Punturo of INFN Perugia, who is co-chair of the international committee preparing the ET proposal. “The ET idea and its starting point is to have a new infrastructure capable of hosting further and further evolutions of the detectors for decades.”

Belgian, Dutch and German universities are investing heavily in the ETpathfinder project, which is also funded by European Union budgets for interregional development, and are considering a bid for the ET in the rolling green hills of the border region around Vaals between Maastricht (Netherlands) and Luik (Belgium). A geological study in September 2019 concluded that the area has a soft-soil top layer that provides very good environmental noise isolation for a detector built in granite-like layers 200 m below. Economic studies also show a net benefit, both regional and national, from the high-tech infrastructure the ET would need. But even if ET is not built there, ETpathfinder will still be essential to future gravitational-wave detection, stresses project leader Stefan Hild of Maastricht University. “This will become the testing ground for the disruptive technologies we will need in this field anyway,” he says.

ET in search of home

ETpathfinder is a research infrastructure, not a scale model for the future ET. Its short arm length means that it is never intended to detect gravitational waves itself. The L-shaped apparatus (“Triangulating for the future” image) has two arms about 20 m long, with two large steel suspension towers each containing large mirrors. The arms meet in a central fifth steel optical tower and one of the tubes extends behind the central tower, ending in a sixth tower. The whole facility will be housed in a new climate-controlled clean room inside the hall, and placed on a new low-vibration concrete floor. ETpathfinder is not a single interferometer but consists of two separate research facilities joined at one point for shared instrumentation and support systems. The two arms could be used to test different mirrors, suspensions, temperatures or laser frequencies independently. Those are the parameters Hild and his team are focusing on to further reduce noise in the interferometers and enhance their sensitivity.

Deep-cooling the mirrors is one way to beat noise, says Hild. But it also brings huge new challenges. One is that thermal conductivity of silica glass is not perfect at deep cryogenic temperatures, leading to deformations due to local laser heating. For that reason, pure silicon has to be used, but silicon is not transparent to the conventional 1064 nm laser light used for detecting gravitational waves and to align the optical systems in the detector. Instead, a whole new laser technology at 1550 nm will have to be developed and tested, including fibre-laser sources, beam control and manipulation, and specialised low-noise sensors. “All these key technologies and more need testing before they can be scaled up to the 10 km scales of the future ET,” says Hild. Massive mirrors in pure silicon of metre-sizes have never been built, he points out, nor have silicon wire suspensions for the extreme cold payloads of more than half a tonne. Optoelectronics and sensors at 1550 nm at the noise level required for gravitational-wave detectors are also non-standard.
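
The wavelength switch follows directly from silicon’s band gap. A quick check with E = hc/λ, taking the textbook room-temperature gap of about 1.12 eV (standard values, not quoted in the article):

\[
E_{1064\,\mathrm{nm}} \approx \frac{1240~\mathrm{eV\,nm}}{1064~\mathrm{nm}} \approx 1.17~\mathrm{eV} > E_g^{\mathrm{Si}}, \qquad E_{1550\,\mathrm{nm}} \approx \frac{1240~\mathrm{eV\,nm}}{1550~\mathrm{nm}} \approx 0.80~\mathrm{eV} < E_g^{\mathrm{Si}},
\]

so 1064 nm photons are absorbed by silicon optics while 1550 nm photons pass through.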

On paper, the new super-low noise detection technologies to be investigated by ETpathfinder will provide stunning new ways of looking at the universe with the ET. The sensitivity at low frequencies will enable researchers to actually hear the rumblings of space–time hours before spiralling black holes or neutron stars coalesce and merge. Instead of astronomers struggling to point their telescopes at the point in the sky indicated by millisecond chirps in LIGO and Virgo, they will be poised to catch the light from cosmic collisions many billions of light years away.

Archimedes weighs in on the quantum vacuum

Archimedes’ beam-balance apparatus

The Archimedes experiment, which will be situated under 200 m of rock at the Sar-Grav laboratory in the Sos Enattos mine in Sardinia, was conceived in 2002 to investigate the interaction between the gravitational field and vacuum fluctuations. Supported by a group of about 25 physicists from Italian institutes and the European Gravitational Observatory, it is also intended as a “bridge” between present- and next-generation interferometers. A separate project in the Netherlands, ETpathfinder, is performing a similar function (see main text).

Quantum mechanics predicts that the vacuum is a sea of virtual particles which contribute an energy density – although one that is tens of orders of magnitude larger than what is observed. Archimedes will attempt to shed light on the puzzle by clarifying whether virtual photons gravitate or not, essentially testing the equivalent of Archimedes’ principle in vacuum. “If the virtual photons do gravitate then they must follow the gravitational field around the Earth,” explains principal investigator Enrico Calloni of the University of Naples Federico II. “If we imagine removing part of them from a certain volume, creating a bubble, there will be a lack of weight (and pressure differences) in that volume, and the bubble will sense a force directed upwards, similar to the Archimedes force in a fluid. Otherwise, if they do not gravitate, the bubble will not experience any variation in the force even being immersed in the gravitational field.”

The experiment (pictured) will use a Casimir cavity comprising two metallic plates placed a short distance apart so that virtual photons that have too large a wavelength cannot survive and are expelled, enabling Archimedes to measure a variation of the “weight” of the quantum vacuum. Since the force is so tiny, the measurement must be modulated and performed at a frequency where noise is low, says Calloni. This will be achieved by modulating the vacuum energy contained in the cavity using plates made from a high-temperature superconductor, which exhibits transitions from a semiconducting to superconducting state and in doing so alters the reflectivity of the plates. The first prototype is ready and in March the experiment is scheduled to begin six years of data-taking. “Archimedes is a sort of spin-off of Virgo, in the sense that it uses many of the technologies learned with Virgo: low frequency, sensors. And it has a lot of requirements in common with third-generation interferometers like ET: cryogenics and low seismic noise, first and foremost,” explains Calloni. “Being able to rely on an existing lab with the right infrastructure is a very strong asset for the choice of a site for ET.”
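
For orientation, the idealised zero-temperature attraction between two perfectly conducting parallel plates a distance a apart is the textbook Casimir result – a benchmark only, not the experiment’s actual superconducting-cavity configuration:

\[
\frac{F}{A} = -\frac{\pi^{2}\hbar c}{240\,a^{4}} \approx -1.3~\mathrm{mPa} \quad \text{at}~a = 1~\mu\mathrm{m}.
\]

The steep a^{-4} scaling of this already tiny pressure illustrates why the measurement must be modulated and performed at a frequency where noise is low.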

Sardinian adventure

The Sos Enattos mine is situated in the wild and mountainous heart of Sardinia, an hour’s drive from the Mediterranean coast. More than 2000 years ago, the Romans (who, having had a hard time conquering the land, christened the region “Barbaria”) excavated around 50 km of underground tunnels to extract lead for their aqueduct pipes. Until it ceased activity in 1996, the mine was for decades the only alternative to livestock-rearing in this area. Today, the locals are hoping that Sos Enattos will be chosen as the site to host the ET. Since 2010, several underground measurement campaigns have been carried out to characterise the site in terms of environmental noise. The regional government of Sardinia is supporting the development of the “Sar-Grav” underground laboratory and its infrastructures with approximately €3.5 million, while the Italian government is supporting the upgrade of Advanced Virgo and the characterisation of the Sos Enattos site with about €17 million, as part of a strategy to make Sardinia a possible site for the ET.

Sar-Grav’s control room was completed late last year, and its first experiment – Archimedes – will soon begin (see “Archimedes weighs in on the quantum vacuum” panel), with others expected to follow. Archimedes will measure the effect of quantum interactions with gravity via the Casimir effect and, at the same time, provide a testbed to verify the technologies needed by a third-generation gravitational-wave interferometer such as the ET. “Archimedes has the same requirements as an underground interferometer: extreme silence, extreme cooling with liquid nitrogen, and the ensuing safety requirements,” explains Domenico D’Urso, a physicist from the University of Sassari and INFN.

Follow the noise

Sardinia is the oldest land in Italy and the only part of the country without significant seismic risk. The island also has a very low population density and thus low human activity. The Sos Enattos mine has very low seismic noise and highly resistant granitic rock, which was used until the 1980s to build the skyscrapers of Manhattan. Walking along the mine’s underground tunnels – past the Archimedes cavern, amidst veins of schist, quartz, gypsum and granite, ancient mining machines and giant portraits of miners bearing witness to a glorious past – an array of instruments can be seen measuring seismic noise; some of these are so sensitive that they can record the sound of waves washing against the shores of the Tyrrhenian sea. “We are talking about really small sensitivities,” continues Domenico. “An interferometer needs to be able to perform measurements at the level of 10^−21, otherwise you cannot detect a gravitational wave. You have to know exactly what your system is doing, follow the noise and learn how to remove it.”

With the Einstein Telescope, we have 50 years of history ahead

The open European ET collaboration will spend the next two years characterising both the Sardinian and Dutch sites and then choosing which best matches the required parameters. In the current schedule, a technical design report for the ET would be completed in 2025 and, if approved, construction would take place from 2026 with first data-taking during the 2030s. “As of then, wherever it is built, ET will be our facility for decades, because its noise will be so low that any new technology that at present we cannot even imagine could be implemented and not be limited,” says Punturo, emphasising the scientific step-change. Current detectors can see black-hole mergers occurring at a redshift of around one, when the universe was six billion years old, Punturo explains, and at their final sensitivity they will reach a redshift of around two, corresponding to three billion years after the Big Bang. “But we want to observe the universe in its dark age, before stars existed. To do so, we need to increase the sensitivity to reach redshifts ten times higher and more,” he says. “With ET, we have 50 years of history ahead. It will study events from the entire universe. Gravitational waves will become a common tool just like conventional astronomy has been for the past four centuries.”

The post Scoping out the Einstein Telescope appeared first on CERN Courier.

]]>
Feature Activities are gathering pace at two sites in Europe where the Einstein Telescope, a proposed next-generation gravitational-wave observatory, may be built. https://cerncourier.com/wp-content/uploads/2020/02/CCMarApr20_ET_frontis.jpg
Renewed doubt cast on origin of fast radio bursts https://cerncourier.com/a/renewed-doubt-cast-on-origin-of-fast-radio-bursts/ Tue, 04 Feb 2020 09:58:39 +0000 https://preview-courier.web.cern.ch/?p=86475 Researchers trace second repeating fast radio burst to ordinary galaxy, suggesting that extreme environments are not required.

The post Renewed doubt cast on origin of fast radio bursts appeared first on CERN Courier.

]]>
The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is one of several radio telescopes scouring the sky for fast radio bursts.

Fast radio bursts (FRBs) are a relatively new mystery within astrophysics. Around 100 of these intense few-millisecond bursts of radio waves have been spotted since the first detection in 2007, and hardly anything is known about their origin. Thanks to close collaboration between different radio facilities and lessons learned from the study of previous astrophysical mysteries such as quasars, our understanding of these phenomena is evolving rapidly. During the past year or so, several FRBs have been localised in different galaxies, strongly suggesting that they are extra-galactic. A newly published FRB measurement, however, casts doubt on ideas about their underlying origin.

As recently as one year ago, only a few tens of FRBs had been measured. One of these FRBs was of particular interest because, unlike the single-event nature of all other known FRBs, it produced several radio signals within a short time scale – earning it the nickname “the repeater”. This could imply that while all other FRBs were the result of some type of cataclysmic event, the repeater was an altogether different source which just happened to produce a similar signal. Adding to the intrigue, measurements also showed it to be in a rather peculiar low-metallicity dwarf galaxy, close to the supermassive black hole within this host galaxy.

Much has happened in the field of FRBs since then, mainly thanks to data from new facilities such as ASKAP in Australia, CHIME in Canada (pictured above), and FAST in China. A number of new FRBs have been detected including nine more repeaters. Additionally, the new range of facilities has allowed for more detailed location measurements, including some for non-repeating FRBs which are more challenging due to their unpredictable occurrence. Since non-repeating bursts were found to be in more conventional galaxies than that of the repeater, a fully different origin of the two types of FRBs seemed the more likely explanation.

The new repeating fast radio burst (red circle) was traced to a star-forming region in the arm of a fairly ordinary spiral galaxy, unlike the previous localisation of the first repeater.

The latest localisation of an FRB, using data from CHIME and subsequent triangulation via eight radio telescopes of the European VLBI network, throws this theory into question. Writing in Nature, the international team reports that another repeater is not only the closest FRB found to date (at a distance of 500 million light years), but was also found in a star-forming region of a galaxy not that different from the Milky Way, and therefore very different from the host of the other localised repeating FRB. The precise localisation, which allowed astronomers to pinpoint the source within an area just seven light years across, indicates that extreme environments are not required for repeating FRBs. Additionally, some of the repeated signals from this source were so faint that they would have been undetectable had they originated at the larger distances of the known non-repeating FRBs. The latter finding casts doubt on the idea of two distinct classes of FRBs, as the non-repeaters could simply be too far away for their fainter bursts to reach us.
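
Taking the quoted numbers at face value, the localisation corresponds to an angular scale of roughly

\[
\theta \approx \frac{7~\mathrm{ly}}{5\times10^{8}~\mathrm{ly}} \approx 1.4\times10^{-8}~\mathrm{rad} \approx 3~\mathrm{milliarcseconds},
\]

comfortably within the few-milliarcsecond resolution that VLBI networks can deliver.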

Although these latest findings give new insights into the quickly evolving field of FRBs, it is clear that more measurements are required. The new radio facilities will soon make population studies possible. Such studies have previously answered many questions in the fields of gamma-ray bursts and quasars, which in their early stages showed large similarities with the state in which FRB studies find themselves now. They could reveal whether one of the two vastly differing environments in which the two repeaters were found is simply a peculiarity, or whether FRBs can be produced in a range of different environments. Additionally, studies of the burst intensities and the distances of their origin will be able to show whether repeaters and non-repeaters differ only in their distance.

The post Renewed doubt cast on origin of fast radio bursts appeared first on CERN Courier.

]]>
News Researchers trace second repeating fast radio burst to ordinary galaxy, suggesting that extreme environments are not required. https://cerncourier.com/wp-content/uploads/2020/02/cs_0315N_CHIME_1280x720.jpg
Astroparticle physicists head down under https://cerncourier.com/a/astroparticle-physicists-head-down-under/ Fri, 24 Jan 2020 14:25:13 +0000 https://preview-courier.web.cern.ch/?p=86369 Wide-ranging discussions at TeVPA featured the DAMA signal, GeV gamma rays from the galactic centre, the 21 cm radio line, AMS's positron excess and a host of cosmic messengers.

The post Astroparticle physicists head down under appeared first on CERN Courier.

]]>
Yvonne Wong at TeVPA 2019

Despite the thick haze of bushfire smoke hanging over the skyline, 200 delegates gathered in Sydney from 2 to 6 December for the 14th edition of the TeV Particle-Astrophysics conference (TeVPA), to discuss the status and future of astroparticle physics.

The week began with a varied series of talks on dark matter. Luca Grandi (Chicago) and Tom Thorpe (LNGS) updated delegates on progress towards the next generation of xenon and argon-based experiments: these massive underground detectors are now approaching total masses at the multi-tonne scale. Experiments like XENON, LZ and DarkSide are poised to be so sensitive to rare signals that they will even be able to detect coherent elastic neutrino–nucleus scattering – the ultimate background to direct dark-matter searches. Meanwhile, Greg Lane (Australian National University) brought us news of exciting developments in Australian dark-matter research. The Stawell Underground Laboratory—the first deep underground site in the southern hemisphere—will host part of the SABRE experiment, which aims to test the annually modulating event rate seen by the DAMA experiment. This highly controversial, dark-matter-like signal has been observed for two decades by DAMA, but remains in irreconcilable tension with null results from many other experiments. Excavation at Stawell has been under way since October last year. The site will form a central component of the Centre of Excellence for Dark Matter Particle Physics, recently awarded by the Australian Research Council.

Galaxies can be used as laboratories for particle physics

Eminent astrophysicist Joe Silk (IAP) reviewed the many ways in which galaxies can be used as laboratories for particle physics. One of the most persistent hints of dark-matter particle interactions in astrophysical data is the notorious excess of GeV gamma rays coming from the galactic centre. Recent analyses of the excess using improved statistical techniques and better models for the Milky Way’s central bulge were detailed by Shunsaku Horiuchi (Virginia Tech). While dark-matter-related explanations remain tempting, there is growing evidence in support of millisecond pulsars being responsible, given the spatial morphology of the excess. Francesca Calore (LAPTh) told us that multi-wavelength probes of the excess will be possible in the near future, and may finally allow us to conclusively determine the origin of the signal.

Probing the cosmos

Delegates enjoyed a stirring series of talks on the ever-increasing number of probes of cosmology. Following a review of the post-Planck status of cosmology by Jan Hamann (UNSW), Xuelei Chen (CAS) explained how the unique 21 cm radio line can be used to map neutral hydrogen throughout the universe and across cosmic time. A host of upcoming ground and space-based experiments attempting to observe the sky-averaged 21 cm line will hopefully allow us to peer back to the birth of the first stars at “cosmic dawn”. We also heard from Yvonne Wong (UNSW) about how cosmological data can be used as a test of neutrino physics and how neutrino physics may in turn be a means to alleviate tensions between cosmological datasets. For example, strong self-interactions between neutrinos could bring the two increasingly divergent measurements of the Hubble constant, from the cosmic microwave background and type-Ia supernovae respectively, into agreement.

The 21 cm radio line can be used to map neutral hydrogen throughout the universe and across cosmic time

Much of the week’s schedule was devoted to cosmic-ray research, gamma rays and indirect searches for dark matter. The antimatter cosmic-ray detector AMS, mounted on the International Space Station, is making measurements of cosmic-ray spectra to within 1% accuracy. Weiwei Xu (Shandong) summarised an impressive array of physics results made over almost a decade by AMS, including the most recent measurement of the positron flux, which has a clear high-energy component with a well-defined cutoff at 810 GeV – just as expected for galactic dark-matter annihilations. As with the GeV gamma-ray excess, however, pulsars represent a possible natural astrophysical explanation. The mystery could be resolved by the fact that, unlike pulsars, dark-matter annihilations are expected to produce antiprotons. While current antiproton data show a tantalisingly similar trend to the positron spectrum, more data is needed to identify the origin of the high-energy positrons. Many ongoing and upcoming observatories in the fields of cosmic-ray and gamma-ray research were also introduced to us, such as DAMPE (Jingjing Zang, CAS), the Cherenkov Telescope Array (Roberta Zanin, CTAO), the Pierre Auger Observatory (Bruce Dawson, U. Adelaide) and LHAASO (Zhen Cao, CAS). We are entering an exciting time when many of the enticing but ambiguous anomalies in cosmic-ray spectra will be definitively tested, potentially identifying a signal of dark matter in the process.

Gamma-ray bursts (GRBs) generated much enthusiasm this year, with Edna Ruiz-Velasco (MPIK) and Elena Moretti (IFAE) talking about brand-new observations of GRBs from the H.E.S.S. and MAGIC collaborations, including the first detection of a GRB afterglow at very high energies (>100 GeV), by H.E.S.S. These observations have helped resolve long-standing mysteries surrounding the complex array of processes needed to produce the phenomenal energies of GRB emission. An important contribution is now known to be “synchrotron self-Compton” emission, in which a synchrotron photon generated by an electron spiralling around a magnetic field line is Compton up-scattered by the same electron population that produced it.
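
The energetics of that mechanism follow from standard inverse-Compton kinematics; in the Thomson regime (the textbook scaling, quoted here for orientation), a photon of energy E_syn scattered by an electron with Lorentz factor γ emerges with

\[
E_{\mathrm{IC}} \sim \gamma^{2} E_{\mathrm{syn}},
\]

so the same electrons whose synchrotron emission peaks far below can boost their own photons into the >100 GeV band.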

Many well-motivated theories of modified gravity are now finding little room to hide

Finally, the subject of gravitational waves continues to surge in popularity within this community. We first heard a summary from Susan Scott (Australian National University) of the more than 50 confirmed gravitational-wave discoveries made by Advanced LIGO and Advanced Virgo to date, and then from Tara Murphy (Sydney) about the intense work involved in rapidly following up luminous gravitational-wave events with radio observations. LIGO’s discoveries of neutron-star and black-hole mergers are a window into one of the strongest regimes of gravity we have ever been able to see. With general relativity still holding up as robustly as ever, many well-motivated theories of modified gravity are now finding little room to hide.

The next TeVPA will take place in late October 2020 in Chengdu, China.

The post Astroparticle physicists head down under appeared first on CERN Courier.

]]>
Meeting report Wide-ranging discussions at TeVPA featured the DAMA signal, GeV gamma rays from the galactic centre, the 21 cm radio line, AMS's positron excess and a host of cosmic messengers. https://cerncourier.com/wp-content/uploads/2020/01/TeVPA-Yvonne-Wong.jpg
Crisis for cosmology? https://cerncourier.com/a/crisis-for-cosmology/ Wed, 22 Jan 2020 12:17:03 +0000 https://preview-courier.web.cern.ch/?p=86302 A new fit to Planck’s 2018 data release exchanges an anomalously large lensing amplitude for a higher energy density, thereby inferring a closed universe.

The post Crisis for cosmology? appeared first on CERN Courier.

]]>
Planck data on the cosmic microwave background (CMB) have been reinterpreted to favour a closed universe at more than 99% confidence, in contradiction with the flat universe favoured by the established ΛCDM model of cosmology. In their new fit to Planck’s 2018 data release, Eleonora Di Valentino (Manchester), Alessandro Melchiorri (La Sapienza) and Joe Silk (Oxford) exchanged an anomalously large lensing amplitude (a phenomenological parameter that rescales the gravitational-lensing potential in the CMB power spectrum) for a higher energy density.

In addition to the lensing anomaly, which leads to inconsistencies between large and small scales, the flat interpretation is already plagued by a 4.4σ tension with the latest local determination of the Hubble constant, based on Cepheid-calibrated distance measurements – a tension that grows to 5.4σ in a closed universe.
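
In the Friedmann framework – standard definitions, independent of this particular analysis – “closed” is a statement about the curvature density parameter:

\[
\Omega_{k} \equiv -\frac{kc^{2}}{a^{2}H^{2}} = 1 - \Omega_{\mathrm{tot}}, \qquad \Omega_{k} < 0 \;\Leftrightarrow\; \Omega_{\mathrm{tot}} > 1 \;\Leftrightarrow\; \text{closed},
\]

which is why trading the lensing anomaly for a higher energy density tips the fit towards a closed geometry.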

The inconsistencies between data sets signal “a possible crisis for cosmology”, argue the authors.

The post Crisis for cosmology? appeared first on CERN Courier.

]]>
News A new fit to Planck’s 2018 data release exchanges an anomalously large lensing amplitude for a higher energy density, thereby inferring a closed universe. https://cerncourier.com/wp-content/uploads/2020/01/CMB-detail.jpg
2019 Nobel Prize in Physics for cosmic perspectives https://cerncourier.com/a/2019-nobel-prize-in-physics-for-cosmic-perspectives/ Fri, 10 Jan 2020 15:58:59 +0000 https://preview-courier.web.cern.ch/?p=85017 One half of the prize was granted to James Peebles for theoretical discoveries in cosmology, while the other was shared between Michel Mayor and Didier Queloz for the discovery of an exoplanet orbiting a Sun-like star.

The post 2019 Nobel Prize in Physics for cosmic perspectives appeared first on CERN Courier.

]]>
James Peebles, Michel Mayor and Didier Queloz

The Nobel Prize in Physics for 2019 has recognised two independent bodies of work that have transformed our view of the universe and humanity’s place in it. One half of the SEK 9 million prize, announced on 8 October in Stockholm, was granted to James Peebles of Princeton University for theoretical discoveries in physical cosmology, while the other was shared between Michel Mayor of the University of Geneva and Didier Queloz of the universities of Geneva and Cambridge for the discovery of an exoplanet orbiting a Sun-like star.

Peebles was instrumental in turning cosmology into the precision science it is today, with its ever closer links to collider and particle physics in general. Following the unexpected discovery of the cosmic microwave background (CMB) in 1965, he and others at Princeton used it to support the idea that the universe began in a hot, dense state. While the idea of a “big bang” was already many years old, Peebles paired it with concrete physics processes such as nucleosynthesis and described the role of temperature and density in the formation of structure. With others, he arrived at a model accounting for the density fluctuations in the CMB showing a series of acoustic peaks, which would demonstrate that the universe is geometrically flat and that ordinary matter constitutes just 5% of its total matter and energy content. In the early 1980s, Peebles was the first to consider non-relativistic “cold” dark matter and its effect on structure formation, and he went on to reintroduce Einstein’s forsaken cosmological constant – work that underpins today’s Lambda Cold Dark Matter model of cosmology.

Mayor and Queloz’s discovery of an exoplanet orbiting a solar-type star in the Milky Way opened a new field of study. 51 Pegasi b lies 50 light years from Earth and takes just four days to complete its orbit. It was spotted by tracking how it and its star orbit around their common centre of gravity: a subtle wobbling seen from Earth whose speed can be measured from the starlight via the Doppler effect. The problem is that the radial velocities are extremely low. Mayor mounted his first spectrograph on a telescope at the Haute-Provence Observatory near Marseille in 1977, but it was only sensitive to velocities above 300 m s^−1 – too high to see a planet pulling on its star. It took almost two decades of work by him and his group to strike success, with doctoral student Queloz tasked with developing new methods to increase the machine’s light sensitivity. Today, more than 4000 exoplanets with a vast variety of forms, sizes and orbits have been discovered in our galaxy using the radial-velocity method and the newer technique of transit photometry, challenging ideas about planetary formation.
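
For scale, the relative Doppler shift a spectrograph must resolve follows from the simple non-relativistic formula; at the 300 m s^−1 sensitivity floor quoted above:

\[
\frac{\Delta\lambda}{\lambda} = \frac{v}{c} = \frac{300~\mathrm{m\,s^{-1}}}{3\times10^{8}~\mathrm{m\,s^{-1}}} = 10^{-6},
\]

and planet-induced wobbles of order tens of metres per second demand roughly another order of magnitude in precision.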

The post 2019 Nobel Prize in Physics for cosmic perspectives appeared first on CERN Courier.

]]>
News One half of the prize was granted to James Peebles for theoretical discoveries in cosmology, while the other was shared between Michel Mayor and Didier Queloz for the discovery of an exoplanet orbiting a Sun-like star. https://cerncourier.com/wp-content/uploads/2019/11/CCNovDec19_News_Nobel_feature.jpg
KAGRA complete https://cerncourier.com/a/kagra-complete/ Fri, 10 Jan 2020 15:56:50 +0000 https://preview-courier.web.cern.ch/?p=85026 KAGRA is the first gravitational-wave detector to operate at cryogenic temperatures.

The post KAGRA complete appeared first on CERN Courier.

]]>
KAGRA

The construction of Japan’s first gravitational-wave (GW) detector, KAGRA, was finished on 4 October. Following agreement with the LIGO and Virgo collaborations, KAGRA will now participate in their third joint observation run, which began in April. The detector, which was built by the University of Tokyo, the National Astronomical Observatory of Japan and KEK, is the world’s fourth major GW detector, alongside LIGO in Washington state and Louisiana and Virgo in Italy. One of a suite of detectors in the Kamioka Observatory in northern Japan, KAGRA is also the first GW detector to operate at cryogenic temperatures, improving sensitivity at frequencies around 100 Hz – an important feature for proposed third-generation detectors such as the Einstein Telescope in Europe and the Cosmic Explorer in the US.

The post KAGRA complete appeared first on CERN Courier.

]]>
News KAGRA is the first gravitational-wave detector to operate at cryogenic temperatures. https://cerncourier.com/wp-content/uploads/2019/11/CCNovDec19_digest_kagra.jpg
Astronomers scale new summit https://cerncourier.com/a/astronomers-scale-new-summit/ Fri, 29 Nov 2019 14:44:05 +0000 https://preview-courier.web.cern.ch/?p=85128 The world’s largest optical/near-infrared telescope, the Extremely Large Telescope, under construction in Chile, will bring mysteries such as dark energy into focus.

The post Astronomers scale new summit appeared first on CERN Courier.

]]>
The foundations of ESO’s Extremely Large Telescope

The 3 km-high summit of Cerro Armazones, located in the Atacama desert of Northern Chile, is a construction site for one of most ambitious projects ever mounted by astronomers: the Extremely Large Telescope (ELT). Scheduled for first light in 2025, the ELT is centred around a 39 m-diameter main mirror that will gather 250 times more light than the Hubble Space Telescope and use advanced corrective optics to obtain exceptional image quality. It is the latest major facility of the European Southern Observatory (ESO), which has been surveying the southern skies for almost 60 years.
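
The factor of 250 is just the diameter-squared scaling of collecting area; taking Hubble’s 2.4 m primary (a standard figure, not quoted in the article) and ignoring obscuration and segment gaps:

\[
\frac{A_{\mathrm{ELT}}}{A_{\mathrm{HST}}} \approx \left(\frac{39~\mathrm{m}}{2.4~\mathrm{m}}\right)^{2} \approx 264,
\]

in line with the quoted figure once the non-filled aperture is accounted for.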

The science goals of the ELT are vast and diverse. Its sheer size will enable the observation of distant objects that are currently beyond reach, allowing astronomers to better understand the formation of the first stars, galaxies and even black holes. The sharpness of its images will also enable a deeper study of extrasolar planets, possibly even the characterisation of their atmospheres. “One new direction may become possible through very high precision spectroscopy – direct detection of the expansion rate of the universe, which would be an amazing feat,” explains Pat Roche of the University of Oxford and former president of the ESO council. “But almost certainly the most exciting results will be from unexpected discoveries.”

Technical challenges

The ELT was approved in 2006, and civil engineering began in 2014. Construction of the 74 m-high, 86 m-diameter dome and the 3400-tonne main structure began in 2019. In January 2018 the first segments of the main mirror were successfully cast, marking the first step of a challenging five-mirror system that goes beyond the traditional two-mirror “Gregorian” design. The introduction of a third powered mirror delivers a focal plane that remains un-aberrated at all field locations, while a fourth and a fifth mirror correct distortions in real time due to the Earth’s atmosphere or other external factors. This novel arrangement, combined with the sheer size of the ELT, makes almost every aspect of the design particularly challenging.

Concepts of the ELT at work

The main mirror is itself a monumental enterprise; it consists of 798 hexagonal segments, each measuring approximately 1.4 m across and 50 mm thick. To keep the surface unchanged by external factors such as temperature or wind, each segment has edge sensors measuring its location to within a few nanometres – the most accurate ever used in a telescope. The construction and polishing of the segments, as well as the edge sensors, is a demanding task and only possible thanks to collaboration with industry; at least seven private companies are working on the main mirror alone. The size of the mirror was originally 42 m, but it was later reduced to 39 m, mainly for cost reasons, while still allowing the ELT to fulfil its main scientific goals. “The ELT is ESO’s largest project and we have to ensure that it can be constructed and operated within the available budget,” says Roche. “A great deal of careful planning and design, most of it with input from industry, was undertaken to understand the costs and the cost drivers, and the choice of primary mirror diameter emerged from these analyses.”

The task is not much easier for the other mirrors. The secondary mirror, measuring 4 m across, is highly convex and will be the largest secondary mirror ever employed on a telescope, as well as the largest convex mirror ever produced. The ELT's tertiary mirror also has a curved surface, contrary to more traditional designs. The fourth mirror will be the largest adaptive mirror ever made, supported by more than 5000 actuators that will deform and adjust its shape in real time to achieve a factor-500 improvement in resolution.

Currently 28 companies are actively collaborating on different parts of the ELT design; most of these companies are European, but the contracts also include the Chilean companies ICAFAL, for the road and platform construction, and Abengoa, for the ELT technical facility. Among the European contracts, the construction of the telescope dome and main structure by the Italian ACe consortium of Astraldi and Cimolai is the largest in ESO's history. The total cost estimate for the baseline design of the ELT is €1.174 billion, while the running cost is estimated to be around €50 million per year. Since the approval of the ELT, ESO has increased its number of member states from 14 to 16, with Poland and Ireland joining in 2015 and 2018, respectively. Chile is a host state and Australia a strategic partner.

European Southern Observatory’s particle-physics roots

ESO’s Telescope Project Division and Sky Atlas Laboratory in the 1970s

The ELT’s success lies in ESO’s vast experience in the construction of innovative telescopes. The idea for ESO, a 16-nation intergovernmental organisation for research in ground-based astronomy, was conceived in 1954 with the aim of creating a European observatory dedicated to observations of the southern sky. At the time, the largest such facilities had an aperture of about 2 m; more than 50 years later, ESO is responsible for a variety of observatories, including its first telescope at La Silla, not far from Cerro Armazones (home of the ELT).

Like CERN, ESO was born in the aftermath of the war to allow European countries to develop scientific projects that nations were unable to do on their own. The similarities are by no means a mere coincidence. From the beginning, CERN served as a model regarding important administrative aspects of the organisation, such as the council delegate structure, the finance base or personnel regulations. A stronger collaboration ensued in 1969, when ESO approached CERN to assist with the powerful and sophisticated instrumentation of its 3.6 m telescope and other challenges ESO was facing, both administrative and technological. This collaboration saw ESO facilities established at CERN: the Telescope Project Division and, a few years later, ESO's Sky Atlas Laboratory. A similar collaboration has since been organised for EMBL and, more recently, for a new hadron-therapy facility in Southeast Europe.

Unprecedented view

A telescope of this scale has never been attempted before in astronomy. Not only must the ELT be constructed and operated within the available budget, but it should not impact the operation of ESO’s current flagship facilities (such as the VLT, the VLT interferometer and the ALMA observatory).

The amount of data produced by the ELT is estimated to be around 1–2 TB per night, including scientific and calibration observations. The data will be analysed automatically, and users have the option to download the processed data or, if needed, to download the raw data and process it in their own research centres. To secure observation time with the facility, ESO makes a call for proposals once or twice a year, in which researchers propose observations in their fields of interest. "A committee of astronomers then evaluates the proposals and ranks them according to their relevance and potential scientific impact; the highest-ranked ones are then chosen to be followed up," explains project scientist Miguel Pereira of the University of Oxford.

Currently, 28 companies are actively collaborating on different parts of the ELT design, mostly from Europe

In addition to its astronomical goals, the ELT will contribute to the growing confluence of cosmology and fundamental physics. Specifically, it will help elucidate the nature of dark energy by identifying distant type Ia supernovae, which serve as excellent markers of the universe's expansion history. The ELT will also measure the change in redshift with time of distant objects – a feat that is beyond the capabilities of current telescopes – to trace the rate of expansion directly. Possible variations over time of fundamental physical constants, such as the fine-structure constant and the strong coupling constant, will also be targeted. Such measurements are very challenging because the strength of the constraint on the variability depends critically on the accuracy of the wavelength calibration. The ELT's ultra-stable high-resolution spectrograph aims to remove the systematic uncertainty currently present in wavelength-calibration measurements, offering the possibility of an unambiguous detection of such variations.
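
The "change in redshift with time" mentioned here is known as the redshift drift, or Sandage–Loeb test. For a standard Friedmann–Lemaître–Robertson–Walker cosmology it takes a compact textbook form (quoted for orientation; this is the generic relation, not a formula from the ELT design documents):

$$\dot z = (1+z)\,H_0 - H(z),$$

so monitoring the drift of a given source over a decade or two samples the expansion history H(z) directly, with no astrophysical modelling of the source itself. The expected signal is of order centimetres per second per decade in velocity units, which is why an ultra-stable spectrograph is essential.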

The ELT construction is on schedule for completion, and first light is expected in 2025. "In the end, projects succeed because of the people who design, build and support them," Roche says, attributing the success of the ELT to rigorous attention to design and analysis across all aspects of the project. The road ahead is still challenging and full of obstacles, but, as the former director of the Paris Observatory, André Danjon, wrote to his counterpart at the Leiden Observatory, Jan Oort, in 1962: "L'astronomie est bien l'école de la patience" ("Astronomy is indeed the school of patience"). No doubt the ELT will yield extraordinary scientific rewards.

The post Astronomers scale new summit appeared first on CERN Courier.

]]>
Feature The world’s largest optical/near-infrared telescope, the Extremely Large Telescope, under construction in Chile, will bring mysteries such as dark energy into focus. https://cerncourier.com/wp-content/uploads/2019/11/CCSepOct19_ELT1.jpg
MAGIC spots epic gamma-ray burst https://cerncourier.com/a/magic-spots-epic-gamma-ray-burst/ Wed, 27 Nov 2019 12:05:55 +0000 https://preview-courier.web.cern.ch/?p=85637 TeV photons from distant gamma-ray burst provide evidence for synchrotron self-Compton process.

The post MAGIC spots epic gamma-ray burst appeared first on CERN Courier.

]]>
Gamma-ray bursts (GRBs) are the brightest electromagnetic events in the universe since the Big Bang. First detected in 1967, GRBs have since been observed about once per day using a range of instruments, allowing astrophysicists to gain a deeper understanding of their origin. As often happens, 14 January 2019 saw the detection of three GRBs. While the first two were not of particular interest, the unprecedented energy of photons emitted by the third – measured by the MAGIC telescopes – provides new insight into these mysterious phenomena.

The study of GRBs is unique, both because GRBs occur at random locations and times and because each GRB has different time characteristics and energy spectra. GRBs consist of two phases: a prompt phase, lasting from hundreds of milliseconds to hundreds of seconds, which consists of one or several bright bursts of hard X-rays and gamma-rays; followed by a significantly weaker “afterglow” phase which can be observed at lower energies ranging from radio to X-rays and lasts for periods up to months.

The recent detection adds yet another messenger: TeV photons

Since the late 1990s, optical observations have confirmed both that GRBs happen in other galaxies and that longer duration GRBs tend to be associated with supernovae, strongly hinting that they result from the death of massive stars. Shorter GRBs, meanwhile, have recently been shown to be the result of neutron-star mergers, thanks to the first joint observation of a GRB with a gravitational-wave event in 2017. While this event is often regarded as the start of multi-messenger astrophysics, the recent detection of GRB190114C, lying 4.5 billion light years from Earth, adds yet another messenger to the field of GRB astrophysics: TeV photons.

HST GRB190114C

The MAGIC telescopes on the island of La Palma measure Cherenkov radiation produced when TeV photons induce electromagnetic showers upon interacting with the Earth's atmosphere. During the past 15 years, MAGIC has discovered a range of astrophysical sources via their emission at these extreme energies. However, despite over 100 attempts, emission from GRBs remained elusive, even though theoretical predictions suggested that it could exist.

On 14 January, based on an alert provided by space-based gamma-ray detectors, the MAGIC telescopes started repointing within a few tens of seconds of the onset of the GRB. Within the next half hour, the telescopes had observed around 1000 high-energy photons from the source. This emission, long predicted by theorists, is shown by the collaboration to be the result of the "synchrotron self-Compton" process, whereby high-energy electrons accelerated in the initial violent explosion interact with magnetic fields produced by the collision between the ejecta and interstellar matter. The synchrotron emission from this interaction produces the afterglow observed at X-ray, optical and radio energies. However, some of these synchrotron photons subsequently undergo inverse Compton scattering off the same electrons, allowing them to reach TeV energies. These measurements by MAGIC show for the first time that this mechanism does indeed occur. Given the many past observations in which it was not seen, it appears to be yet another feature that differs between GRBs.
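
An order-of-magnitude sketch of why synchrotron self-Compton can reach the TeV band. This uses only the Thomson-regime scaling; Klein–Nishina suppression, which matters at these energies, is ignored, and both input numbers are assumed illustrative values rather than MAGIC's fitted parameters:

```python
# Thomson-limit inverse-Compton scaling: E_IC ~ (4/3) * gamma^2 * E_seed
gamma = 3e4        # assumed electron Lorentz factor (illustrative)
e_seed_kev = 1.0   # assumed seed synchrotron photon energy (~X-ray afterglow)

e_ic_kev = (4.0 / 3.0) * gamma**2 * e_seed_kev
print(f"Upscattered photon energy ~ {e_ic_kev / 1e9:.1f} TeV")  # ~1.2 TeV
```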

The MAGIC results were published in an issue of Nature that also reported the discovery of similar emission from a different GRB by another Cherenkov telescope: the High Energy Stereoscopic System (H.E.S.S.) in Namibia. While the measurements are consistent, it is interesting to note that the H.E.S.S. measurements were made ten hours after that particular GRB, showing that this type of emission can also occur on much later time scales. With two new large-scale Cherenkov observatories – the Large High Altitude Air Shower Observatory in China and the global Cherenkov Telescope Array – about to commence data taking, the field of GRB astrophysics can now expect a range of new discoveries.

The post MAGIC spots epic gamma-ray burst appeared first on CERN Courier.

]]>
News TeV photons from distant gamma-ray burst provide evidence for synchrotron self-Compton process. https://cerncourier.com/wp-content/uploads/2019/11/MAGIC-telescope-twitter.jpg
Building Gargantua https://cerncourier.com/a/building-gargantua/ Tue, 12 Nov 2019 16:55:07 +0000 https://preview-courier.web.cern.ch/?p=85333 Oliver James describes the visual effects which produced the black hole in Interstellar.

The post Building Gargantua appeared first on CERN Courier.

]]>
Oliver James is chief scientist of the world's biggest visual effects studio, DNEG, which produced the spectacular visual effects for Interstellar. DNEG's work, carried out in collaboration with theoretical cosmologist Kip Thorne, led to some of the most physically accurate images of a spinning black hole ever created, earning the firm an Academy Award and a BAFTA. For James, it all began with an undergraduate degree in physics at the University of Oxford in the late 1980s – a period that he describes as one of the most fascinating and intellectually stimulating of his life. "It confronted me with the gap between what you observe and reality. I feel it was the same kind of gap I faced while working for Interstellar. I had to study a lot to understand the physics of black holes and curved space time."

A great part of visual effects is understanding how light interacts with surfaces and volumes and eventually enters a camera's lens. As a student, James was interested in atomic physics, quantum mechanics and modern optics. This, in addition to his two other passions – computing and photography – led him to his first job in a small photographic studio in London, where he became familiar with the technical and operational aspects of the industry. Missing the intellectual challenge offered by physics, in 1995 he approached the Computer Film Company – a niche studio specialising in digital film that was part of the emerging London visual-effects industry – and secured a role in its R&D team.

Suddenly these rag-dolls came to life and you’d find yourself wincing in sympathy as they were battered about

Oliver James

A defining moment came in 2001, when one of his ex-colleagues invited him to join Warner Bros' ESC Entertainment in Alameda, California, to work on The Matrix Reloaded & Revolutions. His main task was to work on rigid-body simulations – not a trivial task given the many fight scenes. "There's a big fight scene, called the Burly Brawl, where hundreds of digital actors get thrown around like skittles," he says. "We wanted to add realism by simulating the physics of these colliding bodies. The initial tests looked physical, but lifeless, so we enhanced the simulation by introducing torque at every joint, calculated from examples of real locomotion. Suddenly these rag-dolls came to life and you'd find yourself wincing in sympathy as they were battered about." The sequences took dozens of artists and technicians months of work to create just a few seconds of the movie.

DNEG chief scientist Oliver James

Following his work in ESC Entertainment, James moved back to London and, after a short period at the Moving Picture Company, he finally joined “Double Negative” in 2004 (renamed DNEG in 2018). He’d been attracted by Christopher Nolan’s film Batman Begins, for which the firm was creating visual effects, and it was the beginning of a long and creative journey that would culminate in the sci-fi epic Interstellar, which tells the story of an astronaut searching for habitable planets in outer space.

Physics brings the invisible to life
“We had to create new imagery for black holes; a big challenge even for someone with a physics background,” recalls James. Given that he hadn’t studied general relativity as an undergraduate and had only touched upon special relativity, he decided to call Kip Thorne of Caltech for help. “At one point I asked [Kip] a very concrete question: ‘Could you give me an equation that describes the trajectory of light from a distant star, around the black hole and finally into an observer’s eye?’ This must have struck the right note, as the next day I received an email – it was more like a scientific paper – that included the equations answering my questions.” In total, James and Thorne exchanged some 1000 emails, often including detailed mathematical formalism that DNEG could then use in its code. “I often phrased my questions in a rather clumsy way and Kip would insist: ‘What precisely do you mean?’” says James. “This forced me to rethink what was lying at the heart of my questions.”

The result for the wormhole was like a crystal ball reflecting each point of the universe

Oliver James

DNEG was soon able to develop new rendering software to visualise black holes and wormholes. The director had wanted a wormhole with an adjustable shape and size, so the team designed one with three free parameters: the length and radius of the wormhole's interior, plus a third parameter describing the smoothness of the transition from its interior to its exterior, explains James. "The result for the wormhole was like a crystal ball reflecting each point of the universe; imagine a spherical hole in space–time." Simulating a black hole represented a bigger challenge as, by definition, it is an object that doesn't allow light to escape. With his colleagues, he developed a completely new renderer that simulates the path of light through gravitationally warped space–time – including gravitational-lensing effects and other physical phenomena that take place around a black hole.
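
To give a flavour of what such a renderer must do, here is a minimal sketch of photon-path integration around a black hole. It is far simpler than DNEG's production code: it assumes a non-spinning (Schwarzschild) hole rather than the spinning Kerr metric used for Gargantua, works in the equatorial plane in geometric units (G = c = M = 1), and only classifies rays as captured or escaping rather than rendering an image:

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(phi, y):
    """Photon orbit (Binet) equation: d^2u/dphi^2 = 3u^2 - u, with u = 1/r."""
    u, du = y
    return [du, 3.0 * u**2 - u]

def hit_horizon(phi, y):
    return y[0] - 0.5      # u = 1/2 corresponds to the horizon r = 2M
hit_horizon.terminal = True

def trace_ray(r0, b):
    """Photon starting at radius r0, moving inward with impact parameter b."""
    u0 = 1.0 / r0
    # The null condition fixes the slope: (du/dphi)^2 = 1/b^2 - u^2 (1 - 2u)
    du0 = np.sqrt(1.0 / b**2 - u0**2 * (1.0 - 2.0 * u0))
    return solve_ivp(rhs, (0.0, 6 * np.pi), [u0, du0],
                     events=hit_horizon, max_step=0.01)

# Rays with impact parameter near b_crit = 3*sqrt(3) ~ 5.196 M wrap around the
# photon sphere at r = 3M -- the origin of the lensed halo seen in the film.
for b in (5.0, 5.2, 7.0):
    sol = trace_ray(r0=30.0, b=b)
    captured = len(sol.t_events[0]) > 0
    print(f"b = {b:.1f} M: {'captured' if captured else 'escapes'}")
```

A production renderer extends this idea to bundles of rays in the Kerr metric, mapping each camera pixel back to the celestial sphere or the accretion disk.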

Quality standards
On the internet one can find many images of black holes "eating" other stars or of stars colliding to form a black hole. But producing an image for a motion picture requires totally different quality standards. The high quality demanded of an IMAX image meant that the team had to eliminate any artefacts that could show up in the final picture, and consequently rendering times were up to 100 hours, compared to the typical 5–6 hours needed for other films. Whereas most astrophysical visualisations aim primarily for fast throughput, the team's major goal was to create images that looked like they might really have been filmed. "This goal led us to employ a different set of visualisation techniques from those of the astrophysics community—techniques based on propagation of ray bundles (light beams) instead of discrete light rays, and on carefully designed spatial filtering to smooth the overlaps of neighbouring beams," says James.

Gravitationally-lensed accretion disks

DNEG’s team generated a flat, multicoloured ring standing for the accretion disk and positioned it surrounding the spinning black hole. The result was a warped spac–time around the black hole including its accretion disk. Thorne later wrote in his 2014 book The Science of Interstellar: “You cannot imagine how ecstatic I was when Oliver sent me his initial film clips. For the first time ever –and before any other scientist– I saw in ultra-high definition what a fast-spinning black hole looks like. What it does, visually, to its environment.” The following year, James and his DNEG colleagues published two papers with Thorne on the science and visualisation of these objects (Am. J. Phys 83 486 and Class. Quantum Grav. 32 065001).

Another challenge was to capture the fact that the film camera should be travelling at a substantial fraction of the speed of light. Relativistic aberration, Doppler shifts and gravitational redshifts had to be integrated into the rendering code, influencing how the disk layers would look close to the camera as well as the colour grading and brightness changes in the final image. Things get even more complicated closer to the black hole, where space–time is more distorted: gravitational lensing becomes more extreme and the computation takes more steps. Thorne developed procedures describing how to map a light ray and a ray bundle from the light source to the camera's local sky, and produced low-quality images in Mathematica to verify his code before handing it to DNEG to create the fast, high-resolution render. This was used to simulate all the images to be lensed: fields of stars, dust clouds and nebulae, and the accretion disk around Gargantua, Interstellar's gigantic black hole. In total, the movie notched up almost 800 TB of data. To simulate the starry background, DNEG used the Tycho-2 star catalogue from the European Space Agency, containing about 2.5 million stars; more recently the team has adopted the Gaia catalogue, containing 1.7 billion stars.
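
The special-relativistic pieces mentioned above are standard textbook transformations, shown here only to indicate what enters such a renderer (sign conventions vary, and the production code additionally folds in the gravitational redshift of the spinning-hole metric). For a camera moving at speed β = v/c relative to the scene:

$$\nu_{\rm obs}=\frac{\nu_{\rm em}}{\gamma\,(1-\beta\cos\theta)},\qquad \cos\theta_{\rm obs}=\frac{\cos\theta_{\rm em}-\beta}{1-\beta\cos\theta_{\rm em}},\qquad \gamma=\frac{1}{\sqrt{1-\beta^{2}}},$$

where the first relation sets the Doppler-shifted colour and brightness of each incoming ray and the second its aberrated apparent direction.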

Creative industry
With the increased use of visual effects, more and more scientists, including mathematicians and physicists, are working in the field. And visual effects are not only vital for sci-fi movies: they are also integrated into drama and historical films. Furthermore, there is a growing number of companies creating tailored simulation packages for specific processes. DNEG alone has grown from 80 people in 2004 to more than 5000 people today. At the same time, this increase in numbers means that software needs to be scalable and adaptable to serve a wide range of skilled artists, James explains. "Developing specialised simulation software that gets used locally by a small group of skilled artists is one thing, but making it usable by a wide range of artists across the globe calls for a much bigger effort – to make it robust and much more accessible."

DNEG CERN Colloquium

Asked if computational resources are a limiting factor for the future of visual effects, James thinks any increase in computational power will quickly be swallowed up by artists adding extra detail or creating more complex simulations. The game-changer, he says, will be real-time simulation and rendering. Today, video games are rendered in real time by the computer's video card, whereas visual effects in movies are almost entirely created as batch processes, with the results cached or pre-rendered so they can be played back in real time. "Moving to real-time rendering means that the workflow will not rely on overnight renders and would allow artists many more iterations during production. We have only scratched the surface and there are plenty of opportunities for scientists." Even machine learning promises to play a role in the industry, and James is currently involved in R&D to use it to enable more natural body movements or facial expressions. Open data and open access is also an area that is growing, and in which DNEG is actively involved.

“Visual effects is a fascinating industry where technology and hard-science are used to solve creative problems,” says James. “Occasionally the roles get reversed and our creativity can have a real impact on science.”

The post Building Gargantua appeared first on CERN Courier.

]]>
Feature Oliver James describes the visual effects which produced the black hole in Interstellar. https://cerncourier.com/wp-content/uploads/2019/11/Interstellar.jpg
TAUP tackles topical questions https://cerncourier.com/a/taup-tackles-topical-questions/ Tue, 12 Nov 2019 13:28:15 +0000 https://preview-courier.web.cern.ch/?p=85301 The 16th International Conference on Topics in Astroparticle and Underground Physics was held in Toyama, Japan from 9–13 September.

The post TAUP tackles topical questions appeared first on CERN Courier.

]]>
Guido Drexlin

The 16th International Conference on Topics in Astroparticle and Underground Physics (TAUP 2019) was held in Japan from 9–13 September, attracting a record 540 physicists from around 30 countries. The 2019 edition of the series, which covered recent experimental and theoretical developments in astroparticle physics, was hosted by the Institute for Cosmic Ray Research of the University of Tokyo, and held in Toyama – the gateway city to the Kamioka experimental site.

Discussions first focused on gravitational-wave observations. During their first two observing runs, reported Patricia Schmidt from Radboud University, LIGO and Virgo confidently detected gravitational waves from 10 binary black-hole coalescences and one binary neutron-star inspiral, seeing one gravitational-wave event every 15 days of observation. It was also reported that, during the ongoing third observing run, LIGO and Virgo have already observed 26 candidate events. Among them is the first signal from a black hole–neutron star merger.

Guido Drexlin revealed the first measurement results on the upper limit of the neutrino mass

The programme continued with presentations from various research fields, a highlight being a report on the first result of the KATRIN experiment (KATRIN sets first limit on neutrino mass). Co-spokesperson Guido Drexlin revealed the first measurement of an upper limit on the neutrino mass: < 1.1 eV at 90% confidence. This world-leading direct limit – obtained by precisely measuring the kinematics of the electrons emitted in tritium beta decays – was based on only four weeks of data. As the experiment continues, it is expected that the limit will be reduced further, or even – if the neutrino mass is sufficiently large – that the actual mass will be determined. Thanks to the observation of neutrino oscillations, it has been known since 1998 that neutrinos have tiny, but non-zero, masses. However, their absolute values have not yet been measured.

Diversity is a key feature of the TAUP conference. Topics discussed included cosmology, dark matter, neutrinos, underground laboratories, new technologies, gravitational waves, high-energy astrophysics and cosmic rays. Multi-messenger astronomy – which combines information from gravitational-wave observation, optical astronomy, neutrino detection and other electromagnetic signals – is quickly becoming established and is expected to play an even more important role in the future in gaining a deeper understanding of the universe.

The next TAUP conference will be held in Valencia, Spain, from 30 August to 3 September 2021.

The post TAUP tackles topical questions appeared first on CERN Courier.

]]>
Meeting report The 16th International Conference on Topics in Astroparticle and Underground Physics was held in Toyama, Japan from 9–13 September. https://cerncourier.com/wp-content/uploads/2019/11/CCNovDec19_FN_taup.jpg
The galaxy that feeds three times per day https://cerncourier.com/a/the-galaxy-that-feeds-three-times-per-day/ Fri, 04 Oct 2019 12:20:44 +0000 https://preview-courier.web.cern.ch/?p=84757 Astronomers puzzled by galaxy that emits strong bursts of X-rays every nine hours.

The post The galaxy that feeds three times per day appeared first on CERN Courier.

]]>
All galaxies are thought to contain a super-massive black hole (SMBH) at their centres, one of which was famously pictured for the first time by the Event Horizon Telescope only a few months ago (CERN Courier May/June 2019 p10). Both the size and activity of such SMBHs differ significantly from galaxy to galaxy: some galaxies contain an almost dormant black hole at their centre, while in others the SMBH is accreting surrounding matter at a vast rate, resulting in bright emission at energies ranging from the radio to the X-ray regime.

While solar-mass black holes can show dramatic variations in their emission on the time scale of days or even hours, such time scales increase with size, meaning that for an SMBH one would not expect much change during years or even centuries. However, observations during the past decade have revealed sudden increases. In 2010 the X-ray emission from a galaxy called GSN 069, which has a relatively small SMBH (400,000 solar masses), became 240 times brighter compared to observations in 1989 – turning it into an active galaxy. In such objects the matter falling into the central SMBH releases radiation when it approaches the event horizon (the boundary beyond which nothing can escape the black hole’s gravitational field).

The brightness of the emission produced as the SMBH feeds on the surrounding disk of matter typically varies randomly on short time scales, a result of changes in the accretion rate and turbulence in the disk. But subsequent observations with the European Space Agency's X-ray satellite XMM-Newton in 2018 revealed never-before-seen behaviour. The object emitted strong bursts of X-rays lasting about one hour. Even more surprising was that the bursts appeared to occur at very consistent intervals of nine hours. Follow-up observations in 2019 with both XMM-Newton and NASA's Chandra X-ray telescope have now confirmed this picture. While simultaneous observations at radio wavelengths showed no variability, the intensity of the bursts at X-ray wavelengths decreased. An extrapolation of this decrease indicates that, by now, the bursts should have fully disappeared, although further observations are needed to confirm this.

XMM-Newton data

The team behind the latest observations, published in Nature, has no clear explanation of what causes such extreme periodic behaviour in such a massive object. One possibility, claims the paper, is that it is the result of a second SMBH orbiting the main one: each time it crosses the disk of matter, a burst would be expected. However, the associated variation would be expected to be smoother than is observed. Furthermore, no such bursts were seen in the 2010 observations, making this theory implausible. Another explanation is that a semi-destroyed star is currently orbiting the SMBH, disturbing the accretion rate. The last and most probable hypothesis is that the quasi-periodic eruptions are a result of complex oscillations, induced by instabilities, in the disk of hot matter surrounding the SMBH. The authors make it clear, however, that deeper studies are required to fully explain this new phenomenon.

Although observed for the first time in GSN 069, it could very well be that other galaxies exhibit similar behaviour. SMBHs with masses many orders of magnitude larger could exhibit the same periodic bursts but on time scales of months or years, explaining why no one has ever noticed them. So while it could be that GSN 069 is simply a strange galaxy, the finding could have large implications for galaxies in general.
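
A rough check of that scaling (a sketch assuming the recurrence time grows linearly with black-hole mass, as characteristic disk timescales do; the heavier masses are illustrative):

```python
m_ref = 4e5          # GSN 069 black-hole mass in solar masses (from the article)
t_ref_hours = 9.0    # observed burst recurrence time

for m in (4e6, 4e7, 4e8):                  # illustrative heavier SMBHs
    t_hours = t_ref_hours * (m / m_ref)    # linear-in-mass scaling (assumed)
    print(f"M = {m:.0e} Msun -> period ~ {t_hours / 24.0:.0f} days")
# ~4, ~38 and ~375 days: months to years, as suggested above
```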

Further reading:
G Miniutti et al. 2019 Nature 573 381.

The post The galaxy that feeds three times per day appeared first on CERN Courier.

]]>
News Astronomers puzzled by galaxy that emits strong bursts of X-rays every nine hours. https://cerncourier.com/wp-content/uploads/2019/10/Astrowatch-features-image-0hours_1_lab.jpg
A new centre for astroparticle theory https://cerncourier.com/a/a-new-centre-for-astroparticle-theory/ Thu, 12 Sep 2019 08:52:28 +0000 https://preview-courier.web.cern.ch/?p=84306 On 10 July, CERN and the Astroparticle Physics European Consortium (APPEC) founded a new research centre for astroparticle physics theory called EuCAPT.

The post A new centre for astroparticle theory appeared first on CERN Courier.

]]>
Gian Giudice, Teresa Montaruli, Eckhard Elsen and Job de Kleuver

On 10 July, CERN and the Astroparticle Physics European Consortium (APPEC) founded a new research centre for astroparticle physics theory called EuCAPT. Led by an international steering committee comprising 12 theorists from institutes in France, Portugal, Spain, Sweden, Germany, the Netherlands, Italy, Switzerland and the UK, and from CERN, EuCAPT aims to coordinate and promote theoretical physics in the fields of astroparticle physics and cosmology in Europe.

Astroparticle physics is undergoing a phase of profound transformation, explains inaugural EuCAPT director Gianfranco Bertone, who is spokesperson of the Centre for Gravitation and Astroparticle Physics at the University of Amsterdam. “We have recently obtained extraordinary results such as the discovery of high-energy cosmic neutrinos with IceCube, the direct detection of gravitational waves with LIGO and Virgo, and we have witnessed the birth of multi-messenger astrophysics. Yet we have formidable challenges ahead of us: understanding the nature of dark matter and dark energy, elucidating the origin of cosmic rays, understanding the matter-antimatter asymmetry problem, and so on. These are highly interdisciplinary problems that have ramifications in cosmology, particle, and astroparticle physics, and that are best addressed by a strong and diverse community of scientists.”

The construction of experimental astroparticle facilities is coordinated by APPEC, but until now there was no Europe-wide coordination of theoretical activities, says Bertone. “We want to be open and inclusive, and we hope that all interested scientists will feel welcome to join this new initiative.” On a practical level, EuCAPT aims to coordinate scientific and training activities, help researchers attract adequate resources for their projects, and promote a stimulating and open environment in which young scientists can thrive. CERN will act as the central hub of the consortium for the first five years.

It is not a coincidence that CERN has been chosen as the central hub of EuCAPT, says Gian Giudice, head of CERN’s theory department. “The research that we are doing at CERN-TH is an exploration of the possible links between physics at the smallest and largest scales. Creating a collaborative network among European research centres in astroparticle physics and cosmology will boost activities in these fields and foster dialogue with particle physics,” he says. “Dark matter, dark energy, inflation and the origin of large-scale structures are big questions regarding the universe. But there are good hints that suggest that their explanation has to be looked for in the domain of particle physics.”

The post A new centre for astroparticle theory appeared first on CERN Courier.

]]>
News On 10 July, CERN and the Astroparticle Physics European Consortium (APPEC) founded a new research centre for astroparticle physics theory called EuCAPT. https://cerncourier.com/wp-content/uploads/2019/09/CCSepOct19_news_centre.jpg
Grappling with dark energy https://cerncourier.com/a/grappling-with-dark-energy/ Mon, 09 Sep 2019 13:36:38 +0000 https://preview-courier.web.cern.ch/?p=84360 Adam Riess discusses intriguing discrepancies in the value of the Hubble constant.

The post Grappling with dark energy appeared first on CERN Courier.

]]>
Adam Riess of Johns Hopkins University

Could you tell us a few words about the discovery that won you a share of the 2011 Nobel Prize in Physics?

Back in the 1990s, the assumption was that we live in a dense universe governed by baryonic and dark matter, but astronomers could only account for 30% of matter. We wanted to measure the expected deceleration of the universe at larger scales, in the hope that we would find evidence for some kind of extra matter that theorists predicted could be out there. So, from 1994 we started a campaign to measure the distances and redshifts of type-Ia supernova explosions. The shift in a supernova's spectrum due to the expansion of space gives its redshift, and the relation between redshift and distance is used to determine the expansion rate of the universe. By comparing the expansion rates at two different epochs of the universe, we can estimate the expansion rate of the universe and how it changes over time. We made this comparison in 1998 and, to our surprise, we found that instead of decreasing, the expansion rate was speeding up. A stronger confirmation came after combining our measurements with those of the Supernova Cosmology Project. The result could only be interpreted if the universe, instead of decelerating, was speeding up its expansion.
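
The logic of the measurement can be summarised with two textbook relations (shown here for orientation; the 1998 analyses used full cosmological fits rather than this low-redshift expansion). A standard candle of absolute magnitude M observed at apparent magnitude m gives the luminosity distance through the distance modulus, and the deceleration parameter q0 enters the distance–redshift relation at second order:

$$\mu=m-M=5\log_{10}\!\left(\frac{d_L}{10\,\mathrm{pc}}\right),\qquad d_L(z)\simeq\frac{cz}{H_0}\left[1+\frac{1-q_0}{2}\,z+\dots\right].$$

Distant supernovae appeared fainter, and hence farther, than a decelerating model predicts, which translates into q0 < 0: accelerated expansion.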

What was the reaction from your colleagues when you announced your findings?

That our result was wrong! There were understandably different reactions, but the fact that two independent teams were measuring an accelerating expansion rate, plus the independent confirmation from measurements of the Cosmic Microwave Background (CMB), made it clear that the universe is accelerating. We reviewed all possible sources of error, including the presence of some yet-unknown astronomical process, but nothing came out. Barring a series of unrelated mistakes, we were looking at a new feature of the universe.

There were other puzzles at that time in cosmology that the idea of an accelerating universe could also solve. The so-called “age crisis” (many stars looked older than the age of the universe) was one of them. This meant that either the stellar ages are too high or that there is something wrong with the age of the universe and its expansion. This discrepancy could be resolved by accounting for an accelerated expansion.

What is driving the accelerated expansion?

One idea is that the cosmological constant, initially introduced by Einstein so that general relativity could accommodate a static universe, is linked to the vacuum energy. Today we know that the vacuum energy can't be the final answer, because summing the contributions from the presumed quantum states in the universe produces an enormous number for the expansion rate – about 120 orders of magnitude higher than observed. Such a rate is so high that it would have ripped apart galaxies, stars and planets before any structure could form.

The accelerating expansion can be due to what we broadly refer to as dark energy, but its source and its physics remain unknown. It is an ongoing area of research. Today we are making further supernovae observations to measure even more precisely the expansion rate, which will help us to understand the physics behind it.

By which other methods can we determine the source of the acceleration?

Today there is a vast range of approaches, using both space and ground experiments. A lot of work is ongoing on identifying more supernovae and measuring their distances and redshifts with higher precision. Other experiments are looking at baryon acoustic oscillations, which provide a standard ruler for measuring cosmological distances in the universe. There are proposals to use weak gravitational lensing, which is extremely sensitive to the parameters describing dark energy as well as the shape and history of the universe. Redshift-space distortions due to the peculiar velocities of galaxies can also tell us something. We may be able to learn something from these different types of observations in a few years. The hope is to be able to measure the equation of state of dark energy with 1% precision, and its variation over time with about 10% precision. This will offer a better understanding of whether dark energy is the cosmological constant or perhaps some form of energy temporarily stored in a scalar field that could change over time.

Is this one of the topics that you are currently involved with?

Yes, among other things. I am also working on improving the precision of the measurements of the Hubble constant, H0, which characterises the present state and expansion rate of our universe. Refined measurements of H0 could point to potential discrepancies in the cosmological model.

What’s wrong with our current determination of the Hubble constant?

The problem is that even when we account for dark energy (factoring in any uncertainties we are aware of) we get a discrepancy of about 9% when comparing the predicted expansion rate based on CMB data using the standard “ΛCDM” cosmological model with the present expansion. The uncertainty in this measurement has now gone below 2%, leading to a significance of more than 5σ while future observations from the SH0ES programme would likely reduce it to 1.5%.
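
The arithmetic behind those two numbers, using illustrative round values close to the published local (SH0ES-like) and early-universe (Planck-like) determinations – the exact figures depend on which releases are compared:

```python
h0_local, sig_local = 73.0, 1.0   # km/s/Mpc, assumed round numbers
h0_cmb, sig_cmb = 67.4, 0.5       # km/s/Mpc, assumed round numbers

discrepancy = (h0_local - h0_cmb) / h0_cmb
tension = (h0_local - h0_cmb) / (sig_local**2 + sig_cmb**2) ** 0.5
print(f"discrepancy ~ {discrepancy:.1%}, tension ~ {tension:.1f} sigma")
# -> discrepancy ~ 8.3%, tension ~ 5.0 sigma
```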

A new feature in the dark sector of the universe appears increasingly necessary to explain the present tension

There is something more profound in the disagreement of these two measurements. One measures how fast the universe is expanding today, while the other is based on the physics of the early universe – taking into account a specific model – and measuring how fast it should have been expanding. If these values don’t agree, there is a very strong likelihood that we are missing something in our cosmological model that connects the two epochs in the history of our universe. A new feature in the dark sector of the universe appears in my view increasingly necessary to explain the present tension.

When did the seriousness of the H0 discrepancy become clear?

It is hard to pinpoint a date, but it was between the publication of the first results from Planck in 2013, which predicted the value of H0 based on precise CMB measurements, and the publication of our 2016 paper that confirmed the H0 measurement. Since then, the tension has been growing. Various people became convinced along the way as new data came in, while others are still not convinced. This diversity of opinions is a healthy sign for science: we should take into account alternative viewpoints and continuously reassess the evidence that we have without taking anything for granted.

How can the Hubble discrepancy be interpreted?

The standard cosmological model, which contains just six free parameters, allows us to extrapolate the evolution from the Big Bang to the present cosmos – a period of almost 14 billion years. The model is based on certain assumptions: that space in the early universe was flat; that there are three neutrinos; that dark matter is very non-reactive; that dark energy is similar to the cosmological constant; and that there is no more complex physics. So one, or perhaps a combination, of these could be wrong. Knowing the original content of the universe and the physics, we should be able to calculate how the universe was expanding in the past and what its present expansion rate should be. The fact that there is a discrepancy means that we don't have the right understanding.

We think that the phenomenon that we call inflation is similar to what we call dark energy, and it is possible that there was another expansion episode in the history of the universe around the recombination period. Certain theories predict that a form of "early dark energy" becomes significant at that time, giving a boost to the universe that matches our current observations. Another option is the presence of dark radiation: a term that could account for a new type of neutrino or for another relativistic particle present in the early history of the universe. The presence of dark radiation would change the estimate of the expansion rate before the recombination period and give us a way to address the current Hubble-constant problem. Future measurements could tell us if other predictions of this theory are correct or not.

Does particle physics have a complementary role to play?

Oh definitely. Both collider and astrophysics experiments could potentially reveal either the properties of dark matter or a new relativistic particle or something new that could change the cosmological calculations. There is an overlap concerning the contributions of these fields in understanding the early universe, a lot of cross-talk and blurring of the lines – and in my view, that’s healthy.

What has it been like to win a Nobel prize at the relatively early age of 42?

It has been a great honour. You can choose whether you want to do science or not, as long as this choice is available. So certainly, the Nobel is not a curse. Our team is continually trying to refine the supernova measurements, and this is a growing community. Hopefully, if you come back in a couple of years, we will have more answers to your questions.

The post Grappling with dark energy appeared first on CERN Courier.

]]>
Opinion Adam Riess discusses intriguing discrepancies in the value of the Hubble constant. https://cerncourier.com/wp-content/uploads/2019/09/CCSepOct19_Int-riess_feature.jpg
Galaxies thrive on new physics https://cerncourier.com/a/galaxies-thrive-on-new-physics/ Fri, 30 Aug 2019 15:46:04 +0000 https://preview-courier.web.cern.ch/?p=84199 Theorists at Durham University simulated the universe using hydrodynamical simulations in which a scalar field enhances gravitational forces in low-density regions.

The post Galaxies thrive on new physics appeared first on CERN Courier.

]]>
This supercomputer-generated image of a galaxy suggests that general relativity might not be the only way to explain how gravity works. Theorists at Durham University in the UK simulated the universe using hydrodynamical simulations based on “f(R) gravity” – in which a scalar field enhances gravitational forces in low-density regions (such as the outer parts of a galaxy) but is screened by the so-called chameleon mechanism in high-density environments such as our solar system (see C Arnold et al. Nature Astronomy; arXiv:1907.02977).

The left-hand side of the image shows the scalar field of the theory: bright-yellow regions correspond to large scalar-field values, while dark-blue regions correspond to very small values, i.e. regions where screening is active and the theory behaves like general relativity. The right-hand side of the image shows the gas density with stars overplotted. The study, which comprised 12 simulations for different model parameters and resolutions and required a total runtime of about 2.5 million core-hours, shows that spiral galaxies like our Milky Way could still form even with different laws of gravity.
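
In f(R) gravity the Einstein–Hilbert action is supplemented by a function of the Ricci scalar R; simulations of this kind typically adopt the Hu–Sawicki form of f(R), whose extra scalar degree of freedom is suppressed at high curvature (the chameleon screening mentioned above). Schematically:

$$S=\int d^{4}x\,\sqrt{-g}\,\frac{R+f(R)}{16\pi G}+S_{\rm m},$$

where S_m is the matter action; general relativity with a cosmological constant is recovered for the constant choice f(R) = −2Λ.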

“Our research definitely does not mean that general relativity is wrong, but it does show that it does not have to be the only way to explain gravity’s role in the evolution of the universe,” says lead author Christian Arnold of Durham University’s Institute for Computational Cosmology.

The post Galaxies thrive on new physics appeared first on CERN Courier.

]]>
News Theorists at Durham University simulated the universe using hydrodynamical simulations in which a scalar field enhances gravitational forces in low-density regions. https://cerncourier.com/wp-content/uploads/2019/08/L25_group60_blend_3_lower.jpg
Interdisciplinary physics at the AEDGE https://cerncourier.com/a/interdisciplinary-physics-at-the-aedge/ Thu, 29 Aug 2019 12:53:03 +0000 https://preview-courier.web.cern.ch/?p=84127 Cold-atom interferometry could fill a gap in observational capability for gravitational waves in the intermediate-frequency band.

The post Interdisciplinary physics at the AEDGE appeared first on CERN Courier.

]]>
Frequency niche

Following the discovery of gravitational waves by the LIGO and Virgo collaborations, there is great interest in observing other parts of the gravitational-wave spectrum and seeing what they can tell us about astrophysics, particle physics and cosmology. The European Space Agency (ESA) has approved the LISA space experiment that is designed to observe gravitational waves in a lower frequency band than LIGO and Virgo, while the KAGRA experiment in Japan, the INDIGO experiment in India and the proposed Einstein Telescope (ET) will reinforce LIGO and Virgo. However, there is a gap in observational capability in the intermediate-frequency band where there may be signals from the mergers of massive black holes weighing between 100 and 100,000 solar masses, and from a first-order phase transition or cosmic strings in the early universe.

This was the motivation for a workshop held at CERN on 22 and 23 July that brought experts from the cold-atom community together with particle physicists and representatives of the gravitational-wave community. Experiments using cold atoms as clocks and in interferometers offer interesting prospects for detecting some candidates for ultralight dark matter as well as gravitational waves in the mid-frequency gap. In particular, a possible space experiment called AEDGE could complement the observations by LIGO, Virgo, LISA and other approved experiments.

The workshop shared information about long-baseline terrestrial cold-atom experiments that are already funded and under construction, such as MAGIS in the US, MIGA in France and ZAIGA in China, as well as ideas for future terrestrial experiments such as MAGIA-advanced in Italy, AION in the UK and ELGAR in France. Delegates also heard about satellite experiments – CACES (China) and CAL (NASA) – and the sounding-rocket experiment MAIUS (Germany), which use cold atoms in space and in microgravity.

A suggestion for an atom interferometer using a pair of satellites is being put forward by the AEDGE team

ESA has recently issued a call for white papers for its Voyage 2050 long-term science programme, and a suggestion for an atom interferometer using a pair of satellites is being put forward by the AEDGE team (in parallel with a related suggestion called STE-QUEST) to build upon the experience with prior experiments. AEDGE was the focus of the CERN workshop, and would have unique capabilities to probe the assembly of the supermassive black holes known to power active galactic nuclei, physics beyond the Standard Model in the early universe and ultralight dark matter. AEDGE would be a uniquely interdisciplinary space mission, harnessing cold-atom technologies to address key issues in fundamental physics, astrophysics and cosmology.

The post Interdisciplinary physics at the AEDGE appeared first on CERN Courier.

]]>
Meeting report Cold-atom interferometry could fill a gap in observational capability for gravitational waves in the intermediate-frequency band. https://cerncourier.com/wp-content/uploads/2019/08/binary-black-holes-merge.jpg
Clocking the merger of two white dwarfs https://cerncourier.com/a/clocking-the-merger-of-two-white-dwarfs/ Wed, 10 Jul 2019 15:44:32 +0000 https://preview-courier.web.cern.ch?p=83590 As two white dwarfs orbited one another they slowly lost momentum through the emission of gravitational waves.

The post Clocking the merger of two white dwarfs appeared first on CERN Courier.

]]>

A never-before-seen object with a cataclysmic past has been spotted in the constellation Cassiopeia, about 10,000 light years away. The star-like object has a temperature of 200,000 K, shines 40,000 times brighter than the Sun and is ejecting matter with velocities up to 16,000 km/s. In combination with the chemical composition of the surrounding nebula, the data indicate that it is the result of the merger of two dead stars.

Astronomers from the University of Bonn and Moscow detected the unusual object while searching for circumstellar nebulae in data from NASA's Wide-Field Infrared Survey Explorer satellite. Memorably named J005311, and measuring about five light years across, it barely emits any optical light and radiates almost exclusively in the infrared. Additionally, the matter it emits consists mostly of oxygen and shows no signs of hydrogen or helium, the two most abundant elements in the universe. All this makes it unlike a normal massive star and more in line with a white dwarf.

White dwarfs are "dead stars" that remain when typical stars have used up all of their hydrogen and helium fuel, at which point the oxygen- and carbon-rich star collapses into itself to form a high-mass Earth-sized object. The white dwarf is kept from further collapse into a neutron star only by the electron degeneracy pressure of the elements in its core, and its temperature is too low to enable further fusion. However, if the mass of the white dwarf increases, for example if it accretes matter from a nearby companion star, it can become hot enough to restart the fusion of carbon into heavier elements. This process is so violent that the radiation pressure it produces blows the star apart. Such "type Ia" supernovae are observed frequently and, since they are unleashed when a white dwarf reaches a very specific mass, they have a standard brightness that can be used to measure cosmic distances.

Despite having the chemical signature of a white dwarf, such an object cannot possibly shine as brightly as J005311. By comparing the characteristics of J005311 with models of what happens when two white dwarfs merge, however, the explanation falls into place. As two white dwarfs, likely produced billions of years ago, orbited one another, they slowly lost energy and angular momentum through the emission of gravitational waves. Over time, the objects came so close to each other that they merged. This would commonly be expected to produce a type Ia supernova, but there are also models in which carbon is ignited in a more subtle way during the merging process, allowing it to start fusing without blowing the newly formed object apart. J005311's detection appears to indicate that those models are correct, marking the first observation of a white-dwarf merger.
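
The slow gravitational-wave-driven approach is quantified by the classic Peters formula for a circular binary of masses m1 and m2 at separation a (a standard result, quoted here to show why the inspiral takes billions of years):

$$t_{\rm merge}=\frac{5}{256}\,\frac{c^{5}a^{4}}{G^{3}\,m_{1}m_{2}\,(m_{1}+m_{2})},$$

and the steep a⁴ dependence means the binary spends almost its entire life at wide separation, with the final approach and merger happening comparatively quickly.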

The rejuvenated star is, however, not expected to live for long. Based on the models it will burn through its remaining fuel within 10,000 years or so, forming a core of iron that is set to collapse into a neutron star through a violent event accompanied by a flash of neutrinos and possibly a gamma-ray burst. Using the speed of the ejected material and the distance it has reached from the star by now, it can be calculated that the merger took place about 16,000 years ago, meaning that its final collapse is not far away.

The post Clocking the merger of two white dwarfs appeared first on CERN Courier.

]]>
News As two white dwarfs orbited one another they slowly lost momentum through the emission of gravitational waves. https://cerncourier.com/wp-content/uploads/2019/07/CCJulAug19_News-astro_th.jpg
Studying neutron stars in the laboratory https://cerncourier.com/a/studying-neutron-stars-in-the-laboratory/ Wed, 10 Jul 2019 14:58:23 +0000 https://preview-courier.web.cern.ch?p=83596 The ALICE collaboration is now using the scattering between particles produced in collisions at the LHC to constrain interaction potentials in a new way.

The post Studying neutron stars in the laboratory appeared first on CERN Courier.

]]>
A report from the ALICE experiment

Neutron stars consist of extremely dense nuclear matter. Their maximum size and mass are determined by their equation of state, which in turn depends on the interaction potentials between nucleons. Due to the high density, not only neutrons but also heavier strange baryons may play a role.

The main experimental information on the interaction potentials between nucleons and strange baryons comes from bubble-chamber scattering experiments with strange-hadron beams undertaken at CERN in the 1960s, and is limited in precision due to the short lifetimes (< 200 ps) of the hadrons. The ALICE collaboration is now using the scattering between particles produced in collisions at the LHC to constrain interaction potentials in a new way. So far, pK, pΛ, pΣ⁰, pΞ and pΩ interactions have been investigated. Recent data have already yielded the first evidence for a strong attractive interaction between the proton and the Ξ baryon.

Strong final-state interactions between pairs of particles make their momenta more parallel to each other in the case of an attractive interaction, and increase the opening angle between them in the case of a repulsive interaction. The attractive potential of the p-Ξ interaction was observed by measuring the correlation of pairs of protons and Ξ particles as a function of their relative momentum (the correlation function) and comparing it with theoretical calculations based on different interaction potentials. This technique is referred to as “femtoscopy” since it simultaneously measures the size of the region in which particles are produced and the interaction potential between them.
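
Quantitatively, such femtoscopic measurements are usually interpreted through the Koonin–Pratt relation (a standard result in the field, quoted here for context), which convolves the emitting source with the pair wave function:

$$C(k^{*})=\int d^{3}r^{*}\,S(r^{*})\,\bigl|\psi(k^{*},r^{*})\bigr|^{2},$$

where k* is the relative momentum in the pair rest frame, S(r*) describes the size of the emission region (here about 1.4 fm) and the wave function ψ encodes both the Coulomb and strong interactions; C(k*) tends to unity in the absence of final-state interactions.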

Data from proton–lead collisions at a centre-of-mass energy per nucleon pair of 5.02 TeV show that p-Ξ pairs are produced at very small distances (~1.4 fm); the measured correlation is therefore sensitive to the short-range strong interaction. The measured p-Ξ correlations were found to be stronger than theoretical correlation functions with only a Coulomb interaction, whereas the prediction obtained by including both the Coulomb and strong interactions (as calculated by the HAL-QCD collaboration) agrees with the data (figure 1).

As a first step towards evaluating the impact of these results on models of neutron-star matter, the HAL-QCD interaction potential was used to compute the single-particle potential of Ξ within neutron-rich matter. A slightly repulsive interaction was inferred (of the order of 6 MeV, compared to the 1322 MeV mass of the Ξ), leading to better constraints on the equation of state for dense hadronic systems that contain Ξ particles. This is an important step towards determining the equation of state for dense and cold nuclear matter with strange hadrons.

The post Studying neutron stars in the laboratory appeared first on CERN Courier.

]]>
News The ALICE collaboration is now using the scattering between particles produced in collisions at the LHC to constrain interaction potentials in a new way. https://cerncourier.com/wp-content/uploads/2017/01/CCast1_01_17.jpg
Multi-messenger adventures https://cerncourier.com/a/review-multi-messenger-adventures/ Wed, 08 May 2019 13:29:05 +0000 https://preview-courier.web.cern.ch?p=83118 "This book offers a well-balanced introduction to particle and astroparticle physics, requiring only a basic background of classical and quantum physics."

The post Multi-messenger adventures appeared first on CERN Courier.

]]>

Recent years have seen enormous progress in astroparticle physics, with the detection of gravitational waves, very-high-energy neutrinos, combined neutrino–gamma observation and the discovery of a binary neutron-star merger, which was seen across the electromagnetic spectrum by some 70 observatories. These important advances opened a new and fascinating era for multi-messenger astronomy, which is the study of astronomical phenomena based on the coordinated observation and interpretation of disparate “messenger” signals.

This book, first published in 2015, has now been released in a new edition that includes these recent discoveries and describes current lines of research.

The Standard Model (SM) of particle physics and the lambda-cold-dark-matter theory, also referred to as the SM of cosmology, have both proved to be tremendously successful. However, they leave a few important unsolved puzzles. One issue is that we are still missing a description of the main ingredients of the universe from an energy-budget perspective. This volume provides a clear and updated description of the field, preparing and possibly inspiring students towards a solution to these puzzles.

The book introduces particle physics together with astrophysics and cosmology, starting from experiments and observations. Written by experimentalists actively working on astroparticle physics and with extensive experience in sub-nuclear physics, it provides a unified view of these fields, reflecting the very rapid advances that are being made.

The first eight chapters are devoted to the construction of the SM of particle physics, beginning from the Rutherford experiment up to the discovery of the Higgs particle and the study of its decay channels. The next chapter describes the SM of cosmology and the dark universe. Starting from the observational pillars of cosmology (the expansion of the universe, the cosmic microwave background and primordial nucleosynthesis), it moves on to a discussion about the origins and the future of our universe. Astrophysical evidence for dark matter is presented and its possible constituents and their detection are discussed. A separate chapter is devoted to neutrinos, covering natural and man-made sources; it presents the state of the art and the future prospects in a detailed way. Next, the “messengers from the high-energy universe”, such as high-energy charged cosmic rays, gamma rays, neutrinos and gravitational waves, are explored. A final chapter is devoted to astrobiology and the relations between fundamental physics and life.

This book offers a well-balanced introduction to particle and astroparticle physics, requiring only a basic background of classical and quantum physics. It is certainly a valuable resource that can be used for self-study, as a reference or as a textbook. In the preface, the authors suggest how different parts of the book can serve as introductory courses on particle physics and astrophysics, and as advanced classes on high-energy astroparticle physics. Its 700+ pages allow for a detailed and clear presentation of the material, contain many useful references and include proposed exercises.

The post Multi-messenger adventures appeared first on CERN Courier.

DESY’s astroparticle aspirations https://cerncourier.com/a/desys-astroparticle-aspirations/ Wed, 08 May 2019 13:08:24 +0000 https://preview-courier.web.cern.ch?p=83113 Christian Stegmann, director of DESY’s newly established research division for astroparticle physics, describes the ambitious plans ahead in this vibrant field.

What is your definition of astroparticle physics?

There is no general definition, but let me try nevertheless. Astroparticle physics addresses astrophysical questions through particle-physics experimental methods and, vice versa, questions from particle physics are addressed via astronomical methods. This approach has enabled many scientific breakthroughs and opened new windows to the universe in recent years. In Germany, what drives us is the question of the influence of neutrinos and high-energy processes in the development of our universe, and the direct search for dark matter. There are differences to particle physics both in the physics questions and in the approach: we observe high-energy radiation from our cosmos or rare events in underground laboratories. But there are also many similarities between the two fields of research that make a fruitful exchange possible.

What was your path into the astroparticle field?

I grew up in particle physics: I did my PhD on b-physics at the OPAL experiment at CERN’s LEP collider and then worked for a few years on the HERA-B experiment at DESY. I was not only fascinated by particle physics, but also by the international cooperation at CERN and DESY. Particle physics and astroparticle physics overcome borders, and this is a feat that is particularly important again today. Around 20 years ago I switched to ground-based gamma astronomy. I became fascinated in understanding how nature manages to accelerate particles to such enormous energies as we see them in cosmic rays and what role they play in the development of our universe. I experienced very closely how astroparticle physics has developed into an independent field. Seven years ago, I became head of the DESY site in Zeuthen near Berlin. My task is to develop DESY and in particular the Zeuthen site into an international centre for astroparticle physics. The new research division is also a recognition of the work of the people in Zeuthen and an important step for the future.

What are DESY’s strengths in astroparticle research?

Astroparticle physics began in Zeuthen with neutrino astronomy around 20 years ago. It has evolved from humble beginnings, from a small stake in the Lake Baikal experiment to a major role in the km³-sized IceCube array deep in the Antarctic ice. Having entered high-energy gamma-ray astronomy only a few years ago, the Zeuthen location is now a driving force behind the next-generation gamma-ray observatory, the Cherenkov Telescope Array (CTA). The campus in Zeuthen will host the CTA Science Data Management Centre and we are participating in almost all currently operating major gamma-ray experiments to prepare for the CTA science harvest. A growing theoretical group supports all experimental activities. The combination of high-energy neutrinos and gamma rays offers unique opportunities to study processes at energies far beyond those reachable by human-made particle accelerators.

Why did DESY establish a dedicated division?

A dedicated research division underlines the importance of astroparticle physics in general and in DESY’s scientific programme in particular, and offers promising opportunities for the future. Astroparticle physics with cosmic messengers has experienced a tremendous development in recent years. The discovery of a large number of gamma-ray sources, the observation of cosmic neutrinos in 2013, the direct detection of gravitational waves in 2015, the observation of the merger of two neutron stars with more than 40 observatories worldwide triggered by its gravitational waves in August 2017, and the simultaneous observation of neutrinos and high-energy gamma radiation from the direction of a blazar the following month are just a few prominent examples. We are on the threshold of a golden age of multi-messenger astronomy, with gamma rays, neutrinos, gravitational waves and cosmic rays together promising completely new insights into the origins and evolution of our universe.

What are the division’s scale and plans?

The next few years will be exciting for us. We have just completed an architectural competition, new buildings will be built and the entire campus will be redesigned in the coming years. We expect well over 350 people to work on the Zeuthen campus, and hosting the CTA data centre will make us a contact point for astroparticle physicists globally. In addition to the growth through CTA, we are expanding our scientific portfolio to include radio detection of high-energy neutrinos and increased activities in astronomical-transient-event follow-up. We are also establishing close cooperation with other partners. Together with the Weizmann Institute in Israel, the University of Potsdam and the Humboldt University in Berlin, we are currently establishing an international doctoral school for multi-messenger astronomy funded by the Helmholtz Association.

How can we realise the full potential of multi-messenger astronomy?

Our potential lies primarily in committed scientists who use their creativity and ideas to take advantage of existing opportunities. For years we have experienced a large number of young people moving into astroparticle physics. We need new, highly sensitive instruments and there is a whole series of outstanding project proposals waiting to be implemented. CTA is being built, the upgrade of the Pierre Auger Observatory is progressing and the first steps for the further upgrade of IceCube have been taken. The funding for the next generation of gravitational-wave experiments, the Einstein Telescope in Europe, is not yet secured. We are currently discussing a possible participation of DESY in gravitational-wave astronomy. Multi-messenger astronomy promises a breathtaking amount of new discoveries. However, the findings will only be possible if, in addition to the instruments, the data are also made available in a form that allows scientists to jointly analyse the information from the various instruments. DESY will play an important role in all these tasks – from the construction of instruments to the training of young scientists. But we will also be involved in the development of the research-data infrastructure required for multi-messenger astronomy.

How would you describe the astroparticle physics landscape?

The community in Europe is growing, not only in terms of the number of scientists but also in the size and variety of experiments. In many areas, European astroparticle physics is in transition from medium-sized experiments to large research infrastructures. CTA is the outstanding example of this. The large number of new scientists and the ideas for new research infrastructures show the great appeal of astroparticle physics as a young and exciting field. The proposed Einstein Telescope will cross the threshold of projects requiring investments of more than one billion euros, which calls for coordination at the European and international level. With the Astroparticle Physics European Consortium (APPEC) we have taken a step towards improved coordination. DESY is one of the founding members of APPEC and I have been elected vice-chairman of the APPEC general assembly for the next two years. In this area, too, we can learn something from particle physics and are very pleased that CERN is an associate member of APPEC.

What implication does the update of the European strategy for particle physics have for your field?

European astroparticle physics provides a wide range of input to the European strategy for particle physics, from concrete proposals for experiments to contributions from national committees for astroparticle physics. The contribution on the construction of the Einstein Telescope deserves special attention, and my personal wish is that CERN will coordinate the Einstein Telescope, as suggested in that contribution. With the LHC, CERN has again demonstrated in an outstanding way that it can successfully implement major research projects. With the first gravitational-wave events, we saw only the first flashes of a completely unknown part of our universe. The Einstein Telescope would revolutionise this new view of the universe.

The post DESY’s astroparticle aspirations appeared first on CERN Courier.

First images of the centre of a galaxy https://cerncourier.com/a/first-images-of-the-centre-of-a-galaxy/ Tue, 07 May 2019 15:28:34 +0000 https://preview-courier.web.cern.ch?p=82987 On 10 April, researchers working on the Event Horizon Telescope released the first direct image of a black hole.

On 10 April, researchers working on the Event Horizon Telescope – a network of eight radio dishes that creates an Earth-sized interferometer – released the first direct image of a black hole. The landmark result, which shows the radiation emitted by superheated gas orbiting the event horizon of a supermassive black hole in a nearby galaxy, opens a brand new window on these incredible objects.

Supermassive black holes (SMBHs) are thought to occupy the centre of most galaxies, including our own, with masses up to billions of solar masses and sizes up to 10 times larger than our solar system. Discovered in the 1960s via radio and optical measurements, their origin, nature and surrounding environments remain important open issues within astrophysics. Spatially resolved images of an SMBH and the potential accretion disk around it form vital input, but producing such images is extremely challenging.

SMBHs are relatively bright at radio wavelengths. However, since the imaging resolution achievable with a telescope scales with the wavelength (which is long in the radio range) and inversely with the telescope diameter, it is difficult to obtain useful images in the radio region. For example, producing an image with the same resolution as the optical Hubble Space Telescope would require a km-wide telescope, while obtaining a resolution that would allow an SMBH to be imaged would require a telescope diameter of thousands of kilometres. One way around this is to use interferometry to turn many telescope dishes at different locations into one large telescope. Such an interferometer measures the differences in arrival time of one radio wave at different locations on Earth (induced by the difference in travel path), from which it is possible to reconstruct an image on the sky. This requires not only close coordination between many telescopes around the world, but also very precise timing, vast amounts of collected data and enormous computing power.
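These scaling arguments follow from the diffraction limit θ ≈ 1.22 λ/D. A back-of-the-envelope check, taking the EHT’s 1.3 mm observing wavelength and the Earth’s diameter as the effective aperture (the numbers are illustrative, not taken from the collaboration’s analysis):

```python
import math

def diffraction_limit_uas(wavelength_m, diameter_m):
    """Angular resolution theta ~ 1.22*lambda/D of a telescope (or an
    interferometer of effective diameter D), in microarcseconds."""
    theta_rad = 1.22 * wavelength_m / diameter_m
    return theta_rad * math.degrees(1) * 3600 * 1e6  # rad -> microarcseconds

# An Earth-sized interferometer at the EHT's 1.3 mm wavelength:
print(diffraction_limit_uas(1.3e-3, 1.27e7))  # ~26 uas, enough to resolve M87*

# A single 100 m radio dish at the same wavelength does far worse:
print(diffraction_limit_uas(1.3e-3, 100.0))   # ~3.3 million uas (~3.3 arcsec)
```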

Despite the considerable difficulties, the Event Horizon Telescope project used this technique to produce the first image of an SMBH using an observation time of only tens of minutes. The imaged SMBH lies at the centre of the supergiant elliptical galaxy Messier 87, which is located in the Virgo constellation at a distance of around 50 million light years. Although relatively close in astronomical terms, its very large mass makes its size on the sky comparable to that of the much lighter SMBH in the centre of our galaxy. Furthermore, its accretion rate (brightness) is variable on longer time scales, making it easier to image. The resulting image (above) shows the clear shadow of the black hole in the centre surrounded by an asymmetric ring caused by radio waves that are bent around the SMBH by its strong gravitational field. The asymmetry is likely a result of relativistic beaming of part of the disk of matter which moves towards Earth.

The team compared the image to a range of detailed simulations in which parameters such as the black hole’s mass, spin and orientation were varied, along with the characteristics of the matter around the SMBH (mainly hot electrons and ions) and the properties of the magnetic field. While the image alone does not allow researchers to constrain many of these parameters, combining it with X-ray data taken by the Chandra and NuSTAR telescopes enables a deeper understanding. For example, the combined data constrain the SMBH mass to 6.5 billion solar masses and appear to exclude a non-spinning black hole. Whether the matter orbiting the SMBH rotates in the same direction as the black hole or opposite to it, as well as details of the environment around it, will require additional studies. Such studies can also potentially exclude alternative interpretations of this object; currently, exotic objects like boson stars, gravastars and wormholes cannot be fully excluded.

The work of the Event Horizon Telescope collaboration, which involves more than 200 researchers worldwide, was published in six consecutive papers in The Astrophysical Journal Letters. While more images at shorter wavelengths are foreseen in the future, the collaboration also points out that much can be learned by combining the data with that from other wavelengths, such as gamma-rays. Despite this first image being groundbreaking, it is likely only the start of a revolution in our understanding of black holes and, with it, the universe.

The post First images of the centre of a galaxy appeared first on CERN Courier.

Mysterious burst confounds astrophysicists https://cerncourier.com/a/mysterious-burst-confounds-astrophysicists/ Fri, 08 Mar 2019 14:43:56 +0000 https://preview-courier.web.cern.ch?p=13604 Hypotheses are that the Cow was either a compact object being destroyed when coming too close to a black hole, or a special type of supernova in which a black hole or magnetar is produced.

An optical image of the Cow and its host galaxy

On 16 June 2018, a bright burst of light was observed by the Asteroid Terrestrial-impact Last Alert System (ATLAS) telescope in Hawaii, which automatically searches for optical transient events. The event, which received the automated catalogue name “AT2018cow”, immediately received a lot of attention and acquired a shorter name: “the Cow”. While transient objects are observed on the sky every day – caused, for example, by nearby asteroids or supernovae – two factors make the Cow intriguing. First, the very short time it took for the event to reach its extreme brightness and fade away again indicates that this event is nothing like anything observed before. Second, it took place relatively close to Earth, 200 million light years away in a star-forming arm of a galaxy in the Hercules constellation, making it possible to study the event in a wide range of wavelengths.

Soon after the ATLAS detection, the object was observed by more than 20 different telescopes around the world, revealing it to be 10–100 times brighter than a typical supernova. In addition to optical measurements, the object was observed for several days by space-based X- and gamma-ray telescopes such as NuSTAR, XMM-Newton, INTEGRAL and Swift, which also observed it in the UV energy range, as well as by radio telescopes on Earth. The IceCube observatory in Antarctica also identified two possible neutrinos coming from the Cow, although the detection is still compatible with a background fluctuation. The combination of all the data – demonstrating the power of multi-messenger astronomy – confirmed that this was not an ordinary supernova, but potentially something completely different.

Bright spark

While standard supernovae take several days to reach maximum brightness, the Cow did so in just 1.5 days, after which its brightness also began to fade much faster than that of a typical supernova. Another notable feature was the lack of heavy-element decays. Normally, elements such as 56Ni produced during the explosion are the main source of a supernova’s brightness, but the Cow only revealed signs of lighter elements such as hydrogen and helium. Furthering the event’s mystique is the variability of the X-ray emission several days after its discovery, which is a clear sign of an energy source at its centre. Half a year after its discovery, two opposing theories aim to explain these features.

The first theory states that an unlucky compact object was destroyed when it came too close to a black hole – a phenomenon called a tidal disruption event. The fast increase in brightness excludes normal stars. On the other hand, a smaller object such as a neutron star (a very dense star consisting of neutron matter) cannot explain the hydrogen and helium observed in the remnant, since its matter consists almost entirely of neutrons rather than such light elements. The remaining possibility is a white dwarf: a dense star remaining after a normal star has ceased fusion, kept from gravitational collapse into a neutron star or black hole by the electron-degeneracy pressure in its core. The observed emission from the Cow could be explained if a white dwarf was torn apart by tidal forces in the vicinity of a massive black hole. One problem with this theory, however, is the event’s location, since black holes of the mass required for such an event are not normally found in the spiral arms of galaxies.

The opposing theory is that the Cow was a special type of supernova in which either a black hole or a quickly rotating, highly magnetic neutron star (a magnetar) is produced. While the bright emission in the optical and UV bands is produced by the supernova-like event, the variable X-ray emission comes from radiating gas falling into the compact object. Normally the debris of a supernova blocks most of the light from reaching us, but the progenitor of the Cow was likely a relatively low-mass star that produced little debris. A hint of its low mass was also found in the X-ray data. If so, the observations would constitute the first observation of the birth of a compact object, making these data very valuable for further theoretical development. Such magnetar sources could also be responsible for ultra-high-energy cosmic rays as well as high-energy neutrinos, such as the two candidate events IceCube may already have observed. The debate on the nature of the Cow continues, but the wealth of information gathered so far indicates the growing importance of multi-messenger astronomy.

The post Mysterious burst confounds astrophysicists appeared first on CERN Courier.

Colliders join the hunt for dark energy https://cerncourier.com/a/colliders-join-the-hunt-for-dark-energy/ Thu, 24 Jan 2019 09:00:56 +0000 https://preview-courier.web.cern.ch/?p=13083 The ATLAS collaboration carried out a first collider search for light scalar particles that could contribute to the accelerating expansion of the universe.

Dark analysis

It is 20 years since the discovery that the expansion of the universe is accelerating, yet physicists still know precious little about the underlying cause. In a classical universe with no quantum effects, the cosmic acceleration can be explained by a constant that appears in Einstein’s equations of general relativity, albeit one with a vanishingly small value. But clearly our universe obeys quantum mechanics, and the ability of particles to fluctuate in and out of existence at all points in space leads to a prediction for Einstein’s cosmological constant that is 120 orders of magnitude larger than observed. “It implies that at least one, and likely both, of general relativity and quantum mechanics must be fundamentally modified,” says Clare Burrage, a theorist at the University of Nottingham in the UK.

With no clear alternative theory available, all attempts to explain the cosmic acceleration introduce a new entity called dark energy (DE) that makes up 70% of the total mass-energy content of the universe. It is not clear whether DE is due to a new scalar particle or a modification of gravity, or whether it is constant or dynamic. It’s not even clear whether it interacts with other fundamental particles or not, says Burrage. Since DE affects the expansion of space–time, however, its effects are imprinted on astronomical observables such as the cosmic microwave background and the growth rate of galaxies, and the main approach to detecting DE involves looking for possible deviations from general relativity on cosmological scales.

Unique environment

Collider experiments offer a unique environment in which to search for the direct production of DE particles, since they are sensitive to a multitude of signatures and therefore to a wider array of possible DE interactions with matter. Like other signals of new physics, DE (if accessible at small scales) could manifest itself in high-energy particle collisions either through direct production or via modifications of electroweak observables induced by virtual DE particles.

Last year, the ATLAS collaboration at the LHC carried out a first collider search for light scalar particles that could contribute to the accelerating expansion of the universe. The results demonstrate the ability of collider experiments to access new regions of parameter space and provide complementary information to cosmological probes.

Unlike dark matter, for which there exists many new-physics models to guide searches at collider experiments, few such frameworks exist that describe the interaction between DE and Standard Model (SM) particles. However, theorists have made progress by allowing the properties of the prospective DE particle and the strength of the force that it transmits to vary with the environment. This effective-field-theory approach integrates out the unknown microscopic dynamics of the DE interactions.

The new ATLAS search was motivated by a 2016 model by Philippe Brax of the Université Paris-Saclay, Burrage, Christoph Englert of the University of Glasgow, and Michael Spannowsky of Durham University. The model provides the most general framework for describing DE theories with a scalar field and contains as subsets many well-known specific DE models – such as quintessence, galileon, chameleon and symmetron. It extends the SM lagrangian with a set of higher dimensional operators encoding the different couplings between DE and SM particles. These operators are suppressed by a characteristic energy scale, and the goal of experiments is to pinpoint this energy for the different DE–SM couplings. Two representative operators predict that DE couples preferentially to either very massive particles like the top quark (“conformal” coupling) or to final states with high-momentum transfers, such as those involving high-energy jets (“disformal” coupling).

Signatures

“In a big class of these operators the DE particle cannot decay inside the detector, therefore leaving a missing energy signature,” explains Spyridon Argyropoulos of the University of Iowa, who is a member of the ATLAS team that carried out the analysis. “Two possible signatures for the detection of DE are therefore the production of a top–antitop quark pair or the production of high-energy jets, associated with large missing energy. Such signatures are similar to the ones expected by the production of supersymmetric top quarks (“stops”), where the missing energy would be due to the neutralinos from the stop decays or from the production of SM particles in association with dark-matter particles, which also leave a missing energy signature in the detector.”

The ATLAS analysis, which was based on 13 TeV LHC data corresponding to an integrated luminosity of 36.1 fb–1, re-interprets the result of recent ATLAS searches for stop quarks and dark matter produced in association with jets. No significant excess over the predicted background was observed, setting the most stringent constraints on the suppression scale of conformal and disformal couplings of DE to normal matter in the context of an effective field theory of DE. The results show that the characteristic energy scale must be higher than approximately 300 GeV for the conformal coupling and above 1.2 TeV for the disformal coupling.
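To see how such a bound is extracted, note that for an operator suppressed by n powers of the scale M the signal cross-section falls as a power of M, so a cross-section upper limit translates directly into a lower bound on M. The sketch below assumes a generic σ ∝ M^(−2n) scaling with invented reference numbers; it is not the parametrisation or the values used in the ATLAS analysis.

```python
def scale_lower_bound(sigma_ref, m_ref, sigma_limit, n_powers):
    """Lower bound on an EFT suppression scale M, assuming the signal
    cross-section scales as sigma ~ (1/M)**(2*n_powers): a model predicting
    sigma_ref at M = m_ref is excluded for all M below the returned value."""
    return m_ref * (sigma_ref / sigma_limit) ** (1.0 / (2 * n_powers))

# Hypothetical inputs: 10 pb predicted at M = 500 GeV, cross-sections above
# 0.1 pb excluded by the search, operator suppressed by four powers of M:
print(scale_lower_bound(sigma_ref=10.0, m_ref=500.0,
                        sigma_limit=0.1, n_powers=4))  # ~890 GeV
```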

The search for DE at colliders is only at the beginning, says Argyropoulos. “The limits on the disformal coupling are several orders of magnitude higher than the limits obtained from other laboratory experiments and cosmological probes, proving that colliders can provide crucial information for understanding the nature of DE. More experimental signatures and more types of coupling between DE and normal matter have to be explored, and better-optimised search strategies could be developed.”

With this pioneering interpretation of a collider search in terms of dark-energy models, ATLAS has become the first experiment to probe all forms of matter in the observable universe, opening a new avenue of research at the interface of particle physics and cosmology. A complementary laboratory measurement is also being pursued by CERN’s CAST experiment, which studies a particular incarnation of DE (chameleon) produced via interactions of DE with photons.

But DE is not going to give up its secrets easily, cautions theoretical cosmologist Dragan Huterer at the University of Michigan in the US. “Dark energy is normally considered a very large-scale phenomenon, but you may justifiably ask how the study of small systems in a collider can say anything about DE. Perhaps it can, but in a fairly model-dependent way. If ATLAS finds a signal that departs from the SM prediction it would be very exciting. But linking it firmly to DE would require follow-up work and measurements – all of which would be very exciting to see happen.”

The post Colliders join the hunt for dark energy appeared first on CERN Courier.

Solving the next mystery in astrophysics https://cerncourier.com/a/solving-the-next-mystery-in-astrophysics/ Thu, 24 Jan 2019 09:00:44 +0000 https://preview-courier.web.cern.ch/?p=13086 Until recently the number of theories on the origin of fast radio bursts outnumbered the number of observations.

Stellar stats

In 2007, while studying archival data from the Parkes radio telescope in Australia, Duncan Lorimer and his student David Narkevic of West Virginia University in the US found a short, bright burst of radio waves. It turned out to be the first observation of a fast radio burst (FRB), and further studies revealed additional events in the Parkes data dating from 2001. The origin of several of these bursts, which were slightly different in nature, was later traced back to the microwave oven in the Parkes Observatory visitors centre. After discarding these events, however, a handful of real FRBs in the 2001 data remained, while more FRBs were being found in data from other radio telescopes.

The cause of FRBs has puzzled astronomers for more than a decade. But dedicated searches under way at the Canadian Hydrogen Intensity Mapping Experiment (CHIME) and the Australian Square Kilometre Array Pathfinder (ASKAP), among other facilities, are intensifying the hunt for their origin. Recently, while still in its pre-commissioning phase, CHIME detected no fewer than 13 new FRBs – one of them classed as a “repeater” on account of its recurring bursts – setting the field up for an exciting period of discovery.

Dispersion

All FRBs have one thing in common: they last for a period of several milliseconds and have a relatively broad spectrum where the radio waves with the highest frequencies arrive first followed by those with lower frequencies. This dispersion feature is characteristic of radio waves travelling through a plasma in which free electrons delay lower frequencies more than the higher ones. Measuring the amount of dispersion thus gives an indication of the number of free electrons the pulse has traversed and therefore the distance it has travelled. In the case of FRBs, the measured delay cannot be explained by signals travelling within the Milky Way alone, strongly indicating an extragalactic origin.
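This frequency-dependent delay follows the standard cold-plasma dispersion law, in which the delay grows linearly with the dispersion measure DM (the electron column density along the line of sight, in pc per cubic cm) and with the difference of the inverse-squared band-edge frequencies. A minimal sketch of the textbook formula, with the DM value chosen purely for illustration:

```python
def dispersion_delay_ms(dm_pc_cm3, nu_lo_ghz, nu_hi_ghz):
    """Arrival-time delay of the low-frequency edge of a burst relative to
    the high-frequency edge, for a given dispersion measure DM (integrated
    electron density along the path, in pc/cm^3)."""
    k_dm = 4.149  # ms GHz^2 cm^3 / pc, the standard dispersion constant
    return k_dm * dm_pc_cm3 * (nu_lo_ghz ** -2 - nu_hi_ghz ** -2)

# A burst with DM = 500 pc/cm^3 (well above typical Milky Way sightlines,
# hence pointing to an extragalactic origin) across CHIME's 400-800 MHz band:
print(dispersion_delay_ms(500, 0.4, 0.8))  # ~9700 ms: almost a 10 s sweep
```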

The size of the emission region responsible for FRBs can be deduced from their duration: a source cannot vary coherently on timescales shorter than the time light takes to cross it, so a millisecond burst must come from a region at most a few hundred kilometres across. The most likely sources are therefore compact km-sized objects such as neutron stars or black holes. Apart from their extragalactic origin and their size, not much more is known about the 70 or so FRBs that have been detected so far. Theories about their origin range from the mundane, such as pulsar or black-hole emission, to the spectacular – such as neutron stars travelling through asteroid belts or FRBs being messages from extraterrestrials.

For one particular FRB, however, its location was precisely measured and found to coincide with a faint unknown radio source within a dwarf galaxy. This shows clearly that the FRB was extragalactic. The reason this FRB could be localised is that it was one of several to come from the same source, allowing more detailed studies and long-term observations. For a while, it was the only FRB found to do so, earning it the title “The Repeater”. But the recent detection by CHIME has now doubled the number of such sources. The detection of repeater FRBs could be seen as evidence that FRBs are not the result of a cataclysmic event, since the source must survive in order to repeat. However, another interpretation is that there are actually two classes of FRBs: those that repeat and those that come from cataclysmic events.

Until recently, theories on the origin of FRBs outnumbered the detected FRBs themselves, showing how difficult it is to constrain theoretical models with the available data. The experience of a similar field – that of gamma-ray burst (GRB) research, which aims to explain bright flashes of gamma rays discovered during the 1960s – suggests that an increase in the number of detections, together with searches for counterparts in other wavelengths or in gravitational waves, will enable quick progress. As the number of detected GRBs grew into the thousands, the number of theories (which initially also included those invoking extraterrestrial origins) decreased rapidly to a handful. The start of data taking by ASKAP and the increasing sensitivity of CHIME mean we can look forward to an exponential growth in the number of detected FRBs, and an exponential decrease in the number of theories on their origin.

The post Solving the next mystery in astrophysics appeared first on CERN Courier.

Gaia finds evidence of old Milky Way merger https://cerncourier.com/a/gaia-finds-evidence-of-old-milky-way-merger/ Fri, 30 Nov 2018 09:00:30 +0000 https://preview-courier.web.cern.ch/?p=12952 Many of the stars appearing in the night sky did not originate from within our galaxy.

Many of the stars appearing in the night sky did not originate from within our galaxy, concludes a new study of data from the European Space Agency’s Gaia observatory. Instead, Gaia has found evidence that these stars formed in a smaller galaxy that merged with ours about 10 billion years ago.

Gaia was launched in 2013 with the aim of measuring the positions and distances of more than one billion astronomical objects (mainly stars) in and around our galaxy with unprecedented precision. Using Gaia data containing about seven million stars, Amina Helmi of the University of Groningen in the Netherlands and colleagues have found that a subset of these stars is different from the bulk of the stars in the Milky Way. Earlier research had shown that some stars in the galaxy’s inner stellar halo, which surrounds the central bulge and disk, have different chemical abundances from the bulge and disk stars (see figure). But the latest study found that these halo stars also exhibit orbits around the galactic centre that differ significantly from the rest of the stars.

The orbits of the stars in a galaxy typically follow that of the gas cloud in which they were born, which means that a proto-galaxy consisting of an orbiting gas cloud will produce stars orbiting along with the cloud. However, Helmi and co-workers show that many of the Milky Way’s halo stars orbit backwards relative to the rest of the galaxy, suggesting that their origin is probably different. The team then compared the Gaia observations with simulations in which the Milky Way merged in the past with a smaller galaxy with 25% of its mass, finding a remarkable similarity between the observed and simulated orbits.
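Identifying such counter-rotating stars amounts to computing each star’s angular momentum about the galactic spin axis, Lz = x·vy − y·vx in galactocentric coordinates. A schematic sketch with invented toy values in place of real Gaia measurements (sign conventions vary between analyses):

```python
import numpy as np

def z_angular_momentum(x, y, vx, vy):
    """L_z = x*v_y - y*v_x per star, in a galactocentric frame chosen so
    that disk stars have positive L_z; L_z < 0 then flags stars orbiting
    backwards relative to the disk."""
    return x * vy - y * vx

# Toy sample: positions in kpc, velocities in km/s
x, y = np.array([8.0, 8.2, 7.9]), np.array([0.1, -0.3, 0.2])
vx, vy = np.array([10.0, -5.0, 20.0]), np.array([230.0, -80.0, -150.0])
lz = z_angular_momentum(x, y, vx, vy)
print(lz)                   # in kpc km/s
print(np.where(lz < 0)[0])  # indices of retrograde, halo-like stars
```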

Additional analysis of spectral data from APOGEE-2 (Apache Point Observatory Galactic Evolution Experiment), which is part of the Sloan Digital Sky Survey, revealed that the halo stars contain fewer of the chemical elements that are produced in specific types of supernovae, indicating that they are significantly older than the bulk of the Milky Way’s stars.

Taken together, the results suggest that, after the smaller galaxy (named Gaia–Enceladus by the authors) merged with the Milky Way, it lost all the gas it needed to produce new stars. As a result, only the old stars survived and no new stars were born. The age of the youngest stars from Gaia–Enceladus – about 10 billion years – can therefore tell astronomers when the merger took place. A final piece of evidence that this dramatic event occurred comes from Gaia data of 13 star clusters orbiting the Milky Way at large distances. The orbits of these clusters, which contain millions of gravitationally bound stars, match those that would be expected for the remnants of Gaia–Enceladus.

The results, published in Nature, constitute one of the first major discoveries to emerge from Gaia data. They shed light on the origin of our galaxy and galaxy mergers in general, but much more will no doubt be learned from the vast amount of data that the satellite has gathered.

The post Gaia finds evidence of old Milky Way merger appeared first on CERN Courier.

Particle physics meets astrophysics and gravity https://cerncourier.com/a/particle-physics-meets-astrophysics-and-gravity/ Mon, 29 Oct 2018 14:54:39 +0000 https://preview-courier.web.cern.ch?p=13354 The 7th International Conference on New Frontiers in Physics (ICNFP 2018) took place on 4–12 July in Kolymbari, Crete.

ICNFP 2018 participants

The 7th International Conference on New Frontiers in Physics (ICNFP 2018) took place on 4–12 July in Kolymbari, Crete, Greece, bringing together about 250 participants.

The opening talk was given by Slava Mukhanov and was dedicated to Stephen Hawking. Among the five special sessions, the memorial session for Lev Lipatov, a leading figure worldwide in the high-energy behaviour of quantum field theory (see CERN Courier January/February 2018 p50), the session on quantum chromodynamics and the round table on the future of fundamental physics, chaired by Albert de Roeck, attracted particularly large audiences.

Alongside the main conference sessions, there were 10 workshops. Among these, the one on heavy neutral leptons highlighted novel mechanisms for producing sterile-neutrino dark matter and prospects for future searches of such dark matter with the next generation of space-based X-ray telescopes, including Spektr-RG, Hitomi and Athena+.

The workshop on instrumentation and methods in high-energy physics focused on the latest developments and the performance of complex detector systems, including triggering, data acquisition and signal-control systems, with an emphasis on large-scale facilities in nuclear physics, particle physics and astrophysics. This programme attracted many participants and led to the exchange of scientific information between different physics communities.

The workshop on new physics paradigms after the Higgs-boson and gravitational-wave discoveries provided an opportunity both to review results from searches for gravitational waves and to show plans for future precision measurements of Standard Model parameters at the LHC.

The workshop also featured several theory talks covering a wide range of subjects, including the implementation of supersymmetry breaking in string theory, new developments in early-universe cosmology and beyond-Standard Model physics. ICNFP 2018 also saw the first workshop on frontiers in gravitation, astrophysics and cosmology, which strengthened the Asian presence at ICNFP, gathering many participants from the Asia Pacific region.

For the second time in the ICNFP series, a workshop on quantum information and quantum foundations took place, with the aim of promoting discussions and collaborations between theorists and experimentalists working on these topics.

Yakir Aharonov gave a keynote lecture on novel conceptual and practical applications of so-called weak values and weak measurements, showing that they lead to many interesting hitherto-unnoticed phenomena. The latter include, for instance, a “separation” of a particle from its physical variables (such as its spin), emergent correlations between remote parties defying fundamental classical concepts, and a completely top-down hierarchical structure in quantum mechanics, which stands in contrast to the concept of reductionism. As exemplified in the talk of Avshalom Elitzur, the latter could be explained using self-cancelling pairs of positive and negative weak values.

Sandu Popescu, Pawel Horodecki, Marek Czachor and Eliahu Cohen presented many new phenomena involving quantum nonlocality in space and time, which open new avenues for extensive research. Ebrahim Karimi discussed various applications of structured quantum waves carrying orbital angular momentum (either photons or massive particles) and also discussed how to manipulate the topology of optical polarisation knots. Onur Hosten emphasised the importance of cold atoms for quantum metrology.

The workshop also featured many excellent talks discussing the intriguing relations between quantum information and condensed-matter physics or quantum optics. Some connections with quantum gravity, based on entanglement, complexity and quantum thermodynamics, were also discussed. Another topic presented was the comparison between the role of spin and polarisation in high-energy physics and quantum optics. In both of these fields, one should consider the total angular momentum, not the spin alone, and helicity is a very helpful concept in both, too.

Future accelerator facilities such as the low-energy heavy-ion accelerator centres FAIR in Darmstadt, Germany, and NICA at the Joint Institute for Nuclear Research in Dubna, Russia, were also discussed, particularly in the workshop on physics at FAIR-NICA-SPS-BES/RHIC accelerator facilities. Here new ideas as well as overview talks on current and future experiments on the formation and exploration of baryon-rich matter in heavy-ion collisions were presented.

The MoEDAL collaboration at CERN, which searches for highly ionising messengers of new physics such as magnetic monopoles, organised a mini-workshop on highly ionising avatars of new physics. The workshop provided a forum for experimentalists and phenomenologists to meet, discuss and expand this discovery frontier. The latest results from the ATLAS, CMS, MoEDAL and IceCube experiments were presented, and some important developments in theory and phenomenology were introduced for the first time. Highlights of the workshop included monopole production via photon fusion at colliders, searches for heavy neutral leptons and other long-lived particles at the LHC, regularised Kalb–Ramond monopoles with finite energy, and monopole detection techniques using solid-state and Timepix detectors.

Finally, on the education and outreach front, Despina Hatzifotiadou gave LHC “masterclasses” in collaboration with EKFE (the laboratory centre for physical sciences) to 30 high-school students and teachers, who had the opportunity to analyse data from the ALICE experiment and “observe” strangeness enhancement in relativistic heavy-ion collisions.

The next ICNFP conference will take place on 21–30 August 2019 in Kolymbari, Crete, Greece.

The post Particle physics meets astrophysics and gravity appeared first on CERN Courier.

Gravitational hunt for extra dimensions https://cerncourier.com/a/gravitational-hunt-for-extra-dimensions/ Mon, 29 Oct 2018 09:00:09 +0000 https://preview-courier.web.cern.ch/?p=12855 If extra dimensions are large, part of the gravitational field would “leak” into the extra dimensions and gravitational waves would be weaker than expected.

Figure 1

General relativity predicts very accurately how objects fall from a table and how planets move within the solar system. At larger scales, however, some issues arise. The most glaring are the theory’s predictions for the motion of stars within a galaxy and for the acceleration of galaxies away from each other, both of which are at odds with observations. Models containing dark matter and dark energy can solve these two problems, respectively. Another potential solution is that space–time contains additional dimensions, modifying general relativity. Such additional dimensions are not observable with electromagnetic waves, but new information gleaned from gravitational waves (GWs) is allowing such models to be tested for the first time.

Some modifications of general relativity, such as the Dvali–Gabadadze–Porrati (DGP) model, involve the addition of extra dimensions accessible to gravity. If such extra dimensions are large, and thus not rolled up to a microscopic size as predicted by some beyond-Standard Model theories, part of the gravitational field would “leak” into the extra dimensions. Therefore, GWs arriving at detectors such as those of the LIGO and VIRGO observatories would be weaker than expected.

The first GWs detected, in September 2015, came from distant black-hole binaries. For such objects, there is no electromagnetic-wave counterpart, so the only information astronomers have about their distance from Earth is from the GWs themselves, making it impossible to check if some of the wave’s intensity was lost. However, GW170817, the first observed merger of binary neutron stars, produced both GWs and electromagnetic radiation, which was measured by a wide range of instruments (CERN Courier December 2017 p16). As a result, we know in which galaxy the merger took place and therefore have a good measurement of the distance the GWs travelled. Using this distance measurement and the measured strength of the GW signal, one can test whether the signal follows general relativity or a model with additional dimensions.
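Schematically, the test compares the luminosity distance inferred from the GW amplitude (assuming the standard 1/d damping of general relativity) with the electromagnetic distance: if gravity leaks into extra dimensions beyond some crossover scale, the strain instead falls off as the distance raised to the power −(D−2)/2, so a weakened signal mimics a larger GW distance. Below is a toy version of this estimator (the published analysis uses a more careful parametrisation and the real posteriors):

```python
import math

def inferred_dimensions(d_gw_mpc, d_em_mpc, r_c_mpc=1.0):
    """Solve d_gw/R_c = (d_em/R_c)**((D-2)/2) for the number of space-time
    dimensions D, where d_gw is the distance inferred from the GW amplitude
    under standard 1/d damping, d_em is the electromagnetic distance and R_c
    is a crossover scale beyond which gravity is assumed to leak. Leakage
    weakens the signal, so d_gw > d_em implies D > 4."""
    return 2 + 2 * math.log(d_gw_mpc / r_c_mpc) / math.log(d_em_mpc / r_c_mpc)

# Toy numbers (not the published fit): a GW-inferred distance of 41 Mpc
# against an electromagnetically determined 40 Mpc:
print(inferred_dimensions(41.0, 40.0))  # ~4.01, consistent with D = 4
```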

Doing exactly this, a group led by Kris Pardo from Princeton University has found that the results are most compatible with the standard 3+1 space–time-dimensions picture. Assuming two values for the Hubble constant, as required due to a large discrepancy between values obtained by two different methods (CERN Courier May 2018 p17), the researchers show that, regardless of the value assumed, the results allow for a total of 4.0 ± 0.1 dimensions (see figure).

The authors also obtained a lower limit on the graviton’s lifetime of 450 million years. As with a potential leakage of gravity into extra dimensions, the decay of gravitons propagating towards Earth would cause the strength of the GW signal to decrease; since no such weakening is seen, any graviton must live at least this long.

These findings are just the beginning of the physics studies made possible by gravitational-wave astronomy. As the authors make clear in their paper, the results only affect theories with finite but large-scale extra dimensions. That may change, however, as more GWs are expected to be measured, with increased precision, in the future. One promising parameter capable of probing a larger set of models is the polarisation of the GWs. For the GW170817 system, polarisation information was not available at the time of observation owing to the limited number of GW detectors. Any higher-dimensional model allows for extra GW polarisation modes, which can be studied with the help of additional GW detectors such as the planned KAGRA and IndIGO facilities.

With a future global array of GW detectors, we can look forward to more studies in this field of physics which, until now, has been almost inaccessible.

The post Gravitational hunt for extra dimensions appeared first on CERN Courier.

Particle interactions up to the highest energies https://cerncourier.com/a/particle-interactions-up-to-the-highest-energies/ Fri, 28 Sep 2018 15:03:31 +0000 https://preview-courier.web.cern.ch?p=13356 The 20th International Symposium on Very High Energy Cosmic Ray Interactions (ISVHECRI 2018) was held in Nagoya, Japan.

The 20th International Symposium on Very High Energy Cosmic Ray Interactions (ISVHECRI 2018) was held in Nagoya, Japan, on 21–25 May. More than 120 attendees from 19 countries discussed various aspects of hadronic interactions at the intersection between high-energy cosmic-ray physics and classical accelerator-based particle physics. The 65 contributions reflected the large diversity and interdisciplinary character of this biennial series, which is held under the auspices of the International Union of Pure and Applied Physics.

In his opening address, Sunil Gupta paid a tribute to Oscar Saavedra, one of the leading scientists and founders of the ISVHECRI series, who passed away in 2018. Following the long tradition of this symposium series, the main topic was the discussion of particle physics of relevance to extensive air showers, secondary cosmic-ray production, and hadronic multi-particle production at accelerators. This time, the symposium expanded its coverage of multi-messenger astrophysics, especially to neutrino and gamma-ray astrophysics. Many talks were invited from the Pierre Auger Observatory and Telescope Array, as well as from IceCube, Super-Kamiokande, CTA and HAWC, and space-borne experiments such as AMS-02, Fermi and CALET.

Participants discussed how many open questions in high-energy astroparticle physics are related to our understanding of cosmic-ray interactions from the multi-messenger point of view; for example, the relevance of production and propagation of positrons or antimatter for indirect dark-matter searches, or of atmospheric-neutrino production for neutrino oscillations or neutrino astronomy.

Showcasing several models of high-energy cosmic-ray interactions, and their verification by accelerator measurements, was also a highlight of the symposium. The event offered a unique opportunity for developers of major cosmic-ray interaction models to gather and engage in valuable discussions. Other highlights were the talks about accelerator data relevant to cosmic-ray observations, reported by the teams behind CERN’s large LHC experiments as well as smaller fixed-target experiments such as NA61. Emphasis was put on forward measurements by ATLAS, CMS, LHCb and LHCf, including first results from the SMOG gas-jet target measurements of LHCb (see “Fixed-target physics in collider mode at LHCb“).

A public lecture, “Exploring the Invisible Universe”, by Nobel laureate Takaaki Kajita attracted more than 250 participants, and was complemented by a tour of the nuclear-emulsion lab of Nagoya University to see state-of-the-art emulsion technology. The progress in this technology was clearly visible when Edison Shibuya and others recalled the early days of studying cosmic rays with emulsion chambers and Saavedra’s related pioneering contributions.

There were many discussions on future studies of relevance to cosmic-ray interactions and astroparticle physics. Hans Dembinski discussed prospects in the near and far future in collider experiments, including possible proton–oxygen runs at the LHC and a study of multi-particle production at a future circular collider. The cosmic-ray community is very enthusiastic about a future proton–oxygen run since, even with a short run of 100 million events, charged-particle and pion spectra could be measured to an accuracy of 10% – a five-fold improvement over current model uncertainties that would bring us a crucial step closer to unveiling the cosmic accelerators of the highest-energy particles in the universe.

The next ISVHECRI will be held in June 2020 at Ooty, the location of the GRAPES-3 air-shower experiment in India.

The post Particle interactions up to the highest energies appeared first on CERN Courier.

Solving the mystery of a historic stellar blast https://cerncourier.com/a/solving-the-mystery-of-a-historic-stellar-blast/ Fri, 28 Sep 2018 10:00:41 +0000 https://preview-courier.web.cern.ch/?p=12736 Some 180 years ago, a relatively normal star called Eta Carinae suddenly brightened to become the second brightest star in the sky, before almost disappearing at the end of the 19th century.

Eta Carinae

Some 180 years ago, a relatively normal star called Eta Carinae suddenly brightened to become the second brightest star in the sky, before almost disappearing at the end of the 19th century. The sudden brightening and subsequent disappearance, recorded by astronomer John Herschel, suggested that the star had undergone a supernova explosion, leaving behind a black hole. More recent observations have shown, however, that the star still exists – ruling out the supernova hypothesis. Even more remarkably, what remains is a binary system of two stars, the more massive of which is surrounded by a large nebula.

Although supernova impostors such as Eta Carinae are now known to occur in other galaxies, this event – known as the Great Eruption – appeared relatively close to Earth, at a distance of around 7500 light years. It is therefore a perfect laboratory in which to study what exactly happens when stars appear to survive a supernova.

The fate of Eta Carinae has remained mysterious, but since the turn of the millennium clues have emerged in echoes of the light emitted during the Great Eruption. While the light observed in the 19th century travelled directly from the system towards Earth, other light initially travelled towards distant clouds surrounding the stars before being reflected in our direction. In 2003, the light echoes from this event were bright enough to be observed using the moderate-sized telescopes at the Cerro Tololo Inter-American Observatory in Chile, while the different gas clouds reflecting the light were observed more recently using the larger scale Magellan Observatory and the Gemini South Observatory, also located in Chile. By comparing historical records of the variability observed in the 19th century with the variability of the light reflected from a gas cloud, it can be determined how far in the past astronomers are observing the explosion.
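The lag of an echo behind the directly observed light is simply the extra path length divided by the speed of light. A minimal geometry sketch, with all positions in light-years so that the extra path length directly equals the delay in years (the layout is invented for illustration, not the geometry of the actual clouds):

```python
import math

def echo_delay_years(source, cloud, observer):
    """Delay of reflected light relative to the direct signal:
    (|source-cloud| + |cloud-observer| - |source-observer|) / c.
    With coordinates in light-years, the extra path in light-years
    equals the delay in years."""
    direct = math.dist(source, observer)
    echoed = math.dist(source, cloud) + math.dist(cloud, observer)
    return echoed - direct

# Illustrative layout: observer at the origin, Eta Carinae 7500 light-years
# away, a reflecting cloud ~100 light-years beside and behind the star:
source, cloud, observer = (7500.0, 0.0), (7600.0, 100.0), (0.0, 0.0)
print(echo_delay_years(source, cloud, observer))  # ~240 yr for this geometry
```

Delays of this order are why light from the 1840s eruption, reflected off sufficiently distant clouds, is only now reaching telescopes.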

Now, a team led by Nathan Smith of the University of Arizona in Tucson has studied the spectra of the light echo in more detail using the 6.5 m Magellan telescopes and found that it matches observations during the 1840s and 1850s, when the Great Eruption was at its peak. Spectral analysis of the reflected light indicates that initially matter was ejected at relatively low velocities of 150–200 km s–1, while during the 1850s some matter was travelling at speeds of 10,000–20,000 km s–1. The data are compatible with a system that first ejects material as one star brightens, followed by more violent ejection from an explosion.

Smith and collaborators claim that the scenario that best matches the data, including information about the age and mass of the two remaining stars, is that the system originally consisted of three stars. The two closest stars initially interacted, with one of them (the donor) transferring mass until its companion had grown into one massive star; as it lost mass, the donor’s orbit around the massive star widened. The gravitational field of the far-away donor star would then have caused the orbiting third star to change orbit dramatically, forcing it to spiral in towards the massive central star. In doing so, its gravitational interactions with the massive star caused it to shed large amounts of matter as it started to burn brighter. Finally, the binary system merged, causing a violent explosion in which large amounts of stellar material were ejected at high velocity towards the earlier ejected material. As the fast ejecta smashed into the slower-moving ejecta, a bright object formed in the night sky that remained visible for many years during the 1850s. The remaining binary system still lights up every few years as the old donor star moves through the nebula left over from the merger.

The new details about the evolution of this complex and relatively nearby system not only teach us more about what was observed by Herschel almost two centuries ago, but also provide valuable information about the evolution of massive stars, binary and triple systems, and the nature of supernova impostors.

Elephants in the gamma-ray sky https://cerncourier.com/a/elephants-in-the-gamma-ray-sky/ Fri, 31 Aug 2018 08:00:31 +0000 https://preview-courier.web.cern.ch/?p=12645 The discovery of large-scale features in the gamma-ray sky in approximately the same direction continues to puzzle researchers.

Fig. 1. The gamma-ray sky.

High-energy gamma rays provide a window into the physics of cosmic objects at extreme energies, such as black holes, supernova remnants and pulsars. In addition to revealing the nature of such objects, high-energy gamma-ray signals test general relativity and the Standard Model of particle physics. Take for example gamma-ray bursts, which can last from 10 milliseconds to several hours and are emitted by sources located up to several billion light-years away from Earth. A comparison between the arrival times of the bursts’ X-rays and gamma rays has been used to exclude modifications of Einstein’s general relativity that predict different arrival times. Also, in some theories in which dark matter is in the form of weakly interacting massive particles (WIMPs), dark-matter particles can annihilate into gamma-ray photons and other Standard Model particles. Significant effort is therefore being spent in searches for dark-matter annihilation signals in the gamma-ray band, including searches towards the Milky Way centre, which is estimated to contain a large amount of dark matter.

Studies of individual gamma-ray emitting sources and diffuse gamma-ray emission, which could include a galactic dark-matter annihilation signal, have benefited greatly from the launch of the large-area telescope on board NASA’s Fermi Gamma-ray Space Telescope (Fermi-LAT) in June 2008. Fermi-LAT, which observes gamma rays with energies from about 20 MeV to 1 TeV, has discovered more than 3000 point sources that have enabled researchers to significantly improve models of known galactic and extragalactic gamma-ray-emitting objects. But Fermi-LAT has also thrown up some surprising discoveries (figure 1). One of these is the so-called Fermi bubbles – two large gamma-ray lobes above and below the galactic centre that, intriguingly, have no clear counterpart in the X-ray and radio bands.

Fig. 2(a)

A second unexpected discovery by Fermi-LAT was an excess of gamma-ray radiation near the galactic centre with an energy of a few GeV. Interestingly, the excess has properties that are consistent with an annihilation signal from dark-matter particles with a mass of a few tens of GeV. The excess is visible up to 10 or 15 degrees away from the galactic centre – an elephant at a distance of four metres from an observer would have a similar apparent size. The Fermi bubbles, spanning 110 degrees from the northern to the southern edge, have an apparent size comparable to that of an elephant located one metre away.
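
As a rough check of these analogies (our own arithmetic, assuming an elephant about three metres tall), the apparent sizes follow from elementary trigonometry:

```python
# Apparent angular size of an object of a given physical size at a
# given distance; the 3 m elephant height is an assumption.
import math

def angular_size_deg(size_m: float, distance_m: float) -> float:
    return math.degrees(2.0 * math.atan(0.5 * size_m / distance_m))

print(f"elephant at 4 m: {angular_size_deg(3.0, 4.0):.0f} deg")
# ~41 deg, comparable to the 20-30 degree full extent of the GeV excess
print(f"elephant at 1 m: {angular_size_deg(3.0, 1.0):.0f} deg")
# ~113 deg, comparable to the 110 degrees spanned by the Fermi bubbles
```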

Fig. 2(b)

Finally, there is a third, even larger, feature in the gamma-ray, radio and X-ray bands called Loop I. The challenge of explaining these three “elephants” in the gamma-ray sky has puzzled physicists and astronomers for years – tens of years in the case of Loop I. Are the features related to each other? Are they located near the galactic centre or close to us? And is the GeV gamma-ray excess caused by dark-matter annihilation or by astrophysical phenomena such as pulsars?

Loop I

The largest of the gamma-ray elephants, Loop I, has been known since the 1950s from its radio emission (figure 2a). Its large angular size – it stretches up to 80 degrees above the galactic plane – could easily be explained if it were a nearby feature. For instance, it could be the combined emission from a “superbubble”, the collective remnant of several supernova explosions taking place in a localised region. Such a bubble easily reaches a size of a few hundred light-years, and if the distance to the bubble was also a few hundred light-years, then it would appear very large, up to 90 degrees in angular size. In this scenario, the galactic magnetic field would drape around the expanding bubble and high-energy cosmic-ray electrons from sources in the galactic disk, compressed by the expansion of the bubble, would produce synchrotron emission that would appear as a huge, ring-like structure in the sky. A possible location of the underlying supernova explosions would be the Scorpius–Centaurus stellar association located a few hundred light-years away from Earth.

Loop I, or at least its brightest part, known as the North Polar Spur, is also seen at other wavelengths, in particular at gamma-ray (figure 1) and soft X-ray (figure 2b) wavelengths. While the gamma rays can be produced through inverse Compton emission by the same cosmic-ray electrons that produce the synchrotron radio emission, the soft X-ray emission is probably produced by hot interstellar gas. The approximate angular alignment between the radio and X-ray emissions of the North Polar Spur suggests that they both belong to Loop I. Yet there are several differences between the X-ray and radio emissions. For example, a bright, ring-like feature in X-rays that is crossing the North Polar Spur could be explained by the collision of the hypothetical Loop I superbubble with another bubble containing the solar system, the local hot bubble. One can even trace back the motion of stars within a few hundred light-years from us to find a population of stars with members that could have exploded as supernovae up to about 10 million years ago and inflated the local hot bubble.

However, apparent X-ray absorption at the southern part of the North Polar Spur by neutral gas located along the line of sight points to a different interpretation. Detailed spectral modelling of this absorption has recently shown that the amount of gas required to explain the absorption puts the X-ray emitting structure at distances far beyond a few hundred light-years. This lower bound on the distance to the X-ray structure favours models of Loop I as a galactic-scale phenomenon, for example the product of a large-scale outflow from the galactic-centre region, as opposed to the nearby superbubble. More X-ray data is needed to pin down the nature of Loop I, but if this feature is indeed a large-scale galactic structure, then it might be related to the second elephant in the sky – the Fermi bubbles.

Fermi bubbles

The Fermi bubbles consist of two large gamma-ray lobes above and below the galactic centre, each of which is slightly larger than the distance from Earth to the galactic centre (about 25,000 light-years). They appear smaller than Loop I and were discovered in 2010 with about a year and a half of Fermi-LAT data. From observations of galaxies other than the Milky Way, we know of two possible mechanisms for creating such bubbles: emission from a supermassive black hole at the galactic centre, or a period of intensive star formation (a starburst) and supernova explosions. Which of these processes is responsible for the formation of the Fermi bubbles in our galaxy is not yet known.

Even the mechanism for producing the gamma rays in the first place is not yet resolved: it could be due to interactions between cosmic-ray protons and galactic gas, or inverse Compton scattering of high-energy electrons off interstellar radiation fields. Both of these options have caveats. For the first, it’s unclear, for instance, how one can collect and keep the high density of cosmic rays required to compensate for the low density of gas at large distances from the galactic plane. It’s also unclear whether the pressure of cosmic rays will expel the gas and create a cavity that will make the gas density even lower. For the inverse-Compton-scattering hypothesis, one would need electrons with energies up to 1 TeV. If these electrons were accelerated to such energies at the beginning of the expansion of the Fermi bubbles, then the bubbles’ expansion velocity would be about 10,000 km s–1 – at least 10 times larger than the typical observed outflow velocities.
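
The expansion-velocity argument can be made semi-quantitative with a back-of-the-envelope estimate (ours, with an assumed total field energy density of about 1 eV cm–3): a 1 TeV electron cools within a few hundred thousand years, so carrying such electrons out to bubble-sized distances from a single initial acceleration episode requires expansion at a few per cent of the speed of light:

```python
# Order-of-magnitude sketch (ours, with assumed numbers): a TeV
# electron cools quickly against radiation and magnetic fields, so it
# cannot drift far from where it was accelerated.
SIGMA_T = 6.652e-29   # Thomson cross-section, m^2
M_E_C2 = 8.187e-14    # electron rest energy, J
C = 2.998e8           # speed of light, m/s
EV = 1.602e-19        # J per eV
YR = 3.156e7          # seconds per year
LY = 9.461e15         # metres per light-year

def cooling_time_yr(E_TeV: float, u_eV_cm3: float) -> float:
    """Thomson-regime cooling time against radiation plus magnetic
    fields of total energy density u (~1 eV/cm^3 is an assumption)."""
    gamma = E_TeV * 1e12 * EV / M_E_C2
    u = u_eV_cm3 * EV * 1e6                        # J/m^3
    power = (4.0 / 3.0) * SIGMA_T * C * gamma**2 * u
    return gamma * M_E_C2 / power / YR

t = cooling_time_yr(1.0, 1.0)                      # ~3e5 yr
v = 25000 * LY / (t * YR) / 1e3                    # km/s across a bubble
print(f"t_cool ~ {t:.1e} yr -> v ~ {v:.0f} km/s")  # ~10^4 km/s, as quoted
```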

Fig. 3.

Moreover, even though the Fermi bubbles are similar in shape to gamma-ray lobes in other galaxies, which are typically visible in X-ray and radio wavelengths, they have no clear counterpart in X-rays and radio waves at high latitudes. Perhaps the Fermi bubbles are unique to the Milky Way. Then again, perhaps astronomers have simply struggled to detect gamma-ray lobes in other galaxies when those lobes are “quiet” in the radio and X-ray bands.

A study of the gamma-ray emission from the Fermi bubbles at low latitudes could shed light on their origin, as it may point to the supermassive black hole at the galactic centre or to a region away from the centre, which would support the starburst scenario. Although the diffuse foreground and background gamma-ray emission from the Milky Way near the galactic centre is very bright, making it hard to interpret the observations, several analyses of Fermi-LAT gamma-ray data have revealed an increased intensity of gamma-ray emission from the Fermi bubbles near the galactic plane and a displacement of the emission relative to the galactic centre (figure 3). The higher intensity of the emission at the base of the Fermi bubbles opens up the possibility for a detection with ground-based very-high-energy gamma-ray Cherenkov telescopes, such as the upcoming Cherenkov Telescope Array, which is expected to start taking data with a partial array in 2022 and with the full array in 2025. At low energies, below 100 GeV, the flux from the base of the Fermi bubbles may also be confused with the third elephant in the sky – the galactic-centre GeV excess.

Galactic-centre GeV excess

The first hints of an extended excess of gamma rays from the centre and bulge of the galaxy at energies around a few GeV and with an almost spherical morphology were presented in 2009, before the discovery of the Fermi bubbles. However, given that the diffuse foreground gamma-ray emission along the galactic plane is very bright, and also rather uncertain towards the galactic centre, it took a long time to prove that the excess is not caused by mis-modelling of foreground components (such as inverse Compton scattering of high-energy electrons and hadronic interactions of the stationary distribution of cosmic rays along the line of sight). The spectrum of the excess has a peak at a few GeV, hence the name “GeV excess”, whereas the components of the diffuse foreground have a power-law structure around a few GeV.

Fig. 4.

Intriguingly, the combined GeV centre and bulge emission has properties that are largely compatible with expectations from a dark-matter annihilation signal: the emission is extended, up to at least 10–15 degrees away from the galactic centre, with a profile that is consistent with that from a slightly contracted dark-matter-halo profile (figure 4). At energies below about 1 GeV, the gamma-ray emission grows steeply, and has a maximum at a few GeV with a cut-off or a significant softening at higher energies, which is expected for a signal from dark-matter annihilation.

Given the high stakes of claiming a discovery of a dark-matter annihilation signal, corroborating evidence for this hypothesis must be found, or alternative astrophysical explanations must be confidently excluded. Unfortunately, neither has happened up to now. Quite the contrary: there are several sources of gamma-ray emission near the galactic centre that could, within uncertainties, together account for all of the bulge and centre emission. For example, massive molecular-gas clouds near the galactic centre show clear indications of star-formation activity, which results in cosmic-ray production and associated gamma-ray emission in the inner galaxy. While the hadronic cosmic rays from such activity are unlikely to explain the GeV excess, because their gamma-ray emission is not as extended as the GeV excess, inverse Compton emission from cosmic-ray electrons linked with such activity can be extended over many degrees and is expected to contribute to the GeV emission. However, given that the energy spectrum expected for this inverse Compton emission is significantly flatter than the observed GeV excess, it is unlikely that this component accounts completely for the GeV-excess emission.

Fig. 5.

Arguably, the most plausible explanation for the GeV-excess emission from the galactic bulge and centre is a population of thousands of millisecond pulsars – highly magnetised neutron stars with a rotational period of 1–10 ms. They can emit gamma rays for billions of years before they lose energy, and their gamma-ray spectrum, as observed by Fermi-LAT, is similar to the spectrum of the GeV excess. It is plausible that millisecond pulsars in the bulge follow a similar spatial distribution as the majority of bulge stars. Indeed, recent analyses showed that the profile of the GeV-excess emission in the inner galaxy is better described by the boxy stellar bulge, rather than by a spherically symmetric dark-matter profile. Moreover, several detailed statistical analyses found evidence that the emission is more likely to be from a population of numerous but faint point sources, such as millisecond pulsars in the bulge, rather than from truly diffuse emission, such as that resulting from the annihilation of dark-matter particles (figure 5).

Future observations with radio telescopes such as MeerKAT in South Africa, expected to start taking data this year, and its successor, the Square Kilometre Array (SKA), the first construction phase of which is expected to end in 2020, should be able to test whether millisecond pulsars exist in the inner galaxy and can explain the GeV excess.

Additional multi-wavelength observations will provide new information about the three elephants in the sky. In particular, the eROSITA experiment, the successor of the X-ray ROSAT satellite, will survey the whole sky in X-rays and will be one order of magnitude more sensitive than ROSAT. With the eROSITA data, astronomers will search for a possible cavity carved out by cosmic rays in the Fermi bubbles and will estimate the distance to the North Polar Spur using the absorption of soft X-rays from the spur by the distribution of gas along the line of sight.

Possible connections

On the high-energy gamma-ray front, the upcoming Cherenkov Telescope Array is expected to detect the Fermi bubbles near the galactic plane above a few hundred GeV. This detection should help to answer the question of whether the Fermi bubbles are linked to the galaxy’s central supermassive black hole or to a different source away from the galactic centre. On the other side of the electromagnetic spectrum, the new generation of radio-telescope arrays, MeerKAT and SKA, should, as mentioned, be able to confirm or rule out the millisecond-pulsar hypothesis for the GeV excess. If the millisecond-pulsar hypothesis is excluded, then the dark-matter interpretation will remain as one of the plausible explanations. By contrast, a confirmation of the millisecond-pulsar hypothesis will significantly constrain the dark-matter hypothesis.

The presence of the three elephants in the gamma-ray sky in approximately the same direction raises the question of whether they are connected. One of the possible connections between the Fermi bubbles and Loop I is that Loop I is created by galactic gas pushed away by the expansion of the bubbles. In this case, the two elephants would become one, where Loop I is an outer part and the Fermi bubbles are an inner part. This scenario looks especially plausible for the northern bubble because Loop I extends beyond it.

The overlap between the GeV excess and the Fermi bubbles in the galactic-centre region provides the exciting possibility of a connection between the two. Models that try to explain the GeV excess with an additional population of cosmic-ray electrons, star formation and cosmic-ray acceleration processes can connect the gamma-ray emission in the bulge with that at higher latitudes in the Fermi bubbles. Also, the mechanism underpinning the formation of the bubbles – whether it is linked to activity of the galaxy’s central supermassive black hole or to a burst of star formation – might affect the properties of the GeV excess. Future observations and analyses will help to settle the nature – common or not – of the three elephants in the sky, and might point to new physics such as dark-matter annihilation in the Milky Way. Studying the gamma-ray sky will no doubt be an exciting journey for many years to come.

Relativity passes test on a galactic scale https://cerncourier.com/a/relativity-passes-test-on-a-galactic-scale/ Fri, 31 Aug 2018 08:00:18 +0000 https://preview-courier.web.cern.ch/?p=12603 Gravity behaves the same way on a galactic scale as it does in our solar system, and alternative gravity models which remove the need for dark energy are disfavoured.

ESO 325-G004

Einstein’s theory of gravity, general relativity, is known to work well on scales smaller than an individual galaxy. For example, the orbits of the planets in our solar system and the motion of stars around the centre of the Milky Way have been measured precisely and shown to follow the theory. But general relativity remains largely untested on larger length scales. This makes it hard to rule out alternative theories of gravity, which modify how gravity works over large distances to explain away mysterious cosmic substances such as dark matter. Now a precise test of general relativity on a galactic scale excludes some of these alternative theories.

Using data from the Hubble Space Telescope, a team led by Thomas Collett from the University of Portsmouth in the UK has found that a nearby galaxy dubbed ESO 325-G004 is surrounded by a ring-like structure known as an Einstein ring – a striking manifestation of gravitational lensing. As the light from a background object passes a foreground object, the gravity of the foreground object bends and magnifies the light of the background one into a ring. The ring system found by Collett’s group is therefore a perfect laboratory with which to test general relativity on galactic scales.

But it isn’t easy to make such a test, because the size and structure of the ring depend on several factors, including the distance of the background galaxy from Earth, and the distance, mass and shape of the foreground (lensing) galaxy. In previous tests the uncertainty on some of these factors resulted in large systematic errors in the modelling of the gravitational-lensing effect, allowing only weak constraints to be placed on alternative theories of gravity. Now Collett and colleagues’ discovery of an Einstein ring around a relatively close galaxy, ESO 325-G004, along with high-resolution observations of that same galaxy taken with the Multi Unit Spectroscopic Explorer (MUSE) on the European Southern Observatory (ESO) Very Large Telescope, has allowed the most precise test of general relativity outside the Milky Way.

The researchers derived the distances of the background galaxy and the lensing galaxy from measurements of their redshifts. Measuring the mass and the shape of the lensing galaxy is more complex, but was made possible here thanks to the MUSE observations that allowed the team to perform measurements of the motions of the stars that make up the galaxy relative to the galaxy’s centre. Since these motions are governed by the gravitational fields inside the galaxy, they can be used to indirectly measure the mass and shape of ESO 325-G004.

The team put all of these measurements together and determined the gravitational effect that ESO 325-G004 should have on the background galaxy’s light if general relativity holds true. The result, which technically tests the scale invariance of a parameter in general relativity called gamma, is in near-perfect agreement with general relativity, with an uncertainty of only 9%. Not only does it show that gravity behaves on a galactic scale in the same way as it does in our solar system, it also disfavours alternative gravity models, in particular those that attempt to remove the need for dark energy.
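
To see where gamma enters, note that in the parametrised post-Newtonian framework the deflection of light scales as (1 + γ)/2, with γ = 1 in general relativity, so the radius of the Einstein ring directly encodes γ. The sketch below is our own illustration with round numbers of the right order for this system, and it uses a flat-space approximation for the lens–source distance rather than the proper cosmological treatment:

```python
# Einstein radius of a point-mass lens for PPN parameter gamma
# (gamma = 1 in general relativity). Masses and distances are
# illustrative; D_LS = D_S - D_L is a flat-space simplification.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
LY = 9.461e15        # m

def einstein_radius_arcsec(M_solar, D_L_ly, D_S_ly, gamma=1.0):
    D_L, D_S = D_L_ly * LY, D_S_ly * LY
    D_LS = D_S - D_L
    theta = math.sqrt((1.0 + gamma) / 2.0
                      * 4.0 * G * M_solar * M_SUN / C**2
                      * D_LS / (D_L * D_S))
    return math.degrees(theta) * 3600.0

# ~1.5e11 solar masses at ~450 Mly lensing a source at ~5 Gly gives a
# ring of a few arcseconds, the scale of the observed system:
print(f"{einstein_radius_arcsec(1.5e11, 4.5e8, 5.0e9):.1f} arcsec")
```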

Gravitational Waves Vol 1: Theory and Experiments https://cerncourier.com/a/gravitational-waves-vol-1-theory-and-experiments/ Sun, 19 Aug 2018 08:38:18 +0000 https://preview-courier.web.cern.ch/?p=105079 Carlo Bradaschia reviews in 2009 Gravitational Waves Vol 1: Theory and Experiments.

By Michele Maggiore, Oxford University Press. Hardback ISBN 9780198570745 £45 ($90).

This is a complete book for a field of physics that has just reached maturity. Gravitational wave (GW) physics recently arrived at a special stage of development. On the theory side, most of the generation mechanisms have been understood and some technical controversies have been settled. On the experimental side, several large interferometers are now operating around the world, with sensitivities that could allow the first detection of GWs, even if with a relatively low probability. The GW community is also starting vigorous upgrade programmes to bring the detection probability to certitude in less than a decade from now.

The need for a textbook that treats the production and detection of GWs systematically is clear. Michele Maggiore has succeeded in doing this in a way that is fruitful not only for the young physicist starting to work in the field, but also for the experienced scientist needing a reference book for everyday work.

In the first part, on theory, he uses two complementary approaches: geometrical and field-theoretical. The text fully develops and compares both, which is of great help for a deep understanding of the nature of GWs. The author also derives all equations completely, leaving just the really straightforward algebra for the reader. A basic knowledge of general relativity and field theory is the only prerequisite.

Maggiore explains thoroughly the generation of gravitational radiation by the most important astrophysical sources, including the emitted power and its frequency distribution. One full chapter is dedicated to the Hulse-Taylor binary pulsar, which constituted the first evidence for GW emission. The “tricky” subject of post-Newtonian sources is also clearly introduced and developed. Exercises that are completely worked out conclude most of these theory chapters, enhancing the pedagogical character of the book.

The second part is dedicated to experiments and starts by setting up a background of data-analysis techniques, including noise spectral density, matched filtering, probability and statistics, all of which are applied to pulse and periodic sources and to stochastic backgrounds. Maggiore treats resonant mass detectors first, because they were the first detectors chronologically to have the capability of detecting signals, even if only strong ones originating in the neighbourhood of our galaxy. The study of resonant bar detectors is instructive and deals with issues that are also very relevant to understanding interferometers. The text clearly explains fundamental physics issues, such as approaching the quantum limits and quantum non-demolition measurements.

The last chapter is devoted to a complete and detailed study of the large interferometers – the detectors of the current generation – which should soon make the first detection of GWs. It discusses many details of these complex devices, including their coupling to gravitational waves, and it makes a careful analysis of all of the noise sources.

Lastly, it is important to remark on a little word that appears on the cover: “Volume 1”. As the author explains in the preface, he is already working on the second volume. This will appear in a few years and will be dedicated to astrophysical and cosmological sources of GWs. The level of this first book allows us to expect an interesting description of all “we can learn about nature in astrophysics and cosmology, using these tools”.

Black holes galore at galactic core https://cerncourier.com/a/black-holes-galore-at-galactic-core/ Mon, 09 Jul 2018 10:40:46 +0000 https://preview-courier.web.cern.ch/?p=12487 NASA’s Chandra X-ray Observatory have revealed a dozen stellar-mass black holes at the centre of the galaxy, providing the first observational evidence for such a black-hole cluster.

Galactic centre

For decades, theoretical models of galaxy evolution have predicted that the supermassive black hole lying at the heart of the Milky Way is surrounded by thousands of smaller black holes left behind by dying stars. Testing such theories is important to understand our own galaxy and, more generally, to understand how galaxies evolve and how black holes are produced. Now, observations by NASA’s Chandra X-ray Observatory have revealed a dozen stellar-mass black holes at the centre of the galaxy, providing the first observational evidence for such a black-hole cluster.

Black holes emit virtually no radiation, so it’s not possible to detect them when they are isolated and located at large distances from Earth. But many black holes have close stellar companions from which they accrete matter and, as this matter is sucked into the black hole, it heats up and emits X-rays that can be detected on Earth. If only a few of the thousands of stellar-mass black holes predicted to exist in the galactic centre had a companion star, at least this binary fraction of the total black-hole population would be detectable by X-ray telescopes.

Using Chandra data, a group led by Chuck Hailey of Columbia University in New York searched for such black-hole binary systems in a region extending several light years from the galactic centre. This type of search is complicated by two factors: the high density of other X-ray-emitting objects in the same region, such as binary systems containing neutron stars or white dwarfs instead of black holes; and the relatively low intensity of the X-ray binary sources in the region. But in their study, Hailey and colleagues were able to distinguish between the different types of weak X-ray binary system in the region by studying their spectra.

The researchers examined the Chandra spectra of 415 weak X-ray point sources, containing as few as 100 counts, and looked for the expected spectral features of black-hole binaries. They found 12 sources that have the expected spectral characteristics of black-hole binaries, all within a radius of three light years from the supermassive black hole (see figure). Other X-ray sources whose spectra match well with those of white-dwarf binary systems were found to be distributed at larger distances from the galactic centre.

The researchers went on to estimate the total number of black-hole binary systems in the observed region, assuming that the 12 sources are the brightest in their family and using the known fluxes of brighter and well-studied black-hole binary systems. This yielded an estimate of about 300–1000 black-hole binaries, a lower limit on the total black-hole population because it only includes those with companion stars. According to theoretical follow-up work by Aleksey Generozov of Columbia and colleagues, the total number of black holes should be between 10,000 and 40,000.

The results, published in Nature, agree with the theoretical predictions and therefore confirm the existing models of galaxy evolution. What’s more, the findings allow astronomers to predict the rate of black-hole mergers – and thus of gravitational-wave events – from this region.

HESS proves the power of TeV astronomy https://cerncourier.com/a/hess-proves-the-power-of-tev-astronomy/ Fri, 01 Jun 2018 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/hess-proves-the-power-of-tev-astronomy/ To celebrate its 15th anniversary, the HESS collaboration has published its largest set of scientific results to date.

For hundreds of years, discoveries in astronomy were all made in the visible part of the electromagnetic spectrum. This changed in the past century when new objects started being discovered at both longer wavelengths, such as radio, and shorter wavelengths, up to gamma-ray wavelengths corresponding to GeV energies. The 21st century then saw another extension of the range of astronomical observations with the birth of TeV astronomy.

The High Energy Stereoscopic System (HESS) – an array of five telescopes located in Namibia in operation since 2002 – was the first large ground-based telescope capable of measuring TeV photons (followed shortly afterwards by the MAGIC observatory in the Canary Islands and, later, VERITAS in Arizona). To celebrate its 15th anniversary, the HESS collaboration has published its largest set of scientific results to date in a special edition of Astronomy and Astrophysics. Among them is the detection of three new candidates for supernova remnants that, despite being almost the size of the full Moon on the sky, had thus far escaped detection.

Supernova remnants are what’s left after massive stars die. They are the prime suspect for producing the bulk of cosmic rays in the Milky Way and are the means by which chemical elements produced by supernovae are spread in the interstellar medium. They are therefore of great interest for different fields in astrophysics.

HESS observes the Milky Way in the energy range 0.03–100 TeV, but its telescopes do not directly detect TeV photons. Rather, they measure the Cherenkov radiation produced by showers of particles generated when these photons enter Earth’s atmosphere. The energy and direction of the primary TeV photons can then be determined from the shape and direction of the Cherenkov radiation.

Using the characteristics of known TeV-emitting supernova remnants, such as their shell-like shape, the HESS search revealed three new objects at gamma-ray wavelengths, prompting the team to search for counterparts of these objects in other wavelengths. Only one, called HESS J1534-571 (figure, left), could be connected to a radio source and thus be classified as a supernova remnant. For the two other sources, HESS J1614-518 and HESS J1912+101, no clear counterparts were found. These objects thus remain candidates for supernova remnants.

The lack of an X-ray counterpart to these sources could have implications for cosmic-ray acceleration mechanisms. The cosmic rays thought to originate from supernova remnants should be directly connected to the production of high-energy photons. If the emission of TeV photons is a result of low-energy photons being scattered by high-energy cosmic-ray electrons originating from a supernova remnant (as described by leptonic emission models), soft X-rays would also be produced while such electrons travelled through magnetic fields around the remnant. The lack of detection of such X-rays could therefore indicate that the TeV photons are not linked to such scattering but are instead associated with the decay of high-energy cosmic-ray pions produced around the remnant, as described by hadronic emission models. Searches in the X-ray band with more sensitive instruments than those available today are required to confirm this possibility and bring deeper insight into the link between supernova remnants and cosmic rays.

The new supernova-remnant detections by HESS demonstrate the power of TeV astronomy to identify new objects. The latest findings increase the anticipation for a range of discoveries from the future Cherenkov Telescope Array (CTA). With more than 100 telescopes, CTA will be more sensitive to TeV photons than HESS, and it is expected to substantially increase the number of detected supernova remnants in the Milky Way.

Hubble expansion discrepancy deepens https://cerncourier.com/a/hubble-expansion-discrepancy-deepens/ Thu, 19 Apr 2018 10:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/hubble-expansion-discrepancy-deepens/ Adam Riess from the Space Telescope Science Institute and colleagues have now made a more precise, direct measurement that reinforces the mismatch and could signal new physics.

In the 1920s, Edwin Hubble discovered that the universe is expanding by showing that more distant galaxies recede faster from Earth than nearby ones. Hubble’s measurements of the expansion rate, now called the Hubble constant, had relatively large errors, but astronomers have since found ways of measuring it with increasing precision. One way is direct and entails measuring the distance to far-away galaxies, whereas another is indirect and involves using cosmic microwave background (CMB) data. However, over the last decade a mismatch between the values derived from the two methods has become apparent. Adam Riess from the Space Telescope Science Institute in Baltimore, US, and colleagues have now made a more precise direct measurement that reinforces the mismatch and could signal new physics.

Riess and co-workers’ new value relies on improved determinations of the distances to far-away galaxies, and builds on previous work by the team. It is based on more precise observations of type Ia supernovae within those galaxies. Such supernovae have a known luminosity profile, so their distances from Earth can be determined from how bright they are observed to be. But their luminosity needs to be calibrated – a process that requires an exact measurement of their distance, which is typically rather large.
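
The logic of the ladder can be condensed into a few lines. The sketch below is our own toy version, not the team's pipeline; the parallax and supernova magnitudes are illustrative:

```python
# Toy distance ladder (ours, with illustrative numbers): geometry
# calibrates Cepheids, Cepheids calibrate type Ia supernovae, and
# supernovae then give distances to far-away galaxies.
import math

def dist_from_parallax_pc(p_arcsec: float) -> float:
    """Rung 1: geometric distance to a nearby Cepheid (d = 1/p)."""
    return 1.0 / p_arcsec

def dist_modulus(d_pc: float) -> float:
    """Apparent minus absolute magnitude for an object at d_pc."""
    return 5.0 * math.log10(d_pc / 10.0)

def dist_from_snia_pc(m_app: float, M_abs: float) -> float:
    """Rung 3: distance from a supernova of calibrated luminosity."""
    return 10.0 ** ((m_app - M_abs) / 5.0 + 1.0)

# A Cepheid with a 2 milliarcsecond parallax sits at 500 pc:
print(f"{dist_from_parallax_pc(2e-3):.0f} pc")
# A SN Ia seen at m = 19.0, with Cepheid-calibrated M = -19.3:
print(f"{dist_from_snia_pc(19.0, -19.3) / 1e6:.0f} Mpc")   # ~460 Mpc
```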

To calibrate their luminosity, Riess and his team used Cepheid stars, which are closer to Earth than type Ia supernovae. Cepheids have an oscillating apparent brightness, the period of which is directly related to their luminosity, and so their apparent brightness can also be used to measure their distance. Riess and colleagues measured the distance to Cepheids in the Milky Way using parallax measurements from the Hubble Space Telescope, which determine the apparent shift of the stars against the background sky as the Earth moves to the other side of the Sun. The researchers measured this minute shift for several Cepheids, giving a direct measurement of their distance. The team then used this measurement to estimate the distance to distant galaxies containing such stars, which in turn can be used to calibrate the luminosity of supernovae in those galaxies. Finally, they used this calibration to determine the distance to even more distant galaxies with supernovae. Using such a “distance ladder”, the team obtained a value for the Hubble constant of 73.5 ± 1.7 km s–1 Mpc–1. This value is more precise than the 73.2 ± 1.8 km s–1 Mpc–1 value obtained by the team in 2016, and it is 3.7 sigma away from the 66.9 ± 0.6 km s–1 Mpc–1 value derived from CMB observations made by the Planck satellite.
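
The quoted 3.7 sigma follows from combining the two uncertainties in quadrature (a simplification of the full analysis):

```python
# The quoted tension, combining uncertainties in quadrature.
h_ladder, sig_ladder = 73.5, 1.7   # km/s/Mpc, distance ladder
h_cmb, sig_cmb = 66.9, 0.6         # km/s/Mpc, Planck CMB

n_sigma = (h_ladder - h_cmb) / (sig_ladder**2 + sig_cmb**2) ** 0.5
print(f"{n_sigma:.1f} sigma")      # 3.7
```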

Riess and colleagues’ results therefore reinforce the discrepancy between the results obtained through the two methods. Although each method is complex and may thus be subject to error, the discrepancy is now at a level where a coincidence seems unlikely. It is difficult to imagine that systematic errors in the distance-ladder method are the root cause of the tension, says the team. Figuring out the nature of the discrepancy is pivotal because the Hubble constant is used to calculate several cosmological quantities, such as the age of the universe. If the discrepancy is not due to errors, explaining it will require new physics beyond the current standard model of cosmology. But future data could also potentially help to identify the source of the discrepancy. Upcoming Cepheid data from ESA’s Gaia satellite could reduce the uncertainty in the distance-ladder value, and new measurements of the expansion rate using a third method based on observations of gravitational waves could throw new light on the problem.

Spotting the first extragalactic planets https://cerncourier.com/a/spotting-the-first-extragalactic-planets/ Fri, 23 Mar 2018 11:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/spotting-the-first-extragalactic-planets/ Radiation emitted around a distant black hole has revealed the existence of extragalactic planets in a galaxy 3.8 billion light years away.

Three decades since astronomers first detected planets outside our solar system, exoplanets are now being discovered at a rate of hundreds per year. Although it is reasonable to assume other galaxies than our own contain planets, no direct detections of such objects have been made owing to their small size and their large distances from Earth.

Now, however, radiation emitted around a distant black hole has revealed the existence of extragalactic planets in a galaxy 3.8 billion light years away, located between the black hole and us. The planets, which have no way of being directly detected using any kind of existing telescope, are visible thanks to the small gravitational distortions they inflict on X-rays emanating from the more distant black hole.

The discovery was made by Xinyu Dai and Eduardo Guerras from the University of Oklahoma in the US using data from the Chandra X-ray Observatory. The distant black hole in question, which forms the supermassive centre of the quasar RX J1131-1231, is surrounded by an accretion disk that heats up as it orbits and emits radiation at X-ray wavelengths. Thanks to a fortunate cosmic alignment, this radiation is amplified by gravitational lensing and can therefore be studied accurately. The lensing galaxy positioned between Earth and the quasar causes light from RX J1131-1231 to bend around it, appearing to us not as a normal point source but as a ring with four bright spots (see figure). The spots are the result of radiation emitted from the same location in the quasar that initially followed different paths before being directed towards Earth.

Dai and Guerras focused on a strong iron emission line whose spectral features reveal details of the accretion disk, and found that this line is not just shifted in energy but that the amount of the shift varies with time. Although a shift in the frequency of this line is common, for example due to relative velocities between observers, its position is generally very stable with time when studying a specific object. Analysing the 38 observations of RX J1131-1231 made by the Chandra satellite during the past decade, the Oklahoma duo found that the energy varied significantly between observations in all four bright points of the ring.

This feature can be explained using microlensing. The intermediate lensing galaxy is not a uniform mass but rather consists of small point masses, mainly stars and planets. As the relatively small objects within the lensing galaxy move, the light from the quasar passing through it is deflected in slightly different ways, causing different parts of the accretion disk to be amplified at different levels over time. As the different parts of the disk appear to emit at different energies, the measured variations in the energy of this emission line can be explained by the movement of objects within the lensing galaxy. The question is: what objects could cause such changes over time scales of several years?

Stars, being so numerous and massive, are one good candidate explanation. But Dai and Guerras calculated that the chance for a star to cause such short-term variations is very small. A better candidate, according to fits to analytical models, is unbound planets, which do not orbit a star. The Chandra data were best described by a model in which, for each star, there are more than 2000 unbound planets with masses between that of the Moon and Jupiter. Although the exact population of such planets is not well known even for our own galaxy, their number is well within the existing constraints. These observations thus form the best evidence for the existence of extragalactic planets and, by also providing the number of such planets in that galaxy, teach us something about the number of unbound planets we can expect in our own galaxy.
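
A rough scale estimate (ours, with illustrative distances and an assumed transverse velocity) shows why planet-mass lenses naturally produce variability over months to years: the Einstein radius of a Jupiter-mass object at these distances is tens of astronomical units, which a source crossing at a few hundred kilometres per second traverses in of order a year:

```python
# Einstein radius and crossing time for a planet-mass microlens
# (ours; the source distance, lens mass and transverse velocity are
# illustrative assumptions, and D_LS = D_S - D_L is a simplification).
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_JUP = 1.898e27     # kg
LY = 9.461e15        # m
AU = 1.496e11        # m
YR = 3.156e7         # s

D_L = 3.8e9 * LY     # lensing galaxy, as quoted in the article
D_S = 6.0e9 * LY     # assumed quasar distance
D_LS = D_S - D_L

theta_E = math.sqrt(4.0 * G * M_JUP / C**2 * D_LS / (D_L * D_S))
r_E = theta_E * D_L                  # physical radius in the lens plane
t_cross = r_E / 5e5                  # assumed ~500 km/s transverse speed
print(f"r_E ~ {r_E / AU:.0f} AU, crossing time ~ {t_cross / YR:.1f} yr")
```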

Europe defines astroparticle strategy https://cerncourier.com/a/europe-defines-astroparticle-strategy/ Fri, 16 Feb 2018 12:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/europe-defines-astroparticle-strategy/ Multi-messenger astronomy, neutrino physics and dark matter are among several topics set to take priority in Europe, according to a report by the Astroparticle Physics European Consortium (APPEC).

Multi-messenger astronomy, neutrino physics and dark matter are among several topics in astroparticle physics set to take priority in Europe in the coming years, according to a report by the Astroparticle Physics European Consortium (APPEC).

The APPEC strategy for 2017–2026, launched at an event in Brussels on 9 January, is the climax of two years of talks with the astroparticle and related communities. Twenty agencies in 16 countries are involved, with representation from the European Committee for Future Accelerators, CERN and the European Southern Observatory (ESO).

Lying at the intersection of astronomy, particle physics and cosmology, astroparticle physics is well placed to search for signs of physics beyond the standard models of particle physics and cosmology. As a relatively new field, however, European astroparticle physics does not have dedicated intergovernmental organisations such as CERN or ESO to help drive it. In 2001, European scientific agencies founded APPEC to promote cooperation and coordination, and specifically to formulate a strategy for the field.

Building on earlier strategies released in 2008 and 2011, APPEC’s latest roadmap presents 21 recommendations spanning scientific issues, organisational aspects and societal factors such as education and industry, helping Europe to exploit tantalising potential for new discoveries in the field.

The recent detection of gravitational waves from the merger of two neutron stars (CERN Courier December 2017 p16) opens a new line of exploration based on the complementary power of charged cosmic rays, electromagnetic waves, neutrinos and gravitational waves for the study of extreme events such as supernovae, black-hole mergers and the Big Bang itself. “We need to look at cross-fertilisation between these modes to maximise the investment in facilities,” says APPEC chair Antonio Masiero of the INFN and the University of Padova. “This is really going to become big.”

APPEC strongly supports Europe’s next-generation ground-based gravitational-wave interferometer, the Einstein Telescope, and the space-based LISA detector. In the neutrino sector, KM3NeT is being completed for high-energy cosmic neutrinos at its site in Sicily, as well as for precision studies of atmospheric neutrinos at its French site near Toulon. Europe is also heavily involved in the upgrade of the leading cosmic-ray facility, the Pierre Auger Observatory, in Argentina. Significant R&D work is taking place at CERN’s neutrino platform for the benefit of long- and short-baseline neutrino experiments in Japan and the US (CERN Courier July/August 2016 p21), and Europe is host to several important neutrino experiments. Among them are KATRIN at KIT in Germany, which is about to begin measurements of the neutrino absolute mass scale, and experiments searching for neutrinoless double-beta decay (NDBD) such as GERDA and CUORE at INFN’s Gran Sasso National Laboratory (CERN Courier December 2017 p8).

There are plans to join forces with experiments in the US to build the next generation of NDBD detectors. APPEC has a similar vision for dark matter, aiming to converge next year on plans for an “ultimate” 100-tonne scale detector based on xenon and argon via the DARWIN and Argo projects. APPEC also supports ESA’s Euclid mission, which will establish European leadership in dark-energy research, and encourages continued European participation in the US-led DES and LSST ground-based projects. Following from ESA’s successful Planck mission, APPEC strongly endorses a European-led satellite mission, such as COrE, to map the cosmic-microwave background and the consortium plans to enhance its interactions with its present observers ESO and CERN in areas of mutual interest.

“It is important at this time to put together the human forces,” says Masiero. “APPEC will exercise influence in the European Strategy for Particle Physics, and has a significant role to play in the next European Commission Framework Project, FP9.”

A substantial investment is needed to build the next generation of astroparticle-physics research, the report concedes. According to Masiero, European agencies within APPEC currently invest around €80 million per year in astroparticle-related activities, in addition to funding large research infrastructures. A major effort in Europe is necessary for it to keep its leading position. “Many young people are drawn into science by challenges like dark matter and, together with Europe’s existing research infrastructures in the field, we have a high technological level and are pushing industries to develop new technologies,” continues Masiero. “There are great opportunities ahead in European astroparticle physics.”

• View the full report at www.appec.org.

ESO https://cerncourier.com/a/eso/ Fri, 16 Feb 2018 12:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/eso/ A new national facility at La Silla Observatory in Chile, operated by the European Southern Observatory (ESO), made its first observations at the beginning of the year.

The new ExTrA facility

A new national facility at La Silla Observatory in Chile, operated by the European Southern Observatory (ESO), made its first observations at the beginning of the year. ExTrA (Exoplanets in Transits and their Atmospheres) will search for Earth-sized planets orbiting nearby red dwarf stars, its three 0.6 m-diameter near-infrared telescopes (pictured) increasing the sensitivity compared to previous searches. ExTrA is a French project also funded by the European Research Council and the telescopes will be operated remotely from Grenoble.

Ancient black hole lights up early universe https://cerncourier.com/a/ancient-black-hole-lights-up-early-universe/ Fri, 16 Feb 2018 12:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/ancient-black-hole-lights-up-early-universe/ The spectrum of J1342+0928 gives us a view on the universe when it was just 690 million years old.

Many questions remain about what happened in the first billion years of the universe. At around 100 million years old, the universe was a dark place consisting of mostly neutral hydrogen without many objects emitting detectable radiation. This situation changed as stars and galaxies formed, leading to a phase transition known as reionisation where the neutral hydrogen was ionised. Exactly when reionisation started and how long it took is still not fully clear, but a recent discovery of the oldest massive black hole ever found can help answer this important question.

Up to about 300,000 years after the Big Bang, the universe was hot and dense, and electrons and protons were fully separated. As the universe started to expand, it cooled down and underwent a first phase transition where electrons and protons formed neutral gases such as hydrogen. The following period is known as the cosmic dark ages. During this period, protons and electrons were mostly combined into neutral hydrogen, but the universe had to cool much further before matter could condense to the level where light-producing objects such as stars could form. These new objects started to emit both the radiation we can now detect to study the early universe and also the radiation responsible for the last phase transition – the reionisation of the universe. Some of the brightest and therefore easiest-to-detect objects are quasars: massive black holes surrounded by discs of hot accreting matter that emit radiation over a wide but distinctive spectrum.

Using data from a range of large-area surveys by different telescopes, a group led by Eduardo Bañados from the Carnegie Institution for Science has discovered a distant quasar called J1342+0928, with the black hole at its centre found to be 800 million solar masses. After the radiation was emitted by J1342+0928, it travelled through the expanding universe, increasing its wavelength or “red shifting” in proportion to its travel time. Using known spectral features of quasars, the redshift (and therefore the moment at which the radiation was emitted) can be calculated.

The spectrum of J1342+0928, shown in the figure, demonstrates that the universe was only 690 million years old – just 5% of its current age – at the time we see J1342+0928. The spectrum also shows a second interesting feature: the absorption of a part of the spectrum by neutral hydrogen, which implies that at the time we are observing the black hole, the universe was not fully ionised yet. By modelling the emission and absorption, Bañados and co-workers found that the spectrum from J1342+0928 is compatible with emission in a universe where half the hydrogen was ionised, putting the time of emission right in the middle of the epoch of reionisation.
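
The redshift-to-age conversion is standard. Using astropy's built-in Planck cosmology (the discovery paper's exact parameters may differ slightly), the reported redshift of 7.54 indeed corresponds to an age of about 690 million years:

```python
# Redshift-to-age conversion, sketched with astropy's Planck15
# cosmology; the paper's adopted parameters may differ slightly.
from astropy.cosmology import Planck15

z = 7.54                                    # redshift of J1342+0928
age = Planck15.age(z).to("Myr")             # cosmic age when light left
frac = float(age / Planck15.age(0).to("Myr"))
print(f"age at z = {z}: {age:.0f} ({100 * frac:.0f}% of today's age)")
```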

The next mystery is to explain how a black hole weighing 800 million solar masses could form so early in the universe. Black holes grow as they accrete mass surrounding them, but the accreting mass radiates and this radiation pushes other accreting mass away from the black hole. As a result, there is a theoretical limit on the amount of matter a black hole can accrete. Forming a black hole the size of J1342+0928 within such accretion limits would require black holes in the very early universe with sizes that challenge current theoretical models. One possible explanation, however, is that this particular black hole is a peculiar case and was formed by a merger of several smaller black holes.

Thanks to continuous data taking from a range of existing telescopes and upcoming new instrumentation, we can expect more objects like J1342+0928 or even older to be discovered, offering a probe of the universe at even earlier stages. The discovery of further objects would allow a more exact date for the period of reionisation, which can be compared with indirect measurements coming from the cosmic microwave background. At the same time, more measurements will show if black holes of this size in the early universe are just an anomaly or if there are more. In either case, such observations would provide important input for research on early black hole formation.

HAWC clarifies cosmic positron excess https://cerncourier.com/a/hawc-clarifies-cosmic-positron-excess/ Mon, 15 Jan 2018 09:15:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/hawc-clarifies-cosmic-positron-excess/ New measurements by the High-Altitude Water Cherenkov (HAWC) experiment hints at a more exotic origin of the positron excess.

Since 2008, astronomers have been puzzled by a mysterious feature in the cosmic-ray energy spectrum. Data from the PAMELA satellite showed a significant increase in the ratio of positrons to electrons at energies above 10 GeV. This unexpected positron excess was subsequently confirmed by both the Fermi-LAT satellite and the AMS-02 experiment onboard the ISS (CERN Courier December 2016 p26–30), sparking many explanations, ranging from dark-matter annihilation to positron emission by nearby pulsars. New measurements by the High-Altitude Water Cherenkov (HAWC) experiment now seem to rule out the second explanation, hinting at a more exotic origin of the positron excess.

Although standard cosmic-ray propagation models predict the production of positrons from interactions of high-energy protons travelling through the galaxy, the positron fraction is expected to decrease as a function of energy. One explanation for the excess is the annihilation of dark-matter particles with masses of several TeV, which would result in a bump in the electron–positron fraction, with the measured increase perhaps being the rising part of such a bump. According to other models, however, the excess is the result of positron production by astrophysical sources such as pulsars (rapidly spinning neutron stars). Since these charged particles lose energy due to interactions with interstellar magnetic and radiation fields they must be produced relatively close to Earth, making nearby pulsars a prime suspect.
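
The "relatively close to Earth" requirement can be quantified with a simple diffusion-loss estimate (ours, using propagation parameters typical of galactic cosmic-ray models): a 100 GeV positron cools within a few million years, during which diffusion carries it only a few thousand light-years, a horizon that comfortably includes Geminga at roughly 800 light-years:

```python
# Diffusion-loss horizon for cosmic-ray positrons (our sketch; the
# diffusion coefficient and field energy density are assumed values
# typical of galactic-propagation models).
import math

SIGMA_T = 6.652e-29   # Thomson cross-section, m^2
M_E_C2 = 8.187e-14    # electron rest energy, J
C = 2.998e8           # m/s
EV = 1.602e-19        # J per eV
YR = 3.156e7          # s
LY = 9.461e15         # m

E_GeV = 100.0                                   # positron energy
u = 1.0 * EV * 1e6                              # ~1 eV/cm^3 in J/m^3
gamma = E_GeV * 1e9 * EV / M_E_C2
t_cool = gamma * M_E_C2 / ((4 / 3) * SIGMA_T * C * gamma**2 * u)

D = 1e28 * E_GeV**0.33 * 1e-4                   # cm^2/s -> m^2/s, assumed
horizon = math.sqrt(2.0 * D * t_cool)           # random-walk distance
print(f"t_cool ~ {t_cool / YR:.1e} yr, horizon ~ {horizon / LY:.0f} ly")
```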

HAWC, situated near the city of Puebla in Mexico, detects the charged particles created in the Earth’s atmosphere when high-energy photons collide with atmospheric nuclei. The charged particles produced in the resulting shower generate Cherenkov radiation in HAWC’s 300 water tanks, and the detector’s high-altitude location makes it the most sensitive survey instrument for astrophysical photons in the TeV range. This allows the study of TeV-scale photon emission from nearby pulsars, such as Geminga and PSR B0656+14, to investigate whether these objects could be responsible for the positron excess.

Pulsars are thought to emit electrons and positrons with energies up to several hundred TeV, which diffuse into the interstellar medium, but the details of the emission, acceleration and propagation of these leptons are not well understood. The TeV photons measured by HAWC are produced as the electrons and positrons emitted by the pulsars interact with low-energy photons in the interstellar medium. One can therefore use the intensity of the TeV photon emission and the size of the emitting region to indirectly measure the high-energy positrons. The HAWC data show the large emitting regions of both the pulsars Geminga and PSR B0656+14 (see figure). The spectral and spatial features of the TeV emission were then inserted into a diffusion model for the positrons, allowing the team to calculate the positron flux from these sources reaching Earth. The results, published in Science, indicate that this flux is significantly smaller than the excess measured by PAMELA and AMS-02.
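The logic can be sketched with order-of-magnitude numbers (the diffusion coefficient and energy-loss constant below are representative assumptions for illustration, not HAWC’s fitted values): positrons of energy E diffuse a distance of roughly sqrt(4 D t_cool) before radiating their energy away, and if that distance is much smaller than the pulsar’s distance, few of them can reach Earth.

```python
# Diffusion-versus-distance estimate with assumed, representative inputs.
import math

pc = 3.086e18          # cm
D  = 4.5e26            # cm^2/s, assumed diffusion coefficient at 100 GeV
b  = 1.4e-16           # GeV^-1 s^-1, assumed energy-loss constant
E  = 100.0             # GeV, energies relevant to the measured excess

t_cool = 1.0 / (b * E)                 # synchrotron + inverse-Compton losses
L_diff = math.sqrt(4 * D * t_cool)     # typical diffusion distance

print(f"cooling time     ~ {t_cool / 3.156e7 / 1e6:.1f} Myr")   # ~2 Myr
print(f"diffusion length ~ {L_diff / pc:.0f} pc (Geminga is ~250 pc away)")
```

With diffusion this slow, positrons emitted by Geminga would lose most of their energy before covering the distance to Earth.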

These indirect measurements of the positron emission appear to rule out a significant contribution to the local positron flux from these two pulsars, making it unlikely that they are the origin of the positron excess. More exotic explanations such as dark matter, or other astrophysical sources such as micro-quasars and supernova remnants, are not ruled out, however. Results from gamma-ray observations of such sources, along with more detailed measurements of the lepton flux at even higher energies by AMS-02, DAMPE or CALET, are therefore highly anticipated to fully solve the mystery of the cosmic positron excess.

The post HAWC clarifies cosmic positron excess appeared first on CERN Courier.

]]>
News New measurements by the High-Altitude Water Cherenkov (HAWC) experiment hint at a more exotic origin of the positron excess. https://cerncourier.com/wp-content/uploads/2018/01/CCast1_01_18-2-NEW.jpg
First cosmic-ray results from CALET on the ISS https://cerncourier.com/a/first-cosmic-ray-results-from-calet-on-the-iss/ Fri, 10 Nov 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/first-cosmic-ray-results-from-calet-on-the-iss/ The CALorimetric Electron Telescope (CALET) has released its first results concerning the nature of high-energy cosmic rays.

The post First cosmic-ray results from CALET on the ISS appeared first on CERN Courier.

]]>

The CALorimetric Electron Telescope (CALET), a space mission led by the Japan Aerospace Exploration Agency with participation from the Italian Space Agency (ASI) and NASA, has released its first results concerning the nature of high-energy cosmic rays.

Having docked with the International Space Station (ISS) on 25 August 2015, CALET is carrying out a full science programme with long-duration observations of high-energy charged particles and photons coming from space. It is the second high-energy experiment operating on the ISS following the deployment of AMS-02 in 2011. During the summer of 2017 a third experiment, ISS-CREAM, joined these two. Unlike AMS-02, CALET and ISS-CREAM have no magnetic spectrometer and therefore measure the inclusive electron and positron spectrum. CALET’s homogeneous calorimeter is optimised to measure electrons, and one of its main science goals is to measure the detailed shape of the electron spectrum.

Due to the large radiative losses during their travel in space, high-energy cosmic electrons are expected to originate from regions relatively close to Earth (of the order of a few thousand light-years). Yet their origin is still unknown. The shape of the spectrum and the anisotropy in the arrival direction might contain crucial information as to where and how electrons are accelerated. It could also provide a clue on possible signatures of dark matter – for example, the presence of a peak in the spectrum might tell us about a possible dark-matter decay or annihilation with an electron or positron in the final state – and shed light on the intriguing electron and positron spectra reported by AMS-02 (CERN Courier December 2016 p26).

To pinpoint possible spectral features on top of the overall power-law energy dependence of the spectrum, CALET was designed to measure the energy of the incident particle with very high resolution and with a large proton rejection power, well into the TeV energy region. This is provided by a thick homogeneous calorimeter preceded by a high-granularity pre-shower with imaging capabilities, for a total thickness of 30 radiation lengths at normal incidence. The calibration of the two instruments is key to controlling the energy scale, which is why CALET – a CERN-recognised experiment – performed several calibration tests at CERN.

The first data from CALET concern a measurement of the inclusive electron and positron spectrum in the energy range from 10 GeV to 3 TeV, based on about 0.7 million candidates (1.3 million in full acceptance). Above an energy of 30 GeV the spectrum can be fitted with a single power law with a spectral index of –3.152±0.016. A possible structure observed above 100 GeV requires further investigation with increased statistics and refined data analysis. Beyond 1 TeV, where a roll-off of the spectrum is expected and low statistics is an issue, electron data are now being carefully analysed to extend the measurement. CALET has been designed to measure electrons up to around 20 TeV and hadrons up to an energy of 1 PeV.
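The fit itself is simple: a single power law is a straight line in log–log space. A minimal sketch with synthetic data (the normalisation and scatter are invented; only the fitted index of about −3.152 comes from the article):

```python
# Power-law fit in log-log space, using synthetic (not CALET) data.
import numpy as np

rng = np.random.default_rng(1)
E = np.logspace(np.log10(30), np.log10(3000), 20)        # GeV
flux = 1e-4 * E**-3.152 * rng.normal(1.0, 0.05, E.size)  # 5% scatter assumed

gamma, logA = np.polyfit(np.log(E), np.log(flux), 1)     # straight-line fit
print(f"fitted spectral index: {gamma:.3f}")             # ~ -3.15
```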

CALET is a powerful space observatory with the ability to identify cosmic nuclei from hydrogen to elements heavier than iron. It also has a dedicated gamma-ray-burst instrument (CGBM) that so far has detected bursts at an average rate of one every 10 days in the energy range of 7 keV–20 MeV. The search for electromagnetic counterparts of gravitational waves (GWs) detected by the LIGO and Virgo observatories proceeds around the clock thanks to a special collaboration agreement with LIGO and Virgo. Upper limits on X-ray and gamma-ray counterparts of the GW151226 event were published and further research on GW follow-ups is being carried out. Space-weather studies related to the relativistic electron precipitation (REP) from the Van Allen belts have also been released.

With more than 500 million triggers collected so far and an expected extension of the observation time on the ISS to five years, CALET is likely to produce a wealth of interesting results in the near future.

The post First cosmic-ray results from CALET on the ISS appeared first on CERN Courier.

]]>
News The CALorimetric Electron Telescope (CALET) has released its first results concerning the nature of high-energy cosmic rays. https://cerncourier.com/wp-content/uploads/2018/06/CCnew4_10_17.jpg
Extreme cosmic rays reveal clues to origin https://cerncourier.com/a/extreme-cosmic-rays-reveal-clues-to-origin/ Fri, 10 Nov 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/extreme-cosmic-rays-reveal-clues-to-origin/ The Pierre Auger collaboration has published results showing that the arrival direction of ultra-high-energy cosmic rays (UHECRs) is far from uniform.

The post Extreme cosmic rays reveal clues to origin appeared first on CERN Courier.

]]>

The energy spectrum of cosmic rays continuously bombarding the Earth spans many orders of magnitude, with the highest energy events topping 10⁸ TeV. Where these extreme particles come from, however, has remained a mystery since their discovery more than 50 years ago. Now the Pierre Auger collaboration has published results showing that the arrival direction of ultra-high-energy cosmic rays (UHECRs) is far from uniform, giving a clue to their origins.

The discovery in 1963 at the Volcano Ranch experiment of cosmic rays with energies exceeding one million times the energy of the protons in the LHC raised many questions. Not only is the charge of these hadronic particles unknown, but the acceleration mechanisms required to produce UHECRs and the environments that can host these mechanisms are still being debated. Proposed origins include sources in the galactic centre, extreme supernova events, mergers of neutron stars, and extragalactic sources such as blazars. Unlike the case with photons or neutrinos, the arrival direction of charged cosmic rays does not point directly towards their origin because, despite their extreme energies, their paths are deflected by magnetic fields both inside and outside our galaxy. Since the deflection reduces as the energy goes up, however, some UHECRs with the highest energies might still contain information about their arrival direction.

At the Pierre Auger Observatory, cosmic rays are detected using a vast array of detectors spread over an area of 3000 km² near the town of Malargüe in western Argentina. Like the first cosmic-ray detectors in the 1960s, the array measures the air showers induced as the cosmic rays interact with the atmosphere. The arrival times of the particles, measured with GPS receivers, are used to determine the direction from which the primary particles came within approximately one degree.
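A minimal sketch of plane-front timing reconstruction (illustrative only – the tank positions are hypothetical and this is not the observatory’s actual reconstruction code): a shower front travelling at light speed along direction n reaches detector i at t_i = t0 − (x_i · n)/c, so the direction follows from a linear fit to the arrival times.

```python
# Plane shower-front direction from arrival times (hypothetical geometry).
import numpy as np

c = 299_792_458.0                                 # m/s
n_true = np.array([0.3, -0.2, -np.sqrt(1 - 0.3**2 - 0.2**2)])  # downward

pos = np.array([[0., 0., 0.], [1500., 0., 0.],    # assumed tank positions
                [0., 1500., 0.], [1500., 1500., 0.]])
t = -(pos @ n_true) / c                           # noiseless times, t0 = 0

# With all tanks at z = 0, only n_x, n_y (and t0) are constrained;
# n_z then follows from the unit-norm condition.
A = np.column_stack([-pos[:, 0] / c, -pos[:, 1] / c, np.ones(len(pos))])
(nx, ny, t0), *_ = np.linalg.lstsq(A, t, rcond=None)
nz = -np.sqrt(1 - nx**2 - ny**2)                  # shower moves downwards
print(nx, ny, nz)                                 # recovers n_true
```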


The collaboration studied the arrival direction of particles with energies in the range 4–8 EeV and for particles with energies exceeding 8 EeV. In the former data set, no clear anisotropy was observed, whereas for particles with energies above 8 EeV a dipole structure was observed (see figure), indicating that more particles come from a particular part of the sky. Since the maximum of the dipole is outside the galactic plane, the measured anisotropy is consistent with an extragalactic nature. The collaboration reports that the maximum, when taking into account the deflection of magnetic fields, is consistent with a region in the sky known to have a large density of galaxies, supporting the view that UHECRs are produced in other galaxies. The lack of anisotropy at lower energies could be a result of the higher deflection of these particles in the galactic magnetic field.

The presented dipole measurement is based on a total of 30,000 cosmic rays measured by the Pierre Auger Observatory, which is currently being upgraded. Although the results indicate an extragalactic origin, the particular source responsible for accelerating these particles remains unknown. The upgraded observatory will enable more data to be acquired and allow a more detailed investigation of the currently studied energy ranges. It will also open the possibility to explore even higher energies where the magnetic-field deflections become even smaller, making it possible to study the origin of UHECRs, their acceleration mechanism and the magnetic fields that deflect them.

The post Extreme cosmic rays reveal clues to origin appeared first on CERN Courier.

]]>
News The Pierre Auger collaboration has published results showing that the arrival direction of ultra-high-energy cosmic rays (UHECRs) is far from uniform. https://cerncourier.com/wp-content/uploads/2018/06/CCast1_10_17.jpg
Gravitational waves and the birth of a new science https://cerncourier.com/a/gravitational-waves-and-the-birth-of-a-new-science/ Fri, 10 Nov 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/gravitational-waves-and-the-birth-of-a-new-science/ The era of multi-messenger astronomy is here, calling for next-generation gravitational-wave observatories.

The post Gravitational waves and the birth of a new science appeared first on CERN Courier.

]]>

On 14 September 2015, the world changed for those of us who had spent years preparing for the day when we would detect gravitational waves. Our overarching goal was to directly detect gravitational radiation, finally confirming a prediction made by Albert Einstein in 1916. A year after he had published his theory of general relativity, Einstein predicted the existence of gravitational waves in analogy to electromagnetic waves (i.e. photons) that propagate through space from accelerating electric charges. Gravitational waves are produced by astrophysical accelerations of massive objects, but travel through space as oscillations of space–time itself.

It took 40 years before the theoretical community agreed that gravitational waves are real and an integral part of general relativity. At that point, proving they exist became an experimental problem and experiments using large bars of aluminium were instrumented to detect a tiny change in shape from the passage of a gravitational wave. Following a vigorous worldwide R&D programme, a potentially more sensitive technique – suspended-mass interferometry – has superseded resonant-bar detectors. There was limited theoretical guidance regarding what sensitivity would be required to achieve detections from known astrophysical sources. But various estimates indicated that a strain sensitivity ΔL/L of approximately 10⁻²¹ caused by the passage of a gravitational wave would be needed to detect known sources such as binary compact objects (binary black-hole mergers, binary neutron-star systems or binary black-hole neutron-star systems). That’s roughly equivalent to measuring the Earth–Sun separation to a precision of the proton radius.
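As a quick numerical aside (simple arithmetic, using LIGO’s real 4 km arm length), the displacement corresponding to that strain is:

```python
# Arm-length change corresponding to the target strain sensitivity.
h = 1e-21          # target strain
L = 4.0e3          # m, LIGO arm length
print(f"arm-length change: {h * L:.0e} m")   # ~4e-18 m, smaller than the
                                             # ~0.9e-15 m proton radius
```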

The US National Science Foundation approved the construction of the Laser Interferometer Gravitational-Wave Observatory (LIGO) in 1994 at two locations: Hanford in Washington state and Livingston in Louisiana, 3000 km away. At that time, there was a network of cryogenic resonant-bar detectors spread around the world, including one at CERN, but suspended-mass interferometers have the advantage of broadband frequency acceptance (basically the audio band, 10–10,000 Hz) and a factor-1000 longer arms, making it feasible to measure a smaller ΔL/L. Earth-based detectors are sensitive to the most violent events in the universe, such as the merger of compact objects, supernovae and gamma-ray bursts. The detailed interferometric concept and innovations had already been demonstrated during the 1980s and 1990s in a 30 m prototype in Garching, Germany, and a 40 m prototype at Caltech in the US. Nevertheless, these prototype interferometers were at least four orders of magnitude away from the target sensitivity.

Strategic planning

We built a flexible technical infrastructure for LIGO such that it could accommodate a future major upgrade (Advanced LIGO) without rebuilding too much infrastructure. Initial LIGO had mostly used demonstrated technologies to assure technical success, despite the large extrapolation from the prototype interferometers. After completing Initial LIGO construction in about 2000, we undertook an ambitious R&D programme for Advanced LIGO. Over a period of about 10 years, we performed six observational runs with Initial LIGO, each time searching for gravitational waves with improved sensitivity. Between each run, we made improvements, ran again, and eventually reached our Initial LIGO design sensitivity. But, unfortunately, we failed to detect gravitational waves.

We then undertook a major upgrade to Advanced LIGO, which had the goal of improving the sensitivity over Initial LIGO by at least a factor of 10 over the entire frequency range. To accomplish this, we developed a more powerful Nd:YAG laser system to reduce shot noise at high frequencies, a multiple suspension system and larger test masses to reduce thermal noise in the middle frequencies, and introduced active seismic isolation, which reduced seismic noise at frequencies of around 40 Hz by a factor of 100 (CERN Courier January/February 2017 p34). This proved key to our discovery, two years ago, of our first binary black-hole merger – two roughly 30-solar-mass black holes, whose signal is concentrated at low frequencies. The increased sensitivity to such events had expanded the volume of the universe searched by a factor of up to 10⁶, enabling a binary black-hole-merger detection coincidence within 6 ms between the Livingston and Hanford sites.

We recorded the last 0.2 seconds of this astrophysical collision: the final merger, coalescence and “ring-down” phases, constituting the first direct observation of gravitational waves. The waveform was accurately matched by numerical-relativity calculations, with a signal-to-noise ratio of 24:1 and a statistical significance easily exceeding 5σ. Beyond confirming Einstein’s prediction, this event represented the first direct observation of black holes, and established that stellar black holes exist in binary systems and that they merge within the lifetime of the universe (CERN Courier January/February 2017 p16). Surprisingly, the two black holes were each about 30 times the mass of the Sun – much heavier than expectations from astrophysics.

Run 2 surprises

Similar to Initial LIGO, we plan to reach Advanced LIGO design sensitivity in steps. After completion of the four-month-long first data run (called O1) in January 2016, we improved the binary-neutron-star range of the interferometer at the Livingston site from 60 Mpc to 100 Mpc, but fell somewhat short at Hanford due to some technical issues, which we decided to fix after LIGO’s second observational run (O2). We have now reported a total of four black-hole-merger events and are beginning to determine characteristics such as mass distributions and spin alignments that will help distinguish between the different possibilities for the origin of such heavy black holes. The leading ideas are that they originate in low-metallicity parts of the universe, were produced in dense clusters, or are primordial. They might even constitute some of the dark matter.


Advanced LIGO’s O2 run ended in August this year. Although it seemed almost impossible that it could be as exciting as O1, several more black-hole binary mergers have been reported, including one after the Virgo interferometer in Italy joined O2 in August and dramatically improved our ability to locate the direction of the source. In addition, the orientation of Virgo relative to the two LIGO interferometers enabled the first information on the polarisation of the gravitational waves. Together with other measurements, this allowed us to limit the existence of an additional tensor term in general relativity and showed that the LIGO–Virgo event is consistent with the predicted two-state polarisation picture.

Then, on 17 August, we really hit the jackpot: our interferometers detected a neutron-star binary merger for the first time. We observed a coincidence signal in both LIGO and Virgo that had strikingly different properties from the black-hole binary mergers we had spotted earlier. Like those, this event entered our detector at low frequencies and propagated to higher frequencies, but lasted much longer (around 100 s) and reached much higher frequencies. This is because the masses in the binary system were much lower and, in fact, are consistent with being neutron stars. A neutron star results from the collapse of a star into a compact object of between 1.1 and 1.6 solar masses. We have identified our event as the merger of two neutron stars, each about the size of Geneva, but having several hundred thousand times the mass of the Earth.
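The different durations follow from the leading-order chirp formula, in which the time a binary spends above a low-frequency cutoff f is t = (5/256) (G·Mc/c³)^(−5/3) (πf)^(−8/3), with Mc the chirp mass. A sketch with representative values (the chirp masses and the 24 Hz cutoff are assumptions for illustration, not the measured parameters):

```python
# Time in band above a cutoff frequency, leading-order chirp formula.
import math

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def time_in_band(m_chirp_solar, f_low=24.0):      # assumed cutoff (Hz)
    gm = G * m_chirp_solar * M_sun / c**3         # chirp mass in seconds
    return (5 / 256) * gm**(-5 / 3) * (math.pi * f_low)**(-8 / 3)

print(f"1.2 Msun chirp mass: {time_in_band(1.2):6.0f} s")   # ~100 s
print(f"30 Msun chirp mass:  {time_in_band(30.0):6.2f} s")  # ~0.5 s
```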

As we accumulate more events and improve our ability to record their waveforms, we look forward to studying nuclear physics under these extreme conditions. This latest event was the first observed gravitational-wave transient phenomenon also to have electromagnetic counterparts, representing multi-messenger astronomy. Combining the LIGO and Virgo signals, the source of the event was narrowed down to a location in the sky of about 28 square degrees, and it was soon recognised that the Fermi satellite had detected a gamma-ray burst shortly afterwards in the same region. A large and varied number of astronomical observations followed. The combined set of observations has resulted in an impressive array of new science and papers on gamma-ray bursts, kilonovae, gravitational-wave measurements of the Hubble constant, and more. The result even supports the idea that binary neutron-star collisions are responsible for the very heavy elements, such as platinum and gold.

Going deeper

Much has happened since our first detection, and this portends well for the future of this new field. Both LIGO and Virgo entered into a 15-month shutdown at the end of August to further improve noise levels and raise their laser power. At present, Advanced LIGO is about a factor of two below its design goal (corresponding to a factor of eight in event rates). We anticipate reaching design sensitivity by about 2020, after which the KAGRA interferometer in Japan will join us. A third LIGO interferometer (LIGO-India) is also scheduled for operation in around 2025. These observatories will constitute a network offering good global coverage and will accumulate a large sample of binary merger events, achieve improved pointing accuracy for multi-messenger astronomy, and hopefully will observe other sources of gravitational waves. This will not be the end of the story. Beyond the funded programme, we are developing technologies to improve our instruments beyond Advanced LIGO, including improved optical coatings and cryogenic test masses.

In the longer range, concepts and designs already exist for next-generation interferometers, having typically 10 times better sensitivity than will be achieved in Advanced LIGO and Virgo. In Europe, a mature concept called the Einstein Telescope is an underground interferometer facility in a triangular configuration (see panel below), and in the US a very long (approximately 40 km) LIGO-like interferometer is under study. The science case for such next-generation devices is being developed through the Gravitational Wave International Committee (GWIC), which is the gravitational-wave field’s equivalent to the International Committee for Future Accelerators (ICFA) in particle physics. Although the science case appears very strong scientifically and technical solutions seem feasible, these are still very early days and many questions must be resolved before a new generation of detectors is proposed.

To fully exploit the new field of gravitational-wave science, we must go beyond ground-based detectors and into the pristine seismic environment of space, where different gravitational-wave sources will become accessible. As described earlier, the lowest frequencies accessible by Earth-based observatories are about 10 Hz. The Laser Interferometer Space Antenna (LISA), a European Space Agency project scheduled for launch in the early 2030s, was approved earlier this year and will cover frequencies around 10⁻¹–10⁻⁴ Hz. LISA will consist of three satellites separated by 2.5 × 10⁶ km in a triangular configuration and a heliocentric orbit, with light travelling continually along each arm to monitor the satellite separations for deviations from a passing gravitational wave. A test mission, LISA Pathfinder, was recently flown and demonstrated the key performance requirements for LISA in space (CERN Courier November 2017 p37).

Meanwhile, pulsar-timing arrays are being implemented to monitor signals from millisecond pulsars, with the goal of detecting low-frequency gravitational waves by studying correlations between pulsar arrival times. The sensitivity range of this technique is 10⁻⁶–10⁻⁹ Hz, where gravitational waves from massive black-hole binaries in the centres of merging galaxies with periods of months to years could be studied.

An ultimate goal is to study the Big Bang itself. Gravitational waves are not absorbed as they propagate and could potentially probe back to the very earliest times, while photons only take us to within 300,000 or so years after the Big Bang. However, we do not yet have detectors sensitive enough to detect early-universe signals. The imprint of gravitational waves on the cosmic microwave background has also been pursued by the BICEP2 experiment, but background issues so far mask a possible signal.

Although gravitational-wave science is clearly in its infancy, we have already learnt an enormous amount and numerous exciting opportunities lie ahead. These vary from testing general relativity in the strong-field limit to carrying out multi-messenger gravitational-wave astronomy over a wide range of frequencies – as demonstrated by the most recent and stunning observation of a neutron-star merger. Since Galileo first looked into a telescope and saw the moons of Jupiter, we have learnt a huge amount about the universe through modern-day electromagnetic astronomy. Now, we are beginning to look at the universe with a new probe and it does not seem to be much of a stretch to anticipate a rich new era of gravitational-wave science.

CERN LIGO–Virgo meeting weighs up 3G gravitational-wave detectors

Similar to particle physicists, gravitational-wave scientists are contemplating major upgrades to present facilities and developing concepts for next-generation observatories. Present-generation (G2) gravitational-wave detectors – LIGO in Hanford, Livingston and India, Virgo in Italy, GEO600 in Germany and KAGRA in Japan – are in different stages of development and have different capabilities (see main text), but all are making technical improvements to better exploit the science potential from gravitational waves over the coming years. As the network develops, the more accurate location information will enable the long-time dream of studying the same astrophysical event with gravitational waves and their electromagnetic and neutrino counterpart signals.

The case for making future, more sensitive next-generation gravitational-wave detectors is becoming very strong, and technological R&D and design efforts for 3G gravitational detectors may have interesting overlaps with both CERN capabilities and future directions. The 3G concepts have many challenging new features, including: making longer arms; going underground; incorporating squeezed quantum states; developing lower thermal-noise coatings; developing low-noise cryogenics; implementing Newtonian noise cancellation; incorporating adaptive controls; new computing capabilities and strategies; and new data-analysis methods.

In late August, coinciding with the end of the second Advanced LIGO observational run, CERN hosted a LIGO–Virgo collaboration meeting. On the final day, a joint meeting between LIGO–Virgo and CERN explored possible synergies between the two fields. It provided strong motivation for next-generation facilities in both particle and gravitational physics and revealed intriguing overlaps between them. On a practical level, the event identified issues facing both communities, such as geology and survey, vacuum and cryogenics, control systems, computing and governance.

The time for R&D, construction and commissioning is expected to be around a decade, with some problems verging on the intractable. It is planned to use cryogenics to bring mirrors to the temperature of a few kelvin. The mirrors themselves are coated using ion beams for deposition, to obtain a controlled reflectivity that must be uniform over areas 1 m in diameter. These mirrors work in an ultra-high vacuum, and residual gas-density fluctuations must be minimal along a vacuum cavity of several tens of kilometres, which will be the approximate footprint of the 3G scientific infrastructure.

Data storage and analysis is another challenge for both gravitational and particle physicists. Unlike the large experiments at the LHC, which count or measure energy deposition in millions of pixels at the detector level, interferometers continuously sample signals from hundreds of channels, generating a large amount of data consisting of waveforms. Data storage and analysis places major demands on the computing infrastructure, and analysis of the first gravitational-wave events called for the Grid infrastructure.
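To put a rough number on the data volume (every figure below is an assumption for illustration – the text says only “hundreds of channels”):

```python
# Order-of-magnitude interferometer data rate, all inputs assumed.
n_channels = 300          # "hundreds of channels"
sample_hz = 16_384        # a typical audio-band sampling rate
bytes_per_sample = 8      # double precision

rate = n_channels * sample_hz * bytes_per_sample
print(f"{rate / 1e6:.0f} MB/s, about {rate * 86400 / 1e12:.1f} TB per day")
```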

Interferometers have to be kept on an accurately controlled working point, with mirrors used for gravitational-wave detection positioned and oriented using a feedback control system, without introducing additional noise. Sensors and actuators are different in particle accelerators but the control techniques are similar.

Comparisons of the science capabilities, costs and technical feasibility for the next generation of gravitational-wave observatories are under active discussion, as is the question of how many 3G detectors will be needed worldwide and how similar or different they need be. Finally, there were discussions of how to form and structure a worldwide collaboration for the 3G detectors and how to manage such an ambitious project – similar to the challenge of building the next big particle-physics project after the LHC.

Barry Barish, the author of this feature, shared the 2017 Nobel Prize in Physics with Kip Thorne and Rainer Weiss for the discovery of gravitational waves (CERN Courier November 2017 p37).

The post Gravitational waves and the birth of a new science appeared first on CERN Courier.

]]>
Feature The era of multi-messenger astronomy is here, calling for next-generation gravitational-wave observatories. https://cerncourier.com/wp-content/uploads/2018/06/CCgrav1_10_17.jpg
First intermediate black-hole candidate https://cerncourier.com/a/first-intermediate-black-hole-candidate/ Fri, 13 Oct 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/first-intermediate-black-hole-candidate/ A group of researchers from Keio University in Japan has now shown strong evidence for the existence of an intermediate-mass black hole (IMBH) within the Milky Way.

The post First intermediate black-hole candidate appeared first on CERN Courier.

]]>

Since the prediction of black holes a century ago, numerous black-hole candidates have been found. These consist of both stellar-mass black holes, which have several times the mass of the Sun, and supermassive black holes (SMBHs), which weigh millions to billions of solar masses. While many candidates exist for stellar-mass black holes and SMBHs – the latter thought to occupy the centres of galaxies – candidates for black holes in the intermediate mass range have been lacking.

A group of researchers from Keio University in Japan has now shown strong evidence for the existence of an intermediate-mass black hole (IMBH) within the Milky Way, which could shed light on the formation of black holes and of our galaxy.

While there is a consensus that stellar-mass black holes form when massive stars die, the source of SMBHs – one of which is thought to be at the centre of the Milky Way – is not well known. It is believed that large galaxies such as the Milky Way grew to their current size by cannibalising smaller dwarf galaxies containing IMBHs at their centres. Finding a candidate IMBH would provide evidence for this theory.

First, using the Nobeyama radio telescope, the team detected a gas cloud in the Milky Way with a peculiar velocity profile, hinting that an IMBH exists near the centre of our galaxy. This then prompted a more precise observation of the area using the Atacama Submillimeter Telescope Experiment (ASTE) and Atacama Large Millimeter/submillimeter Array (ALMA) in Chile. The cloud, named CO-0.40-0.22, was found to consist of one dense cloud in the centre with a large velocity profile, surrounded by 20 smaller clouds, the velocity profiles of which are aligned. Since the probability of these clouds being aligned by chance is less than one part in 10⁸, it suggests there is some other object close to the cloud interacting with it. Within the gas cloud the data also revealed a point source emitting weak electromagnetic radiation at submillimetre wavelengths and none at shorter wavelengths, ruling out a massive star cluster.

Based on these striking observations, the group simulated the gravitational interactions of the cloud system and found that the measured velocity profiles are consistent with a gravitational kick from a dense object of 10⁵ solar masses. Combined with the lack of high-energy emission and the spectrum of the object measured in radio wavelengths, the object matches all the characteristics of an IMBH – the first one ever observed. Two further IMBH candidates are now under study. The finding opens a new research avenue in understanding both massive and supermassive black holes, and strengthens the hypothesis that our galaxy grew by cannibalising smaller ones.

The post First intermediate black-hole candidate appeared first on CERN Courier.

]]>
News A group of researchers from Keio University in Japan has now shown strong evidence for the existence of an intermediate-mass black hole (IMBH) within the Milky Way. https://cerncourier.com/wp-content/uploads/2018/06/CCast1_09_17.jpg
Strangeness in quark matter https://cerncourier.com/a/strangeness-in-quark-matter/ Fri, 22 Sep 2017 16:28:30 +0000 https://preview-courier.web.cern.ch?p=13385 17th edition of the International Conference on Strangeness in Quark Matter (SQM 2017) held at Utrecht University in the Netherlands.

The post Strangeness in quark matter appeared first on CERN Courier.

]]>

The 17th edition of the International Conference on Strangeness in Quark Matter (SQM 2017) was held from 10 to 15 July at Utrecht University in the Netherlands. The SQM series focuses on new experimental and theoretical developments on the role of strangeness and heavy-flavour production in heavy-ion collisions, and in astrophysical phenomena related to strangeness. This year’s SQM event attracted more than 210 participants from 25 countries, with female researchers making up 20% of attendees. A two-day-long graduate school on the role of strangeness in heavy-ion collisions, with 40 participants, preceded the conference.

The scientific programme consisted of 53 invited plenary talks, 70 contributed parallel talks and a poster session. Three discussion sessions provided scope for the necessary debates on crucial observables to characterise strongly interacting matter at extreme conditions of high baryon density and high temperature and to define future possible directions. One of the discussions centred on the production of hadron resonances and their vital interactions in the partonic and hadronic phase, which provide evidence for an extended hadronic lifetime even in small collision systems and might affect other QGP observables. Moreover, future astrophysical consequences for SQM following the recent detection of gravitational waves were outlined: gravitational waves from relativistic neutron-star collisions can serve as cosmic messengers for the phase structure and equation-of-state of dense and strange matter, quite similar to the environment created in relativistic heavy-ion collisions.

Representatives from all major collaborations at CERN’s Large Hadron Collider and Super Proton Synchrotron, Brookhaven’s Relativistic Heavy Ion Collider (RHIC), and the Heavy Ion Synchrotron SIS at the GSI Helmholtz Centre in Germany made special efforts to release new data at this conference. Thanks to the excellent performance of these accelerator facilities, a wealth of new data on the production of strangeness and heavy quarks in nuclear collisions have become available.

The post Strangeness in quark matter appeared first on CERN Courier.

]]>
Meeting report 17th edition of the International Conference on Strangeness in Quark Matter (SQM 2017) held at Utrecht University in the Netherlands. https://cerncourier.com/wp-content/uploads/2017/09/CCfac6_08_17.jpg
Study links solar activity to exotic dark matter https://cerncourier.com/a/study-links-solar-activity-to-exotic-dark-matter/ Fri, 22 Sep 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/study-links-solar-activity-to-exotic-dark-matter/ The temporal distribution of solar flares is correlated with the positions of the Earth, Mercury and Venus.

The post Study links solar activity to exotic dark matter appeared first on CERN Courier.

]]>

The origin of solar flares, powerful bursts of radiation appearing as sudden flashes of light, has puzzled astrophysicists for more than a century. The temperature of the Sun’s corona, measuring several hundred times hotter than its surface, is also a long-standing enigma.

A new study suggests that the solution to these solar mysteries is linked to a local action of dark matter (DM). If true, it would challenge the traditional picture of DM as being made of weakly interacting massive particles (WIMPs) or axions, and suggest that DM is not uniformly distributed in space, as is traditionally thought.

The study is not based on new experimental data. Rather, lead author Sergio Bertolucci, a former CERN research director, and collaborators base their conclusions on freely available data recorded over a period of decades by geosynchronous satellites. The paper presents a statistical analysis of the occurrences of around 6500 solar flares in the period 1976–2015 and of the continuous solar emission in the extreme ultraviolet (EUV) in the period 1999–2015. The temporal distribution of these phenomena, finds the team, is correlated with the positions of the Earth and two of its neighbouring planets: Mercury and Venus. Statistically significant (above 5σ) excesses of the number of flares with respect to randomly distributed occurrences are observed when one or more of the three planets find themselves in a slice of the ecliptic plane with heliocentric longitudes of 230°–300°. Similar excesses are observed in the same range of longitudes when the solar irradiance in the EUV region is plotted as a function of the positions of the planets.
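Schematically, the counting test behind such a significance works as follows (the observed count here is invented for illustration; only the ~6500 flares and the 70°-wide longitude slice come from the study, and the real analysis handles three planets and their varying orbital speeds):

```python
# Simple excess-counting significance for one planet, hypothetical counts.
import math

n_total = 6500                     # flares, 1976-2015
window = 70.0 / 360.0              # fraction of longitudes in 230-300 deg
expected = n_total * window        # expectation for random flare timing
observed = 1450                    # hypothetical observed count

z = (observed - expected) / math.sqrt(expected)   # Poisson significance
print(f"expected {expected:.0f}, observed {observed}: {z:.1f} sigma excess")
```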


These results suggest that active-Sun phenomena are not randomly distributed, but instead are modulated by the positions of the Earth, Venus and Mercury. One possible explanation, says the team, is the existence of a stream of massive DM particles with a preferred direction, coplanar to the ecliptic plane, that is gravitationally focused by the planets towards the Sun when one or more of the planets enter the stream. Such particles would need to have a wide velocity spectrum centred around 300 km s⁻¹ and interact with ordinary matter much more strongly than typical DM candidates such as WIMPs. The non-relativistic velocities of such DM candidates make planetary gravitational lensing more efficient and can enhance the flux of the particles by up to a factor of 10⁶, according to the team.

Co-author Konstantin Zioutas, spokesperson for the CAST experiment at CERN, accepts that this interpretation of the solar and planetary data is speculative – particularly regarding the mechanism by which a temporarily increased influx of DM actually triggers solar activity. However, he says, the long persisting failure to detect the ubiquitous DM might be due to the widely assumed small cross-section of its constituents with ordinary matter, or to erroneous DM modelling. “Hence, the so-far-adopted direct-detection concepts can lead us towards a dead end, and we might find that we have overlooked a continuous communication between the dark and the visible sector.”

Models of massive DM streaming particles that interact strongly with normal matter are few and far between, although the authors suggest that “antiquark nuggets” are best suited to explain their results. “In a few words, there is a large ‘hidden’ energy in the form of the nuggets,” says Ariel Zhitnitsky, who first proposed the quark-nugget dark-matter model in 2003. “In my model, this energy can be precisely released in the form of the EUV radiation when the anti-nuggets enter the solar corona and get easily annihilated by the light elements present in such a highly ionised environment.”

The study calls for further investigation, say the researchers. “It seems that the statistical analysis of the paper is accurate and the obtained results are rather intriguing,” says Rita Bernabei, spokesperson of the DAMA experiment, which in 1998 first claimed to have detected dark matter in the form of WIMPs on the basis of an observed seasonal modulation of a signal in their scintillation detector. “However, the paper appears to be mostly hypothetical in terms of this new type of dark matter.”

The team now plans to produce a full simulation of planetary lensing taking into account the simultaneous effect of all the planets in the solar system, and to extend the analysis to include sunspots, nano-flares and other solar observables. CAST, the axion solar telescope at CERN, will also dedicate a special data-taking period to the search for streaming DM axions.

“If true, our findings will provide a totally different view about dark matter, with far-reaching implications in particle and astroparticle physics,” says Zioutas. “Perhaps the demystification of the Sun could lead to a dark-matter solution also.”

The post Study links solar activity to exotic dark matter appeared first on CERN Courier.

]]>
News The temporal distribution of solar flares is correlated with the positions of the Earth, Mercury and Venus. https://cerncourier.com/wp-content/uploads/2017/09/CCnew2_08_17.png
Machine learning improves cosmic citizen science https://cerncourier.com/a/machine-learning-improves-cosmic-citizen-science/ Fri, 22 Sep 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/machine-learning-improves-cosmic-citizen-science/ The DECO team has developed a new machine-learning analysis.

The post Machine learning improves cosmic citizen science appeared first on CERN Courier.

]]>

Launched in 2014, the Distributed Electronic Cosmic-ray Observatory (DECO) enables Android smartphone cameras to detect cosmic rays. In response to the increasing number of events being recorded, however, the DECO team has developed a new machine-learning analysis that classifies 96% of events correctly.

Similar to detectors used in high-energy physics experiments, the semiconductors in smartphone camera sensors detect ionising radiation when charged particles traverse the depleted region of their sensor. The DECO app can spot three distinct types of charged-particle events: tracks, worms and spots (see image). Each event can be caused by a variety of particle interactions, from cosmic rays to alpha particles, and a handful of events can be expected every 24 hours or so.

These events have so far been classified by the users themselves, but the increasing number of images being collected meant there was a need for a more reliable computerised classification system. Due to the technological variations of smartphones and the orientation of the sensor when a cosmic ray strikes, traditional algorithms would have struggled to classify events.

The DECO team used advances in machine learning similar to those widely used in high-energy physics to design several deep neural-network architectures to classify the images. The best performing design, which contained over 14 million learnable parameters and was trained with 3.6 million images, correctly sorted 96% of 100 independent images. An iOS version of DECO is currently in the beta stage and is expected to be released within the next year.
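For flavour, a small convolutional classifier for the three event classes might be structured as follows (the layer sizes, input crop and training setup are assumptions for illustration, not DECO’s published architecture):

```python
# Illustrative three-class CNN (track / worm / spot); details assumed.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 1)),            # assumed image crop size
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),      # track, worm, spot
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```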

The post Machine learning improves cosmic citizen science appeared first on CERN Courier.

]]>
News The DECO team has developed a new machine-learning analysis. https://cerncourier.com/wp-content/uploads/2018/06/CCnew5_08_17.jpg
Optical survey pinpoints dark-matter structure https://cerncourier.com/a/optical-survey-pinpoints-dark-matter-structure/ Fri, 22 Sep 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/optical-survey-pinpoints-dark-matter-structure/ Dark Energy Survey (DES) will predict cosmological parameters such as the fractions of dark energy, dark matter and normal matter.

The post Optical survey pinpoints dark-matter structure appeared first on CERN Courier.

]]>

During the last two decades the WMAP and Planck satellites have produced detailed maps of the density distribution of the universe when it was only 380,000 years old – the moment electrons and protons recombined into neutral hydrogen, producing today’s cosmic microwave background (CMB). The CMB measurements show that the distribution of both normal and dark matter in the universe is inhomogeneous, which is explained via a combination of inflation, dark matter and dark energy: initial quantum fluctuations in the very early universe expanded and continued to grow as gravity pulled matter together while dark energy worked to force it apart. Data from the CMB have allowed cosmologists to predict a range of cosmological parameters such as the fractions of dark energy, dark matter and normal matter.

Now, using new optical measurements of the current universe from the international Dark Energy Survey (DES), these predictions can be tested independently. DES is an ongoing, five-year survey that aims to map 300 million galaxies and tens of thousands of galaxy clusters using a 570 megapixel camera to capture light from galaxies eight billion light-years away (see figure). The camera, one of the most powerful in existence, was built and tested at Fermilab in the US and is mounted on the 4 m Blanco telescope in Chile.


To measure how the clumps seen in the CMB evolved from the early universe into their current state, the DES collaboration first mapped the distribution of galaxies in the universe precisely. The researchers then produced detailed maps of the matter distribution using weak gravitational lensing, which measures small distortions of the optical image due to the mass between an observer and multiple sources. The galaxies observed by DES are elongated by only a few per cent due to lensing and, since galaxies are intrinsically elliptical, it is not possible to measure the lensing from individual galaxies; instead, the shear is extracted statistically, by averaging the shapes of many galaxies so that their random intrinsic ellipticities cancel.
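A toy numerical illustration of why the measurement must be statistical (all numbers invented): an individual galaxy’s apparent ellipticity is dominated by its intrinsic shape, but the intrinsic part averages away over many galaxies, leaving the few-per-cent shear.

```python
# Shape noise versus averaged shear, with invented numbers.
import numpy as np

rng = np.random.default_rng(0)
g_true = 0.02                              # few-per-cent lensing shear
e_int = rng.normal(0.0, 0.25, 100_000)     # intrinsic ellipticities
e_obs = e_int + g_true                     # observed = intrinsic + shear

err = e_obs.std() / np.sqrt(e_obs.size)
print(f"single galaxy: {e_obs[0]:+.3f}")                 # useless alone
print(f"mean of 1e5 galaxies: {e_obs.mean():+.4f} +/- {err:.4f}")
```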

The first year of DES data, which includes measurements of 26 million galaxies, has allowed researchers to measure cosmological parameters such as the matter density with a precision comparable to those made using the CMB data. The matter-density parameter, which indicates the total fraction of matter in the universe, measured using optical light is found to be fully compatible with Planck data based on measurements of microwave radiation emitted around 13 billion years ago. Combining the measurements of Planck and DES places further constraints on this crucial parameter, indicating that only about 30% of the universe consists of matter while the rest consists of dark energy. The results are also compatible with other important cosmological parameters such as the fluctuation amplitude, which indicates the amplitude of the initial density fluctuations, and further constrain measurements of the Hubble constant and even the sum of the neutrino masses.

The DES results allow for a fully independent measurement of parameters initially derived using a map of the early universe. With the DES data sample set to grow from 26 million to 300 million galaxies, cosmological parameters will be measured with even higher precision and allow more detailed comparisons with the CMB data.

The post Optical survey pinpoints dark-matter structure appeared first on CERN Courier.

]]>
News Dark Energy Survey (DES) will predict cosmological parameters such as the fractions of dark energy, dark matter and normal matter. https://cerncourier.com/wp-content/uploads/2018/06/CCast1_08_17.jpg
Evidence suggests all stars born in pairs https://cerncourier.com/a/evidence-suggests-all-stars-born-in-pairs/ Fri, 11 Aug 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/evidence-suggests-all-stars-born-in-pairs/ A new study suggests that all stars start their lives as part of a binary pair.

The post Evidence suggests all stars born in pairs appeared first on CERN Courier.

]]>

The reason why some stars are born in pairs while others are born singly has long puzzled astronomers. But a new study suggests that no special conditions are required: all stars start their lives as part of a binary pair. The result has implications not only in the field of star evolution but also for studies of binary neutron-star and binary black-hole formation. It also suggests that our own Sun was born together with a companion that has since disappeared.

Stars are born in dense molecular clouds measuring light-years across, within which denser regions can collapse under their own gravity to form high-density cores opaque to optical radiation, which appear as dark patches. When the densities reach the level where hydrogen fusion begins, the cores can form stars. Although young stars already emit radiation before the onset of the hydrogen-burning phase, it is absorbed in the dense clouds that surround them, making star-forming regions difficult to study. Yet, since clouds that absorb optical and infrared radiation re-emit it at much longer wavelengths, it is possible to probe them using radio telescopes.

Sarah Sadavoy of the Max Planck Institute for Astronomy in Heidelberg and Steven Stahler of the University of California at Berkeley used data from the Very Large Array (VLA) radio telescopes in New Mexico, together with micrometre-wavelength data from the James Clerk Maxwell Telescope (JCMT) in Hawaii, to study the dense gas clumps and the young stars forming in them in the Perseus cluster – a star-forming region about 600 light-years away. Data from the JCMT show the location of dense cores in the gas, while the VLA provides the location of the young stars within them.

Studying the multiplicity as well as the location of the young stars inside the dense regions, the researchers found a total of 19 binary systems, 45 single-star systems and five systems with a higher multiplicity. Focusing on the binary pairs, they observed that the youngest binaries typically have a large separation of 500 astronomical units (500 times the Sun–Earth distance). Furthermore, the young stars were aligned along the long axis of the elongated cloud. Older binary systems, with an age between 500,000 and one million years, were found typically to be closer together and separated around a random axis.

After cataloguing all the young stars, the team compared the observed star multiplicity and the features seen in the binary pairs to simulations of stars being formed either as single or binary systems. The only way the model could reproduce the data was if its starting conditions contained no single stars but only stars that started out as part of wide binaries, implying that all stars are formed as part of a binary system. After formation, the stars either move closer to one another into a close binary system or move away from each other. The latter option is likely to be what happened in the case of the Sun, its companion having drifted away long ago.

If indeed all stars are formed in pairs, it would have big implications for models of stellar birth rates in molecular clouds as well as for the formation of binary systems of compact objects. The studied nearby Perseus cluster could, however, just be a special case, and further studies of other star-forming regions are therefore required to know if the same conditions exist elsewhere in the universe.

The post Evidence suggests all stars born in pairs appeared first on CERN Courier.

]]>
News A new study suggests that all stars start their lives as part of a binary pair. https://cerncourier.com/wp-content/uploads/2018/06/CCast1_07_17.jpg
ESA gives green light for LISA https://cerncourier.com/a/esa-gives-green-light-for-lisa/ Mon, 10 Jul 2017 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/esa-gives-green-light-for-lisa/ The European Space Agency (ESA) gave the official go-ahead for the Laser Interferometer Space Antenna (LISA).

The post ESA gives green light for LISA appeared first on CERN Courier.

]]>

On 20 June the European Space Agency (ESA) gave the official go-ahead for the Laser Interferometer Space Antenna (LISA), which will comprise a trio of satellites to detect gravitational waves in space. LISA is the third mission in ESA’s Cosmic Vision plan, set to last for the next two decades, and has been given a launch date of 2034.

Predicted a century ago by general relativity, gravitational waves are vibrations of space–time that were first detected by the ground-based Laser Interferometer Gravitational-Wave Observatory (LIGO) in September 2015. While upgrades to LIGO and other ground-based observatories are planned, LISA will access a much lower-frequency region of the gravitational-wave universe. Three craft, separated by 2.5 million km in a triangular formation, will follow Earth in its orbit around the Sun, waiting for their separations to be distorted by a tiny fractional amount by a passing gravitational wave.

Although highly challenging experimentally, a LISA test mission called Pathfinder has recently demonstrated key technologies needed to detect gravitational waves from space (CERN Courier January/February 2017 p34). These include free-falling test masses linked by lasers and isolated from all external and internal forces except gravity. LISA Pathfinder concluded its pioneering mission at the end of June, as LISA enters a more detailed phase of study. Following ESA’s selection, the design and costing of the LISA mission can be completed. The project will then be proposed for “adoption” before construction begins.

Following the first and second detections of gravitational waves by LIGO in September and December 2015, on 1 June the collaboration announced the detection of a third event (Phys. Rev. Lett. 118 221101). Like the previous two, it is thought that “GW170104” – the signal for which arrived on Earth on 4 January – was produced when two black holes merged into a larger one billions of years ago.

The post ESA gives green light for LISA appeared first on CERN Courier.

]]>
News The European Space Agency (ESA) gave the official go-ahead for the Laser Interferometer Space Antenna (LISA). https://cerncourier.com/wp-content/uploads/2018/06/CCnew13_06_17.jpg
Astronomers spot first failed supernova https://cerncourier.com/a/astronomers-spot-first-failed-supernova/ Mon, 10 Jul 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/astronomers-spot-first-failed-supernova/ Findings suggest the star N6946-BH1 directly collapsed into black hole.

The post Astronomers spot first failed supernova appeared first on CERN Courier.

]]>

Massive stars are traditionally expected to end their life cycle by triggering a supernova, a violent event in which the stellar core collapses into a neutron star, potentially followed by a further collapse into a black hole. During this process, a shock wave ejects large amounts of material from the star into interstellar space with large velocities, producing heavy elements in the process, while the supernova outshines all the stars in its host galaxy combined.

In the past few years, however, there has been mounting evidence that not all massive-star deaths are accompanied by these catastrophic events. Instead, it seems that for some stars only a small part of their outer layers is ejected before the rest of the volume collapses into a massive black hole. For instance, there are hints that the birth rate and supernova rate of massive stars do not match. Furthermore, results from the LIGO gravitational-wave observatory in the US indicate the existence of black holes with masses more than 30 times that of the Sun, which is easier to explain if stars can collapse without a large explosion.


Motivated by this indirect evidence, researchers from Ohio State University began a search for stars that quietly form a black hole without triggering a supernova. Using the Large Binocular Telescope (LBT) in Arizona, in 2015 the team identified its first candidate. The star, called N6946-BH1, was approximately 25 times more massive than the Sun and lived in the Fireworks galaxy, which is known for hosting a large number of supernovae. Having previously shown a stable luminosity, the star became brighter during 2009, although not at the level expected for a supernova, before completely disappearing at optical wavelengths in 2010 (see image).

The lack of emission observed by the LBT triggered follow-up searches for the star with both the Hubble Space Telescope (HST) and the Spitzer Space Telescope (SST). While the HST found no sign of the star at optical wavelengths, the SST did observe infrared emission. A careful analysis of the data disfavoured alternative explanations, such as a large dust cloud obscuring the optical emission from the star, and the infrared data were shown to be compatible with emission from remaining matter falling into a black hole.

If the star did indeed directly collapse into a black hole, as these findings suggest, the in-falling matter is expected to radiate in the X-ray region. The team is therefore waiting for observations from the space-based Chandra X-ray Observatory to search for this emission.

If confirmed in X-ray data, this result would constitute the first observation of the birth of a black hole and the first observation of a failed supernova. The results would explain why we observe fewer supernovae than expected, and could reveal the origin of the massive black holes responsible for the gravitational waves seen by LIGO, in addition to having implications for the production of heavy elements in the universe.

Survey reveals edge of dark-matter halos https://cerncourier.com/a/survey-reveals-edge-of-dark-matter-halos/ Fri, 19 May 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/survey-reveals-edge-of-dark-matter-halos/ Results show that the density of dark matter in a halo does not gradually fall off with distance, as might be expected, but instead exhibits a sharp edge.


Gravitational-lensing measurements indicate that clusters of galaxies are surrounded by large halos of dark matter. By studying the distribution and colour of galaxies inside galaxy clusters using data from the Sloan Digital Sky Survey (SDSS), researchers have now measured a new feature of the shape of these halos. The results show that the density of dark matter in a halo does not gradually fall off with distance, as might be expected, but instead exhibits a sharp edge.

According to the standard cosmological model, dark-matter halos are the result of small perturbations in the density of the early universe. Over time, and under the influence of gravity, these perturbations grew into large dense clumps that affect surrounding matter: galaxies in the vicinity of a halo will initially all move away due to the expansion of the universe, but gravity eventually causes the matter to fall towards and then orbit the halo. Studying the movements of the matter inside halos therefore provides an indirect measurement of the interaction between normal and dark matter, allowing researchers to probe new physics such as dark-matter interactions, dark energy and modifications to gravity.

Using the SDSS galaxy survey, Bhuvnesh Jain and Eric Baxter from the University of Pennsylvania and colleagues at other institutes report new evidence for an edge-like feature in the density profile of galaxies within a halo. The large amount of SDSS data available allowed a joint analysis of thousands of galaxy clusters, each containing thousands of galaxies, revealing an edge inside clusters in agreement with simulations based on “splash-back” models. The edge is associated with newly accreted matter which, after falling into the halo, slows down as it reaches the far point of its elliptical orbit before falling back towards the halo centre. This “splash-back” leads to a build-up of matter at the edge of the halo and a steep fall-off in density just outside this radius.
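The expected signature – a logarithmic density slope that steepens sharply near the splash-back radius before flattening onto the infalling material – can be illustrated with a toy profile. The sketch below uses made-up radii and normalisations (loosely in the spirit of truncated-halo profiles from the splash-back literature), not the paper’s measured values:

```python
import numpy as np

# Toy density profile with a "splash-back" truncation: an NFW-like inner
# halo cut off at an assumed radius r_t, plus a shallow infalling outer
# term. Radii are in units of R_200m; all normalisations are illustrative.
r = np.logspace(-1, 1, 400)
r_t = 1.3
rho_inner = r**-1 * (1 + r)**-2                  # NFW shape
rho = rho_inner * (1 + (r / r_t)**4)**-2 + 0.02 / r

# Logarithmic slope d(ln rho)/d(ln r); it dips sharply near r_t
slope = np.gradient(np.log(rho), np.log(r))
i = slope.argmin()
print(f"steepest slope {slope[i]:.1f} at r = {r[i]:.2f} R_200m")
```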

The authors found additional evidence for the edge by studying the colour of the galaxies. Since new stars formed in hydrogen-rich regions shine brightly in the blue part of the spectrum, galaxies with high rates of star formation appear bluer than those with little star formation. As a galaxy travels through a cluster, different mechanisms can strip it of the gas required to form new blue stars, reducing star formation and making the galaxy appear redder. Models therefore predict that galaxies still falling into the halo should be bluer, while those that have already passed the edge and are in orbit should have begun to redden – exactly what the SDSS data showed.

A range of ongoing and new galaxy surveys – such as Hyper Suprime-Cam, the Dark Energy Survey, the Kilo-Degree Survey and the Large Synoptic Survey Telescope – will measure galaxy clusters in more detail. With this additional information on the shape of the clusters, says the team, it will be possible to study both the standard physics of how galaxies interact with clusters and possible unknown physics relating to the nature of dark matter and gravity.

LHCb brings cosmic collisions down to Earth https://cerncourier.com/a/lhcb-brings-cosmic-collisions-down-to-earth/ Thu, 13 Apr 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/lhcb-brings-cosmic-collisions-down-to-earth/ The LHCb collaboration has generated high-energy collisions between protons and helium nuclei similar to those that take place when cosmic rays strike the interstellar medium.


In an effort to improve our understanding of cosmic rays, the LHCb collaboration has generated high-energy collisions between protons and helium nuclei similar to those that take place when cosmic rays strike the interstellar medium. Such collisions are expected to produce a certain number of antiprotons, and are currently one of the possible explanations for the small fraction of antiprotons (about one per 10,000 protons) observed in cosmic rays outside the Earth's atmosphere. By measuring the antimatter component of cosmic rays, we can potentially unveil new high-energy phenomena, notably a possible contribution from the annihilation or decay of dark-matter particles.

In the last few years, space-borne detectors devoted to the study of cosmic rays have dramatically improved our knowledge of the antimatter component. Data from the Alpha Magnetic Spectrometer (AMS-02), which is attached to the International Space Station and operated from a control centre at CERN, published last year are currently the most precise and provide the antiproton-to-proton ratio up to an antiproton energy of 350 GeV (CERN Courier December 2016 p26). The interpretation of these data is currently limited by poor knowledge of the antiproton production cross-sections, however, and until now no data were available on antiproton production in proton–helium collisions.

LHCb physicists were able to mimic cosmic collisions between 6.5 TeV protons and at-rest helium nuclei

LHCb’s recently installed internal gas target, SMOG (System for Measuring Overlap with Gas), provides the unique possibility of studying fixed-target collisions at the unprecedented energies offered by the LHC, with the forward geometry of the LHCb detector well suited to this configuration. The SMOG device allows a tiny amount of a noble gas to be injected inside the LHC beam pipe near the LHCb vertex detector. The gas pressure is less than a billionth of atmospheric pressure, so as not to perturb LHC operations, but this is sufficient to observe hundreds of millions of beam–gas collisions per hour. By operating SMOG with helium, LHCb physicists were able to mimic cosmic collisions between 6.5 TeV protons and at-rest helium nuclei – a configuration that closely matches the energy scale of the antiproton production observed by space-borne experiments. Data-taking was carried out in May 2016 and lasted just a few hours.
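The energy scale is easy to verify: for a beam of energy E striking a nucleon at rest, the nucleon–nucleon centre-of-mass energy is √s ≈ √(2 E m c²) once the beam energy dominates all masses. A quick estimate:

```python
import math

E_beam = 6500.0   # LHC proton beam energy in GeV
m_N = 0.938       # nucleon mass in GeV

# Fixed-target nucleon-nucleon centre-of-mass energy, valid when the
# beam energy is much larger than the particle masses
sqrt_s_NN = math.sqrt(2 * E_beam * m_N)
print(f"sqrt(s_NN) ~ {sqrt_s_NN:.0f} GeV")   # ~110 GeV
```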

LHCb’s advanced particle-identification capabilities were used to determine the yields of antiprotons, among other charged particles, in the momentum range 12–110 GeV. A novel method was developed to determine precisely the amount of gas in the target: counting events in which a beam proton scatters elastically off an atomic electron that is then deflected into the detector acceptance. Owing to their distinct signature, these events could be isolated from the much more abundant interactions with the helium nuclei. The cross-section for proton–electron elastic scattering is very well known and allows the density of atomic electrons to be computed.

The result for antiproton production has been compared with the most popular cosmic-ray models describing soft hadronic collisions, revealing significant disagreements with their predictions. The uncertainty of the LHCb measurement is below 10% for most of the accessible phase space, and the result is expected to contribute to the continuing progress in turning high-energy astroparticle physics into a high-precision science.

Dark-matter surprise in early universe https://cerncourier.com/a/dark-matter-surprise-in-early-universe/ Thu, 13 Apr 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/dark-matter-surprise-in-early-universe/ A surprising result at the Max Planck Institute for Extraterrestrial Physics in Germany suggests that dark matter was less influential in the early universe than it is today.


New observations using ESO’s Very Large Telescope (VLT) in Chile indicate that massive, star-forming galaxies in the early universe were dominated by normal, baryonic matter. This is in stark contrast to present-day galaxies, where the effects of dark matter on the rotational velocity of spiral galaxies seem to be much greater. The surprising result, published in Nature by an international team of astronomers led by Reinhard Genzel at the Max Planck Institute for Extraterrestrial Physics in Germany, suggests that dark matter was less influential in the early universe than it is today.

Whereas normal matter in the cosmos can be seen as brightly shining stars, glowing gas and clouds of dust, dark matter does not emit, absorb or reflect light. This elusive, transparent matter can only be observed via its gravitational effects, one of which is a higher speed of rotation in the outer parts of spiral galaxies. The disc of a spiral galaxy rotates with a velocity of hundreds of kilometres per second, making a full revolution in a period of hundreds of millions of years. If a galaxy’s mass consisted entirely of normal matter, the sparser outer regions would rotate more slowly than the dense regions at the centre. But observations of nearby spiral galaxies show that their inner and outer parts actually rotate at approximately the same speed.
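The implication of a flat rotation curve can be made quantitative: for a circular orbit, the mass enclosed within radius r is M(<r) = v²r/G, so a constant rotation speed means the enclosed mass keeps growing linearly with radius even where the starlight has faded. A minimal sketch with illustrative, Milky-Way-like numbers:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass in kg
kpc = 3.086e19       # one kiloparsec in metres
v = 220e3            # assumed flat rotation speed, m/s (illustrative)

# Enclosed mass for a circular orbit: M(<r) = v^2 r / G.
# A flat curve implies M(<r) proportional to r.
for r_kpc in (8, 15, 30):
    M = v**2 * (r_kpc * kpc) / G
    print(f"r = {r_kpc:2d} kpc -> M(<r) ~ {M / M_sun:.1e} M_sun")
```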

It is widely accepted that the observed “flat rotation curves” indicate that spiral galaxies contain large amounts of non-luminous matter in a halo surrounding the galactic disc. This traditional view is based on observations of numerous galaxies in the local universe, but it is now challenged by the latest observations of galaxies in the distant universe. The rotation curves of six massive, star-forming galaxies at the peak of galaxy formation, 10 billion years ago, were measured with the KMOS and SINFONI instruments on the VLT, and the results are intriguing. Unlike local spiral galaxies, the outer regions of these distant galaxies seem to rotate more slowly than regions closer to the core – suggesting they contain less dark matter than expected. The same decreasing velocity trend away from the galaxy centres is also found in a composite rotation curve that combines data from around 100 other distant galaxies, each with too weak a signal for an individual analysis.

Genzel and collaborators identify two probable causes for the unexpected result. Besides a stronger dominance of normal matter with the dark matter playing a much smaller role, they also suggest that early disc galaxies were much more turbulent than the spiral galaxies we see in our cosmic neighbourhood. Both effects seem to become more marked as astronomers look further back in time into the early universe. This suggests that three to four billion years after the Big Bang, the gas in galaxies had already efficiently condensed into flat, rotating discs, while the dark-matter halos surrounding them were much larger and more spread out. Apparently it took billions of years longer for dark matter to condense as well, so its dominating effect is only seen on the rotation velocities of galaxy discs today.

This explanation is consistent with observations showing that early galaxies were much more gas-rich and compact than today’s galaxies. Embedded in a wider dark-matter halo, their rotation curves would be only weakly influenced by its gravity. It would therefore be interesting to explore whether the suggestion of a slow condensation of dark-matter halos could help shed light on this mysterious component of the universe.

Editor’s note https://cerncourier.com/a/editors-note-2/ https://cerncourier.com/a/editors-note-2/#respond Thu, 13 Apr 2017 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/editors-note-2/ After 13 years as the Courier’s Astrowatch contributor, astronomer Marc Türler is moving to pastures new.

After 13 years as the Courier’s Astrowatch contributor, astronomer Marc Türler is moving to pastures new. We thank him for his numerous lively columns keeping readers up to date with the latest astro results.

Euclid to pinpoint nature of dark energy https://cerncourier.com/a/euclid-to-pinpoint-nature-of-dark-energy/ Thu, 13 Apr 2017 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/euclid-to-pinpoint-nature-of-dark-energy/ Due for launch in 2020, ESA’s Euclid probe will track galaxies and large areas of sky to find the cause of the cosmic acceleration.

The accelerating expansion of the universe, first observed 20 years ago, has been confirmed by numerous observations. Remarkably, whatever the source of the acceleration, it is the primary driver of the dynamical evolution of the universe in the present epoch. That we do not know the nature of this so-called dark energy is one of the most important puzzles in modern fundamental physics. Whether due to a cosmological constant, a new dynamical field, a deviation from general relativity on cosmological scales, or something else, dark energy has triggered numerous theoretical models and experimental programmes. Physicists and astronomers are convinced that pinning down the nature of this mysterious component of the universe will lead to a revolution in physics.

Based on the current lambda-cold-dark-matter (ΛCDM) model of cosmology – which has only two ingredients: general relativity with a nonzero cosmological constant, and cold dark matter – we identify three dominant components of the universe at the present time: normal baryonic matter, which makes up only 5% of the total energy density; dark matter (27%); and dark energy (68%). This model is extremely successful in fitting observations, such as the Planck mission’s measurements of the cosmic microwave background, but it gives no clues about the nature of the dark-matter or dark-energy components. It should also be noted that the assumption of a nonzero cosmological constant, implying a nonzero vacuum energy density, leads to what has been called the worst prediction ever made in physics: its value as measured by astronomers falls short of what is predicted by the Standard Model of particle physics by well over 100 orders of magnitude.

It is only by combining several complementary probes that the source of the acceleration of the universe can be understood.

Depending on the form it takes, dark energy changes the dynamical evolution of the universe over its expansion history in ways predicted by cosmological models. Specifically, dark energy modifies the expansion rate as well as the processes by which cosmic structures form. Whether the acceleration is produced by a new scalar field or by modified laws of gravity will affect these observables differently, and the two effects can be decoupled using several complementary cosmological probes. Type Ia supernovae and baryon acoustic oscillations (BAO) are very good probes of the expansion rate, for instance, while gravitational lensing and peculiar velocities of galaxies (as revealed by their redshift) are very good probes of gravity and the growth rate of structures (see panel “The geometry of the universe” below). It is only by combining several complementary probes that the source of the acceleration of the universe can be understood. The changes are extremely small and are currently undetectable at the level of individual galaxies, but by observing many galaxies and treating them statistically it is possible to accurately track the evolution and therefore get a handle on what dark energy physically is. This demands new observing facilities capable of both measuring individual galaxies with high precision and surveying large regions of the sky to cover all cosmological scales.

Euclid science parameters

Euclid is a new space-borne telescope under development by the European Space Agency (ESA). It is a medium-class mission of ESA’s Cosmic Vision programme and was selected in October 2011 as the first-priority cosmology mission of the next decade. Euclid will be launched at the end of 2020 and will measure the expansion of our universe over the past 10 billion years, spanning the epoch when the acceleration kicked in, using four cosmological probes that can explore both dark-energy and modified-gravity models. It will capture a 3D picture of the distribution of dark and baryonic matter, from which the acceleration will be measured to per-cent-level accuracy, and measure possible variations in the acceleration to 10% accuracy, improving our present knowledge of these parameters by a factor of 20–60. Euclid will observe the dynamical evolution of the universe and the formation of its cosmic structures over a sky area covering more than 30% of the celestial sphere, corresponding to about five per cent of the volume of the observable universe.

The dark-matter distribution will be probed via weak gravitational-lensing effects on galaxies. Gravitational lensing by foreground objects slightly modifies the shape of distant background galaxies, producing a distortion that directly reveals the distribution of dark matter (see panel “Tracking cosmic structure” below). The way such lensing changes as a function of look-back time, due to the continuing growth of cosmic structure from dark matter, strongly depends on the accelerating expansion of the universe and turns out to be a clear signature of the amount and nature of dark energy. Spectroscopic measurements, meanwhile, will enable us to determine tiny local deviations of the redshift of galaxies from their expected value derived from the general cosmic expansion alone (see image below). These deviations are signatures of peculiar velocities of galaxies produced by the local gravitational fields of surrounding massive structures, and therefore represent a unique test of gravity. Spectroscopy will also reveal the 3D clustering properties of galaxies, in particular baryon acoustic oscillations.

Together, weak-lensing and spectroscopy data will reveal signatures of the physical processes responsible for the expansion and the hierarchical formation of structures and galaxies in the presence of dark energy. A cosmological constant, a new dark-energy component or deviations to general relativity will produce different signatures. Since these differences are expected to be very small, however, the Euclid mission is extremely demanding scientifically and also represents considerable technical, observational and data-processing challenges.

By further analysing the Euclid data in terms of the power spectra of galaxies and dark matter, and through a description of massive nonlinear structures such as clusters of galaxies, Euclid can address cosmological questions beyond the accelerating expansion – indeed, any topic related to the power spectra or non-Gaussian properties of the galaxy and dark-matter distributions. The relationship between the light and dark-matter distributions of galaxies, for instance, can be derived by comparing the galaxy power spectrum from spectroscopy with the dark-matter power spectrum from gravitational lensing. The physics of inflation can then be explored by combining the non-Gaussian features observed in the dark-matter distribution in Euclid data with the Planck data. Likewise, since Euclid will map the dark-matter distribution with unprecedented accuracy, it will be sensitive to subtle features produced by neutrinos and thereby help to constrain the sum of the neutrino masses. On these and other topics, Euclid will provide important information to constrain models.

Euclid’s science objectives translate into stringent performance requirements.

The definition of Euclid’s science cases, the development of the scientific instruments and the processing and exploitation of the data are under the responsibility of the Euclid Consortium (EC) and carried out in collaboration with ESA. The EC brings together about 1500 scientists and engineers in theoretical physics, particle physics, astrophysics and space astronomy from around 200 laboratories in 14 European countries, Canada and the US. Euclid’s science objectives translate into stringent performance requirements. Mathematical models and detailed complete simulations of the mission were used to derive the full set of requirements for the spacecraft pointing and stability, the telescope, scientific instruments, data-processing algorithms, the sky survey and the system calibrations. Euclid’s performance requirements can be broadly grouped into three categories: image quality, radiometric and spectroscopic performance. The spectroscopic performance in particular puts stringent demands on the ground-processing algorithms and demands a high level of control over cleanliness during assembly and launch.

Dark-energy payload

The Euclid satellite consists of a service module (SVM) and a payload module (PLM), developed by ESA’s industrial contractors Thales Alenia Space of Turin and Airbus Defence and Space of Toulouse, respectively. The two modules are substantially thermally and structurally decoupled to ensure that the extremely rigid and cold (around 130 K) optical bench located in the PLM is not disturbed by the warmer (290 K±20 K) and more flexible SVM. The SVM comprises all the conventional spacecraft subsystems and also hosts the instrument’s warm electronics units. The Euclid image-quality requirements demand very precise pointing and minimal “jitter”, while the survey requirements call for fast and accurate movements of the satellite from one field to another. The attitude and orbit control system consists of several sensors to provide sub-arc-second stability during an exposure time, and cold gas thrusters with micronewton resolution are used to actuate the fine pointing. Three star trackers provide the absolute inertial attitude accuracy. Since the trackers are mounted on the SVM, which is separate from the telescope structure and thus subject to thermo-elastic deformation, the fine guidance system is located on the same focal plane of the telescope and endowed with absolute pointing capabilities based on a reference star catalogue.

The PLM is designed to provide an extremely stable detection system enabling the sharpest possible images of the sky. The size of the point spread function (PSF), which is the image of a point source such as an unresolved star, closely approaches the Airy disc, the theoretical limit of the optical system. The PSF of Euclid images is comparable to that of the Hubble Space Telescope, allowing for Euclid’s smaller primary mirror, and is more than three times smaller than what can be achieved by the best ground-based survey telescopes under optimum viewing conditions. The telescope is composed of a 1.2 m-diameter three-mirror “anastigmatic Korsch” arrangement that feeds two instruments: a wide-field visible imager (VIS) for the shape measurement of galaxies, and a near-infrared spectrometer and photometer (NISP) for their spectroscopic and photometric redshift measurements. An important PLM design driver is to maintain a high and stable image quality over a large field of view. Building on the heritage of previous European high-stability telescopes such as Gaia, which is mapping the stars of the Milky Way with high precision, all mirrors, the telescope truss and the optical bench are made of silicon carbide, a ceramic material that combines extreme stiffness with very good thermal conduction. The PLM structure is passively cooled to a stable temperature of around 130 K, and a secondary-mirror mechanism will be employed to refocus the telescope image on the VIS detector plane after launch and cool-down.

The VIS instrument receives light in one broad visible band covering the wavelength range 0.55–0.90 μm. To avoid additional image distortions, it has no imaging optics of its own and is equipped with a camera made up of 36 4 k × 4 k-pixel CCDs with a pixel scale of 0.1 arcsec, which must be aligned to a precision better than 15 μm over a distance of 30 cm. Pixel-wise, the VIS camera is the second-largest camera that will be flown in space, after Gaia’s, and will produce the largest images ever generated in space. Unlike Gaia, VIS will compress and transmit all raw scientific images to Earth for further data processing. The instrument is capable of measuring the shapes of about 55,000 galaxies per image field of 0.5 square degrees. The NISP instrument, on the other hand, provides near-infrared photometry in the wavelength range 0.92–2.0 μm and has a slit-less spectroscopy mode equipped with three identical grisms (grating prisms) covering the wavelength range 1.25–1.85 μm. The grisms are mounted in different orientations to separate overlapping spectra of neighbouring objects, and the NISP device is capable of delivering redshifts for more than 900 galaxies per image field. The NISP focal plane is equipped with 16 near-infrared HgCdTe detector arrays of 2 k × 2 k pixels each, with a pixel scale of 0.3 arcsec, making it the largest near-infrared focal plane ever built for a space mission.
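As a quick cross-check of the focal-plane sizes quoted above (simple arithmetic, not an official specification):

```python
vis_pixels = 36 * 4096 * 4096    # 36 CCDs of 4k x 4k pixels
nisp_pixels = 16 * 2048 * 2048   # 16 HgCdTe arrays of 2k x 2k pixels

print(f"VIS focal plane:  {vis_pixels / 1e6:.0f} megapixels")   # ~604
print(f"NISP focal plane: {nisp_pixels / 1e6:.0f} megapixels")  # ~67
```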

The exquisite accuracy and stability of Euclid’s instruments will provide certainty that any observed galaxy-shape distortions are caused by gravitational lensing and are not a result of artefacts in the optics. The telescope will deliver a field of view of more than 0.5 square degrees, which is an area comparable to two full Moons, and the flat focal plane of the Korsch configuration places no extra requirements on the surface shape of the sensors in the instruments. As the VIS and NISP instruments share the same field of view, Euclid observations can be carried out through both channels in parallel. Besides the Euclid satellite data, the Euclid mission will combine the photometry of the VIS and NISP instruments with complementary ground-based observations from several existing and new telescopes equipped with wide-field imaging or spectroscopic instruments (such as CFHT, ESO/VLT, Keck, Blanco, JST and LSST). These combined data will be used to derive an estimate of redshift for the two billion galaxies used for weak lensing, and to decouple coherent weak gravitational-lensing patterns from intrinsic alignments of galaxies. Organising the ground-based observations over both hemispheres and making these data compatible with the Euclid data turns out to be a very complex operation that involves a huge data volume, even bigger than the Euclid satellite data volume.

Ground control

One Euclid field of 0.5 square degrees is obtained in an observing period lasting about 1 hour and 15 minutes, and the mission will generate around 520 Gb of compressed VIS data and 240 Gb of compressed NISP data per day. All raw science data are transmitted to the ground via a high-bandwidth link. Even though the nominal mission will last for six years, mapping out 36% of the sky at the required sensitivity and accuracy within this time requires large amounts of data to be transmitted – around 850 Gb/day during just four hours of contact with the ground station. The complete processing pipeline from Euclid’s raw data to the final data products is a large IT project involving a few hundred software engineers and scientists, and has been broken down into functions handled by almost a dozen separate expert groups. A highly varied collection of data sets must be homogenised for subsequent combination: data from different ground- and space-based telescopes, visible and near-infrared data, and slit-less spectroscopy. Very precise and accurate shapes of galaxies are measured, giving two orders of magnitude improvement with respect to current analyses.
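Those figures imply a demanding downlink. A quick check of the sustained rate needed to move 850 Gb during a four-hour ground-station pass:

```python
daily_volume_gbit = 850.0    # science data transmitted per day, in Gb
contact_s = 4 * 3600         # daily ground-station contact, in seconds

# Sustained rate in Mb/s (1 Gb = 1000 Mb)
rate_mbit_s = daily_volume_gbit * 1e3 / contact_s
print(f"required sustained downlink ~ {rate_mbit_s:.0f} Mb/s")   # ~59 Mb/s
```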

Based on current knowledge of the Euclid mission and the present ground-station development, no showstoppers have been identified. Euclid should meet its performance requirements at all levels, including the design of the mission (a survey of 15,000 square degrees in less than six years) and the space and ground segments. This is encouraging, given the multiplicity of challenges that Euclid presents.

On the scientific side, the Euclid mission meets the precision and accuracy required to characterise the source of the accelerating expansion of the universe and decisively reveal its nature. On the technical side, there are difficult challenges to be met in achieving the required precision and accuracy of the galaxy-shape, photometric and spectroscopic redshift measurements. Our current knowledge of the mission provides a high degree of confidence that we can overcome all of these challenges in time for launch.

The geometry of the universe

Quantum fluctuations

The evolution of structure is seeded by quantum fluctuations in the very early universe, which were amplified by inflation. These seeds grew to create the cosmic microwave background (CMB) anisotropies, imprinted roughly 380,000 years after the Big Bang, and eventually the dark-matter distribution of today. In the same way that supernovae provide a standard candle for astronomical observations, periodic fluctuations in the density of the visible matter, called baryon acoustic oscillations (BAO), provide a standard cosmological length scale that can be used to understand the impact of dark energy. By comparing the distance of a supernova or structure with its measured redshift, the geometry of the universe can be obtained.
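This geometry test can be sketched numerically: in a flat ΛCDM model the comoving distance is D_C(z) = c ∫₀ᶻ dz′/H(z′), and the BAO ruler subtends an angle of roughly r_d/D_C. The parameter values below (H0, Ωm, sound horizon r_d) are assumed, illustrative inputs rather than Euclid results:

```python
import numpy as np
from scipy.integrate import quad

c = 299792.458            # speed of light, km/s
H0, Om = 67.9, 0.31       # assumed flat-LCDM parameters (illustrative)
r_d = 147.0               # assumed BAO sound horizon, Mpc

def H(z):
    """Hubble rate in a flat LCDM universe, km/s/Mpc."""
    return H0 * np.sqrt(Om * (1 + z)**3 + (1 - Om))

def comoving_distance(z):
    """Comoving distance in Mpc: D_C = c * integral of dz'/H(z')."""
    return c * quad(lambda zp: 1.0 / H(zp), 0.0, z)[0]

for z in (0.5, 1.0, 2.0):
    D_C = comoving_distance(z)
    theta = np.degrees(r_d / D_C)   # BAO angular scale in a flat universe
    print(f"z = {z}: D_C ~ {D_C:.0f} Mpc, BAO angle ~ {theta:.1f} deg")
```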

Hydrodynamical cosmological simulations of a ΛCDM universe at three different epochs (left to right, image left), corresponding to redshift z = 6, z = 2 and our present epoch. Each white point represents a concentration of dark matter, gas and stars, the brightest regions being the densest. The simulation shows the growth rate of structure and the formation of galaxies, clusters of galaxies, filaments and large-scale structures over cosmic time. Euclid uses the large-scale structures made out of matter and dark matter as a standard yardstick: starting from the CMB, we assume that the typical scale of structures (or the peak in the spatial power spectrum) increases proportionally with the expansion of the universe. Euclid will determine the typical scale as a function of redshift by analysing power spectra at several redshifts, from the statistical analysis of the dark-matter structures (using the weak-lensing probe) or of the ordinary-matter structures based on the spectroscopic redshifts from the BAO probe. The structures also evolve with redshift due to the properties of gravity. Information on the growth of structure at different scales, in addition to different redshifts, is needed to discriminate between models of dark energy and modified gravity.

Tracking cosmic structure

Gravitational-lensing effects produced by cosmic structures on distant galaxies (right). Numerical simulations (below) show the distribution of dark matter (filaments and clumps with brightness proportional to their mass density) over a line of sight of one billion light-years. The yellow lines show how light beams emitted by distant galaxies are deflected by mass concentrations located along the line of sight. Each deflection slightly modifies the original shape of the lensed galaxies, increasing their original intrinsic ellipticity by a small amount.

Since all distant galaxies are lensed, galaxies collectively show a coherent ellipticity pattern projected on the sky that directly reveals the projected distribution of dark matter and its power spectrum. The 3D distribution of dark matter can then be reconstructed by slicing the universe into redshift bins and recovering the ellipticity pattern at each redshift. The growth rate of cosmic structures derived from this inversion process depends strongly on the nature of dark energy and gravity, and will be measurable thanks to the outstanding image quality of Euclid’s VIS instrument.

Gravitational lens challenges cosmic expansion https://cerncourier.com/a/gravitational-lens-challenges-cosmic-expansion/ Fri, 17 Mar 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/gravitational-lens-challenges-cosmic-expansion/ An international group of astronomers has made an independent measurement of how fast the universe is expanding.


Using galaxies as vast gravitational lenses, an international group of astronomers has made an independent measurement of how fast the universe is expanding. The newly measured expansion rate is consistent with earlier findings in the local universe based on more traditional methods, but intriguingly remains higher than the value derived by the Planck satellite – a tension that could hint at new physics.

The rate at which the universe is expanding, defined by the Hubble constant, is one of the fundamental quantities in cosmology and is usually determined by techniques that use Cepheid variables and supernovae as points of reference. A group of astronomers from the H0LiCOW collaboration, led by Sherry Suyu of the Max Planck Institute for Astrophysics in Germany, ASIAA in Taiwan and the Technical University of Munich, used gravitational lensing to provide an independent measurement of this constant. The gravitational lens is a galaxy that deforms space–time and hence bends the light travelling from a background quasar – an extremely luminous and variable galaxy core. This bending results in multiple images of the same quasar, as seen from Earth, almost perfectly aligned with the lensing galaxy (see image).

While simple in theory, in practice the new technique is rather complex. A straightforward equation relates the Hubble constant to the lengths of the deflected light paths between the quasar and Earth. Since the brightness of a quasar changes over time, astronomers can see the different images of the quasar flicker at different times, with the delays between them depending on the lengths of the paths the light has taken. Deriving the Hubble constant therefore requires very precise modelling of the mass distribution in the lensing galaxy, as well as several hundred accurate measurements of the multiple images of the quasar to derive its variability pattern over many years.

A possible explanation of this discrepancy… could involve an additional source of dark radiation in the early universe.

This complexity explains why the measurement of the Hubble constant – reported in a separate publication by H0LiCOW collaborator Vivien Bonvin from the EPFL in Switzerland and co-workers – relies on a total of four papers by the H0LiCOW collaboration. The obtained value of H0 = 71.9±2.7 km s⁻¹ Mpc⁻¹ is in excellent agreement with other recent determinations in the local universe using classical cosmic-distance-ladder methods. One of these, by Adam Riess and collaborators, finds an even higher value of the Hubble constant (H0 = 73.2±1.7 km s⁻¹ Mpc⁻¹) and has therefore triggered a lot of interest in recent months.

The reason is that such values are in tension with the precise determination of the Hubble constant by the Planck satellite. Assuming standard “Lambda Cold Dark Matter” cosmology, the Planck collaboration derived from the cosmic-microwave-background radiation a value of H0 = 67.9±1.5 km s⁻¹ Mpc⁻¹ (CERN Courier May 2013 p12). The discrepancy between Planck’s probe of the early universe and local values of the Hubble constant could be an indication that we are missing a vital ingredient in our current understanding of the universe.
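Treating the determinations as independent Gaussian measurements, the tension is simply the difference in units of the combined uncertainty – a quick check with the values quoted in the text:

```python
import math

def tension(h1, s1, h2, s2):
    """Difference between two H0 values in units of the combined sigma."""
    return abs(h1 - h2) / math.sqrt(s1**2 + s2**2)

# Values quoted above, in km/s/Mpc
print(f"H0LiCOW vs Planck: {tension(71.9, 2.7, 67.9, 1.5):.1f} sigma")  # ~1.3
print(f"Riess   vs Planck: {tension(73.2, 1.7, 67.9, 1.5):.1f} sigma")  # ~2.3
```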

A possible explanation of this discrepancy, according to Riess and colleagues, could involve an additional source of dark radiation in the early universe, corresponding to a significant increase in the effective number of neutrino species. It will be interesting to follow this debate in the coming years, when new observing facilities and also new parallax measurements of Cepheid stars by the Gaia satellite will reduce the uncertainty of the Hubble constant determination to a per cent or less.

WIMP no-show in gamma-ray background https://cerncourier.com/a/wimp-no-show-in-gamma-ray-background/ Wed, 15 Feb 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/wimp-no-show-in-gamma-ray-background/ A possible additional contribution from WIMP annihilation could not be identified using NASA’s Fermi Gamma-ray Space Telescope.


Although the night sky appears dark between the stars and galaxies that we can see, a strong background emission is present in other regions of the electromagnetic spectrum. At millimetre wavelengths the cosmic microwave background (CMB) dominates this emission, while a strong X-ray background peaks at sub-nanometre wavelengths. For the past 50 years it has also been known that a diffuse gamma-ray background at picometre wavelengths illuminates the sky away from the strong emission of the Milky Way and known extra-galactic sources.

This so-called isotropic gamma-ray background (IGRB) is expected to be uniform on large scales, but can still contain anisotropies on smaller scales. The study of these anisotropies is important for identifying the nature of the unresolved IGRB sources. The best candidates are star-forming galaxies and active galaxies, in particular blazars, which have a relativistic jet pointing towards the Earth. Another possibility to be investigated is whether there is a detectable contribution from the decay or the annihilation of dark-matter particles, as predicted by models of weakly interacting massive particles (WIMPs).

Using NASA’s Fermi Gamma-ray Space Telescope, a team led by Mattia Fornasa from the University of Amsterdam in the Netherlands studied the anisotropies of the IGRB in observations acquired over more than six years. The analysis follows earlier results published in 2012 by the Fermi collaboration and shows that there are two different classes of gamma-ray sources: a specific type of blazar appears to dominate at the highest energies, while at lower energies star-forming galaxies or another class of blazar is thought to imprint a steeper spectral slope on the IGRB. A possible additional contribution from WIMP annihilation could not be identified by Fornasa and collaborators.

The constraints on dark matter will improve with new data continuously collected by Fermi

The first step in such an analysis is to exclude the sky area most contaminated by the Milky Way and extra-galactic sources, and then to subtract remaining galactic contributions and the uniform emission of the IGRB. The resulting images include only the IGRB anisotropies, which can be characterised by computing the associated angular power spectrum (APS) similarly to what is done for the CMB anisotropies. The authors do this both for a single image (“auto-APS”) and between images recorded in two different energy regions (“cross-APS”).

The derived auto-APS and cross-APS are found to be consistent with Poisson noise, meaning they are constant on all angular scales. This absence of scale dependence in the gamma-ray anisotropies suggests that the main contribution comes from distant active galactic nuclei. The emission from star-forming galaxies and dark-matter structures, by contrast, would be dominated by their local distribution, which is less uniform on the sky and would therefore lead to enhanced power at characteristic angular scales. This allowed Fornasa and co-workers to derive exclusion limits on the dark-matter parameter space. Although less stringent than the best limits obtained from the average intensity of the IGRB or from observations of dwarf spheroidal galaxies, they independently confirm the absence, so far, of a gamma-ray signal from dark matter.
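The “constant in multipole” behaviour can be illustrated with a minimal sketch: fit a single Poisson level C_P to band powers by inverse-variance weighting and check the goodness of fit. The numbers below are invented placeholders, not Fermi measurements:

```python
import numpy as np

# Toy scale-independence test on hypothetical APS band powers C_ell
# with Gaussian errors sigma_ell (all values are illustrative).
ell   = np.array([ 60, 120, 240, 480, 700])
C     = np.array([1.10, 0.90, 1.05, 0.95, 1.00]) * 1e-17
sigma = np.array([0.20, 0.15, 0.12, 0.12, 0.15]) * 1e-17

w = 1.0 / sigma**2
C_P = np.sum(w * C) / np.sum(w)           # inverse-variance weighted mean
chi2 = np.sum(((C - C_P) / sigma)**2)     # goodness of fit, N-1 dof
print(f"C_P = {C_P:.2e}, chi2/dof = {chi2 / (len(C) - 1):.2f}")
```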

The constraints on dark matter will improve with new data continuously collected by Fermi, but a potentially more promising approach is to complement them at higher gamma-ray energies with data from the future Cherenkov Telescope Array and possibly also with high-energy neutrinos detected by IceCube.

Compact star hints at vacuum polarisation https://cerncourier.com/a/compact-star-hints-at-vacuum-polarisation/ Fri, 13 Jan 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/compact-star-hints-at-vacuum-polarisation/ Astronomers may have found the first observational indication of a strange quantum effect called vacuum birefringence.


By studying an isolated neutron star, astronomers may have found the first observational indication of a strange quantum effect called vacuum birefringence, which was predicted in the 1930s by Werner Heisenberg and Hans Heinrich Euler.

Neutron stars are the very dense remnant cores of massive stars – at least 10 times more massive than our Sun – that have exploded as supernovae at the ends of their lives. In the 1990s, the Germany-led ROSAT space mission for soft X-ray astronomy discovered a new class of seven neutron stars that are known as the Magnificent Seven. The faint isolated objects emit pulses of X-rays every three to 11 seconds or so, but unlike most pulsars they have no detectable radio emission. The ultra-dense stars have an extremely high dipolar magnetic field (of the order of 10⁹–10¹⁰ T) and display an almost perfect black-body emission, making them unique laboratories to study neutron-star cooling processes.

A team led by Roberto Mignani from INAF Milan in Italy and the University of Zielona Gora, Poland, used ESO’s Very Large Telescope (VLT) at the Paranal Observatory in Chile to observe the neutron star RX J1856.5-3754. Despite being the brightest of the Magnificent Seven and located only around 400 light-years from Earth, its extreme dimness is at the limit of the VLT’s current capabilities to measure polarisation. The aim of the measurement was to detect a quantum effect predicted 80 years ago: since the vacuum is full of virtual particles that appear and vanish, a very strong magnetic field could polarise empty space and hence also light passing through it. Vacuum birefringence is too weak to be observed in laboratory experiments, but the phenomenon should be visible in the very strong magnetic fields around neutron stars.
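The size of the effect can be estimated from the textbook weak-field Euler–Heisenberg result, Δn = n∥ − n⊥ ≈ (3/2)(α/45π)(B/B_crit)², where B_crit ≈ 4.4 × 10⁹ T is the Schwinger critical field. A rough sketch (the weak-field expansion is only indicative once B approaches B_crit):

```python
import numpy as np

alpha = 1 / 137.036   # fine-structure constant
B_crit = 4.41e9       # Schwinger critical field, tesla

def delta_n(B):
    """Weak-field Euler-Heisenberg birefringence, n_par - n_perp."""
    return 1.5 * (alpha / (45 * np.pi)) * (B / B_crit)**2

# Fields of the order quoted for the Magnificent Seven
for B in (1e8, 1e9):
    print(f"B = {B:.0e} T -> delta_n ~ {delta_n(B):.1e}")
# Note: the expansion loses validity as B nears B_crit (~4.4e9 T)
```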

ESO’s future European Extremely Large Telescope will allow astronomers to study this effect around many more neutron stars.

After careful analysis of the VLT data, Mignani and collaborators detected a significant degree (16%) of linear polarisation, which they say is likely due to vacuum birefringence occurring in the empty space surrounding RX J1856.5-3754. They claim that such a level of polarisation is not easily explained by other sources. For example, the contribution from dust grains in the interstellar medium was estimated to be less than 1%, which was corroborated by the detection of almost zero polarisation in the light from 42 nearby stars. The genuine thermal radiation of the neutron star is also expected to be polarised by its surface magnetic field, but this effect should cancel out if the emission comes from the entire surface of the neutron star, over which the magnetic-field direction changes substantially.

The polarisation measurement in this neutron star constitutes the very first observational support for the predictions of QED vacuum polarisation effects. ESO’s future European Extremely Large Telescope will allow astronomers to study this effect around many more neutron stars, while the advent of X-ray polarimetric space missions offers another perspective to this new field of research.

The dawn of a new era https://cerncourier.com/a/the-dawn-of-a-new-era/ Fri, 13 Jan 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/the-dawn-of-a-new-era/ Gravitational waves open a profound new vista on nature.


One of the greatest scientific discoveries of the century took place on 14 September 2015. At 09.50 UTC on that day, a train of gravitational waves launched by two colliding black holes 1.4 billion light-years away passed by the Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO) in Louisiana, US, causing a fractional variation in the distance between the mirrors of about one part in 10²¹. Just 7 ms later, the same event – dubbed GW150914 – was picked up by the twin aLIGO detector in Washington 3000 km away (figure 1). A second black-hole coalescence was observed on 26 December 2015 (GW151226) and a third candidate event was also recorded, although its statistical significance was not high enough to claim a detection. A search that had gone on for half a century had finally met with success, ushering in the new era of gravitational-wave astronomy.

Black holes are the simplest physical objects in the universe: they are made purely from warped space and time and are fully described by their mass and intrinsic rotation, or spin. The gravitational-wave train emitted by coalescing binary black holes comprises three main stages: a long “inspiral” phase, where gravitational waves slowly and steadily drain the energy and angular momentum from the orbiting black-hole pair; the “plunge and merger”, where black holes move at almost the speed of light and then coalesce into the newly formed black hole; and the “ringdown” stage during which the remnant black hole settles to a stationary configuration (figure 2). Each dynamical stage contains fingerprints of the astrophysical source, which can be identified by first tracking the phase and amplitude of the gravitational-wave train and then by comparing it with highly accurate predictions from general relativity.

aLIGO employs waveform models built by combining analytical and numerical relativity. The long, early inspiral phase, characterised by a weak gravitational field and low velocities, is well described by the post-Newtonian formalism (which expands the Einstein field equation and the gravitational radiation in powers of v/c, but loses accuracy as the two bodies come closer and closer). Numerical relativity provides the most accurate solution for the last stages of inspiral, plunge, merger and ringdown, but such models are time-consuming to produce – the state-of-the-art code of the Simulating eXtreme Spacetimes collaboration took three weeks and 20,000 CPU hours to compute the gravitational waveform for the event GW150914 and three months and 70,000 CPU hours for GW151226.

A few hundred thousand different waveforms were used as templates by aLIGO during the first observing run, covering compact binaries with total masses 2–100 times that of the Sun and mass ratios up to 1:99. Novel approaches to the two-body problem that extend post-Newtonian theory into the strong-field regime and combine it with numerical relativity had to be developed to provide aLIGO with accurate and efficient waveform models, which were based on several decades of steady work in general relativity (figure 3). Further theoretical work will be needed to deal with more sensitive searches in the future if we want to take full advantage of the discovery potential of gravitational-wave astronomy.

aLIGO’s first black holes

The two gravitational-wave signals observed by aLIGO have different morphologies that reveal quite distinct binary black-hole sources. GW150914 is thought to have been produced by two stellar-mass black holes with masses of 36 M☉ and 29 M☉, which formed a black hole of about 62 M☉ rotating at almost 70% of its maximal rotation speed, while GW151226 involved lower black-hole masses (of about 14 M☉ and 8 M☉) that merged into a 21 M☉ black-hole remnant. Although the binary’s individual masses for GW151226 have larger uncertainties than for GW150914 (since the former occurred at higher frequencies, where aLIGO’s sensitivity degrades), the analysis ruled out the possibility that the lower-mass object in GW151226 was a neutron star. A follow-up analysis also revealed that the individual black holes had spins less than 70% of the maximal value, and that at least one of the black holes in GW151226 was rotating at 20% of its maximal value or faster. Finally, the aLIGO data show that the binaries that produced GW150914 and GW151226 were at comparable distances from the Earth, and that the peak gravitational-wave luminosity was about 3 × 10⁵⁶ erg/s, making them by far the most luminous transient events in the universe.
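For the inspiral phase, the best-measured mass combination is the “chirp mass”, M_chirp = (m₁m₂)^(3/5)/(m₁+m₂)^(1/5), which controls how fast the signal sweeps up in frequency. A quick check with the GW150914 numbers quoted above (the difference between the initial and final masses is what was radiated away as gravitational waves):

```python
m1, m2 = 36.0, 29.0   # GW150914 component masses, solar masses
m_final = 62.0        # remnant mass quoted above, solar masses

# Chirp mass: the combination best constrained by the inspiral signal
m_chirp = (m1 * m2)**0.6 / (m1 + m2)**0.2
print(f"chirp mass ~ {m_chirp:.0f} solar masses")              # ~28
print(f"energy radiated ~ {m1 + m2 - m_final:.0f} M_sun c^2")  # ~3
```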

Owing to the signal’s length and the particular orientation of the binary plane with respect to the aLIGO detectors, no information about the spin precession of the system could be extracted. It has therefore not yet been possible to determine the precise astrophysical production route for these objects. Whereas the predictions for the rate of binary black-hole mergers from astrophysical-formation mechanisms traditionally vary by several orders of magnitude, the aLIGO detections so far have already established the rate to be somewhat on the high side of the range predicted by astrophysical models, at 9–240 per Gpc³ per year. Larger black-hole masses and higher coalescence rates raise the interesting possibility that a stochastic background of gravitational waves composed of unresolved signals from binary black-hole mergers could be observed when aLIGO reaches its design sensitivity in 2019.

The sky localisation of GW150914 and GW151226, which is mainly determined by recording the time delays of the signals arriving at the interferometers, extended over several hundred square degrees. This can be compared with the 0.2 square degrees covered by the full Moon as seen from the Earth, and makes it very hard to search for an electromagnetic counterpart to black-hole mergers. Nevertheless, the aLIGO results kicked off the first campaign for possible electromagnetic counterparts of gravitational-wave signals, involving almost 20 astronomical facilities spanning the gamma-ray, X-ray, optical, infrared and radio regions of the spectrum. No convincing evidence of electromagnetic signals emitted by GW150914 and GW151226 was found, in line with expectations from standard astrophysical scenarios. Deviations from the standard scenario may arise if one considers dark electromagnetic sectors, spinning black holes with strong magnetic fields that need to be sustained until merger, and black holes surrounded by clouds of axions (see “Linking waves to particles”).

The new aLIGO observations have put the most stringent limits on higher post-Newtonian terms.

aLIGO’s observations allow us to test general relativity in the so-far-unexplored, highly dynamical, strong-field gravity regime. As the two black holes that emitted GW150914 and GW151226 started to merge, the binary’s orbital period varied considerably and the phase of the gravitational-wave signal changed accordingly. It is possible to obtain an analytical representation of the phase evolution in post-Newtonian theory, in which the coefficients describe a plethora of dynamical and radiative physical effects, and long-term timing observations of binary pulsars have placed precise bounds on the leading-order post-Newtonian coefficients. However, the new aLIGO observations have put the most stringent limits on higher post-Newtonian terms – setting upper bounds as low as 10% for some coefficients (figure 4). It was even possible to investigate potential deviations during the non-perturbative coalescence phase, and again general relativity passed the test convincingly.

The first aLIGO observations could test neither the second law of black-hole mechanics, which states that the black-hole entropy cannot decrease, nor the “no-hair” theorem, which says that a black hole is described only by its mass and spin – both tests require extracting the mass and spin of the final black hole from the data. But we expect that future, multiple gravitational-wave detections with higher signal-to-noise ratios will shed light on these important theoretical questions. Despite those limitations, aLIGO has provided the most convincing evidence to date that stellar-mass compact objects in our universe with masses larger than roughly five solar masses are described by black holes: that is, by the solutions of the Einstein field equations (see “General relativity at 100”).

From binaries to cosmology

During its first observation run, lasting from mid-September 2015 to mid-January 2016, aLIGO did not detect gravitational waves from binaries composed of either two neutron stars, or a black hole and a neutron star. Nevertheless, it set the most stringent upper limits on the rates of such processes: 12.6 × 10³ and 3.6 × 10³ per Gpc³ per year, respectively. These limits imply that we expect to detect such binary systems a few years after aLIGO and the French–Italian experiment Virgo reach their design sensitivity. Observing gravitational waves from binaries containing matter is exciting because it allows us to infer the neutron-star equation of state and also to unveil the possible origin of short-hard gamma-ray bursts (GRBs) – enormous bursts of electromagnetic radiation observed in distant galaxies.

Neutron stars are extremely dense objects that form when massive stars run out of nuclear fuel and collapse. The density in the core is expected to be more than 10¹⁴ times the density of the Sun, at which the standard structure of nuclear matter breaks down and new phases of matter such as superfluidity and superconductivity may appear. All mass and spin parameters being equal, the gravitational-wave train emitted by a binary containing a neutron star differs from the one emitted by two black holes only in the late inspiral phase, when the neutron star is tidally deformed or disrupted. By tracking the gravitational-wave phase it will be possible to measure the tidal deformability parameter, which contains information about the neutron-star interior, and ultimately to discriminate between some equations of state. The merger of double neutron stars and/or black-hole–neutron-star binaries is currently considered the most likely source of short-hard GRBs, and we expect a plethora of electromagnetic signals from the coalescence of such compact objects that will test the short-hard GRB/binary-merger paradigm.

Bursts of gravitational waves lasting tens of milliseconds are also produced during the catastrophic final moments of massive stars, when the stellar core suddenly collapses to a neutron star or a black hole, powering a supernova explosion. At design sensitivity, aLIGO and Virgo could detect bursts from the core’s “bounce”, provided that the supernova took place in the Milky Way or neighbouring galaxies, with more extreme emission scenarios observable to much greater distances. Highly magnetised rotating neutron stars called pulsars are also promising astrophysical sources of gravitational waves. Mountains just a few centimetres in height on the crust of a pulsar cause its quadrupole moment to vary in time, producing a continuous gravitational-wave train at twice the rotation frequency of the pulsar. The most recent LIGO all-sky searches and targeted observations of known pulsars have already started to invade the parameter space of astrophysical interest, setting new upper limits on the source’s ellipticity, which depends on the neutron-star equation of state.
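The size of such continuous signals follows from the standard quadrupole estimate: the strain scales with the star’s moment of inertia, its ellipticity ε and the square of the emission frequency, and inversely with distance. A rough sketch with Crab-like numbers (all inputs illustrative; the canonical moment of inertia of 10³⁸ kg m² is assumed):

```python
# Quadrupole estimate for a deformed rotating neutron star:
# h0 = 4 pi^2 G I eps f_gw^2 / (c^4 d), with emission at f_gw = 2 * f_rot.
import math

G, C = 6.674e-11, 2.998e8
KPC = 3.086e19                  # metres per kiloparsec

def mountain_strain(f_rot, eps, d_kpc, moi=1e38):
    """Dimensionless strain h0; moi is the moment of inertia in kg m^2."""
    f_gw = 2.0 * f_rot          # quadrupole emission at twice the spin rate
    return 4 * math.pi**2 * G * moi * eps * f_gw**2 / (C**4 * d_kpc * KPC)

print(mountain_strain(29.7, 1e-4, 2.0))   # Crab-like pulsar: ~2e-25
```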

Lastly, several physical mechanisms in the early universe could have produced gravitational waves, such as cosmic inflation, first-order phase transitions and vibrations of fundamental and/or cosmic strings. Because gravitational waves are almost unaffected by matter, they provide us with a pristine snapshot of the source at the time they were produced. Thus, gravitational waves may unveil a period in the history of the universe around its birth that we cannot otherwise access. The first observation run of aLIGO set the most stringent constraint yet on the stochastic gravitational-wave background, bounding the dimensionless energy density of gravitational waves to < 1.7 × 10−7. Digging deeper, at design sensitivity aLIGO is expected to reach a value of 10−9, while next-generation detectors such as the Einstein Telescope and the Cosmic Explorer may achieve values as low as 10−13 – just two orders of magnitude above the background predicted by the standard “slow-roll” inflationary scenario.

Grand view

The sensitivity of existing interferometer experiments on Earth will be improved in the next 5–10 years by employing a quantum-optics phenomenon called squeezed light. This will reduce the sky-localisation errors of coalescing binaries, provide a better measurement of tidal effects and the neutron-star equation of state in binary mergers, and enhance our chances of observing gravitational waves from pulsars and supernovae. The ability to identify the source of gravitational waves will also improve over time, as upgraded and new gravitational-wave observatories come online.

Furthermore, pulsar signals offer an alternative detection scheme that is already in operation: the Pulsar Timing Array (PTA). Gravitational waves passing over the pulsars or the Earth modify the arrival times of the pulses, and searches by PTA projects for correlated signatures in the arrival times from the most stable known pulsars could detect the stochastic gravitational-wave background from unresolved supermassive binary black-hole inspirals in the 10−9–10−7 Hz frequency region. Results from the North-American NANOGrav, European EPTA and Australian PPTA collaborations have already set interesting upper limits on this astrophysical background, and could achieve a detection in the next five years.

The past year has been a milestone for gravitational-wave research in space: the results of the LISA Pathfinder mission published in June 2016 exceeded all expectations, demonstrating that LISA, planned for 2034, can work successfully (see “Catching a gravitational wave”). LISA would be sensitive to gravitational waves between 10−4 and 10−2 Hz, thus detecting sources different from those observed on Earth, such as supermassive binary black holes, extreme mass-ratio inspirals, and the astrophysical stochastic background from white-dwarf binaries in our galaxy. In the meantime, new ground facilities to be built in 10–15 years – such as the Einstein Telescope in Europe and the Cosmic Explorer in the US – will be required to maximise the scientific potential of gravitational-wave physics and astrophysics. These future detectors will be so sensitive to binary coalescences that we will be able to probe binary black holes throughout the observable universe, enabling the most exquisite tests of general relativity in the highly dynamical, strong-field regime. That will challenge our current knowledge of gravity and of fundamental and nuclear physics, unveiling the nature of the most extreme objects in our universe.

The post The dawn of a new era appeared first on CERN Courier.

]]>
Feature Gravitational waves open a profound new vista on nature. https://cerncourier.com/wp-content/uploads/2018/06/CCvis1_01_17.jpg
General relativity at 100 https://cerncourier.com/a/general-relativity-at-100/ Fri, 13 Jan 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/general-relativity-at-100/ Testing Einstein’s masterpiece with ever increasing precision.

The post General relativity at 100 appeared first on CERN Courier.

]]>

Einstein’s long path towards general relativity (GR) began in 1907, just two years after he created special relativity (SR), when the following apparently trivial idea occurred to him: “If a person falls freely, he will not feel his own weight.” Although it was long known that all bodies fall in the same way in a gravitational field, Einstein raised this thought to the level of a postulate: the equivalence principle, which states that there is complete physical equivalence between a homogeneous gravitational field and an accelerated reference frame. After eight years of hard work and deep thinking, in November 1915 he succeeded in extracting from this postulate a revolutionary theory of space, time and gravity. In GR, our best description of gravity, space–time ceases to be an absolute, non-dynamical framework as envisaged by the Newtonian view, and instead becomes a dynamical structure that is deformed by the presence of mass-energy.

GR has led to profound new predictions and insights that underpin modern astrophysics and cosmology, and which also play a central role in attempts to unify gravity with other interactions. In contrast to GR, our current description of the fundamental constituents of matter and of their non-gravitational interactions – the Standard Model (SM) – is given by a quantum theory of interacting particles of spins 0, ½ and 1 that evolve within the fixed, non-dynamical Minkowski space–time of SR. The contrast between the homogeneous, rigid and matter-independent space–time of SR and the inhomogeneous, matter-deformed space–time of GR is illustrated in figure 1.

The universality of the coupling of gravity to matter (which is the most general form of the equivalence principle) has many observable consequences such as: constancy of the physical constants; local isotropy of space; local Lorentz invariance; universality of free fall and universality of gravitational redshift. Many of these have been verified to high accuracy. For instance, the universality of the acceleration of free fall has been verified on Earth at the 10–13 level, while the local isotropy of space has been verified at the 10–22 level. Einstein’s field equations (see panel below) also predict many specific deviations from Newtonian gravity that can be tested in the weak-field, quasi-stationary regime appropriate to experiments performed in the solar system. Two of these tests – Mercury’s perihelion advance, and light deflection by the Sun – were successfully performed, although with limited precision, soon after the discovery of GR. Since then, many high-precision tests of such post-Newtonian gravity have been performed in the solar system, and GR has passed each of them with flying colours.

Precision tests

Similar to what is done in precision electroweak experiments, it is useful to quantify the significance of precision gravitational experiments by parameterising plausible deviations from GR. The simplest, and most conservative, deviation from Einstein’s pure spin-2 theory is defined by adding a long-range (massless) spin-0 field, φ, coupled to the trace of the energy-momentum tensor. The most general such theory respecting the universality of gravitational coupling contains an arbitrary function of the scalar field defining the “observable metric” to which the SM matter is minimally and universally coupled.

In the weak-field slow-motion limit, appropriate to describing gravitational experiments in the solar system, the addition of φ modifies Einstein’s predictions only through the appearance of two dimensionless parameters, γ and β. The best current limits on these “post-Einstein” parameters are, respectively, (2.1±2.3) × 10–5 (deduced from the additional Doppler shift experienced by radio-wave beams connecting the Earth to the Cassini spacecraft when they passed near the Sun) and < 7 × 10–5, from a study of the global sensitivity of planetary ephemerides to post-Einstein parameters.
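To make γ concrete: in the usual parameterised post-Newtonian (PPN) convention, GR corresponds to γ_PPN = 1 and the post-Einstein parameter quoted above is the deviation γ_PPN − 1; the deflection of light grazing a mass M at impact parameter b is then (1 + γ_PPN)/2 × 4GM/(c²b). A small sketch of how tightly the Cassini-level bound pins down this classic observable:

```python
# Light deflection with the PPN parameter gamma; GR has gamma_ppn = 1.
import math

G, C = 6.674e-11, 2.998e8
M_SUN, R_SUN = 1.989e30, 6.96e8

def deflection_arcsec(gamma_ppn, mass=M_SUN, b=R_SUN):
    alpha = 0.5 * (1.0 + gamma_ppn) * 4.0 * G * mass / (C**2 * b)
    return math.degrees(alpha) * 3600.0

print(deflection_arcsec(1.0))            # ~1.75 arcsec at the solar limb
print(deflection_arcsec(1.0 + 2.3e-5))   # shift at the Cassini-level bound
```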

In the regime of radiative and/or strong gravitational fields, by contrast, pulsars (rotating neutron stars emitting a beam of radio waves) in gravitationally bound orbits have provided crucial tests of GR. In particular, measurements of the decay in the orbital period of binary pulsars have provided direct experimental confirmation of the propagation properties of the gravitational field. Theoretical studies of binaries in GR have shown that the finite velocity of propagation of the gravitational interaction between the pulsar and its companion generates damping-like terms at order (v/c)⁵ in the equations of motion that lead to a small orbital period decay. This has been observed in more than four different systems since the discovery of binary pulsars in 1974, providing direct proof of the reality of gravitational radiation. Measurements of the arrival times of pulsar signals have also allowed precision tests of the quasi-stationary strong-field regime of GR, since their values may depend both on the unknown masses of the binary system and on the theory of gravity used to describe the strong self-gravity of the pulsar and its companion (figure 2).
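That (v/c)⁵ damping yields a closed-form prediction for the orbital-period derivative of a Keplerian binary: the Peters (1964) formula. A sketch with Hulse–Taylor-like parameters (illustrative inputs, not the published timing solution):

```python
# Peters (1964) orbital decay from quadrupole gravitational radiation.
import math

G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def pbdot(p_b, e, m1_sun, m2_sun):
    """Dimensionless dPb/dt for period p_b (s), eccentricity e."""
    m1, m2 = m1_sun * M_SUN, m2_sun * M_SUN
    ecc = (1 + (73 / 24) * e**2 + (37 / 96) * e**4) / (1 - e**2) ** 3.5
    return (-192 * math.pi / 5) * (2 * math.pi * G / p_b) ** (5 / 3) \
        * ecc * m1 * m2 / ((m1 + m2) ** (1 / 3) * C**5)

# PSR B1913+16-like system: 7.75 h period, e = 0.617, ~1.44 + 1.39 M_sun:
print(pbdot(7.75 * 3600, 0.617, 1.44, 1.39))   # ~ -2.4e-12, as observed
```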

The radiation revelation

In two papers, of June 1916 and January 1918, Einstein showed that his field equations admit wave-like solutions (see panel below). For many years, however, the emission of gravitational waves (GWs) by known sources was viewed as being too weak to be of physical significance. In addition, several authors – including Einstein himself – had voiced doubts about the existence of GWs in fully nonlinear GR.

The situation changed in the early 1960s when Joseph Weber understood that GWs arriving on Earth would have observable effects and developed sensitive resonant detectors (“Weber bars”) to search for them. Then, prompted by Weber’s experimental effort, Freeman Dyson realised that, by applying the quadrupolar energy-loss formula derived by Einstein to binary systems made of neutron stars, “the loss of energy by gravitational radiation will bring the two stars closer with ever-increasing speed, until in the last second of their lives they plunge together and release a gravitational flash at a frequency of about 200 cycles and of unimaginable intensity.” Dyson’s vision has recently been realised thanks, on the one hand, to the experimental development of drastically more sensitive non-resonant kilometre-scale interferometric detectors and, on the other, to theoretical advances that made it possible to predict in advance the accurate shape of the GW signals emitted by coalescing systems of neutron stars and black holes (BHs).

The recent observations of the LIGO interferometers have provided the first detection of GWs in the wave zone. They also provide the first direct evidence of the existence of BHs via the observation of their merger, followed by an abrupt shut-off of the GW signal, in complete accord with the GR predictions.

BHs are perhaps the most extraordinary consequence of GR, because of the extreme distortion of space and time that they exhibit. In January 1916, Karl Schwarzschild published the first exact solution of the (vacuum) Einstein equations, supposedly describing the gravitational field of a “mass point” in GR. It took about 50 years to fully grasp the meaning and astrophysical plausibility of these Schwarzschild BHs. Two of the key contributions that led to our current understanding of BHs came from Oppenheimer and Snyder, who in 1939 suggested that a neutron star exceeding its maximum possible mass will undergo gravitational collapse and thereby form a BH, and from Kerr 25 years later, who discovered a generalisation of the Schwarzschild solution describing a BH endowed both with mass and spin.

The Friedmann models still constitute the background models of the current, inhomogeneous cosmologies.

Another remarkable consequence of GR is theoretical cosmology, namely the possibility of describing the kinematics and the dynamics of the whole material universe. The field of relativistic cosmology was ushered in by a 1917 paper by Einstein. Another key contribution was the 1924 paper of Friedmann that described general families of spatially curved, expanding or contracting homogeneous cosmological models. The Friedmann models still constitute the background models of the current, inhomogeneous cosmologies. Quantitative confirmations of GR on cosmological scales have also been obtained, notably through the observation of a variety of gravitational lensing systems.

Dark clouds ahead

In conclusion, all present experimental gravitational data (universality of free fall, post-Newtonian gravity, radiative and strong-field effects in binary pulsars, GW emission by coalescing BHs and gravitational lensing) have been found to be compatible with the predictions of Einstein’s theory. There are also strong constraints on sub-millimetre modifications of Newtonian gravity from torsion-balance tests of the inverse square law.

One might, however, wish to keep in mind the presence of two dark clouds in our current cosmology: the need to assume that most of the stress-energy that must be put on the right-hand side of the GR field equations to account for current observations comes in yet-unseen forms – dark matter and a “cosmological constant”. It has been suggested that these signal a breakdown of Einstein’s gravitation at large scales, although no convincing theoretical modification of GR at large distances has yet been put forward.

GWs, BHs and dynamical cosmological models have become essential elements of our description of the macroscopic universe. The recent and bright beginning of GW astronomy suggests that GR will be an essential tool for discovering new aspects of the universe (see “The dawn of a new era”). A century after its inception, GR has established itself as the standard theoretical description of gravity, with applications ranging from the Global Positioning System and the dynamics of the solar system, to the realm of galaxies and the primordial universe.

However, in addition to the “dark clouds” of dark matter and energy, GR also poses some theoretical challenges. There are both classical challenges (notably the formation of space-like singularities inside BHs), and quantum ones (namely the non-renormalisability of quantum gravity – see “Gravity’s quantum side”). It is probable that a full resolution of these challenges will be reached only through a suitable extension of GR, and possibly through its unification with the current “spin ≤ 1” description of particle physics, as suggested both by supergravity and by superstring theory.

It is therefore vital that we continue to submit GR to experimental tests of increasing precision. The foundational stone of GR, the equivalence principle, is currently being probed in space at the 10–15 level by the MICROSCOPE satellite mission of ONERA and CNES. The observation of a deviation of the universality of free fall would imply that Einstein’s purely geometrical description of gravity needs to be completed by including new long-range fields coupled to bulk matter. Such an experimental clue would be most valuable to indicate the road towards a more encompassing physical theory.

General relativity makes waves

There are two equivalent ways of characterising general relativity (GR). One describes gravity as a universal deformation of the Minkowski metric, which defines a local squared interval between two infinitesimally close space–time points and, consequently, the infinitesimal light cones describing the local propagation of massless particles. The metric field gμν is assumed in GR to be universally and minimally coupled to all the particles of the Standard Model (SM), and to satisfy Einstein’s field equations:

$$R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu}$$

Here, Rμν denotes the Ricci curvature (a nonlinear combination of gμν and of its first and second derivatives), Tμν is the stress-energy tensor of the SM particles (and fields), and G denotes Newton’s gravitational constant.

The second way of defining GR, as proven by Richard Feynman, Steven Weinberg, Stanley Deser and others, states that it is the unique, consistent, local, special-relativistic theory of a massless spin-2 field. It is then found that the couplings of the spin-2 field to the SM matter are necessarily equivalent to a universal coupling to a “deformed” space–time metric, and that the propagation and self-couplings of the spin-2 field are necessarily described by Einstein’s equations.

Following the example of Maxwell, who had found that the electromagnetic-field equations admit propagating waves as solutions, Einstein found that the GR field equations admit propagating gravitational waves (GWs). He did so by considering the weak-field limit (gμν  = ημν + hμν) of his equations, namely,

$$\Box\,\bar{h}_{\mu\nu} + \eta_{\mu\nu}\,\partial^{\alpha}\partial^{\beta}\bar{h}_{\alpha\beta} - \partial_{\mu}\partial^{\alpha}\bar{h}_{\alpha\nu} - \partial_{\nu}\partial^{\alpha}\bar{h}_{\alpha\mu} = -\frac{16\pi G}{c^{4}}\,T_{\mu\nu}$$

where $\bar{h}_{\mu\nu} = h_{\mu\nu} - \tfrac{1}{2}\,h\,\eta_{\mu\nu}$ denotes the trace-reversed perturbation. When choosing the co-ordinate system so as to satisfy the gravitational analogue of the Lorenz gauge condition, so that

$$\partial^{\mu}\bar{h}_{\mu\nu} = 0$$

the linearised field equations simplify to the diagonal inhomogeneous wave equation, which can be solved by retarded potentials.

There are two main results that derive from this wave equation: first, a GW is locally described by a plane wave with two transverse tensorial polarisations (corresponding to the two helicity states of the massless spin-2 graviton), travelling at the velocity of light; second, a slowly moving, non-self-gravitating source predominantly emits a quadrupolar GW.

The post General relativity at 100 appeared first on CERN Courier.

]]>
Feature Testing Einstein’s masterpiece with ever increasing precision. https://cerncourier.com/wp-content/uploads/2018/06/CCgen1_01_17.jpg
Catching a gravitational wave https://cerncourier.com/a/catching-a-gravitational-wave/ Fri, 13 Jan 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/catching-a-gravitational-wave/ The technology behind LIGO’s epochal discovery.

The post Catching a gravitational wave appeared first on CERN Courier.

]]>

Gravitational waves alternatively compress and stretch space–time as they propagate, exerting tidal forces on all objects in their path. Detectors such as Advanced LIGO (aLIGO) search for this subtle distortion of space–time by measuring the relative separation of mirrors at the ends of long perpendicular arms, which form a simple Michelson interferometer with Fabry–Perot cavities in the arms: a beam splitter directs laser light to mirrors at the ends of the arms and the reflected light is recombined to produce an interference pattern. When a gravitational wave passes through the detector, the strain it exerts changes the relative lengths of the arms and causes the interference pattern to change.

The arms of the aLIGO detectors are each 4 km long to help maximise the measured length change. Even on this scale, however, the induced length changes are tiny: the first detected gravitational waves, from the merger of two black holes, changed the arm length of the aLIGO detectors by just 4 × 10–18 m, which is approximately 200 times smaller than the proton radius. Achieving the fantastically high sensitivity required to detect this event was the culmination of decades of research and development.
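The corresponding strain, and the optical phase shift it would produce in a simple single-bounce Michelson, make for a short back-of-envelope exercise (1064 nm laser light assumed; the Fabry–Perot arm cavities multiply the phase response by the effective number of bounces):

```python
# Back-of-envelope numbers for the quoted 4e-18 m arm-length change.
import math

DL, L, WAVELENGTH = 4e-18, 4e3, 1064e-9   # metres

strain = DL / L                           # h = dL/L
dphi = 4 * math.pi * DL / WAVELENGTH      # one round trip in one arm (rad)
print(strain)   # ~1e-21
print(dphi)     # ~5e-11 rad: why cavities and high power are essential
```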

Battling noise

The idea of using an interferometer to detect gravitational waves was first concretely proposed in the 1970s and full-scale detectors began to be constructed in the mid-1990s, including GEO600 in Germany, Virgo in Italy and the LIGO project in the US. LIGO consists of detectors at two sites separated by about 3000 km – Hanford (in Washington state) and Livingston in Louisiana – and undertook its first science runs in 2002–2008. Following a major upgrade, the observatory restarted in September 2015 as aLIGO with an initial sensitivity four times greater than its predecessor. Since the detectors measure strain, this four-fold gain translates into a 4³ ≈ 64-fold increase in the volume surveyed, and hence in the expected event rate.

A major issue facing aLIGO designers is to isolate the detectors from various noise sources. At a frequency of around 10 Hz, the motion of the Earth’s surface, or seismic noise, is about 10 orders of magnitude larger than required, with the seismic noise falling off at higher frequencies. A powerful solution is to suspend the mirrors as pendulums: a pendulum acts as a low-pass filter, providing significant reductions in motion at frequencies above the pendulum frequency. In aLIGO, a chain of four suspended masses is used to provide a factor 10⁷ reduction in seismic motion. In addition, the entire suspension is attached to an advanced seismic isolation system using a variety of active and passive techniques, which suppresses the noise by a further factor of 1000. At 10 Hz, and in the absence of other noise sources, these systems could already bring the sensitivity of the detectors to roughly 10⁻¹⁹ m/√Hz. At even lower frequencies (10 μHz), the daily tides stretch and shrink the Earth by the order of 0.4 mm over 4 km.
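The filtering action is easy to estimate: well above its resonance frequency f0, each pendulum stage suppresses transmitted motion by roughly (f0/f)², so a four-stage chain falls off as (f0/f)⁸. A toy estimate, assuming 1 Hz stage resonances and ignoring internal modes and damping:

```python
# Pendulum chain as a mechanical low-pass filter (idealised toy model).
def isolation(f, f0=1.0, stages=4):
    """Approximate transmitted fraction of ground motion at frequency f (Hz)."""
    return (f0 / f) ** (2 * stages)

print(isolation(10.0))   # ~1e-8 at 10 Hz for four 1 Hz pendulums
```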

Another source of low-frequency noise arises from moving mass interacting with the detector mirrors via the Newtonian inverse square law. The dominant source of this noise is from surface seismic waves, which can produce density fluctuations of the Earth’s surface close to the interferometer mirrors and result in a fluctuating gravitational force on them. While methods of monitoring and subtracting this noise are being investigated, the performance of Earth-based detectors is likely to always be limited at frequencies below 1 Hz by this noise source.

Thermal noise associated with the thermal energy of the mirrors and their suspensions can also cause the mirrors to move, providing a significant noise source at low-to-mid-range frequencies. The magnitude of thermal noise is related to the mechanical loss of the materials: similar to a high-quality wine glass, a material with a low loss will ring for a long time with a pure note because most of the thermal motion is confined to frequencies close to the resonance. For this reason, aLIGO uses fibres fabricated from fused silica – a type of very pure glass with very low mechanical loss – for the final stage of the mirror suspension. Pioneered in the GEO600 detector near Hanover in Germany, the use of silica fibres in place of the steel wires used in the initial LIGO detectors significantly reduces thermal noise from suspension.

aLIGO also has much reduced quantum noise compared with the original LIGO.

Low-loss fused silica is also used for the 40 kg interferometer mirrors, which use multi-layered optical coatings to achieve the high reflectivity required. For aLIGO, a new optical coating was developed comprising a stack of alternating layers of silica and titania-doped “tantala”, reducing the coating thermal noise by about 20%. However, at the aLIGO design sensitivity (roughly 10 times higher than that of the initial LIGO detectors) thermal noise will be the limiting noise source at frequencies of around 60 Hz – close to the frequency at which the detectors are most sensitive.

aLIGO also has much reduced quantum noise compared with the original LIGO. This noise source has two components: radiation-pressure noise and shot noise. The former results from fluctuations in the number of photons hitting the detector mirrors, which is more significant at lower frequencies, and has been reduced by using mirrors four times heavier than the initial LIGO mirrors. Photon shot noise, resulting from statistical fluctuations in the number of photons at the output of the detector, limits sensitivity at higher frequencies. Since shot noise is inversely proportional to the square root of the power, it can be reduced by using higher laser power. In the first observing run of aLIGO, 100 kW of laser power was circulating in the detector arms, with the potential to increase it to up to 750 kW in future runs. Optical cavities are also used to store light in the arms and build up laser power.
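The square-root scaling is worth making explicit: raising the circulating power from 100 kW to 750 kW would improve the shot-noise-limited sensitivity by about a factor of 2.7:

```python
# Shot-noise amplitude scales as 1/sqrt(P).
import math
print(math.sqrt(750.0 / 100.0))   # ~2.74 improvement at high frequencies
```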

In addition to reductions in these fundamental noise sources, many other technological improvements were required to reduce more technical noise sources. Improvements over the initial LIGO detector included a thermal compensation system to reduce thermal lensing effects in the optics, reduced electronic noise in control circuits and finer polishing of the mirror substrates to reduce the amount of scattered light in the detectors.

Upgrades on the ground

Having detected their first gravitational wave almost as soon as they switched on in September 2015, followed by a further event a few months later, the aLIGO detectors began their second observation run on 30 November. Dubbed “O2”, it is scheduled to last for six months. More observation runs are envisaged, with more upgrades in sensitivity taking place between them.

The next major upgrade, expected in around 2018, will see the injection of “squeezed light” to further reduce quantum noise. However, to gain the maximum sensitivity improvement from squeezing, a reduction in coating thermal noise is also likely to be required. With these and other relatively short-term upgrades, it is expected that a factor-two improvement over the aLIGO design sensitivity could be achieved. This would allow events such as the first detection to be observed with a signal-to-noise ratio almost 10 times better than the initial result. Further improvements in sensitivity will almost certainly require more extensive upgrades or new facilities, possibly involving longer detectors or cryogenic cooling of the mirrors.

aLIGO is expected to soon be joined in observing runs by Advanced Virgo, giving a network of three geographically separated detectors and thus improving our ability to locate the position of gravitational-wave sources on the sky. Discussions are also under way for an aLIGO site in India. In Japan, the KAGRA detector is under construction: this detector will use cryogenic cooling to reduce thermal noise and is located underground to reduce seismic and gravity gradient effects. When complete, KAGRA is expected to have similar sensitivity to aLIGO.

Longer term, in Europe a detector known as the Einstein Telescope (ET) has been proposed to provide a factor 10 more sensitivity than aLIGO. ET would not only have arms measuring 10 km long but would take a new approach to noise reduction using two very different detectors: a high-power room-temperature interferometer optimised for sensitivity at high frequencies, where shot noise limits performance, and a low-power cryogenic interferometer optimised for sensitivity at low frequencies (where performance is limited by thermal noise). ET would require significant changes in detector technology and also be constructed underground to reduce the effect of seismic noise and gravity-gradient noise on low-frequency sensitivity.

The final frontier

Obtaining significantly improved sensitivity at lower frequencies is difficult on Earth because such frequencies are swamped by local mass motion. Gaining sensitivity at very low frequencies – where we must look for signals from massive black-hole collisions and other sources that will provide exquisite science results – is only likely to be achieved in space. This concept has been on the table since the 1970s and has evolved into the Laser Interferometer Space Antenna (LISA) project, which is led by the European Space Agency (ESA) with contributions from 14 European countries and the US.

A survey mission called LISA Pathfinder was launched on 3 December 2015 from French Guiana. It is currently located 1.5 million  km away at the first Earth–Sun Lagrange point, and will take data until the end of May 2017. The aim of LISA Pathfinder was to demonstrate technologies for a space-borne gravitational-wave detector based on the same measurement philosophy as that used by ground-based detectors. The mission has clearly demonstrated that we can place test masses (gold–platinum cubes with 46 mm sides separated by 38 cm) into free fall, such that the only varying force acting on them is gravity. It has also validated a host of complementary techniques, including: operating a drag-free spacecraft using cold gas thrusters; electrostatic control of free-floating test masses; short-arm interferometry and test-mass charge control. When combined, these novel features allow differential accelerometry at the 10–15 g level, which is the sensitivity needed for a space-borne gravitational-wave detector. Indeed, if Pathfinder test-mass technology were used to build a full-scale LISA detector, it would recover almost all of the science originally anticipated for LISA without any further improvements.

The success of Pathfinder, coming hot on the heels of the detection of gravitational waves, is a major boost for the international gravitational-wave community. It comes at an exceptional time for the field, with ESA currently inviting proposals for the third of its Cosmic Vision “large missions” programme. Developments are now needed to move from LISA Pathfinder to LISA proper, but these are now well understood and technology development programmes are planned and under way. The timeline for this mission leads to a launch in the early 2030s and the success of Pathfinder means we can look forward with excitement to the fantastic science that will result.

The post Catching a gravitational wave appeared first on CERN Courier.

]]>
Feature The technology behind LIGO’s epochal discovery. https://cerncourier.com/wp-content/uploads/2018/06/CClig1_01_17.jpg
Linking waves to particles https://cerncourier.com/a/linking-waves-to-particles/ Fri, 13 Jan 2017 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/linking-waves-to-particles/ Gravitational waves could also shed light on the microscopic world.

The post Linking waves to particles appeared first on CERN Courier.

]]>

Black holes are arguably humankind’s most intriguing intellectual construction. Featuring a curvature singularity where space–time “ends” and tidal forces are infinite, black-hole interiors cannot be properly understood without a quantum theory of gravity. They are defined by an event horizon – a surface beyond which nothing escapes to the outside – and an exterior region called a photosphere, which is able to trap light rays. These uncommon properties explain why black holes were basically ignored for half a century, considered little more than a bizarre mathematical solution of Einstein’s equations but one without counterpart in nature.

LIGO’s discovery of gravitational waves provides the strongest evidence to date for the existence of black holes, but these tiny distortions of space–time have much more to tell us. Gravitational waves offer a unique way to test the basic tenets of general relativity, some of which have been taken for granted without observations. Are black holes the simplest possible macroscopic objects? Do event horizons and black holes really exist, or is their formation halted by some as-yet unknown mechanism? In addition, gravitational waves can tell us if gravitons are massless and if extra-light degrees of freedom fill the universe, as predicted in the 1970s by Peccei and Quinn in an attempt to explain the smallness of the neutron electric-dipole moment, and more recently by string theory. Ultralight fields affect the evolution of black holes and their gravitational-wave emission in a dramatic way that should be testable with upcoming gravitational-wave observatories.

The existence of black holes

The standard criterion with which to identify a black hole is straightforward: if an object is dark, massive and compact, it’s a black hole. But are there other objects which could satisfy the same criteria? Ordinary stars are bright, while neutron stars have at most three solar masses and therefore neither is able to explain observations of very massive dark objects. In recent years, however, unknown physics and quantum effects in particular have been invoked that change the structure of the horizon, replacing it by a hard surface. In this scenario, the exterior region – including the photosphere – would remain unchanged, but black holes would be replaced by very compact, dark stars. These stars could be made of normal matter under extraordinary quantum conditions or of exotic matter such as new scalar particles that may form “boson stars”.

Unfortunately, the formation of objects invoking poorly understood quantum effects is difficult to study. The collapse of scalar fields, on the other hand, can theoretically allow boson stars to form, and these may become more compact and massive through mergers. Interestingly, there is mounting evidence that compact objects without horizons but with a photosphere are unstable, ruling out entire classes of alternatives that have been put forward.

Gravitational waves might soon provide a definite answer to such questions. Although current gravitational-wave detections are not proof for the existence of black holes, they are a strong indicator that photospheres exist. Whereas observations of electromagnetic processes in the vicinities of black holes only probe the region outside of the photosphere, gravitational waves are sensitive to the entire space–time and are our best probe of strong-field regions.

A typical gravitational-wave signal generated by a small star falling head-on into a massive black hole looks like that in figure 1. As the star crosses the photosphere, a burst of radiation is emitted and a sequence of pulses dubbed “quasinormal ringing” follows, determined by the characteristic modes of the black hole. But if the star falls into a quantum-corrected or exotic compact object with no horizon, part of the burst generated during the crossing of the photosphere reflects back at the object surface. The resulting signal in a detector would thus initially look the same, but be followed by lower amplitude “echoes” trapped between the photosphere and the surface of the object (figure 1, lower panel). These echoes, although tricky to dig out in noisy data, would be a smoking gun for new physics. With increasing sensitivity in detectors such as LIGO and Virgo, observations will be pushing back the object’s surface closer to the horizon, perhaps even to the point where we can detect the echo of quantum effects.
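The expected echo spacing follows from the light round-trip time between the photosphere and the reflecting surface; because this time grows only logarithmically as the surface approaches the would-be horizon, even Planck-scale structure yields delays of order a tenth of a second for stellar-mass remnants. An order-of-magnitude sketch, with factors of order unity dropped:

```python
# Echo delay for a surface a proper distance l above the would-be horizon:
# dt ~ (4 G M / c^3) * ln(r_s / l), up to factors of order unity.
import math

G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30
L_PLANCK = 1.6e-35    # metres

def echo_delay(m_sun, l=L_PLANCK):
    m = m_sun * M_SUN
    r_s = 2.0 * G * m / C**2              # Schwarzschild radius
    return (4.0 * G * m / C**3) * math.log(r_s / l)

print(echo_delay(60.0))   # ~0.1 s even for a Planckian displacement
```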

Dark questions

Understanding strong-field gravity with gravitational waves can also test the nature of dark matter. Although dark matter may interact very feebly with Standard Model particles, according to Einstein’s equivalence principle it must fall just like any other particle. If dark matter is composed of ultralight fields, as recent studies argue, then black holes may serve as excellent dark-matter detectors. You might ask how a monstrous, supermassive black hole could ever be sensitive to ultralight fields. The answer lies in superradiant resonances. When black holes rotate, as most do, they display an interesting effect discovered in the 1970s called superradiance: if one shines a low-frequency lamp on a rotating black hole, the scattered beam is brighter. This happens at the expense of the hole’s kinetic energy, causing the spin of the black hole to decrease.

Not only electromagnetic waves, but also gravitational waves and any other bosonic field can be amplified by a rotating black hole. In addition, if the field is massive, low-energy fluctuations are trapped near the horizon and are forced to interact repeatedly with the black hole, producing an instability. This instability extracts rotational energy and transfers it to the field, which grows exponentially in amplitude and forms a rotating cloud around the black hole. For a one-million solar-mass black hole and a scalar field with a mass of 10–16 eV, the timescale for this to take place is less than two minutes. Therefore, the very existence of ultralight fields is constrained by the observation of spinning black holes. With this technique, one can place unprecedented bounds on the mass of axion-like particles, another popular candidate for dark matter. For example, we know from current astrophysical observations that the mass of dark photons must be smaller than 10–20 eV, which is 100 times better than accelerator bounds. The technique relies only on measurements of the mass and spin of black holes, which will be known with unprecedented precision with future gravitational-wave observations.
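These examples are controlled by a single dimensionless number: the gravitational coupling α = (GM/c³)(μc²/ħ) between a black hole of mass M and a boson of mass μ, with superradiant growth most efficient when α is of order 0.1–1. A minimal sketch:

```python
# Gravitational coupling of the "black-hole atom" driving superradiance.
G, C, M_SUN = 6.674e-11, 2.998e8, 1.989e30
HBAR_EVS = 6.582e-16                      # hbar in eV s

def alpha(m_sun, mu_ev):
    """Coupling for a hole of m_sun solar masses, boson rest energy mu_ev (eV)."""
    return (G * m_sun * M_SUN / C**3) * (mu_ev / HBAR_EVS)

print(alpha(1e6, 1e-16))   # ~0.7 for the example quoted in the text
```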

Superradiance, together with current electromagnetic observations of spinning black holes, can also be used to constrain the mass of the graviton, since any massive boson would trigger superradiant instabilities. Spin measurements of the supermassive black hole in the galaxy Fairall 9 require the graviton mass to be smaller than 5 × 10–23 eV – an impressive number, even more stringent than the bound recently placed by LIGO.

Gravitational lighthouses

Furthermore, numerical simulations suggest that the superradiant instability mechanism eventually causes a slowly evolving and non-symmetric cloud to form around the black hole, emitting periodic gravitational waves like a gravitational “lighthouse”. This would not only mean that black holes are not as simple as we thought, but lead to a definite prediction: some black holes should be emitting nearly monochromatic gravitational waves whose frequency is dictated only by the field’s mass. This raises terrific opportunities for gravitational-wave science: not only can gravitational waves provide the first direct evidence of ultralight fields and of possible new effects near the horizon, but they also carry detailed information about the black-hole mass and spin. If light fields exist, the observation of a few hundred black holes should show “gaps” in the mass-spin plane corresponding to regions where spinning black holes are too unstable to exist.

This is a surprising application of gravitational science, which can be used to investigate the existence of new particles such as those possibly contributing to the dark matter. The idea of using observations of supermassive black holes to provide new insights not accessible in laboratory experiments would certainly be exciting. Perhaps these new frontiers in gravitational-wave astrophysics, in addition to probing the most extreme objects, will also give us a clearer understanding of the microscopic universe.

The post Linking waves to particles appeared first on CERN Courier.

]]>
Feature Gravitational waves could also shed light on the microscopic world. https://cerncourier.com/wp-content/uploads/2018/06/CCwto1_01_17.jpg
n_TOF deepens search for missing cosmic lithium https://cerncourier.com/a/n_tof-deepens-search-for-missing-cosmic-lithium/ https://cerncourier.com/a/n_tof-deepens-search-for-missing-cosmic-lithium/#respond Fri, 11 Nov 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/n_tof-deepens-search-for-missing-cosmic-lithium/ CERN’s neutron time-of-flight (n_TOF) facility has filled in a missing piece of the cosmological-lithium problem puzzle.

The post n_TOF deepens search for missing cosmic lithium appeared first on CERN Courier.

]]>

An experiment at CERN’s neutron time-of-flight (n_TOF) facility has filled in a missing piece of the cosmological-lithium problem puzzle, according to a report published in Physical Review Letters. Along with a few other light elements such as hydrogen and helium, much of the lithium in the universe is thought to have been produced in the very early universe during a process called Big-Bang nucleosynthesis (BBN). For hydrogen and helium, BBN theory is in excellent agreement with observations. But the amount of lithium (⁷Li) observed is about three times smaller than predicted – a discrepancy known as the cosmological-lithium problem.

The n_TOF collaboration has now made a precise measurement of one of the key processes involved – ⁷Be(n,α)⁴He – in an attempt to solve the mystery. The production and destruction of the unstable ⁷Be isotope regulates the abundance of cosmological lithium, but estimates of the probability of ⁷Be destruction via this channel have relied on a single measurement, made in 1963 at thermal energies at the Ispra reactor in Italy. A possible explanation for the higher theoretical value could therefore be an underestimation of the destruction of primordial ⁷Be, in particular in reactions with neutrons.

Now, n_TOF has measured the cross-section of the ⁷Be(n,α)⁴He reaction over a wide range of neutron energies with a high level of accuracy. This was possible thanks to the extremely high luminosity of the neutron beam in the recently constructed experimental area (EAR2) at the n_TOF facility.

The results indicate that, at energies relevant for BBN, the probability for this reaction is 10 times smaller than that used in theoretical calculations. The destruction rate of ⁷Be is therefore even smaller than previously supposed, ruling out this channel as the source of the missing lithium and deepening the mystery of the cosmological-lithium problem.

The post n_TOF deepens search for missing cosmic lithium appeared first on CERN Courier.

]]>
https://cerncourier.com/a/n_tof-deepens-search-for-missing-cosmic-lithium/feed/ 0 News CERN’s neutron time-of-flight (n_TOF) facility has filled in a missing piece of the cosmological-lithium problem puzzle. https://cerncourier.com/wp-content/uploads/2016/11/CCnew3_10_16.jpg
Hubble misses 90% of distant galaxies https://cerncourier.com/a/hubble-misses-90-of-distant-galaxies/ https://cerncourier.com/a/hubble-misses-90-of-distant-galaxies/#respond Fri, 11 Nov 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/hubble-misses-90-of-distant-galaxies/ A team of astronomers has estimated that the number of galaxies in the observable universe is around two trillion.

The post Hubble misses 90% of distant galaxies appeared first on CERN Courier.

]]>

A team of astronomers has estimated that the number of galaxies in the observable universe is around two trillion (2 × 10¹²), which is 10 times more than could be observed by the Hubble Space Telescope in a hypothetical all-sky survey. Although the finding does not affect the matter content of the universe, it shows that small galaxies unobservable by Hubble were much more numerous in the distant, early universe.

Asking how many stars and galaxies there are in the universe might seem a simple enough question, but it has no simple answer. For instance, it is only possible to probe the observable universe, which is limited to the region from where light could reach us in less time than the age of the universe. The Hubble Deep Field images captured in the mid-1990s gave us the first real insight into this fundamental question: myriad faint galaxies were revealed, and extrapolating from the tiny area on the sky suggested that the observable universe contains about 100 billion galaxies.

Now, an international team led by Christopher Conselice of the University of Nottingham in the UK has shown that this number is at least 10 times too low. The conclusion is based on a compilation of many published deep-space observations from Hubble and other telescopes. Conselice and co-workers derived the distances and masses of the galaxies to deduce how the number of galaxies in a given mass interval evolves over the history of the universe. The team extrapolated its results to infer the existence of faint galaxies that the current generation of telescopes cannot observe, and found that galaxies in the distant universe are smaller and more numerous than in local regions. Since less-massive galaxies are also the dimmest and therefore the most difficult to observe at great distances, the researchers conclude that the Hubble ultra-deep-field observations miss about 90% of all galaxies in any observed area of the sky. The total number of galaxies in the observable universe, they suggest, is more like two trillion.

This intriguing result must, however, be put in context. Critically, the galaxy count depends heavily on the lower limit that one chooses for the galaxy mass: since there are more low-mass than high-mass galaxies, any change in this value has huge effects. Conselice and his team took a stellar-mass limit of one million solar masses, which is a very small value corresponding to a galaxy 1000 times smaller than the Large Magellanic Cloud (which is itself about 20–30 times less massive than the Milky Way). The authors explain that were they to take into account even smaller galaxies of 100,000 solar masses, the estimated total number of galaxies would be seven times greater.

The result also does not mean that the universe contains more visible matter than previously thought. Rather, it shows that the bigger galaxies we see in the local universe have been assembled via multiple mergers of smaller galaxies, which were much more numerous in the early, distant universe. While the vast majority of these small, faint and remote galaxies are not yet visible with current technology, they offer great opportunities for future observatories, in particular the James Webb Space Telescope (Hubble’s successor), which is planned for launch in 2018.

The post Hubble misses 90% of distant galaxies appeared first on CERN Courier.

]]>
https://cerncourier.com/a/hubble-misses-90-of-distant-galaxies/feed/ 0 News A team of astronomers has estimated that the number of galaxies in the observable universe is around two trillion. https://cerncourier.com/wp-content/uploads/2016/11/CCast1_10_16.jpg
Testing times for space–time symmetry https://cerncourier.com/a/testing-times-for-space-time-symmetry/ https://cerncourier.com/a/testing-times-for-space-time-symmetry/#comments Fri, 11 Nov 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/testing-times-for-space-time-symmetry/ Numerous experiments, many of them at CERN, are testing for violations of Lorentz and CPT symmetry in the search for new physics.

The post Testing times for space–time symmetry appeared first on CERN Courier.

]]>

Throughout history, our notion of space and time has undergone a number of dramatic transformations, thanks to figures ranging from Aristotle, Leibniz and Newton to Gauss, Poincaré and Einstein. In our present understanding of nature, space and time form a single 4D entity called space–time. This entity plays a key role for the entire field of physics: either as a passive spectator by providing the arena in which physical processes take place or, in the case of gravity as understood by Einstein’s general relativity, as an active participant.

Since the birth of special relativity in 1905 and the CPT theorem of Bell, Lüders and Pauli in the 1950s, we have come to appreciate both Lorentz and CPT symmetry as cornerstones of the underlying structure of space–time. The former states that physical laws are unchanged when transforming between two inertial frames, while the latter is the symmetry of physical laws under the simultaneous transformations of charge conjugation (C), parity inversion (P) and time reversal (T). These closely entwined symmetries guarantee that space–time provides a level playing field for all physical systems independent of their spatial orientation and velocity, or whether they are composed of matter or antimatter. Both have stood the tests of time, but in the last quarter century these cornerstones have come under renewed scrutiny as to whether they are indeed exact symmetries of nature. Were physicists to find violations, it would lead to profound revisions in our understanding of space and time and force us to correct both general relativity and the Standard Model of particle physics.

Accessing the Planck scale

Several considerations have spurred significant enthusiasm for testing Lorentz and CPT invariance in recent years. One is the observed bias of nature towards matter – an imbalance that is difficult, although perhaps possible, to explain using standard physics. Another stems from the synthesis of two of the most successful physics concepts in history: unification and symmetry breaking. Many theoretical attempts to combine quantum theory with gravity into a theory of quantum gravity allow for tiny departures from Lorentz and CPT invariance. Surprisingly, even deviations that are suppressed by 20 orders of magnitude or more are experimentally accessible with present technology. Few, if any, other experimental approaches to finding new physics can provide such direct access to the Planck scale.

Unfortunately, current models of quantum gravity cannot accurately pinpoint experimental signatures for Lorentz and CPT violation. An essential milestone has therefore been the development of a general theoretical framework that incorporates Lorentz and CPT violation into both the Standard Model and general relativity: the Standard Model Extension (SME), as formulated by Alan Kostelecký of Indiana University in the US and coworkers beginning in the early 1990s. Due to its generality and independence of the underlying models, the SME achieves the ambitious goal of allowing the identification, analysis and interpretation of all feasible Lorentz and CPT tests (see panel below). Any putative quantum-gravity remnants associated with Lorentz breakdown enter the SME as a multitude of preferred directions criss-crossing space–time. As a result, the playing field for physical systems is no longer level: effects may depend slightly on spatial orientation, uniform velocity, or whether matter or antimatter is involved. These preferred directions are the coefficients of the SME framework; they parametrise the type and extent of Lorentz and CPT violation, offering specific experiments the opportunity to try to glimpse them.

The Standard Model Extension

At the core of attempts to detect violations in space–time symmetry is the Standard Model Extension (SME) – an effective field theory that contains not just the SM but also general relativity and all possible operators that break Lorentz symmetry. It can be expressed as a Lagrangian in which each Lorentz-violating term has a coefficient that leads to a testable prediction of the theory.
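As a concrete illustration – one corner of the full expansion, in the minimal SME’s fermion sector – constant background coefficients enter the Lagrangian through terms such as

$$\mathcal{L} \supset -\,a_{\mu}\,\bar{\psi}\gamma^{\mu}\psi \;-\; b_{\mu}\,\bar{\psi}\gamma_{5}\gamma^{\mu}\psi ,$$

where the fixed four-vectors a_μ and b_μ select preferred directions in space–time. Both terms are CPT-odd, and bounding their components is precisely what many of the experiments described below set out to do.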

Lorentz and CPT research is unique in the exceptionally wide range of experiments it offers. The SME makes predictions for symmetry-violating effects in systems involving neutrinos, gravity, meson oscillations, cosmic rays, atomic spectra, antimatter, Penning traps and collider physics, among others. In the case of free particles, Lorentz and CPT violation lead to a dependence of observables on the direction and magnitude of the particles’ momenta, on their spins, and on whether particles or antiparticles are studied. For bound systems such as atomic and nuclear states, the energy spectrum depends on their orientation and velocity and may differ from that of the corresponding antimatter system.

The vast spectrum of experiments and latest results in this field were the subject of the triennial CPT conference held at Indiana University in June this year (see panel below), highlights from which form the basis of this article.

The seventh triennial CPT conference

A host of experimental efforts to probe space–time symmetries were the focus of the week-long Seventh Meeting on CPT and Lorentz Symmetry (CPT’16) held at Indiana University, Bloomington, US, on 20–24 June, which are summarised in the main text of this article. With around 120 experts from five continents discussing the most recent developments in the subject, it has been the largest of all meetings in this one-of-a-kind triennial conference series. Many of the sessions included presentations involving experiments at CERN, and the discussions covered a number of key results from experiments at the Antiproton Decelerator and future improvements expected from the commissioning of ELENA. The common thread weaving through all of these talks heralds an exciting emergent era of low-energy Planck-reach fundamental physics with antimatter.

CERN matters

As host to the world’s only cold-antiproton source for precision antimatter physics (the Antiproton Decelerator, AD) and the highest-energy particle accelerator (the Large Hadron Collider, LHC), CERN is in a unique position to investigate the microscopic structure of space–time. The corresponding breadth of measurements at these extreme ends of the energy regime guarantees complementary experimental approaches to Lorentz and CPT symmetry at a single laboratory. Furthermore, the commissioning of the new ELENA facility at CERN is opening brand new tests of Lorentz and CPT symmetry in the antimatter sector (see panel below).

Cold antiprotons offer powerful tests of CPT symmetry

CPT – the combination of charge conjugation (C), parity inversion (P) and time reversal (T) – represents a discrete symmetry between matter and antimatter. As the standard CPT test framework, the Standard Model Extension (SME) possesses a feature that might perhaps seem curious at first: CPT violation always comes with a breakdown of Lorentz invariance. However, an extraordinary insight gleaned from the celebrated CPT theorem of the 1950s is that Lorentz symmetry already contains CPT invariance under “mild smoothness” assumptions: since CPT is essentially a special Lorentz transformation with a complex-valued velocity, the symmetry holds whenever the equations of physics are smooth enough to allow continuation into the complex plane. Unsurprisingly, then, the loss of CPT invariance requires Lorentz breakdown, an argument made rigorous in 2002. Lorentz violation, on the other hand, does not imply CPT breaking.

That CPT breaking comes with Lorentz violation has the profound experimental implication that CPT tests do not necessarily have to involve both matter and antimatter: hypothetical CPT violation might also be detectable via the concomitant Lorentz breaking in matter alone. But this feature comes at a cost: the corresponding Lorentz tests typically cannot disentangle CPT-even and CPT-odd signals and, worse, they may even be blind to the effect altogether. Antimatter experiments decisively brush aside these concerns, and the availability at CERN of cold antiprotons has thus opened an unparalleled avenue for CPT tests. In fact, all six fundamental-physics experiments that use CERN’s antiprotons have the potential to place independent limits on distinct regions of the SME’s coefficient space. The upcoming Extra Low ENergy Antiproton (ELENA) ring at CERN (see “CERN soups up its antiproton source”) will provide substantially upgraded access to antiprotons for these experiments.

One exciting type of CPT test that will be conducted independently by the ALPHA, ATRAP and ASACUSA experiments is to produce antihydrogen, an atom made up of an antiproton and a positron, and compare its spectrum to that of ordinary hydrogen. While the production of cold antihydrogen has already been achieved by these experiments, present efforts are directed at precision spectroscopy promising clean and competitive constraints on various CPT-breaking SME coefficients for the proton and electron.

At present, the gravitational interaction of antimatter remains virtually untested. The AEgIS and GBAR experiments will tackle this issue by dropping antihydrogen atoms in the Earth’s gravity field. These experiments differ in their detailed set-up, but both are projected to permit initial measurements of the gravitational acceleration, g, for antihydrogen at the per cent level. The results will provide limits on SME coefficients for the couplings between antimatter and gravity that are inaccessible with other experiments.

A third fascinating type of CPT test is based on the equality of the physical properties of a particle and its antiparticle, as guaranteed by CPT invariance. The ATRAP and BASE experiments are pursuing such comparisons between protons and antiprotons confined in a cryogenic Penning trap. Impressive results for the charge-to-mass ratios and g factors have already been obtained at CERN and are poised for substantial future improvements. These measurements permit clean bounds on SME coefficients of the proton with record sensitivities.

Regarding the LHC, the latest Lorentz- and CPT-violation physics comes from the LHCb collaboration, which studies particles containing b quarks. The experiment’s first measurements of SME coefficients in the Bd and Bs systems, published in June this year, have improved existing results by up to two orders of magnitude. LHCb also has competition from other major neutral-meson experiments. These involve studies of the Bs system at the Tevatron’s DØ experiment, recent searches for Lorentz and CPT violation with entangled kaons at KLOE and the upcoming KLOE-2 at DAΦNE in Italy, as well as results on CPT-symmetry tests in Bd mixing and decays from the BaBar experiment at SLAC. The LHC’s general-purpose ATLAS and CMS experiments, meanwhile, hold promise for heavy-quark studies. Data on single-top production at these experiments would allow the world’s first CPT test for the top quark, while measurements of top–antitop production could sharpen the earlier DØ measurements of CPT-even Lorentz violation by a factor of 10.

Other possibilities for accelerator tests of Lorentz and CPT invariance include deep inelastic scattering and polarised electron–electron scattering. The first-ever analysis of the former offers a way to access previously unconstrained SME coefficients in QCD, employing data from, for example, the HERA collider at DESY. Polarised electron–electron scattering, on the other hand, allows constraints to be placed on currently unmeasured Lorentz violations in the Z boson, which are also parameterised by the SME and have relevance for SLAC’s E158 data and the proposed MOLLER experiment at JLab. Lorentz-symmetry breaking would also throw the muon’s spin precession in a storage ring slightly out of sync – an effect accessible to muon g-2 measurements at J-PARC and Fermilab.
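Many of these searches share a generic signature. Because the laboratory rotates with the Earth relative to the assumed-fixed SME background, a measured frequency – a spin-precession or resonance frequency, say – is expected to be modulated at harmonics of the sidereal frequency. A schematic parameterisation (the amplitudes A and B map onto different SME coefficients for each experiment) is:

```latex
% Generic sidereal-modulation signature of Lorentz violation.
% A, B and the phases are experiment-dependent combinations
% of SME coefficients; \Omega_\oplus is the sidereal frequency.
\omega(t) = \bar{\omega}
  + A\cos\!\left(\Omega_{\oplus}t + \varphi_{A}\right)
  + B\cos\!\left(2\Omega_{\oplus}t + \varphi_{B}\right),
\qquad
\Omega_{\oplus} \simeq \frac{2\pi}{23\,\mathrm{h}\;56\,\mathrm{min}}
```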

Historically, electromagnetism is perhaps most closely associated with Lorentz tests, and this idea continues to exert a sustained influence on the field. Modern versions of the classical Michelson–Morley experiment have been realised with tabletop resonant cavities as well as with the multi-kilometre LIGO interferometer, with upcoming improvements promising unparalleled measurements of the SME’s photon sector. Another approach for testing Lorentz and CPT symmetry is to study the energy- and direction-dependent dispersion of photons as predicted by the SME. Recent observations by the space-based Fermi Large Area Telescope severely constrain this effect, placing tight limits on 25 individual non-minimal SME coefficients for the photon.

AMO techniques

Experiments in atomic, molecular and optical (AMO) physics are also providing powerful probes of Lorentz and CPT invariance and these are complementary to accelerator-based tests. AMO techniques excel at testing Lorentz-violating effects that do not grow with energy, but they are typically confined to normal-matter particles and cannot directly access the SME coefficients of the Higgs or the top quark. Recently, advances in this field have allowed researchers to carry out interferometry using systems other than light, and an intriguing idea is to use entangled wave functions to create a Michelson–Morley interferometer within a single Yb+ ion. The strongly enhanced SME effects in this system, which arise due to the ion’s particular energy-level structure, could improve existing limits by five orders of magnitude.

Other AMO systems, such as atomic clocks, have long been recognised as a backbone of Lorentz tests. The bright SME prospects arising from the latest trend toward optical clocks, which are several orders of magnitude more precise than traditional varieties based on microwave transitions, are being examined by researchers at NIST and elsewhere. Also, measurements on the more exotic muonium atom at J-PARC and at PSI can place limits on the SME’s muon coefficients, a topic of significant interest in light of several current puzzles involving the muon.

From neutrinos to gravity

Unknown neutrino properties, such as their mass, and tensions between various neutrino measurements have stimulated a wealth of recent research, including a number of SME analyses. The breakdown of Lorentz and CPT symmetry would cause ordinary neutrino-to-neutrino and antineutrino-to-antineutrino oscillations to exhibit unusual direction, energy and flavour dependence, and would also induce unconventional neutrino–antineutrino mixing and kinematic effects – the latter leading to modified velocities and dispersion, as measured in time-of-flight experiments. Existing and planned neutrino experiments offer a wide range of opportunities to examine such effects. For example: upcoming results from the Daya Bay experiment should yield improved limits on Lorentz violation from antineutrino-to-antineutrino mixing; EXO has obtained the first direct experimental bound on a difficult-to-access “counter-shaded” coefficient, extracted from the electron spectrum of double beta decay; T2K has announced new constraints on the a and c coefficients, tightened by a factor of two using muon neutrinos; and IceCube promises extreme sensitivities to “non-minimal” effects through kinematic studies of astrophysical neutrinos, such as Cherenkov effects of various kinds.

The modern approach to Lorentz and CPT tests remains as active as ever.

The feebleness of gravity makes the corresponding Lorentz and CPT tests in this SME sector particularly challenging. This has led researchers from HUST in China and from Indiana University to use an ingenious tabletop experiment to seek Lorentz breaking in the short-range behaviour of the gravitational force. The idea is to bring gravitationally interacting test masses to within submillimetre ranges of one another and observe their mechanical resonance behaviour, which is sensitive to deviations from Lorentz symmetry in the gravitational field. Other groups are carrying out related cutting-edge measurements of SME gravity coefficients with laser ranging of the Moon and other solar-system objects, while analysis of the gravitational-wave data recently obtained by LIGO has already yielded many first constraints on SME coefficients in the gravity sector, with the promise of more to come.

After a quarter century of experimental and theoretical work, the modern approach to Lorentz and CPT tests remains as active as ever. As the theoretical understanding of Lorentz and CPT violation continues to evolve at a rapid pace, it is remarkable that experimental studies continue to follow closely behind and now stretch across most subfields of physics. The range of physical systems involved is truly stunning, and the growing number of different efforts displays the liveliness and exciting prospects for a research field that could help to unlock the deepest mysteries of the universe.

The post Testing times for space–time symmetry appeared first on CERN Courier.

]]>
https://cerncourier.com/a/testing-times-for-space-time-symmetry/feed/ 1 Feature Numerous experiments, many of them at CERN, are testing for violations of Lorentz and CPT symmetry in the search for new physics. https://cerncourier.com/wp-content/uploads/2016/11/CCcpt3_10_16.jpg
Cosmic rays continue to confound https://cerncourier.com/a/cosmic-rays-continue-to-confound/ https://cerncourier.com/a/cosmic-rays-continue-to-confound/#respond Fri, 11 Nov 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/cosmic-rays-continue-to-confound/ Five years of data from the AMS experiment on board the International Space Station reveal intriguing features, says Sam Ting.

The post Cosmic rays continue to confound appeared first on CERN Courier.

]]>

The International Space Station (ISS) is the largest and most complex engineering project ever built in space. It has also provided a unique platform from which to conduct the physics mission of the Alpha Magnetic Spectrometer (AMS). Over the past five years on board the ISS, AMS has orbited the Earth every 93 minutes at an altitude of 400 km and recorded 85 billion cosmic-ray events with energies reaching the multi-TeV range. AMS has been collecting its unprecedented data set and beaming it down to CERN since 2011, and is expected to continue to do so for the lifetime of the ISS.

AMS is a unique experiment in particle physics. The idea for a space-based detector developed after the cancellation of the Superconducting Super Collider in the US in 1993. The following year, an international group of physicists who had worked together for many years at CERN’s LEP collider had a discussion with Roald Sagdeev, former director of the Soviet Institute of Space Research, about the possibility of performing a precision particle-physics experiment in space. Sagdeev arranged for the team to meet with Daniel Goldin, the administrator of NASA, and in May 1994 the AMS collaboration presented the science case for AMS at NASA’s headquarters. Goldin advised the group that use of the ISS as a platform required strong scientific endorsement from the US Department of Energy (DOE) and, after the completion of a detailed technical review of AMS science, the DOE and NASA formalised responsibilities for AMS deployment on the ISS on 20 September 1995.

A 10 day precursor flight of AMS (AMS-01) was carried out in June 1998, demonstrating for the first time the viability of using a precision, large-acceptance magnetic spectrometer in space for a multi-year mission. The construction of AMS-02 for the ISS started immediately afterwards in collaborating institutes around the world. With the loss of the shuttle Columbia in 2003 and the resulting redirection of space policy, AMS was removed from the space-shuttle programme in October 2005. However, the importance of performing fundamental science on the ISS was widely recognised and supported by the NASA Space Station management under the leadership of William Gerstenmaier. In 2008, the US Congress unanimously agreed that AMS be reinstated, mandating an additional flight for the shuttle Endeavour with AMS as its prime payload. Shortly after installation on the ISS in May 2011, AMS was powered on and began collecting and transmitting data (CERN Courier July/August 2011 p18).

The first five years

Much has been learnt in the first five years of AMS about operating a particle-physics detector in space, especially the challenges presented by the ever-changing thermal environment and the need to monitor the detector elements and electronics 24 hours per day, 365 days per year. Communications with NASA’s ISS Mission Control Centers are also essential to ensure that ISS operations – such as sudden, unscheduled power cuts and attitude changes – do not disrupt the running of AMS or imperil the detector.

Of course, it is the data recorded by AMS from events in the distant universe that are the most rich scientifically. AMS is able to detect both elementary particles – namely electrons, positrons, protons and antiprotons – in addition to nuclei of helium, lithium and heavier elements up to indium. The large acceptance and multiple redundant measurements allow AMS to analyse the data to an accuracy of approximately 1%. Combined with its atmosphere-free window on the cosmos, its long-duration exposure time and its extensive calibration at the CERN test beam, this allows AMS to greatly improve the accuracy of previous charged cosmic-ray observations. This is opening up new avenues through which to investigate the nature of dark matter, the existence of heavy antimatter and the true properties of primordial cosmic rays.

The importance of precision studies of positrons and antiprotons as a means to search for the origin of dark matter was first pointed out by theorists John Ellis and, independently, by Michael Turner and Frank Wilczek. They noted that annihilations of the leading dark-matter candidate, the neutralino, would transform pairs of neutralinos into ordinary particles such as positrons and antiprotons. Crucially, the resulting excess of positrons and antiprotons in cosmic rays can be measured. The characteristic signature of dark-matter annihilations is a sharp drop-off of these positron and antiproton excesses at high energies, due to the finite mass of the colliding neutralinos. In addition, since dark matter is ubiquitous, the excesses of the fluxes should be isotropic.
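The kinematics behind the expected drop-off is simple: halo dark-matter particles move slowly, so a pair of neutralinos of mass m_χ annihilates essentially at rest and no secondary can carry away more than one neutralino’s rest energy:

```latex
% Kinematic endpoint for annihilation products of slow-moving
% dark-matter particles of mass m_\chi (schematic):
\chi\chi \;\to\; e^{+}e^{-},\; p\bar{p},\;\ldots
\qquad\Longrightarrow\qquad
E_{e^{+}},\,E_{\bar{p}} \;\lesssim\; m_{\chi}c^{2}
```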

Early low-energy measurements by balloons and satellites indicated that both the positron fraction (that is, the ratio of the positron flux to the flux of electrons and positrons) and the antiproton-to-proton fluxes are larger than predicted by models based on the collisions of cosmic rays. The superior precision of AMS over previous experiments is now allowing researchers to investigate such features, in particular the drop-off in the positron and antiproton excesses, in unprecedented detail.

The first major result from AMS came in 2013 and concerned the positron fraction (CERN Courier October 2013 p22). This highly accurate result showed that, up to a positron energy of 350 GeV, the positron fraction fits well to dark-matter models. This result generated widespread interest in the community and motivated many new interpretations of the positron-fraction excess, for instance whether it is due to astrophysical sources or propagation effects. In 2014, AMS published the positron and electron fluxes, which showed that their behaviours are quite different from each other and that neither can be fitted with the single-power-law assumption underpinning the traditional understanding of cosmic rays.

A deepening mystery

The latest AMS results are based on 17.6 million electrons and positrons and 350,000 antiprotons. In line with previous AMS measurements, the positron flux exhibits a distinct difference from the electron flux, both in its magnitude and energy dependence (figure 1). The positrons show a unique feature: they tend to drop off sharply at energies above 300 GeV, as expected from dark-matter collisions or new astrophysical phenomena. The positron fraction decreases with energy and reaches a minimum at 8 GeV. It then increases with energy and rapidly exceeds the predictions from cosmic-ray collisions, reaching a maximum at 265 GeV before beginning to fall off. Whereas neither the electron flux nor the positron flux can be described by a single power law, surprisingly the sum of the electron and positron fluxes can be described very accurately by a single power law above an energy of 30 GeV.
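In symbols, writing Φ for a flux, the two quantities under discussion are the positron fraction and the combined flux, the latter following a single power law of spectral index γ above roughly 30 GeV:

```latex
% Positron fraction and the single power law that describes
% the combined (e+ + e-) flux at high energy:
F_{e^{+}}(E) = \frac{\Phi_{e^{+}}(E)}{\Phi_{e^{+}}(E)+\Phi_{e^{-}}(E)},
\qquad
\Phi_{e^{+}}(E)+\Phi_{e^{-}}(E) \;\propto\; E^{-\gamma}
\quad (E \gtrsim 30\ \mathrm{GeV})
```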

Since astrophysical sources of cosmic-ray positrons and electrons may induce some degree of anisotropy in their arrival directions, it is also important to measure the anisotropy of cosmic-ray events recorded by AMS. Using the latest data set, a systematic search for anisotropies has been carried out on the electron and positron samples in the energy range 16–350 GeV. The dipole-anisotropy amplitudes measured on 82,000 positrons and 1.1 million electrons are 0.014 for positrons and 0.003 for electrons, which are consistent with the expectations from isotropy.

The latest AMS results on the fluxes and flux ratio of electrons and positrons exhibit unique and previously unobserved features. These include the energy dependence of the positron fraction, the existence of a maximum at 265 GeV in the positron fraction, the exact behaviour of the electron and positron fluxes and, in particular, the sharp drop-off of the positron flux. These features require accurate theoretical interpretation as to their origin, be it from dark-matter collisions or new astrophysical sources.

Concerning the measured antiproton-to-proton flux ratio (figure 2), the new data show that this ratio is independent of rigidity (defined as the momentum per unit charge) in the rigidity range 60–450 GV. This is contrary to traditional cosmic-ray models, which assume that antiprotons are produced only in the collisions of cosmic rays and therefore that the ratio decreases with rigidity. In addition, due to the large mass of antiprotons, the observed excess of the antiproton-to-proton flux ratio cannot come from pulsars. Indeed, the excess is consistent with some of the latest model predictions based on dark-matter collisions as well as those based on new astrophysical sources. Unexpectedly, the antiproton-to-positron flux ratio is also independent of rigidity in the range 60–450 GV (CERN Courier October 2016 p8). This is considered a major result from the five-year summary of AMS data.

The upshot of these new findings in elementary-particle cosmic rays is that the rigidity dependences of the fluxes of positrons, protons and antiprotons are nearly identical, whereas the electron flux has a distinctly different rigidity dependence. This is unexpected because electrons and positrons lose much more energy in the galactic magnetic fields than do protons and antiprotons.

Nuclei in cosmic rays

Most of the cosmic rays flying through the cosmos comprise protons and nuclei, and AMS collects nuclei simultaneously with elementary particles to enable an accurate understanding of both astrophysical phenomena and cosmic-ray propagation. The latest AMS results shed light on the properties of protons, helium, lithium and heavier nuclei in the periodic table. Protons, helium, carbon and oxygen are traditionally assumed to be primary cosmic rays, which means they are produced directly from a source such as supernova remnants.

Protons and helium are the two most abundant charged cosmic rays. They have been measured repeatedly by many experiments over many decades, and their energy dependence has traditionally been assumed to follow a single power law. In the case of lithium, which is assumed to be produced from the collision of primary cosmic rays with the interstellar medium and therefore yields a single power law but with a different spectral index, experimental data have been very limited.

No one has a clue what could be causing these spectacular effects

Sam Ting

The latest AMS data reveal, with approximately 1% accuracy, that the proton, helium and lithium fluxes as a function of rigidity all deviate from the traditional single-power-law dependence at a rigidity of about 300 GV (figure 3). It is completely unexpected that all three deviate from a single power law, that all three deviations occur at about the same rigidity and increase at higher rigidities, and that the three spectra can be fitted with double power laws above a rigidity of 45 GV. In addition, it has long been assumed that since both protons and helium are primary cosmic rays with the same energy dependence at high energies, their flux ratio would be independent of rigidity. The AMS data show that above rigidities of 45 GV, the flux ratio decreases with rigidity and follows a single-power-law behaviour. Despite being a secondary cosmic ray, lithium also exhibits the same rigidity behaviour as protons and helium. It is fair to say that, so far, no one has a clue what could be causing these spectacular effects.
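The “double power law” behaviour can be made concrete with a smoothly broken power law of the general form used in the AMS publications. The sketch below is a minimal illustration – the parameter values are placeholders, not the published fits – showing how the logarithmic slope hardens from γ towards γ + Δγ around a transition rigidity R₀:

```python
import numpy as np

def broken_power_law(R, C=1.0, gamma=-2.85, dgamma=0.13, R0=336.0, s=0.05):
    """Smoothly broken power law in rigidity R (in GV), of the general
    form used to describe the AMS proton, helium and lithium spectra:
        Phi(R) = C * (R/45)**gamma * (1 + (R/R0)**(dgamma/s))**s
    Well below R0 the local slope is ~gamma; well above R0 it hardens
    to gamma + dgamma.  All parameter values here are illustrative.
    """
    return C * (R / 45.0) ** gamma * (1.0 + (R / R0) ** (dgamma / s)) ** s

# The hardening is easiest to see in the local logarithmic slope:
R = np.logspace(np.log10(45.0), np.log10(1800.0), 300)   # 45 GV to 1.8 TV
slope = np.gradient(np.log(broken_power_law(R)), np.log(R))
print(f"slope at 100 GV: {slope[np.argmin(abs(R - 100.0))]:.3f}")   # ~ gamma
print(f"slope at 1 TV:   {slope[np.argmin(abs(R - 1000.0))]:.3f}")  # harder
```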

The latest AMS measurement of the boron-to-carbon flux ratio (B/C) also contains surprises (figure 4). Boron is assumed to be produced through the interactions of primary cosmic rays such as carbon and oxygen with the interstellar medium, which means that B/C provides information both on cosmic-ray propagation and on the properties of the interstellar medium. The B/C ratio does not show any significant structures, in contrast to many cosmic-ray propagation models that assume such behaviour at high rigidities (including a class of propagation models that explain the observed AMS positron fraction). Cosmic-ray propagation is commonly modelled as relativistic gas diffusion through a magnetised plasma, and models of the magnetised plasma predict different behaviours of B/C as a function of rigidity. At rigidities above 65 GV, the latest AMS data can be well fitted by a single power law with spectral index Δ in agreement with the Kolmogorov model of turbulence, which predicts Δ = –1/3 asymptotically.

Building a spectrometer in space

AMS is a precision, multipurpose TeV spectrometer measuring 5 × 4 × 3 m and weighing 7.5 tonnes. It consists of a transition radiation detector (TRD) to identify electrons and positrons; a permanent magnet together with nine layers of silicon tracker (labelled 1 to 9) to measure momentum up to the multi-TeV range and to identify different species of particles and nuclei via their energy loss; two banks of time-of-flight (TOF) counters to measure the direction and velocity of cosmic rays and identify species by energy loss; veto counters (ACC) surrounding the inner bore of the magnet to reject cosmic rays from the side; a ring-image Cherenkov counter (RICH) to measure the cosmic-ray energy and identify particle species; and an electromagnetic calorimeter (ECAL) to provide 3D measurements of the energy and direction of electrons and positrons, and distinguish them from antiprotons, protons and other nuclei.

Future directions

Much has been learnt from the unexpected physics results from the first five years of AMS. Measuring many different species of charged cosmic rays at the same time with high accuracy provides unique input for the development of a comprehensive theory of cosmic rays, which have puzzled researchers for a century. AMS data are also providing new information that is essential to our understanding of the origin of dark matter, the existence of heavy antimatter, and the properties of charged cosmic rays in the cosmos.

The physics potential of AMS is the reason why the experiment receives continuous support. AMS is a US DOE and NASA-sponsored international collaboration and was built with European participation from Finland, France, Germany, Italy, Portugal, Spain and Switzerland, together with China, Korea, Mexico, Russia, Taiwan and the US. CERN has provided critical support to AMS, with CERN engineers engaged in all phases of the construction. Of particular importance was the extensive calibration of the AMS detector with different particle test beams at various energies, which provided key reference points for verifying the detector’s operation in space.

AMS will continue to collect data at higher energies and with high precision during the lifetime of the ISS, at least until 2024. To date, AMS is the only long-duration precision magnetic spectrometer in space and, given the challenges involved in such a mission, it is likely that it will remain so for the foreseeable future.

The post Cosmic rays continue to confound appeared first on CERN Courier.

]]>
https://cerncourier.com/a/cosmic-rays-continue-to-confound/feed/ 0 Feature Five years of data from the AMS experiment on board the International Space Station reveal intriguing features, says Sam Ting. https://cerncourier.com/wp-content/uploads/2016/11/CCams1_10_16.jpg
Gaia compiles largest ever stellar survey https://cerncourier.com/a/gaia-compiles-largest-ever-stellar-survey/ https://cerncourier.com/a/gaia-compiles-largest-ever-stellar-survey/#respond Fri, 14 Oct 2016 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/gaia-compiles-largest-ever-stellar-survey/ On 13 September, 1000 days after the satellite’s launch, the Gaia team published a preliminary catalogue of more than a billion stars.

The post Gaia compiles largest ever stellar survey appeared first on CERN Courier.

]]>

The largest all-sky survey of celestial objects has been compiled by ESA’s Gaia mission. On 13 September, 1000 days after the satellite’s launch, the Gaia team published a preliminary catalogue of more than a billion stars, far exceeding the reach of ESA’s Hipparcos mission completed two decades ago.

Astrometry – the science of charting the sky – has undergone tremendous progress over the centuries, from naked-eye observations in antiquity to Gaia’s sophisticated space instrumentation today. The oldest known comprehensive catalogue of stellar positions was compiled by Hipparchus of Nicaea in the 2nd century BC. His work, which was based on even earlier observations by Assyro-Babylonian astronomers, was handed down 300 years later by Ptolemy in his 2nd century treatise known as the Almagest. Although it listed the positions of 850 stars with a precision of less than one degree, which is about twice the diameter of the Moon, this work was significantly surpassed only in 1627 with the publication of a catalogue of about 1000 stars by the Danish astronomer Tycho Brahe, who achieved a precision of about 1 arcminute by using large quadrants and sextants.

Gaia has an astrometric accuracy about 100 times better than Hipparcos.

The first stellar catalogue compiled with the aid of a telescope was published in 1725 by English astronomer John Flamsteed, listing the positions of almost 3000 stars with a precision of 10–20 arcseconds. The precision increased significantly during the following centuries, with the use of photographic plates by the Yale Trigonometric Parallax Catalogue reaching 0.01 arcsecond in 1995. ESA’s Hipparcos mission, which operated from 1989 to 1993, was the first space telescope devoted to measuring stellar positions. The Hipparcos catalogue, released in 1997, provides the position, parallax and proper motion of 117,955 stars with a precision of 0.001 arcsecond. The “parallax” is the small displacement of a star’s position measured six months apart, when the Earth’s annual orbit around the Sun offers a different viewpoint; it allows the star’s distance to be derived.

While Hipparcos could probe the stars to distances of about 300 light-years, Gaia’s objective is to extend this to a significant fraction of the size of our Galaxy, which spans about 100,000 light-years. To achieve this, Gaia has an astrometric accuracy about 100 times better than Hipparcos. As a comparison, if Hipparcos could measure the angle that corresponds to the height of an astronaut standing on the Moon, Gaia would be able to measure the astronaut’s thumbnail.
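These comparisons are easy to verify on the back of an envelope. The sketch below uses round numbers – a 1.8 m astronaut and a 1 cm thumbnail at the 384,000 km Earth–Moon distance – together with the standard parallax rule that the distance in parsecs is the reciprocal of the parallax in arcseconds:

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0   # radians -> arcseconds
MOON_DISTANCE_M = 3.84e8                   # mean Earth-Moon distance, m

def angle_arcsec(size_m, distance_m):
    """Small-angle apparent size of an object at a given distance."""
    return size_m / distance_m * RAD_TO_ARCSEC

# An astronaut (~1.8 m) matches Hipparcos' milliarcsecond precision;
# a thumbnail (~1 cm) matches Gaia's level, ~100 times finer.
print(f"astronaut: {angle_arcsec(1.8, MOON_DISTANCE_M) * 1e3:.2f} mas")
print(f"thumbnail: {angle_arcsec(0.01, MOON_DISTANCE_M) * 1e6:.1f} uas")

# Parallax rule: d [pc] = 1 / p [arcsec].  A star 300 light-years
# away (~92 pc) shows a parallax of ~11 mas, roughly ten times
# Hipparcos' 1 mas precision - hence its ~300 light-year reach.
print(f"parallax at 300 ly: {1.0 / (300.0 / 3.26) * 1e3:.0f} mas")
```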

Gaia was launched on 19 December 2013 towards the Lagrangian point L2, which is a prime location to look at the sky away from disturbances from the Sun, Earth and Moon. Although the first data release already comprises about a billion stars observed during the first 14 months of the mission, there was not enough time to disentangle the proper motion from the parallax. These could only be computed with high precision for about two million stars previously observed by Hipparcos.

The new catalogue gives an impression of the great capabilities of Gaia. More observations are needed to make a dynamic 3D map of the Milky Way and to find and characterise possible brightness variations of all these stars. Gaia will then be able to provide the parallax distance of many periodic stars such as Cepheids, which are crucial in the accurate determination of the cosmic-distance ladder.

The post Gaia compiles largest ever stellar survey appeared first on CERN Courier.

]]>
https://cerncourier.com/a/gaia-compiles-largest-ever-stellar-survey/feed/ 0 News On 13 September, 1000 days after the satellite’s launch, the Gaia team published a preliminary catalogue of more than a billion stars. https://cerncourier.com/wp-content/uploads/2016/10/CCast1_09_16.jpg
Secrets of discovery https://cerncourier.com/a/secrets-of-discovery/ https://cerncourier.com/a/secrets-of-discovery/#respond Fri, 14 Oct 2016 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/secrets-of-discovery/ Interview with Kip Thorne, who predicted LIGO’s observation of gravitational waves.

The post Secrets of discovery appeared first on CERN Courier.

]]>

Did you expect that gravitational waves would be discovered during your lifetime?

Yes, and I thought it quite likely it would come from two colliding black holes of just the sort that we did see. I wrote a popular book called Black Holes and Time Warps: Einstein’s Outrageous Legacy, published in 1994, and I wrote a prologue to this book during my honeymoon in Chile in 1984. In that prologue, I described the observation of two black holes, both weighing 25 solar masses, spiralling together and merging, and radiating three solar masses of energy as gravitational waves – and that’s very close to what we’ve seen. So I was already in the 1980s targeting black holes as the most likely kind of source; for me this was not a surprise, it was a great satisfaction that everything came out the way I thought it probably would.

Can you summarise how an instrument such as LIGO could observe such a weak and rare phenomenon?

The primary inventor of this kind of gravitational-wave detector is Ray Weiss at MIT. He not only conceived the idea, in parallel with several other people, but he, unlike anybody else, identified all of the major sources of noise that would have to be dealt with in the initial detector and he invented ways to deal with each of those. He estimated how much noise would remain after the experiment did what he proposed to limit each noise source, and concluded that the sensitivity that could be reached would be good enough. There was a real possibility of seeing the waves that I as a theorist and colleagues were predicting. Weiss wrote a paper in 1972 describing all of this and it is one of the most powerful papers I’ve ever read, perhaps the most powerful experiment-related paper. Before I read it, I had heard about his idea and concluded it was very unlikely to succeed because the required sensitivities were so great. I didn’t have time to really study it in depth, but it turned out I was wrong. I was sceptical until I had discussions with Weiss and others in Moscow. I then became convinced, and decided that I should devote most of the rest of my career to helping them succeed in the detection of gravitational waves.

How will the new tool of “multi-messenger astronomy” impact on our understanding of the universe?

Concerning the colliding black hole that we’ve seen so far, astronomers who rely on electromagnetic signals have not seen anything coming from them. It’s conceivable that in the future something may be seen because disturbances caused when two black holes collide and merge can lead to X-ray or perhaps optical emissions. We also expect to see many other sources of gravitational waves. Neutron stars orbiting each other are expected to collide and merge, which is thought to be a source of gamma-ray bursts that have already been seen. We will see black holes tear apart and destroy a companion neutron star, again producing a very strong electromagnetic emission as well as neutrino emission. So the co-ordinated gravitational and electromagnetic observation and neutrino observations will be very powerful. With all of these working together in “multi-messenger” astronomy, there’s a great richness of information. That really is the future of a large portion of this field. But part of this field will be things like black holes, where we see only gravitational waves.

Do gravitational waves give us a bearing on gravitons?

Although we are quite sure gravitational waves are carried by gravitons, there is no chance to see individual gravitons based on the known laws of physics. Just as we do not see individual photons in a radio wave because there are so many photons working together to produce the radio wave, there are even more gravitons working together to produce gravitational waves. In technical terms, the mean occupation number of the gravitational-wave field that is seen is absolutely enormous, close to 10⁴⁰. With so many gravitons there is no hope, unfortunately, to see individual gravitons.
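Thorne’s enormous occupation number can be checked with a rough order-of-magnitude estimate, taking GW150914-like values of strain h ~ 10⁻²¹ and frequency f ~ 100 Hz and ignoring factors of order unity:

```latex
% Energy density of a gravitational wave of strain h and frequency f:
u \sim \frac{c^{2}}{32\pi G}\,(2\pi f)^{2}\,h^{2}
  \approx 5\times10^{-12}\ \mathrm{J\,m^{-3}}
\\[4pt]
% Graviton number density, each graviton carrying \hbar\omega = 2\pi\hbar f:
n = \frac{u}{2\pi\hbar f} \approx 10^{20}\ \mathrm{m^{-3}}
\\[4pt]
% Occupation number: gravitons per mode volume \lambda^{3} = (c/f)^{3}:
N \sim n\left(\frac{c}{f}\right)^{3} \approx 10^{39\text{--}40}
```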

Will we ever reconcile gravity with the three other forces?

I am quite sure gravity will be reconciled with the other three forces. I think it is quite likely this will be done through some version of string theory or M theory, which many theorists are now working on. When it does happen, the resulting laws of quantum gravity will allow us to address questions related to the nature of the birth of the universe. It would also tell us whether or not it is possible to build time machines to go backward in time, what is the nature of the interior of a black hole, and address many other interesting questions. This is a tremendously important effort, by far the most important research direction in theoretical physics today and recent decades. There’s no way I could contribute very much there.

Regarding future large-scale research infrastructures, such as those proposed within CERN’s Future Circular Collider programme, what are the lessons to be learnt from LIGO?

Maybe the best thing to learn is having superb management of large physics budgets, which is essential to make the project succeed. We’ve had excellent management, particularly with Barry Barish, who transformed LIGO and took over as director when we were just about ready to begin construction (Robbie Vogt, who had helped us write a proposal to get the funding from the NSF and Congress, also got two research teams at Caltech and MIT to work together in an effective manner). Barry created the modern LIGO and he is an absolutely fantastic project director. Having him lead us through that transition into the modern LIGO was absolutely essential to our success, plus a very good experiment idea and a superb team, of course.

You were an adviser to the blockbuster film Interstellar. Do you have any more science and arts projects ahead?

I am 76. I was a conventional professor for almost 50 years, and I decided for my next 50 years that I want to do something different. So I have several different collaborations: one on a second film; collaborations in a multimedia concert about sources of gravitational waves with Hans Zimmer and Paul Franklin, who did the music and visual effects for Interstellar; and collaborations with Chapman University art professor Lia Halloran on a book with her paintings and my poetry about the warped side of the universe. I am having great fun entering collaborations between scientists and artists and I think, at this point of my life, if I have a total failure with trying to write poetry, well that’s alright: I’ve had enough success elsewhere.

The post Secrets of discovery appeared first on CERN Courier.

]]>
https://cerncourier.com/a/secrets-of-discovery/feed/ 0 Feature Interview with Kip Thorne, who predicted LIGO’s observation of gravitational waves. https://cerncourier.com/wp-content/uploads/2016/10/CCint1_09_16.jpg
AMS reports unexpected result in antiproton data https://cerncourier.com/a/ams-reports-unexpected-result-in-antiproton-data/ https://cerncourier.com/a/ams-reports-unexpected-result-in-antiproton-data/#respond Fri, 16 Sep 2016 12:55:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/ams-reports-unexpected-result-in-antiproton-data/ AMS has now measured the antiproton flux and the antiproton-to-proton flux ratio in primary cosmic rays with unprecedented precision.

The post AMS reports unexpected result in antiproton data appeared first on CERN Courier.

]]>

Researchers working on the AMS (Alpha Magnetic Spectrometer) experiment, which is attached to the International Space Station, have reported precision measurements of antiprotons in primary cosmic rays at energies never before attained. Based on 3.49 × 10⁵ antiproton events and 2.42 × 10⁹ proton events, the AMS data represent new and unexpected observations of the properties of elementary particles in the cosmos.

Assembled at CERN and launched in May 2011, AMS is a 7.5 tonne detector module that measures the type, energy and direction of particles. The goals of AMS are to use its unique position in space to search for dark matter and antimatter, and to study the origin and propagation of charged cosmic rays: electrons, positrons, protons, antiprotons and nuclei. So far, the collaboration has published several key measurements of energetic cosmic-ray electrons, positrons, protons and helium, for example finding an excess in the positron flux (CERN Courier November 2014 p6). This latter measurement placed constraints on existing models and gave rise to new ones, including collisions of dark-matter particles, astrophysical sources and collisions of cosmic rays – some of which make specific predictions about the antiproton flux and the antiproton-to-proton flux ratio in cosmic rays.

With its latest antiproton results, AMS has now simultaneously measured all of the charged-elementary-particle cosmic-ray fluxes and flux ratios. Due to the scarcity of antiprotons in space (they are outnumbered by protons by a factor of 10,000), experimental data on antiprotons are limited. Using the first four years of data, AMS has now measured the antiproton flux and the antiproton-to-proton flux ratio in primary cosmic rays with unprecedented precision. The measurements, which required AMS to achieve a separation power of approximately 10⁶, provide precise experimental information over an extended energy range in the study of elementary particles travelling through space.

The antiproton (p̄), proton (p) and positron (e+) fluxes are found to have nearly identical rigidity dependence

In the absolute-rigidity (the absolute value of the momentum/charge) range 60–500 GV, the antiproton (p̄), proton (p) and positron (e+) fluxes are found to have nearly identical rigidity dependence, while the electron (e–) flux exhibits a markedly different rigidity dependence. In the absolute-rigidity range below 60 GV, the p̄/p, p̄/e+ and p/e+ flux ratios each reach a maximum, while in the range 60–500 GV these ratios unexpectedly show no rigidity dependence.

“These are precise and completely unexpected results. It is difficult to imagine why the flux of positrons, protons and antiprotons have exactly the same rigidity dependence and the electron flux is so different,” says AMS spokesperson Samuel Ting. “AMS will be on the Space Station for its lifetime. With more statistics at higher energies, we will probe further into these mysteries.”

The post AMS reports unexpected result in antiproton data appeared first on CERN Courier.

]]>
https://cerncourier.com/a/ams-reports-unexpected-result-in-antiproton-data/feed/ 0 News AMS has now measured the antiproton flux and the antiproton-to-proton flux ratio in primary cosmic rays with unprecedented precision. https://cerncourier.com/wp-content/uploads/2016/09/CCnew3_08_16.jpg
Earth-like planet orbits our nearest star https://cerncourier.com/a/earth-like-planet-orbits-our-nearest-star/ https://cerncourier.com/a/earth-like-planet-orbits-our-nearest-star/#respond Fri, 16 Sep 2016 12:55:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/earth-like-planet-orbits-our-nearest-star/ Astronomers have found clear evidence of a planet orbiting the closest star to Earth, Proxima Centauri.

The post Earth-like planet orbits our nearest star appeared first on CERN Courier.

]]>

Astronomers have found clear evidence of a planet orbiting the closest star to Earth, Proxima Centauri. The extrasolar planet is only slightly more massive than the Earth and orbits its star within the habitable zone, where the temperature would allow liquid water on its surface. The discovery represents a new milestone in the search for exoplanets that possibly harbour life.

Since the discovery of the first exoplanet in 1995, more than 3000 have been found. Most were detected either via radial velocity or transit techniques. The former relies on spectroscopic measurements of the weak back-and-forth wobbling of the star induced by the gravitational pull of the orbiting planet, while the latter method measures the slight drop in the star’s brightness due to the occultation of part of its surface when the planet passes in front of it.

Exoplanets discovered so far exhibit a diverse range of properties, with masses ranging from Earth-like values to several times the mass of Jupiter. Massive planets close to their parent star are the easiest to find: the first known exoplanet, called 51 Peg b, was a gaseous Jupiter-sized planet (a “hot Jupiter”) with a temperature of the order of 1000 °C due to its proximity to the star. The ultimate goal of exoplanet hunters is to find an Earth twin or at least an Earth-sized planet at the right distance from its parent star to have liquid water on its surface. This condition defines the habitable zone, which is the range of distance around the star that would be suitable for life.

Proxima Centauri b orbits the star (Proxima Centauri) in only 11.2 days and has a minimum mass of 1.27 Earth masses.

Proxima Centauri b matches this condition and is also a special planet for us because it orbits our nearest star, located just 4.2 light-years away. Near does not necessarily mean bright, however. Proxima Centauri is actually a cool red star that is much too dim to be seen with the naked eye and, with about one-eighth of the Sun’s mass, it is also around 600 times less luminous. The habitable zone around this red-dwarf star therefore lies at much shorter distances than the corresponding distances in our solar system – equivalent to a small fraction of the orbit of Mercury. Proxima Centauri b orbits the star in only 11.2 days and has a minimum mass of 1.27 Earth masses. The exact value of the mass cannot be determined by the radial-velocity method because it depends on the unknown inclination of the orbit with respect to the line of sight.
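Both the 11-day period and the size of the stellar wobble that had to be detected follow from textbook formulas. The sketch below assumes round values consistent with the figures above – a stellar mass of 0.12 solar masses and an orbital radius of 0.05 astronomical units:

```python
import math

G     = 6.674e-11        # gravitational constant, SI units
M_SUN = 1.989e30         # solar mass, kg
M_E   = 5.972e24         # Earth mass, kg
AU    = 1.496e11         # astronomical unit, m

M_star = 0.12 * M_SUN    # assumed mass of Proxima Centauri
a      = 0.05 * AU       # assumed orbital radius of Proxima b
m_p    = 1.27 * M_E      # minimum planet mass (m sin i)

# Kepler's third law: P = 2*pi*sqrt(a**3 / (G*M_star)) -> ~11-12 days,
# in line with the measured 11.2-day orbit.
P = 2.0 * math.pi * math.sqrt(a**3 / (G * M_star))
print(f"orbital period: {P / 86400:.1f} days")

# Radial-velocity semi-amplitude (circular orbit, m_p << M_star):
#   K = (2*pi*G/P)**(1/3) * m_p*sin(i) / M_star**(2/3)
K = (2.0 * math.pi * G / P) ** (1.0 / 3.0) * m_p / M_star ** (2.0 / 3.0)
print(f"stellar wobble: {K:.2f} m/s")   # ~1.5 m/s back-and-forth
```

A wobble of barely more than a metre per second – walking pace – is what HARPS had to extract from the starlight, which illustrates why such detections demand extreme spectrograph stability.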

During the first half of 2016, Proxima Centauri was regularly observed with the HARPS spectrograph on the ESO 3.6 m telescope at La Silla in Chile, and simultaneously monitored by other telescopes around the world. This campaign, which was led by Guillem Anglada-Escudé of Queen Mary University of London and shared publicly online as it happened, was called the Pale Red Dot.

The final results have now been published, concluding with a discussion on the habitability of the planet. Whether there is an atmosphere and liquid water on the surface is the subject of intense debate because red-dwarf stars can display quite violent behaviour. The main threats identified in the paper are tidal locking (for example, does the planet always present the same face to the star, as does our Moon?), strong stellar magnetic fields and strong flares with high ultraviolet and X-ray fluxes. Whereas robotic exploration is some time away, the future European Extremely Large Telescope (E-ELT) should be able to see the planet and probe its atmosphere spectroscopically.

The post Earth-like planet orbits our nearest star appeared first on CERN Courier.

]]>
https://cerncourier.com/a/earth-like-planet-orbits-our-nearest-star/feed/ 0 News Astronomers have found clear evidence of a planet orbiting the closest star to Earth, Proxima Centauri. https://cerncourier.com/wp-content/uploads/2016/09/CCsci16_08_16.jpg
Galactic map sheds light on dark energy https://cerncourier.com/a/galactic-map-sheds-light-on-dark-energy/ https://cerncourier.com/a/galactic-map-sheds-light-on-dark-energy/#respond Fri, 12 Aug 2016 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/galactic-map-sheds-light-on-dark-energy/ The map shows galaxies being pulled towards each other by dark matter, while on much larger scales it reveals the effect of dark energy ripping the universe apart.

The post Galactic map sheds light on dark energy appeared first on CERN Courier.

]]>

The largest 3D map of distant galaxies ever made has allowed one of the most precise measurements yet of dark energy, which is currently driving the accelerating expansion of the universe. The new measurements, which were carried out by the Baryon Oscillation Spectroscopic Survey (BOSS) programme of the Sloan Digital Sky Survey-III, took five years to make and include 1.2 million galaxies over one quarter of the sky – equating to a volume of 650 cubic billion light-years, a cube roughly 8.7 billion light-years on a side.

BOSS measures the expansion rate by determining the size of baryonic acoustic oscillations, which are remnants of primordial acoustic waves. “We see a dramatic connection between the sound-wave imprints seen in the cosmic microwave background to the clustering of galaxies 7–12 billion years later,” says co-leader of the BOSS galaxy-clustering working group Rita Tojeiro. “The ability to observe a single well-modelled physical effect from recombination until today is a great boon for cosmology.”

The map shows galaxies being pulled towards each other by dark matter, while on much larger scales it reveals the effect of dark energy ripping the universe apart. It also reveals the coherent movement of galaxies toward regions of the universe with more matter, with the observed amount of in-fall explained well by general relativity. The results have been submitted to the Monthly Notices of the Royal Astronomical Society.

The post Galactic map sheds light on dark energy appeared first on CERN Courier.

]]>
https://cerncourier.com/a/galactic-map-sheds-light-on-dark-energy/feed/ 0 News The map shows galaxies being pulled towards each other by dark matter, while on much larger scales it reveals the effect of dark energy ripping the universe apart. https://cerncourier.com/wp-content/uploads/2016/08/CCnew16_07_16.jpg
Hitomi probes turbulence in galaxy cluster https://cerncourier.com/a/hitomi-probes-turbulence-in-galaxy-cluster/ https://cerncourier.com/a/hitomi-probes-turbulence-in-galaxy-cluster/#respond Fri, 12 Aug 2016 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/hitomi-probes-turbulence-in-galaxy-cluster/ With its very first observation, Japan’s Hitomi X-ray satellite has discovered that the gas in the Perseus cluster of galaxies is much less turbulent than expected.

The post Hitomi probes turbulence in galaxy cluster appeared first on CERN Courier.

]]>

With its very first observation, Japan’s Hitomi X-ray satellite has discovered that the gas in the Perseus cluster of galaxies is much less turbulent than expected. The unprecedented measurement opens the way towards a better determination of the mass of galaxy clusters, which has important cosmological implications.

Hitomi, which translates to “pupil of the eye”, is an X-ray observatory built and operated by the Japanese space agency (JAXA) in collaboration with more than 60 institutes and 200 scientists and engineers from Japan, the US, Canada and Europe. Launched on 17 February this year, Hitomi functioned for just over a month before operators lost contact on 26 March, when the spacecraft started to spin very rapidly, leading to its partial disintegration. It was a tragic end to a very promising mission that would have used a micro-calorimeter to achieve unprecedented spectral resolution in X-rays. Cooled down to 0.05 K, the soft X-ray spectrometer (SXS) was designed to record the precise energy of each incoming X-ray photon.

The cluster gas has very little turbulent motion

Hitomi targeted the Perseus cluster just a week after it arrived in space to measure the turbulence in the cluster to a precision of 10 km s⁻¹, compared with the upper limit set by XMM-Newton of 500 km s⁻¹. The SXS micro-calorimeter met expectations and measured a velocity of only 164±10 km s⁻¹ along the line-of-sight. This low velocity came as a surprise for the Hitomi collaboration, especially because at the core of the cluster lies the highly energetic active galaxy NGC 1275. It indicates that the cluster gas has very little turbulent motion, with the turbulent pressure being only four per cent of the heat pressure of the hot intra-cluster gas. This is extraordinary, considering that NGC 1275 is pumping jetted energy into its surroundings to create bubbles of extremely hot gas.
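The four per cent figure can be reproduced with a one-line estimate, taking the measured line-of-sight velocity as the turbulent speed and assuming a core temperature of about 4 keV and the usual mean molecular weight μ ≈ 0.6 of a fully ionised plasma:

```python
# Ratio of turbulent to thermal pressure in the intracluster gas:
#   P_turb / P_therm ~ mu * m_p * v_turb**2 / (k*T)
M_P = 1.673e-27       # proton mass, kg
KEV = 1.602e-16       # 1 keV in joules
MU  = 0.6             # mean molecular weight of an ionised plasma

v_turb = 164e3        # Hitomi line-of-sight velocity, m/s
kT     = 4.0 * KEV    # assumed Perseus core temperature, ~4 keV

ratio = MU * M_P * v_turb**2 / kT
print(f"P_turb / P_therm = {ratio:.3f}")   # ~0.04, i.e. about 4 per cent
```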

Previously, it was thought that these bubbles induce turbulence, which keeps the central gas hot, but researchers now have to think of other ways to heat the gas. One possibility is sound waves, which would allow energy to be spread into the medium without global movement of the gas. The precise determination of the turbulence in the Perseus cluster allows a better estimate of its mass, which depends on the ratio of turbulent to quiescent gas. Generalising the result of an almost negligible contribution of turbulent pressure in the central core of galaxy clusters impacts not just cluster physics but also cosmological simulations.

The impressive results of Hitomi only reinforce astronomers’ sense of loss. As this and several missions have shown, equipping an X-ray satellite with a micro-calorimeter is a daunting challenge. NASA’s Chandra X-ray Observatory, which launched in 1999, dropped the idea due to budget constraints. JAXA took over the calorimeter challenge on its ASTRO-E spacecraft, but the probe was destroyed in 2000 shortly after rocket lift-off. This was followed by the Suzaku satellite, launched in 2005, in which a leak in the cooling system destroyed the calorimeter. This series of failures is especially dramatic for the scientists and engineers developing such high-precision instruments over two decades – especially in the case of Hitomi, for which the SXS instrument worked perfectly until the loss of the satellite due to problems with the attitude control. Researchers may now have to wait more than a decade to use a micro-calorimeter in space, until ESA’s Athena mission, which is tentatively scheduled for launch in the late 2020s.

The post Hitomi probes turbulence in galaxy cluster appeared first on CERN Courier.

]]>
https://cerncourier.com/a/hitomi-probes-turbulence-in-galaxy-cluster/feed/ 0 News With its very first observation, Japan’s Hitomi X-ray satellite has discovered that the gas in the Perseus cluster of galaxies is much less turbulent than expected. https://cerncourier.com/wp-content/uploads/2016/08/CCast1_07_16.jpg
ESO signs largest ever ground-based astronomy contract https://cerncourier.com/a/eso-signs-largest-ever-ground-based-astronomy-contract/ https://cerncourier.com/a/eso-signs-largest-ever-ground-based-astronomy-contract/#respond Fri, 08 Jul 2016 08:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/eso-signs-largest-ever-ground-based-astronomy-contract/ The European Extremely Large Telescope (E-ELT) will be the largest optical/near-infrared telescope in the world.

The post ESO signs largest ever ground-based astronomy contract appeared first on CERN Courier.

]]>

The European Extremely Large Telescope (E-ELT) will be the largest optical/near-infrared telescope in the world, boasting a primary mirror 39 m in diameter. Its aim is to measure the properties of the first stars and galaxies and to probe the nature of dark matter and dark energy, in addition to tracking down Earth-like planets.

At a ceremony in Garching bei München, Germany, on 25 May, the European Southern Observatory (ESO) signed a contract with the ACe Consortium for the construction of the dome and telescope structure of the E-ELT. With an approximate value of €400 million it is the largest contract ever awarded by ESO and the largest contract ever in ground-based astronomy. The occasion also saw the unveiling of the construction design of the E-ELT, which is due to enter operation in 2024.

The construction of the E-ELT dome and telescope structure can now commence, taking telescope engineering into new territory. The contract includes not only the enormous 85 m-diameter rotating dome, with a total mass of around 5000 tonnes, but also the telescope mounting and tube structure, with a total moving mass of more than 3000 tonnes. Both of these structures are by far the largest ever built for an optical/infrared telescope and dwarf all existing ones.

The E-ELT is being built on Cerro Armazones, a 3000 m-high peak about 20 km from ESO’s Paranal Observatory. The access road and levelling of the summit have already been completed and work on the dome is expected to start on site in 2017.

The post ESO signs largest ever ground-based astronomy contract appeared first on CERN Courier.

]]>
https://cerncourier.com/a/eso-signs-largest-ever-ground-based-astronomy-contract/feed/ 0 News The European Extremely Large Telescope (E-ELT) will be the largest optical/near-infrared telescope in the world. https://cerncourier.com/wp-content/uploads/2016/07/CCnew12_06_16.jpg
Neutron-star mergers create heaviest elements https://cerncourier.com/a/neutron-star-mergers-create-heaviest-elements/ https://cerncourier.com/a/neutron-star-mergers-create-heaviest-elements/#respond Fri, 08 Jul 2016 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/neutron-star-mergers-create-heaviest-elements/ While core-collapse supernovae were thought to be the prime production site, a new study suggests that elements heavier than zinc originate from the merger of two neutron stars.

The post Neutron-star mergers create heaviest elements appeared first on CERN Courier.

]]>

The origin of some of the heaviest chemical elements is due to rapid neutron capture, but the precise location where this cosmic alchemy takes place has been under debate for several decades. While core-collapse supernovae were thought to be the prime production site, a new study suggests that elements heavier than zinc originate from the merger of two neutron stars. Such a dramatic event would have been responsible for the extreme heavy-element enrichment observed in several stars of an ancient dwarf galaxy called Reticulum II.

Nuclear fusion in the core of massive stars produces elements up to and including iron, which is a stable nucleus with the highest binding energy per nucleon. Building heavier nuclei requires a net input of energy to compensate for the loss of nuclear binding, which is why stellar fusion effectively stops at iron. Under certain conditions, however, stars can produce heavier elements through the capture of protons or neutrons by existing nuclei.

The relative abundance of certain elements therefore tells researchers whether nucleosynthesis followed an s- or an r-process.

Neutron capture, which is unaffected by Coulomb repulsion, occurs either slowly (s) or rapidly (r). Slow neutron captures occur at a pace that allows the nucleus to undergo beta decay prior to a new capture, and therefore to grow following the line of nuclear stability. The r-process, on the other hand, causes a nucleus to accumulate many additional neutrons prior to radioactive decay. The relative abundance of certain elements therefore tells researchers whether nucleosynthesis followed an s- or an r-process. The rare-earth element europium is a typical r-process element, as are gold, lead and uranium.
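The two regimes are set by a competition between the neutron-capture timescale τn and the beta-decay timescale τβ:

```latex
% Slow (s) and rapid (r) neutron-capture regimes:
\text{s-process:}\quad \tau_{n} \gg \tau_{\beta}
  \;\Rightarrow\; \text{nuclei grow along the valley of stability}
\\[4pt]
\text{r-process:}\quad \tau_{n} \ll \tau_{\beta}
  \;\Rightarrow\; \text{very neutron-rich nuclei form, later decaying}
  \text{ towards Eu, Au, Pb, U,}\ldots
```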

For the r-process to work, nuclei need to be under heavy neutron bombardment in conditions that are only found in dramatic events such as a core-collapse supernova or in mergers of two neutron stars. The supernova hypothesis has long been the most probable candidate for the r-process, whereas other scenarios involving rarer events, such as encounters between a neutron star and a black hole, have only been considered since the 1970s. One way to distinguish between the two hypotheses is to study low-metallicity galaxies in which the enrichment of heavy elements is low. This enables astrophysicists to determine if the enrichment is a continuous process or the result of rare events, which would result in stronger differences from one galaxy to the other.

Alexander Ji from the Massachusetts Institute of Technology, US, and colleagues were lucky to find extreme relative abundances of r-process elements in stars located in the ultra-faint dwarf galaxy Reticulum II. Although nearby and in orbit around the Milky Way, this galaxy was only recently discovered and found to be among the most metal-poor galaxies known. This means that Reticulum II formed all of its stars within about the first three billion years after the Big Bang, and is therefore enriched in elements heavier than helium by only a few generations of stars.

High-resolution spectroscopic measurements of the nine brightest stars in Reticulum II carried out by the team indicate a very strong excess of europium and barium compared with iron in seven of the stars. These abundances exceed those in any other ultra-faint dwarf galaxy by two to three orders of magnitude, suggesting that a single rare event produced these r-process elements. The results also show that this event could be a neutron-star merger, but not an ordinary core-collapse supernova. Although it is not possible to conclude that the majority of our gold and uranium comes from neutron-star mergers, the study certainly gives more weight to such a hypothesis in the 60-year-long debate about the origin of r-process elements.

Protons accelerated to PeV energies https://cerncourier.com/a/protons-accelerated-to-pev-energies/ https://cerncourier.com/a/protons-accelerated-to-pev-energies/#respond Fri, 20 May 2016 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/protons-accelerated-to-pev-energies/ The HESS collaboration has now found evidence that there is a "Pevatron" in the central 33 light-years of the Milky Way.

The High Energy Stereoscopic System (HESS) – an array of Cherenkov telescopes in Namibia – has detected gamma-ray emission from the central region of the Milky Way at energies never reached before. The likely source of this diffuse emission is the supermassive black hole at the centre of our Galaxy, which would have accelerated protons to peta-electron-volt (PeV) energies.

The Earth is constantly bombarded by high-energy particles (protons, electrons and atomic nuclei). Being electrically charged, these cosmic rays are randomly deflected by the turbulent magnetic field pervading our Galaxy. This makes it impossible to identify their sources directly, and has led to a century-long mystery as to their origin. A way to overcome this limitation is to look at gamma rays produced by the interaction of cosmic rays with light and gas in the neighbourhood of their source. These gamma rays travel in straight lines, undeflected by magnetic fields, and can therefore be traced back to their origin.

When a very-high-energy gamma ray reaches the Earth, it interacts with a molecule in the upper atmosphere, producing a shower of secondary particles that emit a short pulse of Cherenkov light. By detecting these flashes of light using telescopes equipped with large mirrors, sensitive photodetectors, and fast electronics, more than 100 sources of very-high-energy gamma rays have been identified over the past three decades. HESS is the only state-of-the-art array of Cherenkov telescopes that is located in the southern hemisphere – a perfect viewpoint for the centre of the Milky Way.

Earlier observations have shown that cosmic rays with energies up to approximately 100 tera-electron-volts (TeV) are produced by supernova remnants and pulsar-wind nebulae. Although theoretical arguments and direct measurements of cosmic rays suggest a galactic origin for particles up to PeV energies, the search for such a “Pevatron” accelerator has so far been unsuccessful.

The HESS collaboration has now found evidence that there is a “Pevatron” in the central 33 light-years of the Galaxy. This result, published in Nature, is based on deep observations – obtained between 2004 and 2013 – of the surrounding giant molecular clouds, which extend over approximately 500 light-years. The production of PeV protons is deduced from the measured spectrum of gamma rays, which is a power law extending to multi-TeV energies without showing a high-energy cut-off. The spatial localisation comes from the observation that the cosmic-ray density decreases with a 1/r relation, where r is the distance from the galactic centre. The 1/r profile indicates a quasi-continuous central injection of protons over at least about 1000 years.
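
A simple way to see why the absence of a cut-off matters: in proton–proton interactions a gamma ray typically carries about a tenth of the parent proton’s energy, so a proton spectrum ending below a PeV would imprint an exponential suppression on the multi-TeV gamma rays. The sketch below, with assumed spectral parameters rather than the HESS fit values, quantifies that suppression.

```python
# How an exponential cut-off in the parent-proton spectrum would suppress
# the gamma-ray flux relative to a pure power law. The cut-off energy is an
# assumption for illustration, not the HESS fit.
import math

def cutoff_suppression(e_gamma_tev, e_cut_gamma_tev):
    """Flux ratio of a cut-off power law to a pure power law at E_gamma."""
    return math.exp(-e_gamma_tev / e_cut_gamma_tev)

# Gamma rays carry roughly 10% of the proton energy, so a 1 PeV proton
# cut-off corresponds to a gamma-ray cut-off near 100 TeV (assumed here).
E_CUT_GAMMA = 100.0  # TeV
for e in (1.0, 10.0, 50.0):
    s = cutoff_suppression(e, E_CUT_GAMMA)
    print(f"E_gamma = {e:>4.0f} TeV: flux x {s:.2f} relative to a pure power law")
```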

Given these properties, the most plausible source of PeV protons is Sagittarius A*, the supermassive black hole at the centre of our Galaxy. According to the authors, the acceleration could originate in the accretion flow in the immediate vicinity of the black hole or further away, where a fraction of the material falling towards the black hole is ejected back into the environment. However, to account for the bulk of PeV cosmic rays detected on Earth, the currently quiet supermassive black hole would have had to be much more active in the past million years. If true, this finding would dramatically influence the century-old debate concerning the origin of these enigmatic particles.

AugerPrime looks to the highest energies https://cerncourier.com/a/augerprime-looks-to-the-highest-energies/ https://cerncourier.com/a/augerprime-looks-to-the-highest-energies/#respond Fri, 20 May 2016 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/augerprime-looks-to-the-highest-energies/ The world’s largest cosmic-ray experiment, the Pierre Auger Observatory, is embarking on its next phase, named AugerPrime.

Since the start of its operations in 2004, the Auger Observatory has illuminated many of the open questions in cosmic-ray science. For example, it confirmed with high precision the suppression of the primary cosmic-ray energy spectrum for energies exceeding 5 × 10¹⁹ eV, as predicted by Kenneth Greisen, Georgiy Zatsepin and Vadim Kuzmin (the “GZK effect”). The collaboration has searched for possible extragalactic point sources of the highest-energy cosmic-ray particles ever observed, as well as for large-scale anisotropy of arrival directions in the sky (CERN Courier December 2007 p5). It has also published unexpected results about the specific particle types that reach the Earth from remote galaxies, referred to as the “mass composition” of the primary particles. The observatory has set the world’s most stringent upper limits on the flux of neutrinos and photons with EeV energies (1 EeV = 10¹⁸ eV). Furthermore, it contributes to our understanding of hadronic showers and interactions at centre-of-mass energies well above those accessible at the LHC, such as in its measurement of the proton–proton inelastic cross-section at √s = 57 TeV (CERN Courier September 2012 p6).

The current Auger Observatory

The Auger Observatory learns about high-energy cosmic rays from the extensive air showers they create in the atmosphere (CERN Courier July/August 2006 p12). These showers consist of billions of subatomic particles that rain down on the Earth’s surface, spread over a footprint of tens of square kilometres. Each air shower carries information about the primary cosmic-ray particle’s arrival direction, energy and particle type. An array of 1600 water-Cherenkov surface detectors, placed on a 1500 m grid covering 3000 km², samples some of these particles, while fluorescence detectors around the observatory’s perimeter observe the faint ultraviolet light the shower creates by exciting the air molecules it passes through. The surface detectors operate 24 hours a day, and are joined by fluorescence-detector measurements on clear moonless nights. The duty cycle for the fluorescence detectors is about 10% that of the surface detectors. An additional 60 surface detectors in a region with a reduced 750 m spacing, known as the infill array, focus on detecting lower-energy air showers whose footprint is smaller than that of showers at the highest energies. Each surface-detector station (see image above) is self-powered by a solar panel, which charges batteries in a box attached to the tank (at left in the image), enabling the detectors to operate day and night. An array of 153 radio antennas, named AERA and spread over a 17 km² area, complements the surface detectors and fluorescence detectors. The antennas are sensitive to coherent radiation emitted in the frequency range 30–80 MHz by air-shower electrons and positrons deflected in the Earth’s magnetic field.

The motivation for AugerPrime and its detector upgrades

The primary motivation for the AugerPrime detector upgrades is to understand how the suppressed energy spectrum and the mass composition of the primary cosmic-ray particles at the highest energies are related. Different primary particles, such as γ-rays, neutrinos, protons or heavier nuclei, create air showers with different average characteristics. To date, the observatory has deduced the average primary-particle mass at a given energy from measurements provided by the fluorescence detectors. These detectors are sensitive to the number of air-shower particles versus depth in the atmosphere through the varying intensity of the ultraviolet light emitted along the path of the shower. The atmospheric depth of the shower’s maximum number of particles, a quantity known as Xmax, is deeper in the atmosphere for proton-induced air showers relative to showers induced by heavier nuclei, such as iron, at a given primary energy. Owing to the 10% duty cycle of the fluorescence detectors, the mass-composition measurements using the Xmax technique do not currently extend into the energy region E > 5 × 10¹⁹ eV where the flux suppression is observed. AugerPrime will capitalise on another feature of air showers induced by different primary-mass particles, namely, the different abundances of muons, photons and electrons at the Earth’s surface. The main goal of AugerPrime is to measure the relative numbers of these shower particles to obtain a more precise handle on the primary cosmic-ray composition with increased statistics at the highest energies. This knowledge should reveal whether the flux suppression at the highest energies is a result of a GZK-like propagation effect or of astrophysical sources reaching a limit in their ability to accelerate the highest-energy primary particles.
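
The mass sensitivity of Xmax can be illustrated with the textbook superposition model, in which a nucleus of mass number A and energy E showers like A independent protons of energy E/A. The reference depth and elongation rate below are assumed round numbers, not Auger’s measured values.

```python
# Superposition-model sketch of the depth of shower maximum: a nucleus
# (A, E) behaves like A protons of energy E/A, so Xmax shifts down by the
# elongation rate times log10(A). Reference values are assumed, not fitted.
import math

XMAX_PROTON_1EEV = 750.0   # g/cm^2 for protons at 1 EeV (assumed)
ELONGATION_RATE = 55.0     # g/cm^2 per decade of energy (assumed)

def xmax(energy_eev, mass_number):
    """Approximate Xmax in g/cm^2 under the superposition model."""
    return XMAX_PROTON_1EEV + ELONGATION_RATE * math.log10(energy_eev / mass_number)

for mass, name in [(1, "proton"), (4, "helium"), (56, "iron")]:
    print(f"{name:>6} at 10 EeV: Xmax ~ {xmax(10.0, mass):.0f} g/cm^2")
```

The roughly 100 g/cm² proton–iron separation this toy model yields at a fixed energy is the handle that the Xmax technique exploits.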

The key to differentiating the ground-level air-shower particles lies in improving the detection capabilities of the surface array. AugerPrime will cover each of the 1660 water-Cherenkov surface detectors with planes of plastic-scintillator detectors measuring 4 m². Surface-detector stations with scintillators above the Cherenkov detectors will allow the Auger team to determine the electron/photon versus muon abundances of air showers more precisely compared with using the Cherenkov detectors alone. The scintillator planes will be housed in light-tight, weatherproof enclosures, attached to the existing water tanks with a sturdy support frame, as shown above. The scintillator light will be read out with wavelength-shifting fibres inserted into straight extruded holes in the scintillator planes, which are bundled and attached to photomultiplier tubes. Also above, an image shows how the green wavelength-shifting fibres emerge from the scintillator planes and are grouped into bundles. Because the surface detectors operate 24 hours a day, the AugerPrime upgrade will yield mass-composition information for the full data set collected in the future.
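
The idea of pairing the two detector layers can be reduced to a two-equations, two-unknowns problem: each layer responds to the electromagnetic and muonic shower components with different weights, so the pair of measured signals can be inverted. The response matrix below is invented for illustration and is not an Auger calibration.

```python
# Illustrative two-detector decomposition of an air-shower signal into its
# electromagnetic (EM) and muonic parts. The response weights are invented
# for illustration; the real Auger reconstruction is far more detailed.
import numpy as np

# measured_signal = a * EM + b * muons   (arbitrary units)
RESPONSE = np.array([
    [1.0, 1.9],   # water-Cherenkov detector: relatively muon-sensitive (assumed)
    [1.0, 0.4],   # thin scintillator: dominated by the EM component (assumed)
])

def unfold(wcd_signal, scint_signal):
    """Solve the 2x2 linear system for the EM and muon contributions."""
    return np.linalg.solve(RESPONSE, np.array([wcd_signal, scint_signal]))

em, muons = unfold(wcd_signal=120.0, scint_signal=60.0)
print(f"EM ~ {em:.0f}, muons ~ {muons:.0f} (arbitrary units)")
```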

The AugerPrime project also includes other detector improvements. The dynamic range of the Cherenkov detectors will be extended with the addition of a fourth photomultiplier tube. Its gain will be adjusted so that particle densities can be accurately measured close to the core of the highest-energy air showers. New electronics with faster sampling of the photomultiplier-tube signals will better identify the narrow peaks created by muons. New GPS receivers at each surface-detector station will provide better timing accuracy and calibration. A subproject of AugerPrime called AMIGA will consist of scintillator planes buried 1.3 m under the 60 surface detectors of the infill array. The AMIGA detectors are directly sensitive to the muon content of air showers, because the electromagnetic components are largely absorbed by the overburden.

The AugerPrime Symposium

In November 2015, the Auger scientists combined their biannual collaboration meeting in Malargüe, Argentina, with a meeting of its International Finance Board and dignitaries from many of its collaborating countries, to begin the new phase of the experiment in an AugerPrime Symposium. The Finance Board endorsed the development and construction of the AugerPrime detector upgrades, and a renewed international agreement was signed in a formal ceremony for continued operation of the experiment for an additional 10 years. The observatory’s spokesperson, Karl-Heinz Kampert from the University of Wuppertal, said: “The symposium marks a turning point for the observatory and we look forward to the exciting science that AugerPrime will enable us to pursue.”

While continuing to collect extensive air-shower data with its current detector configuration and publishing new results, the Auger Collaboration is focused on finalising the design for the upgraded AugerPrime detectors and making the transition to the construction phase at the many collaborating institutions worldwide. Subsequent installation of the new detector components on the Pampa Amarilla is no small task, with the 1660 surface detectors spread across such a large area. Each station must be accessed with all-terrain vehicles moving carefully on rough desert roads. But the collaboration is up to the challenge, and AugerPrime is foreseen to be completed in 2018 with essentially no interruption to current data-taking operations.

• For more information, see auger.org/augerprime.

CALET sees events in millions https://cerncourier.com/a/calet-sees-events-in-millions/ https://cerncourier.com/a/calet-sees-events-in-millions/#respond Fri, 15 Apr 2016 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/calet-sees-events-in-millions/ The CALorimetric Electron Telescope has observed more than a hundred million events at energies above 10 GeV.

Just a few months after its launch and the successful completion of the on-orbit commissioning phase aboard the International Space Station, the CALorimetric Electron Telescope (CALET) has started observations of high-energy charged particles and photons coming from space. To date, more than a hundred million events at energies above 10 GeV have been recorded and are under study.

CALET is a space mission led by JAXA with the participation of the Italian Space Agency (ASI) and NASA. CALET is also a CERN-recognised experiment; the collaboration used CERN’s beams to calibrate the instrument, which was launched from the Tanegashima Space Center on 19 August 2015, on board the Japanese H2-B rocket. After berthing with the ISS a few days later, CALET was robotically extracted from the transfer-vehicle HTV5, operated by JAXA, and installed on the external platform JEM-EF of the Japanese module (KIBO). The check-out phase went smoothly, and after data calibration and verification, CALET moved to regular observation mode in mid-October 2015. The data-taking will go on for a period of two years initially, with a target of five years.

CALET is designed to study electrons, nuclei and γ-rays coming from space. In particular, one of its main goals is to perform precision measurements of the detailed shape of the electron spectrum above 1 TeV. High-energy electrons are expected to come from within a few thousand light-years of Earth, as they quickly lose energy while travelling through space. Their detection might reveal the presence of nearby astronomical source(s) where electrons are accelerated. The high end of the spectrum is particularly interesting because it could provide a clue to possible signatures of dark matter.
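
The “nearby sources” argument can be checked with an order-of-magnitude estimate: electrons lose energy to synchrotron radiation and inverse-Compton scattering at a rate proportional to E², and can only diffuse a limited distance within the resulting cooling time. The loss and diffusion coefficients below are assumed typical interstellar values, so the sketch gives only the order of magnitude.

```python
# Rough estimate of how far a TeV electron can travel before synchrotron and
# inverse-Compton losses drain its energy. Both coefficients are assumed
# typical interstellar values; this is an order-of-magnitude sketch only.
import math

B_LOSS = 1e-16     # GeV^-1 s^-1, assumed E^2 loss coefficient
D_DIFF = 1e29      # cm^2/s, assumed diffusion coefficient at TeV energies
CM_PER_LY = 9.461e17
SEC_PER_YR = 3.156e7

E = 1000.0                         # electron energy in GeV (1 TeV)
cooling_time = 1.0 / (B_LOSS * E)  # seconds, since dE/dt = -b * E^2
distance = math.sqrt(2 * D_DIFF * cooling_time)  # diffusion length in cm

print(f"cooling time ~ {cooling_time / SEC_PER_YR:.1e} yr, "
      f"horizon ~ {distance / CM_PER_LY:.0f} light-years")
```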

The first data sets are confirming that all of the instruments are working extremely well. The event image above (raw data) shows the detailed shape of the development of a shower of secondary particles generated by the impact of a candidate electron with an estimated energy greater than 1 TeV. The high-resolution energy measurement is provided by CALET’s deep, homogeneous calorimeter equipped with lead-tungstate (PbWO₄) crystals preceded by a high-granularity (1 mm scintillating fibres) pre-shower calorimeter with advanced imaging capabilities. The depth of the instrument ensures good containment of electromagnetic showers in the TeV region.

In the coming months, thanks to its ability to identify cosmic nuclei from hydrogen to beyond iron, CALET will be able to study the high-energy hadronic component of cosmic rays. CALET will focus on the deviation from a pure power law that has been recently observed in the energy spectra of light nuclei. It will extend the present data to energies in the multi-TeV region with accurate measurements of the curvature of the spectrum as a function of energy, and of the abundance ratio of secondary to primary nuclei – an important ingredient to understand cosmic-ray propagation in the Galaxy.

Fast radio bursts reveal unexpected properties https://cerncourier.com/a/fast-radio-bursts-reveal-unexpected-properties/ https://cerncourier.com/a/fast-radio-bursts-reveal-unexpected-properties/#respond Fri, 15 Apr 2016 07:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/fast-radio-bursts-reveal-unexpected-properties/ Two studies show that fast radio bursts (FRBs) have a richer phenomenology than initially thought and might originate in two different classes.

Two studies show that fast radio bursts (FRBs) have a richer phenomenology than initially thought and might originate from two different classes of sources. One group could, for the first time, pinpoint the location of a FRB and constrain the baryon density of the intergalactic medium, while a second study has found repeated FRBs from the same source, which therefore cannot be of cataclysmic origin.

FRBs are very brief flashes of radio emission lasting just a few milliseconds. Although the first FRB was recorded in 2001, it was only detected and recognised as a new class of astronomical event six years later (CERN Courier November 2007 p10), when a re-analysis of the data searched for very short radio pulses; until then it had been overlooked. Since then, more than 10 other FRBs have been detected, all suggesting very powerful events occurring at cosmological distances (CERN Courier September 2013 p14). Unlike for gamma-ray bursts (GRBs), there is a way to infer the distance via the time delay between the arrival of the pulse at different radio frequencies. This delay increases towards lower radio frequencies and is proportional to the dispersion measure (DM): the integrated density of free electrons along the line of sight from the source to Earth.
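
The delay–frequency relation that defines the DM is simple enough to evaluate directly: for a cold plasma the arrival delay is, to good approximation, 4.15 ms × DM × ν⁻², with DM in pc cm⁻³ and ν in GHz. The DM value in the sketch below is an assumed, FRB-like number, not one from the papers.

```python
# Frequency-dependent arrival delay of a radio pulse propagating through
# ionised plasma; the DM value is an illustrative assumption.
def dispersion_delay_ms(dm_pc_cm3, freq_ghz):
    """Arrival delay in milliseconds relative to infinite frequency."""
    return 4.15 * dm_pc_cm3 * freq_ghz ** -2

DM = 800.0  # pc cm^-3, typical of an extragalactic FRB (assumed)
for nu in (1.4, 0.8, 0.4):  # observing frequencies in GHz
    print(f"{nu:.1f} GHz: delay = {dispersion_delay_ms(DM, nu):,.0f} ms")
```

The steep increase of the delay at lower frequencies is what lets a single dish measure the DM, and hence estimate a distance, from one burst.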

The real-time detection of a FRB at the Parkes radio telescope has now made it possible, for the first time, to quickly search for afterglow emission, which has routinely been done for GRBs for more than a decade (CERN Courier June 2003 p12). Only two hours after the burst, the Australia Telescope Compact Array (ATCA) observed the field and identified two variable compact sources. One of them was rapidly fading and is very likely the counterpart of the FRB. This achievement is reported in Nature by a collaboration led by Evan Keane of Swinburne University of Technology in Australia and project scientist of the Square Kilometre Array Organisation.

What makes the study so interesting is that the precise localisation of the afterglow allowed identification of the FRB’s host galaxy and therefore, via its redshift of z = 0.492 ± 0.008, the precise distance to the event. With this information, the DM can be used to measure the density of ionised baryons in the intergalactic medium. The value obtained, Ω_IGM = (4.9 ± 1.3)% of the critical density of the universe, is in good agreement with the cosmological determinations by the WMAP and Planck satellites.

The second paper, also published in Nature, reports the discovery of a series of FRBs from the same source. A total of 10 new bursts were recorded in May and June 2015, and correspond in location and DM to a FRB first detected in 2012. This unexpected behaviour was found by Paul Scholz, a PhD student at McGill University in Montreal, Canada, sifting through data from the Arecibo radio telescope in Puerto Rico. The recurrence of bursts on minute-long timescales cannot come from a cataclysmic event, but is likely to be from a young, highly magnetised neutron star, according to lead author Laura Spitler of the Max Planck Institute for Radioastronomy in Bonn, Germany. It is likely that this FRB is of a different nature to other FRBs.

The status of the field is reminiscent of that of GRBs in the 1990s, with the first afterglow detections and redshift determinations in 1997, and the earlier understanding that soft gamma repeaters are distinct from genuine extragalactic GRBs, which are cataclysmic events like supernova explosions and neutron star mergers.

Gamma-ray excess is not from dark matter https://cerncourier.com/a/gamma-ray-excess-is-not-from-dark-matter/ https://cerncourier.com/a/gamma-ray-excess-is-not-from-dark-matter/#respond Fri, 18 Mar 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/gamma-ray-excess-is-not-from-dark-matter/ Two research teams found that the gamma rays of the excess emission at the galactic centre are not distributed as expected from dark matter.

Two years ago, an excess of gamma rays at energies of a few GeV was identified as a promising candidate for a dark-matter signal. Now, a pair of research articles refutes this interpretation by showing that the excess photons detected by the Fermi Gamma-ray Space Telescope are not smoothly distributed, as would be expected for dark-matter annihilation. Their clustering instead reveals a population of unresolved point sources, likely millisecond pulsars.

The Milky Way is thought to be embedded in a dark-matter halo with a density gradient increasing towards the galactic centre. The central region of our Galaxy is therefore a prime target to find an electromagnetic signal from dark-matter annihilation. If dark matter is made of weakly interacting massive particles (WIMPs) heavier than protons, such a signal would naturally be in the GeV energy band. A diffuse gamma-ray emission detected by the Fermi satellite and having properties compatible with a dark-matter origin created hope in recent years of finally detecting this elusive form of matter more directly than only through gravitational effects.

Two independent studies published in Physical Review Letters are now disproving this interpretation. Using different statistical-analysis methods, the two research teams found that the gamma rays of the excess emission at the galactic centre are not distributed as expected from dark matter. They both find evidence for a population of unresolved point sources instead of a smooth distribution.

The first study, led by Richard Bartels of the University of Amsterdam, the Netherlands, uses a wavelet transformation of the Fermi gamma-ray images. The technique consists of convolving the photon-count map with a wavelet kernel shaped like a Mexican hat, with a width tuned near the Fermi angular resolution of 0.4° in the relevant energy band of 1–4 GeV. The intensity distribution of the derived wavelet peaks is found to be inconsistent with that expected from a truly diffuse origin of the emission. The distribution suggests instead that the entire excess emission is due to a population of mostly undetected point sources with characteristics matching those of millisecond pulsars.
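
The essence of the wavelet test can be sketched in a few lines: convolving a counts map with a Mexican-hat kernel, which is proportional to the negative Laplacian of a Gaussian, makes point-like structures at the resolution scale stand out against a diffuse background. The map, pixel scale and injected source below are synthetic illustrations, not Fermi data or the paper’s pipeline.

```python
# Toy Mexican-hat wavelet filtering of a photon-count map: diffuse Poisson
# background plus one injected (unsmeared, simplified) point source.
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(0)
PIX_DEG = 0.1                       # assumed pixel size in degrees
counts = rng.poisson(5.0, size=(80, 80)).astype(float)  # diffuse background
counts[40, 40] += 40.0              # one unresolved point source

# Mexican-hat response = -LoG; scale matched to the ~0.4 deg resolution
sigma_pix = 0.4 / PIX_DEG
peaks = -gaussian_laplace(counts, sigma=sigma_pix)

ij = np.unravel_index(np.argmax(peaks), peaks.shape)
print(f"strongest wavelet peak at pixel {ij} (source injected at (40, 40))")
```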

These results are corroborated by a second study, led by Samuel Lee of the Broad Institute in Cambridge and Princeton University. This US team used a new statistical method – called a non-Poissonian template fit – to estimate the contribution of unresolved point sources to the gamma-ray excess emission at the galactic centre. The team’s results predict a new population of hundreds of point sources hiding below Fermi’s detection threshold. Detecting the brightest of them with ongoing observations in the years to come would confirm this prediction.

In the coming decade, new facilities at radio frequencies will be able to detect hundreds of new millisecond pulsars in the central region of the Milky Way. This would definitively rule out the dark-matter interpretation of the GeV excess seen by Fermi. In the meantime, the quest towards identifying the nature of dark matter will go on, but little by little the possibilities are narrowing down.

LIGO: a strong belief https://cerncourier.com/a/ligo-a-strong-belief/ https://cerncourier.com/a/ligo-a-strong-belief/#respond Fri, 18 Mar 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/ligo-a-strong-belief/ Twenty years of hard work with a strong belief that the endeavour would lead to a historic breakthrough.

On 11 February, the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo collaborations published a historic paper reporting a gravitational-wave signal emitted by the merger of two black holes. The signal was observed with 5σ significance and constitutes the first direct observation of gravitational waves.

This result comes after 20 years of hard work by a large collaboration of scientists operating the two LIGO observatories in the US. Barry Barish, Linde professor of physics, emeritus, at the California Institute of Technology and former director of the Global Design Effort for the International Linear Collider (ILC), led the LIGO endeavour from 1994 to 2005. On the day of the official announcement to the scientific community and the public, Barish was at CERN to give a landmark seminar that captivated the whole audience gathered in the packed Main Auditorium.

The CERN Courier had the unique opportunity to interview Barish just after the announcement.

Professor Barish, this achievement comes after 20 years of hard work, uncertainties and challenges. This is what research is all about, but what was the greatest challenge you had to overcome during this long period?

It really was to do anything that takes 20 years and still be supported and have the energy to reach completion. We started long before that, but the project itself started in 1994. LIGO is an incredible technical achievement. The idea that you can take on high risk in such a scientific endeavour requires a lot of support, diligence and perseverance. In 1994, we convinced the US National Science Foundation to fund the project, which became the biggest programme the foundation had ever funded. After that, it took us 10 years to build it and to make it work well, plus 10 years to improve the sensitivity and bring it to the point where we were able to detect the gravitational waves. And along the way no one had done this before.

Indeed, the experimental set-up we used to detect the gravitational signal is an enormous extrapolation from anything that was done before. As a physicist, you learn that extrapolating a factor of two can be within reach, but a factor of 10 sounds already like a dream. If you compare the first 40 m interferometer we built on the CALTECH campus with the two 4000 m interferometers we have now, you already have an idea of the enormous leap we had to make. The leap of 100 in size also involved at least that in complexity and sophistication, eventually achieving more than 10,000 times the sensitivity of the original 40 m prototype.

The experimental confirmation of the existence of the gravitational waves could have a profound impact on the future of astrophysics and gravitational physics. What do you think are the most important consequences of the discovery?

The discovery opens two new areas of research for physics. One is on the general-relativity theory itself. Gravitational waves are a powerful way of testing the heart of the theory by investigating the strong-field realm of gravitational physics. Even with just this first event – the merging of two black holes – we have created a true laboratory where you can study all of this, and understanding general relativity at an absolutely fundamental level is now opening up.

The second huge consequence of the discovery is that we can now look at the universe with a completely new “telescope”. So far, we have used and built all kinds of telescopes: infrared, ultraviolet, radio, optical… And the idea of recent years has been to look at the same things in different wavelength bands.

However, no such previous instrument could have seen what we saw with the LIGO interferometers. Nature has been so generous with us that the very first event we have seen is new astrophysics, as astronomers had never seen stellar black holes of these masses. With just the first glimpse at the universe with gravitational waves, we now know that they exist in pairs and that they can merge. This is all new astrophysics. When we designed LIGO, we thought that the first thing we would see gravitational waves emitted by was neutron stars. It would still be a huge discovery, but it would not be new astrophysical information. We have been really lucky.

Over the next century, this field will provide a completely new way of doing an incredible amount of new science. And somehow we had a glimpse of that with the first single event.

What were your feelings upon seeing the event on your screen?

We initially thought that it could be some instrumental crazy thing. We had to worry about many possible instrumental glitches, including whether someone had purposely injected a fake event into our data stream. To carefully check the origin of the signal, we tracked back the formation of the event data from the two interferometers, and we could see that the signal was recorded within seven milliseconds – exactly the time we expect for the same event to appear on the second interferometer. The two signals were perfectly consistent, and this gave us total trust in our data.

I must admit that I was personally worried as, in physics, it is always very dangerous to claim anything with only one event. However, we proceeded to perform the analysis in the most rigorous way and, indeed, we followed the normal publication path, namely the submission of the paper to the referees. They confirmed that what we submitted was scientifically well-justified. In this way, we had the green light to announcing the discovery to the public.

At the seminar you were welcomed very warmly by the audience. It was a great honour for the CERN audience to have you give the talk in person, just after your colleagues’ announcement in the US. What are you bringing back from this experience?

I was very happy to be presenting this important achievement in the temple of science. The thing that made me feel that we made the case well was that people were interested in what we have done and are doing. In the packed audience, nobody seemed to question our methodology, analysis or the validity of our result. We have one single event, but this was good enough to convince me and also my colleagues that it was a true discovery. I enjoyed receiving all of the science questions from the audience – it was really a great moment for me.

• The LIGO and Virgo collaborations are currently working on analysing the rest of the data from the run that ended on 12 January. New information is expected to be published in the coming months. In the meantime, the discovery event is available in open data (see https://losc.ligo.org) for anyone who wants to analyse it.

Neutrons in full flight at CERN’s n_TOF facility https://cerncourier.com/a/neutrons-in-full-flight-at-cerns-n-tof-facility/ https://cerncourier.com/a/neutrons-in-full-flight-at-cerns-n-tof-facility/#respond Fri, 18 Mar 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/neutrons-in-full-flight-at-cerns-n_tof-facility/ The facility features two beamlines and two experimental halls.

Accurate knowledge of the interaction probability of neutrons with nuclei is a key input in many fields of research. At CERN, pulsed proton bunches from the Proton Synchrotron (PS) hit a lead spallation target and produce beams of neutrons with unique characteristics. This allows scientists to perform high-resolution measurements, particularly on radioactive samples.

The story of the n_TOF facility goes back to 1998, when Carlo Rubbia and colleagues proposed the idea of building a neutron facility to measure neutron-reaction data needed for the development of an energy amplifier. The facility eventually became fully operational in 2001, with a scientific programme covering neutron-induced reactions relevant for nuclear astrophysics, nuclear technology and basic nuclear science. During the first major upgrade of the facility in 2009, the old spallation target was removed and replaced by a new target with an optimised design, which included a decoupled cooling and moderation circuit that allowed the use of borated water to reduce the background due to in-beam hydrogen-capture γ rays. A second improvement was the construction of a long-awaited “class-A” workplace, which made it possible to use unsealed radioactive isotopes in the first experimental area (EAR1) at 200 m from the spallation target. In 2014, n_TOF was completed with the construction of a second, vertical beamline and a new experimental area – EAR2.

One of the most striking features of neutron–nucleus interactions is the resonance structures observed in the reaction cross-sections at low incident neutron energies. Because the electrically neutral neutron has no Coulomb barrier to overcome, and has a negligible interaction with the electrons in matter, it can directly penetrate and interact with the atomic nucleus, even at very low kinetic energies of the order of electron-volts. The cross-sections can show variations of several orders of magnitude on an energy scale of only a few eV. The origin of these resonances is related to the excitation of nuclear states in the compound nuclear system formed by the neutron and the target nucleus, at excitation energies lying above the neutron binding energy of typically several MeV. In figure 1, the main cross-sections for a typical heavy nucleus are shown as a function of energy. The position and extent of the resonance structures depend on the nucleus. Also shown on the same energy scale are Maxwellian neutron energy distributions for neutrons fully moderated by water at room temperature, for fission neutrons, and for typical neutron spectra in the region from 5 to 100 keV, corresponding to the temperatures in stellar environments of importance for nucleosynthesis.

In nuclear astrophysics, an intriguing topic is understanding the formation of nuclei present in the universe and the origin of chemical elements. Hydrogen and smaller amounts of He and Li were created in the early universe by primordial nucleosynthesis. Nuclear reactions in stars are at the origin of nearly all other nuclei, and most nuclei heavier than iron are produced by neutron capture in stellar nucleosynthesis. Neutron-induced reaction cross-sections also reveal the nuclear-level structure in the vicinity of the neutron binding energy of nuclei. Insight into the properties of these levels brings crucial input to nuclear-level density models. Finally, neutron-induced reaction cross-sections are a key ingredient in applications of nuclear technology, including future developments in medical applications and the transmutation of nuclear waste, accelerator-driven systems and nuclear-fuel-cycle investigations.

The wide neutron energy range is one of the key features of the n_TOF facility. The kinetic energy of the particles is directly related to their time-of-flight: the start time is given by the impact of the proton beam on the spallation target, and the arrival time is measured in the EAR1 and EAR2 experimental areas. The highest neutron energies are directly related to the 20 GeV/c proton-induced spallation reactions in the lead target. Neutrons are subsequently partially moderated to cover the full energy range. Energies as low as about 10 meV, corresponding to long times of flight, can be exploited and measured at n_TOF because its pulsed bunches, sent by the PS, are spaced by multiples of 1.2 s. This allows long times of flight to be measured without any overlap into the next neutron cycle.
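
The time–energy relation at the heart of the facility is easily made explicit: the kinetic energy follows from the flight path and the measured time via the relativistic relation E_k = (γ − 1)m_n c². The flight paths follow the text; the sampled times below are illustrative.

```python
# Converting a measured time-of-flight into neutron kinetic energy for the
# 200 m (EAR1) flight path, using E_k = (gamma - 1) * m_n * c^2.
import math

M_N_C2 = 939.565e6   # neutron rest energy in eV
C = 299_792_458.0    # speed of light in m/s

def kinetic_energy_ev(flight_path_m, tof_s):
    beta = flight_path_m / (C * tof_s)
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma - 1.0) * M_N_C2

for tof in (1e-6, 1e-3, 1.0):  # 1 us, 1 ms, 1 s after the proton pulse
    print(f"EAR1 (200 m), t = {tof:.0e} s: "
          f"E_k ~ {kinetic_energy_ev(200.0, tof):.3e} eV")
```

A neutron arriving a full second after the proton pulse carries only a fraction of a meV, while a microsecond flight time corresponds to hundreds of MeV – spanning the full range discussed above.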

Higher flux

Another unique characteristic of n_TOF is the very high number of neutrons per proton burst, also called the instantaneous neutron flux. In the case of research with radioactive samples irradiated with the neutron beam, the high flux results in a very favourable ratio between the number of signals due to neutron-induced reactions and those due to radioactive-decay events, which contribute to the background. While the long flight path of EAR1 (200 m from the spallation target) results in a very high kinetic-energy resolution, the short flight path of EAR2 (20 m from the target) has a neutron flux that is higher than that of EAR1 by a factor of about 25. The neutron fluxes in EAR1 and EAR2 are shown in figure 2. The higher flux opens the possibility of measurements on nuclei with very low mass or low reaction cross-sections within a reasonable time. The flight distance, shorter by about a factor of 10, also ensures that the entire neutron-energy region is measured in a 10-times-shorter interval. For measurements of neutron-induced cross-sections on radioactive nuclei, this means 10 times fewer acquired detector signals due to radioactivity. The combination of the higher flux and the shorter time interval therefore results in an increase of the signal-to-noise ratio by a factor of 250 for radioactive samples. This characteristic of EAR2 was, for example, exploited in the first cross-section measurement in 2014, when the fission cross-section of the highly radioactive isotope ²⁴⁰Pu was successfully measured. An earlier attempt at this measurement in EAR1 was not conclusive. An example from 2015 is the measurement of the (n,α) cross-section of the also highly radioactive isotope ⁷Be, relevant for the cosmological Li problem in Big Bang nucleosynthesis.

The most important neutron-induced reactions that are measured at n_TOF are neutron-capture and neutron-fission reactions. Several detectors have been developed for this purpose. A 4π calorimeter consisting of 40 BaF₂ crystals has been in use for capture measurements since 2004. Several types of C₆D₆-based liquid-scintillator detectors are also used for measurements of capture γ rays. Different detectors have been developed for charged particles. For fission measurements, ionisation chambers, parallel-plate avalanche counters and the fission-fragment spectrometer STEFF have been operational. MicroMegas-based detectors have been used for fission and (n,α) measurements. Silicon detectors for measuring (n,α) and (n,p) reactions have been developed and used more recently, even for in-beam measurements.

The measurements at CERN’s neutron time-of-flight facility n_TOF, with its unique features, contribute substantially to our knowledge of neutron-induced reactions. This goes together with cutting-edge developments in detector technology and analysis techniques, the design of challenging experiments, and the training of a new generation of physicists working in neutron physics. This work has been actively supported since the beginning of n_TOF by the European Framework Programmes. A future development currently being studied is a possible upgrade of the spallation target, to optimise the characteristics of the neutron beam in EAR2. The n_TOF collaboration, consisting of about 150 researchers from 40 institutes, looks forward to another year of experiments from its scientific programme in both EAR1 and EAR2, continuing its 15-year history of measuring high-quality neutron-induced reaction data.


Further Reading
CERN-Proceedings-2015-001, p32

LUNA observes a rare nuclear reaction that occurs in giant red stars https://cerncourier.com/a/luna-observes-a-rare-nuclear-reaction-that-occurs-in-giant-red-stars/ https://cerncourier.com/a/luna-observes-a-rare-nuclear-reaction-that-occurs-in-giant-red-stars/#respond Fri, 12 Feb 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/luna-observes-a-rare-nuclear-reaction-that-occurs-in-giant-red-stars/ LUNA has observed three low-energy resonances in the neon-sodium cycle, responsible for sodium production in red giants and energy generation.

In December, the Laboratory for Underground Nuclear Astrophysics (LUNA) experiment reported the first direct observation of sodium production in giant red stars, one of the nuclear reactions that are fundamental to the formation of the elements that make up the universe.

LUNA is a compact linear accelerator for light ions (maximum energy 400 keV). A unique facility, it is installed in a deep-underground laboratory and shielded from cosmic rays. The experiment aims to study the nuclear reactions that take place inside stars, where elements that make up matter are formed and then driven out by gigantic explosions and scattered as cosmic dust.

For the first time, LUNA has observed three low-energy resonances in the neon–sodium cycle, the ²²Ne(p,γ)²³Na reaction, responsible for sodium production in red giants and energy generation. LUNA recreates the energy ranges of nuclear reactions and, with its accelerator, goes back in time to one hundred million years after the Big Bang, when the first stars formed and the processes that gave rise to the huge variety of elements in the universe started.

This result is an important piece in the puzzle of the origin of the elements in the universe, which LUNA has been studying for 25 years. Stars assemble atoms through a complex system of nuclear reactions. Only a very small fraction of these reactions have been studied at the energies existing inside stars, and a large part of those few cases have been observed using LUNA.

A high-purity germanium detector with relative efficiency up to 130% was used for this particular experiment, together with a windowless gas target filled with enriched gas. The rock surrounding the underground facility at the Gran Sasso National Laboratory and additional passive shielding protected the experiment from cosmic rays and ambient radiation, making the direct observation of such a rare process possible.

Is there a ‘ninth planet’ after all? https://cerncourier.com/a/is-there-a-ninth-planet-after-all/ https://cerncourier.com/a/is-there-a-ninth-planet-after-all/#respond Fri, 12 Feb 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/is-there-a-ninth-planet-after-all/ Mike Brown and one of his colleagues, the theorist Konstantin Batygin have found indications of the presence of a very distant heavy planet orbiting the Sun.

Pluto was considered to be the ninth planet of the solar system, until it was relegated to a “dwarf planet” by the International Astronomical Union (IAU) in 2006. It was judged to be too small among many other trans-Neptunian objects to be considered a real planet. Almost 10 years later, two astronomers have now found indications of the presence of a very distant heavy planet orbiting the Sun. While it is still to be detected, it is already causing a great deal of excitement in the scientific community and beyond.

Pluto was discovered in 1930 by a young American astronomer, Clyde Tombaugh, who tediously looked at innumerable photographic plates to detect an elusive planet moving relative to background stars. With the progressive discovery – since the 1990s – of hundreds of objects orbiting beyond Neptune, Pluto is no longer alone in the outer solar system. It even lost its status of the heaviest trans-Neptunian object with the discovery of Eris in 2003. This forced the IAU to rethink the definition of a planet and led to the exclusion of Pluto from the strict circle of eight planets.

Eris is not the only massive trans-Neptunian object found by Mike Brown, an astronomer of the California Institute of Technology (Caltech), US, and colleagues. There are also Quaoar (2002), Sedna (2003), Haumea (2004) and Makemake (2005), all only slightly smaller than Pluto and Eris. Despite these discoveries, almost nobody during recent years would have thought that there could still be a much bigger real planet in the outskirts of our solar system. But this is what Mike Brown and one of his colleagues, the theorist Konstantin Batygin, now propose.

The two astronomers deduced the existence of a ninth planet through mathematical modelling and computer simulations, but have not yet observed the object directly. The evidence comes from an unexpected clustering of perihelion positions and orbital planes of a group of objects just outside of the orbit of Neptune, in the so-called Kuiper belt. All six objects with the most elongated orbits – with semi-major axes greater than 250 AU – share similar perihelion positions and pole orientations. The combined statistical significance of this clustering is 3.8σ, assuming that Sedna and the five other peculiar planetoids have the same observational bias as other known Kuiper-belt objects.

Batygin and Brown then show that a planet with more than about 10 times the mass of the Earth, in a distant eccentric orbit anti-aligned with the six objects, would maintain the peculiar configuration of their orbits. This possible ninth planet would orbit the Sun about 20 times further out than Neptune, completing one full orbit only approximately once every 10,000 years. Batygin’s simulations of the effect of this new planet further predict the existence of a population of small planetoids in orbits perpendicular to the plane of the main planets. When Brown realised that such peculiar objects exist and had indeed already been identified, he became convinced of the existence of Planet Nine.
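
The quoted figures can be sanity-checked with Kepler’s third law, P[yr] = a[AU]^3/2 for a body orbiting the Sun. Taking “about 20 times further out than Neptune” literally gives a period of roughly 15,000 years, the same order as the figure above and within the 10,000–20,000-year range quoted by the discoverers.

```python
# Kepler's-third-law consistency check of the quoted orbit: P[yr] = a[AU]^1.5
# for a body orbiting one solar mass. The factor of 20 follows the text;
# published semi-major-axis estimates span a few hundred to ~1000 AU.
NEPTUNE_AU = 30.1

a = 20 * NEPTUNE_AU                  # ~600 AU
period_yr = a ** 1.5
print(f"a ~ {a:.0f} AU -> orbital period ~ {period_yr:,.0f} years")
```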

Observers now know along which orbit they should look for Planet Nine. If it happens to be found, this would be a major discovery: the third planet to be discovered since ancient times after Uranus and Neptune and, as with the latter, it would have been first predicted to exist via calculations.

The eye that looks at galaxies far, far away https://cerncourier.com/a/the-eye-that-looks-at-galaxies-far-far-away/ https://cerncourier.com/a/the-eye-that-looks-at-galaxies-far-far-away/#respond Fri, 12 Feb 2016 09:00:00 +0000 https://preview-courier.web.cern.ch:8888/Cern-mock-may/the-eye-that-looks-at-galaxies-far-far-away/ Take a virtual tour of ESO’s breathtaking installation.

Night is falling over Cerro Paranal, a 2600 m peak within the mountain range running along Chile’s Pacific coastline. As our eyes gradually become accustomed to total obscurity and we start to catch a glimpse of the profile of the domes on top of the Cerro, we are overwhelmed by the breathtaking view of the best starry sky we have ever seen. The centre of the Milky Way is hanging over our heads, together with the two Magellanic Clouds and the four stars of the Southern Cross. The galactic centre is so star-dense that it looks rather like a 3D object suspended in the sky.

Not a single artificial light source is polluting the site, which is literally in the middle of nowhere, because the closest inhabited area is about 130 km away. The air in the austral winter in the Atacama desert is cold, but there is almost no wind, and no noise can be heard as I walk in the shadow of four gigantic (30 m tall) metal domes housing the four 8.2 m-diameter fixed unit telescopes (UTs) and four 1.8 m-diameter movable auxiliary telescopes (ATs), that make up the Very Large Telescope (VLT). Yet dozens of astronomers are working not far away, in a building right below the platform on top of the Cerro, overlooking the almost permanent cloud blanket over the Pacific Ocean.

As we enter the control room, I immediately feel a sense of déjà vu: a dozen busy and mostly young astronomers are drinking coffee, eating crisps and talking in at least three different languages, grouped around five islands of computer terminals.

Welcome to the nerve centre of the most complex and advanced optical telescope in the world. From here, all of the instrumentation is remotely controlled through some 100 computers connected to the telescopes by bunches of optical fibres. Four islands are devoted to the operation of all of the components of the VLT telescopes, from their domes to the mirrors and the imaging detectors, and the fifth is entirely devoted to the controls of interferometry.

Highly specialised ESO astronomers take their night shifts in this room 300 nights per year, on average. Most observations are done in service mode (60–70% of the total time), with ESO staff doing observations for other astronomers within international projects that have gone through an evaluation process and have been approved. The service mode guarantees full flexibility to reschedule observations and match them with the most suitable atmospheric conditions. The rest of the time is “visitor mode”, with the astronomer in charge of the project leading the observations, which is particularly useful whenever any real-time decision is needed.

The shift leader tonight is an Italian from Padova. He swaps from one screen to the next, trying to ignore the television crew’s microphones and cameras, while giving verbal instructions to a young Australian student. He is activating one of the VLT’s adaptive-optics systems, hundreds of small pistons positioned under the mirrors to change their curvature up to thousands of times per second, to counteract any distortion caused by atmospheric turbulence. “Thanks to adaptive optics, the images obtained with the VLT are as sharp as if we were in space,” he explains briefly, before leaning back on one of the terminals.

Complex machinery

Adaptive optics is not the only astronomers’ dream come true at the VLT. The VLT’s four 8.2 m-diameter mirrors are the largest single-piece light-collecting surfaces in the world, and the best application of active optics – the trick ESO scientists use to correct for gravitationally induced deformations as the telescope changes its orientation, and so maintain the optical quality of the vast surface. The telescope mirrors are controlled by an active support system powered by more than 250 computers, working in parallel and positioned locally in each structure, to apply the necessary force to the mirrors to maintain their alignment with one another. The correcting forces have a precision of 5 g and keep the mirror in the ideal position, changing it every 3 minutes with 10 nm precision. The forces are applied on the basis of the analysis of the image of a real star, taken during the observations, so that the telescope is self-adjusting. The weight of the whole structure is incredibly low for its size. The 8.2 m-diameter reflecting surface is only 17 cm thick, and the whole mirror weighs 22 tonnes; its supporting cell weighs only 10 tonnes. Another technological marvel is the secondary mirror, a single-piece lightweight hyperbolic mirror that can move in all directions along five degrees of freedom. With its 1.2 m diameter, it is the second largest object entirely made of beryllium, after the Space Shuttle doors.

But the secret of the VLT’s uniqueness lies in a tunnel under the platform. Optical interferometry is the winning idea that enables the VLT to achieve yet unsurpassed ultra-high image resolution, by combining the light collected by the main 8.2 m UTs and the 1.8 m ATs. The physics principle behind the idea stems from Young’s 19th century two-slit experiment, and was first applied to radio astronomy, where wavelengths are long. But in the wavelength domains of visible and infrared light, interferometry becomes a much greater challenge. It is interesting to note that the idea of using optical interferometry became a real option for the VLT at the ESO conference held at CERN in 1977 (cf Claus Madsen The Jewel on the Mountain Top Wiley-VCH).

With special permission from the director, and taking advantage of a technical stop to install a new instrument, we are able to visit the interferometry instrumentation room and tunnel under the platform – a privilege granted to few. The final instrument, which collects and analyses all of the light coming from the VLT telescopes after more than 25 reflections, is kept like a jewel in a glass box in the instrumentation room. Nobody can normally get this close to it, because even the turbulence generated by a human presence can disturb its high-precision work. Following the path of the light, we enter the interferometry tunnel. The dominant blue paint of the metal rails and the size of the tunnel trigger once again an inevitable sense of déjà vu. Three small horizontal telescopes travel seamlessly on two sets of four 60 m-long rails – the “delay lines” where the different arrival times of the photons at each telescope are compensated for with extreme precision. These jewels of technology move continuously along the rails, driven without contact by linear motors whose coils interact directly with magnets; no cables are connected to the telescopes on the rails, because the signals are transmitted by laser and power is conveyed by the rails themselves, enabling precise and smooth movement. The system is so sensitive that it can detect, and automatically adapt to, earthquakes, and can measure the vibrations induced in the mountain by the waves of the Pacific Ocean 12 km away. Nowhere else has interferometry reached such complexity or been pushed so far.
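
The quantity the delay lines must track is simple to state: for two telescopes separated by a baseline vector B observing a star in direction ŝ, the light reaches the farther telescope later by an extra path length B · ŝ, which changes continuously as the Earth rotates. A minimal Python sketch of this geometric delay, with made-up example numbers:

    import numpy as np

    def geometric_delay(baseline_m, source_unit_vector):
        # Extra path length (in metres) light travels to the farther
        # telescope; the delay-line carriage must compensate this value
        # continuously as the Earth rotates.
        return np.dot(baseline_m, source_unit_vector)

    # Example: a 100 m east-west baseline, star 30 degrees from the zenith
    baseline = np.array([100.0, 0.0, 0.0])
    source = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
    print(f"path difference: {geometric_delay(baseline, source):.3f} m")  # 50 m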

Delivering science at a high rate

The resolution obtained by the Very Large Telescope Interferometer (VLTI – the name given to the telescopes when they operate in this mode) is equivalent to that of a 100 m-diameter mirror. Moreover, the Auxiliary Telescopes are mounted on tracks and can be moved across the entire telescope platform, allowing the VLTI to reach an even better final resolution. Combining the light of the 4+4 telescopes also provides a light-collecting capacity that no single mirror can match, making the VLT the largest optical instrument in the world.
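
A back-of-the-envelope check of this figure uses the standard diffraction limit θ ≈ λ/B, here with illustrative numbers for infrared light at 2.2 μm and a 100 m baseline:

    import math

    wavelength = 2.2e-6    # metres (infrared K band)
    baseline = 100.0       # metres, comparable to the longest VLTI baselines
    theta = wavelength / baseline                       # radians
    theta_mas = math.degrees(theta) * 3600 * 1000       # milliarcseconds
    print(f"angular resolution ~ {theta_mas:.1f} mas")  # about 4.5 mas

At the same wavelength, a single 8.2 m mirror is limited to roughly 55 milliarcseconds, so the interferometer gains about a factor of 12 in sharpness, in proportion to the baseline.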

Up to 15% of refereed science papers based on ESO data are authored by researchers not involved in the original data generation

Another revolution introduced by the VLT has to do with e-science. The volume of data generated by the new high-capacity VLT science instruments drove the development of end-to-end data models in astronomy, introducing electronic proposal submission and service observing, with both raw and processed science and engineering data fed back to everyone involved. The expansion of data links in Latin America enabled high-speed internet connections spanning continents, and ESO has been able to link its observatories to the data grid. “ESO practises an open-access policy (with regulated, but limited proprietary rights for science proposers) and holds public-survey data as well. Indeed, it functions as a virtual observatory on its own,” says Claus Madsen, senior counsellor for international relations at ESO. Currently, up to 15% of refereed science papers based on ESO data are authored by researchers not involved in the original data generation (e.g. as proposers), and a further 10% of the papers are partly based on archival data. Thanks in part to this open-access policy, the VLT has become the most productive ground-based facility for astronomy operating at visible wavelengths, with only the Hubble Space Telescope generating more scientific papers.

Watch the video at https://cds.cern.ch/record/2128425.

The post The eye that looks at galaxies far, far away appeared first on CERN Courier.
