
Hitting the Books: Why AI won’t be taking our cosmology jobs

The problem with studying the universe around us is that it is simply too big. The stars overhead remain too far away to interact with directly, so we are relegated to testing our theories of galaxy formation against the data we can observe.

Simulating these celestial bodies on computers has proven an immensely useful aid in wrapping our heads around the nature of reality and, as Andrew Pontzen explains in his new book, The Universe in a Box: Simulations and the Quest to Code the Cosmos, recent advances in supercomputing technology are further revolutionizing our capability to model the complexities of the cosmos (not to mention myriad Earth-based challenges) on a smaller scale. In the excerpt below, Pontzen looks at the recent emergence of astronomy-focused AI systems, what they’re capable of accomplishing in the field and why he’s not too worried about losing his job to one.  

[Image: cover of The Universe in a Box, a green wireframe box with an orange scatter of points on a white background. Credit: Riverhead Books]

Adapted from THE UNIVERSE IN A BOX: Simulations and the Quest to Code the Cosmos by Andrew Pontzen published on June 13, 2023 by Riverhead, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2023 Andrew Pontzen.


As a cosmologist, I spend a large fraction of my time working with supercomputers, generating simulations of the universe to compare with data from real telescopes. The goal is to understand the effect of mysterious substances like dark matter, but no human can digest all the data held on the universe, nor all the results from simulations. For that reason, artificial intelligence and machine learning are a key part of cosmologists’ work.

Consider the Vera Rubin Observatory, a giant telescope built atop a Chilean mountain and designed to repeatedly photograph the sky over the coming decade. It will not just build a static picture: it will particularly be searching for objects that move (asteroids and comets), or change brightness (flickering stars, quasars and supernovae), as part of our ongoing campaign to understand the ever-changing cosmos. Machine learning can be trained to spot these objects, allowing them to be studied with other, more specialized telescopes. Similar techniques can even help sift through the changing brightness of vast numbers of stars to find telltale signs of which ones host planets, contributing to the search for life in the universe. Beyond astronomy there is no shortage of scientific applications: Google’s artificial intelligence subsidiary DeepMind, for instance, has built a network that can outperform all known techniques for predicting the shapes of proteins starting from their molecular structure, a crucial and difficult step in understanding many biological processes.
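To make the idea concrete, here is a minimal sketch of the kind of classification being described, written purely for illustration rather than taken from the book or from any real survey pipeline: it simulates some light curves, reduces each to two hand-built summary statistics, and trains a small classifier to flag the variable ones for follow-up. The features, numbers and names are all invented.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
N_EPOCHS = 60  # simulated observations per star

def simulate_light_curve(variable):
    """One star's magnitudes: a flat baseline plus noise, with a sinusoidal term if variable."""
    t = np.linspace(0, 30, N_EPOCHS)                    # days
    mag = 18.0 + rng.normal(0, 0.02, N_EPOCHS)          # baseline + photometric noise
    if variable:
        mag += 0.1 * np.sin(2 * np.pi * t / rng.uniform(1, 5))
    return mag

def features(mag):
    """Two crude summary statistics: overall scatter and peak-to-peak amplitude."""
    return [np.std(mag), mag.max() - mag.min()]

# Labelled training set of simulated stars (1 = variable, 0 = constant).
labels = rng.integers(0, 2, 500)
X = np.array([features(simulate_light_curve(v)) for v in labels])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Score a new, unseen light curve: a prediction of 1 means "flag for follow-up".
new_star = features(simulate_light_curve(variable=True))
print("flag for follow-up?", bool(clf.predict([new_star])[0]))

A real pipeline would use far richer features and carefully calibrated photometry, but the structure is the same: reduce each light curve to numbers, learn a decision rule from labelled examples, and pass the flagged candidates on to more specialized telescopes.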

These examples illustrate why scientific excitement around machine learning has built during this century, and there have been strong claims that we are witnessing a scientific revolution. As far back as 2008, Chris Anderson wrote an article for Wired magazine that declared the scientific method, in which humans propose and test specific hypotheses, obsolete: ‘We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.’

I think this is taking things too far. Machine learning can simplify and improve certain aspects of traditional scientific approaches, especially where processing of complex information is required. Or it can digest text and answer factual questions, as illustrated by systems like ChatGPT. But it cannot entirely supplant scientific reasoning, because that is about the search for an improved understanding of the universe around us. Finding new patterns in data or restating existing facts are only narrow aspects of that search. There is a long way to go before machines can do meaningful science without any human oversight.

To understand the importance of context and understanding in science, consider the case of the OPERA experiment, which in 2011 seemingly determined that neutrinos travel faster than the speed of light. The claim is close to a physics blasphemy, because relativity would have to be rewritten; the speed limit is integral to its formulation. Given the enormous weight of experimental evidence that supports relativity, casting doubt on its foundations is not a step to be taken lightly.

Knowing this, theoretical physicists queued up to dismiss the result, suspecting the neutrinos must actually be traveling slower than the measurements indicated. Yet, no problem with the measurement could be found – until, six months later, OPERA announced that a cable had been loose during their experiment, accounting for the discrepancy. Neutrinos travelled no faster than light; the data suggesting otherwise had been wrong.

Surprising data can lead to revelations under the right circumstances. The planet Neptune was discovered when astronomers noticed something awry with the orbits of the other planets. But where a claim is discrepant with existing theories, it is much more likely that there is a fault with the data; this was the gut feeling that physicists trusted when seeing the OPERA results. It is hard to formalize such a reaction into a simple rule for programming into a computer intelligence, because it is midway between the knowledge-recall and pattern-searching worlds.
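That midway position can be made a little more concrete with a back-of-the-envelope Bayesian calculation. The numbers below are entirely invented, an illustration rather than anything from the book or from OPERA’s analysis, but they capture the logic: when the prior weight behind relativity is overwhelming, an anomalous measurement is far more likely to signal a faulty setup than new physics.

# Invented, order-of-magnitude priors for three explanations of the anomaly.
prior_new_physics = 1e-6     # neutrinos really do beat light (relativity is wrong)
prior_faulty_setup = 1e-2    # an undetected experimental fault (e.g. a loose cable)
prior_fluke = 1 - prior_new_physics - prior_faulty_setup

# How likely is the observed timing anomaly under each hypothesis?
p_anomaly_given_new_physics = 1.0
p_anomaly_given_faulty_setup = 1.0
p_anomaly_given_fluke = 1e-5   # a pure statistical fluctuation this large

p_anomaly = (prior_new_physics * p_anomaly_given_new_physics
             + prior_faulty_setup * p_anomaly_given_faulty_setup
             + prior_fluke * p_anomaly_given_fluke)

print("P(new physics | anomaly)  ~", prior_new_physics * p_anomaly_given_new_physics / p_anomaly)
print("P(faulty setup | anomaly) ~", prior_faulty_setup * p_anomaly_given_faulty_setup / p_anomaly)

With these made-up numbers, a faulty setup comes out about ten thousand times more probable than faster-than-light neutrinos, which is essentially the gut reaction the physicists were relying on without writing anything down.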

The human elements of science will not be replicated by machines unless they can integrate their flexible data processing with a broader corpus of knowledge. There is an explosion of different approaches toward this goal, driven in part by the commercial need for computer intelligences to explain their decisions. In Europe, if a machine makes a decision that impacts you personally – declining your application for a mortgage, maybe, or increasing your insurance premiums, or pulling you aside at an airport – you have a legal right to ask for an explanation. That explanation must necessarily reach outside the narrow world of data in order to connect to a human sense of what is reasonable or unreasonable.

Problematically, it is often not possible to generate a full account of how machine-learning systems reach a particular decision. They use many different pieces of information, combining them in complex ways; the only truly accurate description is to write down the computer code and show the way the machine was trained. That is accurate but not very explanatory. At the other extreme, one might point to an obvious factor that dominated a machine’s decision: you are a lifelong smoker, perhaps, and other lifelong smokers died young, so you have been declined for life insurance. That is a more useful explanation, but might not be very accurate: other smokers with a different employment history and medical record have been accepted, so what precisely is the difference? Explaining decisions in a fruitful way requires a balance between accuracy and comprehensibility.

In the case of physics, using machines to create digestible, accurate explanations which are anchored in existing laws and frameworks is an approach in its infancy. It starts with the same demands as commercial artificial intelligence: the machine must not just point to its decision (that it has found a new supernova, say) but also give a small, digestible amount of information about why it has reached that decision. That way, you can start to understand what it is in the data that has prompted a particular conclusion, and see whether it agrees with your existing ideas and theories of cause and effect. This approach has started to bear fruit, producing simple but useful insights into quantum mechanics, string theory, and (from my own collaborations) cosmology.
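As a cartoon of what "a small, digestible amount of information" might look like, here is a toy sketch (not any production supernova pipeline; the features and data are invented): a linear classifier flags a candidate event and reports which of its inputs pushed the decision, so a human can check those reasons against their own ideas of cause and effect.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["brightness_rise", "colour_change", "host_galaxy_offset"]

# Simulated training set: supernova-like events (label 1) rise faster and change colour more.
y = rng.integers(0, 2, 400)
X = rng.normal(0, 1, (400, 3))
X[y == 1, 0] += 2.0
X[y == 1, 1] += 1.0

clf = LogisticRegression().fit(X, y)

# A new candidate detection, described by the same three features.
candidate = np.array([2.5, 1.2, 0.1])
prob = clf.predict_proba([candidate])[0, 1]

# Coefficient times feature value: the digestible "why" attached to the flag.
contributions = clf.coef_[0] * candidate
for name, c in sorted(zip(feature_names, contributions), key=lambda pair: -abs(pair[1])):
    print(f"{name:20s} contribution {c:+.2f}")
print(f"P(supernova-like) ~ {prob:.2f}")

The per-feature contributions are only a crude explanation, exactly the trade-off between accuracy and comprehensibility described above, but they give a human something concrete to agree or disagree with.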

These applications are still all framed and interpreted by humans. Could we imagine instead having the computer framing its own scientific hypotheses, balancing new data with the weight of existing theories, and going on to explain its discoveries by writing a scholarly paper without any human assistance? This is not Anderson’s vision of the theory-free future of science, but a more exciting, more disruptive and much harder goal: for machines to build and test new theories atop hundreds of years of human insight.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-universe-in-a-box-andrew-pontzen-riverhead-books-153005483.html?src=rss


