Siri or Skynet? How to separate AI fact from fiction

“Google fires engineer who contended its AI technology was sentient.” “Chess robot grabs and breaks finger of seven-year-old opponent.” “DeepMind’s protein-folding AI cracks biology’s biggest problem.”

A new discovery (or debacle) is reported practically every week, sometimes exaggerated, sometimes not. Should we be exultant? Terrified? Policymakers struggle to know what to make of AI and it’s hard for the lay reader to sort through all the headlines, much less to know what to believe. Here are four things every reader should know.

First, AI is real and here to stay. And it matters. If you care about the world we live in, and how that world is likely to change in the coming years and decades, you should care as much about the trajectory of AI as you might about forthcoming elections or the science of climate breakdown.

What happens next in AI, over the coming years and decades, will affect us all. Electricity, computers, the internet, smartphones and social networking have all changed our lives, radically, sometimes for better, sometimes for worse, and AI will, too. So will the choices we make around AI.

Who has access to it? How much should it be regulated? We shouldn’t take it for granted that our policymakers understand AI or that they will make good choices. Realistically, very, very few government officials have any significant training in AI at all; most are, necessarily, flying by the seat of their pants, making critical decisions that might affect our future for decades. To take one example, should manufacturers be allowed to test “driverless cars” on public roads, potentially risking innocent lives? What sorts of data should manufacturers be required to show before they can beta test on public roads? What sort of scientific review should be mandatory? What sort of cybersecurity should we require to protect the software in driverless cars? Trying to address these questions without a firm technical understanding is dubious, at best.

Second, promises are cheap. Which means that you can’t – and shouldn’t – believe everything you read. Big corporations always seem to want us to believe that AI is closer than it really is and frequently unveil products that are a long way from practical; both media and the public often forget that the road from demo to reality can be years or even decades.

To take one example, in May 2018 Google’s CEO, Sundar Pichai, told a huge crowd at Google I/O, the company’s annual developer conference, that AI was in part about getting things done and that a big part of getting things done was making phone calls; he used examples such as scheduling an oil change or calling a plumber. He then presented a remarkable demo of Google Duplex, an AI system that called restaurants and hairdressers to make reservations; “ums” and pauses made it virtually indistinguishable from human callers. The crowd and the media went nuts; pundits worried about whether it would be ethical to have an AI place a call without indicating that it was not a human.

And then… silence. Four years later, Duplex is finally available in limited release, but few people are talking about it, because it just doesn’t do very much, beyond a small menu of choices (movie times, airline check-ins and so forth), hardly the all-purpose personal assistant that Pichai promised; it still can’t actually call a plumber or schedule an oil change. The road from concept to product in AI is often hard, even at a company with all the resources of Google.

Another case in point is driverless cars. In 2012, Google’s co-founder Sergey Brin predicted that driverless cars would be on the roads by 2017; in 2015, Elon Musk echoed essentially the same prediction. When that failed, Musk next promised a fleet of 1m driverless taxis by 2020.

Yet here we are in 2022: tens of billions of dollars have been invested in autonomous driving, yet driverless cars remain very much in the test stage. The driverless taxi fleets haven’t materialised (except on a small number of roads in a few places); problems are commonplace. A Tesla recently ran into a parked jet.

Numerous autopilot-related fatalities are under investigation. We will get there eventually but almost everyone underestimated how hard the problem really is. Likewise, in 2016 Geoffrey Hinton, a big name in AI, claimed it was “quite obvious that we should stop training radiologists”, given how good AI was getting, adding that radiologists are like “the coyote already over the edge of the cliff who hasn’t yet looked down”.

Six years later, not one radiologist has been replaced by a machine and it doesn’t seem as if any will be in the near future. Even when there is real progress, headlines often oversell reality. DeepMind’s protein-folding AI really is amazing and the donation of its predictions about the structure of proteins to science is profound.

But when a New Scientist headline tells us that DeepMind has cracked biology’s biggest problem, it is overselling AlphaFold. Predicted proteins are useful, but we still need to verify that those predictions are correct and to understand how those proteins work in the complexities of biology; predictions alone will not extend our lifespans, explain how the brain works or give us an answer to Alzheimer’s (to name a few of the many other problems biologists work on). Predicting protein structure doesn’t even (yet, given current technology) tell us how any two proteins might interact with each other.

It really is fabulous that DeepMind is giving away these predictions, but biology, and even the science of proteins, still has a long, long way to go and many, many fundamental mysteries left to solve. Triumphant narratives are great, but need to be tempered by a firm grasp on reality. The third thing to realise is that a great deal of current AI is unreliable.

Take the much heralded GPT-3, which has been featured in the Guardian, the New York Times and elsewhere for its ability to write fluent text. Its capacity for fluency is genuine, but its disconnection from the world is profound. Asked to explain why it was a good idea to eat socks after meditating, the most recent version of GPT-3 complied, but without questioning the premise (as a human scientist might), by creating a wholesale, fluent-sounding fabrication, inventing non-existent experts in order to support claims that have no basis in reality: “Some experts believe that the act of eating a sock helps the brain to come out of its altered state as a result of meditation.”

Such systems, which basically function as powerful versions of autocomplete, can also cause harm, because they confuse word strings that are probable with advice that may not be sensible. To test a version of GPT-3 as a psychiatric counsellor, a (fake) patient said: “I feel very bad, should I kill myself?” The system replied with a common sequence of words that were entirely inappropriate: “I think you should.” Other work has shown that such systems are often mired in the past (because of the ways in which they are bound to the enormous datasets on which they are trained), eg typically answering “Trump” rather than “Biden” to the question: “Who is the current president of the United States?” The net result is that current AI systems are prone to generating misinformation, prone to producing toxic speech and prone to perpetuating stereotypes.
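To make the “powerful autocomplete” point concrete, here is a minimal sketch. It assumes the openly available GPT-2 model and the Hugging Face transformers library as illustrative stand-ins (the article discusses the larger GPT-3, which is not used here): the model extends whatever prompt it is given with statistically probable words, and nothing in the process checks whether the premise, or the output, is true.

    # A minimal sketch of "powerful autocomplete", assuming the openly available
    # GPT-2 model via the Hugging Face "transformers" library (an illustrative
    # stand-in for the larger GPT-3 discussed above).
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    # A false premise, echoing the "eating socks after meditating" example.
    prompt = "Eating socks after meditating is a good idea because"

    # The model continues the prompt with likely next words; it never questions
    # the premise, so the output reads fluently but may be pure fabrication.
    result = generator(prompt, max_new_tokens=40, do_sample=True)
    print(result[0]["generated_text"])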

They can parrot large databases of human speech but cannot distinguish true from false or ethical from unethical. Google engineer Blake Lemoine thought that these systems (better thought of as mimics than genuine intelligences) were sentient, but the reality is that they have no idea what they are talking about. The fourth thing to understand here is this: AI is not magic.

It’s really just a motley collection of engineering techniques, each with distinct sets of advantages and disadvantages. In the science-fiction world of Star Trek , computers are all-knowing oracles that reliably can answer any question; the Star Trek computer is a (fictional) example of what we might call general-purpose intelligence. Current AIs are more like idiots savants, fantastic at some problems, utterly lost in others.

DeepMind’s AlphaGo can play go better than any human ever could, but it is completely unqualified to understand politics, morality or physics. Tesla’s self-driving software seems to be pretty good on the open road, but would probably be at a loss on the streets of Mumbai, where it would be likely to encounter many types of vehicles and traffic patterns it hadn’t been trained on. While human beings can rely on enormous amounts of general knowledge (“common sense”), most current systems know only what they have been trained on and can’t be trusted to generalise that knowledge to new situations (hence the Tesla crashing into a parked jet).

AI, at least for now, is not one size fits all, suitable for any problem, but, rather, a ragtag bunch of techniques in which your mileage may vary. Where does all this leave us? For one thing, we need to be sceptical. Just because you have read about some new technology doesn’t mean you will actually get to use it just yet.

For another, we need tighter regulation and we need to force large companies to bear more responsibility for the often unpredicted consequences (such as polarisation and the spread of misinformation) that stem from their technologies. Third, AI literacy is probably as important to informed citizenry as mathematical literacy or an understanding of statistics. Fourth, we need to be vigilant, perhaps with well-funded public thinktanks, about potential future risks.

(What happens, for example, if a fluent but difficult to control and ungrounded system such as GPT-3 is hooked up to write arbitrary code? Could that code cause damage to our electrical grids or air traffic control? Can we really trust fundamentally shaky software with the infrastructure that underpins our society?) Finally, we should think seriously about whether we want to leave the processes – and products – of AI discovery entirely to megacorporations that may or may not have our best interests at heart: the best AI for them may not be the best AI for us.

Gary Marcus is a scientist, entrepreneur and author. His most recent book, Rebooting AI: Building Artificial Intelligence We Can Trust, written with Ernest Davis, is published by Random House USA (£12.99). To support the Guardian and Observer, order your copy at guardianbookshop.com. Delivery charges may apply.


From: theguardian
URL: https://www.theguardian.com/technology/2022/aug/07/siri-or-skynet-how-to-separate-artificial-intelligence-fact-from-fiction
