
4 dangers that most worry AI pioneer Geoffrey Hinton

AP | Updated: May 4, 2023

San Francisco: Geoffrey Hinton, an award-winning computer scientist known as the “godfather of artificial intelligence,” is having some serious second thoughts about the fruits of his labours. Hinton helped pioneer the AI technologies critical to a new generation of highly capable chatbots such as ChatGPT. But in recent interviews, he says he recently resigned from a high-profile job at Google specifically to share his concerns that unchecked AI development could pose a danger to humanity.

“I have suddenly switched my views on whether these things are going to be more intelligent than us,” he said in an interview with MIT Technology Review. “I think they’re very close to it now and they will be much more intelligent than us in the future…. How do we survive that?” Hinton is not alone in his concerns.

Shortly after the Microsoft-backed startup OpenAI released its latest AI model, GPT-4, in March, more than 1,000 researchers and technologists signed a letter calling for a six-month pause on AI development because, they said, it poses “profound risks to society and humanity.” Here’s a look at Hinton’s biggest concerns about the future of AI ... and humanity.

IT’S ALL ABOUT THE NEURAL NETWORKS

Our human brains can solve calculus equations, drive cars and keep track of the characters in “Succession” thanks to their native talent for organizing and storing information and reasoning out solutions to thorny problems.

The roughly 86 billion neurons packed into our skulls — and, more important, the 100 trillion connections those neurons forge among themselves — make that possible. By contrast, the technology underlying ChatGPT features between 500 billion and a trillion connections, Hinton said in the interview. While that would seem to put it at a major disadvantage relative to us, Hinton notes that GPT-4, the latest AI model from OpenAI, knows “hundreds of times more” than any single human.

Maybe, he suggests, it has a “much better learning algorithm” than we do, making it more efficient at cognitive tasks.

AI MAY ALREADY BE SMARTER THAN US

Researchers have long noted that artificial neural networks take much more time to absorb and apply new knowledge than people do, since training them requires tremendous amounts of both energy and data. That’s no longer the case, Hinton argues, noting that systems like GPT-4 can learn new things very quickly once properly trained by researchers.

That’s not unlike the way a trained professional physicist can wrap her brain around new experimental findings much more quickly than a typical high school science student could. That leads Hinton to the conclusion that AI systems might already be outsmarting us. Not only can AI systems learn things faster, he notes, they can also share copies of their knowledge with each other almost instantly.

“It’s a completely different form of intelligence,” he told the publication. “A new and better form of intelligence.”

WARS AND RUMORS OF WARS

What would smarter-than-human AI systems do? One unnerving possibility is that malicious individuals, groups or nation-states might simply co-opt them to further their own ends.

Hinton is particularly concerned that these tools could be trained to sway elections and even to wage wars. Election misinformation spread via AI chatbots, for instance, could be the future version of election misinformation spread via Facebook and other social media platforms. And that might just be the beginning.

“Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians,” Hinton said in the article. “He wouldn’t hesitate.”

A SHORTAGE OF SOLUTIONS

What’s not clear is how anyone would stop a power like Russia from using AI technology to dominate its neighbours or its own citizens.

Hinton suggests that a global agreement similar to the 1997 Chemical Weapons Convention might be a good first step toward establishing international rules against weaponised AI. It’s worth noting, though, that the chemical weapons compact did not stop what investigators found were likely Syrian attacks using chlorine gas and the nerve agent sarin against civilians in 2017 and 2018 during the nation’s bloody civil war.

AP



From: orissapost
URL: https://www.orissapost.com/4-dangers-that-most-worry-ai-pioneer-geoffrey-hinton/
