
Could You Get Paid To Do AI Safety Research – And Should You?

Calum Chace, Contributor. Opinions expressed by Forbes Contributors are their own. Oct 27, 2022.

[Image: Ross Nordby thinks artificial general intelligence may be much nearer than most people suspect.]

How close are we to artificial general intelligence (AGI), a machine with all the cognitive ability of an adult human? Surveys of AI researchers indicate that professionals think the most likely timeline is a decade or so either side of the middle of this century.

That is not very long, but quite a few well-informed people think it could be even sooner. One such person is Ross Nordby, who explains his thinking on the latest episode of the London Futurist Podcast.

The startling progress of AI

Ross is a programmer with deep expertise in real-time physics for video games.

A few years ago he began experimenting with deep learning, the type of AI which has dominated the field since the Big Bang in 2012, when a team led by Geoff Hinton achieved startling results in an image recognition competition. The AI community realised that deep learning (a revival of an old technique known as artificial neural networks, but with many more layers) had become very powerful, and it went on to create near-miraculous products like modern search, maps, image recognition, and translation services. Ross was so startled by the rapid progress AI was making that he applied to the Long-Term Future Fund for a grant to let him start working on AI safety – the project of making sure that extremely advanced AI will be beneficial to humans, not damaging.

The specific question Ross is working on is whether we can make AI interpretable. The AGI that Ross is interested in is defined by capability: an AI does not need to be conscious or even sentient in order to cause great harm if it is not safe.

Is AI safety a distraction?

Some people object that this subject is abstract and theoretical, and a distraction from more pressing problems like pandemics and global warming. Ross' reply is that all of these are serious problems, and we cannot choose to solve just one or two of them: we have to solve them all. In the case of AI, there are hundreds of thousands of people around the world developing AI systems which could contribute to the development of AGI.

There are only around 300 people worldwide focused on AI alignment. This is a big increase on the number just a few years ago, but Ross insists it is nowhere near enough. Since the Big Bang in 2012, AI has continually surprised researchers.

In 2015, few of them expected machines to beat world champions of the board game Go within a decade, but the very next year DeepMind's system AlphaGo did exactly that. AlphaGo was quickly superseded by systems like AlphaZero and MuZero, which were actually simpler and more flexible. More recently, AlphaTensor framed matrix multiplication as a game, and became superhuman at it.

A second Big Bang in AI in 2017?

There was arguably a second Big Bang in AI in 2017, when a paper was published on a new type of AI called Transformers. No-one expected the new machines to achieve what they have. Earlier this year, a system called Minerva got 50% of the answers correct on a set of maths problems – a higher score than some maths PhDs achieve.

Professional illustrators are now feeling threatened by the output of systems like DALL-E and Stable Diffusion. The conclusion Ross draws is that intelligence may be easier to simulate than we thought. He acknowledges that the machines often do stupid things.

In the jargon, they are “brittle”. But he argues that their failings are less severe than they appear. In part, this is because we continue to increase the amount of compute power available to them, but it is more because we are learning how to use them better.

It turns out these machines can not only process natural language: they can also write computer code, explain why jokes are funny, and do mathematical reasoning. An early version of GPT-3, one of the best-known Transformer systems, exhibited the ability to model a human's mental processes as well as a machine's. This could turn out to be a forerunner of common sense, which many people see as the critical missing component in machine intelligence.

One researcher has found that when a specially configured version of GPT-3 was faced with a problem it could not solve, it was able to write a program which solved the problem for it.

AGI by 2030?

Given all this, Ross now thinks there is a 50% probability that AGI will arrive by 2030, and a higher than 90% probability that it will arrive by 2050. This makes the AI safety problem urgent as well as important.

He accepts it is possible that human intelligence, which of course is still not well understood, may contain elements which cannot be replicated in machines for a long time, or perhaps forever. But he sees this as an unlikely scenario. If Ross is right, what is to be done? If AI research became a political issue, it probably wouldn't help.

It would simply become another part of the culture wars. What is really needed is more work on the hard technical problems. The great majority of people cannot assist with this, so why worry them unnecessarily – and perhaps counter-productively? On the other hand, if more people were convinced of the possibility of near-term AI, and understood what that entails, there might be pressure on politicians to fund a much greater level of work on AI alignment and AI safety.

Even without getting the politicians on board, there are now significant funds available for people to do AI safety research. Not many people have the requisite skills, and they are generally in highly paid jobs. But Ross says that if, like him, they want to play a role in making our futures safer, they could probably obtain funding to spend at least part of their time working in the field without taking a pay cut.

Follow me on Twitter or LinkedIn. Check out my website or some of my other work here.


From: forbes
URL: https://www.forbes.com/sites/calumchace/2022/10/27/could-you-get-paid-to-do-ai-safety-research–and-should-you/
