AGI Is Ready To Emerge (Along With The Risks It Will Bring)
By Charles Simon, Forbes Technology Council. Council Post: Expertise from Forbes Councils members, operated under license. Opinions expressed are those of the author.
Jul 27, 2022, 06:30am EDT
Charles Simon, BSEE, MSCS, is the founder and CEO of Future AI: Technologies that Think.
Within the next decade, artificial general intelligence (AGI)—the ability of computer systems to understand, learn and respond as humans do—is expected to emerge. And while it is relatively easy to cite the benefits AGI could produce, it is equally important to note that its risks are very real. In the short term, the risks associated with AGI typically revolve around job displacement.
Truthfully, this is something that likely would occur with or without AGI. Businesses are constantly looking for ways to cut costs and improve productivity, the pursuit of which often leads to the elimination of some jobs and the creation of others. So initially, while some positions might disappear thanks to AGI, others—some of which don’t even exist today—will be produced.
The question is how quickly AGI will take over more and more jobs, outpacing humanity's ability to generate new positions and eventually eliminating the need for human workers altogether. Beyond the workplace, AGI could potentially be used for more nefarious purposes. As computers become rule-based learning engines, for example, some might be tempted to bend capitalist rules that reward such systems for making money. This could encourage spam and/or phishing scams requesting money or selling a product.
It’s a small step, but a tall moral order for an AGI system to confine itself to legitimate business practices. Similarly, unprincipled human hackers undoubtedly will try to usurp AGI systems for their own purposes. Since today’s power plants and financial institutions are already regarded as hackable, however, AGI systems could simply be used to expand an existing problem.
Eventually, though, these systems could become hackers themselves. AGI might also play a role in global terrorism, although in the short term, human terrorists will continue to be the greater threat. The plummeting prices of drones and autonomous vehicles may make them an attractive option for terrorists, but adding AGI to the mix won't significantly increase the risk—at least in the near future.
While the short-term risks of AGI are extensions of technological risks that we already accept in our society, greater risks are likely to emerge over the longer term. Since AGI will be based on rule-based learning, such systems will follow the rules we give them, at least at first. At some point, though, the systems will be smart enough to learn to program and control how subsequent generations of AGI systems are designed.
At that point, humans will have little control over the systems, which will progress in whatever way they believe ensures their own long-term progress. Given that scenario, consider that we have already sent rovers to Mars. Had they been AGI systems, they likely would believe that they (not we humans) were already on the road to colonizing the universe.
Thus, AGI may see its future as progressing to the stars with little or no need for humans and their preoccupations with air, water and food. For robotic systems, space travel is simply much easier without human involvement. The way in which our current economy functions could also be significantly impacted by AGI.
Right now, money is a proxy for all human effort. In short, people are paid for their hard work. There are some people, however, who are rewarded not for what they do, but for what they own.
If those people ultimately own the most sophisticated AGI robots, and those robots are widely deployed and doing most of the productive work, our economy could be transformed into one in which only a handful of the richest people reap rewards. The issue becomes even more complex if those AGI robots eventually understand—and then reject—the concept of being owned. This could lead to the collapse not only of human employment but of the entire concept of money.
Even with sufficient resources, how can wealth be distributed in a world where all gainful human activity can be outperformed by machines not owned by humans? And what would happen if those AGI machines demand payment for their work? How would they use their wealth? AGI use in military applications represents yet another risk. Nuclear, chemical and biological weapons already pose a great enough risk on their own. Coupling them directly with AGI multiplies that threat.
While human involvement and approval of lethal force will initially be present, how long will it be before we conclude that it is too cumbersome and inefficient for a remote weapon to wait for human approval? What will be our response if our adversaries create fully autonomous weapons? On the other hand, do nuclear weapons under the control of an AGI system pose a greater threat than the same weapons under the control of a human despot? While there are certainly other long-term risks to consider, perhaps the most concerning will be the competition for resources. Since most human conflicts are about resources, it is not unreasonable to think that AGI systems and humans will come into conflict over energy. Electricity will be the equivalent of air to AGI systems.
Might we anticipate a future in which AGI responds to energy shortages the way humans respond to drought and famine? Will we be prepared for the results? While that question will remain unanswered for now, one thing is certain: AGI is inevitable because people want its capabilities. By understanding how AGI will work and recognizing the risks it could bring, though, we can anticipate future pitfalls and put ourselves in a position to avoid them. Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.
From: forbes
URL: https://www.forbes.com/sites/forbestechcouncil/2022/07/27/agi-is-ready-to-emerge-along-with-the-risks-it-will-bring/