
The Trouble With AI: Human Intelligence


Gil Press, Senior Contributor. Opinions expressed by Forbes Contributors are their own. I write about technology, entrepreneurs and innovation.

Sep 27, 2022, 09:00am EDT

The trouble with AI is not that it is going to rule us, that its ethics are questionable, that it can be used irresponsibly… The trouble with AI is that no one knows what “AI” actually means. The trouble with AI is that it lacks a clear definition, that it suffers from the unique nature of its creators’ intelligence and the fuzzy language they use. “Intelligent machines” will not have our imagination, our creativity, our shared experiences and traditions.

A definition, according to leading lexicographers (an occupation Samuel Johnson defined as “a writer of dictionaries; a harmless drudge that busies himself in tracing the original, and detailing the signification of words”), tells us the meaning of a word, providing us with a “precise statement of the essential nature of a thing.” But the Oxford English Dictionary (OED) also notes an “obsolete and rare” meaning, namely “The setting of bounds or limits; limitation, restriction.” Setting the bounds is especially important when you are discussing a new concept, a new technology or tool, a new stage in a certain evolution.

What is defined must be clearly distinguishable from similar, related, or associated concepts, technologies, or stages. Specifically in the case of AI, since it involves the use of computer hardware and software, a clear definition needs to distinguish it from other computer-based technologies and tools. Compounding the matter (and the resulting confusion) is that “AI” was defined very early in the history of modern computing.

John McCarthy, who coined the term in 1955, provided this definition: “For the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.” The first rule of definitions produced by human intelligence is “use poorly defined and understood concepts in your definition.” For example, “intelligent” or “intelligence.”

The OED (to stay with the same authoritative lexicographers) says that intelligence is “The faculty of understanding” and “The action or fact of mentally apprehending something.” But a different version of the OED (Concise, 3rd edition) states intelligence to be “the ability to acquire and apply knowledge and skills.” An intelligent person may conclude that there is some confusion regarding what intelligence actually is.

Indeed, a compilation of about 70 different definitions of “intelligence” was published a few years ago, demonstrating that many intelligent people, beyond just lexicographers, cannot agree on what precisely makes them “intelligent.” At the time McCarthy defined AI, computers were often called “thinking machines.” This fervent conviction, the unflagging belief that human intelligence can recreate itself in a machine, has persisted for a long time.

Already in the 1820s, for example, the Difference Engine—a mechanical calculator—was referred to by Charles Babbage’s contemporaries as his “thinking machine.” More than a century later, computer software pioneer Edmund Berkeley wrote in his 1949 book Giant Brains: Or Machines That Think: “These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.”

And so on, to today’s gullible media, over-promising AI researchers, highly intelligent scientists and commentators, and certain very rich people, all assuming that the human brain is nothing but a “meat machine” (per Marvin Minsky) and that calculations and similar computer operations are tantamount to thinking and intelligence.

One of the most prominent recent examples of the failure of the people who do AI, who write and talk about AI, and who think about AI, to clearly define the essence of what they do or talk about for a living is the 2016 report summarizing the two-year effort by a large group of very prominent AI researchers to establish the baseline for The One Hundred Year Study on Artificial Intelligence, “a long-term investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society.” To begin with, the authors of the report (from Stanford, MIT, Harvard, Microsoft, etc.) inform us that not having a definition of what they study is actually a good thing: “Curiously, the lack of a precise, universally accepted definition of AI probably has helped the field to grow, blossom, and advance at an ever-accelerating pace.” Still, they offer the definition used by Nils Nilsson: “Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.” This definition, like most attempts at defining AI, follows McCarthy’s, but to its great credit also tries to define “intelligence.”

“Like Nilsson,” says the AI100 committee, “the Study Panel takes a broad view that intelligence lies on a multidimensional spectrum. According to this view, the difference between an arithmetic calculator and a human brain is not one of kind, but of scale, speed, degree of autonomy, and generality… Although our broad interpretation places the calculator within the intelligence spectrum, such simple devices bear little resemblance to today’s AI. The frontier of AI has moved far ahead and functions of the calculator are only one among the millions that today’s smartphones can perform.

AI developers now work on improving, generalizing, and scaling up the intelligence currently found on smartphones. In fact, the field of AI is a continual endeavor to push forward the frontier of machine intelligence. Ironically, AI suffers the perennial fate of losing claim to its acquisitions, which eventually and inevitably get pulled inside the frontier, a repeating pattern known as the ‘AI effect’ or the ‘odd paradox’—AI brings a new technology into the common fold, people become accustomed to this technology, it stops being considered AI, and newer technology emerges.

The same pattern will continue in the future. AI does not ‘deliver’ a life-changing product as a bolt from the blue. Rather, AI technologies continue to get better in a continual, incremental way.”

So why call it “AI” and not “modern computing”? In 1949 (and 1955), the modern computer or the “thinking machine” was a giant calculator. By 2016 it had become a smartphone. To paraphrase the statement above, “the field of computer science is a continual endeavor to push forward the frontier of what computers can do.”

Why call it AI? Especially if you define it in such a way as to include the basic function of the earliest computers? Wasn’t Herbert Simon more on the mark when he suggested calling what he and McCarthy and Minsky and others were doing in the 1950s “complex information processing”? The AI100 panel may have grasped the need to offer, after all, something that would distinguish “AI” from the rest of computer science and/or explain what is so unique about the current stage in its evolution that it requires a new label. The panel added: “An operational definition—AI can also be defined by what AI researchers do.” Which brings us to the second rule of definitions produced by human intelligence: “the more circular your definition, the more intelligent it sounds.”

Not only is this operational definition circular, it is also misleading. What most “AI researchers” do today is very different from what McCarthy and other mainstream AI researchers were doing for about four decades. What came to be known as “Symbolic AI” was based on the belief that it is possible to come up with rules and algorithms that can be translated into computer programs that would mimic human cognitive functions and actions such as problem-solving.
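To make the contrast concrete, here is a minimal, purely illustrative sketch (in Python; the rules and facts are hypothetical, not taken from any real system of that era) of the symbolic, rule-based style: behavior comes from hand-written rules applied by forward chaining, not from statistics learned from data.

# Toy illustration of the symbolic, rule-based approach described above.
# The rules and facts are hypothetical examples, not a real expert system.
RULES = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "recommend_doctor"),
]

def forward_chain(facts):
    """Repeatedly apply rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
# -> adds 'suspect_flu' and then 'recommend_doctor' by chaining the rules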

Ten years ago, a sophisticated statistical analysis technique that has been called “deep learning” since the late 1980s decisively outperformed other approaches in image identification. The “artificial neural networks” at the heart of deep learning were first demonstrated in the 1950s, but a decade ago they advanced dramatically due to two factors: the availability of “big data” (in the case of image identification, the huge amounts of labeled images available on the Web) and the availability of vastly increased computing power in the form of Graphics Processing Units, or GPUs. Today’s “AI” is simply the most recent stage of computer-based learning from data.
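To make “learning from data” concrete, here is a minimal sketch (assuming Python with scikit-learn installed, and using its small built-in digits dataset as a stand-in for web-scale labeled images) of the kind of statistical image classifier described above:

# A small neural-network classifier trained on labeled images:
# the same statistical machinery, at toy scale, that big data and GPUs
# made practical for image identification a decade ago.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)            # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    X / 16.0, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)                      # "learning from data"
print("test accuracy:", clf.score(X_test, y_test))

The point of the sketch is the author’s point: nothing here is more than fitting a statistical model to labeled data.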

The evolution of computer technology over the last 75 years can be divided into two major eras: the first focused mainly on improving the speed of computers, and the second (starting in the 1970s) on storing, organizing, and analyzing the data that computers, those perennial observers, collect. The two foci (or main preoccupations) have overlapped, of course, with statistical thinking and data analysis already present in the operations of the ENIAC, the first fully electronic computer operating in the U.S. (with the invention of Monte Carlo simulation by Stan Ulam and John von Neumann). “Computational statistics,” “machine learning,” “predictive analytics,” and “data science” are some of the labels that have been given over the years to the marriage of computing and statistical analysis. “Deep Neural Networks” and “AI” are the latest.
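The Monte Carlo method mentioned above is a reminder of how old the “statistics on a computer” idea is. Here is a minimal sketch of it (the standard pi-estimation toy, not ENIAC’s actual calculations): estimate a quantity by drawing random samples and counting.

# Monte Carlo estimation of pi: sample random points in the unit square
# and count the fraction that lands inside the quarter circle.
import random

def estimate_pi(samples=1_000_000):
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:         # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / samples        # quarter-circle area times 4

print(estimate_pi())                     # ~3.14, improving as samples grow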

Calling identifying patterns, finding correlations, and classifying data “artificial intelligence” is wrong, misleading, and dangerous. It leads to misplaced capital allocation, misplaced government action, and misplaced fears and excitement. The OECD released a report analyzing VC investments in 8,300 AI firms worldwide between 2012 and 2020.

The OECD “considers an ‘AI start-up’ to be a private company that researches and delivers all or part of an AI system, or products and services that rely heavily on AI systems.” AI is what AI companies do. And everybody does AI today.

If you are a budding entrepreneur, why not tell VCs that you are “doing AI,” since no one knows for sure what “AI” is? The result has been a funding bubble, similar to the “dot-com boom,” which was also driven by a poorly defined buzzword, “the new economy.” Another boom that the misplaced fascination with “AI” has created is an irresponsible government regulation boom—more on this in my next post. And I have written many times in the past about AI-induced anxiety, delusions, and fantasies.

Here’s my 2015 prediction: “I’m sure we will have smart machines that could perform special tasks, augmenting our capabilities and improving our lives. That many jobs will be taken over by algorithms and robots, and many others will be created because of them, as we have seen over the last half-century. And that bad people will use these intelligent machines to harm other people and that we will make many mistakes relying too much on them and not thinking about all the consequences of what we are developing.

But intelligent machines will not have a mind of their own. Intelligent machines will not have our imagination, our creativity, our unique human culture. Intelligent machines will not take over because they will never be human.”

I also wrote before about the many intelligent people in modern times who want to play God, to recreate their intelligence in a machine. This was not entirely accurate—they are not trying to recreate human intelligence; they are promising to create a better intelligence, one free of all human biases, foibles, fuzzy language, and poorly defined concepts. Anyone using the terms “Artificial General Intelligence (AGI)” or “Human-Level Intelligence” actually means “Better Intelligence” or “Never-Failing Intelligence” or “Absolutely Reasonable and Rational Intelligence” or “The Most Moral Intelligence.”

We would not have modern computing and its most recent “AI” phase without human creativity, imagination, shared experiences and traditions, language and stored knowledge, for better and for worse. We will never have AGI because our human intelligence simply cannot define (and write the code for) an idealized, most rational and moral, better human.



From: forbes
URL: https://www.forbes.com/sites/gilpress/2022/09/27/the-trouble-with-ai-human-intelligence/
