History Of AI In 33 Breakthroughs: The First ‘Thinking Machine’
By Gil Press, Senior Contributor
Oct 30, 2022

Many histories of AI start with Homer and his description of how the crippled blacksmith god Hephaestus fashioned for himself self-propelled tripods on wheels and “golden” assistants, “in appearance like living young women” who “from the immortal gods learned how to do things.” I prefer to stay as close as possible to the notion of “artificial intelligence” in the sense of intelligent humans actually creating, not just imagining, tools, mechanisms, and concepts for assisting our cognitive processes or automating (and imitating) them.

[Photo: “Machines Can’t Think,” United States, circa 1943 (Buyenlarge/Getty Images)]

In 1308, Catalan poet and theologian Ramon Llull completed Ars generalis ultima (The Ultimate General Art), further perfecting his method of using paper-based mechanical means to create new knowledge from combinations of concepts.
Llull devised a system of thought that he wanted to impart to others to assist them in theological debates, among other intellectual pursuits. He wanted to create a universal language using a logical combination of terms. The tool Llull created consisted of seven paper discs or circles that listed concepts (e.g., attributes of God such as goodness, greatness, eternity, power, wisdom, love, virtue, truth, and glory) and could be rotated to create combinations of concepts and so produce answers to theological questions. Llull’s system was based on the belief that only a limited number of undeniable truths exist in all fields of knowledge, and that by studying all combinations of these elementary truths, humankind could attain the ultimate truth.
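To make the combinatorial idea concrete, here is a minimal, purely illustrative sketch in Python. The attribute list is the article’s own example above, not a reconstruction of Llull’s actual figures; it simply shows what rotating two discs against each other amounts to, namely enumerating every pairing of concepts.

```python
# Illustrative sketch only: enumerating concept pairings in the spirit of
# Llull's rotating paper discs. The attribute list is the article's example,
# not Llull's actual figures.
from itertools import combinations

attributes = ["goodness", "greatness", "eternity", "power", "wisdom",
              "love", "virtue", "truth", "glory"]

# Rotating one disc against another is, in effect, stepping through every
# pair of concepts; each pair becomes a prompt for a theological question.
for first, second in combinations(attributes, 2):
    print(f"Consider the relation between {first} and {second}")
```

Nine attributes already yield 36 pairs and 84 triples, which suggests why Llull believed that exhaustive combination could, in principle, sweep an entire field of questions.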
His art could be used to “banish all erroneous opinions” and to arrive at “true intellectual certitude removed from any doubt.”

In early 1666, 19-year-old Gottfried Leibniz wrote De Arte Combinatoria (On the Combinatorial Art), an extended version of his doctoral dissertation in philosophy. Influenced by the works of previous philosophers, including Ramon Llull, Leibniz proposed an alphabet of human thought.
All concepts are nothing but combinations of a relatively small number of simple concepts, just as words are combinations of letters, he argued. All truths may be expressed as appropriate combinations of concepts, which in turn can be decomposed into simple ideas. Leibniz wrote: “Thomas Hobbes, everywhere a profound examiner of principles, rightly stated that everything done by our mind is a computation.”
He believed such calculations could resolve differences of opinion: “The only way to rectify our reasonings is to make them as tangible as those of the mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate, without further ado, to see who is right” (The Art of Discovery, 1685). In addition to settling disputes, the combinatorial art could provide the means to compose new ideas and inventions.

“Thinking machines” has been the common modern portrayal of the new, mechanical incarnations of these early descriptions of cognitive aids.
Already in the 1820s, for example, the Difference Engine—a mechanical calculator—was referred to by Charles Babbage’s contemporaries as his “thinking machine.”

[Photo: “Wala,” the thinking data file, able to find out certain data groups by operating several levers, 1932 (Imagno/Getty Images)]

More than a century later, computer software pioneer Edmund Berkeley wrote in his 1949 book Giant Brains: Or Machines That Think: “These machines are similar to what a brain would be if it were made of hardware and wire instead of flesh and nerves… A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.”
And so on, to today’s gullible media, over-promising AI researchers, highly intelligent scientists and commentators, and certain very rich people, all assuming that the human brain is nothing but a “meat machine” (per AI pioneer Marvin Minsky) and that calculations and similar computer operations are tantamount to thinking and intelligence. In contrast, Leibniz—and Llull before him—were anti-materialists. Leibniz rejected the notion that perception and consciousness can be given mechanical or physical explanations.
Perception and consciousness cannot possibly be explained mechanically, he argued, and therefore cannot be physical processes. In Monadology (1714), Leibniz wrote: “One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill.
Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything by which to explain a perception. Thus it is in the simple substance, and not in the composite or in the machine, that one must look for perception.” For Leibniz, no matter how complex the inner workings of a “thinking machine,” nothing about them reveals that one is observing the inner workings of a conscious being.
Two and a half centuries later, the founders of the new discipline of “artificial intelligence,” materialists all, assumed that the human brain is a machine and could therefore be replicated with physical components, with computer hardware and software. They believed that they were well on their way to finding the basic computations, the universal language of “intelligence,” and to creating a machine that would think, decide, and act just like humans, or even better. This is when being rational was replaced by being digital.
The founding document of the discipline, the 1955 proposal for the first AI workshop, stated that it was based on “the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Twenty years later, Herbert Simon and Allen Newell, in their Turing Award lecture, formalized the field’s goals and convictions as the Physical Symbol System Hypothesis: “A physical symbol system has the necessary and sufficient means for general intelligent action.” Soon thereafter, however, AI started to shift paradigms, from symbolism to connectionism: from defining (and programming) every aspect of learning and thinking to statistical inference, finding connections or correlations that enable learning from observations or experience.
With the advent of the Web and the creation of lots and lots of data in which to find correlations, buttressed by advances in the power of computers and the invention of sophisticated statistical analysis methods, we have arrived at the triumph of “deep learning” and its contribution to the very large improvements in computers’ ability to perform tasks such as identifying images, answering questions, and analyzing text. Recently, some new tweaks to deep learning have produced AI programs that can write (“this stuff is like… alchemy!” said one of the creators of the creative machine), engage in conversations (“I felt the ground shift under my feet … increasingly felt like I was talking to something intelligent,” said another AI creator), and create images, even videos, from text input. In 1726, Jonathan Swift published Gulliver’s Travels, in which he described, possibly as a parody of Llull’s system, a device that generates random permutations of word sets.
The professor in charge of this invention “showed me several volumes in large Folio already collected, of broken sentences, which he intended to piece together, and out of those rich materials to give the world a complete body of all Arts and Sciences.” There you have it, brute force deep learning in the 18th century. Over a decade ago, when the new-old discipline of “data science” emerged, bringing to the fore the sophisticated statistical analysis that is the foundation of deep learning, some observers and participants reminded us that “correlation does not imply causation.”
A Swift today would probably add: “Correlation does not imply creativity.”
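As a closing aside, the “engine” Swift lampooned is easy to caricature in a few lines of code. This is a purely illustrative sketch (the vocabulary is invented, not drawn from the novel) of what blind, brute-force generation of word permutations looks like:

```python
# Illustrative sketch only: blind generation of word permutations in the
# spirit of the engine Swift describes, not anything from the novel itself.
import random

vocabulary = ["machine", "truth", "wisdom", "calculates", "imagines",
              "every", "ancient", "question", "therefore", "glory"]

random.seed(1726)  # the year Gulliver's Travels was published

# Crank the handle a few times: shuffle the words and read off a "sentence".
for _ in range(3):
    random.shuffle(vocabulary)
    print(" ".join(vocabulary[:6]))
```

However fast the handle is cranked, recombination alone produces word salad, which is precisely the point of the satire.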
From: forbes
URL: https://www.forbes.com/sites/gilpress/2022/10/30/history-of-ai-in-33-breakthroughs-the-first-thinking-machine/