
Celebrating 80 Years Of Hallucinating About Artificial Intelligence

Artificial intelligence, especially generative AI, dominated all things tech in 2023, generating a banner year on Wall Street, new applications and regulations, doomsday scenarios and bated-breath expectations. In short, a flood of hallucinations, humanity’s favorite mode of escapism, especially popular among people who can afford to indulge in imagining a different, better, more intelligible world of their own creation.

The particular genre of fabricated, distorted, invented reality that claims to be based on “science” got its start eighty years ago this month. Neurophysiologist Warren S. McCulloch and logician Walter Pitts published “A Logical Calculus of the Ideas Immanent in Nervous Activity” in the December 1943 issue of The Bulletin of Mathematical Biophysics.

It later became the inspiration for the development of computer-based “artificial neural networks” and their popular description as “mimicking the brain.” Scientists today know much more about the brain than they did in 1943, but “we’re still in the dark about how it works,” according to the Allen Institute. But writing a paper presenting, according to McCulloch’s biographer Tara Abraham, “a theoretical account of the logical relations between idealized neurons, with purported implications for how the central nervous system functioned as a whole,” did not require any empirical knowledge.

As a matter of fact, “McCulloch was pulled in opposite directions—between the messy, subjective, clinically motivated world of the laboratory and the abstract, pencil-and-paper world of mathematical biology,” explains Abraham. “McCulloch’s and Pitts’ goal was to move beyond empirical evidence and represent the functional relationships between neurons in terms of Boolean logic: to embody reasoning in the physiology of the brain.” McCulloch and Pitts needed to make “certain theoretical presuppositions,” specifically that “the activity of a neuron is an all-or-none process and that the structure of the net does not change with time.

While McCulloch and Pitts admitted that this was an abstraction, they emphasized that their goal was not to present a factual description of neurons, but rather to design ‘fictitious nets’ composed of neurons whose connections and thresholds are unaltered.” Thus was born a fictional account of how the brain works, with the presumed on-or-off activity of neurons standing in for true-or-false propositions. McCulloch and Pitts were influenced by and working in the nascent field of mathematical biology, of which The Bulletin of Mathematical Biophysics was a founding publication.
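To see how little machinery the 1943 abstraction requires, here is a minimal Python sketch (my illustration, not code from the paper): a McCulloch-Pitts unit sums its binary inputs and fires, all or none, exactly when the sum reaches a fixed threshold, which already suffices to realize Boolean connectives such as AND and OR.

```python
def mp_neuron(inputs, threshold):
    """A McCulloch-Pitts unit: all-or-none activity.

    Fires (returns 1) if and only if enough binary inputs are
    active to reach the fixed threshold; otherwise it stays
    silent (returns 0). Connections and thresholds never change,
    matching the paper's presupposition that "the structure of
    the net does not change with time."
    """
    return 1 if sum(inputs) >= threshold else 0

# True-or-false propositions rendered as threshold logic:
def AND(x, y):
    return mp_neuron([x, y], threshold=2)  # fires only if both inputs fire

def OR(x, y):
    return mp_neuron([x, y], threshold=1)  # fires if either input fires

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
```

Everything in the sketch is fixed by design, exactly the “fictitious nets” with unaltered connections and thresholds of the quote above; it describes logic, not learning.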

Mathematical symbols always add “credibility” and a “scientific” aura to an academic paper, especially when they appear in papers in biology, economics, or psychology. Scientific theories, however, are tested by empirical evidence. McCulloch and Pitts’ theory, especially its implication that sensory input went straight to the brain, where it was processed by their presumed digital (on and off, ones and zeroes) neurons, was tested in experiments on frogs conducted by their friend and colleague, Jerome Lettvin.

Together with McCulloch, Pitts and the biologist Humberto Maturana, Lettvin subjected the frogs to various visual experiences and recorded the information the eye sent to the brain. “To everyone’s surprise,” writes Amanda Gefter, “instead of the brain computing information digital neuron by digital neuron using the exacting implement of mathematical logic, messy, analog processes in the eye were doing at least part of the interpretive work.” The results of the experiments were reported in the 1959 paper “What the Frog’s Eye Tells the Frog’s Brain,” which became “a seminal paper in cognitive science.”

Later, Lettvin summarized the career trajectory of the McCulloch and Pitts hallucination: “The whole field of neurology and neurobiology ignored the structure, the message and the form of McCulloch and Pitts’ theory. Instead, those who were inspired by it were those who were destined to become the aficionados of a new venture, now called artificial intelligence, which proposed to realize in a programmatic way the ideas generated by the theory.” The McCulloch and Pitts theory was the inspiration for “connectionism,” the specific variant of artificial intelligence dominant today (now called “deep learning”), whose aficionados have finally succeeded in realizing it in real-world applications.

The development and embellishment of the McCulloch and Pitts hallucination about neurons firing or not firing continued in 1949, when psychologist Donald Hebb advanced a theory of how neural networks could learn. Hebb’s theory is often summarized as “neurons that fire together wire together,” arguing that synapses—the connections between neurons—strengthen over time with the repeated reactivation of one neuron by another or weaken in the absence of such reactivation. Today’s buzz about the computer algorithms that are presumed to emulate the learning processes of the human brain is derived from these 1940s hallucinations, “the attribution of ‘reality’ to logical and mathematical models” in the absence of “experimental facts,” in the words of Ralph Lillie, a contemporary physiologist.
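Hebb’s summary translates into a one-line update rule. The following sketch is a modern, illustrative rendering, not Hebb’s own formalism; the learning-rate and decay constants are arbitrary choices for the demonstration. A synaptic weight grows in proportion to the product of pre- and post-synaptic activity, and slowly fades when the two neurons are not co-active.

```python
def hebbian_step(w, pre, post, lr=0.1, decay=0.01):
    """One Hebbian update on a single synaptic weight.

    "Fire together, wire together": the weight strengthens in
    proportion to pre * post (co-activation) and mildly decays
    otherwise. The lr and decay values are illustrative only.
    """
    return w + lr * pre * post - decay * w

w = 0.5
for _ in range(20):                      # repeated co-activation...
    w = hebbian_step(w, pre=1.0, post=1.0)
print(f"after co-activation: {w:.2f}")   # well above 0.5: strengthened

for _ in range(20):                      # ...followed by silence
    w = hebbian_step(w, pre=0.0, post=0.0)
print(f"after silence: {w:.2f}")         # drifting back down: weakened
```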

Today’s AI aficionados are not bothered by the absence of facts, nor by the presence of experimental facts that contradict the theory they rely on. In the 2017 paper “Neuroscience-Inspired Artificial Intelligence,” the authors, led by Demis Hassabis (co-founder of DeepMind, currently leading Google’s AI work), wrote that McCulloch, Pitts, and Hebb “opened up the field of artificial neural network research, and they continue to provide the foundation for contemporary research on deep learning.” In 1967, Lettvin debated Timothy Leary about the merits and dangers of the hallucinogenic drug LSD.

They discussed possible ways of extricating the world from what they both agreed was a miserable situation. Leary, “founder and head of his own LSD religion,” argued that man had always turned on, be it through flagellation, sexuality, or some other means. “Today,” he said, “the sacrament is chemical.”

Lettvin “cited case after case of people who lost their ability to do their work or became psychotic following LSD experiences” and concluded that LSD is different from other escape drugs such as alcohol or marijuana in that “the user is not assured that he will return to his pre-experience state of rationality.” In any event, the “state of rationality” was questionable, with or without drugs. Lettvin laid out his unflattering view of the scientific community in his talk at a 1971 UNESCO conference, warning of a new religion, “the faith imposed on people by a self-elected administrating priesthood”: “The most vicious thing that this public science says… is that truth is in number, numbers are in machines, machines are not human and therefore just.

They are spared the original sin. In the hands of an unscrupulous and power-grasping priesthood, this efficient tool, just as earlier, the Final Man, has become an instrument of bondage.” As a member of the MIT faculty, Lettvin spent his time, he said, “with those jolly friars that tend the computer,” and he wanted to prepare his audience for “a glimpse of the savior himself, as noble in concept as any modern enterprise, possibly the noblest of them all, but also the most vicious in effect.

This king, this bright star in the diadem of our paper universe is a project called Artificial Intelligence.” Just like today, the goal of the developers of artificial intelligence in Lettvin’s time was “to change the machine from being sorcerer’s apprentice to being itself the sorcerer.” And just like AI developers today, who tell us AGI is around the corner, “they are seeking a universal bug-killer, for all that stands between them and the final conquest of cognition are these few bugs.”

A few months before McCulloch and Pitts’ paper was published, John Mauchly and J. Presper Eckert of the Moore School at the University of Pennsylvania submitted a proposal for building an “electronic calculator” to the U.S. Army’s Ballistics Research Laboratory. The result was the ENIAC, the first electronic general-purpose computer, unveiled to the public in February 1946. The work on the ENIAC also resulted in the paper defining the architecture of modern computers to this day, John von Neumann’s June 1945 First Draft of a Report on the EDVAC.

“Following McCulloch and Pitts,” wrote von Neumann, “we ignore the more complicated aspects of neuron functioning… It is easily seen that these simplified neuron functions can be imitated by telegraph relays or vacuum tubes.” What von Neumann—and all the computer hardware and software developers who followed—engaged in was not “science” but engineering. He used the McCulloch and Pitts theory about how neurons functioned to help illustrate various computer “elements” with familiar terms from human anatomy—“memory” was one such term in von Neumann’s usage that has survived to this day.

Ingenious computer engineering begat artificial intelligence in the 1940s, endowing machines with unprecedented speed of calculation. The subsequent evolution of modern computers—pushed forward by the human intelligence of the engineers developing them—added more and more functionality to computers, all the way to today’s efficient text and image processing. Artificial intelligence is what computers do and what computer engineers invent without any understanding of how our brains work.

Hallucinations about “artificial general intelligence,” or AGI, may motivate some of them, but the hallucinations contribute nothing to the engineers’ success in steadily expanding what computers can do.


From: Forbes
URL: https://www.forbes.com/sites/gilpress/2023/12/30/celebrating-80-years-of-hallucinating-about-artificial-intelligence/
