
AI Ethics Asking Aloud Whether Large Language Models And Their Bossy Believers Are Taking AI Down A Dead-End Path


Lance Eliot, Contributor
Opinions expressed by Forbes Contributors are their own. Dr. Lance B. Eliot is a world-renowned expert on Artificial Intelligence (AI) and Machine Learning (ML).
Aug 30, 2022, 08:00am EDT

Is the AI bandwagon of leveraging Large Language Models proceeding on a proper quest, or is it a red herring?

Maybe a dead-end in AI is up ahead.

You wouldn’t know it from the breathtaking news about AI, all told. Nearly all of the recent AI advances are portrayed as eye-catching, headline-grabbing, and altogether remarkable, perhaps even sensational (or, some grumble, sensationalized). Those breathtaking exhortations are especially being made about the latest AI capabilities that employ a set of techniques and technologies coined as Large Language Models (LLMs).

You have undoubtedly heard or seen references to LLMs via instances such as GPT-3 or BERT. Another notable example is LaMDA (Language Model for Dialogue Applications), which garnered outsized attention due to a Google engineer who proclaimed this AI to be sentient (it wasn’t). For my recent coverage of the confusion being raised by those that claim their devised AI has reached sentience (i.e., contentions that are notably false and misguided), see the link here. I’ll momentarily provide you with a quick explanation of how LLMs work.

They are in fact doing some remarkable computational pattern matching. But this is a far cry from being the final say in attaining either sentient AI or at least Artificial General Intelligence (AGI). Some suggest that with enough upscaling, LLMs will reach such a pinnacle.

Skeptics doubt this. They argue that LLMs are potentially going to be an AI dead-end. Why so? All in all, a rising concern is being gradually voiced, namely that Large Language Models are sucking all the air out of the AI room, as it were.

Just about everyone is totally preoccupied with LLMs. The thing is, there is a strident belief that LLMs provide only one avenue, and a limited one at that, on the rocky road to achieving the zenith of AI (we’ll address these worries closely in a moment). Qualms are that our current preoccupation with LLMs is denying sorely needed attention to other rising AI possibilities.

Not only that, but there are also turf wars and fiefdoms that seem to bossily buttress LLMs into a veritable AI-laden fortress and spew acrid attacks upon any non-LLM approaches that might try to see the light of day. That’s a double-whammy posture, namely favoring one AI approach, LLMs, while disfavoring or bashing any alternative AI approaches. For skeptics and those also struggling to push ahead in other AI quarters, this seems unfair, unwise, and extremely narrow-minded.

Notably, this raises serious AI Ethics considerations. For my ongoing and extensive coverage of AI Ethics and Ethical AI issues, see the link here and the link here, just to name a few. Let’s begin this analysis by first clarifying what Large Language Models consist of.

I will proffer a simple explanation that gets at the essence of LLMs. That being said, those steeped in LLMs might get a bit sour if the description does not seem to fully portray the grandeur of LLMs. As such, keep in mind that the explanation here is merely for getting the fundamentals on the table.

A lot of added complexity ultimately enters the picture, which I’ve covered elsewhere, such as at the link here. We shall begin by discussing and making an analogy to the autocomplete features of modern-day text entry. You almost certainly are familiar with today’s autocomplete functionalities whereby, while typing a sentence in a word processor or an email, the system tries to suggest the next word that you are likely going to want to write.

Sometimes the predicted word is entirely on-target, while on other occasions the predicted word is amiss. Part of this can be improved by the system trying to examine the context of the sentence to calculate what word would seem the most likely fit. For example, if you are typing a sentence about dogs, the system might proffer the word “bark” when your sentence says that the dog started to make a sound.

A sentence mentioning that a cat was about to make a sound would presumably not be using the word “bark,” and thus the system might be able to estimate from the words being used that the more likely word is perhaps “meow” (or a similar expression). Notice that the system is trying to identify which words you are using and how those words form a kind of pattern. The odds are that much of what we write has somewhat already been written, at least in terms of sentence structure and portions of the sentence.
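To make the autocomplete analogy concrete, here is a minimal, purely illustrative Python sketch of next-word prediction using simple n-gram counts. Real LLMs rely on neural networks rather than lookup tables, and the toy corpus and pick_next helper here are invented for demonstration, but the flavor of predicting the next word from the preceding words is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast text a real system learns from.
corpus = (
    "the dog began to bark . the dog started to bark . "
    "the cat began to meow . the cat started to meow ."
).split()

# Count which word follows each three-word context (4-gram counts).
n = 3
following = defaultdict(Counter)
for i in range(len(corpus) - n):
    context = tuple(corpus[i:i + n])
    following[context][corpus[i + n]] += 1

def pick_next(*context):
    """Suggest the most frequently observed next word for this context."""
    counts = following.get(tuple(context))
    return counts.most_common(1)[0][0] if counts else None

print(pick_next("dog", "began", "to"))  # -> 'bark'
print(pick_next("cat", "began", "to"))  # -> 'meow'
```

Notice that the same immediate words (“began to”) yield different predictions depending on whether “dog” or “cat” appears earlier in the context, which is the contextual estimation just described.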

Imagine that whatever sentence you wanted to write was being compared to all other sentences in existence as posted on the Internet. The chances are that the words you are using are to some degree in the same order or similar phrasing as sentences or sentence fragments that others have written already. Bump up this pattern matching from the level of individual words within sentences to a search for, and auto-completion of, entire sentences composed into paragraphs.

Once again, there is a good chance that an entire sentence or two that you’ve written will be followed by another sentence comparable to sentences (within paragraphs) that have already been written. Ratchet this autocomplete functionality up to do the same with whole paragraphs. There is some chance that the paragraph you are writing will be followed by a subsequent paragraph of a semi-predictable nature.

Remember that you are writing words, sentences, and paragraphs based on your own human capabilities as a pattern matcher. The more you’ve read books and articles, the stronger the possibility that your own writing will to some degree reflect other writings that already exist. I’m not saying that you are one-for-one plagiarizing other writings.

You are merely reusing the textual patterns that you’ve seen and learned over time. The odds are that other people reading your text will find the text easier to read due to the wording being akin to other writings they’ve previously seen. This is an overall predictive aspect of what we tend to write.

You would be somewhat hard-pressed to write a sentence that is entirely without any semblance of a pattern abiding by previously written sentences. If you did so, the chances are that the sentence would seem strange and arduous to read (some suggest that poetry or song lyrics are evocative of this out-of-the-ordinary pattern breaking). Large Language Models are kind of doing the same type of textual or language-oriented pattern matching, though computationally, in a computer, and presumably not in the same wetware or thinking manner that humans use (I’ll elaborate on this shortly).

Generally, the more of an abundance of preexisting text that the AI has been exposed to, the better the computational pattern matching of language and linguistics can potentially be. To develop or “train” these AI LLMs, the usual approach involves doing massively widespread text scraping from the Internet, plus possibly including other textual tomes that might be available via sources that are not on the Internet. The more text, the merrier, some would assert.

You can undoubtedly see why the coined name of Large Language Models seems appropriate. The AI is modeling human language and doing so on a large scale. If the AI were narrowly linguistically modeled on one book only, such as the Dr. Seuss book Green Eggs and Ham, there would not be much that the AI LLM could do for you. The predictive capacity would be extremely limited as a result of the toy-like toddler wording in Dr. Seuss.

On the other hand, by trying to leverage immense volumes of text on the Internet and from other sources, there is a lot of meaty stuff for the LLM to computationally pattern match onto. The model becomes larger and larger in size, which can hopefully produce greater and greater proficiency at the language or linguistics pattern matching. All told, some starkly like to say that the LLMs are nothing more than mimicries.

Is it fair to contend that LLMs are mimicries? Some say yes, others adamantly say no. Here’s the basis for the conundrum. The LLM does not “understand” the text that has been used as training for the AI.

I have put the word “understand” into quotes because we usually reserve the connotations associated with understanding or comprehending for the capacities of human thinking. People understand things. Claiming that a computer system or contemporary AI is able to “understand” would seem an inappropriate stretch.

This is an example of the anthropomorphizing of AI, whereby we sometimes use words that express human capabilities and overuse them by applying those words to computers and AI. Not a good idea. Doing so blurs the line about what the AI of today actually consists of.

The implication is that the current AI is on par with human thinking and the inner workings of the brain (which it is decidedly not). Please realize that the LLMs are all devised essentially around mathematics and computational pattern matching. Humans in contrast would presumably have an actual understanding or semblance of comprehension about what they read or write.

Because the AI of today functions on a computational or mathematical basis, the argument goes that the LLMs are nothing more than mimicries. These are automations that merely mimic human language. They are not thinking machines that understand or comprehend what the language means.

To the LLM, this is all about pushing words around here or there, strictly based on pattern matching. A human would seemingly know what the words mean and why the words are being used. The LLM does not possess that same semblance of comprehension.

Now that I’ve laid out that foundational aspect, there are admittedly some quite impressive results that can occur due to the large-scaling properties of today’s LLM. When you see sentences or paragraphs that are outputted by an LLM, you would swear that the wording was human-written. Of course, in a sense, this is true.

The wording was based on human wording and involves moving around the words based on massive pattern matching. It is somewhat like a parrot that is parroting human words, though we don’t imagine that the parrot knows what the words truly connote. The words are remarkably human-like because they are a mimicry of human language.

LLMs are garnering a lot of attention from those inside and outside of AI. Researchers have been jumping onto the LLM bandwagon. AI conferences often have a significant focus on LLMs.

Government funding for AI is continuing to flow toward LLM advancement. Investors and venture capitalists seem to be piling into the LLM startup scene. News coverage about AI is at times primarily concentrated on LLMs, though the journalists and the readers are often unaware that an LLM is at the center or core of the AI application being mentioned.

You might suggest that LLMs are a secret known to AI insiders. The LLM moniker is not particularly highly recognized or popularized. The public at large is more than likely to assume all AI is composed of the same kinds of techniques and technologies, not realizing that numerous specialties are working under the hood.

Within the AI community, there has been increasing grousing about LLMs as a one-size-fits-all approach to AI. Critics argue that it is like crafting a diet consisting of only one item. We need to ensure that a more full-bodied diet exists in the AI field.

It is fine to heap accolades onto LLMs, but do so in a moderated fashion. This moderated fashion would consist of still relishing other competing AI approaches and, in addition, avoiding overstating the LLMs and denigrating the other AI avenues. I’ll dig into this more.

The crux will consist of these four major points:

Language is a limited construct
Language is heaven-forbid AI symbol manipulation reborn
Language could be a bit of a red herring
Language as being modeled is not actionable

It might be useful to first clarify what I mean when referring to AI overall and also provide a brief overview of Machine Learning and Deep Learning, which play into all of this too. There is a great deal of confusion as to what Artificial Intelligence is. I would also like to introduce the precepts of AI Ethics to you, which will be especially integral to the remainder of this discourse.

Stating the Record About AI

Let’s make sure we are on the same page about the nature of today’s AI. There isn’t any AI today that is sentient. We don’t have this.

We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as The Singularity; see my coverage at the link here). Realize that today’s AI is not able to “think” in any fashion on par with human thinking.

When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning and Deep Learning, which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities.

Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking. Part of the issue is our tendency to anthropomorphize computers and especially AI. When a computer system or AI seems to act in ways that we associate with human behavior, there is a nearly overwhelming urge to ascribe human qualities to the system.

It is a common mental trap that can grab hold of even the most intransigent skeptic about the chances of reaching sentience. For my detailed analysis of such matters, see the link here. To some degree, that is why AI Ethics and Ethical AI is such a crucial topic.

The precepts of AI Ethics get us to remain vigilant. AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech. They aren’t necessarily considering the larger societal ramifications.

Having an AI Ethics mindset and doing so integrally to AI development and fielding is vital for producing appropriate AI, including the assessment of how AI Ethics gets adopted by firms. Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised.

The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws. Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient.

They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages. See, for example, my coverage at the link here. In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI; see the link here, for example.

I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted; see the link here. Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:

Transparency
Justice & Fairness
Non-Maleficence
Responsibility
Privacy
Beneficence
Freedom & Autonomy
Trust
Sustainability
Dignity
Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI.

This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized previously herein, it takes a village to devise and field AI, for which the entire village has to be versed in and abide by AI Ethics precepts. Let’s keep things down to earth and focus on today’s computational non-sentient AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models.

Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly.

There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se. Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases.

You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing, there will still be biases embedded within the pattern matching models of the ML/DL. You could somewhat use the famous or infamous adage of garbage-in, garbage-out.

The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities. Not good.
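To illustrate the biases-in, biases-out mechanism in the smallest possible terms, here is a hedged Python sketch using scikit-learn. The “historical decisions,” the feature encoding, and the outcomes are entirely fabricated for demonstration and do not depict any real dataset or deployed system.

```python
from sklearn.linear_model import LogisticRegression

# Fabricated history: each row is [income_score, group_flag], and the
# invented past decisions approved group 0 while denying group 1 even
# at identical income scores -- the bias lives in the data itself.
X_history = [[5, 0], [6, 0], [7, 0], [5, 1], [6, 1], [7, 1]]
y_history = [1, 1, 1, 0, 0, 0]  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X_history, y_history)

# Two new applicants, identical except for the group flag: the pattern
# matcher dutifully mimics the historical bias rather than correcting it.
print(model.predict([[6, 0], [6, 1]]))  # expected output: [1 0]
```

Nothing in the mathematics flags the group feature as problematic; the model simply reproduces the pattern it was given.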

I believe that I’ve now set the stage adequately to deeply examine LLMs.

The Vocalized Critique of Large Language Models

I mentioned that these are the key points I’d like to cover:

Language is a limited construct
Language is heaven-forbid AI symbol manipulation reborn
Language could be a bit of a red herring
Language as being modeled is not actionable

As I unpack those facets, I ask that you keep an open mind. For those of you that do not know much about AI, I assume your mind is relatively open and not laden with preconceived hardcore opinions on the matter at hand.

Those of you that already do know about AI might have stridently formulated some forceful opinions about how AI ought to be developed and what we should be doing next. Please be aware that the criticisms noted below about LLMs are intended to bring into the open the concerns expressed by some that feel the LLM craze is precariously tilting the existing balance of AI pursuits. This discussion is meant to be a constructive form of criticism, rather than acrimonious potshot banter.

In a world that seems to have gone crazily polarizing on all manner of topics, the hope is that we can take an unvarnished look at LLMs to see what some contend are significant concerns or weaknesses. I will afterward cover the various proposed “solutions” aimed at alleviating or dealing with those misgivings. We cautiously enter the coliseum and the games begin.

Language Is A Limited Construct

An often-noted concern about LLMs is that they rely upon language as their cornerstone. Language is said to be a limited construct. You might be puzzled that this would seem to be an expressed concern.

Language seems to make the world go around. Language is essential. Without language, we seemingly would be sitting around in caves and stuck on the evolutionary scale.

Let’s unpack the contentions involved. One viewpoint is that language is a mode of conveyance or communication. Your brain thinks of something and then has to convert that into a language for the purpose of sharing the thought with other humans.

The assertion is that your brain is not thinking in the language. Language is not the essence of thinking. Language is merely a tool used to get your inner thoughts into a format and shape that can be shared with others.

As such, if you go along with that theory, the emphasis on a language basis for AI is presumably misguided. You are not getting to the inner sanctum of thinking. Instead, you are dealing with that stuff surrounding the thinking.

You are outside the secret sauce of how the brain thinks. Language is certainly a key element to humankind, no doubt about it. The problem though is that it isn’t the Wizard of Oz.

It isn’t the engine that makes the car go. Language is an outer ring. Sure, being able to get AI to deal with the outer ring is exemplary and necessary, but this doesn’t get our hands into the real essence of cognitive being.

There is a fashionable thought experiment used to help elucidate the idea (see my related discussion at the link here). Suppose we had the means to connect our brains with other brains. A direct link.

Maybe this is accomplished via some as-yet-undiscovered telepathic connection. Or in a more conventional implementation, perhaps it is some kind of cable that goes from your head to someone else’s head. The gist is that you would have an unfettered way to have one brain communicate on a thinking basis with another active brain.

Thoughts would seamlessly flow from one brain over to the other brain. Per this fanciful scenario that might someday come to pass, the notion is that the brains would be able to communicate without having to use the intermediary step of language. Rather than a brain needing to convert thoughts into a language format, and then conveying the language-formatted version of the thought, the brain would directly pass along the thought unfettered.

No conversion is needed. Language in this guise is considered an intermediary. The use of language is fraught with difficulties because it seemingly requires conversion from inner thoughts into the language at hand.

This might result in distorting the thought. Thoughts might get shortchanged or badly converted. A generated thought has to get transformed and might or might not fit well within the constraints of the language being used.

Envision that your thoughts are round pegs and they are being squeezed into square holes as part of the mental conversion process of thoughts to language. All in all, language then is claimed to somewhat get in the way of getting to the true foundations of how humans think. Language is quite alluring.

Language is front and center. Language is showy. Some point out disparagingly that language is the simpler angle to try and deal with.

Diving into the mind is a lot harder. The implication is that the LLM efforts are taking the “easy” path rather than the harder path. And, worse still, LLM is said to be taking a path that is a dead-end for figuring out the true nature of intelligence and trying to undertake the same in AI.

The energies going toward LLMs are all stuck on the outer ring. For the moment, this is exciting. Ultimately, the outer ring isn’t going to be enough.

We need to delve into the inner core. LLMs are going to presumably distract us from that quest. That is an unabashed accusatory finger being wagged.

Does that seem like a convincing argument to you? There are numerous counterarguments. I’ll sketch a few. First, it could be that the mind does in fact think in terms of language.

Thus, we seemingly should devoutly reject the professed (false) contention that the brain and language are essentially two separate aspects. Knock that falsehood to the ground. The posture is that the mind and language are fully intertwined, inseparably so.

Anyone trying to devise AI based on how human intelligence manifests itself has to crack the code on both language and the thinking that underlies language. You can’t set aside language and believe that you are otherwise going to chip away at how our minds function. If you did so, the effort would be rather fruitless or at least ultimately lead you back to the need to dig into language too.

Second, if nothing else, language at least shapes the mind. The point here is that even if the mind and language are not intertwined per se, the nature of language has a significant shaping effect on the mind. Language is a worthy subject all unto itself.

We would be doing a disservice to pretend that language isn’t the 600-pound gorilla when it comes to how the mind garners its thinking prowess. Third, there might be no other viable means to crack open the thoughts of the human mind other than via language. It could be that if we were somehow able to do brain-to-brain linkages, nothing much would happen.

Why? Because without language no protocol establishes how the brains are able to share their thoughts. The thoughts are mush and unable to be conveyed without language. You are locked into the bedrock nature-given ironclad principle that there must be language to do sharing, regardless of whether via the written word, the spoken word, or a direct connection of cable or magical telepathy.

And so on the counterarguments go.

Language Is Heaven-Forbid AI Symbol Manipulation Reborn

You might be vaguely aware that the 1980s and 1990s were a heyday period for AI. The emphasis at the time was on Expert Systems (ES), also referred to as Knowledge-Based Systems (KBS) or Rule-Based Systems (RBS).

A considered belief was that human knowledge could be expressed in the form of symbolic logic. An AI developer would meet with human experts in a given domain, such as the medical domain, and try to get them to articulate the rules that they used to undertake medical diagnoses. These rules were then encoded into a type of programming language or some kind of coding that would codify the identified rules.

For various reasons, this era of AI eventually seemed to hit a wall. The subsequent time period became known as the AI Winter. During this wintery slowdown of AI, the Machine Learning and Deep Learning approaches gradually became popularized.

This was partially due to the realization that vast stores of online digitized data were available, along with the lessening cost of the computing required to execute or run the ML/DL. In any case, the resultant computational pattern matching has become known as sub-symbolic, meaning that rather than working with symbols as ES/KBS/RBS did, the ML/DL deals with rudimentary data. The present day is known as the AI Spring and appears to have shown that sub-symbolic is the “winning approach” to AI in comparison to the symbolic approach that apparently ran out of steam.

Here’s the twist that comes to the fore about LLMs. You could argue that LLMs are focused once again on symbolic facets of AI. Yikes! Some are shaking their heads in disgust that we seem not to have learned a lesson from the prior AI era.

The symbolic approach was said to have faltered in reaching true AI. Egg was left on the face of AI. The sub-symbolic approach has garnered great favor of late.

Do not walk backward. Do not allow yourselves to be drawn into the symbolic gambit. It is a trap.

Do you find that a convincing position? There are counterarguments aplenty. For example, one such counterargument is that we ought to be looking for the best of both worlds. Let’s make use of sub-symbolic approaches and simultaneously make use of symbolic approaches.

We don’t need to pit them against each other. I’ve covered this conjoining, often referred to as neuro-symbolic AI, at the link here.

Language Could Be A Bit Of A Red Herring

I already mentioned that one expressed qualm is that the LLM approach does not seem to deal with any kind of “understanding” when it comes to the handling or manipulation of language.

It is said to be pure mimicry. I also stated that this is worrisome because the LLMs showcase the appearance of human thinking when in fact they aren’t doing anything of the kind. That is a two-for-one badness.

You’ve got this AI that isn’t doing any true AI; meanwhile, it gives the false appearance that it is. That is an example of a red herring when relying upon language (i.e., a distraction from where we need to be). Another loudly voiced concern is that the LLMs are primarily focused on text. Text is the mainstay medium in which language is being dissected for LLMs.

Web scraping grabs up gobs and gobs of text from the Internet. This becomes the basis for the computational pattern matching models that are being devised. You can imagine that some are perturbed that we are shoehorning language into only that which can be conveyed via text.
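As a rough, purely illustrative sketch of the kind of text gathering just described, here is one common Python approach using the requests and BeautifulSoup libraries. The seed URLs are placeholders, and production LLM pipelines involve vastly larger crawls plus elaborate deduplication and cleaning machinery.

```python
import requests
from bs4 import BeautifulSoup

def scrape_page_text(url):
    """Fetch one web page and boil its markup down to plain text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):  # drop non-visible content
        tag.decompose()
    return " ".join(soup.get_text().split())

# Hypothetical seed list; a real crawl would span millions of pages.
seed_urls = ["https://example.com/a", "https://example.com/b"]
corpus = [scrape_page_text(url) for url in seed_urls]
```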

Humans write text and read text, though humans also do a lot of visualization beyond mere text. We make drawings. We use pictures.

We share in a visual manner far beyond that of text alone. We also have physical mannerisms such as shrugging our shoulders and waving our hands. We have unspoken or unexpressed societal customs and we adopt rituals that aren’t necessarily written down.

Text by itself is not the whole story when it comes to communicating amongst humans. The LLMs are said to be shallow due to a myopic perspective by concentrating solely on text. There is also an allied concern that text from the Internet is not necessarily the best means of gleaning language and human knowledge.

Not everyone uses the Internet. Those that do use the Internet are a subset of humanity. Within the use of the Internet and text, there is a lot of crummy text that we would likely not wish to find embedded or pattern matched into the LLMs.

News reports have from time to time pointed out that some of the LLMs are prone to making racist remarks or expressing other undue biases and discriminatory dialogues. How does this happen? Well, those untoward narratives exist on the Internet and become part and parcel of the models and their pattern matching within the LLMs. We are back to the GIGO or garbage-in garbage-out consternations.

Text, then, is not the only game in town. LLMs being text-oriented means taking a narrow slice of language, and furthermore doing so selectively by primarily gleaning the text from the Internet. There are counterarguments to these complaints.

For example, some LLMs are branching out beyond text and incorporating visualizations such as drawings, pictures, and videos. The LLM proponents say that we need to wait and watch to see where the LLMs advance, rather than being nitpicky right now. Let the LLMs mature.

Another counterpoint is that the LLMs can be devised to ignore inflammatory text or to catch outlandish text before it is emitted when generating textual responses. This appears to deal with the abhorrent aspect that the Internet is a source riddled with adverse content.
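As a minimal sketch of that catch-it-before-emission idea, consider a naive block-list filter wrapped around generated text. The blocked terms below are placeholders, and real moderation layers are far more sophisticated than bare word matching.

```python
# Hypothetical placeholder block list; a production system would use
# nuanced classifiers rather than simple string membership tests.
BLOCKED_TERMS = {"offensive_term_a", "offensive_term_b"}

def emit_if_clean(generated_text):
    """Withhold model output that contains any blocked term."""
    words = set(generated_text.lower().split())
    if words & BLOCKED_TERMS:
        return "[response withheld by content filter]"
    return generated_text
```

Such a crude filter can just as easily suppress benign text, which foreshadows the definitional problem below. As an aside, there are counters to the counterarguments.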

For example, if the LLMs are built to filter out so-called bad content, how do we know that the bad content is in fact bad content? AI developers might decide to exclude content that has value and does not constitute “bad content” depending upon your definition. Round and round this goes.

Language As Being Modeled Is Not Actionable

Consider for a moment the enactment of human knowledge.

If you were to read a book about how to build bridges, you would likely be able to go out and build a bridge. You have leveraged the stated language about bridges into internal knowledge about bridges, and you have in turn enacted the knowledge you gleaned into an actionable result. You built a bridge.

The problem with LLMs is that they are not enactors, some acutely point out. They are symbol manipulators. They push words here or there.

They do not enact language into becoming something tangible or workable. They don’t turn language into knowledge (you, the reader of the emitted text from an LLM, have to take on that chore), and they do not cause that knowledge to arise in actions in the real world (you, the human, have to take action if so needed or desired). Humans read things.

This becomes knowledge in their minds. They take this newfound knowledge and can use it for action. The LLMs don’t do this.

I’m sure you would like to be apprised of some counterarguments. First, even if LLMs only were confined to non-actionable activities, perhaps being rated as glorified writing tools, they still would have tremendous value. Humans could be action takers.

The LLMs are helping humans, who in turn take action. You could toss the same criticism at any kind of online tool, such as email and collaboration tools. They don’t generate action, but they do aid a semblance of knowledge transfer, and they can ultimately lead to action by the humans that rely upon those capabilities.

Second, there is no reason to believe that LLMs cannot be attached to, or by some relatively direct means produce, actions. You could potentially have an LLM that generates instructions, and those instructions are sent to a robot in a factory that is making goods of one kind or another. Therefore, the LLM brought about an enacted action in the real world.
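As a minimal illustrative sketch of that wiring, and of the safeguard idea raised shortly, an LLM-generated instruction might be checked against an explicit allow list before being dispatched. The command set, the dispatch_to_robot stand-in, and the sample instructions are all hypothetical.

```python
ALLOWED_COMMANDS = {"pick", "place", "stop"}  # hypothetical safe command set

def dispatch_to_robot(command):
    # Stand-in for a real robot control API.
    print(f"robot executing: {command}")

def act_on_llm_output(generated_instruction):
    """Forward only instructions that pass the allow-list safeguard."""
    command = generated_instruction.strip().lower()
    if command in ALLOWED_COMMANDS:
        dispatch_to_robot(command)
    else:
        print(f"blocked unrecognized instruction: {command!r}")

act_on_llm_output("Place")        # forwarded to the robot
act_on_llm_output("spin wildly")  # blocked by the safeguard
```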

Be forewarned that the aspect of using LLMs to direct real-world activities is quite alarming to some. Suppose the LLM generates instructions that go to a robot and the robot goes berserk because the instructions are amiss. Chilling.

The counter to that counter of the counter is that there could be safeguards put in place to prevent the robot from performing wrongful acts. Etc.

Conclusion

You have now become part of the ongoing dialogue, and at times heated debate, about whether LLMs are taking us toward true AI or are instead a distraction or possibly an utter dead-end.

With this heaping of concerns, you might be pondering what can be done about all of this. Here are the usual suggestions:

Abandon LLM (a radical proposition)
Continue LLM and ignore the criticisms (a position taken by some pundits)
Keep going with LLM and mildly entertain the grievances
Strive along but adjust LLM and also allow other AI avenues airtime too
Other

A rather radical proposition is that LLMs are so bad in terms of taking us down an AI dead-end that we ought to abandon the pursuit. Right away.

Right now. Put your pencils down. I don’t think there’s much chance of that happening.

Another suggestion is that LLM work should continue along and pretty much ignore all the overt jealousy and angry noise being made by those that aren’t aboard. If the critics or skeptics have a better approach to AI, good luck to them. They should stay out of our neck of the woods and do their own thing.

Stop carping. The nearer middle-ground suggestion is that LLM keeps going but at least takes into account the noted limitations and concerns. Perhaps this doesn’t move the needle on where LLM is headed.

Nonetheless, some of the input might aid LLM efforts. A more encompassing take on this entails LLM adjusting quite significantly as a result of the ongoing qualms and concerns. In addition, no more attacks on other AI approaches.

Allow for a large enough tent to accommodate a slew of AI approaches. With those possibilities now rolling around in your noggin as potential next steps, I’ll provide my final closing remarks for now. Friedrich Nietzsche, the renowned philosopher, once reportedly said: “All I need is a sheet of paper and something to write with, and then I can turn the world upside down.

” A handy quote. You see, LLMs are more powerful in their effect than might seem at first glance, and we need to realize that those devising such AI are not just playing with a fancy wordsmithing apparatus, they are potentially also playing with fire. It can be said that LLMs metaphorically can turn the world upside down.

Keep your eyes and ears wide open, and keep your mind right-side up when it comes to where AI is heading and the heady role of those Large Language Models.

Follow me on Twitter.


From: forbes
URL: https://www.forbes.com/sites/lanceeliot/2022/08/30/ai-ethics-asking-aloud-whether-large-language-models-and-their-bossy-believers-are-taking-ai-down-a-dead-end-path/
