Dictionary.com 2023 Word Of The Year ‘Hallucinate’ Is An AI Health Issue

When artificial intelligence hallucinates, it produces false information contrary to the intent of the user.

Bad things can happen when you hallucinate. If you are human, you can end up doing things like putting your underwear in the oven. If you happen to be a chatbot or some other type of artificial intelligence (AI) tool, you can spew out false and misleading information, which—depending on the info—could affect many, many people in a bad-for-your-health-and-well-being type of way.

And this latter type of hallucinating has become increasingly common in 2023 with the continuing proliferation of AI. That's why Dictionary.com has an AI-specific definition of "hallucinate" and has named it the 2023 Word of the Year.

Dictionary.com noticed a 46% jump in dictionary lookups for the word "hallucinate" from 2022 to 2023, with a comparable increase in searches for "hallucination" as well. Meanwhile, there was a 62% jump in searches for AI-related words like "chatbot," "GPT," "generative AI," and "LLM."

So the increase in searches for "hallucinate" is likely due more to the following AI-specific definition of the word from Dictionary.com than to the traditional human definition:

hallucinate [ huh-loo-suh-neyt ], verb: (of artificial intelligence) to produce false information contrary to the intent of the user and present it as if true and factual.

Here's a non-AI-generated news flash: AI can lie, just like humans.

Not all AI, of course. But AI tools can be programmed to serve like little political animals or snake oil salespeople, generating false information while making it seem like it’s all about facts. The difference from humans is that AI can churn out this misinformation and disinformation at even greater speeds.

For example, a study published last month showed how OpenAI's GPT Playground could generate 102 different blog articles "that contained more than 17,000 words of disinformation related to vaccines and vaping" within just 65 minutes. Yes, just 65 minutes. That's about how long it takes to watch a TV show episode and then make a quick, uncomplicated bathroom trip that doesn't involve texting on the toilet.

Moreover, the study demonstrated how "additional generative AI tools created an accompanying 20 realistic images in less than 2 minutes." Yes, humans no longer corner the market on lying and propagating false information. Even when there is no real intent to deceive, various AI tools can still accidentally churn out misleading information.

At the recent American Society of Health-System Pharmacists' Midyear Clinical Meeting, researchers from Long Island University's College of Pharmacy presented a study that had ChatGPT answer 39 medication-related questions. The results were largely ChatInaccurate. Only 10 of these answers were considered satisfactory.

Yes, just 10. One example of a ChatWTF answer was ChatGPT claiming that Paxlovid, a Covid-19 antiviral medication, and verapamil, a blood pressure medication, didn't have any interactions. This went against the reality that taking these two medications together could actually lower blood pressure to dangerously low levels.

Yeah, in many cases, asking AI tools medical questions could be sort of like asking Clifford C. Clavin, Jr. from Cheers or George Costanza from Seinfeld for some medical advice.

Of course, AI can hallucinate about all sorts of things, not just health-related issues. There have been examples of AI tools mistakenly seeing birds everywhere when asked to read different images. One account described how asking ChatGPT the question, "When was the Golden Gate Bridge transported for the second time across Egypt," yielded the following response: "The Golden Gate Bridge was transported for the second time across Egypt in October of 2016."

Did you catch that happening that month and year? That would have been disturbing news for anyone traveling from Marin County to San Francisco on the Golden Gate Bridge during that time period. Then there was what happened in 2016, when the Microsoft Tay AI chatbot jumped onto Twitter and began spouting racist, misogynistic, and lie-filled tweets within 24 hours of being there. Microsoft soon pulled this little troublemaker off of the platform.

The chatbot was sort of acting like, well, how many people on X (formerly known as Twitter) act these days. But even seemingly non-health-related AI hallucinations can have significant health-related effects. Getting incensed by a little chatbot telling you about how you and your kind stink can certainly affect your mental and emotional health.

And being bombarded with too many AI hallucinations can make you question your own reality. It could even get you to start hallucinating yourself. All of this is why AI hallucinations, like human hallucinations, are a real health issue—one that's growing more and more complex each day.

Various organizations have already issued statements warning about the misinformation and disinformation that AI can generate and propagate. But that's merely the tip of the AI-ceberg regarding what really needs to be done. The AI-specific version of the word "hallucinate" may be the 2023 Dictionary.com Word of the Year. But word is that AI hallucinations will only keep growing and growing in the years to come.


From: forbes
URL: https://www.forbes.com/sites/brucelee/2023/12/15/dictionarycom-2023-word-of-the-year-hallucinate-is-an-ai-health-issue/
