From Camping To Cheese Pizza, ‘Algospeak’ Is Taking Over Social Media

If you’ve seen people posting about “camping” on social media, there’s a chance they’re not talking about how to pitch a tent or which national parks to visit. The term recently became “algospeak” for something entirely different: discussing abortion-related issues in the wake of the Supreme Court’s overturning of Roe v. Wade.

Social media users are increasingly using codewords, emojis and deliberate typos—so-called “algospeak”—to avoid detection by apps’ moderation AI when posting content that is sensitive or might break their rules. Siobhan Hanna, who oversees AI data solutions for Telus International, a Canadian company that has provided human and AI content moderation services to nearly every major social media platform including TikTok, said “camping” is just one phrase that has been adapted in this way. “There was concern that algorithms might pick up mentions” of abortion, Hanna said.

More than half of Americans say they’ve seen an uptick in algospeak as polarizing political, cultural or global events unfold, according to new Telus data from a survey of 1,000 people in the U.S. last month.

And almost a third of Americans on social media and gaming sites say they’ve “used emojis or alternative phrases to circumvent banned terms,” like those that are racist, sexual or related to self-harm, according to the data. Algospeak is most commonly being used to sidestep rules prohibiting hate speech, including harassment and bullying, Hanna said, followed by policies around violence and exploitation. We’ve come a long way since “pr0n” and the eggplant emoji.

These ever-evolving workarounds present a growing challenge for tech companies and the third-party contractors they hire to help them police content. While machine learning can spot overt violative material, like hate speech, it can be far harder for AI to read between the lines on euphemisms or phrases that to some seem innocuous, but in another context, have a more sinister meaning. The term “cheese pizza,” for example, has been widely used by accounts offering to trade explicit imagery of children.
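To make that gap concrete, here is a minimal, hypothetical sketch (in Python, with an invented term list and function name) of the kind of surface-level matching a simple filter might perform; real platform systems are far more sophisticated, and this is purely illustrative:

```python
# Purely illustrative: a naive banned-term matcher. The term list and
# function are hypothetical, not any platform's actual moderation logic.
import re

BANNED_TERMS = {"example banned phrase"}  # placeholder terms for illustration

def flags_post(text: str) -> bool:
    """Flag a post only if it contains a listed term verbatim."""
    normalized = " ".join(re.findall(r"[a-z0-9']+", text.lower()))
    return any(term in normalized for term in BANNED_TERMS)

# A coded phrase slips through because its surface text is innocuous, while
# banning the literal words would also catch genuine posts about tents and
# national parks.
print(flags_post("any tips for camping this weekend?"))  # False
```

Catching the coded sense would require modeling context rather than matching strings, which is exactly where automated systems struggle.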

The corn emoji is frequently used to talk about or try to direct people to porn (despite an unrelated viral trend that has many singing about their love of corn on TikTok). And past Forbes reporting has revealed the double meaning of mundane sentences, like “touch the ceiling,” used to coax young girls into flashing their followers and showing off their bodies.

“One of the areas that we’re all most concerned about is child exploitation and human exploitation,” Hanna told Forbes. It’s “one of the fastest-evolving areas of algospeak.”

But Hanna said it’s not up to Telus whether certain algospeak terms should be taken down or demoted. It’s the platforms that “set the guidelines and make decisions on where there may be an issue,” she said.

“We are not typically making radical decisions on content,” she told Forbes. “They’re really driven by our clients that are the owners of these platforms. We’re really acting on their behalf.”

For instance, Telus does not clamp down on algospeak around high-stakes political or social moments, Hanna said, citing “camping” as one example. The company declined to say if any of its clients have banned certain algospeak terms. The “camping” references emerged within 24 hours of the Supreme Court ruling and surged over the next couple of weeks, according to Hanna.

But “camping” as an algospeak phenomenon petered out “because it became so ubiquitous that it wasn’t really a codeword anymore,” she explained. That’s typically how algospeak works: “It will spike, it will garner a lot of attention, it’ll start moving into a kind of memeification, and [it] will sort of die out.”

New forms of algospeak also emerged on social media around the Ukraine-Russia war, Hanna said, with posters using the term “unalive,” for example—rather than mentioning “killed” and “soldiers” in the same sentence—to evade AI detection.

And on gaming platforms, she added, algospeak is frequently embedded in usernames or “gamertags” as political statements. One example: numerical references to “6/4,” the anniversary of the 1989 Tiananmen Square massacre in Beijing. “Communication around that historical event is pretty controlled in China,” Hanna said, so while that may seem “a little obscure, in those communities that are very, very tight knit, that can actually be a pretty politically heated statement to make in your username.”

Telus also expects to see an uptick in algospeak online around the looming midterm elections.

Other ways to avoid being moderated by AI involve purposely misspelling words or replacing letters with symbols and numbers, like “$” for “S” and the number zero for the letter “O.” Many people who talk about sex on TikTok, for example, refer to it instead as “seggs” or “seggsual.”
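As a rough illustration of why those swaps are the easiest kind of algospeak to undo, here is a small, hypothetical Python sketch of the sort of normalization a filter might apply before matching a post against a term list; the substitution map is invented for this example:

```python
# Hypothetical normalization step: map common symbol/number stand-ins back
# to letters before checking a post against a banned-term list.
SUBSTITUTIONS = str.maketrans({"$": "s", "0": "o", "1": "i", "3": "e", "@": "a"})

def normalize(text: str) -> str:
    """Lowercase the text and reverse simple character swaps."""
    return text.lower().translate(SUBSTITUTIONS)

print(normalize("$3X"))    # "sex"   -- symbol swaps are easy to reverse
print(normalize("seggs"))  # "seggs" -- deliberate respellings are not
```

Respellings like “seggs” survive that kind of cleanup, which is one reason they spread: undoing them requires knowing the intended word, not just the character swap.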

In algospeak, emojis “are very commonly used to represent something that the emoji was not originally envisioned as,” Hanna said. In some contexts, that can be mean-spirited, but harmless: The crab emoji is spiking in the U.K. as a metaphoric eye-roll, or crabby response, to the death of Queen Elizabeth, she said. But in other cases, it’s more malicious: The ninja emoji in some contexts has been substituted for derogatory terms and hate speech about the Black community, according to Hanna.

Few laws regulating social media exist, and content moderation is one of the most contentious tech policy issues on the government’s plate. Partisan disagreements have stymied legislation like the Algorithmic Accountability Act, a bill aimed at ensuring AI (like that powering content moderation) is managed in an ethical, transparent way. In the absence of regulations, social media giants and their outside moderation companies have been going it alone. But experts have raised concerns about accountability and called for scrutiny of these relationships.

Telus provides both human and AI-assisted content moderation, and more than half of survey participants emphasized it’s “very important” to have humans in the mix. “The AI may not pick up the things that humans can,” one respondent wrote. And another: “People are good at avoiding filters.”


From: Forbes
URL: https://www.forbes.com/sites/alexandralevine/2022/09/16/algospeak-social-media-survey/
