
Inside Meta’s Oversight Board: 2 Years of Pushing Limits


On the morning of Thursday, June 30, 2022, two large luxury buses pulled up to a grand hotel in Menlo Park, California. Milling on the driveway were the members, staffers, and trustees of the Oversight Board. Set up two years ago by Facebook, now Meta, this august gaggle exists to second-guess the company’s most controversial actions.

The board members, who’d already logged countless hours on video calls and email, were spending their first week together in person. The buses rumbled off, whisking the 23 Zoom buddies to Meta’s headquarters 4 miles away. The group made its way across the mammoth Gehry-designed complex to a verdant outdoor amphitheater known as the Bowl.

Sheryl Sandberg, Meta’s outgoing chief operating officer, greeted the crowd in the midday heat. Next up was Nick Clegg, the company’s president for global affairs. Clegg was almost startling in his effusive praise of the board.

He was taking questions from the members when, suddenly, the large screens in the Bowl lit up with a familiar face. Mark Zuckerberg’s expressionless visage peered down at the sweaty visitors. Though Zuckerberg had personally willed into being this body of overseers—overseeing him—he had never met with all its current members.

Meta’s founder and CEO didn’t share his location, but a fair guess would have been that he was at his Hawaiian island retreat, where he had spent much of the previous year. Staring into his webcam, Zuckerberg congratulated the board on its work so far. Free expression, he said, has always been part of his company’s mission—but sometimes people use their voices to put others in danger.

Meta shouldn’t be making so many decisions on speech by itself. Zuckerberg finished his talk with a wholehearted endorsement. “This has been important to me from the beginning,” he said, “and I’m committed to the board for the long term.”

Indeed, a few weeks later, Meta announced it would give the board $150 million—more than double its original commitment—to keep the project going through 2025. So far, the board has received nearly 2 million appeals on content and ruled on 28 of them. It has made 119 recommendations to Meta.

Its judgments have involved wampum belts, blackface, and the removal of a former US president from Facebook. Some critics see the Oversight Board as an exercise in corporate ass-covering by a bunch of Meta’s puppets. If the company doesn’t want to make a controversial call, it can push the board to take a position on the issue and, conveniently, take the heat.

Emi Palmor, a board member who once served as the director general of Israel’s Justice Ministry, says she’s frequently approached in the supermarket by people seeking tech support for Meta apps. “I want to murder the person who chose the name Oversight Board,” she says. “It is an unexplainable term.”

But since it started hearing cases in the fall of 2020, the board has won grudging respect from the human rights organizations and content moderation wonks who pay attention to its work. “People thought it would be a total fiasco,” says Evelyn Douek, a Stanford law professor who follows the board closely. “But in some real ways, it has brought some accountability to Facebook.”

Meta, meanwhile, is declaring victory. “I’m absolutely delighted—thrilled, thrilled, thrilled with the progress,” Clegg says. The board’s approach to cases “is exactly what you should expect between a social media platform and an independent oversight entity.”

The truth is more complicated, and Clegg’s ebullient praise and Zuckerberg’s encouraging mahalo make board members nervous. If one of the world’s most transgressive companies thinks that the oversight is going fantastically, how great can the board be? Suzanne Nossel, a member who is also the CEO of the literature and human rights nonprofit PEN America, thinks it’s too early to make a call. “We’ve only just begun to figure out how to do this work,” she says.

The board has figured out one big thing: It has an opportunity, with caveats, to alter how the internet’s Goliaths treat the speech of billions of people. Even after more than two decades of social media, the way platforms patrol their corridors can seem arbitrary and self-serving. Imperfect algorithms and armies of undertrained, overworked moderators make life-altering decisions.

People scramble to contest them, filing millions of appeals every month. They dig through help pages, argue with bots, and most often give up in frustration. The policies that supposedly balance free expression and safety were drawn up by companies whose priorities are growth and profit.

“The platform was not designed with integrity in mind,” says Jamal Greene, a Columbia law professor who is one of the board’s cochairs. “It was designed with reach in mind.” No one wants the government to step in and bash out rulings on edgy posts.

But online speech is still speech, and people expect some rights around it. The Oversight Board is a first stab at securing those liberties and, in its most ambitious form, a chance to stem some chaos. But the deeper the board’s members get into the issues, the more they find themselves bumping up against the edges of what Meta will let them do.

The great experiment of the Oversight Board started on a bike ride. In January 2018, Noah Feldman, a professor at Harvard Law School, was visiting the Bay Area and crashing at his friend Sheryl Sandberg’s house. One day, he was pedaling around the local foothills when his mind turned to Facebook.

The problem with his host’s social media employer, he thought, was that no matter what it decided on a given piece of content, someone would be mad at the company. Perhaps it could benefit from a separation of powers. By the end of his ride he had a suggestion for Sandberg: Facebook should create its own version of the Supreme Court, an independent body that would examine the biggest complaints about the company’s decisions.

Sandberg brought the idea to Zuckerberg, who had been pummeled for months about speech on his platform and was now thinking about “governance” as a way to signal that he wasn’t the dictator of the world’s expression. He embraced the concept. In June of that year, I met Zuckerberg at Facebook’s headquarters for a walk through its 9-acre rooftop gardens.

As we strolled, he shared a vision of an independent body that would make binding decisions on content. “We need to figure out the mechanism for appointment—but they don’t report to me,” he said. “They’re not likely going to be Facebook employees.”

He understood then that he would need to fend off the impression that the overseers were his flunkies. Zuckerberg’s MO for new initiatives is to rely on loyal long-time lieutenants to make them happen. In this case, Facebook used an internal team of governance nerds.

It was headed by Brent Harris, an attorney with experience in climate and environmental work, and Heather Moore, who had worked in the US Attorney’s Office in Newark, New Jersey. Both said they saw it as a chance to help people on the platform. (Harris now heads a governance group at Meta that includes the board’s support team.)

For a company that once boasted of moving fast, Facebook set up its board with the cautious deliberation of a 19th-century government railway bureaucracy. Buy-in was not universal. “I was skeptical we would get much benefit,” says Monika Bickert, who heads global content policy.

(It would be her rules that the board would question.) But the team plodded forward, set up a series of workshops, and solicited suggestions from outsiders on how the board should operate. Some participants would wind up filling its seats.

By 2020, Facebook had set up the board as an independent trust with a $130 million grant. The company would pay up to 40 board members six-figure salaries for what was estimated to be 15 hours of work each week. A full-time staff would support the effort, like clerks for Supreme Court justices.

A lengthy charter set the ground rules. The meat of the board’s activities would be handling disagreements over individual posts. Perhaps Facebook or Instagram had removed someone’s post for violating its terms, and the user wanted to contest that decision.

The board could rule on posts, but not ads, algorithms, or groups. (That stuff might come later.) A case selection committee, made up of board members, would extract from the sea of appeals the cases the board would take on, then assign them to five-person panels.

Those groups would evaluate their case and reach a decision. Facebook was bound to honor the board’s rulings on individual posts. But there was more.

The board could include in its case rulings sweeping recommendations, which the company could take or leave. If it rejected the suggestions, it would have to explain itself, but that would be it. The board could get a crack at the company’s knottiest conundrums through a “policy advisory opinion”—a request directly from Meta for the board to review an especially controversial decision.

Meta could again accept or reject whatever the board advised. In May 2020, the company announced it had recruited a distinguished collection of lawyers, journalists, and human rights activists to become the board’s first 20 members, including four cochairs.

There was a former prime minister of Denmark, a Pulitzer Prize–winning former newspaper editor, and a Nobel Peace Prize laureate. All the members had one thing in common—a resolve that they be seen as independent of the company funding their paychecks. Still, Facebook’s critics were ready to call out the Oversight Board as a sham.

Jessica Gonzalez is the co-CEO of Free Press, a group opposed to corporate control of media, and one of a motley collection of company detractors—including full-time Meta apostate Roger McNamee and Nobel laureate Maria Ressa—who created a shadow organization called the Real Facebook Oversight Board; it is dedicated to issuing body blows to everything its namesake does. The really real board “is a PR stunt,” Gonzalez says, “that gives Facebook cover for not adequately investing in the integrity of its systems and not doing enough to keep people safe.”

In January 2021, the board ruled on its first cases—and set a pot of tension on simmer.

The previous October, a Brazilian Instagram user touting a breast cancer awareness campaign had posted an image with several examples of post-surgery breasts. An algorithm trained to seek and destroy nipple content took down the post. Once the board accepted the case, the company decided to manually review the post.

Nudity for the sake of medical awareness was within Instagram’s rules, so the policy standards team restored the post. With the issue now moot, the company told the board to drop the case. The members declined.

Their insistence was a message: While their decisions were nominally about individual pieces of content, the real work was in interrogating company policies. They were out to change Meta. In the write-up of their decision—reaffirming that the post should stay up—the board members exposed how this seemingly trivial, fixable mistake was a window into a deeper failure.

The company overly relied on algorithms, which in this case didn’t pick up the Portuguese for “breast cancer.” Removing the post, the board argued, raised “human rights concerns.” Citing the International Covenant on Civil and Political Rights, a foundational United Nations treaty, the board wrote, “Any restriction on freedom of expression must be for a legitimate aim.”

It recommended that anytime a user appeals an algorithmic decision of this sort, that person should automatically be granted a human content moderator. “We basically asserted our authority even though Facebook had decided to reinstate the content,” says board member Ronaldo Lemos, a law professor from Brazil. “At the same moment we said, ‘We want to talk about algorithms.’”

A pretty reasonable request—except the company did not follow up on the board’s recommendation. To this day, Facebook and Instagram users are not guaranteed that when some robot blocks their speech, a human being will ever see their complaints. The board was imagining a world in which social media platforms would have to at least treat their users like human beings.

The members would keep pressing to make that happen, because, well, human rights are their thing. The board had issued only a handful of rulings when a bombshell of a case dropped: the suspension of President Donald Trump. In the heated hours of the insurrection on January 6, 2021, Trump blessed the violent protests in posts on Facebook and Instagram.

The company swiftly removed the posts and suspended him from both platforms indefinitely. The MAGA crowd cried censorship. Anti-Trumpers were outraged that the ban wasn’t permanent.

On January 21—perhaps not coincidentally, after a new US president had been inaugurated—Facebook told the board members to figure it out. “It was a very, very simple decision,” Clegg says of requesting a policy advisory opinion. “Just imagine if we hadn’t deferred that decision to them. People would’ve quite rightly said, ‘You’ve created an oversight board, and you won’t even share with them this dilemma of what to do with the former elected president of the most powerful democracy on the planet.’” For the board, though, the moment was perilous. Both pro- and anti-Trump observers were ready to pounce on any misstep; a clumsy move could have sunk the whole experiment.

After months of deliberation, the board backed the company’s decision to remove the former president’s incendiary words on Facebook and Instagram and to boot him from the platforms. But the board once again demanded that the company make its policies more explicit. In its ruling that spring, the board excoriated Facebook for basically making decisions on the fly—and for refusing to provide a time frame for the ex-president’s restoration.

By not having clear standards for suspensions, the company was failing the public. “Facebook shunned its responsibility,” said board cochair Helle Thorning-Schmidt, a former prime minister of Denmark. The board’s commentary on that high-profile case pointed to one of its obsessions: Facebook’s lack of transparency about its own rules.

The board returned to it frequently and became adept at choosing complaints with the most potential for broad impact. “Case selection is the whole game,” says Nicolas Suzor, a board member and law professor from Australia. Suzor is at times on the selection committee that decides which issues the board wants to address and has staffers sifting through thousands of appeals to find cases that fit.

In April 2021, the committee plucked out a case that came to be known as Ocalan’s Isolation. Abdullah Ocalan is a founding member of the Kurdistan Workers’ Party (PKK), a group that Facebook had designated a “dangerous entity.” He is currently incarcerated on a Turkish prison island in perpetual solitary confinement.

A few months earlier, an Instagram user in the US had posted a picture of Ocalan with the words “y’all ready for this conversation?” and urged people to discuss the conditions of the prisoner’s confinement. Facebook removed it. Company policy bans posts in support of people involved in dangerous entities.

This post wasn’t that. The board was eager to tackle the issue. “You have an organization that you can’t talk about,” says board member Julie Owono, who is executive director of the digital rights organization Internet Sans Frontières.

“Yet you have a leader whose situation has been internationally recognized as a violation of the person’s human rights.” Researchers within the company started digging up background information on the case, much of it from Facebook’s private databases. While going through files, they stumbled on an embarrassing detail: The issue of Ocalan’s imprisonment had come up before.

The company had even created a special policy that allowed posts from users who advocated for humane treatment but weren’t themselves PKK supporters. But that instruction, written in 2017, was never made public. It was evidently forgotten inside the company too, as it routinely took down posts regarding the conditions of Ocalan’s confinement.

Facebook was violating its own rules. “When I found out about that disconnection, I thought, that’s precisely why I came here,” Owono says. In its first year, the board steadily pushed the company to fix its imperious attitude toward complaints.

Users were seldom informed why posts were taken down or why seemingly obvious violations were allowed to remain. The board views this Kafkaesque behavior as one of the company’s ongoing insults to human rights. “It was something I wouldn’t have thought was even a problem before I joined the board,” says Greene, one of the cochairs.

“But we realized it’s a huge problem.” In 2021 alone, six of the board’s 20 rulings recommended that when the company removes a person’s content, it should inform the user what rule they broke.

When I bring this up with Clegg, he acts as if the board’s continued pounding on this topic is the greatest thing since targeted ads. “A thousand percent!” he says. “The main early, consistent drumbeat of criticism we’ve had from the board—and I think it’s totally understandable—is that you’re not explaining to users where you stand, and users feel you are applying arbitrary decisions.”

Citing the board’s criticisms, Meta revealed this summer that it was creating a customer service group to provide explanations of its takedowns and suspensions. It took multiple decisions, but the board had made its point. Now, “Meta is more transparent with its users about what they’ve done wrong,” Greene says.

The battle proved to the board that its mission is not to decide the fate of one post or another, but to make Meta own up to the monster it has created. On the page of the board’s website where users lodge their complaints, the text does not read, “Get your post restored” or “Fix this bad decision.” The call to action says, in giant letters, “Appeal to shape the future of Facebook and Instagram.”

While the board racked up points with that win, it still has limited leverage. When the board makes recommendations, a Meta working group determines whether the company will implement them. “We treat the board the way we do a regulator,” says Harris, the lawyer who helped set up the board and remains its closest contact within Meta.

There is, of course, a difference. While there are consequences for ignoring a regulator, Meta is free to do as it wishes. Of the board’s 87 recommendations through the end of 2021, Meta claims to have fully implemented only 19, though it reports progress on another 21.

The company brushed off another 13 recommendations by saying, without elaboration, it is “work Meta already does.” Other recommendations are outright refused. “We don’t have a police force,” Owono says.

“But I don’t think it prevents us from holding the company accountable, at least to its users.” A board committee is studying how to make their recommendations harder to dodge. By early 2022, two themes were emerging in the relationship between Meta and its Oversight Board.

In some company quarters, the board’s decisions were having a positive effect. Even Meta’s content policy head, Bickert—whom one board insider cited to me as a powerful internal detractor of the effort—says that she now often asks herself, “What would the board think?” Some board members, however, were feeling increasingly frustrated with the boundaries they were forced to work within and the obstacles they felt that Meta was intentionally placing in their path. One point of friction is how the board grows.

In an early conversation I had with Meta’s Harris and Moore, the idea was that the company would help choose the first tranche of members, then step aside. But in the board’s charter, the company gave itself a say in selecting the full complement of 40 members. Meta employees remain deeply involved in hiring and are a factor in why the board is still far short of the total number set out in its charter.

“While it’s hard to find the right kind of people, I don’t know that’s an excuse for operating at 50 percent capacity,” says Douek, the Stanford law professor who keeps an eye on the board’s activities. Meta’s influence became hard to miss when the board invited Renée DiResta to interview. DiResta, the technical research manager of the Stanford Internet Observatory, was interested in becoming a member, she says, because it “would be an opportunity to shape the direction of something that I think has real potential.”

DiResta has degrees in political science and computer science. Beginning in April 2021, she underwent multiple interviews. On paper, her inclusion made a lot of sense.

The Oversight Board lacks experts on algorithms, so her presence would fill a void. But there was a problem: She has been a consistent critic of Meta’s failure to deal with the harmful disinformation on its platforms. In March 2022, DiResta got an email rejecting her application.

“They said they were going in a different direction,” she says. That direction, it turned out, was the same as before. The board proceeded to add three more members who, like the first 20, are lawyers or journalists with no technical background.

One person familiar with the process says it was Meta’s reservations that put the kibosh on the nomination. Harris, of Meta, says that “the company has expressed concern in some instances about who may or may not be more effective in certain lights as a board member.” Meta further explains that it is not unusual for multiple people to withhold their endorsement of a candidate, and that only the exceptions—candidates who earn consensus—get hired.

(That’s a big reason why the board has trouble filling its vacancies.) If the board were truly independent, of course, it would never solicit, let alone entertain, Meta’s concerns. Around the time of DiResta’s rejection, board members were also fuming over another dispute with Meta.

They wanted access to a basic company-owned tool that would help them choose and contextualize their cases. Called CrowdTangle, the software is essential for analyzing the impact of Facebook and Instagram posts. It is used internally and by selected outside researchers and media organizations.

Getting access seemed like a no-brainer; investigating a case without it is like assessing damage to a coal mine without a flashlight. The board spent months asking for access, yet Meta still didn’t grant the request. It seemed clear that someone at Meta didn’t want the board to have it.

Ultimately, the issue came up in a March 2022 meeting with Clegg, who seemed taken aback by the board members’ frustration. He promised to break the logjam, and a few weeks later the board finally got the tool it should have had from the start. “We had to fight them to get it, which was baffling,” says Michael McConnell, a Stanford law professor who is one of the board’s cochairs.

“But we did it.” No sooner had that skirmish been resolved than another incident roiled the waters. When Russian troops invaded Ukraine last February, Facebook and Instagram were quickly overwhelmed with questionable, even dangerous content.

Posts promoting violence, such as “death to the Russian invaders,” were in clear violation of Meta’s policies, but banning them might suggest the company was rooting for those invaders. In March, Meta announced that in this narrow instance, it would temporarily allow such violent speech. It turned to the board for backup and asked for a policy advisory opinion.

The board accepted the request, eager to ponder the human rights conundrum involved. It prepared a statement and set up appointments to brief reporters on the upcoming case. But just before the board announced its new case, Meta abruptly withdrew the request. The stated reason was that an investigation might put some Meta employees at risk.

The board formally accepted the explanation but blasted it in private meetings with the company. “We made it very clear to Meta that it was a mistake,” says Stephen Neal, the chair of the Oversight Board Trust, who noted that if safety were indeed the reason, that would have been apparent before Meta requested the policy advisory opinion. When I asked whether Neal suspected that the board’s foes wanted to prevent its meddling in a hot-button issue, he didn’t deny it.

In what seemed like an implicit return blow, the board took on a case that addressed the very issues raised by Meta’s withdrawn advisory opinion. It involved a Russian-language post from a Latvian user that showed a body, presumably dead, lying on the ground and quoted a famous Soviet poem that reads, “Kill the fascist so he will lie on the ground’s backbone … Kill him! Kill him!” Other members also noticed the mixed feelings inside Meta. “There are plenty of people in the company for whom we’re more of an irritation,” McConnell says.

“Nobody really likes people looking over their shoulders and criticizing.” Since the board members are accomplished people who were probably chosen in part because they aren’t bomb throwers, they’re not the type to declare outright war on Meta. “I don’t approach this job thinking that Meta is evil,” says Alan Rusbridger, a board member and former editor of The Guardian.

“The problem that they’re trying to crack is one that nobody on earth has ever tried to do before. On the other hand, I think there has been a pattern of dragging them screaming and kicking to give us the information we’re seeking.” There are worse things than no information.

In one case, Meta gave the board the wrong information—which may soon lead to its most scathing decision yet. During the Trump case, Meta researchers had mentioned to the board a program called Cross Check. It essentially gave special treatment to certain accounts belonging to politicians, celebrities, and the like.

The company characterized it to the board as a limited program involving only “a small number of decisions.” Some board members saw it as inherently unfair, and in their recommendations in the Trump case, they asked Meta to compare the error rates in its Cross Check decisions with those on ordinary posts and accounts. Basically, the members wanted to make sure this odd program wasn’t a get-out-of-jail-free card for the powerful.

Meta refused, saying the task wasn’t feasible. (This excuse seems to be a go-to when the company wants to bounce the board’s suggestions.) Meta also pointed the board to one of its previous statements: “We remove content from Facebook no matter who posts it, when it violates our standards.”

In September 2021, The Wall Street Journal began publishing leaked documents showing that Cross Check actually involved millions of accounts. The program wound up shielding so much improper content that even its own employees had condemned it as allowing the powerful to circumvent the company’s rules. (One example: Trump’s infamous Black Lives Matter–related post that said, “When the looting starts, the shooting starts.”

Another was a soccer star’s nude photos of a woman who accused him of rape.) In a May 2019 internal memo, dismayed Facebook researchers had written, “We are knowingly exposing users to misinformation that we have the processes and resources to mitigate.” Another internal paper put it bluntly: “We are not actually doing what we say we do publicly.”

Meta was busted. Its claims to the board about the Cross Check system were at best a gross understatement. “I thought it was extremely disrespectful that Facebook so openly lied to the Oversight Board,” says former employee Frances Haugen, who leaked the papers and has met with the board privately to discuss the program.

The board demanded that Meta explain itself, and the company admitted, according to the board’s transparency report, that it “should have not said that Cross Check only applied to ‘a small number of decisions.’” The board stated that if it couldn’t trust Meta to provide accurate information, the entire exercise would crumble. Suzanne Nossel, the PEN CEO, says she worried that the company’s deceptions might hobble their project.

“I was chagrined and concerned about the credibility of the board, our ability to carry out our work,” she says. Meta’s next move was reminiscent of its buck-passing in the Trump decision—it asked the board for its views on the program. Over the next few months, the board set up a committee to study Cross Check.

Most of the meetings were virtual. But in April, the committee managed to meet for several days in New York City. The six members of the board and their prodigious staff took over several meeting rooms at a law firm in Midtown.

After much pleading on my part, I sat in on one of their deliberations—the first time a journalist was allowed in an official Oversight Board session. (I had to agree not to attribute quotes to members by name.) It should not be the last; the mere glimpse I got showed just how frank and determined these semi-outsiders were to change the company that had brought them together.

Fifteen people gathered around a set of tables arranged in a rectangle and set up with all the formality of a United Nations summit. A team of translators was on hand so every member could speak their native language, and each participant got an iPod Touch through which to listen to the translations.

Once the conversation got underway, it quickly became heated. Some members abandoned their home tongues and spoke in less-polished English so the others could hear their urgency straight from their mouths. I wound up monitoring perhaps an hour of a much longer session.

From what I could perceive, the board was evaluating the program from a human rights perspective. The members seemed to have already concluded that Cross Check embodied inequality, the exact opposite of Meta’s claim that “we remove content from Facebook no matter who posts it, when it violates our standards.” One member referred to those in the program as the Privileged Post Club.

The board members seemed to understand Meta’s argument that giving special treatment to well-known accounts could be expeditious. Employees could more quickly assess whether an improper post was excusable for its “newsworthiness.” But the members zeroed in on the program’s utter lack of transparency.

“It’s up to them to say why it should be private,” the cochair who was moderating the session remarked. The members discussed whether Meta should make public all the details of the program. One suggestion was that the Privileged Posters be labeled.

After listening to all this back-and-forth, one member finally burst out with an objection to the entire concept of the program. “The policies should be for all people!” she exclaimed. It was becoming clear that the problems with the Cross Check program were the same seemingly intractable problems of content moderation at scale.

Meta is a private service—can it claim the right to favor certain customers? Of course not, because Meta is so entwined with the way people express themselves around the globe. At one point, a member cried out in frustration: “Is being on Facebook a basic human right?” Meta, meanwhile, was still not sharing critical facts about the program. Was Cross Check singling out people solely to clear questionable content, or was it giving some people extra scrutiny? The board hadn’t gotten an answer.

After that meeting, members and staffers met with Meta officials and unloaded on them. “We were pretty blunt and tenacious in trying to get the information we wanted,” Rusbridger told me later. “They were a bit bruised; they thought we had behaved discourteously.”

He says that the board got some of the details it sought—but not all of them. Despite the frustrations so far, or perhaps because of them, the members are hoping to maneuver the board into a more visible, consequential spot. In October 2022, it announced that in recent months, Meta had been accepting more of its recommendations.

Going forward, it might try to take on a wider range of cases, including ones on ads and groups. “I think we could double or triple the number of cases we handle without dramatically changing the nature of our operations,” says Neal, the chair of the trust. “But let’s assume we were doing 100 cases a year—is that alone enough to have a real impact on where platform content moderation is going? If you want to think about bigger impacts, you need to think about a much bigger organization.”

The board could start by filling all its open slots. It could also start critiquing Meta’s algorithms. Even though they fall outside the board’s scope of influence, some of the group’s recommendations have implicated the company’s code.

“We have our own freedom of speech,” says Palmor, the lawyer from Israel. “Even if we don’t talk directly about the algorithm, we do take into consideration the way content spreads.” The next step would be to get more expertise on how algorithms actually operate, and to make more direct rulings.

(Hiring Renée DiResta would have helped with that.) Then there are the policy advisory opinions, the big-issue examinations that, to date, have all originated within Meta. Members wish they could also add to the list.

If Tawakkol Karman, a board member and Nobel Peace Prize–winning journalist, had her way, she would demand action on Meta’s notoriously high volume of bogus accounts, which she calls “a disaster.” “They breed misinformation, hatred, and conflict, and at the same time, fake accounts are recruited to attack the real accounts,” she says. “It’s become a tool of oppressors.”

So does the board have plans to address the issue? “We are working on this,” she says. The board is now exploring how it might exercise its power beyond Meta. Neal says the organization is considering a role in the execution of the European Union’s Digital Services Act, which will introduce a breathtaking suite of rules on digital platforms, including social media.

The act includes a provision for mandatory appeals systems. Joining the effort might stretch the board thin but could also bring it closer to becoming, as some members dream, a more global force in content policy, with influence over other companies. Never mind that Twitter, Snap, YouTube, and TikTok aren’t exactly beating down the doors to get a piece of the Oversight Board.

(Twitter’s new CEO had, uh, tweeted to say he’s setting up an advisory committee. Almost instantly, the Oversight Board responded with an offer to help, but so far he hasn’t accepted.) The board’s decisions don’t even cover Meta-owned WhatsApp.

“I think we are making a difference,” Palmor says. “Do I think that the board has enough impact? My answer is no. I wish we had made more of a difference.”

Yet both within Meta and on the board, people seem intoxicated by the idea of extended purview. For Meta, it would be a triumph if its competitors also had to play by its rules. “We’re not seeking to be the board for the industry,” says Thomas Hughes, who handles the board’s operation.

“But we are seeking to understand how we might interrelate with other companies” to share what they’ve learned and “how we might interact with companies setting up different types of councils or bodies to talk about standards.” It’s ironic that a board convened to oversee Meta, a company whose sins spring from a mania for growth, now has its own visions of getting big fast.

This article appears in the December 2022/January 2023 issue.



