OpenAI’s new boss is the same as the old boss. But the company—and the artificial intelligence industry—may have been profoundly changed by the past five days of high-stakes soap opera. Sam Altman, OpenAI’s CEO, cofounder, and figurehead, was removed by the board of directors on Friday.
By Tuesday night, after a mass protest by the majority of the startup’s staff, Altman was on his way back, and most of the existing board was gone. But that board, mostly independent of OpenAI’s operations and bound to a “for the good of humanity” mission statement, was critical to the company’s uniqueness. As Altman toured the world in 2023, warning the media and governments about the existential dangers of the technology that he himself was building, he portrayed OpenAI’s unusual for-profit-within-a-nonprofit structure as a firebreak against the irresponsible development of powerful AI.
Whatever Altman did with Microsoft’s billions, the board could keep him and other company leaders in check. If he started acting dangerously or against the interests of humanity, in the board’s view, the group could eject him. “The board can fire me, I think that’s important,” Altman told Bloomberg in June.
“It turns out that they couldn’t fire him, and that was bad,” says Toby Ord, senior research fellow in philosophy at Oxford University and a prominent voice among those who warn that AI could pose an existential risk to humanity. The chaotic leadership reset at OpenAI ended with the board being reshuffled to consist of establishment figures in tech and former US secretary of the treasury Larry Summers. Two directors associated with the “effective altruism” movement, the only women on the board, were removed.
It has crystallized existing divides over how the future of AI should be governed. The outcome is seen very differently by doomers who worry that AI is going to destroy humanity; transhumanists who think the tech will hasten a utopian future; those who believe in freewheeling market capitalism; and advocates of tight regulation to contain tech giants that cannot be trusted to balance the potential harms of powerfully disruptive technology with a desire to make money. “To some extent, this was a collision course that had been set for a long time,” says Ord, who is also credited with cofounding the effective altruism movement, parts of which have become obsessed with the doomier end of the AI risk spectrum.
“If it’s the case that the nonprofit governance board of OpenAI was fundamentally powerless to actually affect its behavior, then I think that exposing that it was powerless was probably a good thing.” The reason that OpenAI’s board decided to move against Altman remains a mystery. Its announcement that Altman was out of the CEO seat said he “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”
An internal OpenAI memo later clarified that Altman’s ejection “was not made in response to malfeasance.” Emmett Shear, the second of two interim CEOs to run the company between Friday night and Wednesday morning, wrote after accepting the role that he’d asked why Altman was removed. “The board did not remove Sam over any specific disagreement on safety,” he wrote.
“Their reasoning was completely different from that.” He pledged to launch an investigation into the reasons for Altman’s dismissal. The vacuum has left space for rumors, including that Altman was devoting too much time to side projects or was too deferential to Microsoft.
It has also nurtured conspiracy theories, like the idea that OpenAI had created artificial general intelligence (AGI) and that the board had flipped the kill switch on the advice of chief scientist, cofounder, and board member Ilya Sutskever. “What I know with certainty is we don’t have AGI,” says David Shrier, a professor of practice in AI and innovation at Imperial College Business School in London. “I know with certainty there was a colossal failure of governance.”
Shrier, who has sat on several tech company boards, says that failure isn’t just because of the obvious tension between the board’s nonprofit mission and the commercial desires of the executives and investors involved in the for-profit unit of OpenAI. It’s also a function of the company’s rapid growth in size and influence, reflective of the AI industry’s growing clout. “ChatGPT took six weeks to go from zero to 100 million users.
The world wide web took seven years to get to that kind of scale,” he says. “Seven years is enough time for the human brain to catch up with technology. Six weeks, that’s barely enough time to schedule a board meeting.”
Despite the board’s supreme power on paper, the complexity and scale of OpenAI’s operations “clearly outstripped” the directors’ ability to oversee the company, Shrier says. He considers that alarming, given the real and immediate need to get a handle on the risks of AI technology. Ventures like OpenAI “are certainly not science projects.
They’re no longer even just software companies,” he says. “These are global enterprises that have a significant impact on how we think, how we vote, how we run our companies, how we interact with each other. And as such, you need a mature and robust governance mechanism in place.”
Regulators around the world will be watching what happens next at OpenAI carefully. As Altman negotiated to return to OpenAI on Tuesday, the US Federal Trade Commission voted to give staff at the regulator powers to investigate companies selling AI-powered services, allowing them to legally compel documents, testimony, and other evidence. The company’s boardroom drama also unfolded at a pivotal point in negotiations over the European Union’s landmark AI Act—a piece of legislation that could set the tone for regulations around the world.
Bruised by previous failures to mitigate the social impacts of technology platforms, the EU has increasingly taken a more muscular approach to regulating Big Tech. However, EU officials and member states have disagreed over whether to come down hard on AI companies or to allow a degree of self-regulation. One of the main sticking points in the EU negotiations is whether makers of so-called foundation models, like OpenAI’s GPT-4, should be regulated or whether legislation should focus on the applications that foundation models are used to create.
The argument for singling out foundation models is that, as AI systems with many different capabilities, they will come to underpin many different applications built on top of them, in the way that GPT-4 powers OpenAI’s chatbot ChatGPT. This week, France, Germany, and Italy said they supported “mandatory self-regulation through codes of conduct” for foundation models, according to a joint paper first reported by Reuters—effectively suggesting that OpenAI and others can be trusted to keep their own technology in check. France and Germany are home to two of Europe’s leading foundation model makers, Mistral and Aleph Alpha.
On X, Mistral CEO Arthur Mensch came out in favor of the idea that he could grade his own homework. “We don’t regulate the C language [a type of programming language] because one can use it to develop malware,” he said. But for supporters of a more robust regulatory regime for AI, the past few days’ events show that self-regulation is insufficient to protect society.
“What happened with this drama around Sam Altman shows us we cannot rely on visionary CEOs or ambassadors of these companies, but instead, we need to have regulation,” says Brando Benifei, one of two European Parliament lawmakers leading negotiations on the new rules. “These events show us there is unreliability and unpredictability in the governance of these enterprises.” The high-profile failure of OpenAI’s governance structure is likely to amplify calls for stronger public oversight.
“Governments are the only ones who can say no to investors,” says Nicolas Moës, director of European AI Governance at the Future Society, a Brussels-based think tank. Rumman Chowdhury, founder of the nonprofit Humane Intelligence and former head of Twitter’s ethical AI team, says OpenAI’s crisis and reset should be a wake-up call. The events demonstrate that the notion of ethical capitalism—corporate structures that bind nonprofit and for-profit entities together—won’t work; government action is needed.
“In a way, I’m glad it happened,” Chowdhury said of Altman’s departure and reinstatement. Among those more pessimistic about the risks of AI, the Altman drama prompted mixed reactions. By bringing existential risk to the forefront of international conversations, from the podium of a multibillion-dollar tech company, OpenAI’s CEO had propelled relatively fringe ideas popular among a certain slice of effective altruists into the mainstream.
But people within the community that first incubated those notions weren’t blind to the inconsistency of Altman’s position, even as he boosted their fortunes. Altman’s strategy of raising billions of dollars and partnering with a tech giant to pursue ever more advanced AI while also admitting that he didn’t fully understand where it might lead was hard to align with his professed fears of extinction-level events. The three independent board members who reportedly led the decision to remove Altman all had connections to effective altruism (EA), and their vilification by some of Altman’s supporters—including major power brokers in Silicon Valley—sits uneasily even with members of the EA community who previously professed support for Altman.
Altman’s emergence as the public face of AI doomerism also annoyed many who are more concerned with the immediate risks posed by accessible, powerful AI than by science fiction scenarios. Altman repeatedly asked governments to regulate him and his company for the good of humankind: “My worst fear is that we—the field, the technology, the industry—cause significant harm to the world,” Altman told a Congressional hearing in May, saying he wanted to work with governments to prevent that. “I think the whole idea of talking about, ‘Please regulate us, because if you don’t regulate that we will destroy the world and humanity’ is total BS,” says Rayid Ghani, a distinguished career professor at Carnegie Mellon University who researches AI and public policy.
“I think it’s totally distracting from the real risks that are happening now around job displacement, around discrimination, around transparency and accountability.” While Altman was ultimately restored, OpenAI and other leading AI startups look a little different as the dust settles after the five-day drama. ChatGPT’s maker and rivals working on chatbots or image generators feel less like utopian projects striving for a better future and more like conventional ventures primarily motivated to generate returns on the capital of their investors.
AI turns out to be much like other areas of business and technology, a field where everything happens in the gravitational field of Big Tech, which has the compute power, capital, and market share to dominate. OpenAI described the new board makeup announced yesterday as temporary and is expected to add more names to the currently all-male roster. The final shape of the board overseeing Altman is likely to be heavier on tech and lighter on doom, and analysts predict that both the board and the company are likely to cleave closer to Microsoft, which has pledged $13 billion to OpenAI.
Microsoft CEO Satya Nadella expressed frustration in media interviews on Monday that it was possible for the board to spring surprises on him. “I’ll be very, very clear: We’re never going to get back into a situation where we get surprised like this, ever again,” he said on a joint episode of the Pivot and On with Kara Swisher podcasts. “That’s done.”
Although Altman has portrayed his restoration as a return to business as before, OpenAI is now expected to perform more directly as Microsoft’s avatar in its battle with Google and other giants. Meta and Amazon have also increased their investments in AI, and Amazon has committed a $1.25 billion investment to Anthropic, started by former OpenAI staff in 2021.
“And so now, it’s not just a race between these AI labs, where the people who founded them, I think, genuinely care about the historic significance of what they could be doing,” Ord says. “It’s also now a race between some of the biggest companies in the world, and that’s changed the character of it. I think that that aspect is quite dangerous.”
Additional reporting by Khari Johnson.