
The Lesson Of OpenAI Is That Governance Matters

Corporate governance balances profits and ethics in AI companies. Too many people are learning the wrong lesson from the drama at OpenAI. Conventional wisdom seems to have coalesced around the idea that OpenAI’s aborted attempt to fire Sam Altman – ultimately thwarted by a revolt of OpenAI’s investors and employees – was caused by its board having the wrong type of board members.

In other words, the board members were unqualified and unprepared to lead a major tech company, leading them to make poor decisions. Whether or not that accurately describes the (former) board members or their decision-making, this analysis fundamentally misunderstands what led to OpenAI’s standoff. In reality, OpenAI’s conniptions provide a case study in the results of poorly designed corporate governance.

OpenAI’s board and governance structure is misaligned with its mission, and replacing board members alone won’t solve that problem. In this article, Seth Berman explains how his work with Anthropic, a competitor to OpenAI, produced a more resilient governance structure that balances these competing pressures.

OpenAI sought to balance its profit motive with an ethical imperative – making Artificial Intelligence safe. Though OpenAI’s structure is unusual (a for-profit subsidiary owned and wholly controlled by a not-for-profit entity), OpenAI’s founders were not alone in thinking that AI developers must be concerned with the potential dangers of AI. The belief that AI could be destructive is extremely common in the AI industry – a recent survey found that industry insiders predicted there is a 10% chance that Artificial General Intelligence could destroy humanity.

Thus, a company that seeks to create AI to benefit humanity while mitigating its risks is a powerful concept, attractive both for avoiding a moral harm (the potential destruction of humanity) and as a positive mission toward which its stakeholders can strive. Indeed, OpenAI is not the only company set up with these goals. Anthropic PBC, an OpenAI competitor, was founded with a very similar mission but a very different governance mechanism, and it demonstrates that mission-driven governance is possible if it is properly designed.

OpenAI’s governance structure granted formal power over its corporate affairs to the directors of its not-for-profit. These directors had one goal – to ensure the fulfillment of OpenAI’s mission. They had no fiduciary duty to OpenAI shareholders.

Shareholders had no vote and no warning before the board ousted Sam Altman as CEO, apparently believing that this was necessary to fulfill OpenAI’s mission. Investors and employees immediately revolted, and OpenAI went through days of chaos before the board caved and reinstated Mr. Altman.

Presumably the (now former) board members have learned a key lesson about power: formal power is not everything. Any leader – no matter what levers of power she thinks she holds – quickly loses power if no one follows. OpenAI’s employees refused to follow the board’s direction, and the board quickly lost its power.

OpenAI needed a governance structure designed to balance its two goals (safety and profits), but its governance was in fact structured to consider only one of them. Anthropic’s governance – unlike OpenAI’s – is designed to balance both goals. Anthropic was founded in 2021 by former employees of OpenAI who disagreed with OpenAI’s direction.

They founded Anthropic with a mission to “develop and maintain AI for the long-term benefit of humanity,” very similar to OpenAI’s stated mission. The key difference between the companies is that Anthropic sought to create a governance structure that supported the entirety of its goals, while OpenAI let its not-for-profit board maintain control even as its CEO pursued a vision that conflicted with the board’s conception of its mission. Unlike OpenAI, Anthropic is controlled by a corporate board of directors.

Some of these directors are selected by shareholders. One member is selected by a special entity, the Long-Term Benefit Trust, which is an independent body comprising five Trustees with backgrounds and expertise in AI safety, national security, public policy and social enterprise. At first, the Trust appoints only one board member.

Over time the number of board members the Trust appoints will increase to a majority. Even then, shareholders will appoint a minority of the board members. Most importantly, all the directors – even the ones appointed by the Trust – have a fiduciary duty to shareholders, and must consider both Anthropic’s mission and its profits.

This structure is intended to ensure that Anthropic responsibly balances the financial interests of shareholders with the interests of those affected by Anthropic’s conduct and its public benefit purpose. Unlike at OpenAI, it would not be possible for the Anthropic board of directors to remove its CEO without at least hearing the concerns of shareholders. Even when the Trust-appointed directors form a majority of the board, they will not have the unfettered authority to put mission over money the way OpenAI’s board did.

This is in part because, as corporate board members, they will have a fiduciary duty to shareholders that OpenAI’s not-for-profit board members did not, and in part because the Trust’s power to appoint board members is balanced by failsafe provisions that allow changes to the Trust and its powers if sufficiently large supermajorities of stockholders agree. This prevents a repeat of the OpenAI situation, in which the only recourse shareholders and employees had against what they perceived as a rogue board was to threaten to abandon the company. Designing Anthropic’s structure was a deliberative process that contemplated the different incentives and motivations of board members, and how to ensure that these reflected both the interests of investors and the company’s mission.

The team considered the interplay between formal corporate power and informal power (such as the risk of investors walking away or employees quitting en masse), and what might happen if the Trust’s vision and the vision of shareholders came into conflict. This allowed Anthropic to craft mechanisms to resolve potential conflicts. The result is a resilient governance structure that balances Anthropic’s goals – its public mission and its commercial success.

Both goals are necessary, not only because it would be unethical to create destructive but profitable AI, but also because the two goals working together ensure the ultimate success of the enterprise. After all, Anthropic’s mission-driven culture is part of what attracts the top talent and top investors that are the precursors to financial success. OpenAI’s chaos is certainly a lesson in novel corporate forms.

But the lesson is not that board members must only be drawn from the Silicon Valley investor class. The lesson is that mission-driven corporate governance has to be carefully crafted to ensure that it actually achieves all of a corporation’s goals.


From: forbes
URL: https://www.forbes.com/sites/tedladd/2023/12/15/the-lesson-of-openai-is-that-governance-matters/
