
Fortune - News
New ‘OpenAI Files’ shed light on deep leadership concerns about Sam Altman and safety failures within the AI lab
A new report called the “OpenAI Files” has tracked issues with governance, leadership, and safety culture at the influential AI lab. Compiled by two nonprofit watchdogs, the report draws on legal documents, media coverage, and insider accounts to question the company’s commitment to safe AI development. As OpenAI pivots toward a more profit-driven model, the report calls for reforms to ensure ethical leadership and public accountability.

A new report, dubbed the “OpenAI Files,” aims to shed light on the inner workings of the leading AI company as it races to develop AI models that may one day rival human intelligence. The files, which draw on a range of data and sources, raise questions about some of the company’s leadership as well as OpenAI’s overall commitment to AI safety.

The lengthy report, which is billed as the “most comprehensive collection to date of documented concerns with governance practices, leadership integrity, and organizational culture at OpenAI,” was put together by two nonprofit tech watchdogs, the Midas Project and the Tech Oversight Project.

It draws on sources such as legal complaints, social media posts, media reports, and open letters to try to assemble an overarching view of OpenAI and the people leading the lab. Much of the information in the report has already been shared by media outlets over the years, but the compilation of information in this way aims to raise awareness and propose a path forward for OpenAI that refocuses on responsible governance and ethical leadership.

Much of the report focuses on the leaders behind the scenes at OpenAI, particularly CEO Sam Altman, who has become a polarizing figure within the industry. Altman was famously removed from his role as chief of OpenAI in November 2023 by the company’s non-profit board. He was reinstated after a chaotic week that included a mass employee revolt and a brief stint at Microsoft.

The initial firing was attributed to concerns about his leadership and communication with the board, particularly regarding AI safety. Since then, it has been reported that several executives at the time, including Mira Murati and Ilya Sutskever, raised questions about Altman’s suitability for the role.

According to an Atlantic article by Karen Hao, Murati, then the company’s chief technology officer, told staffers in 2023 that she didn’t feel “comfortable about Sam leading us to AGI,” while Sutskever said: “I don’t think Sam is the guy who should have the finger on the button for AGI.”

Dario and Daniela Amodei, former VP of Research and VP of Safety and Policy at OpenAI, respectively, also criticized the company and Altman after leaving OpenAI in 2020. According to Karen Hao’s Empire of AI, the pair described Altman’s tactics as “gaslighting” and “psychological abuse” to those around them. Dario Amodei went on to co-found rival AI lab Anthropic, where he serves as CEO.

Others, including prominent AI researcher Jan Leike, former co-lead of OpenAI’s superalignment team, have critiqued the company more publicly. When Leike departed for Anthropic in 2024, he accused the company of letting safety culture and processes “take a backseat to shiny products” in a post on X.

OpenAI at a crossroads

The report comes as the AI lab finds itself at something of a crossroads. The company has been trying to shift away from its original capped-profit structure to lean into its for-profit aims.

OpenAI is currently controlled entirely by its non-profit board, which is answerable only to the company’s founding mission: ensuring that AI benefits all of humanity. This has led to several conflicts of interest between the for-profit arm of the company and the non-profit board as the company tries to commercialize its products.

The original plan to resolve this, spinning out OpenAI into an independent, for-profit company, was scrapped in May and replaced with a new approach that will turn OpenAI’s for-profit group into a public benefit corporation controlled by the non-profit.

The “OpenAI Files” aim to raise awareness about what is happening behind the scenes at one of the most powerful tech companies, but also to propose a path forward for OpenAI that focuses on responsible governance and ethical leadership as the company seeks to develop AGI.

The report said: “OpenAI believes that humanity is, perhaps, only a handful of years away from developing technologies that could automate most human labor.”

“The governance structures and leadership integrity guiding a project as important as this must reflect the magnitude and severity of the mission. The companies leading the race to AGI must be held to, and must hold themselves to, exceptionally high standards. OpenAI could one day meet those standards, but serious changes would need to be made.”

Representatives for OpenAI did not respond to a request for comment from Fortune.

This story was originally featured on Fortune.com
