
“The problem is Sam Altman”: OpenAI Insiders don’t trust CEO - Ars Technica


Overview

“The problem is Sam Altman”: OpenAI Insiders don’t trust CEO

OpenAI brainstorms ways AI can benefit humanity in effort to counter bad vibes.

Details

On the same day that OpenAI released policy recommendations to ensure that AI benefits humanity if superintelligence is ever achieved, The New Yorker dropped a massive investigation into whether CEO Sam Altman can be trusted to actually follow through on OpenAI’s biggest promises.

Parsing the publications side by side can be disorienting.

On the one hand, OpenAI said it plans to push for policies to “keep people first” as AI starts “outperforming the smartest humans even when they are assisted by AI.” To achieve this, the company vows to remain “clear-eyed” and transparent about risks, which it acknowledged includes monitoring for extreme scenarios like AI systems evading human control or governments deploying AI to undermine democracy. Without proper mitigation of such risks, “people will be harmed,” OpenAI warned, before describing how the company could be trusted to advocate for a future where achieving superintelligence means a “higher quality of life for all.”

On the other hand, The New Yorker interviewed more than 100 people familiar with how Altman conducts business. The publication also reviewed internal memos and interviewed Altman more than 12 times. The resulting story provides a lengthy counterpoint explaining why the public may struggle to trust OpenAI’s CEO to “control the future” of AI, no matter how rosy the company’s vision may appear.

Overall, insiders painted Altman as a people-pleaser who tells others what they want to hear while questing for power in an alleged bid to always put himself first. As one board member summed up Altman, he has “two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”

While The New Yorker found no “smoking gun,” its reporters reviewed messages from OpenAI’s former chief scientist, Ilya Sutskever, and former research head, Dario Amodei, that documented “an accumulation of alleged deceptions and manipulations.” Many of the incidents could be shrugged off individually, but when taken together, both men concluded that Altman was not fostering a safe environment for advanced AI, The New Yorker reported.

“The problem with OpenAI,” Amodei wrote, “is Sam himself.”

Altman either disputed claims in the story or else claimed to have forgotten about certain events. He also attributed some of his shifting narratives to the changing landscape of AI and admitted that he’s been conflict-avoidant in the past.

But his seeming contradictions are getting harder to ignore as scrutiny of OpenAI intensifies amid growing government reliance on its models and lawsuits labeling its tech as unsafe.

Perhaps most visibly to the public, Altman has recently shifted away from positioning OpenAI as a sort of savior blocking AI doomsday scenarios, instead adopting a “tone” of “ebullient optimism,” The New Yorker reported.

The policy recommendations echo this at times. Discussing the recommendations—which include experimenting with shorter workweeks and creating a public wealth fund to share AI profits—OpenAI’s chief global affairs officer, Chris Lehane, confirmed to The Wall Street Journal that the company is urgently concerned about negative public opinions about AI. While announcing their big ideas to spare humanity from AI dangers, OpenAI also promoted “a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas.”

However, The New Yorker’s report makes it easier to question whether the recommendations were rolled out to distract from mounting public fears about child safety, job displacement, or energy-guzzling data centers. One recent Harvard/MIT poll found that Americans’ biggest concern is that powering AI will hurt their quality of life, Axios reported. Ultimately, these concerns might sway votes for Democrats and Republicans ahead of the midterm elections, the WSJ noted, as data center moratoriums that could slow AI advancement are gaining traction.

For Altman and his company, getting the public to buy into their vision of AI at this critical juncture likely feels essential, since Republicans losing control of Congress could pave the way for stricter AI safety laws that, The New Yorker noted, Altman has privately lobbied against.

Without trust in Altman, it’s likely a much harder sell to convince the public that OpenAI isn’t simply saying whatever it will take to entrench its own dominance, The New Yorker suggested.

“We don’t have all, or even most of the answers,” OpenAI said. Instead, the company characterized its “industrial policy for the intelligence age” as “initial ideas for an industrial policy agenda to keep people first during the transition to superintelligence.”

Calling for “common-sense” regulations and a public-private partnership to quickly iterate on successes, OpenAI pitched “ambitious” policy ideas to ensure that everyone can access AI and profit from it. Its bushy-tailed vision acknowledged that it hopes to achieve what society never did: guarantee Internet access and ensure AI is “fairly deployed” across the US, with everyone trained to use it.

Worker protections are a focus of OpenAI’s plan. Recommendations included involving workers in discussions on how AI systems work to improve productivity and make workplaces safer, as well as on how to “set clear limits on harmful uses of AI.” OpenAI also suggested creating a tax on automated labor that could be used to fund core programs like Social Security, Medicaid, SNAP, and housing assistance as companies rely less on human labor. Among other enticing ideas was a plan to “incentivize employers and unions to run time-bound 32-hour/four-day workweek pilots with no loss in pay that hold output and service levels constant, then convert reclaimed hours into a permanent shorter week, bankable paid time off, or both.”

Additionally, OpenAI proposed a “public wealth fund” that “provides every citizen—including those not invested in financial markets—with a stake in AI-driven economic growth.”

“Returns from the Fund could be distributed directly to citizens, allowing more people to participate directly in the upside of AI-driven growth, regardless of their starting wealth or access to capital,” OpenAI said.

As AI takes on more tasks, humans can gravitate toward care-centric work, OpenAI suggested, recommending policy ideas to help displaced workers get training to work in health care, elderly care, daycare, or community service settings. To ensure people are attracted to those roles—historically undervalued as women’s work—OpenAI suggested initiatives to help society recognize that caregiving is “economically valuable work.”

Human workers will also be needed to use AI to accelerate scientific advancements, OpenAI said.

However, all these public benefits that OpenAI promises can only be realized if we build a “resilient society” that can quickly respond to risky implementations and “keep AI safe, governable, and aligned with democratic values,” the company said.

That aspect of OpenAI’s vision requires firms like OpenAI to develop safety systems, among other efforts, that will help improve public trust in AI. And, OpenAI seems to suggest, we should trust that those systems will work, interfering with these firms only when actual dangers are looming.

“As we progress toward superintelligence, there may come a point where a narrow set of highly capable models—particularly those that could materially advance chemical, biological, radiological, nuclear, or cyber risks—require stronger controls,” OpenAI said.

When that day arrives, OpenAI opined, there should be a global network in place to communicate emerging risks. However, only the firms with the most advanced models should be subjected to rigorous audits, so that smaller firms can still compete. That’s the path to ensuring no firm’s dominant position can be abused to unfairly shut down rivals or weaken democratic values, OpenAI said, while insisting that public input is vital to AI’s success.

Altman has previously persuaded “a tech-skeptical public that their priorities, even when mutually exclusive, are also his priorities,” The New Yorker reported. But for the public, which is already reporting alleged harms from OpenAI models, it might be getting harder to entertain lofty ideas from a company that is led by “the greatest pitchman of his generation,” The New Yorker reported.

One OpenAI researcher told The New Yorker that Altman’s promises can sometimes seem like a stopgap to overcome criticism until he reaches the next benchmark. When it comes to superintelligence, some optimistic experts think it could take two years, which is longer than Elon Musk stayed at OpenAI before famously criticizing Altman’s leadership and leaving to start his own AI firm.

Altman “sets up structures that, on paper, constrain him in the future,” the OpenAI researcher told The New Yorker. “But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was.”



Key Takeaways

  • “The problem is Sam Altman”: OpenAI Insiders don’t trust CEO

  • OpenAI brainstorms ways AI can benefit humanity in effort to counter bad vibes

  • On the same day that OpenAI released policy recommendations to ensure that AI benefits humanity if superintelligence is ever achieved, The New Yorker dropped a massive investigation into whether CEO Sam Altman can be trusted to actually follow through on OpenAI’s biggest promises

  • Parsing the publications side by side can be disorienting

  • On the one hand, OpenAI said it plans to push for policies to “keep people first” as AI starts “outperforming the smartest humans even when they are assisted by AI”
