The Safety of AI: A Broken Promise?
The artificial intelligence powerhouse OpenAI promised the White House it would rigorously safety-test new versions of its groundbreaking technology to prevent harm. This spring, however, some members of OpenAI’s safety team felt pressured to rush through a new testing protocol to meet a May launch date, raising questions about whether the company was putting commercial interests ahead of public safety.
The previously unreported incident exposes the limits of President Biden’s strategy for heading off AI harms. OpenAI CEO Sam Altman has been accused of prioritizing commercial interests over public safety, a stark departure from the company’s roots as an altruistic nonprofit.
“We basically failed at the process.”
The incident also raises questions about the federal government’s reliance on self-policing by tech companies to protect the public from abuses of generative AI, which executives say has the potential to remake virtually every aspect of human society.
Andrew Strait, a former ethics and policy researcher at Google DeepMind who is now associate director at the Ada Lovelace Institute in London, said allowing companies to set their own safety standards is inherently risky.
Before testing on the model, GPT-4 Omni, had even begun, company leaders invited employees to celebrate the product, which would power ChatGPT, with a party at one of OpenAI’s San Francisco offices.
The episode underscores the changing culture at OpenAI, where critics say leadership has increasingly put speed to market ahead of the company’s founding safety mission.
The Consequences of AI Harm
The potential consequences of AI harm are far-reaching. Generative AI is poised to reshape domains from work to war, and without effective regulation and rigorous testing, its failures could prove catastrophic.
A Call to Action
The incident serves as a wake-up call for the federal government to re-examine its approach to AI regulation. It is imperative that the government take a more proactive role in ensuring that tech companies prioritize public safety over commercial interests.