OpenAI Unveils Comprehensive AI Safety Framework That Gives Its Board the Power to Reverse Decisions
Since ChatGPT’s launch a year ago, the potential dangers of AI have been top of mind for both AI researchers and the general public. Generative AI technology has dazzled users with its ability to write poetry and essays, but it has also sparked safety concerns over its potential to spread disinformation and manipulate humans.
In April, a group of AI industry leaders and experts signed an open letter calling for a six-month pause in developing systems more powerful than OpenAI’s GPT-4, citing potential risks to society. A May Reuters/Ipsos poll found that more than two-thirds of Americans are concerned about the possible negative effects of AI, and 61 percent believe it could threaten civilization.
Earlier this month, OpenAI delayed the launch of its custom GPT store until early 2024. The company had introduced custom GPTs and the store at its first developer conference in November and is reportedly continuing to “make improvements” to GPTs based on customer feedback.
OpenAI was recently thrown into upheaval when its board dismissed CEO Sam Altman. In a surprising turn of events, Altman returned to the company just days after his removal, alongside a restructuring of the board itself.