OpenAI Announces Creation of Independent Safety Oversight Board
OpenAI, the company behind ChatGPT, announced on Monday that it is transforming its internal safety committee into an “independent” oversight board. This move comes amid growing concerns about the safety and security of advanced AI technologies.
The newly formed Safety and Security Committee will be chaired by Carnegie Mellon professor Zico Kolter. Initially unveiled in May, the committee originally included OpenAI CEO Sam Altman but will now operate with “independent governance,” according to the company’s blog post.
OpenAI has faced criticism over its handling of safety issues. In June, a group of current and former employees published an open letter warning about “the serious risks posed by these technologies.” The company has also seen high-profile resignations, including that of co-founder Ilya Sutskever, who cited safety concerns as the reason for his departure.
In July, five U.S. senators sent a letter to Altman raising questions about how the company is addressing safety risks. OpenAI’s announcement of an independent oversight board is seen as a response to these concerns.
The company stated that the committee will be briefed on all new AI models and will have the authority to delay the release of any model it deems unsafe. OpenAI added that the committee had already reviewed its latest model, code-named “Strawberry,” and classified it as having a “medium risk.”
“As part of its work, the Safety and Security Committee will continue to receive regular reports on technical assessments for current and future models, as well as ongoing post-release monitoring,” OpenAI said in its blog post. “We are building upon our model launch processes to establish an integrated safety and security framework with clearly defined success criteria for model launches.”
In addition to Kolter, other members of the oversight board include Quora CEO Adam D’Angelo, retired U.S. Army General and former NSA chief Paul Nakasone, and former Sony general counsel Nicole Seligman.
The establishment of the independent oversight board marks a significant step toward addressing safety concerns as OpenAI continues to develop more advanced AI technologies. The board’s role in monitoring and evaluating the company’s models is expected to improve transparency and accountability in AI development.