
[NEWS] Former Staff Member Testifies: Concerns Over Lack of Proper Safety Measures and Oversight in AGI Development

2024/09/24 | Editors of Iolite

Testimony Highlights the Risk of AGI Misuse

OpenAI's latest model, o1, is the first to demonstrate the ability to help experts plan to recreate known biological threats, a former staff member at the company told senators this week.

"OpenAI's new AI system is the first to demonstrate a response to biological weapons risks and can assist experts in developing plans to recreate known biological threats," former OpenAI technical staff member William Saunders said at the Senate Judiciary Committee's Privacy, Technology, and Law Subcommittee.

Saunders worked as a member of technical staff at OpenAI for three years. OpenAI is working to build artificial general intelligence (AGI) and has raised hundreds of billions of yen toward this goal, and AI companies are making rapid progress toward building AGI. OpenAI defines AGI as "highly autonomous systems that outperform humans at most economically valuable work." This means an AI system that can operate autonomously for long periods of time and perform most of the tasks that humans can.

AGI will bring about major changes to society, including fundamental changes to the economy and employment. Saunders said, "AGI could be built in as little as three years. There is a risk that such a system could autonomously carry out cyber attacks or help create new biological weapons, causing devastating damage."

Helen Toner, a former OpenAI board member who voted to remove co-founder and CEO Sam Altman, also expects AGI to be realized in the near future. She testified, "Even if the shortest-term predictions are wrong, the idea that we will develop human-level AI in the next 10 or 20 years should be viewed as a realistic possibility, and we need to start taking significant steps now to prepare for it."

Saunders also expressed concern about the lack of proper safety measures and oversight in AGI development. He pointed out, "No one knows how to prove or ensure that AGI systems are safe and controlled."

He criticized OpenAI's approach to safe AI development as prioritizing profitability over safety. Saunders said, "OpenAI has pioneered aspects of this testing, but it has repeatedly prioritized deployment over rigor. I believe there is a real risk that important dangerous capabilities will be missed in future AI systems."

The testimony also revealed internal OpenAI challenges that emerged after Altman's firing.


"OpenAI's Superintelligence team, which was tasked with developing an approach to control AGI, no longer exists. The team's leader and many key researchers have resigned after struggling to obtain the necessary resources," he said.

 

Saunders' testimony added to the complaints and warnings that AI safety experts have raised about OpenAI's approach.

Ilya Sutskever, a co-founder of OpenAI who was involved in Altman's firing, resigned after the launch of GPT-4o and founded Safe Superintelligence Inc.

Autonomous AI could destroy humanity

OpenAI co-founder John Schulman and Superalignment co-lead Jan Leike also resigned to join rival company Anthropic.

Leike criticized Altman's leadership, saying that safety culture and processes had "taken a backseat to shiny products."

Former OpenAI board members Toner and Tasha McCauley wrote in an article in The Economist that Altman prioritized profits over responsible AI development, concealed important developments from the board, and fostered a toxic environment within the company.

In his statement, Saunders called for clear safety measures in AI development, enforced not only by the companies themselves but also by independent oversight bodies, and urged Congress to take swift regulatory action. He also called for the introduction of whistleblower protections in the technology industry. Saunders concluded with the warning that "loss of control over autonomous AI systems could lead to the destruction of humanity."

Reference: William Saunders Written Testimony

Image: Shutterstock

Related articles:

OpenAI announces new AI "GPT-4o" with twice the processing speed

WorldCoin development company seeks partnership with OpenAI and PayPal

