Will AI Risks Lead to “Human Extinction”?
A group of current and former employees at major Silicon Valley companies involved in AI development has warned that these technologies could pose a threat of “human extinction.”
The warning came in an open letter signed by 13 current and former employees, mostly of companies such as OpenAI, Anthropic, and DeepMind (a subsidiary of Google).
The letter emphasized that senior AI researchers need stronger protections so they can publicly criticize new developments and directions in AI innovation.
The signatories said they believe AI could offer humanity unprecedented benefits. At the same time, they acknowledged the significant risks these technologies pose, especially as they gain powerful new capabilities, such as holding direct voice conversations with people and working with visual information, including generating images and video.
Those risks range from deepening inequality and enabling manipulation to the loss of control over autonomous AI systems, an outcome that could ultimately lead to human extinction.
Avoiding Effective Oversight
The letter stated that companies developing powerful AI technologies, including general AI (a theoretical system as intelligent as, or more intelligent than, humans), have strong financial incentives to avoid effective oversight by their employees, management, and the public.
They argued that any lab aiming to develop general AI should demonstrate that it deserves public trust and should grant employees a protected right to blow the whistle.
The letter also called on companies to refrain from punishing or silencing current or former employees who speak publicly about AI risks.
Response from OpenAI
A spokesperson for OpenAI agreed that rigorous debate about AI is necessary given the importance of this technology.
The spokesperson said the company would continue to engage with governments, civil society, and other communities around the world.
They explained that OpenAI is taking several steps to ensure that employees are heard and that its products are developed responsibly, including an anonymous hotline for workers and a safety and security committee that reviews the company’s work.
OpenAI also highlighted its support for increased AI regulation and its voluntary commitments to developing the technology safely.
Notably, several senior researchers have left the company in recent months, including co-founder Ilya Sutskever. Their departures followed a major boardroom dispute last year in which CEO Sam Altman was briefly removed before being reinstated less than a week later.