Leading AI experts have issued an urgent call for governments and companies to shift focus and resources towards AI safety. The appeal, backed by notable figures including Turing Award winners and a Nobel laureate, calls for dedicating at least one-third of AI research funding to ethical and safety considerations.
“AI is outpacing its safety measures,” stated Yoshua Bengio, an AI pioneer. The experts advocate immediate action to oversee AI’s rapid growth, a stance supported by key figures such as Geoffrey Hinton and Daniel Kahneman.
Quick Facts:
- One-third of AI funding should go to safety and ethics.
- Companies must be accountable for AI-induced harms.
- Swift regulatory action is needed.
The experts insist that corporations be held legally accountable if their AI systems cause foreseeable harm. This comes at a time when AI-specific regulation is practically non-existent; even the EU is still fine-tuning its first legislative package on the matter.
Contrary to corporate claims that more rules would raise compliance costs and risk, the experts argue that a lack of regulation is the real danger. In their view, treating oversight as a hindrance to innovation is a hazardous stance, given AI’s potentially massive impact on society.