European Union legislators are at a crossroads over the regulation of advanced AI technologies such as generative AI models. Talks have hit a snag as the EU seeks to finalize its comprehensive AI Act, with deep divisions emerging over how these powerful AI systems should be overseen.
The AI Act, which sailed through the European Parliament in June after two years of deliberation, is now facing its greatest challenge. Negotiators are scrambling to align their views, particularly on the regulation of "foundation models," a term for large AI systems such as those developed by Microsoft-backed OpenAI. These models are trained on vast troves of data and can adapt as they are exposed to new information.
Friday's discussions are crucial as representatives of EU member states and the European Parliament convene to hammer out a common position. The stakes are high: if consensus is not reached before the impending European parliamentary elections, the act could be derailed.
A contentious point is how to regulate foundation models with more than 45 million users. Some negotiators argue for a tiered approach that reserves the strictest rules for the largest systems, while others warn that even smaller AI models can pose serious risks.
Yet the EU's unity is being tested as France, Germany, and Italy push for a lighter touch, advocating that AI developers self-regulate, a stance agreed at a meeting of economic ministers in Rome on October 30. The proposal has disrupted what had been a relatively smooth negotiation, leaving the future of the AI Act hanging in the balance.