# AI Systems With 'Unacceptable Risk' Are Now Banned In the EU
2025-02-04 13:22:01


AI systems deemed to pose "unacceptable risk" or harm can now be banned in the European Union. The prohibited uses include social scoring, deceptive manipulation, exploiting personal vulnerabilities, predictive policing based on appearance, biometric-based profiling, real-time biometric surveillance, emotion inference in workplaces or schools, and unauthorized expansion of facial recognition databases. TechCrunch reports: Under the bloc's approach, there are four broad risk levels: (1) minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will face light-touch regulatory oversight; (3) high risk -- AI for healthcare recommendations is one example -- will face heavy regulatory oversight; and (4) unacceptable risk applications -- the focus of this month's compliance requirements -- will be prohibited entirely.
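
For a rough sense of how that four-tier structure reads in practice, here is a minimal sketch mapping each tier to the oversight level described above. The tier names and example systems follow the summary; the `oversight_for` helper is purely illustrative and not anything defined by the Act.

```python
# Illustrative only: the four risk tiers described above, mapped to the
# level of regulatory oversight each one attracts under the EU AI Act.
RISK_TIERS = {
    "minimal": "no regulatory oversight",           # e.g., email spam filters
    "limited": "light-touch regulatory oversight",  # e.g., customer service chatbots
    "high": "heavy regulatory oversight",           # e.g., AI for healthcare recommendations
    "unacceptable": "prohibited entirely",          # e.g., social scoring, real-time biometric surveillance
}

def oversight_for(tier: str) -> str:
    """Return the oversight level for a given risk tier (hypothetical helper)."""
    return RISK_TIERS[tier.lower()]

if __name__ == "__main__":
    for tier, oversight in RISK_TIERS.items():
        print(f"{tier:>12}: {oversight}")
```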

Companies found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered. They could be on the hook for up to €35 million (~$36 million) or 7% of their annual revenue from the prior fiscal year, whichever is greater. The fines won't kick in for some time, noted Rob Sumroy, head of technology at the British law firm Slaughter and May, in an interview with TechCrunch. "Organizations are expected to be fully compliant by February 2, but ... the next big deadline that companies need to be aware of is in August," Sumroy said. "By then, we'll know who the competent authorities are, and the fines and enforcement provisions will take effect."
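
As a back-of-the-envelope illustration of that "whichever is greater" penalty ceiling, the sketch below computes the cap from a company's prior-year revenue. It assumes the ~$36 million figure corresponds to the Act's €35 million flat cap; the function name and the sample revenue figure are hypothetical.

```python
# Sketch of the penalty ceiling described above: a flat cap of EUR 35 million
# versus 7% of prior-year annual revenue, whichever is greater.
FLAT_CAP_EUR = 35_000_000
REVENUE_SHARE = 0.07

def max_fine_eur(prior_year_revenue_eur: float) -> float:
    """Upper bound on the fine for a prohibited AI practice (illustrative)."""
    return max(FLAT_CAP_EUR, REVENUE_SHARE * prior_year_revenue_eur)

if __name__ == "__main__":
    # A hypothetical company with EUR 2 billion in prior-year revenue:
    # 7% of 2e9 = EUR 140 million, which exceeds the flat cap.
    print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```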

[Read more of this story](https://slashdot.org/story/25/02/04/0124248/ai-systems-with-unacceptable-risk-are-now-banned-in-the-eu?utm_source=atom1.0moreanon&utm_medium=feed) at Slashdot.