The EU AI Act seems designed to allow AI only for routine tasks while hindering its use in high-level problem-solving. This will endanger European AI startups and significantly damage EU productivity. THREAD on our post today in Silicon Continent. 1/9
An AI bank teller in the EU would need two humans to oversee it. A startup building an AI tutor faces countless hurdles before launching. This is the reality under the EU AI Act: a well-meaning but flawed attempt to regulate AI. 2/ https://t.co/zY8zFlRB7j
The Act classifies AI systems by risk: unacceptable, high, limited, and minimal. Unacceptable systems, like social scoring or workplace emotion recognition, are banned. Fines can reach €15 million or 3% of global revenue. 3/
High-risk AI, which includes uses such as education, employment, law enforcement, and more, faces strict regulations. Before releasing an AI tutor, a startup must build risk management systems, document everything, get certified, and more. 4/
The Act also targets "General Purpose AI Models" like large language models, regulating them by capability, not use. Models trained with over 10²⁵ FLOPs are deemed "systemic risks" and face extra restrictions. 5/
These models must detail their training data, allow copyright holders to opt out, run risk assessments, and monitor their entire lifecycle. If used in high-risk areas, all other heavy regulations apply too. 6/
Enforcement is fragmented. Each of the 27 EU countries will have multiple bodies overseeing compliance, leading to inconsistency and confusion. Staffing these bodies with AI experts is a significant challenge. 7/
Startups will struggle to navigate this maze, giving an edge to big firms that can afford compliance. Innovation may stall, and Europe risks falling behind in AI development and application. 8/
Europe should rethink the AI Act before it fully takes effect, instead of stifling innovation with heavy rules. The stakes are too high to get this wrong. Read the full post by @pietergaricano. 9/9 https://t.co/zY8zFlRB7j