The world’s first artificial intelligence (AI) legislation went into effect in the EU. The AI Act, as it’s known, will regulate how companies develop and use the technology.
The AI Act covers all AI providers, deployers, importers, distributors, and product manufacturers operating within the EU, as well as those outside the region whose systems' output is intended for use in the EU. The EU Parliament notes that its priorities include establishing a "technology-neutral, uniform definition for AI that could be applied to future AI systems."

As European Commission President Ursula von der Leyen put it, "AI is a general technology that is accessible, powerful and adaptable for a vast range of uses, both civilian and military. And it is moving faster than even its developers anticipated. So, we have a narrowing window of opportunity to guide this technology responsibly." The rules will be governed by the European Commission's AI Office. Member states have until August 2025 to establish the bodies that will enforce the law in their countries.
Meanwhile, companies that already have a commercially available product, such as ChatGPT, will have a 36-month grace period to come into compliance. A company that fails to comply with the new rules could face fines of up to $41 million or 7% of its global revenue.
TYPES OF TECHNOLOGIES UNDER AI ACT
Prohibited AI systems are banned outright. The category covers AI that tries to predict whether a person might commit a crime based on their characteristics, or that scrapes the internet to bolster facial recognition systems.
High-risk AI systems carry the heaviest regulatory burden. The category includes AI used in critical infrastructure such as electrical grids, systems that make employment decisions, and self-driving vehicles. Companies building these systems will have to disclose their training datasets and demonstrate human oversight.
Minimal-risk systems, also known as "general-purpose AI," make up the largest chunk and include generative AI such as OpenAI's ChatGPT and Google's Gemini. Their providers will need to ensure that their models adhere to EU copyright rules and take proper cybersecurity precautions to protect users.
No-risk systems include any AI use that doesn't fall into the other three categories.
CRITICISM & APPROVAL
The law faces criticism that it could discourage innovation before it even happens. Many experts believe that legislation is not a stand-alone solution, and that a one-size-fits-all approach is not the right one. There is also a risk that by 2026, when the Act's provisions come into effect, the legislation will already be outdated given how rapidly the technology is evolving. However, all agree that regulation is necessary to ensure the ethical and responsible use of AI. Implementing an adaptive regulatory system, in close collaboration with AI companies, will be key to success.