European Union introduces AI Act, landmark legislation to regulate AI

KEY HIGHLIGHTS
  • Landmark Legislation: EU policymakers endorse groundbreaking “AI Act,” a global first, marking a significant step towards comprehensive AI regulation.
  • 38-Hour Negotiation: Approval secured after an extensive 38-hour negotiation session, highlighting the complexity and importance of the legislative process.
  • Emphasis on Rights and Safety: EU Chief Ursula von der Leyen praises the AI Act for prioritizing the safety and fundamental rights of individuals and businesses, delivering on political commitments.
  • OpenAI’s Impact: Momentum for the AI Act grew after OpenAI’s ChatGPT raised public awareness of AI, underscoring the societal impact of rapidly evolving technologies.
  • Global Standard: The law addresses AI’s potential benefits and risks globally, encompassing issues like disinformation, job displacement, and copyright infringement.
  • Stringent Obligations for Tech Companies: Tech firms in the EU must disclose AI training data, conduct product testing, and face fines up to 7% of global revenue for violations.
The AI Act must gain approval from the European Parliament in France to become law. (File: Vincent Kessler/Reuters)

European Union policymakers have reached a significant milestone by endorsing groundbreaking legislation to regulate artificial intelligence (AI), a major step towards comprehensive standards for overseeing the impactful technology.

The approval for the “AI Act” was secured on Friday after a marathon 38-hour negotiation session between lawmakers and policymakers.

Ursula von der Leyen Hails Global-First AI Act

Ursula von der Leyen, the EU chief, lauded the AI Act as a “global first” that prioritizes safeguarding the rights of individuals and businesses. She said, “The AI Act is a global first. A unique legal framework for the development of AI you can trust. And for the safety and fundamental rights of people and businesses. A commitment we took in our political guidelines – and we delivered. I welcome today’s political agreement.”

The push to enact the “AI Act,” initially proposed by the EU’s executive arm in 2021, gained momentum after OpenAI’s ChatGPT thrust the rapidly evolving field of AI into the public spotlight last year.

Regarded as a global standard, the law addresses the ambitions of governments seeking to harness AI’s potential benefits while mitigating risks spanning disinformation, job displacement, and copyright infringement.

The legislative process, which faced delays due to disagreements over regulating language models that scrape online data and the use of AI by law enforcement and intelligence services, will now proceed to member states and the EU parliament for approval.

Stringent Obligations for Tech Companies

According to the law’s provisions, tech companies operating in the EU will be obligated to disclose data used in training AI systems and conduct thorough testing of products, particularly those applied in high-risk sectors like self-driving vehicles and healthcare.

The legislation prohibits the indiscriminate scraping of images from the internet or security footage to establish facial recognition databases. However, it includes exceptions for the use of “real-time” facial recognition by law enforcement agencies investigating terrorism and serious crimes.

Violations by tech firms could result in fines of up to seven percent of global revenue, depending on the violation and the company’s size. The law also sets out restrictions on the use of facial recognition software by police and governments, with non-compliance subject to the same penalties.

While the law awaits final approval, the political agreement signifies the establishment of its key principles. European policymakers emphasized the regulation of AI in high-risk areas, including law enforcement and critical services like water and energy. Transparency requirements would be imposed on creators of large AI systems, such as those powering the ChatGPT chatbot. Chatbots and software generating manipulated images (“deepfakes”) must explicitly indicate AI generation, as outlined in earlier drafts of the law.

Despite being lauded as a regulatory breakthrough, concerns linger about the law’s effectiveness. Several aspects are not expected to take effect for 12 to 24 months, posing a considerable time lag in the rapidly evolving field of AI. Until the final moments of negotiation, discussions centered on language nuances and the delicate balance between fostering innovation and safeguarding against potential harm.

The agreement reached in Brussels, following three days of negotiations, remains subject to final technical details. Parliament and the European Council, comprising representatives from the 27 union countries, will hold votes for the law’s conclusive passage.

Read Original Article on Al Jazeera

The information above is curated from reliable sources, modified for clarity. Slash Insider is not responsible for its completeness or accuracy. Please refer to the original source for the full article. Views expressed are solely those of the original authors and not necessarily of Slash Insider. We strive to deliver reliable articles but encourage readers to verify details independently.