Tech Firms Request Extension on EU AI Act Compliance
A coalition of tech organizations — DOT Europe, The Software Alliance (BSA), AmChamEU, and the Computer & Communications Industry Association (CCIA Europe) — has jointly urged the European Commission to grant more time to meet the requirements of the newly implemented AI Act.
In their letter, the companies emphasized the difficulties posed by the current summer recess, which they said has slowed their compliance efforts. The consultation period for the AI Act began on July 30, 2024, and is scheduled to run until September 10, 2024.
During this window, industry stakeholders can provide feedback on the proposed regulations, particularly those related to General Purpose Artificial Intelligence (GPAI). However, the organizations expressed concern that the timeline is too short to properly analyze the proposals, given the complexity and significance of the AI regulations.
The tech companies therefore proposed extending the deadline by at least two weeks to ensure the feedback they provide is thorough and constructive. They stressed the importance of prioritizing quality over speed, since the regulations will have far-reaching implications for the AI industry in Europe.
Impact of the Regulations on the AI Industry
The AI Act, which took effect on August 1, 2024, is a landmark piece of legislation aimed at regulating AI development and deployment within the European Union. The act categorizes AI systems based on the level of risk they pose to society; such risks determine the AI’s regulatory requirements.
Among the first regulations set to be enforced are prohibitions on certain AI applications, such as those exploiting individual vulnerabilities or engaging in non-consensual scraping of facial images from the internet or CCTV footage. The first phase of these prohibitions will take effect in February 2025, with additional requirements for GPAI models to be implemented by August 2025.
In addition, GPAI systems, which are designed for multiple tasks rather than specific applications, will be subject to a new set of rules to ensure their safe and ethical use within the EU.
While the tech organizations recognized the importance of these regulations for the future of the EU's AI ecosystem, they noted that they would need adequate time to comply with them effectively. They are therefore requesting an extension to ensure the regulations are robust and reflect the industry's practical realities and challenges.
X Suspends Data Collection in the EU
Meanwhile, X, the social media platform owned by Elon Musk, has agreed to stop collecting and processing users’ data in the European Union (EU) and European Economic Area (EEA). This decision follows a recent court hearing in Ireland where the Irish Data Protection Commission (DPC) led the proceedings.
The DPC, responsible for regulating X’s activities in the region, took issue with the platform’s use of EU user data for training its artificial intelligence (AI) system known as “Grok.” The Commission further argued that X had been processing data from public posts of EU users between May 7, 2024, and August 1, 2024.
Following the court hearing, X agreed to halt these data processing activities. The DPC described X's decision as a positive step towards protecting the data rights of EU citizens.
The DPC's chairman, Dr Des Hogan, noted that the suspension would allow regulators to continue their investigations into X's compliance with EU data protection laws, including the General Data Protection Regulation (GDPR).
Tensions Between AI Tech Giants and Regulators
This development is part of a broader pattern of regulatory challenges faced by X since Elon Musk acquired the platform in late 2022. Musk has made significant changes to the platform, including integrating AI.
However, these changes have drawn scrutiny from regulators across the globe, particularly in the EU, where data protection is a high priority. In July, reports surfaced that X had altered default settings to allow user data to be used for training its AI without users' consent.
These reports led to further investigations by the DPC. The EU has also accused X of breaching the Digital Services Act on multiple counts, violations that can carry fines of up to 6% of the platform's global annual revenue.
Elon Musk, however, criticized the EU's actions, claiming they are attempts to limit free speech on the platform. He also alleged that the European Commission had offered X a secret deal to restrict certain content.