The San Francisco-based artificial intelligence (AI) company is mulling exiting the European Union following proposals that would obligate companies to reveal the copyrighted materials used in AI development.
OpenAI chief executive Sam Altman signaled that the company is reconsidering its continued operations in Europe. The second thoughts at the company behind the popular ChatGPT stem from concerns that it may fail to comply with the EU's AI regulations.
Did the EU AI Regulations Overlook Multiple Elements?
Altman believes the EU Parliament should have reconsidered the definition of general-purpose AI systems. He laments that the drafters of the Artificial Intelligence Act overlooked multiple elements.
The critical issue prompting Altman to consider exiting Europe is the provision obligating AI companies to disclose the copyrighted material used as input to develop generative AI tools.
The previous week saw Apple ban employees from using third-party AI tools for workplace duties. Apple's move replicates Samsung's earlier prohibition on employees relying on AI for their work. Apple's stance arises from concerns that third-party AI tools could become a weak point, leaking confidential data that is subsequently collected and stored on third-party servers.
Altman told Reuters that the present version of the EU AI Act would result in over-regulation. Nevertheless, he expressed optimism that lawmakers would revisit it.
Altman's views echo the stance adopted by the Future of Life Institute (FOLI). Its analysis of the EU AI Act noted that general-purpose AI, as defined, describes a system with a variety of uses, including uses both intended and unintended by its developers.
Altman's and FOLI's concerns relate to the Artificial Intelligence Act that EU members approved in December.
In April, a group of EU Parliamentarians urged US President Biden and European Commission President Ursula von der Leyen to convene a global summit to formulate principles governing AI development, control, and deployment.
EU Lawmakers and White House Urge Human-Centric AI Development
The EU lawmakers acknowledged the need for human-centric and safe AI development. The legislators voiced concern over the burgeoning number of AI products and unanimously agreed on creating global standards to guide the subsequent training of AI products.
A dozen EU Parliamentarians penned a letter seeking an urgent global summit to formulate principles governing AI development.
The call has received a positive response, with the White House convening a meeting of AI leaders. The meeting featured executives from AI companies and focused on safeguarding society. Representatives of Biden's administration emphasized the government's commitment to nurturing responsible artificial intelligence.
The delegation led by Vice President Kamala Harris acknowledged the surge in AI training and development, inspired largely by the popularity of the ChatGPT program. ChatGPT's rapid adoption propels the current discussions on enforcing ethical and conscientious principles in AI practices.
Altman Admits Readiness to Quit if Unable to Comply with EU AI Act
Altman admitted that ChatGPT's rapid rise since its November launch had shocked global leaders, prompting regulators to open investigations scrutinizing OpenAI's practices. A number of regulators are considering imposing bans on ChatGPT.
With Italy banning ChatGPT for privacy concerns, OpenAI undertook several updates, permitting users to erase their history.
In a recent discussion convened by University College London on Wednesday, May 24, Altman revealed that OpenAI would attempt to comply with the laid-out regulations before reevaluating whether to shut down its European operations. He said OpenAI will either meet the requirements and continue operating, or, failing to comply, cease operations in the region.
Editorial credit: Ascannio / Shutterstock.com