
Google, Meta criticize UK and EU AI regulations

Both Google and Meta have this week openly criticized European regulation of artificial intelligence, suggesting it will stifle the sector’s innovation potential.

Representatives from Facebook’s parent company, along with Spotify, SAP, Ericsson, Klarna and others, signed an open letter expressing their concern about “inconsistent regulatory decision-making” in Europe.

It said intervention from European data protection authorities has created uncertainty over what data companies can use to train their AI models. The signatories are demanding consistent and speedy decisions on data regulations that allow the use of European data under the General Data Protection Regulation.

The letter also warns that the bloc will miss out on the latest “open” AI models, which are made freely available to all, and “multimodal” models, which accept input and generate output in text, images, speech, video and other formats.

By stifling innovation in these areas, regulators are “depriving Europeans of the technological progress achieved in the US, China and India.” Furthermore, without being trained on European data, models “will not understand or reflect European knowledge, culture or languages.”

“We want to see Europe succeed and thrive in cutting-edge AI research and technology,” the letter reads. “But the reality is that Europe has become less competitive and less innovative than other regions and now risks falling further behind in the AI age due to inconsistent regulatory decision-making.”

SEE: Businesses want to balance AI innovation and ethics, according to Deloitte

Google suggests copyrighted data should be allowed to train commercial models

Google has spoken separately about laws in the UK that prevent AI models from being trained on copyrighted material.

“If we don’t take proactive action, there’s a risk we’ll be left behind,” Debbie Weinstein, Google’s UK managing director, told the Guardian.

“The unresolved copyright issue is a barrier to development, and one way to unblock it, obviously, from Google’s perspective, is to go back to where I think the government was in 2023, when TDM was being allowed for commercial use.”

TDM, or text and data mining, is the practice of copying copyrighted works; it is currently allowed only for non-commercial purposes. Plans to permit it for commercial purposes were dropped in February 2023 after being widely criticised by the creative industries.

Google has also released a paper this week titled “Unlocking the UK’s AI potential,” which suggests a number of policy changes, including allowing commercial TDM, setting up a publicly funded mechanism for computational resources and launching a national AI skills service.

WATCH: 83% of UK businesses are increasing pay for AI skills

According to the Guardian, the paper also calls for a “pro-innovation regulatory framework” that takes a risk-based and context-specific approach and is managed by public regulators such as the Competition and Markets Authority and the Information Commissioner’s Office.

Mark Warner, CEO of Faculty AI, a company that works with the UK government’s AI Safety Institute, said the debate around regulating AI demands nuance with respect to which specific technology is covered. So-called “narrow AI” has been used safely for decades to perform very specific tasks, such as predicting consumer habits, but frontier AI is much newer.

“We don’t fully understand it, aren’t sure what it will do, and so can’t be sure it’s completely safe,” he told TechRepublic in an email.

“For existing AI systems, the light-touch approach remains the right path. For frontier AI, international agreements on restrictions, inspections, and investment in security research and technology are needed now and in the future.”

An expert says the EU’s AI law ‘does not go far enough’

Some AI policy experts disagree with the idea that the EU’s current AI policies are harmful.

Hamid Ekbia, director of the Autonomous Systems Policy Institute at Syracuse University, told TechRepublic in an email: “The European approach derives from a civil-rights perspective. Its main advantage is that it provides a clear classification of risks based on the potential harm to consumers from the use of AI.”

In this way, the law protects the rights of citizens by imposing strict regulations on “high-risk systems,” such as those used in educational or vocational training, employment, human resources, law enforcement, migration, asylum and border control management, he said.

“In my view, EU law does not go far enough. It supports innovation through a regulatory sandbox, which creates a controlled environment for developing and testing AI systems,” Ekbia said. “It also provides legal clarity for businesses. SMEs benefit from clarity in law and regulation, not the lack of it. EU law helps small companies by providing a level playing field.”

EU regulations hit Big Tech’s AI plans

The EU, home to 448 million people, represents a huge market for the world’s biggest tech companies. However, the implementation of the stringent AI Act and Digital Markets Act has prevented them from launching their latest AI products in the region.

In June, Meta paused plans to train its large language models on public content shared by adults on Facebook and Instagram in Europe following pushback from Irish regulators. Meta AI, its frontier AI assistant, has still not been released within the bloc, which the company attributes to unpredictable regulation.

Apple will also not initially make its new suite of generative AI capabilities, Apple Intelligence, available on devices in the EU, citing “regulatory uncertainties brought about by the Digital Markets Act,” according to Bloomberg.

SEE: Apple Intelligence EU: Possible Mac release amid DMA rules

According to a statement provided by Apple spokesperson Fred Sainz to The Verge, the company is “concerned that DMA interoperability requirements could force us to compromise the integrity of our products in a way that could jeopardize user privacy and data security.”

European Commission spokesman Thomas Regnier told TechRepublic in an emailed statement: “All companies are welcome to offer their services in Europe, provided they comply with EU law.”

Google’s Bard chatbot was released in Europe four months after its launch in the US and UK, following privacy concerns raised by the Irish Data Protection Commission. It is believed that similar regulatory pressure delayed the arrival of its second variant, Gemini, in the region.

Ireland’s DPC this month launched a new investigation into Google’s AI model, PaLM 2, over a possible violation of GDPR rules. Specifically, it is looking at whether Google adequately completed an assessment that would identify the risks associated with the way it processes Europeans’ personal data to train the model.

X has also agreed to permanently stop processing personal data from EU users’ public posts to train its AI model, Grok. The DPC took Elon Musk’s company to the Irish High Court after finding that it had not implemented mitigation measures, such as an opt-out option, until several months after it had begun collecting data.

Many tech companies have their European headquarters in Ireland, as it has one of the lowest corporate tax rates in the EU at 12.5%, so the country’s data protection authority plays a primary role in regulating tech across the bloc.

The UK’s own AI rules remain unclear

The UK government’s stance on AI regulation has been mixed, partly due to the leadership change in July. Some representatives are also concerned that over-regulation could drive away the biggest tech players.

On July 31, Secretary of State for Science, Innovation and Technology Peter Kyle told executives from Google, Microsoft, Apple, Meta and other major tech players that the upcoming AI bill will focus on the large ChatGPT-style foundation models built by a handful of companies, according to the Financial Times.

He also reassured them that it would not become a “Christmas tree bill” with more rules added through the legislative process. He said the bill would primarily focus on making existing voluntary agreements between companies and the government legally binding and would turn the AI Safety Institute into an “arm’s-length government institution.”

As seen with the EU, AI regulations delay the rollout of new products. Although the intention is to keep consumers safe, regulators run the risk of limiting their access to the latest technologies that could bring tangible benefits.

Meta has taken advantage of the UK’s lack of immediate regulation by announcing it will train its AI systems on public content shared on Facebook and Instagram in the country, something it is not currently doing in the EU.

WATCH: Five-year delay in AI rollout in UK could cost economy £150+ billion, Microsoft report finds

On the other hand, in August, the Labour government shelved £1.3 billion of funding that had been earmarked by the Conservatives for AI and technological innovation.

The UK government has also consistently indicated that it plans to take a tougher approach to regulating AI developers. July’s King’s Speech said the government will “seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.”

