LONDON/STOCKHOLM (Reuters) – As the race to develop more powerful AI services like ChatGPT heats up, some regulators are relying on outdated laws to control technology that could upend the way societies and businesses operate.
The European Union is at the forefront of drafting new AI rules that could set a global standard for addressing privacy and safety concerns that have arisen with the rapid advances in the generative AI technology behind OpenAI’s ChatGPT.
But the legislation will take several years to be implemented.
“In the absence of regulations, the only thing governments can do is apply existing rules,” said Massimiliano Cimnaghi, a European data governance expert at consultancy BIP.
“If it’s a matter of protecting personal data, they apply data protection laws, and if it’s a threat to people’s safety, there are regulations that aren’t specifically laid out for AI, but they still apply.”
In April, national privacy watchdogs in Europe set up a task force to address issues with ChatGPT after Italian regulator Garante ordered the service taken offline in Italy, accusing OpenAI of violating the European Union’s General Data Protection Regulation (GDPR), a broad privacy regime enacted in 2018.
ChatGPT was brought back after the US company agreed to install age-verification features and allow European users to block their information from being used to train an AI model.
A source close to Garante told Reuters that the agency plans to begin a broader examination of other generative AI tools. Data protection authorities in France and Spain also launched investigations in April into OpenAI’s compliance with privacy laws.
Bring in the experts
Generative AI models have become known for making mistakes, or “hallucinations,” spreading misinformation with extraordinary certainty.
Mistakes like this can have serious consequences. If a bank or government department uses AI to speed up decision-making, individuals could be unfairly rejected for loans or benefit payments. Big tech companies including Alphabet’s Google (GOOGL.O) and Microsoft Corp (MSFT.O) have stopped offering AI products considered ethically risky, such as financial products.
Regulators aim to apply existing rules covering everything from copyright and data privacy to two key issues: the data fed into models and the content they produce, according to six regulators and experts in the US and Europe.
Suresh Venkatasubramanian, a former White House technology adviser, said agencies in the two regions are being encouraged to “interpret and reinterpret their mandates.” He cited the US Federal Trade Commission’s (FTC) investigation of algorithms for discriminatory practices under its existing regulatory powers.
In the European Union, proposals for the bloc’s AI law would force companies like OpenAI to disclose any copyrighted material — such as books or photographs — used to train their models, leaving them vulnerable to legal challenges.
Proving copyright infringement would not be straightforward, according to Sergey Lagodinsky, one of several politicians involved in drafting the EU proposals.
“It’s like reading hundreds of novels before you write your own,” he said. “If you actually copied and published something, that’s one thing. But if you’re not outright plagiarizing someone else’s material, it doesn’t matter what you trained yourself on.”
Creative thinking
French data regulator CNIL has begun “thinking creatively” about how existing laws can apply to AI, according to Bertrand Pailhes, its technology lead.
For example, discrimination claims in France are usually handled by the Défenseur des Droits (Defender of Rights). However, he said, its lack of experience with AI bias prompted the CNIL to take the lead on the issue.
“We are looking at the full range of impacts, although our focus remains on data protection and privacy,” he told Reuters.
The organization is considering using a provision of the GDPR that protects individuals from automated decision-making.
“At this point, I can’t say if that is legally sufficient,” said Pailhes. “It will take some time to build up an opinion, and there is a risk that different regulators will take different views.”
In Britain, the Financial Conduct Authority is one of several government regulators tasked with drawing up new guidelines covering artificial intelligence. A spokesperson for the regulator told Reuters that it is consulting with the Alan Turing Institute in London, along with other legal and academic institutions, to improve its understanding of the technology.
As regulators adjust to the pace of technological advancement, some industry insiders have called for greater engagement with corporate leaders.
Harry Borovik, general counsel at Luminance, a startup that uses artificial intelligence to process legal documents, told Reuters that dialogue between regulators and companies has been “limited” so far.
“This does not bode particularly well for the future,” he said. “Regulators appear either slow or unwilling to implement approaches that would strike the right balance between consumer protection and business growth.”
(This story has been refiled to fix the spelling of Massimiliano, not Massimilano, in paragraph 4)
Additional reporting by Martin Coulter in London, Supantha Mukherjee in Stockholm, Kantaro Komiya in Tokyo and Elvira Pollina in Milan; Editing by Kenneth Li, Matt Scuffham and Emelia Sithole-Matarise