(Reuters) – Rapid advances in artificial intelligence (AI) such as Microsoft-backed OpenAI’s ChatGPT are complicating governments’ efforts to agree laws governing the use of the technology.
Here are the latest steps national and international governing bodies are taking to regulate AI tools:
AUSTRALIA
* Seeking input on regulations
The government is consulting Australia’s main science advisory body and considering next steps, a spokesperson for the industry and science minister said in April.
BRITAIN
* Planning regulations
The Financial Conduct Authority, one of several state regulators that has been tasked with drawing up new guidelines covering AI, is consulting with the Alan Turing Institute and other legal and academic institutions to improve its understanding of the technology, a spokesperson told Reuters.
Britain’s competition regulator said in May it would start examining the impact of AI on consumers, businesses and the economy and whether new controls were needed.
Britain said in March it planned to split responsibility for governing AI between its regulators for human rights, health and safety, and competition, rather than creating a new body.
CHINA
* Planning regulations
The Chinese government will seek to initiate AI regulations in its country, billionaire Elon Musk said on June 5 after meeting with officials during his recent trip to China.
China’s cyberspace regulator in April unveiled draft measures to manage generative AI services, saying it wanted firms to submit security assessments to authorities before they launch offerings to the public.
Beijing will support leading enterprises in building AI models that can challenge ChatGPT, its economy and information technology bureau said in February.
EUROPEAN UNION
* Planning regulations
EU lawmakers agreed on June 14 to changes in a draft of the bloc’s AI Act. The lawmakers will now have to thrash out details with EU countries before the draft rules become legislation.
The biggest issue is expected to be facial recognition and biometric surveillance, where some lawmakers want a total ban while EU countries want an exception for national security, defence and military purposes.
EU tech chief Margrethe Vestager said on May 31 that the U.S. and EU should push the AI industry to adopt a voluntary code of conduct within months to provide safeguards while new laws are developed.
The European Consumer Organisation (BEUC) has joined in the concern about ChatGPT and other AI chatbots, calling on EU consumer protection agencies to investigate the technology and the potential harm to individuals.
FRANCE
* Investigating possible breaches
France’s privacy watchdog CNIL said in April it was investigating several complaints about ChatGPT after the chatbot was temporarily banned in Italy over a suspected breach of privacy rules.
France’s National Assembly approved in March the use of AI video surveillance during the 2024 Paris Olympics, despite warnings from civil rights groups.
G7
* Seeking input on regulations
Group of Seven leaders meeting in Hiroshima, Japan, acknowledged on May 20 the need for governance of AI and immersive technologies and agreed to have ministers discuss the technology as the “Hiroshima AI process” and report results by the end of 2023.
G7 nations should adopt “risk-based” regulation on AI, G7 digital ministers said after a meeting in April in Japan.
IRELAND
* Seeking input on regulations
Generative AI needs to be regulated, but governing bodies must work out how to do so properly before rushing into prohibitions that “really aren’t going to stand up”, Ireland’s data protection chief said in April.
ISRAEL
* Seeking input on regulations
Israel has been working on AI regulations “for the last 18 months or so” to achieve the right balance between innovation and the preservation of human rights and civic safeguards, Ziv Katzir, director of national AI planning at the Israel Innovation Authority, said in June.
Israel published a 115-page draft AI policy in October and is collating public feedback ahead of a final decision.
ITALY
* Investigating possible breaches
Italy’s data protection authority plans to review other artificial intelligence platforms and hire AI experts, a top official said in May.
ChatGPT became available again to users in Italy in April after the national data protection authority temporarily banned it in March over privacy concerns.
JAPAN
* Investigating possible breaches
Japan’s privacy watchdog said on June 2 it has warned OpenAI not to collect sensitive data without people’s permission and to minimise the sensitive data it collects, adding it may take further action if it has more concerns.
SPAIN
* Investigating possible breaches
Spain’s data protection agency said in April it was launching a preliminary investigation into potential data breaches by ChatGPT. It has also asked the EU’s privacy watchdog to evaluate privacy concerns surrounding ChatGPT.
UNITED NATIONS
* Planning regulations
U.N. Secretary-General Antonio Guterres on June 12 backed a proposal by some AI executives for the creation of an AI watchdog like the International Atomic Energy Agency, but noted that “only member states can create it, not the Secretariat of the United Nations”.
Guterres has also announced plans to start work by the end of the year on a high-level AI advisory body to regularly review AI governance arrangements and offer recommendations.
U.S.
* Seeking input on regulations
Generative AI raises competition concerns and is a focus of the Federal Trade Commission’s Bureau of Competition along with its Office of Technology, the agency said in a blog post by the staff of the two offices in June.
Senator Michael Bennet wrote to leading tech firms on June 29 to urge them to label AI-generated content and limit the spread of material aimed at misleading users. He had introduced a bill in April to create a task force to look at U.S. policies on AI.
The National Institute of Standards and Technology, a non-regulatory agency that is part of the Commerce Department, will launch a public working group of expert volunteers on generative AI to help address its opportunities and develop guidance to confront its risks, it said on June 22.
President Joe Biden said on June 20 he would seek expert advice on the risks of AI to national security and the economy.
The U.S. Federal Trade Commission’s chief said in May the agency was committed to using existing laws to keep in check some of the dangers of AI, such as enhancing the power of dominant firms and “turbocharging” fraud.
(Compiled by Alessandro Parodi and Amir Orusov in Gdansk; editing by Jason Neely, Kirsten Donovan and Milla Nissi)