United Advocates

Future of AI Regulation

The rapid advance of artificial intelligence (AI) is proving to be a complex issue for governments around the globe. International governing bodies are taking steps to regulate AI tools to ensure they are used in an ethical and safe way. Regulators must address many key issues, as critics warn that the potential risks associated with such technology are high. At the same time, AI’s potential to enhance various industries and improve people’s quality of life has been widely acknowledged, resulting in a race to regulate the technology.

Because AI reaches so broadly into the professional and personal lives of the public, governments are consulting various regulatory bodies, and some have created dedicated bodies to regulate and monitor the use and development of AI.

Australia

The government of Australia is seeking input on the regulation of AI by consulting the National Science and Technology Council, the nation’s main science advisory body. It is currently considering its next steps on regulation.

Britain

Since the early 2000s, Britain has pushed an enterprise culture to attract foreign investment and accelerate economic growth, and it is regarded as one of the most business-friendly nations.

Given the potential impact of AI, the British government has tasked several state regulators with drawing up new guidelines covering the use of AI. It plans to split responsibility for governing AI among its existing regulators for human rights, health and safety, and competition, rather than creating a new body.

The Financial Conduct Authority (FCA) is consulting the Alan Turing Institute (the national institute for data science and AI) and other legal and academic institutions. Its goal is to build a better understanding of the technology before imposing binding regulations on the country’s large financial services industry.

The Competition and Markets Authority (CMA) is examining the impact of AI on consumers, businesses and the economy. Once the findings of its research have been evaluated, the CMA will determine whether new controls are required.

China

In July 2023, China issued temporary measures to manage generative AI, defined in Article 2 of the Generative AI Measures as AI technology providing “services for generating text, pictures, audio, video, and other content to the public within the People’s Republic of China”. These measures require service providers to conduct security assessments and complete algorithm filing procedures.

The Cyberspace Administration of China had released a draft of these measures in April 2023, stating that firms must submit security assessments to the authorities before launching offerings to the public.

Compared with the April draft, the temporary measures released in July relaxed the obligations on generative AI service providers, in order to encourage development and innovation while maintaining security.

The temporary measures also set out the repercussions for breaching them. Providers that violate the provisions will be ‘punished by the relevant competent authorities’ and may face investigation for criminal liability. However, the provision allowing fines for non-compliance has been removed; instead, firms will receive orders to correct their violations.

The measures took effect in mid-August 2023. Service providers subject to the new rules must cooperate with state regulators, who have the power to supervise and inspect AI models. Providers must explain to regulators the source, scale and type of training data, the labelling rules, and the algorithmic mechanisms used. Given the sensitivity of the data they can access, regulators are in turn required not to disclose or illegally provide confidential information to others.

The government emphasised that the measures do not apply to “industrial associations, enterprises, educational and scientific research institutions, public cultural institutions, and relevant professional institutions that develop and apply generative artificial intelligence technology, but do not provide generative artificial intelligence services to the domestic public”. Entities using generative AI to develop enterprise-facing products, security tools, scientific research, and the like are therefore potentially exempt from the Generative AI Measures.

European Union (EU)

The EU has drafted the EU AI Act, which it claims will be the first comprehensive AI law in the world. The Act is part of the EU’s digital strategy to strengthen its capacities in new digital technologies, opening new opportunities for businesses and consumers while respecting fundamental rights and values.

The European Parliament’s priority in creating the Act is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. Parliament’s view is that AI systems should be overseen by people rather than by automation, to prevent potentially harmful outcomes. It also wants to establish a uniform definition of AI that can be applied to future AI systems.

The Act takes a risk-based approach: it would establish obligations for service providers and users depending on the level of risk an AI system poses.

Unacceptable-risk AI systems are considered a threat to people and would be banned by the EU. This category includes cognitive behavioural manipulation of people or specific vulnerable groups, social scoring, and real-time remote biometric identification systems. Certain exceptions may apply; however, court approval would be required for their use.

High-risk systems are those that could negatively affect the safety or fundamental rights of people, and they fall into two categories. The first covers AI systems used in products subject to the EU’s product safety legislation, including toys, aviation, cars, medical devices and lifts. The second covers AI systems falling into eight specific areas:

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in legal interpretation and application of the law

High-risk systems would be assessed before being placed on the market and throughout their lifecycle to mitigate risk.

Limited-risk systems would have to comply with minimal transparency requirements that allow users to make informed decisions. Users should be made aware when they are interacting with AI; this includes systems that generate or manipulate image, audio or video content, such as deepfakes, which have become increasingly common in recent years.

Generative AI systems would have to comply with transparency requirements by disclosing that content was generated by AI, preventing the systems from generating illegal content, and publishing summaries of the copyrighted data used to train the systems.

The EU aims to reach agreement on its position on the AI Act by the end of 2023.

France

As an EU member state, France would be subject to the EU’s regulations once enacted. In the meantime, CNIL, France’s privacy watchdog, is investigating complaints about AI systems over suspected breaches of privacy rules.

The National Assembly approved the use of AI for surveillance during the 2024 Paris Olympics.

G7 Nations

The Group of Seven agreed to discuss the regulation of AI systems as part of the “Hiroshima AI Process” and to report the results by the end of 2023.

Like the EU, the G7 favours a risk-based approach to regulating AI systems.

Israel

Israel published a draft AI policy in October 2022 and is collating feedback ahead of a final decision. According to the Director of National AI Planning at the Israel Innovation Authority, the nation has been working on the regulations for nearly two years.

Japan

Japan is expected to enact legislation by the end of 2023 that is likely to be more relaxed than the rules planned by the EU. The country’s privacy watchdog has previously warned AI service providers not to collect sensitive data without user permission and to minimize the collection of such data.

United Nations (UN)

In July 2023, the UN Security Council held its first formal discussion on AI. The session addressed both military and non-military applications of AI, which could result in “serious consequences for global peace and security”.

The UN Secretary-General has announced plans to start work by the end of 2023 on an AI advisory body to review AI governance arrangements and offer insights and recommendations.

UAE

The UAE released its National Strategy for Artificial Intelligence 2031 in early 2019. The strategy aims to improve customer services, assess government performance and raise living standards, as well as to harness AI in transport, tourism, health and education.

The UAE’s objective is to position itself as the global leader in artificial intelligence by 2031 and to develop an integrated system that applies AI in vital areas of the country. These objectives reaffirm the UAE’s position as a global hub for AI and sharpen the competitive edge of the sector in the UAE and the GCC as a whole.

This approach allows the nation to attract talent, build leading research capabilities, and generate data that can further assist in the effective governance and control of AI systems.

The UAE has a dedicated minister for the growth of the artificial intelligence sector. The Artificial Intelligence, Digital Economy and Remote Work Applications Office’s directive is to intensify research and efforts towards promoting the adoption of futuristic technologies in government work models. The office launched a comprehensive guide on the use of generative AI applications to further promote the adoption of AI within the nation. The guide has been lauded as pioneering and encourages the adoption of such technologies across several sectors. It estimated that the generative AI market generated nearly $90 billion in 2022 and predicted that the sector would grow at an average annual rate of 36.2% from 2022 to 2027.
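
To make the scale of that forecast concrete, the short Python sketch below compounds the guide’s figures forward. It assumes (our assumption, not a method stated in the guide) that the 36.2% average annual rate compounds yearly from the $90 billion 2022 base; the function and variable names are illustrative only.

    # Sketch: project the generative AI market size implied by the guide's figures.
    # Assumption: the 36.2% average annual growth rate compounds yearly from the
    # ~$90 billion 2022 base; this is an illustration, not the guide's own model.

    def project_market_size(base: float, annual_growth: float, years: int) -> float:
        """Compound a base market size forward by annual_growth over a number of years."""
        return base * (1 + annual_growth) ** years

    base_2022 = 90.0     # USD billions (2022 estimate cited in the guide)
    growth_rate = 0.362  # 36.2% average annual growth (per the guide)

    for year in range(2022, 2028):
        size = project_market_size(base_2022, growth_rate, year - 2022)
        print(f"{year}: ~${size:,.0f}B")

    # Under these assumptions, the market would reach roughly $420B by 2027.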

In anticipation of the predicted increase in demand for AI systems, the UAE government has committed to reinforcing its position through proactive strategies and by working on a legislative and regulatory framework that limits the negative effects of AI while keeping pace with advancements and trends. To further this initiative, the DIFC, in coordination with the UAE Artificial Intelligence Office, has introduced an AI and Coding License that allows the holder to work within the DIFC Innovation Hub, the largest FinTech and innovation hub in the region, representing 60% of GCC FinTech companies. The license is intended to encourage investment in AI and attract AI companies and coders to the UAE; a further incentive is that employees of license-holding companies may be eligible for the UAE Golden Visa. A license is not, however, required to work on and develop AI in the UAE.

The UAE currently has no specific legislation governing AI or addressing the ethical and legal issues arising from its use, such as liability, privacy, discrimination, and data bias. However, existing regulations, such as those on product liability (which apply to AI just as they would to any other malfunctioning or defective product) and data protection, apply to AI indirectly. Non-binding guidelines on the development and ethical use of AI also exist, and these may form the basis of future AI laws.

USA

The US has seen several court cases regarding the IP rights of works created by AI without any human input. On 21 August 2023, US District Judge Beryl Howell in Washington, DC affirmed the Copyright Office’s rejection of an application filed by a computer scientist on behalf of his AI system. Earlier in the year, however, the US Copyright Office granted limited copyright registration to an AI-assisted graphic novel. Upon approving the registration, the Copyright Office made clear that it would not register works created solely by AI: the novel’s text would receive copyright protection, but the AI-generated images would not. The decision rested on the finding that users of the generative AI system were not “authors”, because the technology generates images in an unpredictable way.

The Federal Trade Commission (FTC) has opened an investigation into OpenAI, the company behind ChatGPT, over claims that it has breached consumer protection laws by putting personal data at risk. Similar claims have been investigated in multiple countries, including France, Italy and Spain.

The FTC’s Bureau of Technology is focusing on the competition concerns raised by generative AI. Separately, Senator Michael Bennet has urged tech companies to label AI-generated content for users in order to limit the spread of misinformation and harmful material, and he has introduced a bill to create a task force to examine US policies on AI.

The US has also invested heavily in AI-enabled technology that aims to improve the use of AI in military applications. The Department of Defense (DOD) has released several AI-focused strategies but does not have a comprehensive guide specific to AI acquisitions. The Government Accountability Office has stated that the DOD’s strategy could be more comprehensive; furthermore, the DOD has not issued any guidance that clearly defines the roles and responsibilities of the components participating in AI activities.