By Julia Chong

With the masses buzzing about generative artificial intelligence (AI) tools like ChatGPT and DALL-E, many long-established players are finally experiencing the boom the sector has long awaited. Leading AI chipmaker Nvidia cleared the USD6 billion profit mark this August, whilst OpenAI, the company behind ChatGPT, is reported to be generating over USD80 million in monthly revenue (up from USD28 million for the whole of 2022) after its chatbot became a pay-for-use platform.

These companies currently operate in a relatively burden-free environment. There is no super-regulator and the industry is lightly regulated; compliance costs are minimal, as most firms operate within the known confines of the tech world with such costs already embedded in their day-to-day operations; and, let’s face it, it is more creatively and financially rewarding to be a tech nerd today than at any other time in human history.

However, this is set to change as regulators (and some standard-setters) in major jurisdictions compete to fill the void, each with its own version of the rules and laws for navigating the ethical and governance landmines posed by AI.

As it stands, the diverse approaches adopted by these authorities indicate that there is no single path, no ‘yellow brick road’, to ensuring AI systems are governed well. It is crucial that bankers, in their roles as both users and financiers, have a bird’s-eye view of the key drivers, objectives, and guardrails that will shape the trajectory of AI governance in the decades to come.

CHINA

As the world’s largest producer of AI research, China is also ahead of the pack in the regulatory space as the first nation to roll out AI regulations. The following key points are excerpted from a July 2023 white paper by Matt Sheehan for the Carnegie Endowment for International Peace, China’s AI Regulations and How They Get Made; the author relied on English translations of Chinese legislation for his analysis.

China’s AI regulation is the most voluminous and comprehensive issued to date by any jurisdiction, outlining new technical tools such as disclosure requirements, model-auditing mechanisms, and technical performance standards. The following are some of the most visible and impactful laws and regulations issued by the government:

  • Ethical Norms for New Generation AI. Issued on 25 September 2021, this high-level guidance sets out the moral norms to be embedded in AI governance, including ultimate human control of, and responsibility for, AI systems. It is overseen by the country’s New Generation AI Governance Expert Committee, which sits under the Ministry of Science and Technology and recommends policies and areas for international cooperation in the field.
  • Provisions on the Management of Algorithmic Recommendations in Internet Information Services. This first major binding regulation on algorithms was enacted on 31 December 2021, motivated by government fears about algorithms controlling how news and content are disseminated online, in particular by ByteDance, the parent company of TikTok, whose algorithm-dictated user feeds drew a CCP backlash in 2017. The regulation includes many provisions for content control as well as protections for workers impacted by algorithms. It also created the ‘algorithm registry’ used in subsequent regulations.
  • Opinions on Strengthening the Ethical Governance of Science and Technology. Issued by the CCP Central Committee and State Council on 20 March 2022, this document focuses on the internal ethics and governance mechanisms scientists and technology developers should deploy, with AI listed as one of three areas of particular concern, along with the life sciences and medicine.
  • Provisions on the Administration of Deep Synthesis Internet Information Services. Issued on 25 November 2022 and motivated chiefly by growing concern over deepfakes, the regulation targets many text, video, and audio AI applications. It prohibits the generation of “fake news” and requires synthetically generated content to be labelled. Oversight is shared by three agencies: the Cyberspace Administration of China (CAC) as the national internet regulator, the Ministry of Industry and Information Technology (MIIT), and the Ministry of Public Security (MPS).
  • Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment). Currently open for input, the draft was released on 11 April 2023 by the CAC and is a near-mirror of the Deep Synthesis regulation, with greater emphasis on text generation and training data. It requires providers to ensure that both training data and generated content are “true and accurate”.

In June 2023, the Chinese government announced that it will begin preparing a draft Artificial Intelligence Law to be submitted to China’s legislature. Widely expected to be a more comprehensive piece of legislation, it is seen by analysts as the capstone of Chinese AI policy.

EU

Although agreement on the final version of the economic region’s AI Act has yet to be reached, the trilogue debates – three-way negotiations between the European Commission, Council, and Parliament – have been underway since mid-June. Once the bill is passed, the regulation will apply uniformly throughout the EU with no need for transposition into member states’ national laws. Analysts note that it carries significant extraterritorial reach, with penalties stricter than those of current regulations such as the General Data Protection Regulation.

The EU AI Act is a horizontal regulation, meaning it is designed as an umbrella covering all applications of the technology. This differs significantly from China’s AI regime, which is considered vertical regulation because each rule applies to a specific set of AI technologies, such as text, images, or data management.

Key contentious issues, according to a policy brief by the Brookings Institution, include:

  • Definition of AI. This will determine the scope of the regulation: too narrow a definition, and some AI systems might escape regulatory oversight; too broad, and it risks unfairly lumping in more common algorithmic systems that come nowhere near the ethical issues or harms of generative AI.
  • Risk-based approach to AI regulation. Legislators have promised a proportionate risk-based approach built on four identified levels: Unacceptable, High, Limited, and Minimal/Low risk. The heaviest regulatory burdens will fall on the riskiest applications, such as social scoring or hyper-personalisation services, as these AI systems are “likely to pose high risks to fundamental rights and safety”.
  • Enforcement and Self-assessment Mechanisms. Concerns persist over the AI Act’s proposed governance structure, which will rest under the purview of a newly created European Artificial Intelligence Board (EAIB); some believe this will lead to a fragmented enforcement landscape should member states vary in their commitment and capacity. Divergence in the proposed assessment of high-risk systems also has yet to be addressed. Currently, standalone AI systems need only undergo industry self-assessment, whereas systems that touch on safety components (such as medical devices) must undergo strict conformity assessment by the relevant national authority.

EU member states are also weighing the cost of this regulatory burden and the point at which compliance costs will begin to stifle innovation in small- and medium-sized enterprises. While the Act runs its legislative course, Commissioner Thierry Breton announced in May that the EU is working on an AI Pact with Alphabet, Google’s parent company, and other developers to create non-binding rules towards a safer and more accountable AI ecosystem.

UK

On 29 March 2023, the UK government published its AI White Paper, reiterating its pro-innovation and context-specific AI regulatory regime. In a departure from the EU approach, the UK proposes to spread the task of regulatory oversight by empowering existing regulators such as the Financial Conduct Authority and the Competition and Markets Authority. Still in its infancy, the proposed regulatory framework rests on five principles:

  • Safety, security, robustness: Measures must be in place for regulated entities to prove the security of their AI systems; regulators may also need to provide guidance that is in sync with that of other regulators.
  • Appropriate transparency and explainability: Public trust must drive any AI development, and regulators should nudge industry to adopt sufficient transparency in AI systems’ decision-making.
  • Fairness: The onus is on regulators to develop and publish illustrations of the fairness standards that must apply to AI systems within their current remit.
  • Accountability and governance: Clear lines of accountability and compliance expectations must be mapped throughout the AI lifecycle.
  • Contestability and redress: Third parties and/or persons negatively impacted by an AI system must have avenues to contest its decisions.

US

The US warrants an entry for its notable absence of impactful legislation in this sphere. To date, the White House has invested USD140 million (a paltry sum by any standard) in AI research through various institutes, released its Blueprint for an AI Bill of Rights for public consultation on how best to regulate the technology’s use, and issued executive orders vaguely stating that AI systems should be implemented “in a manner that advances equity” in government agencies.

INTERNATIONAL ORGANIZATION FOR STANDARDIZATION (ISO)

Aside from standards related to concepts, terminology, or classifications in machine learning performance and processes, the ISO’s more recent AI-related standards remain vague and entirely voluntary:

  • ISO/IEC 38507: Governance Implications of the Use of Artificial Intelligence by Organizations. Meant as a high-level guide for both public and private bodies on building a robust governance programme, the standard focuses on building trust through accountability and transparency frameworks in order to navigate the ethical landmines that come with AI applications. Issued in April 2022, it discusses these implications but stops short of prescribing foolproof ways to address them; the organisation is expected to develop future standards that evaluate and shape AI governance globally.

  • ISO/IEC 23894: Information Technology – Artificial Intelligence – Guidance on Risk Management. Published in February 2023, it outlines how organisations that develop, produce, deploy, or use products, systems, and services utilising AI can manage the corresponding risks. It provides guidance on integrating risk management into AI-related activities and functions, for example through processes, procedures, and transparency reporting.

It is somewhat ironic when industry leads the push for greater regulation. During a US congressional hearing earlier this year, Sam Altman, CEO of OpenAI, told lawmakers that AI could cause “significant harm to the world” and that regulating AI “would be quite wise” given the dangers an unfettered system could inflict on the world.

Altman proposed the formation of a US or global agency along commercial lines, “a new agency that licenses any effort above a certain scale of capabilities and could take that license away and ensure compliance with safety standards”. Putting aside the monopolistic leanings of his suggestion, this McDonald’s-style licensing model is the most far-out proposal yet on global AI regulation.

Proof that the approach to global AI governance can really be as wide as an ocean and as deep as a puddle.


Julia Chong is a content analyst and writer at Akasaa, a boutique content development and consulting firm.