Writing the Rules of the Road for Artificial Intelligence
Global monitor of the diverse regulations designed to create trustworthy tech.
By Julia Chong
With the masses buzzing about generative artificial intelligence (AI) tools like ChatGPT and DALL-E, many long-established players are finally experiencing their long-awaited boom. Leading AI chipmaker Nvidia cleared the USD6 billion profit mark this August, whilst OpenAI, the company behind ChatGPT, is reported to be generating over USD80 million in monthly revenue (up from USD28 million for the whole of 2022) after its chatbot became a pay-for-use platform.
These companies currently operate in a relatively burden-free environment. There is no super-regulator; the industry is lightly regulated; and compliance costs are minimal, as most firms operate within the known confines of the tech world with such costs already embedded in their day-to-day operations. And, let’s face it, there has never been a more creatively and financially rewarding time in human history to be a tech nerd.
However, this is set to change as regulators (and some standard-setters) in major jurisdictions compete to fill the void, each with their own version of the rules and laws to navigate the ethical and governance landmines posed by AI.
As it stands, the diverse approaches adopted by these authorities indicate that there is no single path, no ‘yellow brick road’, to ensuring AI systems are governed well. It is crucial that bankers, in their roles as both users and financiers, have a bird’s-eye view of the key drivers, objectives, and guardrails that will shape the trajectory of AI governance in the decades to come.
As the world’s largest producer of AI research, China is also ahead of the pack in the regulatory space, being the first nation to roll out AI regulations. The following are key points excerpted from a July 2023 white paper by Matt Sheehan for the Carnegie Endowment for International Peace, China’s AI Regulations and How They Get Made. The author relied on English translations of Chinese legislation for his analysis.
China’s AI regulation is the most voluminous and comprehensive issued to date by any jurisdiction, outlining new technical tools such as disclosure requirements, model auditing mechanisms, and technical performance standards. Among the most visible and impactful rules issued by the government are the provisions on recommendation algorithms (in force since March 2022), the rules governing deep synthesis or ‘deepfake’ technologies (in force since January 2023), and the interim measures on generative AI services (in force since August 2023).
In June 2023, the Chinese government announced that it will begin preparations on a draft Artificial Intelligence Law to be submitted to China’s legislature. Widely expected to be a more comprehensive piece of legislation, the law is anticipated to act as a capstone on Chinese AI policy.
Although an agreement on the final version of the European Union’s AI Act has yet to be reached, the trilogue debates – three-way negotiations between the European Commission, Council, and Parliament – have been underway since mid-June. Once the bill is passed, the regulation will apply uniformly throughout the EU with no need for transposition into the laws of member states. Analysts note that it carries significant extraterritorial reach and penalties stricter than those under current regulations such as the General Data Protection Regulation.
The EU AI Act is a horizontal regulation, meaning it is designed as an umbrella covering all applications of the technology. This differs significantly from China’s AI regulatory regime, which is considered vertical regulation because each rule applies to a specific set of AI applications, such as text, images, or data management.
Key contentious issues, according to a policy brief by the Brookings Institution, include:
EU states are also weighing the regulatory burden and the point at which compliance costs will begin to stifle innovation in small- and medium-sized enterprises. While the Act runs its course, Commissioner Thierry Breton announced in May that the EU is developing an AI Pact with Alphabet, Google’s parent company, and other developers to establish non-binding rules for a safer and more accountable AI ecosystem.
On 29 March 2023, the UK government published its AI White Paper, reiterating its pro-innovation and context-specific AI regulatory regime. A departure from the EU approach, the UK proposes to spread the task of regulatory oversight by empowering existing regulators such as the Financial Conduct Authority and the Competition and Markets Authority. Still in its infancy, the proposed regulatory framework focuses on five principles:
+ safety, security, and robustness;
+ appropriate transparency and explainability;
+ fairness;
+ accountability and governance; and
+ contestability and redress.
The United States warrants an entry for the notable absence of impactful legislation in this sphere. To date, the White House has invested USD140 million (a paltry sum by any standard) in AI research through various institutes, released its Blueprint for an AI Bill of Rights for public consultation on how best to regulate the technology’s use, and issued executive orders vaguely stating that AI systems should be implemented “in a manner that advances equity” in government agencies.
Aside from standards related to concepts, terminology, or classifications in machine-learning performance and processes, the International Organization for Standardization’s (ISO) more recent AI-related standards remain vague and entirely voluntary:
+ ISO/IEC 38507: Governance Implications of the Use of Artificial Intelligence by Organizations. Meant as a high-level guide for building a robust governance programme in both public and private bodies, its focus is on building trust through accountability and transparency frameworks that mitigate the ethical landmines that come with AI applications. The standard, issued in April 2022, discusses these implications but stops short of offering foolproof ways to address them; the organisation is expected to develop future standards to evaluate and shape AI governance globally.
+ ISO/IEC 23894: Information Technology – Artificial Intelligence – Guidance on Risk Management. Published in February 2023, it outlines how organisations that develop, produce, deploy, or use AI-enabled products, systems, and services can manage the corresponding risks. It provides guidance on integrating risk management into AI-related activities and functions, for example, processes, procedures, and transparency reporting.
It is somewhat ironic when industry leads the push for greater regulation. During a US congressional hearing earlier this year, Sam Altman, CEO of OpenAI, told lawmakers that AI could cause “significant harm to the world” and that regulating AI “would be quite wise” given the dangers an unfettered system could inflict on the world.
Altman proposed the formation of a US or global agency along commercial lines, “a new agency that licenses any effort above a certain scale of capabilities and could take that license away and ensure compliance with safety standards”. Monopolistic leanings aside, this McDonald’s-type licensing model is the most far-out proposal yet on global AI regulation.
Proof that the approach to global AI governance can really be as wide as an ocean and as deep as a puddle.
Julia Chong is a content analyst and writer at Akasaa, a boutique content development and consulting firm.