Global AI Regulations

The UK AI Safety Summit, along with recent G7 and US actions, signalled progress in AI governance last week. More than 100 attendees from various sectors met at Bletchley Park for the world’s first international AI Safety Summit to explore the future of AI.

Several big tech companies and nations signed a ‘landmark’ voluntary agreement to allow governments, including the UK, US, and Singapore, to test their latest models for social and national security risks. Companies signing the pledge included Sam Altman’s OpenAI, Google DeepMind, Anthropic, Amazon, Mistral, Microsoft, and Meta.

One of the summit attendees, Elon Musk, told UK Prime Minister Rishi Sunak, ‘There will come a point where no job is needed,’ as the billionaire entrepreneur described artificial intelligence as the ‘most disruptive force in history’ in a wide-ranging conversation.

Musk’s views are sometimes unpopular, but he’s right about a few things:

1. The emergence of AI is like the birth of an entirely new species of intelligent beings: it brings onto our planet creatures that require neither sleep, food, money, nor emotional support.

2. New technologies have historically disrupted the job landscape and can exacerbate societal inequalities. But will there come a point where no job is needed? I do not think so.

The term ‘artificial intelligence’ has been in use since 1956, but it gained prominence in 2023 when Collins Dictionary named it the ‘word of the year.’ AI, which has been influencing our lives for years, took a significant leap into the public consciousness in November 2022 with the launch of OpenAI’s ChatGPT.

Optimists like me, on the other hand, believe that AI has great potential to enhance areas such as education, health care, access to justice, scientific discovery, and environmental protection. If it is to do so, and do it in a responsible way, it is vitally important that democratic governments play a bigger role in shaping AI’s future. The Bletchley Declaration is a positive step toward international cooperation in addressing AI’s risks and benefits.

The value of the declaration may be largely symbolic: it signals political leaders’ awareness that AI poses serious challenges and opportunities, and their preparedness to cooperate on appropriate action. But heavy lifting is still needed to translate the declaration’s values into effective regulation.

A Legal and Regulatory Blueprint for the Future:

As businesses increasingly embrace AI technologies, and with users worldwide becoming more reliant on these innovations, responsible AI and cybersecurity have taken center stage as major concerns. In this era of AI integration, it is crucial to prioritize the following AI security and responsible AI essentials as a foundation for a safer future (a brief sketch illustrating steps 3 and 4 follows the list):

  1. Determining and understanding relevant compliance requirements,
  2. Understanding the characteristics of trustworthy and resilient AI based on frameworks like the NIST AI Risk Management Framework,
  3. Identifying threats to the AI ecosystem (privacy concerns, data poisoning, model bias, model security/theft, and more),
  4. Developing controls for the identified threats,
  5. Establishing cybersecurity policies that are tailored to risk level, sector, and use case.
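To make steps 3 and 4 a bit more concrete, below is a minimal sketch, in Python, of a threat-to-control register an organization might keep. The threat names, controls, and risk levels are illustrative assumptions of mine, not terminology mandated by the NIST AI Risk Management Framework or any regulation.

    # Minimal sketch of a threat-to-control register (steps 3 and 4).
    # All threat names, controls, and risk levels below are illustrative
    # assumptions, not requirements from the NIST AI RMF or any regulation.
    from dataclasses import dataclass, field
    from enum import Enum

    class RiskLevel(Enum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3

    @dataclass
    class Threat:
        name: str
        description: str
        risk_level: RiskLevel

    @dataclass
    class Control:
        name: str
        mitigates: list[str]  # names of the threats this control addresses

    @dataclass
    class AIRiskRegister:
        threats: list[Threat] = field(default_factory=list)
        controls: list[Control] = field(default_factory=list)

        def uncovered_threats(self) -> list[Threat]:
            """Return threats with no mapped control, i.e. the gaps to close."""
            covered = {name for c in self.controls for name in c.mitigates}
            return [t for t in self.threats if t.name not in covered]

    register = AIRiskRegister(
        threats=[
            Threat("data_poisoning", "Tampered training data alters model behavior", RiskLevel.HIGH),
            Threat("model_theft", "Extraction of model weights or behavior via the API", RiskLevel.MEDIUM),
            Threat("model_bias", "Systematically unfair outputs for certain groups", RiskLevel.HIGH),
        ],
        controls=[
            Control("training_data_provenance", mitigates=["data_poisoning"]),
            Control("api_rate_limiting", mitigates=["model_theft"]),
        ],
    )

    for gap in register.uncovered_threats():
        print(f"No control mapped for: {gap.name} ({gap.risk_level.name} risk)")

Running this flags model_bias as a HIGH-risk threat with no mapped control; surfacing and closing gaps like that is precisely what steps 4 and 5 are about.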

Relevant legal and regulatory requirements (Part 1):

Countries around the world are actively crafting and enacting AI governance legislation in response to the rapid proliferation and diversification of AI-powered technologies. These legislative endeavors encompass a range of approaches, including the creation of comprehensive legislation, targeted regulations for specific AI applications, and the formulation of voluntary guidelines and standards.

Tracking, unpacking and governing the complex field of global AI governance law and policy has quickly become a top-tier strategic issue for organizations. The International Association of Privacy Professionals’ (IAPP) Global AI Legislation Tracker is a great resource for identifying legislative policy and related developments in a subset of jurisdictions.

[Image: IAPP Global AI Legislation Tracker, iapp.org]

Let’s talk a bit about current legislation:

The European Union (EU) has long been seen as the gold standard of data, privacy, and technology regulation, with a focus on upholding principles such as human rights and consumer protection. The EU AI Act moved to the trilogue stage, where a final version will be debated, in June 2023. Passage of the act is expected by the end of 2023 or in early 2024. Some EU member states have national AI strategies, many of which emphasize research, training and labor preparedness, as well as multi-stakeholder and international collaboration. In addition, the EU has enacted other cybersecurity and privacy regulations that relate to AI systems, as follows:

  1. EU Cybersecurity Act (Enacted)
  2. NIS 2 Directive (Enacted)
  3. GDPR (Enacted)

The U.S. does not have a comprehensive AI regulation, but numerous frameworks and guidelines exist. Very recently, on October 30, 2023, President Joe Biden signed an executive order (EO) on artificial intelligence in an effort to establish a “coordinated, Federal Government-wide approach” to the responsible development and implementation of AI. However, it is worth mentioning that the US is ranked No. 1 on Tortoise’s Global AI Index. To put things in context, the index aims to make sense of artificial intelligence in 62 countries that have chosen to invest in it. It is the first-ever ranking of countries based on three pillars of analysis: investment, innovation, and implementation. Governance aspects are not considered in the index.

[Image: Tortoise Global AI Index, https://www.tortoisemedia.com/intelligence/global-ai/]

The United Kingdom recently hosted the UK AI Safety Summit, even though the country does not yet have a comprehensive AI regulation in place. Instead, the government has put forward a context-based and proportionate approach to regulation, relying on existing sectoral laws to establish guidelines and boundaries for AI systems.

Canada’s proposed Artificial Intelligence and Data Act (AIDA) aims to ensure that high-impact AI systems align with existing safety and human rights standards while also prohibiting reckless and malicious uses of AI. The act would empower the Minister of Innovation, Science, and Industry to enforce these regulations.

China, for its part, announced its Global AI Governance Initiative at the Belt and Road Forum in Beijing, where the country celebrated the 10-year anniversary of its Belt and Road Initiative.

Around the world, governments are seeking or developing what are in effect new blueprints to govern artificial intelligence. There is, of course, no single right approach. Effective regulation often operates most efficiently when it employs both carrots (incentives and safe harbors) and sticks (penalties). For example, the EU AI Act adopts a robust enforcement approach, featuring substantial financial penalties. U.S. AI initiatives, on the other hand, have largely taken a voluntary approach, exemplified by the commitments from AI companies announced by the White House on July 21, 2023. The White House has long been criticized for the lack of comprehensive legislation regulating the ‘big tech’ companies on issues ranging from data and privacy protection to the responsibilities of social media platforms.

In this article, I have tried to give a quick introduction to the most recent legislative developments and initiatives surrounding the development of trustworthy and secure AI.

In the next episode, we will take a deep dive into the characteristics of trustworthy and resilient AI based on frameworks like the NIST AI Risk Management Framework. Stay tuned!
