Mapping Approaches to Regulate AI

Summary: This article provides a comprehensive overview of current efforts to regulate AI at the national, bilateral, and multilateral levels. It highlights the divergent views between the US and EU on AI regulation and explores the EU-US Trade and Technology Council's joint roadmap, the OECD Principles on AI, the G7 Communiqué, the EU's Artificial Intelligence Act, the US NIST Framework, private sector commitments, and alternative legislative options in the US.

FutureCar Staff    Aug 15, 2023 5:32 PM PT

Artificial Intelligence (AI) has become a disruptive force in geopolitics and global business, but its regulatory landscape is complicated by conflicting political priorities and business interests. The US and EU hold different views on AI, which has produced regulatory frameworks that differ in both scope and the stringency of their compliance mechanisms.

The EU-US Trade and Technology Council (TTC) has worked to align the two sides' views on AI regulation. Working Group 1 of the TTC has produced a joint AI roadmap that agrees on a high-level taxonomy centered on "Trustworthy AI." The EU defines Trustworthy AI as lawful, ethical, and robust, while the US emphasizes validity, reliability, safety, accountability, transparency, privacy, and fairness. Despite these differences, there is potential for future agreement.

However, the joint roadmap falls short of full alignment on regulatory policy. One limitation is the two sides' different approaches to risk assessment: the EU distinguishes four categories of risk, while the US bases risk assessment on likelihood. Another potential point of conflict is the EU's consideration of an AI Voluntary Code, similar to the voluntary commitments made by US-based AI companies.
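
To make the contrast concrete, here is a minimal Python sketch of the two risk-assessment styles. The tier names follow the AI Act's four categories, but the use-case mappings and the scoring function are illustrative assumptions, not drawn from either legal text.

```python
from enum import Enum

class EURiskTier(Enum):
    """The AI Act's four risk categories (obligations paraphrased)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before deployment"
    LIMITED = "transparency obligations"
    MINIMAL = "largely unregulated"

def eu_classify(use_case: str) -> EURiskTier:
    """EU-style: the tier is fixed by the use case itself.
    This mapping is a hypothetical example, not the Act's actual annexes."""
    prohibited = {"social scoring"}
    high_risk = {"hiring", "credit scoring"}
    if use_case in prohibited:
        return EURiskTier.UNACCEPTABLE
    if use_case in high_risk:
        return EURiskTier.HIGH
    return EURiskTier.LIMITED if use_case == "chatbot" else EURiskTier.MINIMAL

def likelihood_based_risk(likelihood: float, impact: float) -> float:
    """US/NIST-style: risk as a score combining likelihood and impact,
    both in [0, 1]; thresholds would be set per context."""
    return likelihood * impact

print(eu_classify("hiring").name)        # HIGH
print(likelihood_based_risk(0.3, 0.9))   # 0.27, interpreted against a threshold
```

The sketch surfaces the practical difference: under the EU model a system's obligations are fixed by its category, while under a likelihood-based model the same system can move up or down the scale as measurements change.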

The OECD has developed the Principles on Artificial Intelligence, the first intergovernmental policy guidelines on AI. These principles aim to ensure AI systems are robust, safe, fair, and trustworthy. The framework includes five guiding principles and policy recommendations for governments, including investment in AI research, digital infrastructure, and skills development.

The G7 has also committed, in its summit communiqué, to aligning international standards on AI. While the G7 is not itself a standards-setting organization, it has delegated that work to the appropriate agencies and aims to preserve an open and enabling environment for AI development.

The EU is leading on AI regulation with the Artificial Intelligence Act (AI Act), which sets a benchmark for AI legislation. The Act classifies AI systems into four risk categories (unacceptable, high, limited, and minimal) and addresses both open- and closed-source systems. Open questions remain, however, about AI oversight, implementation, and the compliance burden on small businesses and startups.

In the US, non-binding frameworks and voluntary company commitments are the main instruments for ensuring safe and trustworthy AI systems. The National Institute of Standards and Technology (NIST) has released the AI Risk Management Framework, a voluntary roadmap for AI actors. Private sector commitments include testing AI systems, sharing risk management information, investing in cybersecurity, and addressing societal challenges.
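
The NIST framework organizes risk management around four core functions: Govern, Map, Measure, and Manage. The framework prescribes outcomes rather than code, so the sketch below, with hypothetical activity names, only illustrates how an organization might track those functions.

```python
from dataclasses import dataclass, field

@dataclass
class RMFChecklist:
    """The four core functions of the NIST AI Risk Management Framework.
    Activity lists are hypothetical examples, not text from the framework."""
    govern: list[str] = field(default_factory=lambda: [
        "assign accountability for AI risk",
        "document the organization's risk tolerance",
    ])
    map: list[str] = field(default_factory=lambda: [
        "inventory AI systems and their contexts of use",
    ])
    measure: list[str] = field(default_factory=lambda: [
        "test systems for validity, reliability, and bias",
    ])
    manage: list[str] = field(default_factory=lambda: [
        "prioritize and respond to identified risks",
    ])

checklist = RMFChecklist()
for function, items in vars(checklist).items():
    print(f"{function.upper()}: {items}")
```

Because the framework is voluntary, a structure like this carries no compliance weight; its value is in making risk-management activities explicit and auditable.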

Alternative approaches in the US include proposed legislation to modify Section 230, a proposal to establish an Office of Global Competition Analysis, and the SAFE Innovation Framework. These efforts focus on issues such as liability, global competition, and security in AI development.

The EU and the US thus take different approaches to AI regulation, with the EU prioritizing regulation and the US emphasizing innovation. This lack of alignment risks market divergence and could hamper the development of AI. It is crucial for transatlantic leaders to find a shared approach that bridges their differences.
