A Look at the Evolving Scope of Transatlantic AI Regulations

The regulatory landscape surrounding artificial intelligence (AI) is changing significantly on a global scale. New measures from governments worldwide are coming online, including the United States (U.S.) government’s executive order on AI, California’s proposed regulations, the European Union’s AI Act, and emerging developments in the United Kingdom, all of which contribute to this evolving environment.

The European Union (EU) AI Act and the U.S. Executive Order on AI both aim to ensure that AI is developed and used safely, securely, and with respect for fundamental rights, yet their approaches are markedly different. The EU AI Act establishes a binding legal framework across EU member states, applies directly to businesses in the AI value chain, classifies AI systems by risk, and imposes significant fines for violations. In contrast, the U.S. Executive Order functions more as a guideline while federal agencies develop AI standards and policies. It prioritizes AI safety and trustworthiness but lacks specific penalties, relying instead on voluntary compliance and agency collaboration.

The EU approach includes detailed oversight and enforcement, while the U.S. approach encourages the adoption of new standards and international cooperation but is less prescriptive. Despite their shared objectives, differences in regulatory approach, scope, enforcement, and penalties could lead to divergent AI governance standards between the two regions.

There has also been collaboration on an international scale. Antitrust officials at the U.S. Department of Justice (DOJ), the U.S. Federal Trade Commission (FTC), the European Commission, and the UK’s Competition and Markets Authority recently agreed to monitor AI and its risks to competition. The agencies issued a joint statement in which all four antitrust enforcers pledged “to remain vigilant for potential competition issues” and to use their agencies’ powers to provide safeguards against the use of AI to undermine competition or enable unfair or deceptive practices.

The regulatory landscape for AI is evolving in real time as the technology develops at a record pace. As regulations strive to keep up, companies that develop or deploy AI face real challenges and risks. It is therefore critical that business leaders understand regulatory changes on an international scale, adapt, and stay compliant to avoid potentially significant penalties and reputational damage.

The U.S. Federal Executive Order on AI

In October 2023, the Biden Administration issued an executive order to foster responsible AI innovation. The order outlines several key initiatives, including promoting ethical, trustworthy, and lawful AI technologies, and calls for collaboration among federal agencies, private companies, academia, and international partners to advance AI capabilities and realize their myriad benefits. It emphasizes the need for robust frameworks to address potential AI risks such as bias, privacy concerns, and security vulnerabilities. The order also directs a range of sweeping actions, including establishing new standards for AI safety and security; calling on Congress to pass bipartisan data privacy legislation to protect Americans’ privacy from the risks posed by AI; promoting the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges; and ensuring responsible government deployment of AI, along with modernization of the federal AI infrastructure through the rapid hiring of AI professionals.

At the state level, Colorado and California are leading the way. Colorado enacted the first comprehensive state-level AI regulation with the Colorado Artificial Intelligence Act (Senate Bill (SB) 24-205), signed into law by Governor Jared Polis on May 17, 2024. As our team previously outlined, the Colorado AI Act is comprehensive, requiring developers and deployers of “high-risk artificial intelligence systems” to adhere to a host of obligations, including disclosures, risk management practices, and consumer protections. The law goes into effect on February 1, 2026, giving companies well over a year to adapt.

In California, a host of proposed AI regulations focusing on transparency, accountability, and consumer protection would require the disclosure of information such as AI systems’ functions, data sources, and decision-making processes. For example, AB 2013, introduced on January 31, 2024, would require developers of an AI system or service made available to Californians to post documentation of the datasets used to train the system or service on the developer’s website.

SB 970, another bill introduced in January 2024, would require any person or entity that sells or provides access to AI technology designed to create synthetic images, video, or voice to warn consumers that misuse of the technology may result in civil or criminal liability for the user.

Finally, on July 2, 2024, the California State Assembly Judiciary Committee passed SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), which would regulate AI models based on their complexity.

The European Union’s AI Act

The EU is leading the way in AI regulation through its AI Act, which establishes a framework that represents Europe’s first comprehensive attempt to regulate AI. The AI Act was adopted to promote the uptake of human-centric and trustworthy AI while ensuring a high level of protection for health, safety, and fundamental rights against the harmful effects of AI systems in the EU, and to support innovation.

The AI Act sets forth harmonized rules for the release and use of AI systems in the EU; prohibitions of certain AI practices; specific requirements for high-risk AI systems and obligations for their operators; harmonized transparency rules for certain AI systems; harmonized rules for the release of general-purpose AI models; rules on market monitoring, market surveillance, governance, and enforcement; and measures to support innovation, with a particular focus on small and medium-sized enterprises (SMEs), including startups.

The AI Act classifies AI systems into four risk levels: unacceptable, high, limited, and minimal. Applications that pose an unacceptable risk, such as government social scoring systems, are banned outright. High-risk applications, such as CV-scanning tools, face stringent regulations to ensure safety and accountability. Limited-risk applications are those where the use of AI may not be apparent to the user, and the AI Act imposes transparency obligations on them: people should be informed when they are interacting with an AI system (such as a chatbot) rather than a human, so they can make an informed decision about whether to continue. The AI Act allows the free use of minimal-risk AI, such as AI-enabled video games or spam filters, a category that covers the vast majority of AI systems currently used in the EU.
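To make the tiered scheme concrete, below is a minimal, hypothetical sketch (in Python) of how a compliance team might triage an internal inventory of AI systems against the Act’s four tiers. The category names and tier assignments are illustrative assumptions drawn from the examples above, not legal determinations.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # stringent obligations (e.g., CV screening)
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # free use (e.g., spam filters, video games)

# Illustrative mapping of system categories to tiers; a real classification
# requires legal analysis of the system against the Act itself.
ILLUSTRATIVE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def triage(system_category: str) -> RiskTier:
    """Return the assumed risk tier for a system category, defaulting to
    HIGH so that unknown systems get reviewed rather than ignored."""
    return ILLUSTRATIVE_TIERS.get(system_category, RiskTier.HIGH)

if __name__ == "__main__":
    for category in ("customer_chatbot", "cv_screening", "new_recommender"):
        print(f"{category}: {triage(category).value}")
```

Defaulting unclassified systems to the high-risk tier is a conservative design choice: anything not yet reviewed is routed toward scrutiny rather than allowed to slip through.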

The adoption of the AI Act has not come without criticism from major European companies. In an open letter, roughly 150 executives raised concerns over the heavy regulation of generative AI and foundation models, fearing that increased compliance costs and hindered productivity would drive companies away from the EU. Despite these concerns, the AI Act is here to stay, and companies would be wise to prepare for compliance by assessing their systems.

Recommendations for Global Businesses

As governments and regulatory bodies worldwide implement diverse AI regulations, companies can adopt strategies that both ensure compliance and proactively mitigate risk. Global businesses should consider the following recommendations:

  1. Risk Assessments: Conduct thorough risk assessments of your AI systems to align with the EU’s classification scheme and the U.S.’s focus on safety and security. In particular, assess the safety and security of systems categorized as high-risk under the EU’s AI Act. This proactive approach will not only help you meet regulatory requirements but also protect your business from potential sanctions as the legal landscape evolves (a simple illustrative sketch of such an assessment record follows this list).
  2. Compliance Strategy: Develop a compliance strategy that specifically addresses the most stringent aspects of the EU and U.S. regulations.
  3. Legal Monitoring: Stay on top of evolving best practices and guidelines. Monitor regulatory developments in the regions where your company operates so you can adapt to new requirements and avoid penalties, and engage with policymakers and industry groups to stay ahead of compliance requirements. Participation in public consultations and industry forums can provide valuable insights and influence regulatory outcomes.
  4. Transparency and Accountability: Prioritize transparency and accountability in AI development to meet ethical and regulatory expectations. Ensure AI systems are transparent, with clear documentation of data sources, decision-making processes, and system functionality, and put accountability measures in place, such as regular audits and impact assessments.
  5. Data Governance: Implement robust data governance measures to meet the EU’s requirements and align with the U.S.’s emphasis on trustworthy AI. Establish governance structures that ensure compliance with federal, state, and international AI regulations, including appointing compliance officers and developing internal policies.
  6. Invest in Ethical AI Practices: Develop and deploy AI systems that adhere to ethical guidelines, focusing on fairness, privacy, and user rights. Ethical AI practices support compliance, build public trust, and enhance brand reputation.
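As a companion to recommendations 1, 4, and 5, here is a minimal sketch (in Python, assuming version 3.10+) of what one record in a hypothetical AI system inventory might capture. All field names and the audit-age threshold are illustrative assumptions, not terms defined by the EU AI Act, the executive order, or any state law.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI system inventory, capturing the
    documentation the recommendations above call for. Field names are
    illustrative assumptions, not regulatory terms of art."""
    name: str
    purpose: str
    risk_tier: str                  # e.g., "high" under the EU AI Act
    data_sources: list[str] = field(default_factory=list)
    decision_logic_documented: bool = False
    compliance_owner: str = ""      # appointed per recommendation 5
    last_audit: date | None = None  # periodic audits per recommendation 4

    def audit_overdue(self, max_age_days: int = 365) -> bool:
        """Flag systems never audited, or audited more than a year ago."""
        if self.last_audit is None:
            return True
        return (date.today() - self.last_audit).days > max_age_days

# Example: a high-risk system with no audit on record gets flagged.
record = AISystemRecord(
    name="resume-screener",
    purpose="Rank job applicants",
    risk_tier="high",
    data_sources=["applicant CVs", "historical hiring outcomes"],
)
print(record.name, "audit overdue:", record.audit_overdue())
```

Keeping an inventory like this current is what makes the other recommendations actionable: the risk tier drives which obligations apply, and the audit flag surfaces systems that need attention before a regulator asks.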

© 2024 Foley & Lardner LLP. By Natasha Allen and David W. Kantaros of Foley & Lardner LLP. For more news on AI Regulations, visit the NLR Consumer Protection section.
