Legal Governance of Artificial Intelligence: Strategies for Compliance under Emerging U.S. and European Regulation
To minimize business risks and avoid potential liabilities under emerging legal regulation, every business that uses or provides Artificial Intelligence (AI) tools must invest in policies, procedures, education and compliance with a myriad of different rules across the United States and in Europe. Pending a definitive U.S. federal law unifying disparate state procedures and emerging federal agency rules, U.S. compliance will be increasingly redundant, confusing, demanding and costly. For business owners, managers, investors and their legal advisors, this article identifies many pending or enacted laws governing AI.
European Union Artificial Intelligence Act
The EU has adopted the EU AI Act, which regulates “deployers” of AI based on the severity and scope of the risks of harms, focusing on cross-industry data protection, consumer protection, an individual’s fundamental rights, employment, and protection of workers and product safety. The EU AI Act does not cover AI systems used solely for military, defense, national security or scientific research and development. The EU AI Act adopts several core principles:
- AI Literacy. As a tool for informed consent, “AI literacy should equip providers, deployers and affected persons with the necessary notions to make informed decisions regarding AI systems. Those notions may vary with regard to the relevant context and can include understanding the correct application of technical elements during the AI system’s development phase, the measures to be applied during its use, the suitable ways in which to interpret the AI system’s output, and, in the case of affected persons, the knowledge necessary to understand how decisions taken with the assistance of AI will have an impact on them. In the context of the application of this Regulation, AI literacy should provide all relevant actors in the AI value chain with the insights required to ensure the appropriate compliance and its correct enforcement.” AI Act, Preamble, Para. 20.
- Privacy and Data Governance. AI systems must be developed and used in accordance with privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity.
- Transparency. “Transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights.” Id., Para. 27.
- Diversity, Non-discrimination and Fairness. AI systems should be “developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.” Id.
- Social and environmental well-being. AI systems should be “developed and used in a sustainable and environmentally friendly manner as well as in a way to benefit all human beings,” with ongoing assessments.
United States Federal Law
In the U.S., federal and state governments are pursuing similar principles and policies based on President Biden’s 2023 executive order on AI (Executive Order 14110).
Federal legislation is limited. It does define “artificial intelligence” and “machine learning” in the National Artificial Intelligence Initiative Act of 2020 at sections 5002(3) and 5002(11): “The term ‘artificial intelligence’ means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to— (A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action.” The National Institute of Standards and Technology (NIST) has made recommendations for AI governance.
These definitions have already been adopted by the U.S. Equal Employment Opportunity Commission (EEOC), the U.S. Department of Justice (DOJ), and numerous other federal and state agencies. Under the Supreme Court’s Loper Bright decision, which overruled Chevron deference, courts will no longer defer to federal agencies’ interpretations of ambiguous statutes, so litigation may challenge emerging rules.
California State Laws
Taking a broad view, California and New York are leading the way in the absence of any uniform federal AI management laws or regulations. Other states have adopted focused rules.
In California, Governor Gavin Newsom signed several AI laws in September 2024. Most significantly, favoring a risk-weighted approach to AI governance mandates, he vetoed S.B. 1047, which would have imposed several regulatory compliance mandates on low-risk “basic” AI activities. California now has AI laws across a wide spectrum:
- AI Literacy: Definition of AI. California law uniformly defines the term “artificial intelligence” to mean “an engineered or machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs that can influence physical or virtual environments.”
- Risk-Weighted Regulation. Under the California Generative AI Accountability Act, California’s Government Operations Agency and other agencies are directed to analyze risks that AI will replace human judgment, particularly for critical infrastructures such as energy and power platforms. A “High-risk automated decision system” means “an automated decision system that is used to assist or replace human discretionary decisions that have a legal or similarly significant effect, including decisions that materially impact access to, or approval for, housing or accommodations, education, employment, credit, health care, and criminal justice.”
- Transparency as to Training Data. AI developers must post, by January 1, 2026, information on the data used to train the AI system or service on their websites for California consumers. Also, the developers of covered GenAI systems must both include provenance disclosures in the original content their systems produce and make tools available to identify GenAI content produced by their systems. S.B. 942.
- Biometric Privacy: Personal information now includes biometric and other unique identifiers stored in AI systems, amending the California Consumer Privacy Act.
- Limitation on Uses of Biometric Digital Copies. In Hollywood and elsewhere in California, an agreement for the performance of personal or professional services that expressly allows for the use of a digital replica of an individual’s voice or likeness is unenforceable if it does not include a reasonably specific description of the intended uses of the replica and the individual is not represented by legal counsel or by a labor union.
- False Images and Sounds. California now prohibits a person from producing, distributing, or making available the digital replica of a deceased personality’s voice or likeness in an expressive audiovisual work or sound recording without prior consent, except as provided. Likewise, businesses hiring individuals for personal or professional services must obtain informed consent.
- Pornography and Sex: Existing child pornography statutes are expanded to include matter that is digitally altered or generated by the use of AI (“child sexual abuse material” or CSAM). California now makes it a crime to intentionally create and distribute a sexually explicit image of another identifiable person, created in a manner that would cause a reasonable person to believe the image is authentic, where the distributor knows or should know that distribution will cause serious emotional distress and the person depicted suffers that distress. Social media platforms serving Californians must establish a mechanism for reporting and removing “sexually explicit digital identity theft.”
- Politics and Elections. Several laws focus on political persuasion.
- Political Persuasive Activities. California now requires committees that create, publish, or distribute a political advertisement containing any image, audio, or video generated or substantially altered using AI to include a disclosure in the advertisement stating that the content has been so altered. Distributing an advertisement or other election material containing deceptive AI-generated or manipulated content within 120 days before an election is prohibited.
- Defending Democracy from Deepfake Deception Act of 2024. Large online platforms with at least one million California users must remove materially deceptive and digitally modified or created content related to elections, or label that content, during specified periods before and after an election, if the content is reported to the platform. Victims may obtain injunctive relief.
- AI Literacy. Under the California Generative AI Accountability Act, California’s educational system must adapt and teach about AI.
- Transparency to Users. Under the California AI Transparency Act, SB 942, AI licensors must disclose source materials in a manner reasonably understandable by an ordinary person. If an AI licensee removes such disclosures, the AI licensor must revoke the license within 96 hours.
- Health Care Industry: California health plans and insurers must make health decisions (and utilization review and management) based on human choices, not automated tools. Also, specified health care providers must disclose the use of GenAI when it is used to generate communications to a patient pertaining to patient clinical information.
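For AI providers building compliance workflows around deadlines such as the SB 942 96-hour license-revocation window noted above, the underlying arithmetic is simple date math. The following is a minimal illustrative sketch, assuming a Python-based compliance tracker; the function and variable names are ours, not statutory terms:

```python
from datetime import datetime, timedelta

# 96-hour revocation window, per SB 942 as summarized above
REVOCATION_WINDOW = timedelta(hours=96)

def revocation_deadline(notice_received: datetime) -> datetime:
    """Illustrative only: latest time to revoke a licensee's access after
    learning the licensee removed required AI-content disclosures."""
    return notice_received + REVOCATION_WINDOW

# Notice received Monday, Jan. 5, 2026 at 9:00 a.m. -> deadline 4 days later
print(revocation_deadline(datetime(2026, 1, 5, 9, 0)))  # 2026-01-09 09:00:00
```

In practice, a covered licensor would pair this deadline calculation with monitoring for removed disclosures; the statute itself, not this sketch, governs the actual obligation.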
New York’s Emerging Regulation of Artificial Intelligence
New York City restricts employers’ reliance on automation in hiring decisions. Local Law 144 of 2021 prohibits employers and employment agencies from using an automated employment decision tool (AEDT) unless the tool has been subject to a bias audit within one year of the use of the tool, information about the bias audit is publicly available, and certain notices have been provided to employees or job candidates.
New York State is considering over 60 bills on AI. Top contenders for enactment include five bills:
- Bill of Rights for New York Residents to ensure that any system making decisions without human intervention that impact their lives does so lawfully, properly, and with meaningful oversight. (2023-AB129). Among these rights and protections, if the bill were enacted, are (i) the right to safe and effective systems; (ii) protections against algorithmic discrimination; (iii) protections against abusive data practices; (iv) the right to have agency over one’s data; (v) the right to know when an automated system is being used; (vi) the right to understand how and why an automated system contributed to outcomes that impact one; (vii) the right to opt out of an automated system; and (viii) the right to work with a human in the place of an automated system. (In Committee in Assembly).
- Transparency: Mandating Warnings on Generative AI Systems (S9450 / S101013-B). This bill would amend the General Business Law to require mandatory warnings on generative AI system interfaces. Its definitions of AI and Generative AI are significant and could set standards for other jurisdictions.
- It would define AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action. The definition includes but is not limited to systems that use machine learning, large language model, natural language processing, and computer vision technologies, including generative AI.”
- It would define a “Generative artificial intelligence system” as “any artificial intelligence system whose primary function is to generate content, which can take the form of code, text, images, and more.”
- For breach, it would impose civil fines of $25 per user, up to $100,000 total. (Passed Senate June 6, 2024; awaiting Assembly action).
- Transparency: Requiring Disclosure of [AI-Generated] “Synthetic Performers” (not Real Humans) in Advertisements. (A 216-C / S6859-A). This bill would define “generative artificial intelligence” (among other terms) as technology that “performs tasks under varying and unpredictable circumstances without significant human oversight.” A “synthetic performer” is a computer-generated performance that gives the impression of being performed by a natural person. The bill would require disclosure of the performer’s artificial character in any advertisement, whether online, on radio or on television. Fines for violation would range from $1,000 to $5,000. (Pending in Senate and Assembly).
- Protecting Public Officials from Unauthorized AI Depictions. (A10652). The owner, licensee or operator of a visual or audio generative artificial intelligence system would be required to implement a reasonable method to prohibit its users from creating unauthorized realistic depictions of a covered person (public official, candidate or a representative). Fines of $100 per depiction, up to $100,000 total, would apply. (Pending in Assembly).
- New Commission on AI, Robotics and Automation. (S8138-A / A9559). This bill would create a temporary commission of 14 members to examine the impact of AI, robotics, automation and related technologies across multiple sectors. The commission’s comprehensive study, due before December 31, 2025, could serve as a model for other states, much as the EU AI Act does for Europe. (Passed in Senate; pending in Assembly).
- Compliance Management through AI-Focused Policies and Procedures: Supply-Chain Management. The New York State Department of Financial Services (DFS) reminds covered entities to include compliance requirements in all subcontracts with third-party service providers:
One of the most important requirements for combating AI-related risks is to maintain third-party service provider (TPSP) policies and procedures that include guidelines for conducting due diligence before a Covered Entity uses a TPSP that will access its Information Systems and/or NPI [non-public information]. When doing so, DFS strongly recommends Covered Entities consider, among other factors, the threats facing TPSPs from the use of AI and AI-enabled products and services; how those threats, if exploited, could impact the Covered Entity; and how the TPSPs protect themselves from such exploitation.
Covered Entities’ TPSP policies and procedures should address the minimum requirements related to access controls, encryption, and guidelines for due diligence and contractual protections for TPSPs with access to Information Systems and/or NPI. In addition, Covered Entities should require TPSPs to provide timely notification of any Cybersecurity Event that directly impacts the Covered Entity’s Information Systems or NPI held by the TPSP, including threats related to AI. Moreover, if TPSPs are using AI, Covered Entities should consider incorporating additional representations and warranties related to the secure use of Covered Entities’ NPI, including requirements to take advantage of available enhanced privacy, security, and confidentiality options.
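For compliance teams that track vendor reviews programmatically, the DFS expectations summarized above can be captured as a simple due-diligence checklist. The sketch below is illustrative only, assuming a Python-based vendor-management tool; the field and method names are ours, not DFS terminology:

```python
from dataclasses import dataclass

@dataclass
class TPSPReview:
    """Illustrative due-diligence record for a third-party service provider
    (TPSP), loosely mirroring the DFS guidance summarized above."""
    vendor: str
    accesses_npi: bool                   # touches non-public information?
    uses_ai: bool
    access_controls_reviewed: bool = False
    encryption_reviewed: bool = False
    breach_notice_clause: bool = False   # timely Cybersecurity Event notice
    ai_security_reps: bool = False       # AI-specific reps and warranties

    def open_items(self) -> list[str]:
        """List outstanding diligence or contract tasks for this vendor."""
        items = []
        if not self.access_controls_reviewed:
            items.append("review access controls")
        if not self.encryption_reviewed:
            items.append("review encryption")
        if not self.breach_notice_clause:
            items.append("add breach-notification clause")
        if self.uses_ai and not self.ai_security_reps:
            items.append("add AI security representations")
        return items

# Hypothetical vendor that uses AI and touches NPI; access controls reviewed
review = TPSPReview(vendor="ExampleVendor", accesses_npi=True, uses_ai=True,
                    access_controls_reviewed=True)
print(review.open_items())
```

A checklist like this does not substitute for the contractual representations and warranties the guidance describes; it merely helps ensure no required review step is skipped before a TPSP engagement proceeds.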
AI’s Impact: Possible New Social Justice through New Common Law and Legal Protections
Just as the industrial revolution of the 1800s and early 1900s ushered in new concepts of tort law to protect workers (negligence, strict liability and intentional tort), the AI age may generate new litigation and theories of legal protection, whether at common law or by statute.
Under employment law and unemployment compensation laws, AI will impact workers displaced from jobs whose tasks can be performed mechanically or by AI. For corporate non-compliance with AI regulations, shareholders can pursue directors and officers for misfeasance and malfeasance, impacting D&O and E&O insurance coverage. Consumer class actions may seek damages under Rule 23 of the Federal Rules of Civil Procedure. Treble-damage “racketeering” claims may be asserted for fraud in interstate commerce. In short, AI invites regulation and litigation, with substantial costs to a business.
Ultimately, fully compliant AI will require human judgment based on fact-checking, as well as protection of third-party rights affected by AI operations.
If you would like to discuss any concerns about cybersecurity, data protection, supply-chain and AI-related matters, please reach out to attorney William Bierce or the Bierce & Kenerson, PC attorney with whom you most frequently work.
This alert is not a substitute for advice of counsel on specific legal issues. Dated: Oct. 23, 2024