Generative AI Will Stay Top of Mind in 2024
January 17, 2024
Artificial Intelligence (AI), and generative AI in particular, will remain top of mind in the year to come, as new laws and regulations, agency guidance, continuing and additional AI-related litigation, and new AI partnerships prompt headlines and require companies to keep these issues under continual review.
White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence[1]
On October 30, 2023, the Biden Administration issued a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the Order), which sets forth the Administration’s goals of establishing new or broadened standards for safety and security in the use of artificial intelligence; continuing to strengthen protections for Americans’ privacy and civil rights; bolstering support for American workers; promoting responsible innovation, competition and collaboration; and advancing America’s role as a world leader in AI.[2]
The Order tasks a number of federal departments and agencies with researching, generating, implementing and/or overseeing standards and guidance with respect to AI-related risks in their respective fields, generally through those agencies’ prescribed rulemaking procedures. Some instructions are specific and tailored to particular departments’ activities, such as directing the Small Business Administration to consider prioritizing AI development and research through targeted allocation of grants and funding, while other directives are more general, such as where the Order calls on “relevant agencies” to establish guidelines and limitations on the appropriate use of generative AI and to provide personnel with access to secure and reliable generative AI capabilities.
In addition to enhancing the pre-existing obligations imposed on federal agencies to oversee and implement responsible uses of AI (for example, clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use), the Order establishes a new White House AI Council, composed of the heads of a number of federal agencies and executive offices. The Council will be responsible for coordinating the activities of the various federal agencies to ensure the effective formulation, development, communication and timely implementation of AI-related policies, while ensuring appropriate industry engagement.
The Order sets out myriad timelines for the various agencies to take action as instructed, ranging from 30 to 540 days following the Order, while some instructions have no timeline or are periodic (e.g., annual reporting). It will be important to keep abreast of future rulemaking on this topic, in particular from the Federal Trade Commission and other agencies with investigative powers over companies.
Next Steps for the Private Sector
At this stage, there are very few requirements in the Order that apply to private industry participants. The Order does establish a new reporting scheme for companies developing or planning to develop what the Order considers “dual-use foundation models”: large-scale, widely adaptable models that can be used in a variety of contexts and that could be used, or modified to be used, to threaten national security and economic stability (for example, by lowering the barrier to entry for developing chemical or nuclear weapons, or by enabling powerful offensive cyber-attacks).
For private companies without dual-use foundation models, though, few regulations flow from the Order itself (at least until agencies promulgate regulations in their respective fields that may apply to the private sector). However, there are several areas of potential opportunity for private AI companies to engage with these new policies. A key focus of the Order is drawing AI talent to the U.S. through recruiting programs, fast-tracked visa and interview procedures, and focused immigration policies; this may create hiring opportunities for industry participants seeking AI talent from outside the U.S. There are also funding opportunities centered on small businesses, education programs and employee development programs. Beyond hiring and funding, there are calls for increased government contracting, with the Order encouraging agencies to seek generative AI contracts with market players to optimize their own workforces and programs.
In addition to business opportunities, there are several opportunities for the industry to engage with regulators in the rulemaking process and provide comments. We encourage industry participants to be mindful of, and take advantage of, the opportunity to provide industry insight into future regulation from both a commercial and a technical standpoint.
U.S. Copyright Office and U.S. Patent and Trademark Office Guidance on AI
In March 2023, the U.S. Copyright Office (USCO) launched an initiative to examine the copyright and intellectual property policy issues raised by AI—including both the scope of copyright protection for works generated using AI tools and the use of copyrighted materials to train AI. The initiative responds to requests the USCO has received from Congress and members of the public, including creators and AI users, to examine these issues.
One aspect of the initiative is new registration guidance, which imposes a duty on applicants to disclose AI-generated content in works submitted for copyright registration. The USCO has held, in its guidance and in response to applications for copyright registration, that human authorship remains a requirement for works to be eligible for copyright protection, a position the federal courts have upheld. Where the sole “author” of a work is an AI tool, the work is not protectable under U.S. copyright law. The USCO’s registration guidance, accessible on the USCO website,[3] includes instructions on how to disclose AI-generated content, how to update applications that are already pending and how to correct already-approved registrations to reflect the use of AI.
The U.S. Patent and Trademark Office (USPTO) has also responded to the question of the patentability of inventions created with the use of AI. The USPTO has determined, and federal courts have affirmed, that under U.S. patent law, “inventorship” requires a natural person as the inventor, though whether inventions created by humans with the assistance of AI may be eligible for patent protection has yet to be tested in the courts.
The USPTO recognizes that AI programs are becoming increasingly able to contribute meaningfully to innovation and has created a page of related guidance.[4] This guidance addresses subject matter eligibility, disclosure requirements, examination practices and functional claim limitations. The USPTO has also created a database of issued patents and published patent applications that involve AI technology.[5] Like the USCO, the USPTO has offered a series of trainings for both inventors and examiners, as well as opportunities for participation by industry stakeholders interested in shaping future guidance.
Ongoing AI-Related Litigation in the U.S.
The regulation of AI is still developing in the U.S.—unsurprisingly, at a slower pace than the technology itself, giving rise to a string of litigation as industry actors and stakeholders attempt to decipher how the development and deployment of AI technology intersects with intellectual property rights. The majority of lawsuits brought to date involve copyright infringement claims for the unauthorized use of copyrighted content to train AI models, including cases brought by authors for unauthorized use of their books (e.g., Tremblay v. OpenAI, Silverman v. OpenAI, Kadrey v. Meta, Chabon v. OpenAI, Authors Guild v. OpenAI, Huckabee v. Meta, et al. and Sancton v. OpenAI, to name a few), by artists for use of their artworks (e.g., Andersen v. Stability AI, et al.) and by other content providers for use of their content (e.g., Reuters v. ROSS, J.L. v. Alphabet, Doe v. GitHub, Concord Music Group v. Anthropic PBC and New York Times v. OpenAI). Other claims include violations of publicity rights (Young v. NeoCortext), trademark and trade dress infringement (Getty v. Stability AI, Andersen v. Stability), violations of the Digital Millennium Copyright Act’s provisions on copyright management information (CMI), unfair competition, unjust enrichment, violations of open-source licensing terms, breach of contract and others.
Many of these cases are still in the early stages of litigation, but in some the courts have issued opinions that continue to shape the legal landscape around these burgeoning issues, particularly at the motion-to-dismiss stage, where the variety in pleadings and facts is on display. The judge in Reuters, for example, issued an opinion at the summary judgment stage, denying both motions for summary judgment, reserving on the “fair use” analysis and finding that the issue must go to a jury. The court in Andersen dismissed all claims against two of the three defendants, reiterating the importance of proving unauthorized reproduction and noting that one AI model’s mere use of or reliance on another, already-trained model does not suffice to show direct copyright infringement.[6] Similarly, the court in Kadrey v. Meta granted Meta’s motion to dismiss in full, dismissing the claims that Meta’s LLaMA model is itself an infringing derivative work, that LLaMA’s outputs are infringing derivative works for which Meta could be vicariously liable, that Meta violated the DMCA by omitting plaintiffs’ CMI, and the unfair competition, unjust enrichment and negligence claims. The court’s order left intact only the claim for direct copyright infringement based on LLaMA’s training (which Meta had not moved to dismiss), and plaintiffs ultimately opted not to amend their other claims and to proceed on the direct infringement claim alone. No court has yet reached the merits of the fair use defense, on which many of these cases are likely to turn.
The jurisprudence established through litigation and the forthcoming regulation described above will continue to develop in tandem as the breadth of use cases for AI technology continues to expand.
European Union Reaches Political Agreement on an AI Act
On December 9, 2023, after a period of fraught negotiation, the European Parliament and Council reached a political agreement on the EU’s Artificial Intelligence Act (AI Act).
The AI Act is likely to have a significant impact on the development, provision, distribution and deployment of AI systems in (and relating to) the EU, including (as was the case with the EU General Data Protection Regulation) as a result of extraterritorial aspects of the rules. It will therefore be of significant interest to boards of directors of companies headquartered outside the EU.
The AI Act takes a risk-based approach to the regulation of artificial intelligence and machine learning systems (i.e., a sliding scale of regulatory requirements depending on the level of risk posed by use of such systems). Under this approach, the majority of AI systems are likely to fall into the category of minimal risk (e.g., spam filters) and will not be covered by binding rules under the regulation. The bulk of the obligations under the AI Act (concerning both providers and deployers) will be imposed in respect of AI systems classified as high-risk. Further, a narrow set of AI system applications (e.g., biometric categorization systems that use sensitive characteristics) will be prohibited outright.
Providers of certain types of popular consumer-facing AI systems (e.g., chatbots) will be subject to specific transparency obligations, such as a requirement to make users aware that they are interacting with a machine. Deepfakes and other AI-generated content will also have to be labeled as such, and users will need to be informed when biometric categorization or emotion recognition systems are being used. Additionally, providers will have to design systems in a way that ensures any synthetic audio, video, text, images or other content is marked in a machine-readable format and detectable as artificially generated or manipulated.
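To give a concrete sense of what a “machine-readable” marking could look like in practice, the following is a minimal, purely hypothetical sketch in Python that writes a simple JSON provenance manifest alongside a generated file. The field names and approach are illustrative assumptions only; they are not drawn from the AI Act, any technical standard or any regulator’s guidance.

```python
# Purely illustrative sketch: a hypothetical JSON "provenance manifest" declaring, in a
# machine-readable form, that a piece of content is AI-generated. Field names are assumptions.
import json
import hashlib
from datetime import datetime, timezone

def build_provenance_manifest(content: bytes, generator_name: str) -> dict:
    """Build a small machine-readable record labeling content as AI-generated."""
    return {
        "ai_generated": True,                                   # explicit machine-readable flag
        "generator": generator_name,                            # the tool or model that produced the content
        "content_sha256": hashlib.sha256(content).hexdigest(),  # ties the label to this specific content
        "created_at": datetime.now(timezone.utc).isoformat(),   # timestamp of generation
    }

# Example usage: write a sidecar manifest alongside hypothetical generated content.
generated_content = b"...synthetic image or text bytes..."
manifest = build_provenance_manifest(generated_content, generator_name="example-image-model")
with open("output.provenance.json", "w") as f:
    json.dump(manifest, f, indent=2)
```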
As a next step, a consolidated final text will need to be prepared and formally approved by both the European Parliament and the Council (which could happen as early as Q1 2024). As an EU regulation, the AI Act will apply directly and uniformly across all EU member states. Once it has entered into force, most of the general provisions of the AI Act will apply after a two-year grace period. Failure by companies to comply with the strictest provisions of the AI Act relating to prohibited AI systems may result in fines of up to €35 million or 7% of group global annual turnover (whichever is higher), while non-compliance with most other provisions (including the rules relating to general-purpose AI (GPAI) systems and models) may result in fines of up to €15 million or 3% of group global annual turnover (whichever is higher).
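For a sense of how the “whichever is higher” caps operate, the following is a minimal arithmetic sketch assuming only the figures quoted above; it is illustrative only and is not legal advice or a statement of how regulators will actually calculate fines.

```python
# Illustrative only: computes the maximum possible fine under the caps described above
# (EUR 35M / 7% for prohibited-AI violations; EUR 15M / 3% for most other violations).
def penalty_cap(group_annual_turnover_eur: float, prohibited_ai_violation: bool) -> float:
    """Return the higher of the fixed amount and the turnover-based percentage."""
    fixed, pct = (35_000_000, 0.07) if prohibited_ai_violation else (15_000_000, 0.03)
    return max(fixed, pct * group_annual_turnover_eur)

# Example: a group with EUR 2 billion in global annual turnover.
print(penalty_cap(2_000_000_000, prohibited_ai_violation=True))   # 140000000.0 (7% exceeds EUR 35M)
print(penalty_cap(2_000_000_000, prohibited_ai_violation=False))  # 60000000.0  (3% exceeds EUR 15M)
```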
Developing and Updating Internal Policies and Procedures to Utilize and Implement AI Tools
With the growth of AI, a crucial next step for business entities is updating or developing internal company policies and procedures for the use of AI. Companies utilizing AI tools should stay up to date with legal developments that may impact such use, particularly with respect to data privacy and confidentiality, intellectual property rights, terms and conditions of use, and reporting and recordkeeping, and should take care in updating and developing policies around AI.
Organizations that intend to use personally identifiable information (PII) in connection with AI tools should ensure that such usage has been appropriately disclosed to the relevant consumers at or before the time of collection in a privacy notice. Regulatory enforcement actions underscore the importance of not deceiving consumers about the use of automated tools and how their data may be used to feed or train algorithms, and suggest that reliance on broad disclosures that PII is used to “develop or improve products and services” may be insufficient. The consequences for unlawfully using PII in AI tools can be significant, as the FTC has required companies to delete not only unlawfully obtained data, but also the data products and algorithms developed using such data (known as algorithmic disgorgement).
With respect to internal use of AI tools, for example by employees for internal business purposes, organizations should first determine whether they wish to encourage or discourage the use of AI on the job. In keeping with the Order discussed above, companies in highly regulated industries (e.g., healthcare) or whose activities directly impact other individuals (e.g., credit reporting) may be inclined to restrict the use of AI for the time being, while others in less sensitive industries may seek to implement AI for business reasons (e.g., potentially reducing operating costs). Balancing business needs with implementing and maintaining transparent, ethical and responsible AI practices is the primary consideration in drafting an internal AI policy.
Because the law around AI is developing daily, it may be difficult to ensure continued compliance without dedicating appropriate resources to staying abreast of the ever-evolving legal landscape. In addition to what has already been described arising out of the White House’s Order on AI, in recent months federal and state regulators have promulgated a vast array of guidance and legislation concerning the implementation of AI tools in an attempt to keep pace with this rapidly growing technology. For example, the California Privacy Protection Agency recently released draft regulations (solely for discussion purposes) outlining a potential framework for PII usage in connection with automated decision-making technologies. Under the current draft, entities would be required to provide consumers with “pre-use notices” describing how the business intends to use the consumer’s PII for such technologies, allowing the consumer to decide whether to proceed or opt out, and whether to access more information. Further, the draft regulations provide guidance on the scope of consumer opt-out requests, which would apply primarily in connection with decisions with the potential to have a significant impact on the consumer (e.g., decisions about employment or compensation). Other regulators, such as the Colorado Attorney General, have also released binding regulations regarding the information required in connection with AI-specific data protection impact assessments.
Further, there are intellectual property and confidentiality risks to consider when drafting an internal corporate AI policy. As AI tools become more prevalent and widely adaptable, companies like Samsung and Amazon, as well as financial institutions including JPMorgan Chase, Bank of America and Goldman Sachs, have implemented controls and are drafting and revising policies addressing their institutions’ internal use of ChatGPT and similar AI tools amid growing concerns about potential privacy and regulatory risks.[7] Employees should be informed of the permitted (or prohibited) use and disclosure of proprietary or confidential business information and trade secrets in connection with their use of AI. To avoid missteps, in addition to providing guidance to employees, businesses should conduct diligence on the confidentiality and use practices of any AI programs used by the business in order to confirm that such programs implement proper safeguards for any information shared with the AI tool. Further, in light of the current legal regime for IP protection of AI-generated works or inventions, businesses drafting internal AI policies should consider whether to allow employees to use AI to create or develop work product or inventions, depending on whether the business relies on IP protection (e.g., copyright or patent protection) to safeguard such works. Companies choosing to develop AI models should set strict parameters around how and on what those models may be trained; instruct and monitor employees to ensure compliance; and maintain careful records documenting the ethical sourcing, composition, filtering and use of training data (an illustrative sketch of such a record appears below). On the other hand, companies using third-party AI models should minimize risk by carefully vetting the model selected and ensuring that their intended uses comport with the safety, ethics and privacy interests of their customers.
Companies seeking to train a model on data purchased from a vendor should also take care to seek appropriate representations and warranties as to the source of (and rights to) the data, as well as broad indemnification provisions in case those representations prove unsound and lead to litigation.
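As one way of operationalizing the record-keeping recommendation above, the following is a hypothetical sketch of a training-data sourcing record in Python. The fields and values are illustrative assumptions only, not a prescribed or standard schema.

```python
# Hypothetical sketch of an internal record documenting the sourcing, composition and
# filtering of a training dataset, as suggested above. Field names are assumptions.
from dataclasses import dataclass, field, asdict
from typing import List

@dataclass
class TrainingDataRecord:
    dataset_name: str            # internal identifier for the dataset
    source: str                  # where the data came from (vendor, public corpus, first-party)
    license_or_basis: str        # license terms or other asserted basis for use
    contains_pii: bool           # whether personal information is included
    filters_applied: List[str] = field(default_factory=list)  # cleaning/filtering steps performed
    approved_by: str = ""        # person or committee that signed off on use

# Example usage with entirely hypothetical values.
record = TrainingDataRecord(
    dataset_name="support-tickets-2023",
    source="first-party customer support logs",
    license_or_basis="customer terms of service (hypothetical)",
    contains_pii=True,
    filters_applied=["PII redaction", "deduplication"],
    approved_by="AI governance committee",
)
print(asdict(record))
```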
[1] Several states, such as New York and Connecticut, are also beginning to pass laws that address AI uses in their jurisdictions. These cover a variety of applications, such as criminal justice, employment, loans and education.
[2] For additional details, see our November 2023 alert memo available here.
[3] See U.S. Copyright Office, “Copyright Office Launches New Artificial Intelligence Initiative” (March 16, 2023), available here.
[4] See U.S. Patent and Trademark Office, “AI-related patent resources” (last updated May 27, 2022), available here.
[5] See U.S. Patent and Trademark Office, “Artificial Intelligence Patent Dataset” (last updated December 29, 2022), available here.
[6] For additional information, see our November 2023 blog post available here.
[7] See Siladitya Ray, “Samsung Bans ChatGPT Among Employees After Sensitive Code Leak” (May 2, 2023), available here.