In May 2024, the Bipartisan Senate AI Working Group released a roadmap to guide artificial intelligence (AI) policy in several sectors of the US economy, including intellectual property (IP). The group, which includes Senate Majority Leader Chuck Schumer (D-NY) and Senators Mike Rounds (R-SD), Martin Heinrich (D-NM) and Todd Young (R-IN), acknowledged the competing interests of positioning the United States as a global leader in AI inventions while also protecting against copyright infringement and deepfake replicas. According to the Working Group, a careful balance can be achieved by establishing two requirements for generative AI systems: transparency and explainability.

Under the current regime, AI inventors may hesitate to reveal datasets used to train their models or to explain the software behind their programs. Their reluctance stems from a desire to avoid potential liability for copyright infringement, which may arise when programmers train AI systems with copyrighted content (although courts have yet to determine whether doing so constitutes noninfringing fair use). Such secrecy leaves artists, musicians and authors without credit for their works and inventors without open-source models for improving future AI inventions. The Working Group proposed protecting AI inventors against copyright infringement while simultaneously requiring them to disclose the material on which their generative models are trained. Such transparency would provide much-needed acknowledgment and credit to holders of copyrights on content used to train the generative AI models, according to the Working Group. Although attributing credit does not absolve an alleged infringer of liability under the current legal framework, such a disclosure (even without a legislative safe harbor) may promote a judicial finding of fair use. The Working Group also identified the potential for a compulsory licensing scheme to compensate those whose work is used to improve generative AI models.

The roadmap also recommended a mechanism for protecting against AI-generated deepfakes. Under the Lanham Act, individuals are protected against the use of their name, image and likeness to falsely suggest their endorsement or sponsorship of goods and services. But deepfakes often avoid liability by depicting individuals in humorous or salacious misrepresentations that make no reference to goods or services. The Working Group advised Congress to consider legislation that protects against deepfakes in a manner consistent with the First Amendment. Deepfake categories of particular concern included the “non-consensual distribution of intimate images,” fraud and other deepfakes with decidedly “negative” outcomes for the person being mimicked.

If Congress legislates in accordance with the roadmap, the transparency and explainability requirements for generative AI could impact IP law by creating a safe harbor against copyright infringement liability. Similarly, an individual’s name, image, likeness and voice could emerge as a new form of IP protectable against deepfakes.

Nick DiRoberto, a summer associate in the Washington, DC, office, also contributed to this blog post.