UK Considers Granting Artists ‘Right to Personality’ Amid Rise of Generative AI

The UK government is considering a new “right to personality” that could grant artists safeguards against generative artificial intelligence models capable of mimicking their styles.

Per the Financial Times, the Labour administration today launched a review of how AI companies train their technology by scraping digital content, a process that has already sparked controversy among creators in the UK and US. The government is expected to propose new legislation based on the review’s findings within the next two years.


The consultation reportedly aims to ban the development of AI tools that would allow users to replicate, or come very close to replicating, the image, distinguishing features, or voice of public figures and groups. It also includes plans to provide creators with an improved rights mechanism, which in this context means that AI companies such as OpenAI might need to secure licensing agreements with artists before scraping their copyrighted material. UK and EU ministers, however, must ensure that creators who opt out of data scraping aren’t inadvertently penalized by having the visibility of their content reduced online.

The announcement of the consultation follows the December 16 public release of OpenAI’s Sora text-to-video generation tool, which allows users to generate videos of up to 20 seconds from a brief text prompt. Even before the release, artists and content creators had called for legal intervention regarding Sora, with many voicing concerns about how data scraping was used to train the tool.

In November, a group of visual artists, filmmakers, and graphic designers who received early access to Sora released a copy of the AI tool on an open-source platform and published a scathing rebuke of OpenAI, the company also behind ChatGPT. The letter claimed that the company invited 300 creators to test-run the product but failed to adequately compensate them for their work, and even engaged in artistic censorship, all with the intention of “art washing” the company’s image.

Earlier this year, more than 100 leading artificial intelligence researchers signed an open letter voicing concerns that generative AI companies could stifle independent research. The experts warned that opaque company protocols designed to stop fraud or the generation of fabricated news could have an unintended effect: independent investigators safety-testing AI models could be banned from the platforms or sued. The letter called on prominent firms, including OpenAI, Meta, and Midjourney, to improve their transparency and give auditors an avenue to check for potential legal issues, like copyright violations.

“Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned types of research aimed at holding them accountable,” the letter reads.
