Denmark to Amend Copyright Law to Combat AI Deepfakes

Denmark plans to amend its copyright law to give citizens control over their likeness and voice, aiming to combat AI-generated deepfakes.

The Danish government is set to amend its copyright law to address the growing issue of AI-generated deepfakes. The legislative change will grant individuals in Denmark the right to their own body, facial features, and voice, effectively giving them control over their digital likeness. The proposed amendment, which has broad cross-party support, is expected to be submitted for consultation before the summer recess and formally introduced in the autumn.

The initiative aims to protect individuals from unauthorized digital imitations, allowing them to demand that online platforms remove such content if it was shared without their consent. The Danish culture minister, Jakob Engel-Schmidt, emphasized that the bill sends a clear message that everyone has the right to their own identity, a protection not afforded by existing law.

The proposed law will also cover realistic, digitally generated imitations of an artist's performance created without consent, with potential compensation for those affected by violations. Parody and satire, however, will remain unaffected by the new rules. The Danish government hopes this pioneering legislation will inspire other European countries to adopt similar measures, particularly during Denmark's upcoming EU presidency.
