Denmark rewrites the rules of the internet by treating faces as copyrighted identity

Denmark is moving to treat a person’s voice and face like creative works that belong to them, not raw material anyone can copy. The government has drafted changes to its copyright law that would give every individual a protectable right in their “own features,” covering realistic digital imitations of appearance and voice. The target is deepfakes: convincing AI-generated audio, images, and video that borrow a person’s likeness without permission. Culture Minister Jakob Engel-Schmidt has framed the idea as basic fairness in a moment when technology outruns the rules, arguing that people deserve legal control over their bodies in digital form. The proposal has cross-party backing, moved through public consultation during the summer, and is being prepared for parliamentary submission in the autumn session.
Under the plan, people could demand takedowns of unauthorized deepfakes and seek compensation when their identity is used without consent. The protection would apply to everyone, not just public figures, with special relevance for performers whose voices and images are easily cloned. Room is left for parody and satire, preserving space for commentary while curbing deceptive impersonation. The government has also signaled that platforms that ignore valid notices could face penalties under existing European digital rulebooks.
Momentum for the reform reflects rising harm. Deepfake scams and political disinformation have multiplied, and musicians are watching AI systems churn out convincing clones from minutes of audio. Danish officials floated the concept in the spring, then circulated a draft through the normal consultation process with industry, civil society, and regulators. That sequence signals an intention to pass a measure with practical teeth rather than a symbolic stance.
Adoption would make Denmark the first European country to anchor a general right to one’s likeness inside copyright. Most legal systems rely on a patchwork of image rights, data protection, unfair competition rules, and the European Digital Services Act’s notice-and-action duties. None settles the threshold question of ownership in a way that maps neatly onto platform enforcement. Denmark’s answer is to use the familiar copyright toolkit (takedown notices, repeat-infringer policies, damages) so victims can trigger established processes with claims that moderation teams already know how to handle.

Implementation will still require careful line-drawing. Copyright norms were built for works fixed in a tangible medium, not living attributes. Courts will need tests for what counts as a “realistic, digitally generated imitation,” and for where transformation for commentary becomes protected parody. Effective relief will also depend on detection and reporting, plus timely platform responses, all of which vary across services. The government’s view is that the package complements other European rules, with the AI Act expanding transparency for synthetic media, the Digital Services Act structuring notice-and-action systems, and the new right making misuse clearly unlawful.
For artists and public figures, the shift would deliver leverage. A singer whose cloned voice surfaces on streaming services would not be limited to arguments about consumer confusion or unfair competition. They could assert a clear right to their voice, seek removal, and claim compensation. For ordinary people, the measure would offer faster recourse when scammers fake a video call to defraud relatives or when a fabricated clip is used for harassment at work.
The law will not end deepfakes, yet it would reset the default. Identity would no longer be treated as a free ingredient for generative systems, and victims would gain a straightforward key to unlock platform processes built for copyright. Denmark also intends to promote similar protections across Europe, recognizing that cross-border content demands rules that travel. If Copenhagen’s approach proves workable, other countries are likely to explore the same path, not because copyright is a perfect conceptual fit, but because it offers something the current patchwork lacks: a clear, enforceable statement that your image and your voice are yours.
Deepfakes have become one of the most troubling frontiers in the digital age, blurring the line between reality and fabrication. Using artificial intelligence to mimic faces, voices, and even mannerisms, they can convincingly reproduce real people saying or doing things that never happened. What began as a novelty in entertainment has evolved into a powerful tool for deception, capable of spreading misinformation, defaming reputations, and manipulating public opinion at a scale that outpaces fact-checking or legal redress.
The danger lies not only in political or celebrity misuse but also in how deepfakes undermine trust in everyday interactions. Fraudsters can use AI-generated likenesses to impersonate executives, family members, or public officials, tricking people into sharing information or transferring funds. In a world already struggling with misinformation, the emergence of hyper-realistic synthetic media accelerates the erosion of shared truth. Once trust collapses, even authentic recordings can be dismissed as fabrications—a phenomenon experts call the “liar’s dividend.”
Beyond security and misinformation, deepfakes raise deeper questions about consent and identity. A person’s face, voice, and style have become raw material for anyone with a powerful enough algorithm. Victims of deepfake pornography or impersonation often find little legal recourse, especially across borders. These abuses demonstrate that privacy laws designed for the analog world are ill-equipped for the realities of digital replication. It is within this context that countries like Denmark are pushing for laws recognizing personal likeness as a form of intellectual property—an effort to reclaim ownership of one’s digital self before it is lost to technology entirely.

