Tennessee passed the Ensuring Likeness, Voice, and Image Security Act in March 2024. The law made it illegal to use AI to clone a person's voice or likeness without explicit consent and gave individuals the right to sue both the person who made the clone and the platform that hosted it. At the time, the bill seemed narrowly tailored to the music industry. Nashville's songwriters and recording artists had been lobbying for protections after early voice clone tools showed up online and started making convincing fakes of country and pop stars singing songs they never recorded.

Two years later, the ELVIS Act has become the template for voice cloning legislation across the country. As of April 2026, twenty-one states have passed or introduced bills that copy the Tennessee framework with minor modifications. Illinois passed its version in late 2024. California followed in 2025 with broader protections covering deepfake video as well as voice. New York passed a similar bill that took effect January 1 of this year. Florida, Texas, Georgia, and Michigan have bills in committee that are scheduled for votes in the next legislative session. The pattern is unmistakable.

The reason the ELVIS Act has spread so quickly comes down to two things. First, the law works on a clear principle that does not require complicated technical definitions. Cloning someone's voice without their permission is treated like using their photograph in advertising without permission, which has been illegal for a hundred years. The law just extends an existing right of publicity into a new technology. Second, the enforcement mechanism is straightforward. Individuals can sue for damages, statutory minimums apply, and courts have already started resolving cases under the original Tennessee statute, which gives other states confidence that the framework holds up under judicial review.

The case law that has emerged from Tennessee since 2024 has been instructive. Three significant lawsuits filed in Davidson County Chancery Court have been resolved in the plaintiffs' favor. One involved a TikTok user who used a voice clone of a major country artist to promote a supplement company; the court awarded $185,000 in damages plus injunctive relief requiring removal of all content. A second involved a podcast that used voice clones of two political commentators in satire content; the court found that satire did not exempt the use from the consent requirement and awarded $90,000. A third involved a small AI company whose tool scraped voices from public videos to train its model; that case settled, with the company removing all Tennessee residents' voices from its dataset and paying an undisclosed amount.

For AI companies operating across multiple states, compliance is becoming complicated. A voice-cloning tool that operates legally in one state can trigger liability the moment a user in another state accesses it. The leading platforms have responded by requiring uploaders to certify that they have consent for any voice they clone, but enforcement has been uneven. ElevenLabs has introduced voice verification systems that require speakers to read a unique phrase before their voice can be cloned through the platform. Resemble AI has built consent capture into its API. Smaller open-source tools have not implemented similar guardrails, and the legal exposure is starting to push some of them offline.
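
The verification-phrase approach works roughly like a login challenge: the platform issues a one-time phrase, the speaker reads it aloud, and the recording is transcribed and compared against the challenge before the voice is marked cloneable. The sketch below shows that control flow only; the names, word list, and similarity threshold are hypothetical, and a real system would feed the audio through a speech-to-text model and a speaker-identification check rather than accept a transcript string directly.

```python
import difflib
import secrets

# Hypothetical word pool for generating one-time challenge phrases.
WORDS = ["amber", "delta", "harbor", "maple", "orbit",
         "piano", "quartz", "river", "sierra", "tulip"]

def issue_challenge(n_words: int = 5) -> str:
    """Generate a random phrase the speaker must read aloud."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def _normalize(text: str) -> str:
    """Lowercase and collapse whitespace so minor ASR quirks don't fail the check."""
    return " ".join(text.lower().split())

def verify_reading(challenge: str, transcript: str, threshold: float = 0.85) -> bool:
    """Return True if the transcript of the recording matches the challenge.

    In production the transcript would come from running speech-to-text on
    the submitted audio; here it is passed in directly for illustration.
    """
    ratio = difflib.SequenceMatcher(
        None, _normalize(challenge), _normalize(transcript)
    ).ratio()
    return ratio >= threshold

challenge = issue_challenge()
print(verify_reading(challenge, challenge))         # an exact read passes
print(verify_reading(challenge, "something else"))  # a mismatched read fails
```

The fuzzy match (rather than exact string equality) is the one load-bearing design choice: transcription is noisy, so the gate tolerates small errors while still rejecting a recording of anything other than the issued phrase.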

The music industry side of the conversation has been the loudest, but the larger impact may end up being on advertising and political messaging. A voice clone of a CEO authorizing a fraudulent wire transfer is already a real attack vector. A voice clone of a candidate during an election cycle is the scenario most state legislatures are quietly preparing for. The Tennessee statute does not distinguish between commercial use and political use, which means a deepfake of a candidate would carry the same liability as a deepfake of a country singer. Other states are following that approach.

For individual creators and small business owners, the practical implications are significant. If you record podcasts, run a YouTube channel, or appear in promotional videos, your voice is now a protected asset under state law in roughly half the country. Cloning protections do not require you to register anything. They apply automatically. If your voice gets cloned without your consent, you have a cause of action and a clear path to damages. The same is true if your face appears in a deepfake video in states that have extended protections that far.

For AI developers, the message is that consent-based architecture is no longer optional. Building a voice or likeness model in 2026 without a clear consent-capture mechanism is building a product with serious legal exposure across most major US markets. The states that have not yet passed legislation are likely to follow the same template within the next eighteen months. The federal version of the bill, the NO FAKES Act, has bipartisan support but is moving slowly through Congress. The state-level patchwork is likely to remain the practical regulatory framework for the foreseeable future.
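
What "consent-based architecture" means in practice is that a cloning job cannot reach the model without first passing a consent check, and that revocation and expiry are first-class states rather than afterthoughts. The following is a minimal sketch under assumed names (`ConsentRecord`, `authorize_clone`, the scope strings are all illustrative); a real record would also carry the signature, capture method, and governing jurisdiction.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """An illustrative grant from a speaker allowing their voice to be cloned."""
    speaker_id: str
    scope: str            # e.g. "commercial" or "internal-demo" (hypothetical)
    granted_at: datetime
    expires_at: datetime
    revoked: bool = False

class ConsentRequiredError(Exception):
    """Raised when a cloning job lacks a live, in-scope consent record."""

def authorize_clone(record, requested_scope: str, now: datetime = None) -> None:
    """Gate a cloning job: raise unless consent exists, is live, and covers the scope."""
    now = now or datetime.now(timezone.utc)
    if record is None:
        raise ConsentRequiredError("no consent on file for this speaker")
    if record.revoked:
        raise ConsentRequiredError("consent was revoked")
    if now >= record.expires_at:
        raise ConsentRequiredError("consent has expired")
    if record.scope != requested_scope:
        raise ConsentRequiredError(
            f"consent covers {record.scope!r}, not {requested_scope!r}")
```

The architectural point is the placement, not the logic: if every path into the cloning pipeline calls a gate like this, then honoring a revocation is a one-field update instead of a scramble to find and delete derived models.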

Tennessee passed a narrow music industry bill and ended up writing the rules for AI voice cloning in America. That is how state policy spreads when the law is clear, the enforcement is workable, and the underlying principle makes sense.