Judges across the U.S. are increasingly concerned about the reliability of evidence submitted in court as artificial intelligence becomes capable of producing highly realistic images, videos, documents, and audio recordings. So-called "deepfakes" have prompted warnings that AI-generated content could distort the truth in legal proceedings.
In a housing case at the Alameda County Superior Court in California, Judge Victoria Kolakowski determined that a video purporting to show a witness was AI-generated after noticing inconsistencies between the witness's facial movements and voice. She ruled that the video was a deepfake, dismissed the case on September 9, and rejected a request for review on November 6.
Kolakowski noted that this may be one of the first instances of fake AI-generated evidence submitted in court and could signal a much larger threat.
"Judges fear making decisions based on evidence that isn't real," said Minnesota 10th Judicial District Judge Stoney Hiljus, emphasizing that more judges are growing concerned every day.
Louisiana Fifth Circuit Court of Appeals Judge Scott Schlegel warned that fake audio recordings are easy to create and could influence critical proceedings such as protective orders. He explained: "My spouse has kept recordings of my voice for 30 years. They could produce a 10-second threatening fake recording and present it to any court. That could be enough to get a judge to sign a protective order."
California Santa Clara Superior Court Judge Erica Yew stated that deepfake evidence is likely used more often than reported, but there is no formal system tracking its usage.
The National Center for State Courts and Thomson Reuters Institute published a guide to help judges confront deepfake evidence. The guide suggests judges ask:
What is the source of the evidence?
Who has accessed it?
Has the file been altered?
Is there corroborating evidence?
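One way forensic examiners approach the question of whether a file has been altered is to compare cryptographic hashes of the submitted file against a reference copy obtained earlier in the chain of custody. The sketch below illustrates the idea; the file names are hypothetical, and a matching hash only shows that two copies are bit-for-bit identical, not that the underlying recording is genuine.

```python
import hashlib
from pathlib import Path

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: the copy submitted as evidence versus a reference copy
# preserved earlier in the chain of custody.
submitted = sha256_of("exhibit_a_submitted.mp4")
reference = sha256_of("exhibit_a_reference.mp4")

if submitted == reference:
    print("Hashes match: the submitted file is identical to the reference copy.")
else:
    print("Hashes differ: the submitted file has been altered or re-encoded.")
```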
Some legal experts argue that existing rules for verifying evidence are insufficient and that AI-specific regulations are needed. However, the U.S. Judicial Conference rejected proposed new guidelines in May, concluding that current rules are adequate.
Experts also warn that lawyers will be held accountable for AI-forged evidence. A Louisiana law passed this year requires attorneys to investigate how evidence submitted by their clients was produced.
Digital evidence specialist Daniel Garrie emphasized that technical tools alone are not enough; human expertise will remain essential. In the near future, metadata—such as file creation date, device model, and edit history—will become a key tool for verification.
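As a rough illustration of the metadata Garrie describes, the sketch below reads filesystem timestamps and embedded EXIF fields (camera make and model, capture time, editing software) from an image using Python's standard library and the Pillow package. The file name is hypothetical, and metadata itself can be stripped or forged, so this is a starting point for questions rather than proof of authenticity.

```python
import os
from datetime import datetime

from PIL import Image            # Pillow: pip install pillow
from PIL.ExifTags import TAGS

def describe_file(path: str) -> None:
    """Print filesystem timestamps and embedded EXIF metadata for an image."""
    stat = os.stat(path)
    print("File size:", stat.st_size, "bytes")
    print("Last modified:", datetime.fromtimestamp(stat.st_mtime))

    with Image.open(path) as img:
        exif = img.getexif()
        for tag_id, value in exif.items():
            # Translate numeric tag IDs into names such as Make, Model,
            # DateTime, and Software.
            tag = TAGS.get(tag_id, tag_id)
            print(f"{tag}: {value}")

# Hypothetical exhibit file.
describe_file("exhibit_b_photo.jpg")
```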
Experts caution that AI could undermine the very foundation of the justice system. Computer science expert Maura Grossman stated: "Everyone now has access to technology to produce fake evidence. In this new era, the approach should not be 'trust but verify,' but 'don't trust, verify.'"