Overview of the Controversy
A Stanford University professor, known for his expertise in misinformation, is facing serious accusations that portions of his expert testimony in a politically sensitive case were fabricated by artificial intelligence (AI). The allegation is especially striking because the testimony was filed by Minnesota Attorney General Keith Ellison in defense of the state's law against political deepfakes.
The Case at Hand
The central figure in this legal battle is Christopher Kohls, a satirical conservative YouTuber. The case revolves around Minnesota’s recent legislation banning political deepfakes, which the plaintiffs argue infringes on their right to free speech. Jeff Hancock, a communications professor and founder of Stanford’s Social Media Lab, provided expert testimony that was later submitted by Ellison to bolster the state’s position on the law.
Allegations Against Hancock
The plaintiffs’ attorneys have moved to exclude Hancock’s testimony, claiming that it references a nonexistent study. In a detailed 36-page memorandum, the lawyers assert that Hancock cited a study titled “The Influence of Deepfake Videos on Political Attitudes and Behavior,” supposedly published in the *Journal of Information Technology & Politics*. While the journal itself is real, the study Hancock cited does not appear in it.
Unraveling the Fabrication
The plaintiffs’ legal team argues that the citation was likely a “hallucination” produced by an AI language model such as ChatGPT. They note that the page range cited in Hancock’s declaration corresponds to unrelated articles, which they say casts doubt on the integrity of the entire document. “There is no methodology or analytic logic,” the memo states, questioning the reliability of Hancock’s conclusions.
Ellison’s Reliance on Hancock’s Testimony
The memorandum also critiques Attorney General Ellison, pointing out that much of his argument hinges on Hancock’s conclusions, which the lawyers say lack a solid methodological foundation. They contend that Hancock could easily have cited real studies relevant to the issue but did not, further undermining the credibility of his claims.
Extensive Searches Yield No Results
The legal team conducted searches on Google, Bing, and Google Scholar but found no trace of the alleged study. “The title and even a snippet of it are absent from the internet,” they noted, emphasizing the dubious nature of Hancock’s citation. They also ruled out a simple copy-paste or citation error, asserting that the article in question does not exist at all.
Consequences of the Allegations
The attorneys have concluded that if Hancock’s declaration contains fabricated information, it is fundamentally unreliable and should be excluded from the court’s considerations. “The declaration of Prof. Hancock should be excluded in its entirety because at least some of it is based on fabricated material likely generated by an AI model,” they argued. The document suggests that further inquiries into the source of this alleged fabrication may be warranted.
Seeking Comment
Fox News Digital has reached out to Attorney General Ellison, Professor Hancock, and Stanford University for their responses to these serious allegations. As this case unfolds, it raises critical questions about the intersection of technology, misinformation, and the legal system.