Navigating the “Scary AI” of Artificial Interviews with the Dead and the Living

In the realm of artificial intelligence, a surreal frontier is emerging: AI-driven interviews with people who are no longer alive, or, in some cases, with the living, but without their actual input. This unsettling practice is increasingly possible with large language models such as ChatGPT, which can simulate entire conversations by piecing together phrases, themes, and ideas gleaned from past writings, public records, and online content. The implications are chilling, especially for academics, authors, and public figures whose voices can easily be misrepresented.

Recently, I had the unnerving experience of being interviewed by AI, without my actual involvement. During a conversation about AI and interviewing, I mentioned that I had interviewed the author Ursula K. Le Guin (who died in 2018). Afterward, a colleague used ChatGPT to generate a lengthy interview transcript with me, purportedly expressing my perspectives. The interview seemed to have drawn on my blog posts and (perhaps?) open-access articles, yet it included some viewpoints that did not align with my own. After I stopped laughing and reviewed the transcript in more detail, I found it unsettling to see words and ideas attributed to me that did not actually reflect my views, not least because the AI-created document had a polished authenticity that made it seem genuine.

The Rise of AI-Generated “Interviews” with the Dead

This issue is not confined to the living. AI technology is now being used to create “interviews” with historical figures, philosophers, scientists, and writers who died long ago. While these digital conversations may begin with innocent educational intentions of helping people “understand” great thinkers through plausible discussions, there is a dangerous ambiguity about where the simulated dialogue ends and reality begins. The New York Times recently reported on a Polish radio station where a host “interviewed” Wisława Szymborska, the 1996 winner of the Nobel Prize in Literature. The problem with this interview turned out to be that she had died in 2012. The host was fired, and debate ensued.

Some AI-generated interviews with the dead are presented as if they’re factual accounts, leading audiences to believe they’re gaining special insight into the personal reflections of figures like Abraham Lincoln, Albert Einstein, or Virginia Woolf. This illusion is difficult to shake. Even though AI might base responses on historical records or preserved writings, it still presents an inherently fictional narrative. The blending of historical fact and AI-generated fiction creates a narrative that appears real and insightful but lacks any grounding in the genuine thoughts or intentions of those individuals.

This phenomenon raises serious ethical and epistemological questions: How do we know what’s real anymore? How do we protect the legacy and authenticity of those whose voices and perspectives are reconstructed by AI for educational, commercial, or even personal purposes?

Misrepresentation and Its Effects on the Living

For those of us still very much alive, AI-generated interviews present another layer of concern. My colleague’s ChatGPT “interview” with me initially made me laugh. The resulting transcript was lengthy and detailed, and while it contained kernels of truth, it also presented ideas that felt foreign to me. The simulated interview, though based on my writing, included distortions.

In a world where AI can seamlessly stitch together fragments of a person’s online presence, our perspectives risk being misrepresented in subtle ways that are difficult to challenge. Unlike a traditional interview, where clarifications can be made on the spot, an AI-generated interview is an automated extrapolation, merging snippets of public knowledge into an entirely new “voice” that sounds like the original but may misrepresent it. The onus then falls on the person involved to correct these misrepresentations, which becomes an exhausting and sometimes impossible task, especially if the AI-generated interview is shared widely.

The Trust Problem: How Do We Know What’s Real?

This technology raises a fundamental question about trust: how do we discern what’s real when even interviews and statements can be artificially constructed? If an AI can craft lengthy interviews that seem to capture a person’s thoughts, but actually miss the mark, what does this mean for our understanding of authenticity? And more importantly, how can we protect individuals from the potential fallout of false or misleading representations?

Our society has always relied on context and personal verification as cornerstones of credibility. When reading an article, we consider the author’s credentials and intentions. When listening to a public figure, we consider tone, intention, and context. But with AI-generated text, these cues vanish, replaced by seemingly authentic but context-free output that’s woven together by machine learning algorithms. This introduces a precarious dynamic, where the line between real and synthetic thought is blurred, and where intent is implied but never verified.

Implications for Knowledge and Memory

Beyond individual misrepresentation, AI-generated interviews raise concerns about collective memory and knowledge. As we integrate AI tools into research, education, and even journalism, the danger of inaccuracies compounds. AI systems trained on historical figures’ writings may subtly distort their views to fit modern perspectives or omit crucial nuances. Over time, our understanding of those figures could shift, reshaped by simulated conversations that color our perception without being rooted in factual integrity.

For public figures, academics, or anyone whose words carry weight, this misrepresentation is more than a nuisance; it threatens to alter how they’re remembered. If “interviews” with them are widely shared or cited, the ideas and perspectives they genuinely hold could be overshadowed by inaccurate or incomplete portrayals. What starts as a simple AI simulation could have lasting effects on a person’s legacy.

Navigating AI-Generated Content in an Age of Misinformation

As AI tools continue to advance, so too must our strategies for discerning and verifying the authenticity of content. To counteract the potential misrepresentation of voices—both living and dead—clear guidelines and digital literacy will be essential. Here are several considerations as we adapt to this new landscape:

  1. Transparency Requirements: Organizations using AI-generated content, especially interviews or simulations, should be required to disclose that such content was generated by AI. This should include explaining which sources were used to construct the responses.
  2. AI Accountability and Limits: Companies and researchers developing AI-driven simulations should work closely with ethicists, historians, and subject experts to establish boundaries for “interviews” with historical figures. For instance, it should be clear when AI is generating plausible, rather than accurate, portrayals of a figure’s thoughts.
  3. Verification Mechanisms for the Living: For people still alive, there should be a verification step before AI-generated interviews are shared publicly. This could involve allowing individuals to review AI-generated transcripts or flagging content that is speculative or lacks direct sourcing.
  4. Critical Digital Literacy: Users need to become more critical consumers of digital content, particularly AI-generated interviews. Recognizing that AI lacks the true context, intent, and nuances of human experience can help audiences approach such content with caution.
  5. Safeguarding Legacy and Intellectual Property: For public figures, academic institutions, and media outlets, safeguarding legacy may require new forms of intellectual property protections that prevent the creation or sharing of AI-generated misrepresentations without consent.

Conclusion: Toward a Cautious Embrace of AI

The promise of AI is vast, and its potential to illuminate complex subjects and enhance our knowledge is remarkable. However, when it comes to simulating voices—whether of historical figures or contemporary individuals—the risks cannot be ignored. The possibility of misrepresentation, distortion, and even misinformation is real and pressing. As we continue to integrate AI into our lives, we must balance excitement for technological advancements with a commitment to ethical integrity.

For those of us who contribute to the public sphere—whether as writers, educators, or public figures—AI’s capabilities mean we must be vigilant about how our voices and ideas are used, portrayed, and preserved. We may be living in an age where machines can replicate voices, but that does not mean they can replace authenticity. Protecting our voices and ensuring that AI serves as a tool rather than a source of misrepresentation will be critical as we navigate this new digital reality.

This blog post was generated by ChatGPT and edited by Kathy Roulston. Headings were generated (authored?) by ChatGPT.
