Life advice from the not-quite-alive

Evan Baily
5 min read · Feb 1, 2021


Aveza Tyson

Aveza Tyson is 22. She hails from Lower Saxony, Germany, is right-handed, has A-positive blood, and is worried that she’s losing her identity.

Bahadur Koppel is 36. He’s 5'9". Either Zed Boy built him a robot dog out of IKEA parts when he was a baby, or he is a robot dog, or Zed Boy is.

Rashmi Lagounov, who currently resides in the state of Telangana in India, is 53. “All of this may be manifest to you, or it may not,” she maintains. “There is no right or wrong way to go about things, other than to do it.” She’ll die on September 15, 2032 at 4:38 AM of an unspecified illness.

What do these people have in common? None of them exist. I created them—or, more accurately, compiled them—and their biographies and reflections on life using a collection of free, web-based tools in just a few hours.

This experiment doesn’t offer any true insight into what it will be like to coexist with increasingly realistic AIs as they graduate from curating our playlists and ordering our cyan toner refills to working alongside us and educating our kids. But it’s a weird and intriguing peek into the posthuman future.

Without further ado, let’s meet our panelists and hear what they have to say. Please note that I exercised little to no editorial control here; this content is a product of the inbuilt processes (and biases) of the tools used to create it. More details about those tools after the bios and videos.

Aveza Tyson

Iuliana Ozols

Eliud Mariani

Bahadur Koppel

Rashmi Lagounov

The compilation process

Here’s how I made these—it’s a process you can easily replicate yourself:

  • Names and bios were generated by behindthename.com’s random name generator.
  • Photos were extrapolated from photos of existing people by Rosebud.ai’s generative photo tool. I initially worked from photos included with the demo (for Aveza and Iuliana) but those options quickly started to seem limited, so I sourced photos online for the others, using search terms related to their biographical details. (Note: although the generative photo tool demo is free, I did pay $20 for a creative-tier subscription so that I’d be able to download the photos without watermarks.)
  • Monologues were created by Talk to Transformer, InferKit’s neural network-powered text generation tool, based on slight variations on the prompt “life advice.” To generate Aveza Tyson’s monologue, for example, I just added the words “Thursday afternoon.” Here’s a screenshot of the prompt I entered for Aveza and the text as it came out:
  • I don’t know how exactly Talk to Transformer works, but the content it creates is often hauntingly familiar. Is that because it does an uncannily good job of replicating human speech patterns—or because it’s borrowing prefab phrases we’ve encountered elsewhere? I’m not sure… probably a bit of both.
  • I fed the text from Talk to Transformer and the photos for the associated persona into Rosebud.ai’s TokkingHeads demo, exported the video in chunks (the demo has a 280-character limit), and combined the chunks in Premiere. Although I tried to keep editorial choices to a minimum while producing these videos, I had to make some here: generating them required me to choose from a menu of voices (US Accent Male A, UK Accent Female C, etc.). You’ll notice that some of the monologues cut off in mid-sentence; I simply ended them wherever the text ended.
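If you want to automate the chunking step rather than splitting the text by hand, it's easy to script. Here's a minimal sketch (my own illustration — this helper isn't part of any of the tools above) that splits a monologue into pieces under the demo's 280-character limit, breaking at sentence boundaries where possible:

```python
import re

def chunk_text(text, limit=280):
    """Split text into chunks of at most `limit` characters,
    preferring to break at sentence boundaries."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks = []
    current = ""
    for sentence in sentences:
        # A single sentence longer than the limit gets hard-split.
        if len(sentence) > limit:
            if current:
                chunks.append(current)
                current = ""
            while len(sentence) > limit:
                chunks.append(sentence[:limit])
                sentence = sentence[limit:]
        if not current:
            current = sentence
        elif len(current) + 1 + len(sentence) <= limit:
            current += " " + sentence
        else:
            chunks.append(current)
            current = sentence
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be pasted into the demo in turn, and the exported clips stitched back together in an editor.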

Final thoughts

Listening to Aveza Tyson bare her soul about what sounds like the imminent loss of her child (we’ll never know for sure), I felt a surge of empathy, and then wondered: How is this empathy, directed toward a construct who doesn’t exist outside of this Medium post, similar to the kind we feel for an actual human? How is it different? How does (or doesn’t) it equate with the empathy we feel for a fictional character who exists within a more robust narrative context — one that includes features that map more fully and recognizably onto our own lives? How does it affect us as humans to experience empathy but to be totally unable to express that empathy toward its object, or otherwise apply it in our own lives?

I’ll give the last word to Rashmi Lagounov. This last video was created using the same methodology as the others, from the text prompt “Thank you for reading this.” The music was procedurally generated by MuseNet, a deep neural network that composes music based on the opening notes of an existing song and a few other user-generated parameters. It’s Justin Timberlake’s version of “Cry Me A River” improvised on piano in the style of the Beatles, with tokens set at 400. Take it away, Rashmi!


Evan Baily is a TV/film producer, entrepreneur, and writer.
