"I wake up in a pool of blood." "I was trapped in this hospital bed." "I think I'm being stalked."
In horror stories, phrases like these conjure up scenes that can make your blood run cold or cause your heart to beat a little faster. But the author of these words has no heartbeat, nor any blood to chill.
Meet "Shelley," a neural network raised on a diet of horror fiction. Shelley is taking the terrifying lessons learned from those stories and penning its own spooky narratives using artificial intelligence (AI) — along with a little help from like-minded human collaborators.
Shelley shares her name with pioneering Gothic horror writer Mary Wollstonecraft Shelley — the author of "Frankenstein" — and the AI came to life at MIT Media Lab, where it was built by the group that programmed the Nightmare Machine in 2016. That aptly named neural network also had a knack for unsettling creative work — it transformed ordinary photographs into jarring, nightmarish hellscapes.
But whereas the Nightmare Machine was trained with scary images, Shelley was weaned on words — scary words — "learning" from more than 140,000 horror stories posted to the Reddit forum r/nosleep, Pinar Yanardag, a postdoctoral researcher at MIT Media Lab, told Live Science in an email.
Once Shelley was able to recognize the types of narrative elements that appear in horror stories — eerie imagery, keywords and scenarios — she could write her own, "starting from a random seed or a short snippet of text," Yanardag said.
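Shelley's actual model is a deep neural network; its architecture and code are not described here. But the core idea of learning patterns from a corpus and then continuing a seed text can be illustrated with a much simpler stand-in technique — a word-level Markov chain. The sketch below (all names and the toy corpus are invented for illustration) learns which words follow each two-word context and then extends a seed phrase:

```python
import random
from collections import defaultdict

def train(corpus_words, order=2):
    """Map each `order`-word context to the words observed following it."""
    model = defaultdict(list)
    for i in range(len(corpus_words) - order):
        context = tuple(corpus_words[i:i + order])
        model[context].append(corpus_words[i + order])
    return model

def generate(model, seed, length=20, rng=None):
    """Continue `seed` (a tuple of words matching the model order), one word at a time."""
    rng = rng or random.Random(0)  # fixed seed for repeatable output
    out = list(seed)
    for _ in range(length):
        followers = model.get(tuple(out[-len(seed):]))
        if not followers:  # dead end: this context never appeared in training
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Tiny invented "horror corpus" standing in for the r/nosleep training data.
corpus = ("i woke up in the dark . the door was open . "
          "i woke up in the hallway . the door was locked .").split()
model = train(corpus, order=2)
print(generate(model, ("i", "woke")))  # continues the seed phrase "i woke"
```

A real neural language model generalizes far beyond transitions it has literally seen, which is what lets Shelley produce novel imagery rather than stitched-together quotations — but the seed-then-continue loop is the same shape.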
Sometimes, the results are downright chilling, such as this tweet: "I sat alone for a few minutes before I finally got the courage to tell myself that nothing had happened. I was wrong."
But Shelley can also deliver unintentional comedy, as in this tweet: "I glanced at him and gave him a knowing smile, then i threw a huge butterfly at him."
But what really makes this AI unique is its ability to collaborate. Shelley creates a new story on Twitter every hour, and anyone can add their own sentences to help spin the sinister tale. In a pair of tweets posted on Oct. 30, Shelley wrote of empty doorways and "a faint surge of uneasiness," ending with the hashtag "#yourturn." Another Twitter user chimed in to describe a growing sense of being watched by something unseen, adding "my skin crawled as this presence grew ever closer."
Shelley's October debut was no coincidence, as her programmers are huge fans of Halloween — and, of course, of horror, Manuel Cebrian, a research manager at MIT Media Lab, told Live Science in an email.
"As we are interested in how AI induces emotions — fear in this particular case — Halloween is always great timing to roll out a mass-scale AI agent that tests its emotion-inducing capability," he explained.
Judging from the enduring popularity of horror movies, books, video games and television shows, people love being scared — as long as they know they're in no real danger. Sparking this visceral emotion in an audience is a special creative challenge for writers, filmmakers and game designers — and now, for programmers working with AI, Iyad Rahwan, an associate professor at MIT Media Lab, told Live Science in an email.
"This challenge is especially important in a time where we wonder what the limits of artificial intelligence are. Can machines learn to scare us?" Rahwan asked.
Perhaps they can, according to data gathered after last year's launch of the Nightmare Machine and its frightening photos, Rahwan added.
"Follow-up studies show that the images we create indeed scare people on psychological scales," he said. "We are currently working on a research article that investigates this data, a study that is among the first to study fear through AI."
Original article on Live Science.