One of the behaviors considered to be uniquely human is our creativity. While many animal species create visually stunning displays or constructions — think of a spider's delicate web or the colorful, intricate structures built by bowerbirds — they are typically created with a practical purpose in mind, such as snagging prey or seducing a mate.
Humans, however, make art for its own sake, as a form of personal expression. And as computer engineers attempt to imbue artificial intelligence (AI) with humanlike capabilities and behaviors, a question arises: Can AI create art?
The AMC series "Humans," which returns June 5 for its third season, is populated by Synths — intelligent robots that resemble people, save for their unnaturally green eyes. At the end of the last season, millions of Synths had "awakened" to consciousness. Plenty of humans are unhappy about that, presenting Synths with more immediate problems than getting in touch with their inner artists. Nevertheless, exploring and indulging in creativity is introduced in the show as one of the avenues that is newly open to Synths, now that they are self-aware.
Even in the real world, AI has demonstrated that it can perform feats that are unexpectedly creative. Under the guidance of programmers, different types of AI have produced original songs, paintings and digital artwork; some bust out their own dance moves, and others spit rhymes like Kanye.
Here are just a few examples of some of today's "artistic" AI. Perhaps the Synths will express themselves in similar ways …
Baby, you can drive my car
In September 2016, a human-AI collaboration produced a cheery pop song that sounded remarkably like a tune composed by The Beatles in the 1960s. While the lyrics were written by a person, the melody sprang from the "brain" of a computer. The AI, a system called Flow Machines, drew on a database of 13,000 lead sheets — musical notations for the elements of a pop song's melody — to produce a composition that the programming team dubbed "Daddy's Car."
Dance, dance revolution
Care to dance? Inside a dome framework covered by white fabric, AI recorded and processed dance moves performed by people. It then incorporated those dance steps into a projection of a virtual dancer, which performed a sequence of moves that the AI "remembered" from the human dancers. However, the virtual dancer also matched its rhythm or style to the most recent performer in the dome, turning every dance into a collaboration.
Botanical beasts
Algorithms can teach AI the basic elements of a visual style, which neural networks can then apply in original ways to create unique and intriguing artwork. In one example, an AI created "botanical dinosaurs" — images of Triceratops, T. rex, Stegosaurus and other dinosaurs composed entirely of plants and flowers. The project demonstrated a technique called style transfer, which takes an art style — in this case, botanical illustration — and renders it in new shapes, here the bodies of dinosaurs.
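Style-transfer systems of this kind typically represent "style" as the correlations between a deep network's feature maps (a so-called Gram matrix) and "content" as the feature activations themselves, then optimize an image to match both. The pure-Python sketch below illustrates only that core loss computation; the tiny hand-made feature maps and all numbers are purely illustrative stand-ins for what a real convolutional network would extract.

```python
# Minimal sketch of the core idea behind neural style transfer:
# "style" is captured by correlations between feature maps (a Gram
# matrix), while "content" is captured by the raw feature values.
# Real systems extract these features with a deep CNN; here we use
# tiny hand-made feature maps to keep the example self-contained.

def gram_matrix(features):
    """features: list of feature maps, each a flat list of activations.
    Returns the matrix of dot products between feature maps — the
    'style' representation."""
    n = len(features)
    return [[sum(a * b for a, b in zip(features[i], features[j]))
             for j in range(n)] for i in range(n)]

def style_loss(gen_features, style_features):
    """Sum of squared differences between the two Gram matrices."""
    g1, g2 = gram_matrix(gen_features), gram_matrix(style_features)
    return sum((g1[i][j] - g2[i][j]) ** 2
               for i in range(len(g1)) for j in range(len(g1)))

def content_loss(gen_features, content_features):
    """Sum of squared differences between raw activations."""
    return sum((a - b) ** 2
               for f1, f2 in zip(gen_features, content_features)
               for a, b in zip(f1, f2))

# Hypothetical feature maps for a "dinosaur shape" (content) and a
# "botanical illustration" (style); the generated image would be
# optimized to minimize a weighted combination of both losses.
content = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
style = [[0.5, 0.5, 0.5], [0.5, 0.0, 0.5]]
generated = [[0.8, 0.2, 0.9], [0.1, 0.8, 0.2]]

total = content_loss(generated, content) + 0.5 * style_loss(generated, style)
print(round(total, 3))  # → 0.478
```

A real implementation would compute the feature maps with a pretrained convolutional network and repeatedly update the generated image by gradient descent on this combined loss.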
The stuff of nightmares
Can a computer know what scares you? This one can. An AI project aptly named "Nightmare Machine" revealed what makes pictures frightening by using deep learning, in which layered networks of artificial "neurons" form connections loosely resembling those between neurons firing in the human brain. Programmers trained the AI on terrifying images, teaching it to recognize the visual elements that frighten people. They then unleashed it on ordinary photos so that it could transform them into horrifying, nightmare-inducing scenes.
Rhymes like Kanye
A teenager proved that AI could be taught to rap like artist Kanye West — all it took was some open-source code and 6,000 lines of West's lyrics. The high-school student spent about a week programming the AI, which at first could only rearrange the lyrics that had been uploaded. Eventually, though, the AI "learned" from West's examples and began generating original lyrics of its own, mimicking the style and cadence of the famous rapper.
Starry, starry night
An AI called "Vincent" channels fine-art masters to assist human artists; together, human and AI produce digital creations that resemble the canvases of some of the most celebrated painters of the 19th and 20th centuries. Vincent's "art school" consisted of 8,000 works of art spanning the Renaissance through the 20th century. From those "teachers," Vincent learned about the use of color, contrast and brushstrokes. Human users interact with the AI by submitting a drawing of their own, which Vincent then helps them complete in the style of a fine artist.
So, this is Christmas
Can you picture a computer chorus caroling for Christmas? AI recently composed an original Christmas song, after analyzing 100 hours of pop music. Programmers guided a neural network to generate a Christmas song using a process they called "neural story singing." First, they prompted the AI to produce a descriptive story about an image of a festive Christmas tree, surrounded by presents. Next, they selected a beat and mapped the story to the song's rhythm, handing it off to the neural network for completion. The result is somewhat dissonant and perplexing, with the digitally generated voice intoning, "A hundred and a half hour ago / I'm glad to meet you."
It was a dark and stormy night
Horror writers craft creepy tales that chill our blood, and a recent addition to their ranks — a neural network named "Shelley" — also produced spooky stories, spinning her yarns from prompts shared by people. (Shelley was named for writer Mary Wollstonecraft Shelley, author of "Frankenstein.") Programmers fed the neural network 140,000 horror stories sourced from a Reddit forum so that it could learn about the storytelling elements that frighten readers. The AI Shelley not only came up with original tales of terror but also created them together with human co-writers, building and escalating the tension in collaborative works posted to Twitter in October and November 2017.
A Song of Bytes and Fire
What's an eager "Game of Thrones" fan to do, when the popular HBO program's final season isn't expected to air until 2019 and the fantasy series that the show is based on — "A Song of Ice and Fire," written by George R.R. Martin — hasn't seen a new book since 2011?
If you're a software engineer, you could program a neural network to generate five new chapters, picking up where the events of the last novel left off. First, the neural network "learned" from the existing books. Then, it used that data to craft new stories featuring the series' characters, while keeping track of which of them had died as the epic unfolded.
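Projects like this generally train a recurrent neural network on the published text. As a much simpler stand-in for the same idea — learning the statistics of an author's prose and sampling new text from them — the sketch below builds a character-level Markov chain in pure Python. The training corpus is invented filler, not Martin's prose, and the whole setup is illustrative rather than the actual project's method.

```python
import random

# A character-level Markov chain: learn which character tends to
# follow each short context in the training text, then sample new
# text from those statistics. A real project would use a recurrent
# neural network, which can capture much longer-range structure.

def train(text, order=4):
    """Map each length-`order` context to the characters that follow it."""
    model = {}
    for i in range(len(text) - order):
        context = text[i:i + order]
        model.setdefault(context, []).append(text[i + order])
    return model

def generate(model, seed, length=80, rng=None):
    """Extend `seed` one character at a time using learned statistics."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out = seed
    order = len(seed)
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:  # context never seen in training: stop early
            break
        out += rng.choice(followers)
    return out

# Invented filler text standing in for a training corpus.
corpus = ("the knight rode north. the knight drew his sword. "
          "the raven flew north over the wall. ") * 3
model = train(corpus, order=4)
print(generate(model, "the ", length=60))
```

Each generated character is drawn from what actually followed that four-character context in the corpus, so the output reads locally like the source while recombining it in new ways — a toy version of "learning" an author's voice.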
Editor's note: This feature is the second of a three-part series of articles related to AMC's "Humans." The third season debuts June 5 at 10 p.m. EDT/9 p.m. CDT.
Original article on Live Science.
Mindy Weisberger is a Live Science editor for the channels Animals and Planet Earth. She also reports on general science, covering climate change, paleontology, biology, and space. Mindy studied film at Columbia University; prior to Live Science she produced, wrote and directed media for the American Museum of Natural History in New York City. Her videos about dinosaurs, astrophysics, biodiversity and evolution appear in museums and science centers worldwide, earning awards such as the CINE Golden Eagle and the Communicator Award of Excellence. Her writing has also appeared in Scientific American, The Washington Post and How It Works Magazine.