When science-fiction worlds introduce robots that look and behave like people, sooner or later those worlds' inhabitants confront the question of robot self-awareness. If a machine is built to truly mimic a human, its "brain" must be complex enough not only to process information as ours does, but also to achieve certain types of abstract thinking that make us human. This includes recognition of our "selves" and our place in the world, a state known as consciousness.

One example of a sci-fi struggle to define AI consciousness is AMC's "Humans" (Tues. 10/9c starting June 5). At this point in the series, human-like machines called Synths have become self-aware; as they band together in communities to live independent lives and define who they are, they must also battle for acceptance and survival against the hostile humans who created and used them.

But what exactly might "consciousness" mean for artificial intelligence (AI) in the real world, and how close is AI to reaching that goal?

Philosophers have described consciousness as having a unique sense of self coupled with an awareness of what's going on around you. And neuroscientists have offered their own perspective on how consciousness might be quantified, through analysis of a person's brain activity as it integrates and interprets sensory data.

However, applying those rules to AI is tricky. In some ways, the processing abilities of AI are not unlike those that take place in human brains. Sophisticated AI systems use a process called deep learning to complete computational tasks quickly, relying on networks of layered algorithms that communicate with one another to solve increasingly complex problems.

It's a strategy very similar to that of our own brains, where information speeds across connections between neurons. In a neural network, deep learning enables AI to teach itself how to identify disease, win a strategy game against the best human player in the world, or write a pop song.
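The layered structure described above can be illustrated with a toy example. The sketch below is not from any real AI system; it simply shows the core idea of a "deep" network: each layer takes the previous layer's output as its input, loosely echoing how signals pass between connected neurons. The weights here are arbitrary fixed values chosen for illustration, not learned from data.

```python
def relu(values):
    # A common activation function: negative signals are silenced,
    # positive signals pass through unchanged.
    return [max(0.0, v) for v in values]

def dense_layer(inputs, weights, biases):
    # One fully connected layer: each output neuron computes a
    # weighted sum of all inputs, plus a bias term.
    return [
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ]

def forward(x):
    # Two stacked layers: the hidden layer's output becomes the
    # input to the output layer -- the "deep" in deep learning.
    hidden = relu(dense_layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]))
    return dense_layer(hidden, [[1.0, 1.0]], [0.0])

print(forward([2.0, 1.0]))  # a single number distilled from the raw inputs
```

In a real system, the weights would be adjusted automatically during training on large datasets, and the networks would have many more layers and neurons, but the flow of information from layer to layer is the same.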

But to accomplish these feats, any neural network still relies on a human programmer setting the tasks and selecting the data for it to learn from. Consciousness for AI would mean that neural networks could make those initial choices themselves, "deviating from the programmers' intentions and doing their own thing," Edith Elkind, a professor of computing science at the University of Oxford in the U.K., told Live Science in an email.

In the third season of the AMC series "Humans," humanlike robots called Synths that have achieved self-awareness struggle with the consequences.
Credit: Des Willie/Kudos/AMC/C4


"Machines will become conscious when they start to set their own goals and act according to these goals rather than do what they were programmed to do," Elkind said.

"This is different from autonomy: Even a fully autonomous car would still drive from A to B as told," she added.

One of the pitfalls for machines becoming self-aware is that consciousness itself is not well enough defined in humans, which would make it difficult, if not impossible, for programmers to replicate such a state in algorithms for AI, researchers reported in a study published in October 2017 in the journal Science.

The scientists defined three levels of human consciousness, based on the computation that happens in the brain. The first, which they labeled "C0," represents calculations that happen without our knowledge, such as during facial recognition, and most AI functions at this level, the scientists wrote in the study.

The second level, "C1," involves a so-called "global" awareness of information — in other words, actively sifting and evaluating quantities of data to make an informed, deliberate choice in response to specific circumstances.

Self-awareness emerges in the third level, "C2," in which individuals recognize and correct mistakes and investigate the unknown, the study authors reported.

"Once we can spell out in computational terms what the differences may be in humans between conscious and unconsciousness, coding that into computers may not be that hard," study co-author Hakwan Lau, a UCLA neuroscientist, previously told Live Science.

To a certain extent, some types of AI can evaluate their actions and correct them responsively — a component of the C2 level of human consciousness. But don't expect to meet self-aware AI anytime soon, Elkind said in the email.

"While we are quite close to having machines that can operate autonomously (self-driving cars, robots that can explore an unknown terrain, etc.), we are very far from having conscious machines," Elkind said.

So, for now, if you want to see "conscious" AI in action, you can watch the Synths vie for their rights in "Humans." The third season debuts June 5 at 10/9c.

Editor's Note: This feature is the first of a three-part series of articles related to AMC's "Humans."

Original article on Live Science.