The green energy revolution promised by nuclear fusion is a step closer, thanks to the first successful use of artificial intelligence to shape hydrogen plasmas inside a fusion reactor.
Human-like machines are coming, slowly, with assistants like Amazon's Alexa and Apple's Siri among the most basic forms of artificial intelligence to emerge so far. But what about those intelligent robots some fear will take over the world? Don't worry: Live Science has all the latest news and features on discoveries and achievements in the world of AI.
The artificial intelligence company DeepMind has teamed up with mathematicians to generate new conjectures in pure mathematics.
Conversational video technology enables AI-powered back-and-forth between viewers and prerecorded responses.
AI firm DeepMind says it can predict the shape of every protein in the human body and in 20 species of research animals.
Speaking on the BBC show 'Panorama,' Microsoft's Brad Smith warned that unless checks are put in place, artificial intelligence could lead to a dystopian future.
A robotic artist powered by AI algorithms has created realistic self-portraits that question the limits of artificial intelligence and what it means to be human.
In AI-generated animations, faces that were once frozen in time blink, turn their heads and even smile.
A new artificially intelligent 'Ramanujan Machine' can generate hundreds of new mathematical conjectures, which might lead to new math proofs and theorems.
Artificial intelligence transformed NASA footage of Apollo missions to the moon, making decades-old events look like they were shot on high-definition video.
An artist used machine learning to create photorealistic portraits of 54 ancient Roman emperors, working from nearly 1,000 images of busts.
A neural network learned to deliver sermons like Jesus (sort of) after it was trained on the King James Bible.
Scientists couldn't find the pattern in these strange clouds of quantum fireworks. So they enlisted a computer, and it noticed a hidden turtle.
Algorithms rely on humans to feed them training data and help them interpret the world, so is it really surprising that they reflect the biases of the humans who build them?