Scientists taught an AI-powered 'robot dog' how to play badminton against humans — and it's actually really good
Scientists have trained the ANYmal quadruped robot to play badminton, and it's good enough to compete in a 10-shot rally with a human opponent.

Scientists have trained a four-legged robot to play badminton against a human opponent, and it scuttles across the court to play rallies of up to 10 shots.
By combining whole-body movements with visual perception, the robot, called "ANYmal," learned to adapt the way it moved to reach the shuttlecock and successfully return it over the net, thanks to artificial intelligence (AI).
This shows that four-legged robots can be built as opponents in "complex and dynamic sports scenarios," the researchers wrote in a study published May 28 in the journal Science Robotics.
ANYmal is a four-legged, dog-like robot that weighs 110 pounds (50 kilograms) and stands about 1.5 feet (0.5 meters) tall. Having four legs allows ANYmal and similar quadruped robots to travel across challenging terrain and move up and down obstacles.
Researchers have previously added arms to these dog-like machines and taught them how to fetch particular objects or open doors by grabbing the handle. But coordinating limb control and visual perception in a dynamic environment remains a challenge in robotics.
"Sports is a good application for this kind of research because you can gradually increase the competitiveness or difficulty," study co-author Yuntao Ma, a robotics researcher previously at ETH Zürich and now with the startup Light Robotics, told Live Science.
Teaching a new dog new tricks
In this research, Ma and his team attached a dynamic arm holding a badminton racket at a 45-degree angle onto the standard ANYmal robot.
With the addition of the arm, the robot stood 5 feet, 3 inches (1.6 m) tall and had 18 joints: three on each of the four legs, and six on the arm. The researchers designed a complex built-in system that controlled the arm and leg movements.
The team also added a stereo camera, which had two lenses stacked on top of each other, just to the right of center on the front of the robot's body. The two lenses allowed it to process visual information about the incoming shuttlecocks in real time and work out where they were heading.
The robot was then taught to become a badminton player through reinforcement learning. With this type of machine learning, the robot explored its environment and used trial and error to learn to spot and track the shuttlecock, navigate toward it and swing the racket.
To do this, the researchers first created a simulated environment consisting of a badminton court, with the robot's virtual counterpart standing in the center. Virtual shuttlecocks were served from near the center of the opponent's half of the court, and the robot was tasked with tracking its position and estimating its flight trajectory.
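Estimating where a shuttlecock will land is harder than for a ball because air drag dominates its flight. As a rough illustration of the kind of trajectory estimate described above, here is a minimal sketch that integrates a point-mass model with gravity and quadratic drag forward in time. The drag coefficient and step size are illustrative assumptions, not values from the study.

```python
G = 9.81      # gravity, m/s^2
K = 0.2       # illustrative drag coefficient per unit mass, 1/m (assumed)
DT = 0.002    # integration step, s

def predict_landing(pos, vel):
    """Integrate the flight forward until the shuttle reaches the ground
    (z = 0) and return the estimated landing point (x, y) and flight time."""
    x, y, z = pos
    vx, vy, vz = vel
    t = 0.0
    while z > 0.0:
        speed = (vx * vx + vy * vy + vz * vz) ** 0.5
        # quadratic drag opposes the velocity vector
        ax = -K * speed * vx
        ay = -K * speed * vy
        az = -G - K * speed * vz
        vx += ax * DT
        vy += ay * DT
        vz += az * DT
        x += vx * DT
        y += vy * DT
        z += vz * DT
        t += DT
    return (x, y), t

# Example: a shuttle struck 3 m up, moving 5 m/s forward and 2 m/s upward
(landing_x, landing_y), flight_time = predict_landing((0.0, 0.0, 3.0),
                                                      (5.0, 0.0, 2.0))
```

In practice the robot would refine such an estimate continuously as new camera observations arrive, rather than committing to one prediction.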
Then, the researchers created a strict training regimen to teach ANYmal how to strike the shuttlecocks, with a virtual coach rewarding the robot for a variety of characteristics, including the position of the racket, the angle of the racket's head, and the speed of the swing. Importantly, the swing rewards were time-based to incentivize accurate and timely hits.
The shuttlecock could land anywhere across the court, so the robot was also rewarded if it moved efficiently across the court and if it didn't speed up unnecessarily. ANYmal's goal was to maximize how much it was rewarded across all of the trials.
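A reward of this shape can be sketched in a few lines. The weights, target values and the 0.1-second timing window below are illustrative assumptions, not the study's actual coefficients; the point is that the swing term is gated by time-to-impact, and unnecessary base speed is penalized.

```python
import math

# Illustrative weights -- not the study's actual coefficients.
W_POS, W_ANGLE, W_SPEED, W_EFFORT = 1.0, 0.5, 0.5, 0.1

def hit_reward(racket_err_m, angle_err_rad, swing_speed, time_to_impact):
    """Reward peaks when the racket meets the shuttle at the right place,
    orientation and speed; the time gate makes the swing-speed term count
    only near the predicted moment of contact."""
    position_term = math.exp(-racket_err_m ** 2)
    angle_term = math.exp(-angle_err_rad ** 2)
    gate = math.exp(-(time_to_impact / 0.1) ** 2)   # assumed 0.1 s window
    swing_term = gate * min(swing_speed / 12.0, 1.0)  # 12 m/s top swing speed
    return W_POS * position_term + W_ANGLE * angle_term + W_SPEED * swing_term

def locomotion_penalty(base_speed, needed_speed):
    """Penalize moving faster across the court than the shot requires."""
    return -W_EFFORT * max(0.0, base_speed - needed_speed) ** 2
```

During training, the policy's objective is the sum of such terms over many simulated serves, which is what "maximize how much it was rewarded across all of the trials" amounts to.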
Based on 50 million trials of this simulation training, the researchers created a neural network that could control the movement of all 18 joints to travel toward and hit the shuttlecock.
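The trained controller is, at its core, a function from observations to 18 joint commands. A minimal sketch of such a policy network is below; the layer sizes, observation contents and random weights are placeholders standing in for the trained parameters, which the study does not publish in this form.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM = 45   # e.g. joint states + base motion + shuttle estimate (assumed)
ACT_DIM = 18   # 3 joints on each of 4 legs + 6 arm joints

# Randomly initialized weights stand in for the trained parameters.
W1 = rng.normal(0.0, 0.1, (OBS_DIM, 128))
W2 = rng.normal(0.0, 0.1, (128, ACT_DIM))

def policy(obs):
    """One forward pass: observation vector -> 18 joint-position targets."""
    hidden = np.tanh(obs @ W1)
    return np.tanh(hidden @ W2)   # bounded targets, scaled by a downstream layer

targets = policy(np.zeros(OBS_DIM))
```

One network driving all 18 joints at once is what lets leg placement and arm swing be optimized jointly, rather than by separate locomotion and manipulation controllers.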
A fast learner
After the simulations, the scientists transferred the neural network into the robot, and ANYmal was put through its paces in the real world.
Here, the robot was trained to find and track a bright-orange shuttlecock served by another machine, which enabled the researchers to control the speed, angles and landing locations of the shuttlecocks. ANYmal had to scuttle across the court to hit the shuttlecock at a speed that would return it over the net and to the center of the court.
The researchers found that, following extensive training, the robot could track shuttlecocks and accurately return them with swing speeds of up to approximately 39 feet per second (12 meters per second) — roughly half the swing speed of an average human amateur badminton player, the researchers noted.
ANYmal also adjusted its movement patterns based on how far it had to travel to the shuttlecock and how long it had to reach it. The robot did not need to travel when the shuttlecock was due to land only a couple of feet (half a meter) away, but at about 5 feet (1.5 m), ANYmal scrambled to reach the shuttlecock by moving all four legs. At about 7 feet (2.2 m) away, the robot galloped over to the shuttlecock, producing a period of elevation that extended the arm's reach by 3 feet (1 m) in the direction of the target.
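The distance-dependent behavior described above amounts to a policy that implicitly selects a locomotion mode. A toy sketch of that mapping, using the distances reported in the article as thresholds (the mode names are labels chosen here for illustration):

```python
def choose_gait(distance_m):
    """Pick a locomotion mode based on how far away the shuttle will land."""
    if distance_m <= 0.5:
        return "reach"    # arm only, no stepping needed
    if distance_m <= 1.5:
        return "walk"     # scramble, stepping with all four legs
    return "gallop"       # run, with a flight phase that extends reach
```

In the actual system no such explicit switch exists; the learned policy produces these gaits on its own as a byproduct of reward-driven training.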
"Controlling the robot to look at the shuttleclock is not so trivial," Ma said. If the robot is looking at the shuttlecock, it can't move very fast. But if it doesn't look, it won't know where it needs to go. "This trade-off has to happen in a somewhat intelligent way," he said.
Ma was surprised by how well the robot figured out how to move all 18 joints in a coordinated way. It's a particularly challenging task because the motor at each joint learns independently, but the final movement requires them to work in tandem.
The team also found that the robot spontaneously started to move back to the center of the court after each hit, akin to how human players prepare for incoming shuttlecocks.
However, the researchers noted that the robot did not consider the opponent's movements, which is an important way human players predict shuttlecock trajectories. Including human pose estimates would help to improve ANYmal's performance, the team said in the study. They could also add a neck joint to allow the robot to monitor the shuttlecock for more time, Ma noted.
He thinks this research will ultimately have applications beyond sports. For example, it could support debris removal during disaster relief efforts, he said, as the robot would be able to balance the dynamic visual perception with agile motion.

Sophie is a U.K.-based staff writer at Live Science. She covers a wide range of topics, having previously reported on research spanning from bonobo communication to the first water in the universe. Her work has also appeared in outlets including New Scientist, The Observer and BBC Wildlife, and she was shortlisted for the Association of British Science Writers' 2025 "Newcomer of the Year" award for her freelance work at New Scientist. Before becoming a science journalist, she completed a doctorate in evolutionary anthropology from the University of Oxford, where she spent four years looking at why some chimps are better at using tools than others.