June 16, 2021

THE MYSTERY OF MOVEMENT WITHOUT THE BALL:

How DeepMind Is Using AI To 'Solve' Soccer: After Go, chess, and protein folding, the world's most famous AI company is taking on the challenge of a uniquely human sport. (The Physics arXiv Blog, Jun 11, 2021)

DeepMind has created an intelligent agent that has learnt how to play soccer: not just high-level skills such as how to tackle, pass and play in a team, but how to control a fully articulated human body so that it performs these actions the way a human does. The result is an impressive simulation of soccer that is reminiscent of human players, albeit naïve and ungainly ones.

The approach is described by Siqi Liu and colleagues at DeepMind. The first task is to give the intelligent agent full control over a humanoid figure with the joints and articulation of a real human, 56 degrees of freedom in all.
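In reinforcement-learning terms, "full control" means the policy reads the state of the body on every physics step and outputs one actuation target per joint. The sketch below is only meant to illustrate that kind of interface; the names, the observation layout and the zero-action stand-in are assumptions made for this example, not DeepMind's published API.

    from dataclasses import dataclass
    from typing import List

    NUM_JOINTS = 56  # the humanoid has 56 controllable degrees of freedom

    @dataclass
    class Observation:
        """Hypothetical view of the world the controller receives each step."""
        joint_angles: List[float]         # one angle per degree of freedom
        joint_velocities: List[float]     # and its rate of change
        ball_position: List[float]        # ball position relative to the player
        other_players: List[List[float]]  # teammates and opponents

    def act(obs: Observation) -> List[float]:
        # Stand-in for the learned policy: emit one actuation target per joint.
        # A trained network would map the observation to useful torques; here
        # we return zeros just to show the shape of the interface.
        return [0.0] * NUM_JOINTS

    # Exercise the interface with a zero-initialised observation.
    obs = Observation([0.0] * NUM_JOINTS, [0.0] * NUM_JOINTS, [0.0, 0.0, 0.0], [])
    assert len(act(obs)) == NUM_JOINTS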

The agent learns to control this humanoid in a simulated environment with ordinary gravity and the other laws of physics built in. It does this by learning to copy the movements of real footballers, captured via standard motion-capture techniques: running, changing direction, kicking and so on. The AI humanoids then practice mid-level skills such as dribbling, following the ball and shooting. Finally, the humanoids play 2 v 2 games in which the winning team is the one that scores first.
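That staged curriculum (imitate motion capture, practice individual drills, then play 2 v 2) is easier to picture as a training loop. The sketch below is a minimal, self-contained illustration of the structure only: every class, function and reward in it is a placeholder invented for this example, and the update rule is a trivial stand-in rather than the reinforcement-learning algorithm actually used in the paper.

    import random

    class HumanoidPolicy:
        """Stand-in for a learned controller over the simulated humanoid."""
        def __init__(self):
            self.params = [random.random() for _ in range(8)]

        def update(self, reward):
            # Trivial placeholder for a learning step: nudge parameters by reward.
            self.params = [p + 0.01 * reward for p in self.params]

    def mocap_tracking_reward(policy):
        # Stage 1: reward for reproducing motion-capture clips
        # (running, changing direction, kicking). Random here, for illustration.
        return random.uniform(0.0, 1.0)

    def drill_reward(policy, drill):
        # Stage 2: reward for mid-level drills such as dribbling or shooting.
        return random.uniform(0.0, 1.0)

    def play_2v2(team_a, team_b):
        # Stage 3: a 2 v 2 game decided by whoever scores first.
        # Returns +1 if team_a scores first, -1 otherwise.
        return random.choice([1, -1])

    def train():
        players = [HumanoidPolicy() for _ in range(4)]

        # Stage 1: low-level motor control learned by imitating motion capture.
        for _ in range(1000):
            for p in players:
                p.update(mocap_tracking_reward(p))

        # Stage 2: mid-level skills practiced as individual drills.
        for _ in range(1000):
            for p in players:
                for drill in ("dribble", "follow_ball", "shoot"):
                    p.update(drill_reward(p, drill))

        # Stage 3: 2 v 2 self-play; the only signal is winning or losing,
        # so any teamwork has to emerge from that sparse reward.
        for _ in range(1000):
            outcome = play_2v2(players[:2], players[2:])
            for p in players[:2]:
                p.update(outcome)
            for p in players[2:]:
                p.update(-outcome)

        return players

    if __name__ == "__main__":
        train()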

One of the impressive outcomes of this process is that the humanoids learn tactics of various kinds. "They develop awareness of others and learn to play as a team, successfully bridging the gap between low-level motor control at a time scale of milliseconds, and coordinated goal-directed behaviour as a team at the timescale of tens of seconds," say Liu and colleagues. Footage of these games, along with the way the players learn, is available online.

What makes this work stand out is that DeepMind takes on these challenges together, whereas in the past they have usually been tackled separately. That's important because the emergent behaviour of the players depends crucially on their agility and their naturalistic movement, which shows the advantage of combining these approaches. "The results demonstrate that artificial agents can indeed learn to coordinate complex movements in order to interact with objects and achieve long-horizon goals in cooperation with other agents," say the team.

Interestingly, the players learn to pass but don't seem to learn how to run into space. Perhaps that is because this often requires players to run away from the ball. Without that ability, the patterns of play are reminiscent of those of young children, who tend to chase the ball in a herd.

Older children develop a sense of space, and adult players spend large portions of the game running into space or closing down space that opposition players could run into, all without the ball.

But DeepMind's approach is in its infancy and has the potential to advance significantly. The obvious next step is to play games with larger teams and to see what behaviour emerges. "Larger teams might also lead to the emergence of more sophisticated tactics," say the researchers.

Posted by at June 16, 2021 8:26 PM
