A writer and a software engineer from Google's People + AI Research team explore the human choices that shape machine learning systems by building competing tic-tac-toe agents.
What have we learned about machine learning and the human decisions that shape it? And is machine learning perhaps changing our minds about how the world outside of machine learning — also known as the world — works?
Head to Head: The Even Bigger ML Smackdown!
Yannick and David’s systems play against each other in 500 games. Who’s going to win? And what can we learn about how the ML systems may be working by examining the results?
David’s variant of tic-tac-toe, which we’re calling tic-tac-two, is only slightly different but turns out to be far more complex. This requires rethinking what data the ML system will need in order to learn how to play, and how to represent that data.
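The episode doesn’t spell out the representation David landed on, but a common way to hand a board to an ML model is a flat one-hot encoding, with a few slots per cell. A minimal sketch, assuming a standard 3x3 board (the `encode_board` helper is illustrative, not the show’s actual code):

```python
def encode_board(board):
    """Encode a 3x3 board (list of 9 cells: 'X', 'O', or None)
    as a flat vector with 3 slots per cell: [is_X, is_O, is_empty]."""
    vec = []
    for cell in board:
        vec.extend([1 if cell == 'X' else 0,
                    1 if cell == 'O' else 0,
                    1 if cell is None else 0])
    return vec

board = ['X', None, 'O',
         None, 'X', None,
         None, None, 'O']
print(encode_board(board)[:6])  # first two cells -> [1, 0, 0, 0, 0, 1]
```

Part of what makes tic-tac-two harder is that any change to the rules ripples back into this choice: more board states or new move types mean a different encoding.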
Head to Head: The Big ML Smackdown!
David and Yannick’s tic-tac-toe ML agents face off against each other in tic-tac-toe!
Give that model a treat!: Reinforcement learning explained
Switching gears, we focus on how Yannick’s been training his model using reinforcement learning. He explains how it differs from David’s supervised learning approach, and we find out how his system performs against a player that makes random tic-tac-toe moves.
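The core difference from supervised learning is that a reinforcement learner gets no labeled "correct move" examples; it only sees rewards and nudges its value estimates toward them. A minimal sketch of tabular Q-learning, one common form of the technique Yannick describes (the hyperparameters, state strings, and reward scheme are illustrative assumptions, not the show’s actuals):

```python
import random
from collections import defaultdict

Q = defaultdict(float)           # Q[(state, action)] -> estimated value
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def choose_action(state, actions):
    """Epsilon-greedy: mostly exploit the best-known move, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, next_actions):
    """The core Q-learning step: nudge Q toward the reward plus the
    discounted best future value. There is no labeled answer anywhere."""
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One illustrative update: from an empty board, playing the center
# and receiving a +1 reward at a terminal (won) state.
update(state="---------", action=4, reward=1.0,
       next_state="----X----", next_actions=[])
print(Q[("---------", 4)])  # 0.5  (moved halfway toward the +1 reward)
```

Run over many self-play games, updates like this gradually rank moves by how often they lead to wins, which is what lets the agent eventually beat a random player.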
Beating random: What it means to have trained a model
David did it! He trained a machine learning model to play tic-tac-toe! How did his model do against a player that makes random tic-tac-toe moves?
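"Beating random" is a meaningful milestone because a random opponent sets the floor: any model worth the name should win more than chance would predict. A sketch of that evaluation loop, with a simple take-the-winning-move-if-available heuristic standing in for the trained model (the heuristic and game counts are illustrative assumptions, not David’s actual system):

```python
import random

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def greedy_move(board, mark):
    """Stand-in 'model': take an immediate winning move if one exists,
    otherwise move at random."""
    empties = [i for i, c in enumerate(board) if c is None]
    for i in empties:
        board[i] = mark
        if winner(board) == mark:
            board[i] = None
            return i
        board[i] = None
    return random.choice(empties)

def random_move(board, _mark):
    return random.choice([i for i, c in enumerate(board) if c is None])

def play(x_policy, o_policy):
    """Play one game; X moves first. Returns 'X', 'O', or None for a draw."""
    board = [None] * 9
    for turn in range(9):
        mark, policy = ('X', x_policy) if turn % 2 == 0 else ('O', o_policy)
        board[policy(board, mark)] = mark
        if winner(board):
            return winner(board)
    return None

random.seed(0)
results = [play(greedy_move, random_move) for _ in range(500)]
win_rate = results.count('X') / len(results)
print(f"win rate vs random: {win_rate:.0%}")
```

If the win rate sits comfortably above what a random-vs-random matchup produces, the model has genuinely learned something about the game rather than memorizing noise.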