The Pong Experiment
In 1991, Loren Carpenter (co-founder of Pixar) tried an experiment at SIGGRAPH. People entered a theatre to find small paddles left on their seats, green on one side and red on the other. On a screen, they could see a mass of red and green squares. The audience members made the connection, and were able to identify their own paddles in the crowd on the screen.
Then a game of Pong appeared and the audience came to realise that they had been split into two halves, with each team controlling one of the players in the game collaboratively, using their paddles – green for up, red for down.
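At its core, this kind of collaborative control is just vote aggregation: each frame, tally how many paddles show green versus red and move the on-screen player accordingly. Here is a minimal sketch of that idea in Python; the function name and the simple majority rule are my own assumptions, not details from Carpenter's actual system, which may well have weighted or averaged the votes differently.

```python
def paddle_step(colors, speed=1):
    """Decide the paddle's movement for one frame by majority vote.

    colors: iterable of 'green'/'red' votes, one per audience paddle.
    Returns +speed (move up), -speed (move down), or 0 on a tie.
    NOTE: a hypothetical sketch, not Carpenter's original scheme.
    """
    greens = sum(1 for c in colors if c == "green")
    reds = sum(1 for c in colors if c == "red")
    if greens > reds:
        return speed
    if reds > greens:
        return -speed
    return 0

print(paddle_step(["green", "green", "red"]))   # majority green: move up
print(paddle_step(["red", "red", "green"]))     # majority red: move down
print(paddle_step(["green", "red"]))            # tie: hold position
```

One appeal of a plain majority (or an average) is that it degrades gracefully: a few misread or undecided paddles barely shift the result when hundreds of people are voting.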
Much as DeepMind learned to play Atari games with only the pixel data, controls and scores, the crowd was able to figure out the situation with no instructions.
They operated as a cohesive single entity to control their player in the Pong game. In the BBC documentary All Watched Over by Machines of Loving Grace, Loren describes this effect:
They’re all acting as individuals, because each one of them can decide what they’re going to do … There’s an order that emerges that gives them kind of like an amoeba like effect where they surge and they play … I wanted to see if no hierarchy existed at all, what would happen? They formed a kind of a subconscious consensus.
I want to recreate Loren Carpenter’s Pong experiment, because I expect I will be able to learn from going through the process. It will be interesting to see, in person, how an audience reacts and I hope it will inspire more ideas. This week I’m beginning investigations into how to do this technically.
Thinking about the technology involved in the Pong experiment, it’s pretty amazing that it was achieved in 1991. That’s the same year that Tim Berners-Lee put the first website online at CERN. I even started to wonder whether the description of the experiment in AWOBMOLG was entirely accurate. After all, this was an age when good computers ran at 25 or 33 MHz. The laptop I’m writing this on today runs at 2.4 GHz, almost a hundred times faster in clock speed alone. There were no programming languages designed for visuals, such as Processing or openFrameworks. Certainly there were no off-the-shelf libraries for computer vision or blob tracking. How did they manage to do it?
This skepticism was quickly quashed when I found Carpenter’s patent for the technology, which shows that, physically, the system relies on a light mounted next to the camera, and the reflective material on the audience’s paddles, making them easier to identify. In terms of the software, the entire thing was coded from scratch in C.
Luckily for me, these days OpenCV and YouTube tutorials are here to help. I started experimenting with blob detection using this tutorial by Daniel Shiffman. With a little adjustment of the thresholds, and some coloured cards, I managed to get reliable results.
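The essence of blob detection is straightforward even without a library: threshold the image so bright pixels become "on", then group adjacent "on" pixels into connected regions via flood fill. The sketch below shows that idea in plain Python on a tiny brightness grid; it's a toy illustration of the principle, not the OpenCV implementation or the approach from Shiffman's tutorial.

```python
from collections import deque

def find_blobs(grid, threshold):
    """Group bright pixels in a 2D brightness grid into connected blobs.

    grid: list of rows of brightness values (0-255).
    threshold: pixels at or above this value count as 'on'.
    Returns a list of blobs, each a list of (row, col) coordinates.
    A toy sketch of the technique, not production code.
    """
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] >= threshold and not seen[r][c]:
                # Flood-fill from this pixel to collect its whole blob.
                blob, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs

# Two bright 'paddles' against a dark background.
frame = [
    [0,   0,   0,   0,   0],
    [0, 250, 255,   0,   0],
    [0,   0,   0,   0, 240],
    [0,   0,   0, 245, 255],
]
print(len(find_blobs(frame, threshold=200)))  # 2
```

This is also why the light next to the camera in Carpenter's rig matters: reflective paddles bounce the light straight back, so a simple brightness threshold cleanly separates them from the rest of the scene before any grouping happens.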