The Pong Experiment
In 1991, Loren Carpenter, co-founder of Pixar, tried an experiment at SIGGRAPH. People entered a theatre to find small paddles left on their seats, each with one green side and one red side. On a screen, they could see lots of red and green squares. The audience members connected the two, and were able to identify their own paddles in the crowd on the screen.
Then a game of Pong appeared and the audience came to realise that they had been split into two halves, with each team controlling one of the players in the game collaboratively, using their paddles – green for up, red for down.
Much like DeepMind learned to play Atari games with only the pixel data, controls and scores, the crowd was able to figure out the situation with no instructions.
They operated as a cohesive single entity to control their player in the Pong game. In the BBC documentary All Watched Over by Machines of Loving Grace, Loren describes this effect:
They’re all acting as individuals, because each one of them can decide what they’re going to do … There’s an order that emerges that gives them kind of like an amoeba like effect where they surge and they play … I wanted to see if no hierarchy existed at all, what would happen? They formed a kind of a subconscious consensus.
I want to recreate Loren Carpenter’s Pong experiment, because I expect I will be able to learn from going through the process. It will be interesting to see, in person, how an audience reacts and I hope it will inspire more ideas. This week I’m beginning investigations into how to do this technically.
Thinking about the technology involved in the Pong experiment, it’s pretty amazing that it was achieved in 1991. That’s the same year that Tim Berners-Lee put the first website online at CERN. I even started to wonder whether the description of the experiment in AWOBMOLG was entirely accurate. After all, this was an age where good computers ran at 25 or 33 MHz. The laptop I’m writing this on today runs at 2.4 GHz, almost a hundred times faster. There were no programming languages designed for visuals, such as Processing or openFrameworks. Certainly there were no libraries available for computer vision or blob tracking. How did they manage to do it?
This skepticism was quickly quashed when I found Carpenter’s patent for the technology, which shows that, physically, the system relies on a light mounted next to the camera and reflective material on the audience’s paddles, making them easier to identify. In terms of the software, the entire thing was coded from scratch in C.
Luckily for me, these days, OpenCV and YouTube tutorials are here to help. I started experimenting with blob detection using this tutorial by Daniel Shiffman. With a little adjustment of the thresholds, and some coloured cards, I managed to get reliable results.
It does pick up some background objects, and I expect that in an audience of people there would be lots of false positives from people’s clothes, so I ordered some reflective tape to experiment with Carpenter’s method.
I thought I would need a particular type of light and a dark room to get this to work at all. In fact, using my phone’s torch next to my Canon 7D in the normally lit auditorium worked really well.
Problems do happen when the cards are too far away, as the light from my phone doesn’t reach far enough. The tape also needs to be aimed at the camera fairly accurately to reflect the light properly. But this doesn’t stop the system being usable. I’ll explore improving it by adding more lights, brighter lights, or perhaps more omnidirectional reflectors.
In the meantime I decided to code up the Pong game and take advantage of my friend’s Pancake Day gathering to test out how the control actually works. The camera feed is split into two halves, creating two teams, each controlling one of the paddles.
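The split itself is simple: each half of the frame is processed independently and its blobs drive one paddle. A rough sketch (the function name is my own, not from the actual game code):

```python
# Split a camera frame into left/right halves, one per team (a sketch).
import numpy as np

def split_teams(frame):
    """Split a frame (H x W x 3) into left and right team halves."""
    h, w = frame.shape[:2]
    return frame[:, :w // 2], frame[:, w // 2:]

frame = np.zeros((240, 320, 3), dtype=np.uint8)
left, right = split_teams(frame)
print(left.shape, right.shape)  # -> (240, 160, 3) (240, 160, 3)
```

Blob detection then runs on each half separately, so a card only ever counts for the team whose side of the frame it appears in.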
People seemed to enjoy playing and got into the competitiveness. (One friend asked what the consensus was on leaning over and putting his card into the other team’s space to mess them up.) I quickly coded up a counter for how many passes a rally had achieved, and a “top score” for the longest rally so far. This made it more of a group activity rather than a competitive one.
I found that, unsurprisingly, people weren’t keen on having a bright light shone in their face and we ended up just reverting to the plain colours – not using reflection. At this size of space, webcam resolution was sufficient. I was able to control the space to avoid false positives (we removed a red cushion, and a red pair of scissors) so this worked fine. I’ll experiment with offsetting the angles of the camera and light to reduce the eye glare.
The biggest issue with the control is that it is hard to make the Pong paddle stay still. Currently, if there are more red cards than yellow, the paddle moves down; more yellow than red, it moves up. I tried adjusting the code so that a larger majority was needed to make the paddle move. Everyone said it felt a lot less responsive, as everyone on a three-person team had to have their card in agreement to make the paddle move, and we soon reverted back. Perhaps in a larger group this would work better.
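The voting rule above can be sketched as a single function. With a threshold of zero it is the simple majority rule; raising the threshold adds the dead zone that made the paddle hold still but feel less responsive. (The function and parameter names are my own, for illustration.)

```python
# Sketch of the card-count voting rule: more red -> down, more yellow -> up.
# `threshold` widens the dead zone in which the paddle holds still.
def paddle_direction(red_count, yellow_count, threshold=0):
    """Return +1 (down), -1 (up) or 0 (hold) from the card counts."""
    diff = red_count - yellow_count
    if diff > threshold:
        return 1    # red majority beyond the dead zone: move down
    if diff < -threshold:
        return -1   # yellow majority beyond the dead zone: move up
    return 0        # within the dead zone: hold still

print(paddle_direction(5, 3))               # simple majority -> 1 (down)
print(paddle_direction(2, 1, threshold=2))  # 3-person team, 2 v 1 -> 0 (hold)
```

With `threshold=2` on a three-person team, only a unanimous 3–0 split moves the paddle, which matches the “everyone had to agree” behaviour we saw; in a bigger crowd the same threshold would be a far smaller fraction of the votes, so it might feel fine.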
People said that the game felt very responsive, which was encouraging to hear. I wonder what it will be like when there are many more people working together to control the paddle – will individuals still feel like they have control? How will it feel to be part of the group?