Understanding William Forsythe's "City of Abstracts"

by Carl Friedrich Bolz-Tereick, December 2020

(if you don't want to read anything just click this button to play around: )

Last year I had the chance to experience choreographer William Forsythe's video installation City of Abstracts at the Museum Folkwang in Essen. Here's a video of a few people in front of it:

The installation consists of an open space, a camera, and a big screen. The camera films the people moving and standing in the space; the screen shows a distorted version of the filmed images. The distortion seems to have the following properties: people who are standing still appear only minimally distorted on the screen (for example the person on the left in the back, in the first part of the video). People who are moving across the camera's field of view turn into diagonally stretched versions of themselves (for example the person moving from right to left at around 0:20), with the head ahead (hrm) in the direction of movement. If they then stand still, their heads stop first, while the rest of their bodies slowly catch up, from top to bottom.

While I was standing in the museum playing around, I started to wonder how the installation worked. At first I thought it must be a fairly complicated effect that somehow analyzed motion and applied an elaborate transformation. However, after some more playing, running back and forth and so on, I became convinced that the program is actually quite simple. In the following I want to explain how it works and recreate the effect in JavaScript.

A Miniature City

Because it is quite hard to reason about what happens in a big video, let's start with a miniature version of the problem. The following is a 5×5-pixel input video in which a stack of pixels first moves left, then right, and then back and forth for a bit:

We want to find out how to transform this input video into an output video that recreates the effect of Forsythe's installation.

Above we saw that the bottom of a moving object is somehow further back in time than the top. This could mean that the output video uses older and older lines of pixels from top to bottom: the top row of pixels comes from the current camera input, the second row from one frame back, the third row from two frames back, and so on. Let's try this on the miniature input video:
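This mapping can be sketched as a tiny JavaScript function over the miniature video. Each frame is an array of rows; the cell labels below are just illustrative, so we can see which frame each output row was taken from:

```javascript
// Row-delay transform: output row r at time t is taken from input frame
// t - r, clamped to the first frame when we would reach before the start.
function timeshift(frames) {
  return frames.map((frame, t) =>
    frame.map((row, r) => frames[Math.max(0, t - r)][r])
  );
}

// A 3-frame, 3-row toy video where each cell records its (frame, row)
// origin, e.g. "f2r1" = frame 2, row 1.
const input = [0, 1, 2].map(t => [0, 1, 2].map(r => `f${t}r${r}`));
const output = timeshift(input);
console.log(output[2]); // row 0 from frame 2, row 1 from frame 1, row 2 from frame 0
```

The last output frame mixes three different moments in time, which is exactly why the bottom of a moving object lags behind the top.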

This seems to recreate the effect! When the stack of pixels moves left, it is transformed into a bottom-left to top-right diagonal; when it moves back, it slowly becomes a top-left to bottom-right diagonal. When it stands still, the top pixels stop first in the output, and the lower pixels slowly catch up. Below you can look at the output pixel video:

The Real Thing

Now that we know how the effect works, we can apply it to a real video, using a webcam. If you click Start and give permission, you can try it yourself (the webcam video is not transmitted anywhere, just shown below). The left video is the webcam output, the right video is the transformed version. If you are interested in the JavaScript code, you can use the Glitch button in the top right corner to inspect (and edit) it.

The first video is the direct output of the webcam, the second video shows the "timeshifted" output as explained above. To see anything interesting happening, you need to move around in front of the webcam. Sideways motions are easiest to understand. Remember that the second video takes a while to catch up with your movements. Some fun things to try: walking across the room, slowly shaking your head, jumping, rotating your arms...
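A browser version of the transform can be sketched roughly as follows. This is not the actual Glitch code; the element lookups and buffer length are assumptions. The idea is to keep a buffer of recent webcam frames and draw each output row from an older one:

```javascript
// Pure part: which buffered frame an output row y should read from --
// y frames back from the newest, clamped to the oldest frame we still have.
function sourceIndex(y, bufferedFrames) {
  return Math.max(0, bufferedFrames - 1 - y);
}

// Browser-only part, guarded so the pure function above can run anywhere.
if (typeof document !== 'undefined') {
  const video = document.querySelector('video');    // webcam <video> element
  const canvas = document.querySelector('canvas');  // output <canvas>
  const ctx = canvas.getContext('2d');
  const history = [];                               // buffer of past frames

  function draw() {
    // Snapshot the current webcam frame into an offscreen canvas.
    const snap = document.createElement('canvas');
    snap.width = canvas.width;
    snap.height = canvas.height;
    snap.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
    history.push(snap);
    if (history.length > canvas.height) history.shift();

    // Copy each output row (a strip 1 pixel high) from an older frame.
    for (let y = 0; y < canvas.height; y++) {
      const src = history[sourceIndex(y, history.length)];
      ctx.drawImage(src, 0, y, canvas.width, 1, 0, y, canvas.width, 1);
    }
    requestAnimationFrame(draw);
  }
  draw();
}
```

The buffer needs at most as many frames as the canvas has rows, since rows further down than that would all clamp to the oldest frame anyway.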

(Aside: I am quite impressed that it works just fine to implement this piece of video processing in JavaScript and have it run relatively fluidly in a browser on a phone.)

One parameter we can play with is how quickly the lower pixels catch up with the upper ones: instead of shifting a single row of pixels per frame, we can do the transformation block-wise. If you increase the block size below, the output video becomes less laggy, but on the other hand more blocky.
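In terms of the miniature sketch, the block-wise variant only changes how the delay grows with the row index: rows are delayed in groups of blockSize, so the bottom of the image catches up blockSize times faster. A small illustrative version:

```javascript
// Block-wise row-delay transform: output row r at time t comes from input
// frame t - floor(r / blockSize), clamped to the first frame.
function timeshiftBlocks(frames, blockSize) {
  return frames.map((frame, t) =>
    frame.map((row, r) =>
      frames[Math.max(0, t - Math.floor(r / blockSize))][r]
    )
  );
}

// Toy 4-frame, 4-row video with illustrative (frame, row) labels.
const input = [0, 1, 2, 3].map(t => [0, 1, 2, 3].map(r => `f${t}r${r}`));
console.log(timeshiftBlocks(input, 2)[3]); // rows 0-1 from frame 3, rows 2-3 from frame 2
```

With blockSize = 1 this is the original transform, and with blockSize equal to the frame height the output is just the unmodified input, so the parameter interpolates between the full effect and no effect at all.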

Block Size


Relationship to Rolling Shutter

An amusing side note is that we can use the transformation to understand the rolling shutter effect. Rolling shutter happens when taking a picture of a fast-moving object with a smartphone camera. These cameras don't record the whole image at once, but line by line. If the object moves very fast relative to the time the camera takes to record the whole image, different parts of the final image show the object at different points in time (and thus space). The effect above artificially slows down the creation of a picture, so we can recreate the distortions that rolling shutter produces with objects that move much more slowly, for example a rotating arm.


In 2018 I wrote another essay, about a computational piece of art by Jürgen LIT Fischer.