riparian rap

Beautiful fluid mechanics dance.

A beautiful performance in which a dancer is tracked by a camera/software and influences virtual fluids projected behind her.

Very original; one of a kind, I’d say.  From Hope Goldman’s recent MFA performance at Urbana-Champaign.

We’re working on motion-controlled river models.  Perhaps we’ll make one controlled by a dancer.  Via boingboing.net.

Update:  Andrew Moffat, who worked on the software and hardware for this, replied to my questions about methods.  And remember, all of this was done in real time during the performance, not with post-processing of the video:

I’m glad you enjoyed the performance.  I used a fluid solver running in real time on the GPU.  It solved the Navier-Stokes differential equations for incompressible fluids.  Running the simulation on the GPU allowed us to run it fast enough to achieve real time.  From there, we had a setup with infrared lighting and a webcam that we modified to be sensitive to infrared light.  This allowed us to track the dancer accurately and create velocity fields of her motion.  Then we added the collisions and velocity fields to the fluid solver on the GPU.  A few transitions, a few colors based on velocity and time, and voila, Form Constant.
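Real-time incompressible-fluid solvers of this kind are commonly built in the style of Jos Stam's "stable fluids" method: advect the fields, then project the velocity back to divergence-free. Below is a minimal CPU sketch in NumPy to show the shape of one solver step; Andrew's version ran on the GPU, and this is an illustration of the general technique, not his code:

```python
import numpy as np

def advect(field, vx, vy, dt):
    """Semi-Lagrangian advection: trace each cell back along the velocity
    and sample the field there (nearest-neighbour to keep the sketch short)."""
    n, m = field.shape
    ys, xs = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    src_y = np.clip(ys - dt * vy, 0, n - 1).round().astype(int)
    src_x = np.clip(xs - dt * vx, 0, m - 1).round().astype(int)
    return field[src_y, src_x]

def project(vx, vy, iters=20):
    """Enforce incompressibility: solve a pressure Poisson equation with
    Jacobi iterations, then subtract the pressure gradient."""
    div = np.zeros_like(vx)
    div[1:-1, 1:-1] = 0.5 * (vx[1:-1, 2:] - vx[1:-1, :-2]
                             + vy[2:, 1:-1] - vy[:-2, 1:-1])
    p = np.zeros_like(vx)
    for _ in range(iters):
        p[1:-1, 1:-1] = (p[1:-1, 2:] + p[1:-1, :-2]
                         + p[2:, 1:-1] + p[:-2, 1:-1] - div[1:-1, 1:-1]) / 4.0
    vx, vy = vx.copy(), vy.copy()
    vx[1:-1, 1:-1] -= 0.5 * (p[1:-1, 2:] - p[1:-1, :-2])
    vy[1:-1, 1:-1] -= 0.5 * (p[2:, 1:-1] - p[:-2, 1:-1])
    return vx, vy

def step(density, vx, vy, dt=1.0):
    """One solver step: self-advect the velocity, project it back to
    divergence-free, then carry the density along with the flow."""
    vx, vy = advect(vx, vx, vy, dt), advect(vy, vx, vy, dt)
    vx, vy = project(vx, vy)
    density = advect(density, vx, vy, dt)
    return density, vx, vy
```

On a GPU each of these passes becomes a shader over a texture grid, which is what makes real-time rates achievable; the dancer's velocity fields would be added into `vx`, `vy` before each step.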

He also sent this reddit.com link, which I’ll quote here:

The camera was sensitive only to infrared light, not visible light. We had also lit the entire screen from the back with only bright infrared light. So when the camera (which was at the front, in the audience) looks at the dancer and screen, it sees a completely bright white screen and a completely dark silhouette of the dancer (because the infrared light is coming from the back, she is not being lit by it). Because the projector (also at the front, in the audience) projects only visible light onto the dancer and screen, the camera will always still see that white background and black silhouette, no matter what we project onto the dancer and screen. In other words, using infrared light made it so the projector didn’t interfere with distinguishing background from foreground.

From that point, I just ran some processing on the images coming in from the camera to clean them up. Afterwards, we had a nice image mask: the background in solid white, the dancer’s silhouette in solid black. That’s how we found her position.
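Because the backlit screen guarantees a bright background and a dark silhouette, the mask step can be as simple as a brightness threshold followed by a little noise cleanup. A hypothetical NumPy sketch (the actual processing pipeline wasn't specified, so the threshold value and the morphological cleanup here are assumptions):

```python
import numpy as np

def silhouette_mask(ir_frame, threshold=128):
    """Threshold an infrared camera frame: the backlit screen reads bright
    (True = background), the dancer's silhouette reads dark (False)."""
    return ir_frame >= threshold

def clean_mask(mask):
    """Remove speckle noise with a tiny morphological open (erode, then
    dilate) -- a stand-in for the unspecified cleanup processing."""
    def erode(m):
        out = m.copy()
        out[1:-1, 1:-1] = (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
                           & m[1:-1, :-2] & m[1:-1, 2:])
        return out
    def dilate(m):
        out = m.copy()
        out[1:-1, 1:-1] = (m[1:-1, 1:-1] | m[:-2, 1:-1] | m[2:, 1:-1]
                           | m[1:-1, :-2] | m[1:-1, 2:])
        return out
    return dilate(erode(mask))
```

The silhouette's position then falls out directly, e.g. as the centroid of the False pixels.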

Finding the motion of the dancer was a little trickier. It involved running an algorithm called optical flow on each image and the one before it. From that, I was able to determine a velocity field of her motion.
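Optical flow estimates per-pixel motion from brightness changes between consecutive frames. One classic dense variant is Lucas-Kanade, which solves a small least-squares system in a window around each pixel; a minimal NumPy sketch follows (which flow algorithm Andrew actually used isn't stated, so this is just one common choice):

```python
import numpy as np

def dense_flow(prev, curr, win=5, eps=1e-6):
    """Dense Lucas-Kanade optical flow between two grayscale frames.
    Returns per-pixel (vx, vy) velocity fields, the kind of thing that
    could be fed into a fluid solver."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    Iy, Ix = np.gradient(prev)  # spatial gradients (rows, cols)
    It = curr - prev            # temporal difference

    def box(a):
        # Sum over a win x win neighbourhood via shifted copies.
        pad = win // 2
        ap = np.pad(a, pad)
        h, w = a.shape
        out = np.zeros_like(a)
        for dy in range(win):
            for dx in range(win):
                out += ap[dy:dy + h, dx:dx + w]
        return out

    # Windowed structure-tensor sums for the 2x2 least-squares system.
    Sxx, Sxy, Syy = box(Ix * Ix), box(Ix * Iy), box(Iy * Iy)
    Sxt, Syt = box(Ix * It), box(Iy * It)
    det = Sxx * Syy - Sxy * Sxy
    ok = det > eps  # only solve where the system is well-conditioned
    safe = np.where(ok, det, 1.0)
    vx = np.where(ok, (-Sxt * Syy + Sxy * Syt) / safe, 0.0)
    vy = np.where(ok, (-Syt * Sxx + Sxy * Sxt) / safe, 0.0)
    return vx, vy
```

In practice these velocity fields, together with the silhouette mask used as collision boundaries, are what get injected into the fluid solver each frame.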

What an amazing project, thanks Andrew!