
Processing: Dazzle Mirror

Wanted to share a super quick project I put together as a concept last night!

Based on Daniel Shiffman’s Mirror 2 example in Processing, this project expands the basic sketch to include a visualization that interacts with a Dazzle camouflage pattern.

The sketch takes input from the webcam and creates a grid system for blocks of pixels. The average brightness of each block controls how large the square appears in the grid: bright = full block, dark = tiny block.
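The brightness-to-size mapping is just a linear scale per block. Sketched in plain Java rather than a Processing sketch (the helper name and minimum size here are my own choices, not from the project’s code):

```java
public class BlockSize {
    // Map an average block brightness (0-255) to a square size within a grid cell,
    // linearly: brightness 0 -> a tiny minimum square, 255 -> the full cell.
    static float blockSize(float avgBrightness, float cellSize) {
        float minSize = 2; // keep dark blocks barely visible rather than invisible
        return minSize + (cellSize - minSize) * (avgBrightness / 255.0f);
    }

    public static void main(String[] args) {
        System.out.println(blockSize(255, 40)); // bright: full 40px cell
        System.out.println(blockSize(0, 40));   // dark: tiny 2px square
        System.out.println(blockSize(128, 40)); // mid-gray: roughly half the cell
    }
}
```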

Overlaying this visualization on the Dazzle pattern with the Exclusion blend mode produces a very trippy live render of your actions which, when still (as in a screenshot), becomes fully disguised within the camouflage pattern.
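Processing’s EXCLUSION blend mode follows, to my understanding, the standard per-channel exclusion formula, a + b – 2ab/255 for 8-bit channels: a black pixel leaves the other layer untouched, while a white pixel inverts it, which is why high-contrast Dazzle stripes scramble the mirror so effectively. A quick sanity check in plain Java:

```java
public class Exclusion {
    // Exclusion blend for one 8-bit channel: a + b - 2ab/255.
    // Black (0) passes the other channel through; white (255) inverts it.
    static int exclusion(int a, int b) {
        return a + b - (2 * a * b) / 255;
    }

    public static void main(String[] args) {
        System.out.println(exclusion(0, 200));   // black base -> 200 (unchanged)
        System.out.println(exclusion(255, 200)); // white base -> 55 (inverted)
        System.out.println(exclusion(255, 255)); // white on white cancels to 0
    }
}
```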

GitHub Repository:

2042: Delta Echo Alpha Dazzle

In this mixed media installation, I seek to explore the effect that the relationship between perspective and technology has on our ability to see – and not see – various realities.

Wordy artist statement aside, this was an immensely rewarding project that allowed me to explore further the unique medium of animation/projection-mapping/found object installation. This is a medium and language I first began to develop while studying abroad in Rome with my 2040 £RA Installation (video) and have been hoping to continue ever since.

In this project created for my first Thesis I critique, I combined domestic found objects which bring to mind a sensibility of “home,” “childhood,” and “security” with a few malicious items such as a rifle, knife, handcuffs, and pistol. These items are painted in matte white and arranged in a fixed still-life composition. Through digital projection mapping techniques, the monochromatic items become the set for a projected reality which masks and reveals realities at whim.

Dazzle camouflage is employed as a traditional tactic to disguise and blend menacing objects with their environment, while bright, playful, untextured colors partially reveal what has been before the viewer the entire time. Scan lines and waves reinforce the concept of technology as a tool and as a medium within this work.

The limits of the technology are what interest me most at this intersection. As the viewer moves around the piece – even in the line of the projector’s beam – the camouflage effect loses and gains credibility, ultimately reaching a threshold at which the projection is no longer able to wrap to the object and degrades to pixels and then voidness.

I greatly enjoyed the process and presentation of this work. Above, you can find a video documentation of the installation as presented during the critique. Additionally, a gallery of imagery showcasing the piece and workflow can be found below:


Project 1: Music Visualizer


Processing is an incredible platform for creating truly dynamic works with very little code. While learning about the language, adjusting to its syntax, and playing with libraries, I created a music visualizer which uses the Minim library.

Here’s a quick look at it in action — turn the volume up to 11 for the full effect!


GitHub Repository:

Pixel Sorting in Processing

In my attempts to understand the ways in which I can use pixels[] in Processing, I came across an example of a pixel-sorted image on a Glitch Artist forum I frequent.

I quickly found this post on Pixel-Sorting on the Processing Forums which tremendously helped me discover just how easy pixel sorting is (shout out to user “aferriss” for his/her code contribution)!

The image you see above was created with only a few lines:

PImage img;

void setup() {
  size(800, 450); // should match your source image's dimensions
  img = loadImage(",fl_progressive,q_80,w_800/1441928320169332516.jpg");
}

void draw() {
  image(img, 0, 0);
  img.loadPixels();
  for (int y = 0; y<height; y+=1 ) {
    for (int x = 0; x<width; x+=1) {
      int loc = x + y*img.width;
      float r = red(img.pixels[loc]);
      float g = green(img.pixels[loc]);
      float b = blue(img.pixels[loc]);
      float av = ((r+g+b)/3.0);

      if (r > 100 && r < 255) {
        stroke(r, g, b);
        // smear the pixel into a horizontal streak; the offset (av-255)/3 is
        // negative, so darker pixels smear further
        line(x, y, x + (av-255)/3, y);
      }
    }
  }
  noLoop(); // a static image only needs to be sorted once
}
GitHub Repository:

The real magic comes in separating the red, green, and blue channels of each pixel, which we then translate so as to “sort” the pixels.
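For reference, Processing’s red(), green(), and blue() simply unpack bytes from a packed 32-bit ARGB color, and loc = x + y*width flattens the (x, y) coordinate into the one-dimensional pixels[] array. The same arithmetic in plain Java:

```java
public class Channels {
    // Unpack the channels of a packed 32-bit ARGB color,
    // as Processing's red()/green()/blue() do internally.
    static int red(int c)   { return (c >> 16) & 0xFF; }
    static int green(int c) { return (c >> 8) & 0xFF; }
    static int blue(int c)  { return c & 0xFF; }

    public static void main(String[] args) {
        int c = 0xFF3366CC;                // opaque color: r=0x33, g=0x66, b=0xCC
        System.out.println(red(c));        // 51
        System.out.println(green(c));      // 102
        System.out.println(blue(c));       // 204

        // Flattening (x, y) into a 1D pixels[] index, as in loc = x + y*width:
        int width = 800;
        int x = 10, y = 3;
        System.out.println(x + y * width); // 2410
    }
}
```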

Not content with such an easy project, I set out to see if I could pixel-sort a live camera image! Of course, Processing makes this super easy too.


Using the “Getting Started with Capture” example to speed things up (not strictly necessary, given how easy it is to start a camera and load the feed, but useful for simplicity), I fiddled with the code using the same principles as before to create a quick-and-dirty app which live pixel-sorts video from your webcam. The result, as shown above, is pretty interesting. Stale scenes like my room don’t do much, but the world beyond the windows is pure psychedelic awesomeness.

At this time, the framerate is a bit sluggish, but thanks to P3D I’m able to get about 12–15 FPS. You can find the code here:


import processing.video.*;

int numPixels;
int[] backgroundPixels;
Capture video;

void setup() {
  size(1280, 720, P3D);
  String[] cameras = Capture.list();

  if (cameras == null) {
    println("Failed to retrieve the list of available cameras, will try the default...");
    video = new Capture(this, 1280, 720);
  } else if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }
    // The camera can be initialized directly using an element
    // from the array returned by list():
    video = new Capture(this, cameras[0]);
  }
  // Start capturing the images from the camera
  video.start();
}

void draw() {
  if (video.available() == true) {
    video.read();
    video.loadPixels();
    background(0);
    for (int y = 0; y<height; y+=1 ) {
      for (int x = 0; x<width; x+=1) {
        int loc = x + y*video.width;
        float r = red(video.pixels[loc]);
        float g = green(video.pixels[loc]);
        float b = blue(video.pixels[loc]);
        float av = ((r+g+b)/3.0);
        if (r > 100 && r < 255) {
          stroke(r, g, b);
          pushMatrix();
          translate(x, y);
          line(0, 0, (av-255)/3, 0); // change these values to alter the length; the closer to 0, the longer the lines
          // you can also try different shapes or even bezier curves instead of line()
          popMatrix();
        }
      }
    }
  }
}
GitHub Repository:

Pushing this project further, I will be looking for ways to degrade the image aesthetically even more: scanlines, frozen pieces, transformations, etc.
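As a first pass at the scanline idea, darkening every nth row of the pixel array gets most of the CRT look. A minimal plain-Java sketch (the helper name, spacing, and halving factor are my own choices, not part of the project yet):

```java
public class Scanlines {
    // Darken every nth row of a packed-ARGB pixel array to fake CRT scanlines.
    static void scanlines(int[] pixels, int width, int n) {
        for (int i = 0; i < pixels.length; i++) {
            int y = i / width;            // recover the row from the flat index
            if (y % n == 0) {
                int c = pixels[i];
                int r = ((c >> 16) & 0xFF) / 2; // halve each channel
                int g = ((c >> 8) & 0xFF) / 2;
                int b = (c & 0xFF) / 2;
                pixels[i] = (c & 0xFF000000) | (r << 16) | (g << 8) | b;
            }
        }
    }

    public static void main(String[] args) {
        int[] img = new int[4 * 2];           // a 4x2 all-white image
        java.util.Arrays.fill(img, 0xFFFFFFFF);
        scanlines(img, 4, 2);                 // darken rows 0, 2, ...
        System.out.printf("%08X %08X%n", img[0], img[4]); // FF7F7F7F FFFFFFFF
    }
}
```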

P.S. Have to also give a shoutout to SyntaxHighlighter Evolved (WordPress Plugin) for allowing me the ability to format and present my code so well.

One Step Forward, Three Steps Back – Kinect & Processing


I’ve given in.

After far, far, far too many hours hoping to find some life in the Processing community surrounding the Kinect V2, I’ve come to conclude that the only worthwhile move for the sake of compatibility is to change my hardware, not my software.

For the projects I have in mind, libraries such as SimpleOpenNI and programs such as DepthKit perfectly fit the bill. The issue, however, is that the brilliant Kinect V2 I purchased in April 2015 is just not supported. I’d like to write “not supported yet,” but after reading about so many false starts in the community way back in 2014, I just don’t see myself being able to take advantage of the Processing + Kinect platform with the newest Kinect.

So, I bit the bullet and bought a used Kinect V1 on eBay. Fortunately, unlike the latest Kinect, the investment is not too high: a mere $40 will get you the Kinect V1, USB/AC adapter, and shipping. An important note: there are actually TWO Kinect models under the V1 umbrella, model 1414 and model 1473.

The 1414 is the original Kinect model and is the most widely supported and used version of the Kinect V1 for digital media projects. The 1473 is an updated revision of the V1’s hardware and internal software – it alters the way in which the chain of components is interpreted and thus requires a rewrite of much of the original/popular libraries available.

UC Davis has a great writeup on the differences between the two Kinect models in relation to their ability to be utilized for projects here: UC Davis – Kinect Hacking

While I am excited to be able to plug into the world of Kinect hacking now that I will have the most supported version, I am disappointed that I have to take three steps back and work in a legacy format just to accomplish what I would like to do (without spending months writing my own libraries).

Of course, the other sad part is that I am now a full week behind as I await delivery of the device. Between now and then (September 28th, hopefully), I will continue my research of Processing + Kinect and develop further practice projects that work well with the Kinect V2.

Always hate when I feel behind the ball but there is no way I could have predicted so little life in the Processing community when it comes to the Kinect V2. With any luck, we’ll see a revamped initiative in the coming months. For now, it seems that other artists – myself officially included – are dealing with the limitations and developing for the legacy Kinect V1.

Photo Credit:


Getting Started with Kinect Masking

Camera output from Kinect V2

First, let me start by saying that, at this point, I have officially stared at my face for far longer than I am comfortable with today. In part, because I cannot get over the incredible new face effects on Snapchat but mainly because I have been extensively working with examples in Processing for the Kinect. At this point, I’ve come to conclude two things: 1. I will be facing more hurdles than I had hoped, and 2. my “good side” has a very narrow range (see image above).

In all seriousness, I’m loving the examples included with Thomas Lengeling’s KinectPV2 library. I am embarking on a project for my Art Thesis I course: an interactive installation that will allow viewers/participants to interact with a set Dazzle pattern (yes, the one used in World War II and on my first Art Thesis I project) overlaid on their bodies, the room, objects, etc. My hope is that I can track the participants and have the Dazzle pattern follow their movements as well as any objects which are moving or otherwise displaced. That should not be much of a challenge – especially considering the number of masking examples provided in the KinectPV2 library.
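The masking itself reduces to a per-pixel depth test: keep the camera pixel when its depth falls inside a near/far band, and substitute the pattern everywhere else. A plain-Java sketch of that logic (the arrays, band values, and helper name are stand-ins of mine, not KinectPV2’s actual API):

```java
public class DepthMask {
    // For each pixel: if its depth (in millimeters) is within [near, far],
    // keep the camera color; otherwise substitute the overlay
    // (e.g. a Dazzle-pattern pixel).
    static int[] mask(int[] camera, int[] depth, int[] overlay, int near, int far) {
        int[] out = new int[camera.length];
        for (int i = 0; i < camera.length; i++) {
            boolean inBand = depth[i] >= near && depth[i] <= far;
            out[i] = inBand ? camera[i] : overlay[i];
        }
        return out;
    }

    public static void main(String[] args) {
        int[] cam     = {0xFF111111, 0xFF222222, 0xFF333333};
        int[] depth   = {500, 1500, 3000};          // millimeters
        int[] pattern = {0xFFAAAAAA, 0xFFAAAAAA, 0xFFAAAAAA};
        int[] out = mask(cam, depth, pattern, 1000, 2500);
        // only the middle pixel (1500mm) falls inside the band and survives
        System.out.printf("%08X %08X %08X%n", out[0], out[1], out[2]);
    }
}
```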

If that goes well, I want to expand the concept to take advantage of one of the Kinect’s most interesting features: gesture recognition. In my research, I have found a number of attempts at gesture recognition using Processing. Blobscanner seems to be a popular library for such actions but I am curious how I can expand the works further to recognize when a participant is making a “hand gun” gesture with their left or right hand.

I am limited here by two factors: 1) gesture recognition involving exact digit placement is still relatively complicated, especially in Processing, and 2) the methods I have discovered which could be applied to this goal will not work (at all, or well) when the number of participants surpasses two. Not exactly ideal for presenting to a large body of gallery viewers or students in a critique. For this reason, I am focusing foremost on the pattern mapping and will, time permitting, explore gesture recognition.

Another library I am very excited to explore – and one which has myriad applications far beyond Kinect – is everyone’s favorite OpenCV. Just looking at some of the uses, I can already imagine myself expanding some initial concepts to newer heights through this library coupled with KinectPV2.

More to come soon.