
Tweetris – A Study of Whole-Body Interaction in Public Space

Screen Shot 2015-11-05 at 2.46.38 PM

While researching artists and works dealing with the kinesthetic performance of the body in public space, I came across a fantastic research project called “Tweetris: A Study of Whole-Body Interaction During a Public Art Event” (Full Article Available Here) by Dustin Freeman et al.

The article explores multiple methods of Whole-Body Interaction (WBI) and presents interesting findings on effective practices for using one’s body in space as an interactive controller. Modes of WBI representation such as silhouettes and avatars are compared, with the “discretized silhouette” emerging as the chosen method because it allows for a “down-sampling of the raw silhouette given by any body-detection sensor” and thus “encourages exploration of whole body interaction strategies” (Freeman, 1) by the user.

The project used the Microsoft Kinect V1 and presented a projected overlay of the gameboard, the Kinect’s line of sight, and the interpreted player shapes. Presented as an interactive art exhibit at the 2011 Nuit Blanche event in Toronto, the project did much to reveal how users interpret and play along as participants, both with and without interaction constraints. The willingness of participants to fully explore the space, including using walls and bystanders, allows for a deeper understanding of the ways in which participants will use their own kinesthetic abilities within a given public space.
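To make the “discretized silhouette” idea concrete, here is a minimal Processing sketch of one way such a down-sampling could work: a binary silhouette image is reduced to a coarse grid, and a cell counts as filled once enough of it is covered by the body. The silhouette file, grid dimensions, and coverage threshold are my own assumptions for illustration and are not taken from the Tweetris implementation.

PImage silhouette;
int cols = 10;
int rows = 20;

void setup() {
  size(400, 800);
  // assumed asset: a white-body-on-black silhouette from any body-detection sensor
  silhouette = loadImage("silhouette.png");
  silhouette.resize(width, height);
  noStroke();
  noLoop();
}

void draw() {
  background(0);
  silhouette.loadPixels();
  int cellW = width / cols;
  int cellH = height / rows;

  for (int cy = 0; cy < rows; cy++) {
    for (int cx = 0; cx < cols; cx++) {
      // count the bright (body) pixels inside this grid cell
      int hits = 0;
      for (int y = cy * cellH; y < (cy + 1) * cellH; y++) {
        for (int x = cx * cellW; x < (cx + 1) * cellW; x++) {
          if (brightness(silhouette.pixels[x + y * silhouette.width]) > 128) {
            hits++;
          }
        }
      }
      // mark the cell as occupied once roughly a third of it is covered
      if (hits > cellW * cellH * 0.3) {
        fill(255);
        rect(cx * cellW, cy * cellH, cellW, cellH);
      }
    }
  }
}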

Article and Image: http://www.cs.toronto.edu/~fchevali/resources/projects/tweetris/tweetris-CC2013.pdf


Quick Update: Live Image Manipulation with Processing

Screen Shot 2015-10-18 at 11.12.17 PM

While building a foundation in image manipulation, I began to experiment with more advanced “glitches” using live feeds from the Kinect and webcam. The results are pretty unique and I absolutely wanted to share two quick shots in the interim while I prepare for a full blog post.

Screen Shot 2015-10-22 at 4.41.29 PM

 

GitHub Repository: https://github.com/XBudd/Processing-Experiments/tree/master/databend_kinect_3D
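Until the full write-up is ready, here is a minimal sketch of the general flavor of these live-feed glitches: each row of the webcam image is copied with a small random horizontal offset, giving a databend-style tearing effect. This is an illustration of the approach rather than the code in the repository above, and the 640x480 camera size is an assumption.

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
    cam.loadPixels();
    loadPixels();
    // copy each row of the live feed with a random horizontal shift
    for (int y = 0; y < height; y++) {
      int offset = int(random(-20, 20));
      for (int x = 0; x < width; x++) {
        int sx = constrain(x + offset, 0, width - 1);
        pixels[x + y * width] = cam.pixels[sx + y * cam.width];
      }
    }
    updatePixels();
  }
}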

Quick Update: Kinect Visualization in 3D Space

Screen Shot 2015-10-16 at 9.13.19 PM

Just a quick update on the amazing visualization and manipulation possibilities of using Processing with camera feeds.

Direct depth sensor information from the Kinect allows one to achieve results like the one seen above. Through SimpleOpenNI, a skeleton is registered for each user’s body, opening up a great wealth of information for interaction by and with participants.

By sampling the points to get color information from the Kinect’s camera, one can replace the monochrome points with colored ones, producing an effect that shows the great visualization potential of the Kinect’s system when used to its fullest. Such an effect is seen here:

Screen Shot 2015-10-19 at 12.03.45 AM
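For reference, a stripped-down sketch of this kind of colored point cloud looks roughly like the following. SimpleOpenNI supplies real-world depth points via depthMapRealWorld(), and each point is colored by sampling the RGB image at the same index; the depth/color registration here is only approximate, and this is a sketch of the approach rather than the exact code behind the screenshots above.

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(1024, 768, P3D);
  context = new SimpleOpenNI(this);
  context.enableDepth(); // depth map, converted to real-world points
  context.enableRGB();   // color camera, sampled for point colors
}

void draw() {
  background(0);
  context.update();

  PVector[] points = context.depthMapRealWorld();
  PImage rgb = context.rgbImage();
  rgb.loadPixels();

  translate(width / 2, height / 2, 0);
  rotateX(radians(180)); // flip so the scene appears upright

  // draw every Nth point, colored by the pixel at the same index in the RGB image
  int step = 5;
  for (int i = 0; i < points.length; i += step) {
    stroke(rgb.pixels[i]);
    point(points[i].x, points[i].y, points[i].z);
  }
}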

Bonus video of me nerding out to this cool technology:

 

 

 


Lovejoy: Video as Time, Space, Motion

Screen Shot 2015-10-10 at 5.21.26 PM

Reading through Margot Lovejoy’s Digital Currents: Art in the Electronic Age, I came across a particular writing that struck a chord with my current explorations in video and performance art: “Video as time, space, motion.”

In this writing, Lovejoy examines the relationship and history of video artists and the technology/tools that aid and inform their practices. Artists such as Nam June Paik and Wolf Vostell are front and center in this piece, as one would imagine — however, it is the understanding of their practices through the evolution of technology that is of great interest to me.

In 1965, the Portapak video camera was released, and with it came a new era of accessible videography. This made large waves in the art world due to the device’s accessibility, both in financial cost and in portability. Moving images became a form of interactive art – imperative to the formation of new video art and, eventually, telepresence works.

From Rosenbach to Nauman, a very wide spectrum of artists quickly appropriated video as an expressive new art medium. For feminist artists, this tool was particularly invaluable due to what Lovejoy calls the “newness of video.” She argues that this trait afforded it a completely unobjective stance as a medium, allowing it to be appropriated for influential, ungendered works.

Video has progressed in so many ways, becoming increasingly accessible with each year and each innovation. Today, we hardly think of video as “new” or clear of objective history. For this exact reason, understanding and appreciating just how much ground the relatively short history of videography and video as an art form covers is incredibly important. Lovejoy’s work is a great start in the right direction, with ample support, works, and queries to push the importance of video as a medium to new heights.


Pixel Sorting in Processing

Screen Shot 2015-09-23 at 1.09.38 PM

In my attempts to understand the ways in which I can use pixels[] in Processing, I came across an example of a pixel sorting image on a Glitch Artist forum I frequent.

I quickly found this post on Pixel-Sorting on the Processing Forums which tremendously helped me discover just how easy pixel sorting is (shout out to user “aferriss” for his/her code contribution)!

The image you see above was created with only a few lines:

PImage img;

void setup() {
  size(800, 396); // sketch window sized to match the source image
  img = loadImage("http://i.kinja-img.com/gawker-media/image/upload/s--HjX4WPNS--/c_scale,fl_progressive,q_80,w_800/1441928320169332516.jpg");
}

void draw() {
  background(255);
  img.loadPixels(); // make the image's pixels[] array available

  for (int y = 0; y < height; y += 1) {
    for (int x = 0; x < width; x += 1) {
      int loc = x + y * img.width;
      float r = red(img.pixels[loc]);
      float g = green(img.pixels[loc]);
      float b = blue(img.pixels[loc]);
      float av = (r + g + b) / 3.0;

      pushMatrix();
      translate(x, y);
      stroke(r, g, b);
      // draw a short horizontal streak whose length depends on the average brightness
      if (r > 100 && r < 255) {
        line(0, 0, (av - 255) / 3, 0);
      }
      popMatrix();
    }
  }
  println("done");
  noLoop(); // render once
}
GitHub Repository: https://github.com/XBudd/Processing-Experiments/tree/master/pixel_sorting

The real magic comes from separating the red, green, and blue channels of each pixel and then translating the drawing position so as to “sort” the pixels.

Not content with such an easy project, I set out to see if I could pixel sort a live camera image! Of course, Processing makes this super easy too.

Screen Shot 2015-09-23 at 1.18.39 PM

Using the “Getting Started with Capture” example to speed things up (not necessary considering just how easy it is to start a camera and load the feed, but useful for the sake of simplicity), I fiddled with the code using the same principles as the previous sketch to create a quick and dirty app which live pixel-sorts video from your webcam. The result, as shown above, is pretty interesting. Stale scenes like my room don’t do much, but the world beyond the windows is pure psychedelic awesomeness.

At this time, the framerate is a bit sluggish, but thanks to P3D, I’m able to get about 12-15 FPS. You can find the code here:

import processing.video.*;

Capture video;

void setup() {
  size(1280, 720, P3D);
  background(0);
  String[] cameras = Capture.list();

  if (cameras == null) {
    println("Failed to retrieve the list of available cameras, will try the default...");
    video = new Capture(this, 1280, 720);
    video.start();
  } else if (cameras.length == 0) {
    println("There are no cameras available for capture.");
    exit();
  } else {
    println("Available cameras:");
    for (int i = 0; i < cameras.length; i++) {
      println(cameras[i]);
    }

    // The camera can be initialized directly using an element
    // from the array returned by list():
    video = new Capture(this, cameras[0]);

    // Start capturing the images from the camera
    video.start();
  }
}

void draw() {
  if (video.available() == true) {
    video.read();
    video.loadPixels();

    // Note: this assumes the camera resolution matches the sketch size (1280x720);
    // if it differs, clamp the loops to video.width and video.height instead.
    for (int y = 0; y < height; y += 1) {
      for (int x = 0; x < width; x += 1) {
        int loc = x + y * video.width;
        float r = red(video.pixels[loc]);
        float g = green(video.pixels[loc]);
        float b = blue(video.pixels[loc]);
        float av = (r + g + b) / 3.0;

        pushMatrix();
        translate(x, y);
        stroke(r, g, b);
        // change these values to alter the length: the closer to 0, the longer the lines.
        // You can also try different shapes or even bezier curves instead of line().
        if (r > 100 && r < 255) {
          line(0, 0, (av - 255) / 3, 0);
        }
        popMatrix();
      }
    }
  }
}
GitHub Repository: https://github.com/XBudd/Processing-Experiments/tree/master/pixel_sorting_vid

Pushing this project further, I will be looking for additional ways to aesthetically degrade the image: scanlines, frozen pieces, transformations, etc.
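As a first pass at that kind of degradation, a scanline overlay is about as simple as it gets. The helper below is my own sketch of the idea (not code from the repository) and could be called at the end of draw() to darken every other row of the output:

// darken every other row of the sketch window with a semi-transparent line
void drawScanlines() {
  stroke(0, 120); // black at roughly 47% opacity
  for (int y = 0; y < height; y += 2) {
    line(0, y, width, y);
  }
}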

P.S. I also have to give a shoutout to SyntaxHighlighter Evolved (WordPress Plugin) for letting me format and present my code so cleanly.


One Step Forward, Three Steps Back – Kinect & Processing

Kinect

I’ve given in.

After far, far, far too many hours hoping to find some life in the Processing community surrounding the Kinect V2, I’ve come to conclude that the only worthwhile move for the sake of compatibility is to change my hardware, not my software.

For the projects I have in mind, libraries such as SimpleOpenNI and programs such as DepthKit perfectly fit the bill. The issue, however, is that the brilliant Kinect V2 I purchased in April 2015 is just not supported. I’d like to write “not supported yet,” but after reading about so many false starts in the community way back in 2014, I just don’t see myself being able to take advantage of the Processing + Kinect platform with the newest Kinect.

So, I bit the bullet and bought a used Kinect V1 on eBay. Fortunately, unlike the latest Kinect, the investment is not too high: a mere $40 will get you the Kinect V1, USB/AC adapter, and shipping. An important note: there are actually TWO Kinect models under the V1 umbrella, model 1414 and model 1473.

The 1414 is the original Kinect model and is the most widely supported and used version of the Kinect V1 for digital media projects. The 1473 is an updated revision of the hardware and internal software of the V1 Kinect – it alters the way in which the chain of components is interpreted and thus requires a rewrite of much of the original/popular libraries available.

UC Davis has a great writeup on the differences between the two Kinect models in relation to their ability to be utilized for projects here: UC Davis – Kinect Hacking

While I am excited to be able to plug into the world of Kinect hacking now that I will have the most supported version, I am disappointed that I have to take three steps back and work in a legacy format just to accomplish what I would like to do (without spending months writing my own libraries).

Of course, the other sad part is that I am now a full week behind as I await delivery of the device, which won’t arrive for over a week. Between now and then (September 28th, hopefully), I will continue my research of Processing + Kinect and develop further practice projects that work well with the Kinect V2.

Always hate when I feel behind the ball but there is no way I could have predicted so little life in the Processing community when it comes to the Kinect V2. With any luck, we’ll see a revamped initiative in the coming months. For now, it seems that other artists – myself officially included – are dealing with the limitations and developing for the legacy Kinect V1.


Photo Credit: https://www.uni-weimar.de/medien/wiki/images/Kinect.png

 


Getting Started with Kinect Masking

Capture_kinect_september2015
Camera output from Kinect V2

First, let me start by saying that, at this point, I have officially stared at my face for far longer than I am comfortable with today. In part because I cannot get over the incredible new face effects on Snapchat, but mainly because I have been working extensively with Processing examples for the Kinect. So far, I’ve concluded two things: 1. I will be facing more hurdles than I had hoped, and 2. my “good side” has a very narrow range (see image above).

In all seriousness, I’m loving the examples included with Thomas Lengeling’s KinectPV2 library. I am embarking on a project for my Art Thesis I course: an interactive installation that will allow viewers/participants to interact with a set Dazzle Pattern (yes, the one used in World War II and on my first Art Thesis I project) overlaid on their bodies, the room, objects, etc. My hope is that I can track the participants and have the Dazzle pattern follow their movements as well as any objects which are moving or otherwise displaced. This should not be much of a challenge – especially considering the number of masking examples provided in KinectPV2’s library.
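As a sketch of how that masking might come together, the example below samples KinectPV2’s body-track image each frame and copies pixels from a dazzle-pattern image wherever a body is detected. The pattern file (“dazzle.png”), the brightness test for “body” pixels, and the overall structure are my assumptions based on the library’s masking examples, so treat this as a starting point rather than the final installation code.

import KinectPV2.*;

KinectPV2 kinect;
PImage dazzle; // assumed asset: a dazzle pattern image

void setup() {
  size(512, 424);
  dazzle = loadImage("dazzle.png");
  dazzle.resize(width, height); // ensure the pattern matches the 512x424 depth frame
  kinect = new KinectPV2(this);
  kinect.enableBodyTrackImg(true); // per-pixel body/background segmentation
  kinect.init();
}

void draw() {
  background(0);
  PImage mask = kinect.getBodyTrackImage(); // 512x424 body-track image
  mask.loadPixels();
  dazzle.loadPixels();
  loadPixels();

  for (int i = 0; i < mask.pixels.length; i++) {
    // assumption: tracked-body pixels read as dark in this image;
    // flip the comparison if your copy of the library uses the opposite convention
    if (brightness(mask.pixels[i]) < 128) {
      pixels[i] = dazzle.pixels[i]; // overlay the dazzle pattern on the body
    }
  }
  updatePixels();
}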

If that goes well, I want to expand the concept to take advantage of one of the Kinect’s most interesting features: gesture recognition. In my research, I have found a number of attempts at gesture recognition using Processing. Blobscanner seems to be a popular library for such tasks, but I am curious how I can push these works further to recognize when a participant is making a “hand gun” gesture with their left or right hand.

I am limited here by two factors: 1) gesture recognition involving exact digit placement is still relatively complicated – especially in Processing. 2) Methods I have discovered which could be applied to this goal will not work (at all or well) when the number of participants surpasses two. Not exactly ideal for presenting to a large body of gallery viewers or students in a critique. For this reason, I am focusing foremost on the pattern mapping and will, time permitting, explore gesture recognition.

Another library I am very excited to explore – and one which has myriad applications far beyond Kinect – is everyone’s favorite OpenCV. Just looking at some of the uses, I can already imagine myself expanding some initial concepts to newer heights through this library coupled with KinectPV2.

More to come soon.