
Project 4: Advanced Drum Kit & Algorithm

To address the interactivity issues that dominated my recent work with the Kinect in Processing, I began writing new library classes and algorithms. I started by adapting the skeleton library from the Kinect Projector Toolkit, which is based on SimpleOpenNI, adding a number of tweaks for more accurate joint and silhouette tracking. With more accurate joint tracking in place, I went on to adapt an algorithm that lets button objects be created, displayed, and tracked with a single call. One too many sleepless nights later, I had accomplished just that for simple PShapes. Taking the logic a step further, I built an adaptable algorithm that translates the elements of an SVG file into interactive components on screen.
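At its simplest, the idea is a shape that knows whether a tracked joint currently falls inside it. Here is a minimal, hypothetical sketch of that idea (the mouse stands in for a projected Kinect joint; the actual classes are on GitHub):

// Hypothetical sketch of the button idea: a shape that reports whether a
// tracked joint position (here faked with the mouse) is currently inside it.
class JointButton {
  float x, y, w, h;
  boolean active;

  JointButton(float x, float y, float w, float h) {
    this.x = x; this.y = y; this.w = w; this.h = h;
  }

  // In the real project this would be a Kinect joint converted to screen space.
  void update(PVector joint) {
    active = joint.x > x && joint.x < x + w && joint.y > y && joint.y < y + h;
  }

  void display() {
    fill(active ? color(0, 200, 0) : color(80));
    rect(x, y, w, h);
  }
}

JointButton button;

void setup() {
  size(640, 480);
  button = new JointButton(100, 100, 120, 120);
}

void draw() {
  background(0);
  // Stand-in for a projected SimpleOpenNI joint position.
  button.update(new PVector(mouseX, mouseY));
  button.display();
}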

This algorithm can be seen in action above and, along with the simple shape version, picked apart on GitHub:

GitHub Repository: https://github.com/XBudd/ART-3092-Projects-in-Processing/tree/master/Project_4

Tweetris – A Study of Whole-Body Interaction in Public Space

[Image from the article]

While researching artists and works dealing with the kinesthetic performance of the body in public space, I came across a fantastic research project called “Tweetris: A Study of Whole-Body Interaction During a Public Art Event” by Dustin Freeman et al. (the full article is linked below).

In the article, an exploration of multiple methods for Whole-Body Interaction (WBI) presents interesting findings on effective practices for using one’s body in space as an interactive controller. Modes of WBI representation such as silhouettes and avatars are compared, with the “discretized silhouette” ultimately selected because it allows a “down-sampling of the raw silhouette given by any body-detection sensor” and thus “encourages exploration of whole body interaction strategies” (Freeman, 1) by the user.
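As a rough illustration of that down-sampling idea (not the authors’ implementation), a silhouette mask can be collapsed into a coarse grid of on/off cells, with a cell lighting up when enough of its pixels belong to the body:

// Rough illustration of a "discretized silhouette": a pixel mask from a
// body-detection sensor is down-sampled into a coarse grid of cells.
// Here a circle following the mouse stands in for the real body mask.
int cols = 10, rows = 15;

void setup() {
  size(400, 600);
  noStroke();
}

void draw() {
  background(0);
  // Fake "silhouette" mask for demonstration purposes.
  boolean[] mask = new boolean[width * height];
  for (int i = 0; i < mask.length; i++) {
    mask[i] = dist(i % width, i / width, mouseX, mouseY) < 120;
  }

  boolean[][] grid = discretize(mask, width, height);
  float cw = width / (float) cols, ch = height / (float) rows;
  fill(0, 255, 0);
  for (int cx = 0; cx < cols; cx++) {
    for (int cy = 0; cy < rows; cy++) {
      if (grid[cx][cy]) rect(cx * cw, cy * ch, cw - 2, ch - 2);
    }
  }
}

boolean[][] discretize(boolean[] mask, int w, int h) {
  boolean[][] grid = new boolean[cols][rows];
  int cellW = w / cols, cellH = h / rows;
  for (int cx = 0; cx < cols; cx++) {
    for (int cy = 0; cy < rows; cy++) {
      int filled = 0;
      for (int x = cx * cellW; x < (cx + 1) * cellW; x++) {
        for (int y = cy * cellH; y < (cy + 1) * cellH; y++) {
          if (mask[y * w + x]) filled++;
        }
      }
      // A cell is "on" when more than half of its pixels belong to the body.
      grid[cx][cy] = filled > (cellW * cellH) / 2;
    }
  }
  return grid;
}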

The project used the Microsoft Kinect v1 and presented a projected overlay of the game board, the Kinect’s line of sight, and the interpreted player shapes. Presented as an interactive art exhibit at the 2011 Nuit Blanche event in Toronto, the project revealed a great deal about how participants interpret and play with such a system, both with and without interaction constraints. Their willingness to fully explore the space, including enlisting walls and bystanders, allows for a deeper understanding of the ways participants will use their own kinesthetic abilities within a given public space.

Article and Image: http://www.cs.toronto.edu/~fchevali/resources/projects/tweetris/tweetris-CC2013.pdf


Project 3: User3d_Dance_DJ

In this project, I continue my experimentation with Kinect, interactivity, and music within Processing.

After weeks of development and struggling to work around the finicky interactivity of the Kinect within Processing (including activating buttons and tracking skeletons), I was able to create an interactive DJ set that lets users control the pitch, gain, granulation, and playback position of an audio file, as well as add in samples.
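A minimal sketch of this kind of control mapping with Minim is below; the mouse stands in for a tracked hand and “track.mp3” is a placeholder file, so this is only an illustration of the approach, not the project code:

import ddf.minim.*;

// Minimal sketch of the control-mapping idea (mouse standing in for a
// tracked Kinect hand; "track.mp3" is a placeholder in the data folder).
Minim minim;
AudioPlayer player;

void setup() {
  size(640, 480);
  minim = new Minim(this);
  player = minim.loadFile("track.mp3");
  player.loop();
}

void draw() {
  background(0);
  // Hand height -> gain (dB); hand x -> playback position while "pressed".
  float gain = map(mouseY, height, 0, -30, 6);
  player.setGain(gain);
  if (mousePressed) {
    int pos = int(map(mouseX, 0, width, 0, player.length()));
    player.cue(pos);
  }
  fill(255);
  text("gain: " + nf(gain, 0, 1) + " dB   position: " + player.position() + " ms", 20, 20);
}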

Although the final result worked well and was very enjoyable to use, the persistent interactivity issues greatly limited the experience. Moving forward, I intend to address these issues through continued research and, if necessary, by writing and rewriting my own libraries for Processing.

GitHub Repository: https://github.com/XBudd/ART-3092-Projects-in-Processing/tree/master/Project_3/User3d_Dance_2

Twitter in Processing: Part 1

Twitter Project 1: “#Cornell”

For Art Thesis I, I began combining my understanding of Processing with live Twitter data to create a series of projects incorporating a live Twitter feed.

My first project of this nature, demonstrated in the video above, displayed every tweet containing “#Cornell” on screen, with the language color-coded for negative and positive words. The results were striking both visually and conceptually, offering a live “pulse” of Cornell at any given moment.
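A minimal sketch of the query-and-color idea with Twitter4J is shown below. The credentials are placeholders and the positive/negative word lists are only examples; the actual project code is in the repository linked further down:

import twitter4j.*;
import twitter4j.conf.ConfigurationBuilder;
import java.util.List;

// Minimal sketch of querying "#Cornell" with Twitter4J and crudely coloring
// each tweet by positive/negative word matching (illustrative word lists).
List<Status> tweets;
String[] positive = { "love", "great", "happy" };
String[] negative = { "hate", "awful", "sad" };

void setup() {
  size(800, 600);
  textSize(14);
  ConfigurationBuilder cb = new ConfigurationBuilder();
  cb.setOAuthConsumerKey("CONSUMER_KEY");
  cb.setOAuthConsumerSecret("CONSUMER_SECRET");
  cb.setOAuthAccessToken("ACCESS_TOKEN");
  cb.setOAuthAccessTokenSecret("ACCESS_TOKEN_SECRET");
  Twitter twitter = new TwitterFactory(cb.build()).getInstance();
  try {
    tweets = twitter.search(new Query("#Cornell")).getTweets();
  } catch (TwitterException e) {
    println("Search failed: " + e.getMessage());
    tweets = new java.util.ArrayList<Status>();
  }
}

void draw() {
  background(0);
  float y = 20;
  for (Status status : tweets) {
    fill(colorFor(status.getText()));
    text(status.getText(), 20, y, width - 40, 60);  // wrap long tweets in a box
    y += 70;
  }
}

// Color a tweet green, red, or white by simple word matching.
color colorFor(String tweet) {
  String lower = tweet.toLowerCase();
  for (String w : positive) if (lower.contains(w)) return color(0, 255, 0);
  for (String w : negative) if (lower.contains(w)) return color(255, 0, 0);
  return color(255);
}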


While only an early concept, this project gave me a better understanding of the power of Twitter’s API along with the incredible Twitter4J library. Moving forward, I wanted to explore increased user interaction and reach beyond the confines of a single queried term. To accomplish this, I explored GUI libraries for Processing and eventually settled on the very capable and well-documented controlP5.

GitHub Repository: https://github.com/XBudd/Processing-Experiments/tree/master/twitter_Cornell

Twitter Project 2: “Twitter God”

(Note: The video and images contain explicit language and imagery unsuitable for work or for viewers under 18 years of age. Viewer discretion is advised.)

Using the controlP5 and Twitter4J libraries, I created a unique experience for interacting with Twitter in real time. As in the first project, the messages, images, posting dates, locations, and user information are all displayed disconnected from one another. Words are again color-coded for negative and positive language.
[Screenshot]
Participants can now query any given term and combine queries, with some interesting results. A “Chaos Mode” (shown in the video under an earlier label) collects tweets in an endless and seemingly pointless fashion, covering the screen and begging for attention.
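A minimal sketch of the controlP5 side of such an interface might look like the following; the control names and layout are illustrative rather than the project’s exact UI:

import controlP5.*;

// Minimal sketch of the controlP5 UI side: a query textfield and a
// "chaos mode" toggle (names are illustrative, not the project's controls).
ControlP5 cp5;
boolean chaosMode = false;   // auto-bound to the toggle named "chaosMode"
String currentQuery = "";

void setup() {
  size(800, 600);
  cp5 = new ControlP5(this);
  cp5.addTextfield("query")
     .setPosition(20, 20)
     .setSize(200, 30)
     .setAutoClear(false);
  cp5.addToggle("chaosMode")
     .setPosition(20, 70)
     .setSize(50, 20);
}

// Called by controlP5 when the "query" textfield is submitted.
public void query(String term) {
  currentQuery = term;
  // Here the Twitter4J search from the first project would be re-run.
}

void draw() {
  background(0);
  fill(255);
  text("query: " + currentQuery + "   chaos: " + chaosMode, 20, 120);
}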

The result is wonderfully unsettling, especially considering the less-than-innocent results that appear with even the most cursory of queries (such as the word “goo,” shown above).

GitHub Repository: https://github.com/XBudd/Processing-Experiments/tree/master/twitter_God_2 

Conclusions:

After working toward a complete understanding of Twitter’s capabilities within Processing, I found myself greatly enjoying the possibilities but not completely enthused by the resulting projects as anything more than experiments. Fortunately, these works and the accompanying research would form the foundation for a project I am truly excited about: my final Thesis I project. More on that soon.



Quick Update: Live Image Manipulation with Processing

[Screenshot]

While building a foundation in image manipulation, I began to experiment with more advanced “glitches” using live feeds from the Kinect and webcam. The results are pretty unique, and I wanted to share two quick shots in the interim while I prepare a full blog post.

[Screenshot]
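For a sense of what these live “glitches” involve, here is a tiny illustrative sketch (not the databend_kinect_3D code itself) that offsets the red channel of a webcam feed row by row:

import processing.video.*;

// Tiny live-feed "glitch" example: shift the red channel by a random amount
// per row while keeping green and blue in place.
Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  cam.loadPixels();
  if (cam.pixels.length < width * height) return;  // wait for the first frame
  loadPixels();
  for (int y = 0; y < height; y++) {
    int shift = int(random(2, 20));                // per-row horizontal offset
    for (int x = 0; x < width; x++) {
      int src = y * width + x;
      int shifted = y * width + (x + shift) % width;
      // Take red from the shifted pixel, keep green/blue from the original.
      pixels[src] = color(red(cam.pixels[shifted]),
                          green(cam.pixels[src]),
                          blue(cam.pixels[src]));
    }
  }
  updatePixels();
}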


GitHub Repository: https://github.com/XBudd/Processing-Experiments/tree/master/databend_kinect_3D

Project 2: Kinect Music Dance


Building upon the Processing music visualizer created for Project 1, I began exploring the Minim library and how it could be used in conjunction with the Microsoft Kinect. As detailed extensively in my prior posts (such as this one), the Kinect has not been easy hardware to use in terms of compatibility with Processing. Fortunately, through SimpleOpenNI, Processing v2, and the Kinect v1, I have been able to create a dance project that responds to music!
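A rough sketch of the music-reactive side is below: Minim’s FFT drives a simple visual, with “track.mp3” as a placeholder file and a plain ellipse standing in for the Kinect-tracked figure:

import ddf.minim.*;
import ddf.minim.analysis.*;

// Rough sketch of a music-reactive element: low-frequency energy from an FFT
// of the playing track scales a shape with the beat.
Minim minim;
AudioPlayer player;
FFT fft;

void setup() {
  size(640, 480);
  minim = new Minim(this);
  player = minim.loadFile("track.mp3", 1024);
  player.loop();
  fft = new FFT(player.bufferSize(), player.sampleRate());
}

void draw() {
  background(0);
  fft.forward(player.mix);
  // Average energy between 20 Hz and 200 Hz drives the size of the figure.
  float bass = fft.calcAvg(20, 200);
  noStroke();
  fill(255);
  ellipse(width / 2, height / 2, 100 + bass * 20, 100 + bass * 20);
}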

Next step: making an interactive platform with Minim and the Kinect.

GitHub Repository: https://github.com/XBudd/ART-3092-Projects-in-Processing/tree/master/Project_2



Quick Update: Kinect Visualization in 3D Space

[Screenshot]

Just a quick update on the visualization and manipulation possibilities of Processing with camera feeds.

Direct depth-sensor information from the Kinect allows one to achieve results like the one seen above. Through SimpleOpenNI, a skeleton is registered for each user’s body, providing a wealth of information for interaction by and with participants.

By sampling the Kinect’s RGB camera for the color at each depth point, one can replace the monotone points with colored ones, an effect that shows the great potential and visualization capabilities of the Kinect’s system when used to its fullest. Such an effect is seen here:

[Screenshot]
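A colored point cloud along these lines can be sketched with SimpleOpenNI roughly as follows (illustrative, not the exact code behind the screenshots):

import SimpleOpenNI.*;

// Sketch of a colored point cloud: real-world depth points drawn in 3D, each
// colored from the aligned RGB image. Drawing every Nth point keeps it fast.
SimpleOpenNI context;

void setup() {
  size(1024, 768, P3D);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableRGB();
  // Align the RGB image with the depth map so the pixel indices line up.
  context.alternativeViewPointDepthToImage();
}

void draw() {
  background(0);
  context.update();
  translate(width / 2, height / 2, 0);
  rotateX(radians(180));            // flip so the scene appears upright

  PVector[] points = context.depthMapRealWorld();
  PImage rgb = context.rgbImage();
  rgb.loadPixels();
  int step = 4;                     // sample every 4th depth point
  strokeWeight(2);
  for (int i = 0; i < points.length; i += step) {
    PVector p = points[i];
    if (p.z > 0) {
      stroke(rgb.pixels[i]);        // color each point from the camera image
      point(p.x, p.y, p.z);
    }
  }
}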

Bonus video of me nerding out to this cool technology:



Lovejoy: Video as Time, Space, Motion

Reading through Margot Lovejoy’s Digital Currents: Art in the Electronic Age, I came across a piece of writing that struck a chord with my current explorations in video and performance art: “Video as time, space, motion.”

In this writing, Lovejoy examines the relationship and history of video artists and the technologies and tools that aid and inform their practices. Artists such as Nam June Paik and Wolf Vostell are front and center, as one would imagine; however, it is the understanding of their practices through the evolution of technology that is of greatest interest to me.

In 1965, the Portapak video camera was released, and with it came a new era of accessible videography. The device made large waves in the art world thanks to its affordability and portability. Moving images became a form of interactive art, imperative to the formation of new video art and, eventually, telepresence works.

From Rosenbach to Nauman, a very wide spectrum of artists quickly appropriated video as an expressive new art medium. For feminist artists, this tool was particularly invaluable due to what Lovejoy calls the “newness of video.” She argues that this trait gave video, as a medium, a stance free of established history, which allowed it to be appropriated for influential, ungendered works.

Video has progressed in so many ways, becoming more accessible with each year and each innovation. Today, we hardly think of video as “new” or free of history. For exactly this reason, it is important to understand and appreciate how rich the relatively short history of videography and video as an art form is. Lovejoy’s work is a great start in the right direction, with ample support, works, and queries to push the importance of video as a medium to new heights.


Processing: Dazzle Mirror

I wanted to share a super quick project I put together as a concept last night!

Based on Daniel Shiffman’s Mirror 2 example in Processing, this project expands the basic code to include a visualization that plays against the Dazzle camouflage pattern.

The sketch takes input from the webcam and creates a grid of pixel blocks. The average brightness of each block controls how large the square appears in the grid: bright = full block, dark = tiny block.

Overlaying this visualization on the Dazzle pattern with the Exclusion blend mode produces a very trippy live render of your movements which, when still (such as in a screenshot), becomes fully disguised within the camouflage pattern.
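A minimal sketch of the idea is below; a procedurally drawn stripe pattern stands in for the real Dazzle image, and one sampled pixel per cell drives the block size drawn in Exclusion mode (the full sketch is in the repository below):

import processing.video.*;

// Minimal sketch of the mirror-over-dazzle idea: diagonal stripes stand in
// for the camouflage image, and webcam brightness sets each block's size,
// drawn with the EXCLUSION blend mode so the blocks invert the stripes.
Capture cam;
int cellSize = 16;

void setup() {
  size(640, 480, P2D);
  cam = new Capture(this, width, height);
  cam.start();
  noStroke();
}

void draw() {
  // Stand-in dazzle pattern: diagonal black-and-white stripes.
  background(255);
  fill(0);
  for (int i = -height; i < width; i += 40) {
    quad(i, 0, i + 20, 0, i + 20 + height, height, i + height, height);
  }

  if (cam.available()) cam.read();
  cam.loadPixels();
  if (cam.pixels.length < width * height) return;  // wait for the first frame

  blendMode(EXCLUSION);
  fill(255);
  for (int x = 0; x < width; x += cellSize) {
    for (int y = 0; y < height; y += cellSize) {
      // Sample one pixel per cell, mirrored horizontally as in the Mirror example.
      int loc = (width - x - 1) + y * width;
      float b = brightness(cam.pixels[loc]);
      float s = map(b, 0, 255, 2, cellSize);      // bright = big block, dark = tiny
      rect(x + cellSize / 2 - s / 2, y + cellSize / 2 - s / 2, s, s);
    }
  }
  blendMode(BLEND);
}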

GitHub Repository: https://github.com/XBudd/Processing-Experiments/tree/master/Kinect_Mirror_2/Mirror2