
Project 4: Advanced Drum Kit & Algorithm

To address the interactivity issues that dominated my recent work with the Kinect in Processing, I began writing new library classes and algorithms. I started by adapting the skeleton library from Kinect Projector Toolkit, which is based on SimpleOpenNI, with a number of tweaks to allow for more accurate joint and silhouette tracking. With a more accurate joint-tracking system in place, I began adapting an algorithm that allows button objects to be created, displayed, and tracked with a single call. One too many sleepless nights later, I had accomplished just that for simple PShapes. Taking the logic a step further, I created an adaptable algorithm that translates elements from an SVG file into interactive components on screen.
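At its core, the button idea reduces to a bounds check of a tracked joint against each shape. The sketch below is a hypothetical reduction of that logic in plain Java (the `hits` helper and its rectangle-only button are my illustration, not the actual library code):

```java
// Hypothetical sketch of the button-tracking idea: a button is "pressed"
// while a tracked joint (e.g., a hand) lies inside its bounds.
public class ButtonHitTest {
    // Axis-aligned rectangular button defined by origin (bx, by) and size (bw, bh).
    static boolean hits(float jx, float jy, float bx, float by, float bw, float bh) {
        return jx >= bx && jx <= bx + bw && jy >= by && jy <= by + bh;
    }

    public static void main(String[] args) {
        // Joint at (120, 80) tested against a 100x100 button anchored at (100, 50).
        System.out.println(hits(120, 80, 100, 50, 100, 100)); // prints "true"
    }
}
```

The SVG version generalizes this: each element parsed from the file carries its own bounds, so the same per-frame check can run over an arbitrary set of on-screen components.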

This algorithm can be seen in action above and, along with the simple shape version, picked apart on GitHub:

GitHub Repository:

Project 3: User3d_Dance_DJ

In this project, I continue my experimentation with Kinect, interactivity, and music within Processing.

After weeks of development and struggling to work around the finicky interactivity of Kinect within Processing (including activating buttons and tracking skeletons), I was able to create an interactive DJ set which allows users to control pitch, gain, granulization, and playback position of an audio file as well as add in samples.
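As a rough, hypothetical illustration of how a tracked joint can drive a parameter like gain or pitch, a linear remap in the spirit of Processing's `map()` function is the essential move (the frame height and gain range here are invented values):

```java
// Hypothetical sketch: remapping a tracked hand height to an audio gain,
// in the spirit of Processing's map() function.
public class JointToParam {
    // Linearly remap value v from [inMin, inMax] to [outMin, outMax].
    static float map(float v, float inMin, float inMax, float outMin, float outMax) {
        return outMin + (v - inMin) * (outMax - outMin) / (inMax - inMin);
    }

    public static void main(String[] args) {
        // Hand at y = 240 in a 480px-tall frame -> gain 0.5 (top = 1, bottom = 0).
        System.out.println(map(240, 480, 0, 0f, 1f)); // prints "0.5"
    }
}
```

Each controllable parameter (pitch, gain, playback position) gets its own input range and output range, so the same helper serves the whole set.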

Although the final result was one that worked well and was very enjoyable to use, the prevailing interactive issues greatly limited the experience. Moving forward, I intend to address such issues through continued research and, if necessary, coding and recoding my own libraries for Processing.

GitHub Repository:

Twitter in Processing: Part 1

Twitter Project 1: “#Cornell”

For Art Thesis I, I began to intermix my understandings of Processing in order to create a series of projects incorporating a live Twitter feed.

My first project of this nature, demonstrated in the video above, displayed all Twitter posts containing “#Cornell” on screen, with the language coded for negative and positive words. The results were stunning both visually and conceptually, allowing for a live “pulse” on Cornell at any given moment.
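The word-coding step can be sketched in plain Java as a simple score against positive and negative word lists; the lists below are illustrative stand-ins, not the ones used in the project:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the word-coding step: score each tweet by
// counting hits against small positive/negative word lists.
public class TweetTone {
    static final Set<String> POSITIVE = new HashSet<>(Arrays.asList("love", "great", "happy"));
    static final Set<String> NEGATIVE = new HashSet<>(Arrays.asList("hate", "awful", "sad"));

    // Positive score > 0, negative < 0, neutral = 0.
    static int score(String tweet) {
        int s = 0;
        for (String w : tweet.toLowerCase().split("\\W+")) {
            if (POSITIVE.contains(w)) s++;
            if (NEGATIVE.contains(w)) s--;
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(score("I love #Cornell, great day")); // prints "2"
    }
}
```

In the sketch itself, the sign of the score would pick the display color for the incoming tweet.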


While only an early concept, this project gave me a better understanding of the power of Twitter’s API along with the incredible Twitter4J library. Moving forward, I wanted to explore increased user interaction and reach beyond the confines of a single queried term. To accomplish this, I explored GUI libraries for Processing and eventually settled on the very capable and well-documented controlP5.

GitHub Repository:

Twitter Project 2: “Twitter God”

(Note: Video and image contain explicit language and imagery unsuitable for work and for those under 18 years of age. Viewer discretion is advised.)

Using the controlP5 and Twitter4J libraries, I created a unique experience for interacting with Twitter in real time. As in the first project, the messages, images, posting dates, locations, and user information are all disconnected. Words are again coded for negative and positive language.
Participants now have the ability to query for any given term and to combine queries for some interesting results. A “Chaos Mode” (shown in the video under a prior label) allows for the chaotic collection of Tweets in an endless and seemingly pointless fashion, covering the screen and begging for attention.

The result is wonderfully unsettling, especially in consideration of the less-than-innocent results that appear with even the most cursory of queries (such as the word “goo,” shown above).

GitHub Repository: 


After working towards a complete understanding of the capabilities of Twitter within Processing, I found myself greatly enjoying the possibilities but not completely enthused by the resulting projects, which remained little more than experiments. Fortunately, these works and the accompanying research would form the foundation for a project I am truly excited about: my final Thesis I project. More on that soon.




Project 2: Kinect Music Dance


Building upon the Processing Music Visualizer created for Project 1, I began exploring the Minim library and how it could be used in conjunction with the Microsoft Kinect. As detailed extensively in my prior posts (such as this one), the Kinect has not been an easy piece of hardware to use in terms of compatibility with Processing. Fortunately, through SimpleOpenNI, Processing v2, and the Kinect v1, I have been able to create a dance project that responds to music!

Next step: making an interactive platform with Minim and the Kinect.

GitHub Repository:



Processing: Dazzle Mirror

Wanted to share a super quick project I put together as a concept last night!

Based on Daniel Shiffman’s Mirror 2 Example within Processing, this project expands the basic code to include a visualization that interacts with the Dazzle Camouflage pattern.

The sketch takes input from the webcam and creates a grid system for blocks of pixels. The average brightness of each block controls how large the square appears in the grid: bright = full block, dark = tiny block.
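That brightness-to-size mapping can be written out directly; the helper below is a hypothetical reduction of the sketch's drawing loop (the 16-pixel cell size is an invented example):

```java
// Hypothetical sketch of the mirror's core mapping: the average
// brightness of a pixel block (0-255) scales the square drawn in that
// grid cell, from nothing up to the full cell size.
public class DazzleBlock {
    // Returns the side length of the square drawn in a cell of size cellSize.
    static float blockSize(float avgBrightness, float cellSize) {
        return (avgBrightness / 255f) * cellSize;
    }

    public static void main(String[] args) {
        System.out.println(blockSize(255f, 16f)); // full block: prints "16.0"
        System.out.println(blockSize(0f, 16f));   // dark block: prints "0.0"
    }
}
```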

Overlaying this visualization on the Dazzle pattern and using the Exclusion blend mode, the result is a very trippy live render of your actions which, when still (as in a screenshot), becomes fully disguised within the camouflage pattern.
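The Exclusion blend mode combines two 8-bit channel values a and b as a + b − 2ab/255; matching bright values cancel toward black, which is why a still frame sinks into the camouflage. A minimal per-channel sketch:

```java
// Standard Exclusion blend, per channel, for 8-bit values (0-255):
// result = a + b - 2ab/255. Identical bright inputs cancel toward black.
public class ExclusionBlend {
    static int exclusion(int a, int b) {
        return a + b - (2 * a * b) / 255;
    }

    public static void main(String[] args) {
        System.out.println(exclusion(255, 255)); // prints "0"   (white on white cancels)
        System.out.println(exclusion(0, 200));   // prints "200" (black leaves the base intact)
    }
}
```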

GitHub Repository:

2042: Delta Echo Alpha Dazzle

In this mixed media installation, I seek to explore the effect that the relationship between perspective and technology has on our ability to see – and not see – various realities.

Wordy artist statement aside, this was an immensely rewarding project that allowed me to explore further the unique medium of animation/projection-mapping/found object installation. This is a medium and language I first began to develop while studying abroad in Rome with my 2040 £RA Installation (video) and have been hoping to continue ever since.

In this project, created for my first Thesis I critique, I combined domestic found objects which bring to mind a sensibility of “home,” “childhood,” and “security” with a few malicious items such as a rifle, knife, handcuffs, and pistol. These items are painted in a matte white and arranged in a fixed still-life composition. Through digital projection mapping techniques, the monochromatic items become the set for a projected reality which masks and reveals realities at whim.

Dazzle camouflage is employed as a traditional tactic to disguise menacing objects and blend them with their environment, while bright, playful, untextured colors partially reveal what has been before the viewer the entire time. Scan lines and waves reinforce the concept of technology as both a tool and a medium within this work.

The limits of the technology are what interest me most at this intersection. As the viewer moves around the piece – even into the line of the projector’s beam – the camouflage effect loses and gains credibility, ultimately reaching a threshold at which the projection can no longer wrap to the object and degrades to pixels and then to nothingness.

I greatly enjoyed the process and presentation of this work. Above, you can find a video documentation of the installation as presented during the critique. Additionally, a gallery of imagery showcasing the piece and workflow can be found below:



Project 1: Music Visualizer


Processing is an incredible platform for creating truly dynamic works with very little code. While learning the language, adjusting to its syntax, and playing with libraries, I created a music visualizer that uses the Minim library.

Here’s a quick look at it in action — turn the volume up to 11 for the full effect!


GitHub Repository:


HappyHappyRevolution Demonstration from Colin Budd on Vimeo.

Hello. Welcome to the party. We are only just getting started — what’s that? Oh, no, no, you didn’t miss a thing. Welcome to HappyHappyRevolution. The game where winning makes you happy and happiness makes you win!

In this project, I seek to explore the intricate and curious nature of our human condition as it relates to emotions. So often we are told to “snap out of it” and “fake it till we make it”…but what good are these remarks when depression is so easy to sink into? Producing a game unlike any other, yet exactly like many others (Guitar Hero, Dance Dance Revolution, etc.), HappyHappyRevolution turns the experience of emotions and the facial expressions we convey into a game.

While holding a big smile, the player progresses through the levels from Neutral to Happy to Euphoric and beyond, gaining points and becoming happier and happier until finally winning the game. Conversely, alternative facial expressions are categorized as “Sadness,” dragging the player down from Neutral to Sad to Depressed and ultimately losing the game after shedding many points. Along the journey in either direction, the player unlocks footage from my personal memory archives. What they face within these archives is not just a reflection of myself, but a literal reflection of themselves, through a technique which visually interlaces footage of the player with the archival footage. This results in a completely new memory in which the player becomes an active component through their interactions and expressions. The way in which viewers interpret and react to the footage can break their streak or further tie together the viewer, their emotions, and my own in an intimate fashion.
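The level ladder described above can be sketched as a tiny state machine; the level names mirror the post, but the one-step-per-reading rule is my invented simplification of the actual scoring:

```java
// Hypothetical sketch of the level progression: each smiling reading
// pushes the player up the ladder, any other expression pulls them down.
public class EmotionLadder {
    static final String[] LEVELS = {"Depressed", "Sad", "Neutral", "Happy", "Euphoric"};
    int level = 2; // everyone starts at Neutral

    void step(boolean smiling) {
        if (smiling && level < LEVELS.length - 1) level++;
        if (!smiling && level > 0) level--;
    }

    String current() { return LEVELS[level]; }

    public static void main(String[] args) {
        EmotionLadder game = new EmotionLadder();
        game.step(true);
        game.step(true);
        System.out.println(game.current()); // prints "Euphoric"
    }
}
```

Reaching the top of the array wins the game; sliding to the bottom loses it.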

Ultimately tongue-in-cheek but technically impressive and intricate, HappyHappyRevolution presents the human condition of emotion in an entirely new form that mimics our natural tendency to find sadness an easy companion but happiness a lustworthy goal. The difficulty of holding a smile long enough to win the game, and the ease with which one can appear sad and thus lose it, plays well into the very curious nature of our emotions: happiness is often fleeting and hard to hold, while sadness comes easily and is hard to break. Ultimately, it becomes a choice: happiness or sadness.

Just as in life, the player must decide.


  • Kinect v2 Sensor
  • Windows 8.1
  • Visual Studio Community 2013
  • Adobe Premiere Pro CC 2014
  • Adobe After Effects CC 2014
  • Adobe Illustrator CC 2014
  • Coffee… a lot of coffee


View the set-up action in my post here: Setting up HappyHappyRevolution


Experimental Time Lapse of Cornell

In this project, I seek to explore the techniques of time lapse and tilt shift. Although new to me, such technology and practices are quickly becoming impressive staples of the worlds of cinematography and photography. Turning the lens on the same paths and surroundings I and so many others tread every day – to class, from class, and home – I captured the beautiful juncture of the library walk with the many paths of the Arts Quad. Life ebbs and flows through the paths like blood through veins – starting slowly until it becomes a powerful surge by midday. In this way, not only were new techniques explored, but new means of decrypting the livelihood of those who surround me on campus.



In order to realize this project, I set up a Canon 5D Mk III atop Cornell’s famed McGraw Tower. Starting at 7:45 AM and finishing at 1:10 PM, the camera captured approximately 6,000 photos – each triggered automatically by a Polaroid remote at a 3-second interval.
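The arithmetic checks out: an uninterrupted 3-second interval over the 5 hour 25 minute window would yield about 6,500 frames, in the same ballpark as the roughly 6,000 captured, and at 30 fps playback those 6,000 frames run a little over three minutes:

```java
// Quick arithmetic behind the shoot: frames captured over the session
// and the resulting playback length at 30 fps.
public class TimelapseMath {
    static int framesCaptured(int durationSeconds, int intervalSeconds) {
        return durationSeconds / intervalSeconds;
    }

    static double playbackSeconds(int frames, int fps) {
        return frames / (double) fps;
    }

    public static void main(String[] args) {
        int duration = 5 * 3600 + 25 * 60;             // 7:45 AM to 1:10 PM = 19,500 s
        System.out.println(framesCaptured(duration, 3)); // 3-second interval: prints "6500"
        System.out.println(playbackSeconds(6000, 30));   // prints "200.0" (~3 min 20 s)
    }
}
```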

Upon reaching the top of the tower, I was dismayed to find that there was no power outlet available – a worrying fact considering I did not have a dual battery extender with me. By some miracle, the brilliant LP-E6 battery in my camera lasted for the entire duration of the shoot, allowing me to perfectly capture the Arts Quad in all of its glory.

Here is a rundown of the camera settings:

Focus – Manual
Shutter speed – 1/60 s (to allow for 30fps playback)
Aperture – f/8 (to capture a broader depth of field)

By far the largest challenge of this shoot was ensuring that the footage would not become overexposed as the day grew longer and the sun grew brighter. Although this did occur at two points during the shoot – resulting in the need to cut the captured footage short – the overall shoot was a success.

The tilt shift effect was accomplished in post using Adobe After Effects CC 2014. The procedure is simple: create a linear gradient blur mask in which the center of the image receives no blur and the blur progressively increases toward the top and bottom edges. To fine-tune the effect, I adjusted the placement and further altered the mask to exclude some elements (most noticeably the tree in the center of the frame).
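The mask can be described as a function of vertical position; the sketch below is a hypothetical model of the gradient (the band center, band width, and maximum blur are made-up values, and it assumes the sharp band sits at the frame's vertical center):

```java
// Hypothetical tilt-shift mask: blur is zero inside a horizontal
// "in focus" band and ramps linearly up to maxBlur at the top and
// bottom edges of the frame. Assumes the band is vertically centered.
public class TiltShiftMask {
    // y, bandCenter, bandHalfWidth, frameHeight in pixels; returns blur radius.
    static float blurAt(float y, float bandCenter, float bandHalfWidth,
                        float frameHeight, float maxBlur) {
        float d = Math.abs(y - bandCenter) - bandHalfWidth;
        if (d <= 0) return 0f;                          // inside the sharp band
        float edgeDist = frameHeight / 2f - bandHalfWidth; // band edge to frame edge
        return Math.min(maxBlur, maxBlur * d / edgeDist);
    }

    public static void main(String[] args) {
        // 1080p frame, sharp band 200px tall around y = 540, max blur 20px.
        System.out.println(blurAt(540f, 540f, 100f, 1080f, 20f)); // prints "0.0"
        System.out.println(blurAt(0f, 540f, 100f, 1080f, 20f));   // prints "20.0"
    }
}
```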

While not a fan of the “miniature” look for this particular shot, such a procedure was great practice for later applications of this neat effect.

Overall I greatly enjoyed the production of this video and am beyond appreciative of all of those who helped in order to make this idea a possibility.

Moving forward, I would love to create additional timelapses of various campus and Ithaca scenes – particularly in the Winter and Mid-Spring seasons.