Tuesday, November 15, 2011

Post 05: Reading & Response #3

Created by media artists Golan Levin and Zachary Lieberman in collaboration with vocalist-composers Jaap Blonk and Joan La Barbara, Messa di Voce is an exceptional piece of interactive performance that blurs the line between vision and sound. It premiered in 2003 at the Ars Electronica Festival in Linz, Austria, and has since appeared at galleries and festivals across America and Europe. The overall theme of the work is the visualization of the human voice. The piece is performed by live vocalists "speaking" in phonetic sounds, which a computer analyzes and transforms into visual phenomena through real-time video and speech analysis. For example, during the sequence "Ripple", the two performers create high-pitched chirping sounds reminiscent of wetland fauna. These "chirps" are projected as ripples, each sound having a unique shape and motion.
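To get a concrete sense of how a sound-to-image mapping like this could work, here is a small illustrative sketch in Python. It is only my own rough guess at the logic, not the piece's actual code: each "chirp" is reduced to a pitch, a loudness, and a duration, and those values decide how large the resulting ripple grows and how quickly it expands.

```python
import math

def chirp_to_ripple(pitch_hz, loudness, duration_s):
    """Map one vocal 'chirp' to ripple parameters.

    Hypothetical mapping for illustration only -- not the artists' algorithm.
    Louder sounds make bigger ripples; higher pitches expand faster.
    """
    return {
        "max_radius": 50 + 300 * loudness,   # louder = larger final ripple
        "speed": pitch_hz / 10.0,            # higher pitch = faster expansion
        "lifetime": duration_s * 4,          # short chirps fade out quickly
    }

def ripple_radius(ripple, t):
    """Radius of the ripple t seconds after the chirp (None once it has faded)."""
    if t > ripple["lifetime"]:
        return None
    progress = 1 - math.exp(-ripple["speed"] * t / ripple["max_radius"])
    return ripple["max_radius"] * progress

# Two different sounds trace out two visibly different ripples over time.
for name, (pitch, loud, dur) in {"high chirp": (1800, 0.2, 0.1),
                                 "low call": (300, 0.9, 0.4)}.items():
    ripple = chirp_to_ripple(pitch, loud, dur)
    radii = [round(ripple_radius(ripple, t), 1)
             for t in (0.05, 0.2, 0.4) if ripple_radius(ripple, t) is not None]
    print(name, radii)
```

All the constants here are invented placeholders; the point is just that every sound gets its own shape and motion.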

The underlying concept of Messa di Voce comes from the theory of phonesthesia – the idea that sounds have an implicit visual shape or form. Furthermore, studies have suggested that the "imagery" of any given sound may be rooted in our collective subconscious. For example, a 1927 study by psychologist Wolfgang Köhler asked subjects to match geometric figures with the nonsense sounds they thought most accurately represented them, and the answers were nearly unanimous.

By creating images out of phonetic speech, Messa di Voce conveys the non-physical nature of the voice by giving it a tangible form in space. It communicates its ideas in a form that can be understood regardless of language or reading ability. It is interactive in the sense that, while the performers address the entire audience at once, it is left to each individual viewer to interpret their message.

Thursday, November 10, 2011

Post 04: Project #3 Proposal

My idea for this project is to create an audio-visualization program that generates tones based on user-activated "turtle paths". Taking microphone input, the program creates a semi-random turtle path depending on the volume, length, and pitch of the user's voice. (For example, speaking quickly will create a turtle path that changes direction frequently, producing a rapidly fluctuating tone.) These "turtles" in turn produce a musical tone defined by their movement. The overall effect will be a unique audio environment built from the user's input. Using the "turtles" to create a constantly moving visual pattern is also a goal, as sketched below. To keep the program in constant motion, I may also add a default "metronome" effect whenever there is no input.
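To make the mapping concrete, here is a rough Python sketch of the logic I have in mind. The real project will be built in Max 5, and the voice "features" below are simulated placeholders rather than real microphone analysis: volume sets the step length, pitch sets the turn angle, speaking speed sets how often the turtle changes direction, and the turtle's heading is then read back as a tone frequency.

```python
import random
import turtle

def voice_to_step(volume, pitch, speed):
    """Map one frame of (simulated) voice input, each value 0..1, to movement.

    Hypothetical mapping: louder = longer step, higher pitch = sharper turn,
    faster speech = more frequent direction changes.
    """
    step = 5 + 40 * volume                # volume drives step length
    turn = (pitch - 0.5) * 90             # pitch drives turn angle
    change_dir = random.random() < speed  # fast speech changes direction often
    return step, turn, change_dir

def heading_to_tone(heading_deg):
    """Map the turtle's heading (0-360 degrees) onto roughly 220-880 Hz."""
    return 220 + (heading_deg / 360.0) * 660

t = turtle.Turtle()
t.speed(0)

# Simulated microphone frames: (volume, pitch, speaking speed).
# A real build would extract these from live audio input instead.
frames = [(random.random(), random.random(), random.random()) for _ in range(200)]

for volume, pitch, speed in frames:
    step, turn, change_dir = voice_to_step(volume, pitch, speed)
    if change_dir:
        t.right(turn)
    t.forward(step)
    # A real build would send this frequency to a synth; here we just report it.
    print(round(heading_to_tone(t.heading()), 1), "Hz")

turtle.done()
```

The specific numbers (step sizes, turn angles, the 220-880 Hz range) are arbitrary starting points that I expect to tweak once the basic program is running.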

For the project, I'll be using Max 5 to create the overall visual and aural result presented to the viewer. In addition, I will use GarageBand to create the "base" tones that the program will alter. The program will use the computer's built-in microphone, though I may be able to use a USB microphone for user input instead.

The first week of work will focus on writing the basic program and making sure it runs in a simplified form. The second week will focus on deciding what to measure, how it translates to output, and tweaking the result until it sounds and looks right.

The general concept I have for the project is similar to the music-generation game Electroplankton for the Nintendo DS. Here's a video of the game, just to give you an idea.