
Interface Building with Reza Ali

When we were planning our collaborations with the Bunker, hosting Reza seemed like an obvious choice. A few months prior, Reza had demoed some of his recent work to a small group of friends. There, he elegantly navigated through a large array of original graphics with his newly developed ofxUI (a user interface addon for openFrameworks). His comprehensive understanding of shaders became self-evident as he nonchalantly exhibited the numerous tricks in his bag. Luckily for us, Reza brought an even cleaner performance to the Bunker last month.

We had the honor of hosting him, as well as a number of other talented visual artists (Jono Brandel, Stephanie Sherriff, vade, outpt, and EvdM). To document what goes into each of these artists’ processes, we began the Toolmaking for a Performance Context series. The interviews conducted in this series give our readers an impression of how each of these artists tailored their performance tools to their sui generis approach. This interview is Part III of the series. If you haven’t read the earlier installments, be sure to check out Part I, in which Jono Brandel discusses his new performance software, Neuronal Synchrony, and Part II, in which Stephanie Sherriff tells us about her transition into OpenGL and some of her lo-fi approaches.



Cullen Miller: Your work really runs the gamut and demonstrates your mastery of a number of disciplines. Are there any emerging fields that you haven’t had the chance to dive into that you would like to?

Reza Ali: Not sure if the areas I am going to mention are emerging, but they are definitely evolving and becoming more interesting and approachable for making novel types of art/media pieces. I would love to play with more frontend/backend web technologies: jQuery, d3.js, Node.js. I am going to take a stab at 3D printing generative forms and designing generative products. I want to make flexible printed circuit boards so I can make stuff like the Nike Fuelband. I want to get my hands dirty with some hardcore Cinder and/or Apple’s SceneKit and see how I can make the interfaces seen in Tron a reality.

CM: Earlier this year you released ofxUI, an openFrameworks addon that allows for the easy creation of GUIs. But even more recently you ported it over to Cinder as the ciUI Block. Can you explain why you decided to move it to Cinder?

RA: After Eyeo 2012 I was very inspired to check out Cinder because of Andrew Bell‘s talk, Feeding Babies with Creative Coding, and Robert Hodgin‘s beautiful pieces (which were made in Cinder). I was really impressed by Cinder’s code quality and by the projects/visuals made with it. As a designer/artist I am always looking for the best tools and frameworks to allow me to make art and design things. As a coder, my programming skills have come a long way over the four years since I started programming. Processing and openFrameworks were very important in helping me learn how to solve problems with code and design with code. Now I am at a point where I just want to work with a professional framework, write clean code to make amazing visuals/sounds/apps, and have things work logically. Cinder isn’t the easiest framework to pick up if you aren’t already a C++ coder, but once you get the hang of it, its design is clean and straightforward. I plan on using it for all my projects from now on, which is why I ported ofxUI to ciUI.

CM: You VJed last weekend for GAFFTA’s A/V showcase. What did you do to prepare for the show? Can you give us a brief preview of your performance?

RA: The second-to-last time I VJed was in 2009, in an underground basement in Isla Vista, CA, using Max/MSP/Jitter.

The last time I VJed was in 2010, using Processing.

During the last two years my tools and style have been evolving. I have been building sketches, learning algorithms, designing visuals, and building tools to help me play/perform visuals in real time. As of now I like to synthesize all my visuals from scratch, so no prerecorded videos are used during my performances. I like to think of this as “electronic” or “synthetic” VJing. I believe at some point I will incorporate video into my performance, but only when appropriate. So essentially I’ve been preparing this show, and other shows like it, for the last two years. A week prior to the performance I started writing my own VJ application using openFrameworks, since I was more comfortable with it and most of my works were already using openFrameworks. I also built an iPad application using openFrameworks so I could send touch and gestural information to the VJ application for use when synthesizing visuals. ofxUI was essential in allowing me to tweak the visuals in real time, and to save and load settings for the visuals.
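For readers who haven’t used ofxUI, the workflow Reza describes (sliders tweaking visuals live, with settings saved and reloaded between sessions) might look something like the minimal sketch below. This is our illustration rather than code from his VJ application; the parameter names and the stand-in circle are hypothetical, and it assumes the oF-0.8-era ofxUI API from the addon’s bundled examples.

```cpp
// ofApp.h -- a minimal ofxUI setup: sliders tweak visual parameters in
// real time, and settings persist between sessions. Names are hypothetical.
#pragma once
#include "ofMain.h"
#include "ofxUI.h"

class ofApp : public ofBaseApp {
public:
    ofxUICanvas *gui;
    float radius, speed;

    void setup() {
        radius = 100;
        speed  = 2;
        gui = new ofxUICanvas(0, 0, 320, 200);         // x, y, width, height
        gui->addSlider("RADIUS", 0.0, 300.0, radius);  // name, min, max, value
        gui->addSlider("SPEED",  0.0,  10.0, speed);
        ofAddListener(gui->newGUIEvent, this, &ofApp::guiEvent);
        gui->loadSettings("GUI/guiSettings.xml");      // restore last session
    }
    void exit() {
        gui->saveSettings("GUI/guiSettings.xml");      // persist live tweaks
        delete gui;
    }
    void guiEvent(ofxUIEventArgs &e) {
        // route widget changes back into the visual parameters
        if (e.widget->getName() == "RADIUS")
            radius = ((ofxUISlider *) e.widget)->getScaledValue();
        else if (e.widget->getName() == "SPEED")
            speed = ((ofxUISlider *) e.widget)->getScaledValue();
    }
    void draw() {
        // a stand-in "visual": one parameterized, pulsing circle
        ofCircle(ofGetWidth() / 2, ofGetHeight() / 2,
                 radius + 20 * sin(speed * ofGetElapsedTimef()));
    }
};
```

Saving on exit and loading on setup is what makes the sweet-spot hunting he mentions cumulative: each session starts where the last one left off.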

CM: A lot of your work utilizes custom interfaces to manipulate and explore form. What are some of the concepts that shape your choices when parameterizing your applications?

RA: As a generative visual designer I like to “research” ideas/processes and turn them into parameterized visual sketches. When I research a generative process, like magnetic attraction and repulsion applied to particles, I like to have many sliders and buttons that allow me to tweak the algorithm’s parameters and the particles’ behaviors in real time so I can find its sweet spots: the settings that make the visual movement look most aesthetically pleasing. Once I have found the sweet spots, I think about how to play the visuals during a performance. During performances I like to use physical movements (touches, swipes, etc.) to affect the visuals. After the research phase I design the visual system, using the generative process I researched, and create rendering parameters that control the visuals’ aesthetic.

A custom interface is made for each sketch, and this interface allows me to play the visuals like a musical instrument. I keep the interfaces minimal, exposing only the things I should be touching during a show, so there aren’t dozens of sliders or buttons to manipulate. That way I can really tweak the parameters like a musician would on a drum machine. I have learned to make my visuals “poke-able,” which essentially means that touching my iPad’s touchscreen (which sends touch information over to my VJ application) should cause a reaction in the visuals. Furthermore, my VJ application has a beat detector that knows when a snare, kick, or hat has been hit, so the visuals can respond to that as well. This gives me lots of inputs and outputs. The mapping is where the magic is, and I spend most of my time playing with and tweaking the mappings.
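As a concrete (and entirely hypothetical) example of the research-phase setup he describes, the openFrameworks sketch below exposes an attraction strength and a poke radius as the kind of parameters a slider would control, and injects a repulsion impulse wherever the canvas is “poked.” In his actual rig the poke would arrive from the iPad app, and a beat-detector callback could drive the same hook; the names here are made up for illustration.

```cpp
// A parameterized, "poke-able" particle sketch: attraction pulls particles
// toward the center, and a poke repels everything within `range` of it.
#pragma once
#include "ofMain.h"

class PokeApp : public ofBaseApp {
public:
    vector<ofVec2f> pos, vel;
    float strength, range;  // the kind of parameters a GUI slider would expose

    void setup() {
        strength = 80.0;    // attraction toward screen center
        range    = 120.0;   // radius of a poke's repulsion impulse
        for (int i = 0; i < 500; i++) {
            pos.push_back(ofVec2f(ofRandomWidth(), ofRandomHeight()));
            vel.push_back(ofVec2f(0, 0));
        }
    }
    void update() {
        ofVec2f center(ofGetWidth() / 2, ofGetHeight() / 2);
        for (size_t i = 0; i < pos.size(); i++) {
            ofVec2f toCenter = (center - pos[i]).getNormalized();
            vel[i] += toCenter * strength * 0.01;  // attraction, per frame
            vel[i] *= 0.97;                        // damping keeps it stable
            pos[i] += vel[i];
        }
    }
    void mousePressed(int x, int y, int button) {
        poke(ofVec2f(x, y));  // stand-in for an iPad touch message
    }
    // Repel particles near the poke point; a beat-detector callback
    // (e.g. a hypothetical onKick()) could call this too.
    void poke(const ofVec2f &p) {
        for (size_t i = 0; i < pos.size(); i++) {
            ofVec2f away = pos[i] - p;
            float d = away.length();
            if (d < range && d > 0)
                vel[i] += away / d * (range - d) * 0.5;  // stronger up close
        }
    }
    void draw() {
        ofBackground(0);
        ofSetColor(255);
        for (size_t i = 0; i < pos.size(); i++)
            ofCircle(pos[i].x, pos[i].y, 2);
    }
};
```

Mapping both touch and beat events onto the same poke() hook is one way to get the many-inputs-to-many-outputs mapping he says the magic lives in.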

CM: Your work has been shown in a number of different venues, presented on a variety of platforms and devices, and transmitted through various media. While each project is uniquely specific to its platform, medium, or context, is there a format or context that you find best frames your ideas?

RA: My ideas resolve into entities that are experiential and allow for audience participation and/or interaction of some type. The entities I end up making are a combination of music, video, 3D, animation, and architecture. Thus no one context is ideal for expressing an idea, but I think the internet and mobile/tablet devices are a great platform for allowing the ideas to reach their full, or close to full, potential. The format I typically work in is .app (or .exe) or .com. This is mainly because I can write an application for free: I have all the parts I need to create an application or website, and if I don’t, I can usually get them in seconds. I don’t need to get physical materials or have a warehouse to keep my works in. I would love to get back into making physical things, but the turnaround time for those isn’t close to milliseconds like it is in Xcode or Processing or even HTML. In the future I hope to get bigger budgets and think about what I could create given the resources to make it a reality. I still dream, and I have a couple of ideas for when I do get those budgets. I would like to use space as the medium/context for the pieces and create site-specific installations that require the audience’s presence to experience and contribute to the pieces.



Reza is a computational designer/creative technologist/multi-faceted hybrid engineer interested in everything from design to biology to entrepreneurship. His interests include human-computer interaction (interaction design), architecture/product design, software, mobile technology/hacking, generative visuals, algorithmic art, data visualization, audio-visual interactive immersive environments, new media tools for DJs/VJs/performers, Trans-Architecture, photography, graphic design, user interfaces, electronics, and 3D animation, modeling, rendering, and scripting. Some of his goals are to create content and interactive controllers for multimedia performance systems, to create new and fun models of interaction, to create form/visuals/sounds through algorithmic processes, to create real-time computer graphics for virtual worlds, and to explore the realms of science and mathematics to make complex phenomena understandable and intuitive. He hopes to change the world by making a difference in how people use technology, design products, and experience new media art and entertainment.

For the latter half of 2010 and the first two-thirds of 2011, he lived in LA and worked for various companies, including Motion Theory, under Mathew Cullen, Kaan Atilla, and Chris Riehl, and at Nokia Research Center, under Rebecca Allen (founding chair of Design Media Arts at UCLA). All the while he freelanced for POSSIBLE (creating an audio-visual VJ app for Deadmau5) and the Santa Barbara Museum of Art (creating an iPad app, iCubist, that augmented the museum’s Analytic Cubist exhibit featuring works by Picasso and Braque). Reza has given presentations and talks at Nokia Design, Google Data Arts Center, NIME 2010, and NIME 2011. His work has been featured in two books, Visual Complexity by Manuel Lima and Generative Art by Matt Pearson, and numerous times online at www.creativeapplications.net. In his spare time he worked on a gallery installation and various other personal new media art projects.

In 2010 Reza earned a Master of Science in Multimedia Engineering (with a focus in Visual and Spatial Arts) from Media Arts and Technology at the University of California, Santa Barbara. His advisors were George Legrady (data visualization artist), Casey Reas (co-creator of Processing, MIT Media Lab), and Matthew Turk (MIT Media Lab). Before his move to Santa Barbara, California in 2008, he graduated from Rensselaer Polytechnic Institute with a dual B.S. in Mechanical and Electrical Engineering and minors in Electronic Art and Product Design, studying under Curtis Bahn, Shawn Lawson, and Kenneth Conner.