I have to write up so much from recent events, but as usual at these times, my head is buzzing with ideas for other stuff to do next (all based on a long weekend of messing with Arduinos, robotics, Puredata and Max/MSP), so I thought it best to document that first. So, I would like to make:

- An enactment of a score I wrote a long time ago, involving dancers/actors performing with a box that follows different parameters based on what stage the performance is at. It would be a black box, interacting with the movement and words only through sound. It would be capable of "jamming" or following music in some way or other. (Link suggested by Mat.)

- A dance-based implementation of the Max/MSP script that my friend Mat quickly put together last weekend, which allows webcams to interpret visual data as audio samples (more on that in the next post). I want to invite a duet of dancers to perform with this webcam audio, in December, but mostly scriptless, just a result of trying things out...
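To give a rough idea of what "webcams interpreting visual data as audio samples" could mean in code: this is a minimal, hypothetical Python/NumPy sketch of one possible mapping (pixel brightness read row by row as sample amplitude), not Mat's actual Max/MSP patch.

```python
import numpy as np

def frame_to_audio(frame):
    """Turn a grayscale video frame into an audio buffer by
    treating each pixel's brightness as a sample amplitude.
    `frame` is a 2D uint8 array (rows x columns)."""
    # Flatten the frame row by row into one long signal
    signal = frame.astype(np.float32).ravel()
    # Rescale 0..255 brightness to the -1..1 audio range
    return (signal / 127.5) - 1.0

# A synthetic horizontal gradient stands in for a webcam frame here
frame = np.tile(np.linspace(0, 255, 320, dtype=np.uint8), (240, 1))
audio = frame_to_audio(frame)
print(audio.shape, float(audio.min()), float(audio.max()))
```

In a live setting the frame would come from a webcam capture loop instead, and the buffer would be streamed to the sound card, so each gesture in front of the camera reshapes the waveform the dancers hear.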
My name is Ale Fernandez and I'm Chilean and Italian.
I am a web developer, artist and technical researcher.
I've lived in Scotland, Italy, Spain and England, and career-wise I am interested in distributed systems and their applications to improvised performance and ecology.