My name is Ale Fernandez. I live in Barcelona, Spain and I'm Chilean and Italian.
I am a web developer, artist and technical researcher.
I've lived in Scotland, Italy, Spain and England. Professionally, I'm interested in distributed systems and their applications to improvised performance and ecology.


9/18/2006

Distributed Computing VS Distributed Performance

Networks are, by their nature, failure-prone. Whether it's a person in your extended network turning up at a place as requested, or a computer not crashing when you need it, you can't really control the remote points. But you can "influence" them: devote more time to any node and give it incentives such as quick replacement arrangements, money, systems administrator time, and so on.
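That idea of "influencing" a node you can't control maps neatly onto the retry-and-fall-back pattern in distributed code. As a minimal sketch (the node names, failure rate and `call_node` stand-in are all hypothetical, not from any real system), extra attempts are the programmatic equivalent of devoting more time to a node:

```python
import random
import time

def call_node(node):
    """Stand-in for contacting a remote node; it fails at random
    to imitate a real, unreliable network call."""
    if random.random() < 0.3:
        raise ConnectionError(f"{node} is unreachable")
    return f"{node} ok"

def call_with_retries(node, attempts=3, delay=0.05):
    """Retry a failure-prone node a few times before giving up --
    the 'influence' here is simply extra time devoted to that node."""
    for _ in range(attempts):
        try:
            return call_node(node)
        except ConnectionError:
            time.sleep(delay)
    return None  # this node stays down; the rest of the network carries on

results = {n: call_with_retries(n) for n in ["bristol", "london", "tokyo"]}
print(results)
```

The point of returning `None` rather than raising is the same as the workshop's design: one node dropping out shouldn't take the whole event down with it.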

In working on the last practical event of the Locating Grid Technologies workshop series, we had to take this into account. I envisioned the connections as a central set of maybe 2-3 nodes that we would really check up on and ensure were functioning on the day, plus as many other nodes as needed, but with no checks or involvement from us.

In a much larger network we'd have had the chance to test many more of the fallacies/problems/opportunities of distributed computing, but I'm impressed by how much a social/performance event resembles a generic procedure in distributed computing.

A good starting point is some of the material accompanying a recent javaposse.com interview with Van Simmons, a developer of Jini (a data-grid-enabling technology): http://weblogs.java.net/jag/Fallacies.html. Our problems mirrored these, and our expectations did in fact mirror the fallacies on that page.

A quick summary:

  1. Latency: we saw this in sound, and video probably also arrives at another node a few milliseconds late. So synchronisation is a known problem.
  2. Uniformity: some AG nodes run inSors, some run AG in various, sometimes incompatible, flavours.
  3. Partial failure: we saw this with the Japanese and U.S. nodes, which didn't make it, and the London node, which very nearly had to pull out at the last minute. In organising events for the Orchestra Cube, or at other times, you have to know what every performer or group is doing and be in touch regularly enough to know they are not going through something that might stop them performing on a given date. With distributed, far-away performances, you have even less control. On the other hand, the "partial" bit is positive: if a venue suddenly becomes unavailable for a traditional performance, you can't have much "partial" success other than taking it out on the road!
  4. Concurrency: or "the illusion that two or more tasks are being performed in parallel" (definition from developer.novell.com/research/devnotes/1999/october/04/02.htm). This was what enabled our little experiment, but its failure modes are race conditions and deadlock (when more than one person uses a single shared resource, like a camera, resulting in unpredictable behaviour). I didn't hear from anyone whether what we went through proved this was happening, except in the actual performance, when many of Mmmmm's actions in the London node didn't come across at all, because they had no technician and therefore little control of the cameras.
    I think concurrency problems did happen, though. As a partial observer of the whole thing (I only took part in placing some screens and made a couple of comments to the orchestra every so often), I often had many screens vying for attention at every moment, with sounds and visuals everywhere around the room, so I had a classic case of information overload. I think we are wired up to handle this to some extent, though, and it would be good to experiment more with concurrency in distributed performance.
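The shared-camera race condition described above has a standard fix in distributed computing: mutual exclusion. Here is a minimal sketch in Python (the operator names and the idea of recording camera moves in a list are my own illustration, not part of the AG software), where a lock ensures only one "operator" moves the camera at a time:

```python
import threading

camera_moves = []              # shared resource: one camera, many operators
camera_lock = threading.Lock()

def move_camera(operator, target):
    """Each operator must take the lock before moving the camera;
    without it, interleaved moves produce unpredictable shots --
    the race condition the post describes."""
    with camera_lock:
        camera_moves.append((operator, target))

threads = [
    threading.Thread(target=move_camera, args=(name, shot))
    for shot, name in enumerate(["bristol", "london", "manchester"])
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(camera_moves)   # every move recorded exactly once, none lost
```

In the performance the "lock" was a human technician deciding whose request the camera served next; the London node had no technician, so its requests were simply dropped.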


Another note about concurrency: two people can communicate any one piece of information (which could be, for example, a new bit of narrative or plot sequence), and they could do it concurrently to aid people's understanding. An interesting example I read about recently: in small self-sufficient communities (where there is usually a flat hierarchy and decisions are usually made by consensus), little bits of information (who has the keys to what, where X is, etc.) are known by almost everyone. These redundant bits of information mean the network itself is less prone to errors, and if a person (or node) goes down (on holiday, dies, forgets, etc.), someone else knows it anyway. A point I've left out of this list is also in the paper mentioned below:
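In distributed-computing terms, the village example is replication: every node holds a copy of the small shared facts, so no single failure loses them. A toy sketch (the villagers, the facts and the dictionary-per-node model are all illustrative assumptions):

```python
# Each "villager" is a node holding a full copy of the small shared facts.
facts = {"keys_to_hall": "with Maria", "meeting_day": "Thursday"}
nodes = {name: dict(facts) for name in ["ana", "bruno", "carla"]}

# One node goes down -- on holiday, forgotten, unplugged...
del nodes["bruno"]

# ...but any surviving node can still answer the question.
answer = next(iter(nodes.values()))["keys_to_hall"]
print(answer)
```

The cost of this redundancy is keeping the copies consistent when a fact changes, which is exactly the kind of coordination work consensus-based communities spend so much of their time on.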

Local versus remote memory access: I left it out because it deals with the problems that arise when programmers access a computer's memory in distributed grids. I can't find a comparison with distributed events, but I think there is an analogous problem to do with the memories and fragments of understanding of "what we are doing here" shared with everyone else.

What does a social construction happening in distributed space look like? This is another experiment that would be really useful to examine by using the AG in performance. For example, inviting people on two occasions, perhaps a year apart, or using existing data and their memories of a real-life event, and then doing a live reconstruction of what happened using the AG with Memetic. Another experiment would be to invite people to an AG-based event but leave out some important details, and see how people interpreted what they thought should happen despite the lack of information.

The note on distributed computing linked above is the paper I mean. So in what ways does distributed computing *not* mirror what we found in devising these workshops?

It's also worth noting that Jini could be a good technology for building compute grid applications, although the one used by BBC Weather and, I think, SETI could be quicker to write with in the short run and is also open source.

It's important, in doing research to build either compute grid apps or AG-enabled apps for performance, that we step back from what we're trying to do now (for example, running "distributed improvisation" events to inform the devising process) and really look into the possibilities and nature of distributed computing, and into distribution as found in the natural world, and try to use that to open up possibilities for performance that we've not even conceived of yet.

