halfbakery

Distributed rendering

Who needs no steenking render-farms?
  (+3, -1)

Distributed processing is the concept of utilising the computing cycles of network-connected computers on a processing-intensive problem while they would otherwise be idle.
SETI@home is the one most people will have heard of; there are others for things like biochemical modeling or cracking Enigma codes, which geeks will have heard of.

The ideal problem for this technique is trivially parallelisable, has very low data-transfer requirements and of course needs a lot of calculation.

One area which needs a lot of processing is the production of high-end rendered computer graphics, such as is found in animated films like "Toy Story", "Shrek", and many science-fiction films nowadays.
Companies which produce these have enormous "render farms" which churn for days to produce the final images. There is considerable impetus to move to ever more complicated or processing-intensive algorithms, and so presumably what is possible is currently limited by the amount of processing power available.

The potential advantage of using distributed computing is clear, but the sticking points are:
1) Data transfer rates
2) System resources used on the client machines
3) Privacy

(1 and 2) The amount of data transfer can be reduced if the system is clever. The work unit would be a small 'tile' on an individual frame. 16x16 might be a good size, as this can be compressed to a JPEG block. MPEG might work similarly, I don't know. Obviously, lossy compression could only be applied where it would also be used for the final animation.
The clients would not need access to the full scene information. Each client would maintain a cache of data, and would only request data as needed. Tiles would be allocated in adjacent groups, with some consideration of the scene makeup. (For example, all blocks containing a particular object would initially be grouped together.)
The client could also work on a similar batch of tiles in each frame of the scene. Also, after sending a request for more information, it wouldn't have to busy-wait if it could find another tile to work on in the meantime.
Finally, it would be desirable to use procedural textures where possible.
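The tiling scheme above could be sketched roughly as follows. This is a hypothetical illustration, not any real renderer's API; the names (`Tile`, `tile_positions`, `allocate_work_units`) and the group size of four are arbitrary choices. The key points it captures are the 16x16 work unit, the grouping of adjacent tiles, and one client keeping the same tile positions across every frame of a scene so its cached data stays useful.

```python
from dataclasses import dataclass

TILE = 16  # tile edge in pixels, matching a JPEG block


@dataclass(frozen=True)
class Tile:
    frame: int  # frame number within the scene
    x: int      # tile column, in tile units
    y: int      # tile row, in tile units


def tile_positions(width, height, tile=TILE):
    """Enumerate the (x, y) tile positions covering one frame."""
    cols = (width + tile - 1) // tile
    rows = (height + tile - 1) // tile
    return [(x, y) for y in range(rows) for x in range(cols)]


def allocate_work_units(positions, frames, group=4):
    """Batch `group` adjacent tile positions into one work unit, and let
    that unit cover the same positions in every frame of the scene, so a
    client can reuse its cached geometry and textures from frame to frame."""
    units = []
    for i in range(0, len(positions), group):
        batch = positions[i:i + group]
        units.append([Tile(f, x, y) for f in frames for (x, y) in batch])
    return units
```

A real allocator would also weight the grouping by scene makeup (all tiles containing one object together), which this sketch omits.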

(3) Keeping the rendered images secret could be more of a problem. Although I think it may be possible to obfuscate the picture (for example, by using four unspecified colour components rather than RGB), it may be preferable to promote the fact that people running the program would get sneak previews of parts of the film, and this would be a big motivator for installing it.
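The colour-component obfuscation might look something like this minimal sketch: the client renders into secret linear combinations of R, G and B, and only the server holds the inverse transform. (The post suggests four components; three are used here so the mixing matrix is square, and the matrix values are arbitrary placeholders.)

```python
MIX = [[2.0, 1.0, 1.0],
       [1.0, 2.0, 1.0],
       [1.0, 1.0, 2.0]]  # secret, invertible mixing matrix


def _inv3(m):
    """Invert a 3x3 matrix via the adjugate formula."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]


INV = _inv3(MIX)  # server-side secret


def obfuscate(rgb):
    """Client side: emit mixed components instead of plain RGB."""
    return [sum(MIX[r][k] * rgb[k] for k in range(3)) for r in range(3)]


def recover(mixed):
    """Server side: apply the inverse mix to recover RGB."""
    return [sum(INV[r][k] * mixed[k] for k in range(3)) for r in range(3)]
```

Note this is obfuscation, not cryptography: a determined viewer with a few known pixels could recover the mix, which is why the sneak-preview approach may be the better trade-off.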

Loris, Mar 09 2006

Processing power leeched from gamers Reclaim_20unused_20processor_20power
A very clever related idea. [Loris, Mar 09 2006]

Download Aborted: Can I help to render Shrek 3? http://downloadabor...render-shrek-3.html
Discusses this idea in some detail, with input from people who seem to know a fair bit about rendering. [jutta, Mar 09 2006]


       Even with compression, I don't think the input/output bandwidth requirements of rendering favor this kind of approach.   

       It's easy to think that it might work, but when you do the math, the cat-herding you need to put in (both technically and socially) dwarfs the processing you get out.   

       That said, it would be clever for a studio to do this if they wanted to create a large fan base of people who would feel like it's "their" movie.
jutta, Mar 09 2006

       [Loris] I looked at this in some detail when I rolled out my broker workstation software in the early 90s -- I had about 15,000 workstations at my disposal.   

       It seems that since the workstations weren't doing anything at night, rendering would have been perfect. Had a tough time convincing the client, though :)   

       My next distributed computing project is to beat Kasparov with thousands of cellphones.
theircompetitor, Mar 09 2006

       Over the course of a final render, total I/O on the client side wouldn't be too bad. The server side would need high, but not unreasonable bandwidth.   

       Textures and model details could be shared on a P2P basis to cut server-side upstream requirements.   

       Images could be rendered in small, randomly allocated pieces, and encrypted prior to transmission. This would make it a bit more difficult to view pre-release rendered images.   

       Procedural textures in a distributed environment may work, but only if it can be guaranteed that each machine will generate the exact same final texture. Otherwise, there would be a serious problem with artifacts, both in-frame and frame-to-frame.
Freefall, Mar 09 2006

       The inconsistencies in the way individual CPUs render things like ray-marcher shaders (volumetric lighting) make anything but utterly homogeneous render farms very difficult to work with. Maybe this might work for the simplest of scenes, but usually scenes without large texmaps and complex shaders don't need distributed rendering.
bristolz, Mar 09 2006

       //beat Kasparov with thousands of cellphones// Ow! My head! Where was I... Bishop to king 4... Cut it out will you! That hurt!
spidermother, Mar 09 2006

       Thank you for the comments; you are all probably right.
jutta, your link just goes to show how unoriginal I am, but the comments are very interesting (except for the mile of spam at the end).
Regarding the processing vs cat-herding issue specifically, clearly it would need a very process-intensive method to make it worthwhile. A comment in jutta's link makes some good points about using procedural models as well as textures.

       I hadn't considered that the different computers would render things differently. I'd have thought the IEEE standard for floating point precluded differences, but if not, moving to integers should fix this, at the cost of using the host less efficiently.   
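       The integer fallback would amount to fixed-point arithmetic, which gives bit-identical results on any machine at some cost in range and convenience. A minimal sketch, assuming an arbitrary Q16.16 format (16 integer bits, 16 fractional bits):

```python
SHIFT = 16        # Q16.16 fixed point: 16 fractional bits
ONE = 1 << SHIFT  # the value 1.0 in fixed-point form


def to_fixed(x):
    """Convert a float to Q16.16 (deterministic given the same input)."""
    return int(round(x * ONE))


def fx_mul(a, b):
    """Fixed-point multiply: the raw product carries 32 fractional
    bits, so shift back down to 16."""
    return (a * b) >> SHIFT


def fx_div(a, b):
    """Fixed-point divide: pre-shift the numerator to keep precision."""
    return (a << SHIFT) // b
```

Every operation here is integer shift-and-multiply, so two conforming machines cannot disagree the way their float pipelines or ray-marcher shaders might.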

       I'm considering now that it might be a win for an enthusiast/prosumer-level rendering package to maintain a peer-to-peer/bittorrent-like network to speed up rendering which would otherwise be done on a single computer. But I don't think that quite merits a separate idea.
Loris, Mar 09 2006

       Oh, I was expecting that we would all share in the disposal of animal parts.
normzone, Mar 09 2006

       Wow, Loris, you're right about that spam problem - I didn't actually read that far down, because I was really looking for FREE MOVIE TICKETS. Get your FREE MOVIE TICKETS here. [ducks, runs]   

       I think part of the problem is that the advances of rendering are very close to hardware - people are still developing algorithms to make something possible that simply couldn't be done before, and you can't afford to do it other than in hardware. On the other side, some of the arguments in the linked-to thread no longer hold - for example, having 2G of RAM isn't unusual, people have more high-bandwidth connections, etc.   

       You could have something that's, say, JVM and OpenGL based that amateurs can use to quickly render sequences. Do it cleanly, don't care how long it takes. I like FreeFall's P2P texture distribution idea. Pixar won't use it, but kids who want to become animators might. In a way, helping finish someone's term project would be more exciting than buddying up to a large studio.
jutta, Mar 09 2006

       I don't do much distributed computing nowadays with SETI@home and other BOINC projects. My AMD64 CPU underclocks itself automatically if it's not being used much, and that saves me money on the power bill. Running at full power to crunch numbers for SETI costs me. I even turn off all lights except a half-watt LCD nightlight when I use my PC at night.
vmaldia, Aug 05 2006
