
Render farms - how do they work?

MasterBeard polycounter lvl 7
Hello Polycount.
Could anyone who has used render farms please explain what they're for and how the pricing is calculated? I've read all the info on their websites and used the automatic time estimators, and I still can't understand how it works.

I mean, the basics: you send your max file to their server and they render it for you, right? A render that would take 2 hours on your PC is done in a few minutes on the farm. Do you pay for 1 hour of rendering on your PC, or on their PCs?

I'm totally confused. If anyone could explain this to me, I'd be very thankful.
Cheers.

Replies

  • armagon polycounter lvl 11
    I've used pixelplow.net in the past. Basically, you download an agent, upload your scene, and wait for it. You can choose the computational power, and that's going to affect the pricing. It's priced based on GHz/h. Quite good for Vue renders!
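
    For a rough feel of how GHz/h pricing works out, here's a minimal back-of-the-envelope sketch in Python (the rate and clock speeds are made-up placeholders, not pixelplow's actual prices). The key point for the original question: GHz-hour billing charges for total compute, so spreading the job across more farm nodes finishes faster but costs about the same.

        # Back-of-the-envelope GHz-hour cost estimate.
        # All numbers are hypothetical placeholders, not real farm rates.

        def ghz_hour_cost(frames, minutes_per_frame_local, local_ghz_total,
                          rate_per_ghz_hour):
            """Total compute = local render time x local GHz capacity;
            GHz-hour billing charges that compute no matter how many
            farm nodes it's spread across."""
            local_hours = frames * minutes_per_frame_local / 60.0
            ghz_hours = local_hours * local_ghz_total
            return ghz_hours * rate_per_ghz_hour

        # Example: 100 frames at 3 min/frame on a 4-core 3.5 GHz box
        # (4 x 3.5 = 14 GHz total), at a hypothetical $0.01 per GHz-hour.
        cost = ghz_hour_cost(frames=100, minutes_per_frame_local=3.0,
                             local_ghz_total=4 * 3.5, rate_per_ghz_hour=0.01)
        print(f"${cost:.2f}")  # -> $0.70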

  • MasterBeard polycounter lvl 7
    Thanks Armagon.
  • Eric Chadwick
    The price fluctuates per render because usually each render's computational needs are different. I've used Rebus for V-Ray renders, and it was a very different cost for complex vs. simple scenes.
  • Mark Dygert
    On the other end of the spectrum, you can set up your own render farm with the Backburner software that ships with Max and Maya. Start the manager, start the servers, fire off jobs, and keep an eye on them using the monitor. I have this set up with a few PCs at home. When I was working at HER we depended on this type of farm; it was amazing how much rendering/iteration you could get done with just a handful of machines on a farm.

     It helps if you have another machine or two to take the work, but you can run it on your work machine too. A lot of people do this to queue up a bunch of renders and let them run overnight. It also helps that you can set a schedule for the farm and black out hours of the day when it won't render; that way, jobs just queue up waiting for a render node to open up.

    You also need to be smart about the pathing of your images if you're using more than one machine. It helps to share the project folder on the network and use UNC pathing ("\\yourmachine\myfolder\textures...") instead of local drive letters ("C:\...").
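
    To illustrate the UNC-pathing point, here's a small hypothetical Python sketch for catching local drive-letter paths before a job goes out to the farm. The machine and share names (\\mybox\projects) are made up; substitute your own.

        # Farm nodes can't see "C:\..." paths on your workstation, so
        # rewrite them to a network share every node can reach.
        # The mapping below is hypothetical; use your own share names.
        LOCAL_ROOT = r"C:\projects"
        UNC_ROOT = r"\\mybox\projects"  # shared folder visible to all nodes

        def to_unc(path):
            """Rewrite a local project path to its UNC equivalent, or
            raise if it lives outside the shared project folder."""
            norm = path.replace("/", "\\")
            if norm.lower().startswith(LOCAL_ROOT.lower()):
                return UNC_ROOT + norm[len(LOCAL_ROOT):]
            if norm.startswith("\\\\"):
                return norm  # already UNC, leave it alone
            raise ValueError(f"not visible to render nodes: {path}")

        print(to_unc(r"C:\projects\textures\brick_diffuse.png"))
        # -> \\mybox\projects\textures\brick_diffuse.png
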
  • Hito interpolator
    With Backburner:
    The Manager receives a copy of the scene you want to render, sends it out to the servers, and assigns each server a frame to render. When a server completes its frame, it saves the frame to some network location and requests the next unrendered frame from the Manager. Rinse and repeat until the whole sequence is done. Slower servers render fewer frames, faster servers render more. Once the whole job is done, the Manager keeps a copy of the scene on record for a preset amount of time, then either dumps it or archives it somewhere else. Servers don't keep finished jobs, and every server has to have write access to the save location, wherever it is on the network.

    You can get a fairly accurate estimate if you know the performance of the servers and push out a few test frames to get an average frame time. E.g. a 100-frame job at a 3 min/frame average on an i7 quad-core 3.5 GHz, with 3 machines on your farm: render time for the job will be in the neighborhood of 300 min / 3 = 100 min (about 1 hr 40 min), vs. 5 hrs on a single machine (see the sketch after this post).

    The real rub is that farms generally don't do well if you're rendering a single large image. You might try render strips, but lighting-calculation variations might make the final result less than desirable.

    And as Mark said, there's a bunch of other settings you can tweak to help get the most efficient use out of your own farm. As far as renting goes... I don't know; I imagine it's priced per unit of time per CPU core.
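
    Hito's estimate is easy to script. Here's a minimal sketch using the same numbers as the example above (100 frames, 3 min/frame, 3 identical nodes); with mixed-speed nodes you'd sum per-node throughput instead, since faster servers grab more frames:

        # Ideal-case wall-clock estimate for a frame sequence on a farm:
        # push a few test frames, average them, divide by node count.

        def farm_estimate(frames, avg_minutes_per_frame, node_count):
            """Assumes identical nodes and roughly equal-cost frames;
            real jobs vary frame to frame."""
            single_machine = frames * avg_minutes_per_frame
            return single_machine, single_machine / node_count

        single, farm = farm_estimate(frames=100, avg_minutes_per_frame=3.0,
                                     node_count=3)
        print(f"single machine: {single/60:.1f} h, farm: ~{farm:.0f} min")
        # -> single machine: 5.0 h, farm: ~100 min (about 1 hr 40 min)
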
  • MasterBeard polycounter lvl 7
    Thanks a lot, guys, for your time and help. I appreciate it a lot.
