I know cloud rendering sites like these are getting bigger now:
http://www.rayvision.com/en/main.php
https://www.rebusfarm.net/en/
but I'm wondering,
Instead of having to invest like $20-30 thousand in a workstation that can render as fast as I'd like with V-Ray/RenderMan/mental ray while I create my initial frame / render setup,
could I just use a cloud service to do my work and see the results almost instantly as I make changes? I'm confused whether it's strictly for batches of consecutive frames after you already have the look you want, i.e. just for animation,
or could it actually work as an almost-realtime cloud solution: as I'm tweaking my materials/shaders, adding lights, and setting up renders on just one frame (my initial one),
then when I hit the render button, harnessing the power of the cloud, the result comes back almost instantly? So no more waiting on my main workstation? And I don't even really need some crazy Xeons at all? I can just run Maya/Max and let the cloud rendering service take care of the rest every time I hit render on that one frame after a change?
Does anyone use it this way? Is it feasible, practical, and affordable instead of spending a ton of money that I don't have on a beast workstation?
Or would this end up costing like 1-2 dollars every time I make a change and hit the render button?
Replies
http://renegatt.com/webservice.php
But still, the data somehow has to get from your computer to the server, and if you upload 8K lossless textures and have to re-upload them with every tweak you make in Photoshop, you might be better off rendering locally if you have a slow internet connection.
I don't have fiber optic, but I get 60 Mbps down and 10 up.
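A quick back-of-envelope sketch of why re-uploads hurt (assuming one uncompressed 8K RGBA texture and that the full 10 Mbps upstream is usable, which real connections rarely sustain):

```python
def upload_seconds(size_bytes: float, mbps: float) -> float:
    """Seconds to push size_bytes over an mbps (megabits/second) uplink."""
    return size_bytes * 8 / (mbps * 1_000_000)

# One uncompressed 8K RGBA texture: 8192 * 8192 px * 4 bytes/px ~= 268 MB
texture_bytes = 8192 * 8192 * 4
minutes = upload_seconds(texture_bytes, 10) / 60
print(round(minutes, 1))  # roughly 3.6 minutes per tweaked texture
```

At ~3.6 minutes per texture on a 10 Mbps uplink, re-uploading several textures per tweak kills any "instant feedback" workflow, which is the point being made above.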
What about renting some Xeons from Amazon and using strip rendering/final gather?
Is there anywhere like Renegatt that uses amazon and sets it up automatically with a virtual machine but for xeons instead of gpus?
Or is using that many GPUs with Renegatt actually just as good as CPU rendering? Aside from VFX/particles & sims, can it still do GI raytracing, SSS, massive dense foliage, outdoor environments, and everything that CPU renderers can do? Would this be the way to go?
And if so, since I don't use Blender, which GPU renderer should I use with it?
Octane, FurryBall, Redshift, Arion?
My machine is running an i7 4770K, and to actually get much faster render speeds in V-Ray/RenderMan/mental ray without using a cloud render service, I think I would need to spend at least $10 thousand on Xeons, no?
http://www.elaspix.de/singleview/archive/2013/january/02/article/blender-render-farm-in-der-amazon-ec2-cloud.html
Also this:
https://cgcookie.com/blender/2013/08/09/setting-up-a-render-farm/
(also check out the comments)
Apart from that you'll have to do some research yourself; my knowledge of render farms pretty much ends here. Instead of cranking up the processing power, I always try to optimize everything else I can to bring render times down. For the little things I do, 2 GPUs and Blender work just fine. I rarely render a whole frame to tweak; I'll crop a small part of the image and get almost realtime feedback on that as I tweak materials etc.
From what you are describing, I don't get the feeling you'd be better off with Blender. Especially if you already have V-Ray and do archviz, I'd stick with V-Ray.
GPU rendering in Blender has serious limitations, like the whole scene having to fit in the video card's memory. So there is a cap on scene complexity and texture size that you could easily hit, at which point you simply cannot render any more without simplifying the scene or buying a new card. AFAIK they are working on changing this, but that could be a while away.
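A rough sketch of that hard VRAM cap (the numbers are hypothetical; real scenes also carry geometry, BVH acceleration structures, and framebuffers, so actual headroom is smaller):

```python
def fits_in_vram(texture_bytes: int, geometry_bytes: int, vram_gb: float) -> bool:
    """Naive check: does the scene's raw data fit on the card at all?"""
    return texture_bytes + geometry_bytes <= vram_gb * 1024**3

# Each uncompressed 8K RGBA texture is ~268 MB; on a 4 GB card,
# about 20 of them alone already blow the budget.
eight_k = 8192 * 8192 * 4
print(fits_in_vram(8 * eight_k, 0, 4))   # a handful of textures still fits
print(fits_in_vram(20 * eight_k, 0, 4))  # a texture-heavy scene does not
```

Unlike CPU rendering, there is no swapping to system RAM here: once the check fails, the render simply cannot run until you shrink the scene or buy a bigger card.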
With Blender I'd personally always use the render engine "Cycles" that comes with it (don't confuse it with "Blender Internal"; if you read that anywhere, that's not the one you want). I don't like the way other render engines are integrated, and those setups get so "exotic" in terms of the number of people using them that it's harder to get support through forums/tutorials than if you stick with Cycles.
Also, I should clarify that Cycles can run on either GPU or CPU, and new features tend to get developed for the CPU version first and might take a while to get ported to GPU.
It is very good for a great many tasks, but in my humble opinion, for sunlight-through-window indirect-lighting archviz stuff it's not quite there yet.
I believe I read an article about the IKEA product renderings where they said artists had to learn to "see through the noise" when tweaking and testing in V-Ray, and then let their final scene render out overnight.
Can you show us what you are working on? It sounds interesting.
It takes quite a bit of time to upload to and initialize Rebus; it's certainly not quick enough for tweaking.
Rebus basically charges based on CPU time, so the rate depends on rendering complexity. I had lots of refractive glass in a recent sequence and had to optimize it a ton to keep the Rebus cost down, but it still cost more than other scenes.
You can also pay a higher fee to prioritize your renders, so when it's a busy time of day and Rebus has a lot of jobs coming in, yours moves up the queue. Helpful if you're on a deadline, but it can be 2x or 4x the normal cost.
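A sketch of how that kind of usage-based billing multiplies out (the rate and job size below are made-up placeholders, not actual Rebus prices):

```python
def farm_cost(ghz_hours: float, rate_per_ghzh: float, priority_mult: float = 1.0) -> float:
    """Estimated farm cost: compute used * per-unit rate * priority multiplier."""
    return ghz_hours * rate_per_ghzh * priority_mult

# Hypothetical: a 50 GHz-hour job at a 0.02/unit rate
normal = farm_cost(50, 0.02)        # base price
rush = farm_cost(50, 0.02, 4.0)     # same job at 4x priority
print(normal, rush)
```

This is why the complexity optimization above pays off twice: every GHz-hour you shave off the job is also multiplied by whatever priority factor you're paying.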
Like Martin says, isolate/crop your test renders for quick feedback. It also helps me to render at 1/4 resolution at the start, while I'm making large lighting/camera adjustments, then inch my way up bit by bit.
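The quarter-resolution trick pays off quadratically, since the pixel count (and, roughly, the render time of an unbiased renderer) scales with the square of the linear resolution:

```python
def relative_pixels(linear_scale: float) -> float:
    """Fraction of full-resolution pixels rendered at a given linear scale."""
    return linear_scale ** 2

print(relative_pixels(0.5))   # half resolution  -> 1/4 of the pixels
print(relative_pixels(0.25))  # 1/4 resolution   -> 1/16 of the pixels
```

So a 1/4-resolution preview touches only ~6% of the pixels, which is why those early lighting/camera passes come back so much faster.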
You should also look into V-Ray's realtime renderer. In 3ds Max it couples with the ActiveShade viewport. I haven't used it yet, but I'll be looking into it soon.