Surprisingly (or unsurprisingly, depending on who you ask), Google doesn't give any relevant results for such a simple query, so hopefully it's okay to ask here. My guess/intuition is that it's because of massive amounts of cache misses due to random memory access, but I don't know for sure.
I could read about the entire algorithm in complete detail, code up an implementation, and then benchmark parts of it one at a time, but that's a lot of work just to satisfy an idle curiosity. Sorry for the noise, but the alternative route to an answer is just waaaay too much work.
I don't mind a technical, down-to-the-nitty-gritty answer; in fact that's exactly what I'm looking for, which is why I'm posting in a technical forum.
There are many reasons, but the main one is that they're doing a LOT of calculations.
It depends on the renderer, but when you set your sample rate you're essentially subdividing pixels. For example, 4 global samples of an HD frame means 4 rays per pixel × 1920 × 1080, which is over 8 million rays. You might question the need for so many samples: there are only so many pixels, so why not use one ray each? But a single ray may not accurately sample the surface (think fireflies), and the temporal stability of your render will probably suffer too: with such a small dataset, moving the camera slightly can make the same surface produce a wildly different result. Put simply, if you don't sample enough, you'll probably see artifacts in your render.
Then you realise that 4 samples is extremely low: Cycles renders frequently use sample counts in the thousands, and that means billions of rays for an HD frame.
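The arithmetic above is easy to sketch. Note this counts only primary camera rays; every bounce along a path multiplies the total further, so these are lower bounds:

```python
def rays_per_frame(width, height, samples_per_pixel):
    """Primary camera rays needed to render one frame at a given sample rate."""
    return width * height * samples_per_pixel

# 4 samples per pixel at HD: just over 8 million primary rays.
print(rays_per_frame(1920, 1080, 4))     # 8294400
# A 2000-sample production render: over 4 billion.
print(rays_per_frame(1920, 1080, 2000))  # 4147200000
```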
Other than that, other causes for slowdowns might be:
- Renderers re-uploading the whole scene each frame, because that's the only way they can handle geometry that changes topology or adds/removes points.
- Applying post-processing.
There are plenty of optimisations (geometry caches, splitting your paths, etc.), but the main point is: lots of non-trivial calculations.
Unless I'm misunderstanding something, that claim seems a bit off to me. A cache miss is when data the program needs isn't in the CPU cache, so the CPU has to go fetch it from RAM; that's a separate thing from a branch misprediction, which is when the program takes a turn the CPU wasn't expecting. I don't think there's a lot of branchy boolean logic going on in path tracers, since it's mostly heavy floating-point math, so I don't think that's where the performance goes. It certainly would have been a performance hit at one point historically, but I'd be surprised if it was still an issue in most commercial renderers.
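Whichever way that debate goes, cache misses come from the data *access pattern*, not from branching, and you can see the effect even from Python. A minimal sketch: the same summation is done twice, once walking indices in order and once in shuffled order, so only the memory access pattern differs. (Timings are illustrative only; the interpreter adds its own overhead on top of the hardware effect.)

```python
import random
import time

N = 2_000_000
data = list(range(N))

seq = list(range(N))   # sequential indices: predictable, prefetcher-friendly
rnd = seq[:]
random.shuffle(rnd)    # shuffled indices: unpredictable jumps through memory

def walk(indices):
    """Sum data[] in the order given by indices."""
    total = 0
    for i in indices:
        total += data[i]
    return total

t0 = time.perf_counter(); s1 = walk(seq); t_seq = time.perf_counter() - t0
t0 = time.perf_counter(); s2 = walk(rnd); t_rnd = time.perf_counter() - t0
assert s1 == s2        # identical work either way; only the access order differs
print(f"sequential: {t_seq:.3f}s  random: {t_rnd:.3f}s")
```

On most machines the shuffled walk is noticeably slower, and the no-branch loop body shows the slowdown isn't about mispredicted branches.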