WebRender’s original strategy for rendering web pages was “we re-render everything each frame”. Quite a bold approach when all modern browsers’ graphics engines are built around the idea of first rendering web content into layers and then compositing these layers onto the screen. This compositing approach can be seen as a form of mandatory caching. It is driven by the observation that most websites are very static, and it optimizes for that case, often at the expense of making dynamic content harder to handle.
WebRender, on the other hand, almost took it as a mission statement to do the opposite and go back to a simpler rendering model that just renders everything directly, without a painting/compositing separation.
I do have to add a bit of nuance: as you scroll with WebRender today, we re-draw a rectangle for each glyph on screen each frame, but the rasterization of the glyphs has always been cached in a traditional text rendering fashion. Most other things were initially redrawn each frame.
Working this way has a few advantages. In Gecko, a lot of code goes into figuring out what should go into which layer and, within these layers, figuring out the minimal amount of pixels that need to be re-drawn. More often than we’d like, figuring these things out takes longer than it would have taken to just re-draw everything (but of course you don’t know that until it’s too late!). Creating and destroying layers is very costly, which means that when the heuristics fail to guess what will change, stuttering can happen and it looks bad.
So layerization is expensive, its heuristics are hard to maintain and overall it is hard to make it work well with very dynamic content where a lot of elements are animated and transition from static to animated (motion design type of things).
On the other hand WebRender spends no time figuring layerization out and the cost of changing a single thing is about the same as the cost of changing everything. Instead of spending time optimizing for static content assuming rendering is expensive, the overall strategy is to make rendering cheap and optimize for dynamic and static content alike.
This approach performs well and scales to very complex web pages, but there has to be a limit to how much work can be done each frame. And web developers have no limit, as shown by this absolutely insane reproduction of an oil painting in pure CSS. Most browsers will spend a ton of time painting this incredibly complex page into a layer and will then let you scroll through it smoothly by just moving the layer around. WebRender, on the other hand, re-draws everything each frame as you scroll, and on most GPUs this is too much. Ouch.
In addition to that, even if WebRender is fast enough to render complex content every frame at 60fps, there are valuable power optimizations we could get from not redrawing some things continuously.
In short, a “picture” in WebRender is a collection of drawing commands, and the scene is represented as a tree of pictures. The idea behind picture caching is to have a very simple cost model for rendering a picture (much, much simpler than Gecko’s layerization heuristics) and to opt into caching the rendered content of a picture when we think it is profitable (as opposed to always having to cache the content into a layer). Because we don’t have a separation between painting and compositing in WebRender, switching a picture between cached and non-cached isn’t very expensive, unlike creating and destroying layers in Gecko today. We also don’t need double or triple buffering as we do for layers, which means a lot less memory overhead.
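To make the idea concrete, here is a minimal sketch in Rust of what a picture tree with an opt-in caching decision could look like. This is a hypothetical illustration, not WebRender’s actual types or cost model: the `Picture` struct, the threshold values, and the `should_cache` heuristic are all made up for the example.

```rust
// Hypothetical sketch: a scene as a tree of pictures, where each picture
// holds drawing commands and we opt into caching only when a simple cost
// model says it is profitable. Not WebRender's real API.

struct Picture {
    draw_cost: f32,        // estimated cost of re-drawing this picture's commands
    frames_unchanged: u32, // how many frames the content has been static
    children: Vec<Picture>,
}

impl Picture {
    /// Total estimated cost of re-drawing this picture and its subtree.
    fn subtree_cost(&self) -> f32 {
        self.draw_cost
            + self.children.iter().map(|c| c.subtree_cost()).sum::<f32>()
    }

    /// Opt into caching only when the subtree is expensive to re-draw
    /// and has been stable for a few frames (thresholds are invented).
    fn should_cache(&self) -> bool {
        self.subtree_cost() > 10.0 && self.frames_unchanged >= 3
    }
}

fn main() {
    // A cheap, static leaf: not worth caching on its own.
    let text = Picture { draw_cost: 2.0, frames_unchanged: 120, children: vec![] };
    // An expensive, stable subtree (think: the CSS oil painting): cache it.
    let page = Picture { draw_cost: 9.5, frames_unchanged: 120, children: vec![text] };
    assert!(page.should_cache());
}
```

The point of such a model is that the decision is cheap to evaluate every frame and cheap to reverse: flipping `should_cache` back to false just means drawing directly again, with none of the layer creation/destruction cost described above.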
I’m excited about this hybrid approach between traditional compositing and WebRender’s initial “re-draw everything” strategy. I think that it has the potential to offer the best of both worlds. Glenn is making good progress on this front and we are hoping to have the core of the picture caching infrastructure in place before WebRender hits the release population.