As usual, there will be a lot of Mozillians attending FOSDEM this year, including 3 members of the graphics team. Bas Schouten will give a talk about Utilizing GPUs to accelerate 2D content on Saturday (16:30 in the Mozilla devroom). It’s going to be a very interesting and also fairly technical talk. If you are interested in the challenges of hardware accelerated 2D rendering, we hope to see you there!
And as Mozilla is not all about graphics, there will be plenty of other interesting talks in the Mozilla devroom this year that you should check out!
This post is only interesting for advanced Firefox users on Linux who manually activated OpenGL compositing.
On more and more platforms we perform compositing in a separate thread from content rendering. This is awesome for smooth panning and zooming, as well as smooth video playback and CSS animations. We refer to this as “off-main-thread compositing” (OMTC).
Currently, we use OMTC on Android, Firefox OS, Mac, and very soon on (some flavors of) Windows. On Linux, it can be activated manually, but it is turned off by default as there are still some bugs to fix (just like hardware accelerated compositing).
Main-thread compositing and off-main-thread compositing rely on different infrastructure, and as we move toward OMTC, the main-thread compositing code becomes more and more obsolete. Today we don’t ship main-thread OpenGL compositing by default on any platform.
Very soon we will remove the main-thread OpenGL compositing code. This will feel great, because that’s a few thousand lines of code that we won’t have to support anymore. It will not affect any user running the default configuration. However, for adventurous users who manually opted in to OpenGL compositing on Linux, compositing will suddenly fall back to the CPU.
OpenGL compositing will not be gone; it will be gone for main-thread compositing only. If you opted in to OpenGL compositing on Linux, you will still be able to activate OMTC and enjoy OpenGL compositing there.
At the moment, the only place where OpenGL layers are really useful on Linux is WebGL, because compositing on the CPU forces reading back the output of the WebGL canvas, which is slow. For other use cases, OpenGL compositing trades faster blitting for slower texture uploads (as we are not yet using a good solution for direct texturing on Linux, such as texture-from-pixmap).
So, if you decide that you want GPU compositing on Linux, here is how to activate OMTC on this platform:
In about:config, set the following prefs to “true”:
- layers.offmainthreadcomposition.enabled
- layers.async-video.enabled (this is optional)
Also set this pref to “false”
As I write this post, we have not yet removed main-thread OpenGL layers. Power users don’t need to jump on OMTC yet, unless they want to report lots of bugs! I will write another blog post when we actually make the change in nightly (soon-ish) and when the change hits the stable version of Firefox.
Again, this does not affect the vast majority of our users.
Edit: We recently removed the need for an environment variable.
I heard we don’t have enough mentored bugs filed for the gfx code. If you are interested in contributing to Gecko’s graphics code, read on.
Contributing to Gecko for the first time can be scary because Gecko is a complex beast and it is very easy to get overwhelmed by the amount of code. Trying to understand it all is impossible, and knowing what parts are important to understand and where to start is hard when you approach a code base for the first time.
This is why mentoring bugs is useful.
A mentored bug is a bug that is approachable for new contributors, and that has someone who feels familiar with the code volunteering to mentor.
The mentor of a bug already has a good understanding of the code and can help new contributors get started, show them where to look, and explain the non-trivial things that may get in the way. In fact, the mentor probably already knows how to fix the bug, but chooses to let someone else do it because the task is a good way to get started contributing.
Josh Matthews made a pretty cool tool to help find mentored bugs.
Enough with the introduction, let’s talk about a good place where newcomers can start contributing to Gecko’s graphics code.
In Gecko we have two drawing APIs:
Thebes, which is for the most part a wrapper around Cairo, is what we have been using for a while.
Moz2D, a more recent API with several backends (including Cairo), is what we want to use instead of Thebes. Migrating from Thebes to Moz2D is a long process that has been going on for a while and will keep us busy for a while longer. It is a very good place to start contributing, because a lot of it isn’t very hard. Both APIs fulfill the same roles, so in many cases the work is mostly about finding uses of Thebes classes (such as gfxIntSize, gfxIntRect, gfxImageSurface, etc.) and replacing them with their Moz2D equivalents (gfx::IntSize, gfx::IntRect, gfx::DataSourceSurface, etc.).
Moving from Thebes to Moz2D is important for us to make Gecko’s code more awesome because it makes Gecko easier to maintain and lets us build our graphics stack on top of better and more future-proof foundations.
We already filed a few mentored bugs to port from Thebes to Moz2D, such as https://bugzilla.mozilla.org/show_bug.cgi?id=882113 and its dependent bugs, and we have already received contributions, which is great! More of these bugs will be filed, as we still have a lot of Thebes in the code.
If you are interested in helping there, the rest of this post contains some useful information:
Edit: For the curious, more info about Moz2D here: https://wiki.mozilla.org/Platform/GFX/Moz2D
Converting the simple classes, like the size, point, and rectangle classes, is rather trivial because they mostly have the same interfaces.
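To give an idea of how mechanical these simple conversions are, here is a sketch. The stand-in classes below are simplified for illustration (the real gfxIntSize and gfx::IntSize live in Gecko’s headers and have more members), and the `ToIntSize` helper mimics the spirit of the glue helpers in gfx2DGlue.h:

```cpp
#include <cassert>

// Simplified stand-in for the Thebes size class (real one: gfxIntSize).
struct gfxIntSize {
  int width, height;
  gfxIntSize(int w, int h) : width(w), height(h) {}
};

// Simplified stand-in for the Moz2D size class (real one: gfx::IntSize).
namespace gfx {
struct IntSize {
  int width, height;
  IntSize(int w, int h) : width(w), height(h) {}
};
}

// Before: a function taking the Thebes type.
int AreaThebes(const gfxIntSize& aSize) {
  return aSize.width * aSize.height;
}

// After: the same function ported to Moz2D. Because the two classes
// expose the same fields, the body is unchanged.
int AreaMoz2D(const gfx::IntSize& aSize) {
  return aSize.width * aSize.height;
}

// Glue in the spirit of gfx2DGlue.h: convert at the call boundary so
// unported callers keep working until they are ported themselves.
gfx::IntSize ToIntSize(const gfxIntSize& aSize) {
  return gfx::IntSize(aSize.width, aSize.height);
}
```

A caller still holding a gfxIntSize can write AreaMoz2D(ToIntSize(size)) until its own patch lands.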
A class that is also interesting to convert is Thebes’ gfxImageSurface.
gfxImageSurface represents an image whose pixels are stored in memory and can be initialized and accessed through pointers (as opposed to, say, a texture on the GPU, or vector shapes that haven’t been rasterized yet).
Typically we use gfxImageSurface when we have some producer code that creates an image in memory and we want to wrap it in an object:
RefPtr<gfxImageSurface> thebesSurface = new gfxImageSurface(dataPointer, size, stride, format);
// we can use this image as any other surface type and also access the buffer through pointers
Or when we want to access an image’s buffer to pass it to some other API that doesn’t talk in terms of Thebes surfaces (like OpenGL texture uploads).
So for these two use cases we need an image abstraction that guarantees that the image data consists of pixels accessible through pointers, and that is what gfxImageSurface provides.
The Moz2D equivalent is mozilla::gfx::DataSourceSurface.
It offers pretty much the same thing and passing from one to the other in the use cases described above is rather easy.
One difference, though, is that while gfxImageSurface can be instantiated directly with the new operator, a DataSourceSurface is created using gfx::Factory::CreateDataSourceSurface or gfx::Factory::CreateWrappingDataSourceSurface (static methods, so you don’t need to hold on to a Factory object).
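To make the creation-pattern difference concrete, here is a sketch using a mock Factory. This is not Gecko code: the real creators live in mozilla/gfx/2D.h and their exact signatures (formats, stride handling) differ; only the shape of the pattern is shown:

```cpp
#include <cassert>
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical stand-in for Moz2D's DataSourceSurface: an image whose
// pixels live in memory and are reachable through a pointer.
struct DataSourceSurface {
  std::vector<uint8_t> owned;  // used when the surface allocates its own pixels
  uint8_t* data = nullptr;     // always points at the pixel buffer
  int32_t stride = 0;
};

// Mock Factory mirroring the Moz2D creation pattern: call sites never
// write `new DataSourceSurface`, they go through static creators.
struct Factory {
  // Counterpart of gfx::Factory::CreateDataSourceSurface: allocates pixels.
  static std::shared_ptr<DataSourceSurface> CreateDataSourceSurface(int width,
                                                                    int height) {
    auto surf = std::make_shared<DataSourceSurface>();
    surf->owned.assign(static_cast<size_t>(width) * height * 4, 0);  // 4 bytes/pixel
    surf->data = surf->owned.data();
    surf->stride = width * 4;
    return surf;
  }
  // Counterpart of gfx::Factory::CreateWrappingDataSourceSurface: wraps
  // pixels produced by someone else, without copying them.
  static std::shared_ptr<DataSourceSurface> CreateWrappingDataSourceSurface(
      uint8_t* aData, int32_t aStride) {
    auto surf = std::make_shared<DataSourceSurface>();
    surf->data = aData;
    surf->stride = aStride;
    return surf;
  }
};
```

The wrapping creator covers the producer use case shown earlier with gfxImageSurface: the buffer already exists, and the surface is just an object handle around it.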
Some pieces of advice:
- It is very easy to fall into a snowball effect: you convert one function parameter from Thebes to Moz2D, then fix all the code that depends on it, then all the code that depends on that code, etc. Don’t hesitate to make small patches that convert a method or a class to Moz2D, and use the helpers in gfx2DGlue.h to glue the calling code together. In a subsequent patch, the calling code can be ported to Moz2D in turn. This way we avoid huge monster-patches that are awful to review and rebase. Mercurial queues help a lot with this workflow (having several patches in a certain order and going back and forth between them to modify them).
- When the task is some mechanical conversion, you can try to understand the surrounding code out of curiosity, but you really don’t need to. I have seen motivated new contributors try to understand all of Gecko’s graphics code before doing a simple patch, and end up not doing the patch, because understanding everything all at once is hard and discouraging. Try to limit yourself to what you think is needed for the task. When in doubt, ask the mentor of the bug which parts of the code are important to understand, and don’t hesitate to voice what you intend to do before doing it, so that the mentor can tell you if you are going in the wrong direction before you have spent too much time on it.
- Again, ask questions: on the bug, by email to the mentor, and on IRC (#gfx for graphics stuff, #introduction for problems like building Firefox). Sometimes on IRC someone asks a question and nobody answers it. This may feel intimidating and rude, but in fact it just means that no one watching IRC at that very moment knows the answer. Perhaps the people who could answer are in a different time zone or a different IRC channel. In any case, you can send an email or ping your mentor on IRC, and if they are not swamped with work they will try to help you asap or direct you to the right person to ask.
So if you are looking to contribute to gfx but haven’t yet found a starting point, this is a good one.
Hardware acceleration conveys the idea that some parts of the rendering use the GPU in order to speed things up.
In a web browser there are two main topics for hardware acceleration:
- Hardware accelerated content rendering: Drawing content on the GPU, like rendering text, shadows, shapes, etc. I also place video decoding in that bag but it is a somewhat different topic and I am not going to talk about it in this post.
- Hardware accelerated compositing: We render different elements of a web page in different layers (think of layers as photoshop layers) and compositing is the action of flattening all these layers into the final image that gets to the screen.
The former is generally very hard to achieve, because GPUs are not always good at rendering content. It depends on the workload: there are things we can do efficiently on the GPU and things we can’t. For example, graphics cards are not good at rendering text (at least, text that looks good). Some canvas operations are very fast on the GPU (like blitting surfaces) but others are not (drawing bezier curves, and shapes in general), so there is a trade-off, and a “hardware-accelerated canvas” is not always actually accelerated, depending on what you are doing with your canvas.
The latter, GPU compositing, is a much more interesting optimization for browser implementors, because it is a simple task that the GPU is very good at: blitting quads. Gecko performs compositing on the GPU except when the graphics card model or driver is black-listed and, unfortunately, except on desktop Linux, because we haven’t yet dedicated enough resources to supporting it efficiently (it can be activated manually, but the results will not always be better). Note that Linux accelerated compositing is a very good area to contribute to.
When people around the internet speak of hardware acceleration in web browsers, they almost always refer to accelerated compositing.
Now let’s talk about a very bad misconception that has spread: I have read in various places on the internet that using 3D transforms enables hardware acceleration. This is very wrong. Gecko, WebKit, Blink, etc. use heuristics to determine when elements of a web page should be placed in a separate layer. Doing so lets us avoid invalidating (and re-drawing) an element that overlaps with something that is animated, among other optimizations. Applying a 3D transform to an element just happens to trigger the heuristic that gives the element its own layer. Layerizing an element is different from enabling hardware acceleration: when the browser starts up, it decides whether it will do compositing on the CPU or the GPU, and it won’t switch between the two because someone applied a transform to some element of a web page.
Layerizing an element is good if the element is going to be animated, but it is not something that will just make things faster. It can, and most likely will, use more memory because it will require us to allocate an extra texture to host the pixels of the element. For example, Gecko tends to layerize elements during a CSS transition, and merge back the element with the rest as soon as the transition is over, to avoid having too many layers at the same time.
Sometimes people blog about how applying a 3D transform can accelerate rendering; then other people read it and believe that applying transforms is a silver bullet for making web pages faster. Please, please, don’t blindly do things like that. I realize that we need to better document what layers acceleration in a web browser is, and how compositing works. Thankfully, most browsers today have roughly the same notions of layers and compositing, so talking about layers in Gecko or WebKit should be equivalent from a web developer’s point of view. Layerizing for the sake of layerizing can degrade performance and memory usage. Heuristics evolve, and using hacks in websites today can prevent us from doing clever optimizations in the browser tomorrow.
So only use this kind of hack when you understand what it actually does in the browser and when you can observe a real difference. Always remember that using a hack for something it is not meant for may speed things up in some browser today, and make things worse later as browsers evolve.
We’ve been quite busy lately and, I must confess, not very good at feeding this blog. Sorry about that. If you want some news about Gfx and you are attending FOSDEM this year, I invite you to see the two talks by Gfx members:
In the first one, on Sunday morning, I will give a quick overview of some parts of our rendering pipeline, and focus on the Layers system that we are in the process of refactoring. I think it should be an interesting talk for anyone curious about how things work under the hood, or for those who would like to contribute specifically on the topic of layers and off-main-thread compositing.
In the second talk (Sunday afternoon), Bas will give useful pointers for people interested in contributing to graphics in Gecko.
There are plenty of other cool Mozilla talks at FOSDEM this year.
See you tomorrow morning!
- is it improving/regressing your user experience? In what ways?
- do you see rendering bugs with OMTC enabled? If so, on which web pages?
- is it crashing?
- most likely the experience will be less smooth at this point, since we have not yet worked on optimizing OMTC on desktop platforms. However, if it turns out to be faster, it’s worth it.
Set up a testing profile in Firefox Nightly
./firefox --no-remote -P
- you are using Linux:
You need to set an environment variable before running Nightly. This is the most annoying part. In your terminal, in the directory containing Firefox Nightly, type the following command:
Start Firefox again by entering in your terminal:
./firefox --no-remote -p testing
- you are using OS X:
- no need to set an environment variable. Start Firefox with the testing profile by typing ./firefox --no-remote -p testing if you closed it, and set the preference layers.offmainthreadcomposition.enabled in about:config to “true”.
./firefox --no-remote -p testing
Now let’s test.
Report the bugs.
gdb --args ./firefox --no-remote -p testing
If you need help
The primary purpose of this blog is to help communication across and around the Mozilla Gfx team. In particular, we want to help non-employee contributors be better connected and better informed of any relevant developments.
In particular, we aim for this blog to have:
- team goal/project updates;
- announcements of new projects, or developments in existing projects;
- early feature announcements;
- posts introducing anyone in the gfx team, regardless of employee status;
- posts from anyone in the gfx team, regardless of employee status.