Tag Archives: Mozilla

Dietrich recently posted about the memory usage of social plugins, and I found the results rather surprising because, at least in the case of Facebook, I didn’t think it ever loaded enough code to consume 20+MB of memory.

When I first learned about social plugins, I thought they were a really cool idea with a lot of potential. If they use a ton of memory, though, that feels like a bit of a deal breaker for using them. So, being the curious engineer that I am, I decided to test this out myself. I conducted these tests in a new Firefox profile, and I was not signed into Facebook (to try to replicate the experience Dietrich had).

One Like Button

For my first test, I used a very simple page containing the default like social plugin pointing to my site.
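
The full code is provided with this post; roughly, though, such a page looks like the following sketch of the era's XFBML embed (the href here is just a placeholder for my site):
<!DOCTYPE html>
<html>
  <body>
    <!-- Root element the Facebook JavaScript SDK hooks into -->
    <div id="fb-root"></div>
    <!-- Load the SDK once; this is the all.js that shows up in the resource list later -->
    <script src="http://connect.facebook.net/en_US/all.js#xfbml=1"></script>
    <!-- The default like social plugin, pointing at the page being liked -->
    <fb:like href="http://example.com/"></fb:like>
  </body>
</html>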

[Image: like page result]

One like button doesn’t seem to add much, which is good!

Two Like Buttons

The next test I tried was duplicating the like button so it showed up twice. This code is a bit naive since it duplicates a <div> element with the same id and includes the JavaScript twice when it doesn’t need to. However, it shows what someone who just copies and pastes will end up with, which I think is valuable.

[Image: like page (two buttons) result]

As you can see, memory usage nearly doubled. This is a bit surprising since the exact same JavaScript is included. I would expect there to be no additional shapes, yet that number nearly doubles. scripts and mjit-code also double, and I would expect at least the latter not to.

A more interesting version of this test is to not include the JavaScript twice, and instead add one additional <fb:like> button that points to a different URL (sketched below).
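
Roughly, that means keeping a single fb-root element and a single script include, and simply adding a second <fb:like> with a different href; again, this is a sketch with placeholder URLs:
<div id="fb-root"></div>
<script src="http://connect.facebook.net/en_US/all.js#xfbml=1"></script>
<!-- Two like buttons for two different URLs, but only one copy of the SDK -->
<fb:like href="http://example.com/"></fb:like>
<fb:like href="http://example.com/another-page/"></fb:like>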

[Image: two like button test results]

Interestingly, memory usage did not change significantly from the duplicate resource case! So, what exactly is going on here? This page ends up loading four additional resources:

File              HTTP Status  Size   MIME Type
all.js            304          143KB  application/x-javascript
login_status.php  200          58B    text/html
like.php          200          33KB   text/html
like.php          200          33KB   text/html

That is 209KB of HTML and JavaScript being sent down for two like buttons. Something tells me that part of the problem here is that Facebook is sending more than it needs to (I did not look into exactly what was being sent). The good news is that the 143KB of JavaScript comes from the browser’s cache (hence the 304).

Send Button

The last test I did was the send button pointing to my website.

[Image: send button test results]

Given that the like button test includes a send button as well, I’m not surprised to see that this used even less memory.

Summary

I think there are two problems here:

  1. Firefox should create fewer shapes and do a better job of not duplicating the same JavaScript code in a given compartment.
  2. Facebook needs to send less data down for its social plugins. I have a hard time believing that that much JavaScript is needed in order to display a like button, a share button, and the faces of friends who have liked a page.

It’d be interesting to see how these numbers change when you are logged in, but I don’t have time to do that analysis. I’ve provided all the code and steps I used to get these results, so it shouldn’t be too hard for someone else to come along and do that if they are interested. Another interesting test would be to see how the Twitter and Google+ integrations break down too (but I leave that as an exercise for the reader).

I’ll assert that one of the biggest pain points of getting a patch into the tree is getting review. This is largely a problem because it can often take a very long time to get that review from the limited set of people who are allowed to review your patch. Even a veteran contributor who knows his way around the system can have his patches sitting in somebody’s review queue for weeks.

I’ve decided I’m going to do something about this by setting a good example, and hope that others follow. For the past two weeks, I’ve been driving my review queue down to zero almost every day. This sounds like a lot of work (especially for anyone who gets a lot of review requests), but I’ve found that in practice it isn’t so bad. I usually start every morning by going through all my bugmail before I dig into my day’s work. I’ve simply added a step after this where I do code reviews until about noon. This often doesn’t take that long: most patches I get aren’t terribly large or terribly complicated, or there simply aren’t many to review. If I don’t get through all of the review requests, I set aside an hour at the end of my day to go through the rest. In theory, this means my turn-around time for review requests is one business day. In practice, it has come out closer to two or three days due to a number of large patches I’ve had to review.

You might say “well, that’s easy to do if you don’t get many review requests!” This is very true, but it’s not like I get a small number of requests. Over the past two weeks I’ve done review and feedback requests for 24 bugs, many of which required more than one review iteration. I don’t think that’s a small number of reviews (at least two per day).

I have one caveat with this, though: I don’t want the quality of my reviews to degrade. For most patches, I’ve found that if I do any more than five or six a day I start to miss things because I’m too mentally drained at that point. If I have some really large patches I’ll review less, and if it is a bunch of small patches, I’ll review more. Sacrificing code quality for the sake of fast turn-around times isn’t ideal, in my opinion.

I would love it if more people tried this out to see if it works for them. The people requesting review from you would really appreciate it!

Back in December when I was working on the Places branch, I was using Xperf quite a bit to try and figure out why we were regressing certain performance tests. Xperf is an incredibly powerful tool, but it’s really hard to get it to do what you want sometimes. This is largely due to the fact that the documentation isn’t great, and there appears to be wrong information on the Internet (from blogs, which tend to be more useful than the documentation).

I cared about getting information about hard faults, file I/O, and disk I/O to see if patterns changed with my work. In order to accomplish this, I started Xperf like this (from an admin console):
xperf -start "NT Kernel Logger" -on PROC_THREAD+LOADER+HARD_FAULTS+FILE_IO+FILE_IO_INIT+DISK_IO -stackwalk FileWrite+FileRead+FileFlush -MaxBuffers 1024 -BufferSize 1024 -f output.etl

The options after -on are a list of kernel providers to enable. To see the list of installed kernel providers on your system and what they do, open a command prompt and run xperf -providers K.

The options after -stackwalk tell Xperf to capture call stacks for certain events. The full list of supported events can be found here. I found it very difficult to get stacks, and if you are on a 64-bit version of Windows, you have an extra hoop to jump through (see the registry tweak below). I was actually never able to get stacks from an optimized build with symbols, so I ended up just using debug builds when I needed stacks, and opt builds when I wanted good numbers for everything else.

The -MaxBuffers and -BufferSize arguments were useful to prevent events from being dropped (if Xperf doesn’t have enough memory set aside to record an event, it just drops it). You can tweak those values to your needs, but the values I used should be fine.
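
If I recall correctly, the extra hoop is that 64-bit Windows pages out kernel code by default, which breaks stack walking; a registry tweak along these lines (run from an admin console, followed by a reboot) deals with it:
rem Keep kernel code out of the page file so xperf can walk stacks on 64-bit Windows (reboot required)
reg add "HKLM\System\CurrentControlSet\Control\Session Manager\Memory Management" /v DisablePagingExecutive /t REG_DWORD /d 1 /f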

Once I ran that command in my console, I’d perform the test I wanted to get data on. Upon completion, Xperf needs to be told to stop and then to merge the results (merging may be unnecessary when you only use kernel providers, but I never tested that theory):
xperf -stop "NT Kernel Logger"
xperf -merge output.etl output_final.etl

You can now open output_final.etl to examine the data you just gathered!

Erik Dahlström felt compelled to respond to limi’s post about why Firefox will not pass the Acid3 test. Erik’s post claims to bust two myths:

  1. You need to implement the whole SVG 1.1 Fonts chapter to pass Acid3
  2. SVG Fonts and foreignObject support are both required to pass Acid3

Let’s be clear here: neither of these statements was ever made by limi, or by bz, whose quote is used in both posts. Erik used only the parts of bz’s quote that helped his argument, and conveniently left out this bit:

We don’t particularly want to do that small subset in Gecko, since it gives no benefits to authors or users over the existing downloadable font support (beyond the brownie points on Acid3).

On the other hand, support for the full specification in a UA that also supports HTML is… very difficult. SVG fonts are just not designed with integration with HTML in mind. Once you put an <iframe> in a glyph, all sorts of issues arise — both in terms of the spec being underdefined and in terms of the behavior being very difficult to implement, no matter what the spec said.

The astute reader might notice that Erik used a straw man argument to disprove limi’s post. Luckily, it looks like not everybody on the Internet has fallen for this logical fallacy.

Sometime soon after the beta 8 code freeze, the Places team will be merging the Places branch into mozilla-central. There are a lot of changes we’ve been working on, the most important of which is some major re-architecting of how we store data.

The Benefits

The work on the Places branch brings us a number of benefits. In general, we’ve parallelized work, and made it substantially less likely that we’ll block on the GUI thread. Some of the important fixes we have landed are:

  • Faster Location Bar
    The location bar is faster because other database work no longer blocks us from searching, and the queries are much simpler.
  • Asynchronous Bookmark Notifications
    Indicating if the current page is bookmarked in the location bar (with the star) is now an asynchronous operation that does not block the page load.
  • Faster Bookmarks & History Management/Searches
    Simpler queries and other improvements should make this all work faster.
  • Faster Link Coloring
    Link coloring is now executed on a separate database connection so it cannot block other database work.
  • Expiration Work
    Less work at startup, less work at shutdown, and less work when we run expiration.
  • Less Data Stored
    Embedded pages are now tracked only in memory and never hit the disk.
  • Better Battery Management
    Much less work during idle time, which will improve our power consumption behaviors.
  • Fixes 29 blockers and 18 other issues

A Bit of History

Way back in the days leading up to Firefox 3.5, we moved from storing all of our history and location data in disk tables to in-memory tables that we’d flush out to disk every two minutes off of the GUI thread. The benefit of this was two-fold:

  1. No longer performing the vast majority of our disk writes on the GUI thread
  2. No longer performing the vast majority of our fsyncs/Flushes on the GUI thread

More details about how we came up with this solution can be found in a series of blog posts.

The Problem

This solution worked out pretty well for us for a while, but recently, especially on OS X, it has not. The short story is that our architecture did not scale well due to lock contention between our GUI thread and our background I/O thread. While the common access case may be fine, the failure case (when we hit lock contention) is pretty terrible. The problem is so bad that I once described it like this:

the failure case makes us fall on our faces, skid about 100 feet, and then fall off a cliff without a parachute.

Ultimately, the only way we can avoid this situation is to not do any database work on separate threads with the same database connection. It was not an issue in the past because we just did not do enough work on the I/O thread, but as we have added to that thread’s workload, we have increased the likelihood of it holding the lock, which means there is a higher probability that the GUI thread will not be able to instantly acquire the lock and do whatever it needs to do. This essentially leaves us with two options:

  1. Move the rest of our database work off of the GUI thread.
  2. Move database work from the I/O thread back to the GUI thread.

The Solution

The second choice is not actually viable. Disk I/O completes in a non-deterministic amount of time, which is why we have been moving it from the GUI thread to an I/O thread since Firefox 3.5. The first choice is not entirely viable either, due to schedule constraints (we have tons of API calls that are not used heavily but are still synchronous). A hybrid solution exists, however: we can reduce the amount of work we do on the I/O thread by using additional I/O threads, and we can move the remaining synchronous operations performed during browsing to an I/O thread. In the end, Places ends up with one read/write thread and multiple read-only threads.

This wasn’t really an option back in the Firefox 3.5 days because, in SQLite, readers and writers blocked each other. However, the SQLite developers recently devised a new journaling method called WAL (write-ahead logging) that lets readers not block writers, and writers not block readers. When the Places branch merges into mozilla-central, we will end up with three read-only I/O threads and our original read/write I/O thread. The three read-only threads are used for location bar searches, visited checks (is a given hyperlink visited), and some bookmark operations. Each I/O thread has its own connection to the database, allowing operations to happen in parallel (SQLite is only threadsafe because it serializes all access on each connection object, which is why we had the lock contention in the first place).

Performance Test Issues

One of the things that made this work especially difficult was seemingly random changes in performance numbers. We often had regressions suddenly appear (according to Talos) on changesets that should have had zero impact on performance, and then backing out the change would cause an additional regression. Other times, when we merged mozilla-central into Places, we would suddenly get new regressions when comparing against mozilla-central. This could be indicative of a bad interaction between our code and the changes on mozilla-central; however, after looking at the changes that landed on mozilla-central with the merge, that seemed highly unlikely.

I’m also quite certain that some of our performance tests do not actually test or measure what we want them to. I’ll leave that discussion for a future blog post, however.