Dietrich recently posted about the memory usage of social plugins, and I found the results rather surprising because, at least in the case of Facebook, I didn’t think it ever loaded enough code to consume 20+MB of memory.

When I first learned about social plugins, I thought they were a really cool idea with a lot of potential. If they use a ton of memory, though, that feels like a bit of a deal breaker for using them. So, being the curious engineer that I am, I decided to test this out myself. I conducted these tests in a new Firefox profile, and I was not signed into Facebook (to try to replicate the experience Dietrich had).

One Like Button

For my first test, I had a very simple page for the default like social plugin pointing to my site.
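
The page looked roughly like this. This is only a sketch based on the XFBML-style embed code Facebook documented at the time, not my exact test page, and the URL is a placeholder rather than my actual site:

```html
<!DOCTYPE html>
<html>
  <body>
    <!-- Container element the Facebook JavaScript SDK expects to find. -->
    <div id="fb-root"></div>
    <!-- Load the social plugin JavaScript (all.js) and have it parse XFBML tags. -->
    <script src="http://connect.facebook.net/en_US/all.js#xfbml=1"></script>
    <!-- The default like button, pointing at the page being liked (placeholder URL). -->
    <fb:like href="http://example.com/"></fb:like>
  </body>
</html>
```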

[Figure: like page result]

One like button doesn’t seem to add much, which is good!

Two Like Buttons

The next test I tried was duplicating the like button so it showed up twice. This code is a bit naive, since it duplicates a <div> element with the same id and includes the JavaScript twice when it does not need to. However, it shows what someone who simply copies and pastes the embed code will end up with, which I think is valuable.
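
The naive version looks something like this (a sketch only: the whole embed snippet pasted twice, including the duplicated fb-root <div> id and a second copy of all.js; placeholder URLs):

```html
<div id="fb-root"></div>
<script src="http://connect.facebook.net/en_US/all.js#xfbml=1"></script>
<fb:like href="http://example.com/"></fb:like>

<!-- Copied and pasted a second time: duplicate id, duplicate script include. -->
<div id="fb-root"></div>
<script src="http://connect.facebook.net/en_US/all.js#xfbml=1"></script>
<fb:like href="http://example.com/"></fb:like>
```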

[Figure: like page (two buttons) result]

As you can see, memory usage nearly doubled. This is a bit surprising since the exact same JavaScript is included. I would expect there to be no additional shapes, but that number nearly doubles. The scripts and mjit-code numbers also double, and I would expect at least the latter not to.

A more interesting version of this test is to not include the JavaScript twice, and to just add one additional <fb:like> button pointing to a different URL.
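
That version looks something like this (again a sketch, with a single script include and two <fb:like> tags pointing at different placeholder URLs):

```html
<div id="fb-root"></div>
<script src="http://connect.facebook.net/en_US/all.js#xfbml=1"></script>
<!-- Two like buttons, each liking a different (placeholder) URL. -->
<fb:like href="http://example.com/"></fb:like>
<fb:like href="http://example.com/blog/"></fb:like>
```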

[Figure: two like button test results]

Interestingly, memory usage did not change significantly from the duplicate resource case! So, what exactly is going on here? This page ends up loading four additional resources:

File              HTTP Status  Size   MIME Type
all.js            304          143KB  application/x-javascript
login_status.php  200          58B    text/html
like.php          200          33KB   text/html
like.php          200          33KB   text/html

That is 209KB of HTML and JavaScript being sent for two like buttons. Something tells me that part of the problem here is that Facebook is sending more than it needs to for this (I did not look into exactly what was being sent). The good news is that the 143KB of JavaScript is served from the browser’s cache (hence the 304 response).

Send Button

The last test I did was the send button pointing to my website.
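
The markup was the same shape as the first test, just swapping the like plugin for the send plugin (again only a sketch with a placeholder URL):

```html
<div id="fb-root"></div>
<script src="http://connect.facebook.net/en_US/all.js#xfbml=1"></script>
<!-- The send button social plugin, pointing at the page to share (placeholder URL). -->
<fb:send href="http://example.com/"></fb:send>
```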

[Figure: send button test results]

Given that the like button test includes a send button as well, I’m not surprised to see that this used even less memory.

Summary

I think there are two problems here:

  1. Firefox should create fewer shapes and do a better job of not duplicating the same JavaScript code in a given compartment.
  2. Facebook needs to send less data down for their social plugins. I have a hard time believing that that much JavaScript is needed in order to display a like button, a share button, and the faces of your friends who have liked a page.

It’d be interesting to see how these numbers change when you are logged in, but I don’t have time to do that analysis. I’ve provided all the code and steps I used to get these results, so it shouldn’t be too hard for someone else to come along and do that if they are interested. Another interesting test would be to see how the Twitter and Google+ integrations break down too (but I leave that as an exercise for the reader).

Erik Dahlström felt compelled to respond to limi’s post about why Firefox will not pass the Acid3 test. Erik’s post claims to bust two myths:

  1. You need to implement the whole SVG 1.1 Fonts chapter to pass Acid3
  2. SVG Fonts and foreignObject support are both required to pass Acid3

Let’s be clear here: these two statements were never made by limi, or by bz, whose quote is used in both posts. Erik used only parts of bz’s quote that helped his argument, and conveniently left out this bit:

We don’t particularly want to do that small subset in Gecko, since it gives no benefits to authors or users over the existing downloadable font support (beyond the brownie points on Acid3).

On the other hand, support for the full specification in a UA that also supports HTML is… very difficult. SVG fonts are just not designed with integration with HTML in mind. Once you put an <iframe> in a glyph, all sorts of issues arise — both in terms of the spec being underdefined and in terms of the behavior being very difficult to implement, no matter what the spec said.

The astute reader might notice that Erik used a straw man argument to disprove limi’s post. Luckily, it looks like not everybody on the Internet has fallen for this logical fallacy.

Sometime soon after the beta 8 code freeze, the Places team will be merging the Places branch into mozilla-central. There are a lot of changes we’ve been working on, the most important of which is a major re-architecting of how we store data.

The Benefits

The work on the Places branch brings us a number of benefits. In general, we’ve parallelized work, and made it substantially less likely that we’ll block on the GUI thread. Some of the important fixes we have landed are:

  • Faster Location Bar
    The location bar is faster because other database work no longer blocks us from searching, and the queries are much simpler.
  • Asynchronous Bookmark Notifications
    Indicating if the current page is bookmarked in the location bar (with the star) is now an asynchronous operation that does not block the page load.
  • Faster Bookmarks & History Management/Searches
    Simpler queries and other improvements should make this all work faster.
  • Faster Link Coloring
    Link coloring is now executed on a separate database connection so it cannot block other database work.
  • Expiration Work
    Less work at startup, less work at shutdown, and less work when we run expiration.
  • Less Data Stored
    Embedded pages are now tracked only in memory and never hit the disk.
  • Better Battery Management
    Much less work during idle time, which will improve our power consumption behaviors.
  • Fixes 29 blockers and 18 other issues

A bit of History

Way back in the days leading up to Firefox 3.5, we moved from storing all of our history and location data in disk tables to in-memory tables that we’d flush out to disk every two minutes off of the GUI thread. The benefit of this was two-fold:

  1. No longer performing the vast majority of our disk writes on the GUI thread
  2. No longer performing the vast majority of our fsyncs/Flushes on the GUI thread

More details about how we came up with this solution can be found in a series of blog posts.

The Problem

This solution worked out pretty well for us for a while, but recently, especially on OS X, it has stopped working well. The short story is that our architecture did not scale well due to lock contention between our GUI thread and our background I/O thread. While the common access case may be fine, the failure case (when we hit lock contention) is pretty terrible. The problem is so bad that I once described it like this:

the failure case makes us fall on our faces, skid about 100 feet, and then fall off a cliff without a parachute.

Ultimately, the only way we can avoid this situation is to stop doing database work on separate threads with the same database connection. This was not an issue in the past because we just did not do enough work on the I/O thread, but as we have added to that thread’s workload, we have increased the likelihood of it holding the lock, which means there is a higher probability that the GUI thread will not be able to instantly acquire the lock and do whatever it needs to do. This essentially leaves us with two options:

  1. Move the rest of our database work off of the GUI thread.
  2. Move database work from the I/O thread back to the GUI thread.

The Solution

The second choice is not actually a viable option. Disk I/O completes in a non-deterministic amount of time, which is why we have been moving it from the GUI thread to an I/O thread since Firefox 3.5. The first choice is not entirely viable either, due to schedule constraints (we have tons of API calls that are not heavily used but are still synchronous). A hybrid solution exists, however: we can reduce the amount of work we do on the I/O thread by using additional I/O threads, and we can move the remaining synchronous operations that happen during browsing to an I/O thread. In the end, Places ends up with one read/write thread and multiple read-only threads.

This wasn’t really an option back in the Firefox 3.5 days because, in SQLite, readers and writers blocked each other. However, the SQLite developers recently devised a new journaling method called WAL (write-ahead logging) that lets readers not block writers, and writers not block readers. When the Places branch merges into mozilla-central, we will end up with three read-only I/O threads and our original read-write I/O thread. The three read-only threads are used for location bar searches, visited checks (is a given hyperlink visited?), and some bookmark operations. Each I/O thread has its own connection to the database, allowing operations to happen in parallel (SQLite is only threadsafe because it serializes all access on each connection object, which is why we had the lock contention in the first place).
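
For illustration only, here is a rough chrome JavaScript sketch of the general idea using mozStorage. This is not the actual Places code; the SQL, callbacks, and connection names are made up for the example:

```js
// Rough sketch only; names and SQL are illustrative, not the real Places implementation.
const {classes: Cc, interfaces: Ci, utils: Cu} = Components;
Cu.import("resource://gre/modules/Services.jsm");

// Open the main read-write connection and switch the database to
// write-ahead logging so readers and writers no longer block each other.
let file = Services.dirsvc.get("ProfD", Ci.nsIFile);
file.append("places.sqlite");
let writer = Services.storage.openDatabase(file);
writer.executeSimpleSQL("PRAGMA journal_mode = WAL");

// Clone read-only connections. Each connection runs its asynchronous
// statements on its own background thread, so these reads can execute in
// parallel with writes happening on the read-write connection.
let autocompleteConn = writer.clone(true /* read-only */);
let linkColoringConn = writer.clone(true /* read-only */);

// Example: a visited check that never blocks the GUI thread.
let stmt = linkColoringConn.createAsyncStatement(
  "SELECT 1 FROM moz_places WHERE url = :url AND visit_count > 0");
stmt.params.url = "http://example.com/";
stmt.executeAsync({
  handleResult: function(aResultSet) { /* mark the link as visited */ },
  handleError: function(aError) { Cu.reportError(aError.message); },
  handleCompletion: function(aReason) { stmt.finalize(); }
});
```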

Performance Test Issues

One of the things that made this work especially difficult was seemingly random changes in performance numbers. We often had regressions suddenly appear (according to talos) on changesets that should have zero impact on performance, and then backing out the change would cause an additional regression. Other times, when we would merge mozilla-central into Places, we would suddenly get new regressions when comparing to mozilla-central. This could be indicative of a bad interaction between our code and the changes on mozilla-central; however, after looking at the changes on mozilla-central that landed with the merge, that appeared to be highly unlikely.

I’m also quite certain that some of our performance tests do not actually test or measure what we want to test or measure. I’ll leave that discussion to a future blog post, however.

This week, I spent some time looking at some real-life profiles that were sent to us by users seeing startup times in the minutes. The tests were run just like I ran the test on my profile: all add-ons disabled. The results I got are both good and bad, but first, the results!

Results

The first chart shows the raw test run data (which isn’t terribly interesting). The second compares the reported startup time for each test.

Conclusions

Like I said, the results are both good and bad. Good in that I now have a pretty good idea of why people have bad startup times. Bad in that we don’t have any way to quickly improve the issues that people are seeing. What I see from this data is that profiles in the wild, with add-ons disabled, aren’t much slower than a clean profile. This seems to implicate add-ons as at least part of the problem (which we knew), or possibly all of the problem at this point (for the profiles tested). The good news is that the add-ons team is already working on solutions to this, and you should expect some blog posts from them about it soon.

Next Step

Next week I’m going to spend some time getting numbers with these profiles on the latest release of Firefox 3.6, with and without add-ons disabled, to compare. This should pretty much confirm or refute my hypothesis from this week’s results.

News on the Past

In my last post, we looked at my profile with various pieces removed to try and figure out why startup might be slow for people. With those results, I identified two issues that would impact startup the most:

  1. Large cookies.sqlite
  2. Many tabs being restored

I also have good news about both of these issues! The cookies.sqlite issue is now fixed and will be a part of beta 4, and Paul has some good data about session restore and tabs (with more to come).

Recently, I started working on one of our Q3 goals to reduce “dirty” profile startup time to be only 20% of normal startup time. It is a pretty big and scary problem, and is probably the hardest problem I have ever worked on for the Mozilla project (I have had some doozies in the past too…). So far I have only spent time trying to get consistent results of startup time with a dirty profile (of the Firefox kind) so I can then compare profiles (of the program profile kind) to a clean profile (of the Firefox kind). For now, I am just using a copy of my own profile to get some data (no, you cannot have a copy of it).

Unfortunately, I immediately hit a bit of a speed bump. I think a picture best explains this:

[Figure: chart of dirty profile startup times]

This is a chart of the 20 runs talos does when measuring startup time. As you can see, startup time with the dirty profile kept increasing with each and every run. After about a day of investigating, the cause is finally known: tabs. It turns out that talos tries to quit the browser before calling window.close(), which results in the tabs not being closed. As a result, by the end of the 20 cycles, we were loading 19 more tabs than in the first cycle. I filed a bug about this behavior against talos. It does not matter now, but if we ever decide to change the default preference in Firefox to load your windows and tabs from last time, this will come back to bite us.

I did learn something useful out of all of this though: startup time scales linearly with the number of tabs session restore has to restore. I confirmed this by running talos with 200 cycles instead of the normal 20, and it was clearly a linear increase. We should probably figure out a way to mitigate that, but I have not filed a bug on it (yet!).