With the help of tBunnyMan’s post, I managed to get the Chef DK running inside a jail on my FreeBSD box.

After you’ve done your initial setup in the jail, you’ll also want to set up sudo and allow anybody in the wheel group to have password-less sudo (you can modify the file by hand if you want to see what the sed below is doing):

# pkg install sudo
# sed -ie 's/#\(%wheel ALL=\)/\1/' /usr/local/etc/sudoers
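
If the sed matched, the wheel rules should now be uncommented. A quick check (the exact rule text varies between sudo releases, so edit the file with visudo if yours looks different):

# grep '^%wheel' /usr/local/etc/sudoers
%wheel ALL=(ALL) ALL
%wheel ALL=(ALL) NOPASSWD: ALL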

Now, create the user that will run the setup:

# adduser
Username: chef
Full name: chef
Uid (Leave empty for default):
Login group [chef]:
Login group is chef. Invite chef into other groups? []: wheel
Login class [default]:
Shell (sh csh tcsh nologin) [sh]:
Home directory [/home/chef]:
Home directory permissions (Leave empty for default):
Use password-based authentication? [yes]:
Use an empty password? (yes/no) [no]:
Use a random password? (yes/no) [no]: yes
Lock out the account after creation? [no]:
Username   : chef
Password   : 
Full Name  : chef
Uid        : 1001
Class      :
Groups     : chef wheel
Home       : /home/chef
Home Mode  :
Shell      : /bin/sh
Locked     : no
OK? (yes/no): yes
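
If you would rather skip the interactive prompts, pw(8) can do roughly the same thing in one shot; this is a sketch of the equivalent (the randomly generated password is printed to the terminal):

# pw useradd chef -m -G wheel -s /bin/sh -w random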

We need an older version of devel/gecode, so we have to downgrade it. This step will take a while if your CPU isn’t very fast.


# su chef
# cd ~
# sudo pkg install portdowngrade
# sudo portdowngrade devel/gecode r345033
# cd gecode
# sudo make deinstall install clean

We are not yet done with gecode, however. A pull request to dep_selector added a dependency on GECODE_VERSION_NUMBER, which isn’t properly defined in /usr/local/include/gecode/support/config.hpp, so we have to fix it.

# sudo sed -ie 's/\(#define GECODE_VERSION_NUMBER\)\s*/\1 300703/' /usr/local/include/gecode/support/config.hpp
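
A quick sanity check that the define now carries the value the sed wrote (300703 decodes to 3.7.3 in gecode’s version-number scheme, if I’m reading it right):

# grep GECODE_VERSION_NUMBER /usr/local/include/gecode/support/config.hpp
#define GECODE_VERSION_NUMBER 300703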

Almost there! Now we can install our other dependencies and check out the git repo.

# cd ~
# sudo pkg install ruby rubygem-bundler git
# git clone https://github.com/chef/chef-dk.git
# cd chef-dk
# USE_SYSTEM_GECODE=1 bundle install --without development
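
As a hedged sanity check that the bundle really built against the system gecode, try loading dep_selector (the gem that needed gecode in the first place) from the bundle; assuming it ended up in your bundle, this should print without raising:

# bundle exec ruby -e 'require "dep_selector"; puts "dep_selector loaded OK"'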

This will at least let you build the Chef DK. As I go further down this rabbit hole, I may end up putting up more posts on how I got Chef set up on FreeBSD.

I recently purchased a UniFi UAP-PRO for my home wireless. I chose it because it is commercial-grade hardware with good management software for a comparatively low price. It then occurred to me that I could take advantage of my DreamHost VPS that I barely use to host the controller software, so I don’t need to bother having it on any of my local computers. The EdgeRouter Lite makes it trivial to automatically point your access points at a controller in the cloud at a given IP address, so the hardest part was going to be getting the software running on my VPS.
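
For reference, on the EdgeRouter that boils down to a single DHCP option on the LAN scope. This is a sketch; the shared-network name, subnet, and controller address below are placeholders for my own setup, and if your EdgeOS version lacks the unifi-controller option you can hand out DHCP option 43 manually instead:

configure
set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 unifi-controller 203.0.113.10
commit
save
exit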

Once I got onto a newer version of DreamHost’s VPS offering (I was on something running Debian 5 before I switched to one running Ubuntu 12.04), I had a bit of a rocky start. Some instructions I found online were outdated and had me install a very old version of the controller software. I wanted to import the settings from my local controller so I didn’t have to set everything up again, and that import wasn’t going to work with such an old version. I’ve got it working now, so I want to share the steps that worked for me in the hope that nobody else has to go through the pains I did.

Step One: Get a newer version of MongoDB

We’ll want a newer version of MongoDB than the one installed by default, so simply follow the install instructions from MongoDB (for version 2.4).
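
For reference, the MongoDB instructions for Ubuntu 12.04 boiled down to roughly the following at the time; treat the repository line and key ID as from memory and defer to MongoDB’s page if they differ:

$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
$ echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
$ sudo apt-get update
$ sudo apt-get install mongodb-10gen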

Step Two: Follow the release instructions to install the controller

As of this writing, 4.6.6 is the latest version. In the announcement thread for that version, search for “UniFi Controller APT howto” and follow those instructions (skipping its step two, since we already installed MongoDB in step one of this post).
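
The gist of that howto is adding Ubiquiti’s APT source and installing the unifi package. The suite name and key ID below are illustrative only, so use exactly what the announcement thread specifies:

$ echo 'deb http://www.ubnt.com/downloads/unifi/debian unifi4 ubiquiti' | sudo tee /etc/apt/sources.list.d/ubnt-unifi.list
$ sudo apt-key adv --keyserver keyserver.ubuntu.com --recv C0A52C50
$ sudo apt-get update
$ sudo apt-get install unifi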

Step Three: Load our controller and import our config

I exported my local controller’s config (Settings -> Maintenance -> Download Backup Settings) before doing this next step. When we navigate to our server’s address (over https on port 8443), we’re given the option to import a config. Once we’ve imported it, the service will restart, and then we’ll be able to point our access points to our controller. Note: we can also create a completely new config.

Step Four: Set the Controller Hostname/IP

The last step is to open the Settings pane, click the Controller tab, and enter the hostname or IP address of our controller.

Dietrich recently posted about the memory usage of social plugins, and I found the results rather surprising because, at least in the case of Facebook, I didn’t think it ever loaded enough code to consume 20+MB of memory.

When I first learned about social plugins, I thought they were a really cool idea with a lot of potential. If they use a ton of memory, though, that feels like a deal breaker for using them. So, being the curious engineer that I am, I decided to test this out myself. I conducted these tests in a new Firefox profile, and I was not signed into Facebook (to try to replicate the experience Dietrich had).

One Like Button

For my first test, I had a very simple page for the default like social plugin pointing to my site.

[Image: like page result]

One like button doesn’t seem to add much, which is good!

Two Like Buttons

The next test I tried was duplicating the like button so it showed up twice. This code is a bit naive, since it duplicates a <div> element with the same id and includes the JavaScript twice when it doesn’t need to. However, it shows what someone who just copies and pastes will end up with, which I think is valuable.

[Image: like page (two button) result]

As you can see, memory usage nearly doubled. This is a bit surprising since the exact same JavaScript is included. I would expect no additional shapes, yet that number nearly doubles. scripts and mjit-code also double, and I would expect at least the latter not to.

A more interesting version of this test is to include the JavaScript only once and just add one additional <fb:like> button that likes a different URL.

[Image: two like button test results]

Interestingly, memory usage did not change significantly from the duplicate resource case! So, what exactly is going on here? This page ends up loading four additional resources:

File              HTTP Status  Size   Mime Type
all.js            304          143KB  application/x-javascript
login_status.php  200          58b    text/html
like.php          200          33KB   text/html
like.php          200          33KB   text/html
That is 209KB of HTML and JavaScript that is being sent for two like buttons. Something tells me that part of the problem here is that Facebook is sending more than it needs to for this (I did not look into exactly what was being sent). The good news is that 143KB comes from the browser’s cache.

Send Button

The last test I did was the send button pointing to my website.

[Image: send test results]

Given that the like button test includes a send button as well, I’m not surprised to see that this used even less memory.

Summary

I think there are two problems here:

  1. Firefox should create fewer shapes and do a better job of not duplicating the same JavaScript code in a given compartment.
  2. Facebook needs to send less data down for their social plugins. I have a hard time believing that that much JavaScript is needed in order to display a like button, a share button, and the faces of your friends who have liked a page.

It’d be interesting to see how these numbers change when you are logged in, but I don’t have time to do that analysis. I’ve provided all the code and steps I used to get these results, so it shouldn’t be too hard for someone else to come along and do that if they are interested. Another interesting test would be to see how the Twitter and Google+ integrations break down too (but I leave that as an exercise for the reader).

I’m going to write something that will probably surprise you. I say this because it sure surprised me when I realized I was even considering what I’m doing a possibility. I’m going to be moving on to something a bit different in the mobile space, and it’s going to be a different kind of challenge for me.

June 1st will be my last day at Mozilla. I’ve learned so much over the years working there, and choosing to leave was the hardest decision I’ve had to make. I don’t intend to disappear from the project, but my activity level will decrease. Feel free to continue to send review requests my way and cc me on bugs you want my opinion on, and I’ll do my best to reply in a timely manner.

So long, and thanks for all the fish.

I handle a lot of code review and code feedback requests these days. However, it’d be great to get more people doing this for a number of reasons:

  • More people exposed to more parts of the code base
  • More review bandwidth so more work can be checked into the tree
  • Less dependence on a small set of people

In order to get more people doing this, it would be good to document what to look for and how to make sure the code is sound. I’m sure every reviewer does things a bit differently, but I’m going to share my process. There are two types of review I do these days: feedback and review.

Feedback

Feedback is pretty simple to do, and I can usually fly through any patch (even large ones) quickly. This isn’t very thorough (in fact, I tend to keep it to general comments), but I look for the following things:

  • correct API usage (XPCOM, jsm, whatever)
  • internal invariants are not violated
  • any new APIs created make sense and aren’t confusing
  • code style matches what’s there, or follows the style guide

Review

Review is more important because once a patch gets r+, it can generally land in the tree. Consequently, I tend to spend a lot more time on any given patch. In addition to all the things I do for a feedback request (which are looked at more closely for a review), I’ll also look at the following:

  • evaluate how well tested the code being added/modified is. If it isn’t well tested, I’ll generally suggest a set of test cases that I feel are the bare minimum needed before this can land.
  • evaluate how this might impact other work going on in this area of the codebase
  • ensure this doesn’t add any I/O on the GUI thread
  • apply the patch and run the tests
  • if the patch looks like it might regress performance, ask the author to verify that it does not

Note that a number of these things may not be done if I know what the patch author has already done to ensure the patch is safe.