
Category Archives: Technology

Anything and everything dealing with technology.

I recently purchased a UniFi UAP-PRO for my home wireless. I chose it because it is commercial-grade hardware with good management software for a comparatively low price. It then occurred to me that I could take advantage of my DreamHost VPS, which I barely use, to host the controller software so I don’t need to keep it on any of my local computers. The EdgeRouter Lite makes it trivial to automatically point your access points to a controller in the cloud at a given IP address, so the hardest part was going to be getting the software running on my VPS.
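
For what it’s worth, on the EdgeRouter that boils down to a single DHCP option. This is just a sketch from memory, and the network name, subnet, and controller address below are placeholders for your own values:
configure
set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 unifi-controller 203.0.113.10
commit
save
exit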

Once I got onto a newer version of DreamHost’s VPS offering (I was on something running Debian 5 before I switched to one running Ubuntu 12.04), I had a bit of a rocky start. Some instructions I found online were outdated and had me install a very old version of the controller software. I was trying to import the settings from my local controller so I didn’t have to set everything up again, and that import wasn’t going to work with such an old version. I’ve got it working now, so I wanted to share the steps that worked for me, in the hope that nobody else has to go through the pains I did.

Step One: Get a newer version of MongoDB

We’ll want to get a newer version than what is installed by default, so simply follow the instructions from MongoDB (version 2.4).
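
For reference, MongoDB’s page at the time essentially had you add the 10gen APT repository and install from it, roughly like this (the key and repository line may have changed since, so defer to their instructions if they differ):
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list
sudo apt-get update
sudo apt-get install mongodb-10gen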

Step Two: Follow the release instructions to install the controller

As of this writing, 4.6.6 is the latest version. In the announcement thread for that version, search for “UniFi Controller APT howto”, and follow those instructions (skipping step two since we did that in step one from this blog post).
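
The gist of that howto is adding Ubiquiti’s APT source (plus their signing key, which the howto provides) and installing the unifi package. Roughly, with the repository line taken from the howto itself rather than from here:
echo 'deb http://www.ubnt.com/downloads/unifi/debian stable ubiquiti' | sudo tee /etc/apt/sources.list.d/ubiquiti.list
sudo apt-get update
sudo apt-get install unifi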

Step Three: Load our controller and import our config

I exported my local controller’s config (Settings -> Maintenance -> Download Backup Settings) before doing this next step. When we navigate to our server’s address (over https on port 8443), we’re given the option to import a config. Once we’ve imported it, the service will restart, and then we’ll be able to point our access points to our controller. Note: we can also create a completely new config.
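
If an access point doesn’t pick up the controller on its own, we can always point it there manually by SSHing into the AP and issuing a set-inform (the controller hostname below is a placeholder):
ssh ubnt@<ap-ip-address>
mca-cli
set-inform http://unifi.example.com:8080/inform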

Step Four: Set the Controller Hostname/IP

The last step is to open the Settings pane, click the Controller tab, and enter the hostname or IP address of our controller.

Back in December when I was working on the Places branch, I was using Xperf quite a bit to try to figure out why we were regressing certain performance tests. Xperf is an incredibly powerful tool, but it’s really hard to get it to do what you want sometimes. This is largely because the documentation isn’t great, and there is wrong information floating around on the Internet (from blogs, which otherwise tend to be more useful than the documentation).

I cared about getting information about hard faults, file I/O, and disk I/O to see if patterns changed with my work. In order to accomplish this, I started Xperf like this (from an admin console):
xperf -start "NT Kernel Logger" -on PROC_THREAD+LOADER+HARD_FAULTS+FILE_IO+FILE_IO_INIT+DISK_IO -stackwalk FileWrite+FileRead+FileFlush -MaxBuffers 1024 -BufferSize 1024 -f output.etl
The options after -on are a list of providers. To see the list of installed kernel providers on your system and what they do, open up a command prompt and run xperf -providers K.
The options after -stackwalk tell Xperf to get call stacks for certain events. The full list of events supported can be found here. I found it very difficult to get stacks, and if you are on a 64-bit version of Windows, you have an extra hoop to jump through (more on that below). I was actually never able to get stacks from an optimized build with symbols, so I ended up just using debug builds when I needed stacks, and opt builds when I wanted good numbers for everything else.
The -MaxBuffers and -BufferSize arguments were useful to prevent events from being dropped (if Xperf doesn’t have enough memory set aside to record an event, it just drops it). You can tweak those values to your needs, but the values I used should be fine.
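
For reference, the 64-bit hoop is that Windows pages out parts of the kernel by default, which breaks stack walking; the usual fix is to disable that with a registry value and then reboot (run from an admin console):
REG ADD "HKLM\System\CurrentControlSet\Control\Session Manager\Memory Management" /v DisablePagingExecutive /t REG_DWORD /d 1 /f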

Once I ran that command in my console, I’d perform the test I wanted to get data on. Upon completion, Xperf needs to be told to stop and then to merge the output (merging may be unnecessary when you only use kernel providers, but I never tested that theory):
xperf -stop "NT Kernel Logger"
xperf -merge output.etl output_final.etl

You can now open output_final.etl to examine the data you just gathered!

Curtis just gave me this incredibly handy piece of code that highlights errors and warnings in make output. Now, when I’m building, all the warnings are highlighted in yellow and the errors in red. Just put the following in your bash profile script:

# Wrap make so diagnostics are colorized: errors in red, warnings in yellow.
make()
{
  # Matches the file:line prefix that compilers put in their diagnostics.
  pathpat="(/[^/]*)+:[0-9]+"
  ccred=$(echo -e "\033[0;31m")
  ccyellow=$(echo -e "\033[0;33m")
  ccend=$(echo -e "\033[0m")
  # Colorize the file:line portion of any line mentioning an error or warning.
  /usr/bin/make "$@" 2>&1 | sed -E -e "/[Ee]rror[: ]/ s%$pathpat%$ccred&$ccend%g" -e "/[Ww]arning[: ]/ s%$pathpat%$ccyellow&$ccend%g"
  # Return make's exit status (not sed's) so failed builds still report failure.
  return ${PIPESTATUS[0]}
}

Of course, improvements and more ideas are welcome! Thanks go to Curtis for this!

A little while ago I got interviewed by Anthony Bryan from the metalink project. I feel sorry for him because it took me several long months to actually find the time to sit down and talk to him. Anyway, you can check out the podcast here. They used an old Facebook photo of mine, so, uh, pardon the odd image of me. I figured it could be worse, though.

It’s worth a listen, though. I talk about some of the features of the new download manager (old news now, but he asked for this interview a while ago…), how I got involved with the Mozilla project, and a few other interesting tidbits, including what I’ve currently been working on. There are other interesting things in there too!

Well, WordPress has seen fit to release version 2.2. Now I have to decide if I want to upgrade now or wait a while. So, I ask you all this: is it worth upgrading, or should I stay on the 2.1 path for now (which is up to date, for any of you would-be hackers)?