
The Big Project: Preparing the Machines for Deployment With DeployStudio


Previously on “The Big Project”

In part 1 of this series, I provided an overview of what I now call “The Big Project.” Part 2 talked about the importance of inventorying. This article is the first in a series of more detailed technical articles describing the various aspects of the project and the tools we used to pull it off. First up…

DeployStudio

We’ve been using DeployStudio for several years to help us image machines. In the (relatively distant) past, we restored full monolithic images to machines, new or old. Over the past almost two years, we’ve switched to the thin imaging model. Thin imaging preserves the contents of a machine’s hard drive as it shipped and deploys only site-specific applications, or the deployment tools that install those applications and other files, to minimize the time needed to get a machine ready for a user. In our case, we deploy our deployment tools (puppet and munki) and some basic settings. This cuts down on the deployment cost, both in time and in bits traveling across the network.

In situations where we need to reimage a machine, we have a workflow that lays down an InstaDMG-created vanilla image and then runs the normal thin imaging workflow.

The thin imaging workflow

When we “Northmont-ize a new machine” (the name of our thin imaging workflow), DeployStudio performs the initial setup of our machines. Everything is rather basic at this point. The workflow does the following:

  • set the time server to our internal NTP server. (This is the only non-automated step in the workflow. I left it manual so the technician has to confirm that the correct drive is selected as the target volume. All other steps are automated and use the previous task target option for the target volume.)
  • set the computer’s name via a query to our help desk
  • perform an anonymous bind of the machine to our Open Directory server
  • install puppet, facter, a basic puppet.conf file, a tweaked ruby puppet wrapper that I copied from Gary Larizza way back when he was still in the K12 education space at Huron, and a launchdaemon to keep things running on a schedule
  • create our default local user set (the users are actually created by payload-free packages built with InstaDMG’s CreateLionUser.) We create two admin accounts, one for the district tech staff to use and one for the in-building tech support staff to use. We also have a passwordless “classroom” account so that users can access the local machine in case the network or a server (either the file server or the Open Directory infrastructure) is down.
  • install munki
  • skip Apple’s setup assistant that runs for new machines and enable the ARD agent
  • set munki’s client_identifier via a query to our help desk
  • set softwareupdate’s CatalogURL to point to the central reposado server and, based on information gathered via a query to our help desk, configure it to use the test catalog
  • join the computer to the appropriate wifi network as specified in the help desk (if one is specified. Since the majority of our new computers are MacBook Pros, they all have a wifi network specified. The new iMacs and most of the relocated machines are all hardwired, so they don’t have a network specified, and the script just exits appropriately.)
  • perform a non-destructive partition of the hard drive into two partitions (after checking that the drive only has one partition.) The first partition is set to 250GB, with the second partition using the remaining space (which has to be calculated, since the diskutil command doesn’t know how to “use remaining space”.) The script then grabs the UUID of the new second partition and adds it to /etc/fstab so it mounts at /Users (after checking that this information isn’t already in the file.) It also creates a new Shared folder to match the one that’s in the /Users folder by default on an OS X machine. Having this partition in place gives the users of the machine a space that’s safe from future reimaging. Our teaching staff are using portable home directories (I know… bad word), so their local synchronized home folder sits on this “safe” partition. (A rough sketch of this step appears after this list.)
  • touch /Users/Shared/.com.googlecode.munki.checkandinstallatstartup to bootstrap munki (make munki run immediately upon boot instead of on its regular schedule.) This step, along with several of the settings steps above, is also sketched after this list.
  • reboot the machine (I have a reboot step because I’ve configured DeployStudio not to reboot after a workflow runs so we can run multiple workflows if necessary. The drawback of this is that computers never appear to finish in the Activity area of DeployStudio.)
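
To give a flavor of the partitioning step, here’s a minimal sketch of what that script boils down to. The disk device, the “UserData” volume name, and the exact fstab line are assumptions for illustration (and diskutil’s output format varies a bit between OS X releases), so treat this as the shape of the idea rather than our actual script.

```bash
#!/bin/bash
# Hypothetical sketch of the non-destructive repartition step. The device,
# sizes, volume name, and fstab line are assumptions for illustration.

DISK="disk0"
BOOT_SLICE="disk0s2"    # the existing OS X volume
SYSTEM_SIZE_GB=250      # space to keep for the system partition

# Only proceed if the drive currently has a single HFS+ partition.
if [ "$(diskutil list "$DISK" | grep -c Apple_HFS)" -ne 1 ]; then
    echo "Drive already has more than one partition; nothing to do."
    exit 0
fi

# diskutil can't "use the remaining space" here, so calculate it ourselves.
TOTAL_BYTES=$(diskutil info "$DISK" | awk -F'[()]' '/Total Size/ {print $2}' | awk '{print $1}')
SECOND_SIZE_GB=$(( TOTAL_BYTES / 1000000000 - SYSTEM_SIZE_GB ))

# Shrink the boot volume and create a second journaled HFS+ volume behind it.
diskutil resizeVolume "$BOOT_SLICE" "${SYSTEM_SIZE_GB}G" JHFS+ UserData "${SECOND_SIZE_GB}G"

# Add the new volume to /etc/fstab by UUID so it mounts at /Users, unless an
# entry is already there. (The slice number may differ if a Recovery HD exists.)
NEW_SLICE="${DISK}s3"
UUID=$(diskutil info "$NEW_SLICE" | awk '/Volume UUID/ {print $3}')
if ! grep -q "$UUID" /etc/fstab 2>/dev/null; then
    echo "UUID=$UUID /Users hfs rw 1 2" >> /etc/fstab
fi

# Recreate the standard Shared folder on the new volume.
mkdir -p /Volumes/UserData/Shared
chmod 1777 /Volumes/UserData/Shared
```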
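
The settings-type steps (computer name, client_identifier, CatalogURL, skipping Setup Assistant, enabling ARD, and the munki bootstrap file) are mostly one-liners. Here’s a hedged sketch; the help desk URLs, the admin account name, and the reposado catalog URL below are made up, but the commands themselves are the standard ones.

```bash
#!/bin/bash
# Hypothetical sketch of the settings steps. The help desk endpoints, admin
# account, and catalog URL are assumptions, not our actual configuration.

# Look the machine up in the help desk by serial number (hypothetical API).
SERIAL=$(system_profiler SPHardwareDataType | awk '/Serial Number/ {print $4}')
COMPUTER_NAME=$(curl -s "http://helpdesk.example.org/api/name?serial=${SERIAL}")
IDENTIFIER=$(curl -s "http://helpdesk.example.org/api/munki_id?serial=${SERIAL}")

# Name the computer.
scutil --set ComputerName "$COMPUTER_NAME"
scutil --set LocalHostName "$COMPUTER_NAME"
scutil --set HostName "$COMPUTER_NAME"

# Point Apple Software Update at the internal reposado server.
defaults write /Library/Preferences/com.apple.SoftwareUpdate CatalogURL \
    "http://reposado.example.org/index.sucatalog"

# Tell munki which manifest this client should use.
defaults write /Library/Preferences/ManagedInstalls ClientIdentifier "$IDENTIFIER"

# Skip Apple's Setup Assistant on first boot and turn on the ARD agent.
touch /var/db/.AppleSetupDone
/System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
    -activate -configure -access -on -users districtadmin -privs -all -restart -agent

# Bootstrap munki so it runs immediately at startup instead of waiting for
# its regular schedule.
touch /Users/Shared/.com.googlecode.munki.checkandinstallatstartup
```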

All of our workflows run as “postponed installations”. Is this necessary? For some steps, such as installing packages, it seems to work better, and since I was already doing a postponed install for some things, I decided to do it for all steps.

Full reimage workflow

If we do actually deploy a new image, it’s usually because something has gone wrong on the machine or we’ve simply replaced the hard drive. I’ve used InstaDMG to create vanilla 10.6, 10.7, and 10.8 images. Since we have 6 elementary schools, a middle school, a high school, a service center (where my department’s offices are), and the administrative office, each with its own LAN and only a 25 Mbps WAN link back to the datacenter, I’ve implemented a modified version of a strategy I discovered a couple of years ago: a central DeployStudio server with local image repositories. (I know DeployStudio now offers a synchronization process, but I’ve never looked into it; this process works well for us. And, remember from part 1, one of the steps I had to complete to start this project was to update DeployStudio and its NetBoot image. I doubt the version we were running prior to a month and a half ago offered synchronization.)

So, our complete reimage workflow looks like this:

  • run the local repository mount script (a rough sketch of the idea follows this list)
  • restore the vanilla image
  • run the northmont-ize workflow described above
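
For the curious, the local repository mount script is basically a “which building am I in, and where’s the nearest image share?” lookup that mounts that share where the restore task expects to find the vanilla image. Here’s a rough sketch of the idea; the subnets, server names, share name, and credentials are placeholders, not our actual configuration.

```bash
#!/bin/bash
# Hypothetical sketch of the "mount the local repository" step.
# Subnets, server names, share, and credentials are placeholders.

# Figure out which building we're in from the client's subnet.
SUBNET=$(ipconfig getifaddr en0 | cut -d. -f1-3)

case "$SUBNET" in
    10.1.1) REPO_SERVER="hs-images.example.org" ;;
    10.1.2) REPO_SERVER="ms-images.example.org" ;;
    *)      REPO_SERVER="central-images.example.org" ;;
esac

# Mount the building-local repository so the restore task pulls the vanilla
# image across the LAN instead of the WAN link back to the datacenter.
MOUNT_POINT="/Volumes/LocalDSImages"
mkdir -p "$MOUNT_POINT"
mount_afp "afp://imaging:imaging@${REPO_SERVER}/DeployStudio" "$MOUNT_POINT"
```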

The logistics of prepping 1200+ machines

We wanted our machines to be usable as soon as we deployed them, to cut down on the amount of time we were in the schools. (In one case, we actually replaced a school’s computers as the students cycled through lunch, which at that school was about an hour and a half.)

Our offices are on the second floor of our building, and although our server room has a garage door that allows our warehouseman to forklift pallets of equipment through, we had no intention of moving 240+ 5-pack boxes of MacBook Pros any more than we really needed to. We commandeered a couple of tables near the staff mailboxes downstairs between our storage “cage” and the warehouse, dropped a couple of extension cords and a patch cord (offering a gigabit connection) from the office space above, and set up an imaging bench with an HP switch, two power strips, and 10 MagSafe power adapters. One of my coworkers had the idea of taping the power and patch cords to the tables so they didn’t slide around. Initially I wasn’t sure this was necessary, but it turned out to be brilliant.

We could get 10 computers going at a time: boot them into DeployStudio, run the workflow, reboot to go through the firstboot process, reboot a second time, let puppet and munki run to install our settings and applications (which will be detailed in subsequent articles), then shut the computers down and rebox them, all in about 15-20 minutes per group. While one batch was running, we’d unbox the next batch and have it ready to move into place when the previous batch finished.

Think about that… every 15-20 minutes, we had 10 computers ready to be deployed, with our applications, our users, our settings, etc. Repeat that process 120+ times, and you can understand how I was able to successfully Northmont-ize an iMac whose display was DOA. After a while, I could do that process in my sleep, knowing about how long to wait for each step of the process, how many down arrows to press to get to our workflow, etc. I did get a bit excited (probably overly so) when we heard the iMac reboot after the workflow ran, then reboot again a couple of minutes later when the first boot processes finished.

Another benefit of thin imaging with tools like puppet and munki is that as things change or need to be added, you’ve already got a system in place that handles it automatically. You don’t need to rebuild your image.

What’s coming up

I’ll have dedicated articles on our puppet and munki configurations, a brief discussion on how I’m using luggage to package printer installers and repackage applications that don’t come with good installers, some tips on inventorying machines, some scripts that I’ve written to tie everything together, and finally reposado, in that general order.