We use AutoPkg to automatically download (and process into munki) updates to our commonly installed applications and internet plugins. One common practice is to run AutoPkg via Jenkins CI, but I have not taken the time to install and configure Jenkins, so I just run AutoPkg via a script triggered by a launch daemon.
Because I didn’t want to have to check munki periodically to see if AutoPkg had downloaded anything, I wrote a wrapper script which emailed me the output from each AutoPkg run, which in our environment happens at the top of each hour. After a weekend of getting hourly emails (which I had to browse to check for any updates), I decided there had to be a better way to be notified of AutoPkg’s work.
As I started thinking about my approach to this problem, I immediately thought that I wanted to be notified only when something was different from a ‘nothing downloaded, packaged or imported’ (the actual output from AutoPkg when nothing has been done) run. And how better to do that than to compare the output from the run against the output when ‘nothing (was) downloaded, packaged or imported.’
My solution was to save the output of an AutoPkg run when nothing was done, and on each subsequent AutoPkg run, check that run's output against the saved output. If there are differences between these two files (because an update was downloaded, packaged or imported, or because of a download error), the script emails me the current AutoPkg run's output. This way, when I get the email from the script, I know there's likely something that needs my attention.
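The core idea can be sketched in a few lines of shell (the paths, recipe names, and mail invocation here are my illustrations, not the actual wrapper script):

```shell
#!/bin/sh
# Mail the log only when this run's output differs from the saved
# "nothing downloaded, packaged or imported" baseline.

logs_differ() {
    # Success (0) when the two log files differ
    ! diff -q "$1" "$2" > /dev/null 2>&1
}

# Usage inside the wrapper (requires autopkg and a working mailer):
#   autopkg run -v AdobeFlashPlayer.munki MakeCatalogs.munki > /tmp/current.log 2>&1
#   if logs_differ "$HOME/Documents/autopkg/autopkg.out" /tmp/current.log; then
#       mail -s "autopkg needs attention" admin@example.org < /tmp/current.log
#   fi
```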
I’ve modified the script somewhat from my initial version, making it easier for other admins to customize for their environment. As you can see, there are three customization variables (recipe_list, mail_recipient, and autopkg_user) in the script. The only changes you’ll need to make to the script are in those three lines.
You’ll want to add whatever recipes you’re feeding to AutoPkg on this line. If you’re using munki, you’ll want to be sure to include MakeCatalogs.munki at the end of the list, of course. In the example, I’m only using the AdobeFlashPlayer.munki and MakeCatalogs.munki recipes, indicating that I want to download and import any Flash updates, then rebuild the munki catalogs.
This is the email address you want the change notification to be sent to.
This is the local user that will be running the autopkg-wrapper script. The default (nothing changed) log will be stored in this user’s Documents/autopkg folder.
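Put together, the customization section at the top of the script might look something like this (the values are placeholders for your environment, and the exact variable syntax is a sketch rather than the script's actual contents):

```shell
# Recipes to run; keep MakeCatalogs.munki last so catalogs rebuild after imports
recipe_list="AdobeFlashPlayer.munki MakeCatalogs.munki"
# Where the change-notification email goes
mail_recipient="macadmin@example.org"
# Local user the wrapper runs as; the baseline log lives in ~/Documents/autopkg
autopkg_user="autopkg"
```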
Download the script and install it in /usr/local/bin.
As you’re setting things up, run the script manually once; you’ll be prompted to run it with the initialize argument, which tells the script to run autopkg with your recipe list twice, saving the second run’s output to the default log location for later reference. You should also manually run the script with the initialize option whenever you change the recipe list, since that changes the baseline output.
Once you’ve run the script manually, you’re ready to set the script to run automatically, either via cron or launchd. Obviously Apple wants you to run things via launchd, but it will work just fine as a cron job. An example launchd plist file is available in my github repository. You’ll need to modify the plist to meet your needs. It assumes the following:
- the autopkg-wrapper.sh script lives in /usr/local/bin
- autopkg

If you’re using the launchd plist, be sure to load it via something like launchctl load /Library/LaunchDaemons/com.example.autopkg-wrapper.plist (or reboot your machine, which will force the plist to be loaded on boot.)
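For reference, a minimal hourly launchd plist would look roughly like this (the label, path, and schedule are illustrative; the author's actual plist is the one in his github repository):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.autopkg-wrapper</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/autopkg-wrapper.sh</string>
    </array>
    <!-- Fire at the top of every hour, matching the schedule in the post -->
    <key>StartCalendarInterval</key>
    <dict>
        <key>Minute</key>
        <integer>0</integer>
    </dict>
</dict>
</plist>
```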
That’s pretty much it. Modify the three variables, load the launchd plist (or add to cron), and wait for the email notifications when AutoPkg finds and processes updates for you.
If you were in my presentation, or even if you weren’t, and would like a copy of my slides, I’ve posted them below.
As a reminder, I’ll have a blog post in the next couple of days with links to the various sources of information that I used to build our deployment system, as well as some expanded notes on why I designed things the way I did.
I’m offering the slides in two versions: PDF and keynote file (in case you want the multisite reposado demo video mainly… or in case you really liked the transitions.)
More information on multi-site reposado is also available.
If you attended my session, thank you. I hope that I offered something that will help you deploy your own modular deployment system. If you have any questions, you can reach me via twitter @seankaiser. Or, if you would like to contact me via email, my email is my first name -at- seankaiser.com.
Let’s say you work in an environment where you’re running reposado. Let’s also say that your environment consists of several locations with relatively slow WAN links between them. Additionally, let’s say that some of your users roam between locations, and before they move, they just put their MacBooks (or Airs or Pros) to sleep instead of shutting down (because who shuts their machine down every time they’re not using their machine?)
In an ideal world, you want to point the machine at the reposado server without having it download updates over the slow WAN link. You could run a reposado server at each location, but if you configure a machine to look at an onsite reposado server, the machine will likely move to another location before softwareupdate next checks for updates.
You’re running munki and have it set to install Apple software updates? Awesome. You could set the appropriate CatalogURL in your preflight script, but that means you have to maintain catalog files on several reposado servers, and who wants to do that? (Ok, you could just clone the master reposado server, including the catalog files, to get around that last part.)
But what happens if the user has the ability to install Apple software updates via Software Update from the Apple menu (or by running softwareupdate itself)? Their machine might still have their previous location’s CatalogURL set… Since /Library/Preferences/com.apple.SoftwareUpdate.plist doesn’t allow you to configure a PkgURL like munki does, everything goes to whatever server the catalog file defined by CatalogURL points at. And that’s the problem.
The workaround? You set up redirects on the master reposado server based on the client’s IP address. It seems simple, but I haven’t found any references to anyone else doing this. Interested? Great. Let’s set it up.
First of all, if you’re going to get this working, you’re going to have to clone your reposado server to a server at each of your different locations. Just copy the reposado/html/content folder to the other server(s) and set up apache on each server to point to the reposado/html folder as the root folder for the site.
I’m going to assume that you have probably already enabled mod_rewrite to handle the .sucatalog redirects so you can set one CatalogURL regardless of what OS the client machine is running. If you haven’t done that yet, I’ll wait for you to go do it. It’s that awesome. Seriously.
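A sketch of such an OS-based rule, along the lines of the examples in the reposado documentation (the Darwin version and catalog filename here are illustrative):

```apache
RewriteEngine On
# 10.8 clients report Darwin/12 in their user agent; hand them the merged catalog
RewriteCond %{HTTP_USER_AGENT} Darwin/12
RewriteRule ^index(.*)\.sucatalog$ /content/catalogs/others/index-mountainlion-lion-snowleopard-leopard.merged-1$1.sucatalog [L]
```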
Once you’ve got mod_rewrite enabled and your .htaccess file in place (in reposado/html), you need to configure the redirects for your different locations. Using a tool like Google’s IP address range tool, you can build your regular expression rules. You then copy those regular expressions into your .htaccess file, and it looks something like this:
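The original rules aren't preserved here, but the idea looks something like the following (the subnets and hostnames are made up; substitute your own locations' ranges and mirror servers):

```apache
RewriteEngine On
# Clients in the high school's range (10.1.0.0/16) pull packages from that
# building's mirror instead of crossing the WAN to the master server
RewriteCond %{REMOTE_ADDR} ^10\.1\.
RewriteRule ^(content/downloads/.*)$ http://reposado-hs.example.org/$1 [R=302,L]

# Middle school range
RewriteCond %{REMOTE_ADDR} ^10\.2\.
RewriteRule ^(content/downloads/.*)$ http://reposado-ms.example.org/$1 [R=302,L]

# ...one block per location; unmatched clients are served by this (master) server
```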
Seems simple, right? It is. It’s just a different way of thinking about things. And it resolves issues related to using different mechanisms to run software update.
(As an alternative, you could probably set up crankd, which is part of the pymacadmin project, to reconfigure your CatalogURL when the machine wakes from sleep or changes networks, but since I haven’t set up crankd yet, I can’t offer any guidance on that.)
In part 1 of this series, I provided an overview of what I now call “The Big Project.” Part 2 talked about the importance of inventorying. This article is the first of a series of more detailed technical articles describing various aspects of the project and the tools we used to pull it off. First up…
We’ve been using DeployStudio for several years to help us image machines. In the (relatively distant) past, we restored full monolithic images to machines, new or old. Over the past almost two years, we’ve switched to the thin imaging model. Thin imaging preserves the existing contents of a machine’s hard drive and deploys only site-specific applications, or the deployment tools that install those applications and other files, minimizing the time needed to get a machine ready for a user. In our case, we deploy our deployment tools (puppet and munki) and some basic settings. This cuts down on the deployment cost, in both time and bits traveling across the network.
In situations where we need to reimage a machine, we have a workflow that lays down an InstaDMG created vanilla image and then runs the normal thin imaging workflow.
When we “Northmont-ize a new machine” (the name of our thin imaging workflow), DeployStudio performs the initial basic setup of our machines. Everything is rather basic at this point. The workflow does the following:
All of our workflows run as “postponed installations”. Is this necessary? For some steps, such as installing packages, it seems to work better. Since I’m doing a postponed install for some things, I decided to do it for all steps.
If we do actually deploy a new image, it’s usually because something has gone wrong on the machine or we’ve simply replaced the hard drive. I’ve used InstaDMG to create vanilla 10.6, 10.7, and 10.8 images. Since we have 6 elementary schools, a middle school, a high school, a service center (where my department’s offices are), and the administrative office, each with their own LAN and only a 25 Mbps WAN link back to the datacenter, I’ve implemented a modified version of a strategy I discovered a couple of years ago: a central DeployStudio server with local image repositories. (I know DeployStudio now offers a synchronization process, but I’ve never looked into it. This process works well for us. And, remember from part 1, one of the steps I had to complete to start this project was to update DeployStudio and its NetBoot image… I doubt the version we were running prior to a month and a half ago offered synchronization.)
So, our complete reimage workflow looks like this:
We wanted our machines to be usable as soon as we deployed them so as to cut down on the amount of time we were in the schools. (In one case, we actually replaced a school’s computers as the students cycled through lunch, which at that school was about an hour and a half.)
Since our offices are on the second floor in our building, and although our server room has a garage door that allows our warehouseman to forklift pallets of equipment through, we had no intention of moving 240+ 5-pack boxes of MacBook Pros any more than we really needed to. We commandeered a couple of tables near the staff mailboxes downstairs between our storage “cage” and the warehouse, dropped a couple of extension cords and a patch cord (offering a gigabit connection) from the office space above, and set up an imaging bench with an HP switch, two power strips, and 10 Magsafe power adapters. One of my coworkers had the idea of taping the power and patch cords to the tables so they didn’t slide around. Initially I wasn’t sure this was necessary, but it turned out to be brilliant.
We could get 10 computers going at a time: boot them into DeployStudio, run the workflow, reboot to go through the firstboot process, reboot a second time, let puppet and munki run to install our settings and applications (which will be detailed in subsequent articles), then shut the computers down to rebox them, all in about 15-20 minutes per group. While one batch was running, we’d unbox the next batch and have them ready to move into place when the previous batch was finished.
Think about that… every 15-20 minutes, we had 10 computers ready to be deployed, with our applications, our users, our settings, etc. Repeat that process 120+ times, and you can understand how I was able to successfully Northmont-ize an iMac whose display was DOA. After a while, I could do that process in my sleep, knowing about how long to wait for each step of the process, how many down arrows to press to get to our workflow, etc. I did get a bit excited (probably overly so) when we heard the iMac reboot after the workflow ran, then reboot again a couple of minutes later when the first boot processes finished.
Another benefit of thin imaging and using tools like puppet and munki is that as things change, things need to be added, etc., you’ve got a system in place that will handle that, automatically. You don’t need to rebuild your image.
I’ll have dedicated articles on our puppet and munki configurations, a brief discussion on how I’m using luggage to package printer installers and repackage applications that don’t come with good installers, some tips on inventorying machines, and some scripts that I’ve written to tie everything together, and finally reposado, in that general order.
I was going to keep this article for near the end of the series, but then it might imply that the prep work wasn’t important. That would be wrong. It’s very important, and in our case just as critical as the deployment tools themselves. I’d guess this is (or should be) the case everywhere.
Any time you deploy a machine, whether it’s one machine or (in our case) part of a 1300+ machine deployment, you need to add the machine to your inventory system. In our case, our inventory system is our help desk (we run Web Help Desk.) As I’ll describe in later articles, many of the processes in our deployment workflow refer to the help desk and custom asset fields so we can have a dynamic configuration without having to edit files on individual machines.
Before we started this project, I asked my Twitter followers if anyone had recommendations for barcode scanners. I got a couple of responses that folks used the Symbol barcode scanner that is recommended for use with Apple’s GSX (it’s the Symbol DS6707 if you’re interested.) We immediately bought this scanner because, well, we had no intention of manually entering serial numbers or ethernet and wifi addresses into the help desk. This guaranteed that there were no typos as well. Typos are bad when you depend on the data that the OS gives you to look up settings. Also, if you’ve ever looked at an Apple barcode, it’s sometimes hard to differentiate a B from an 8, or an S from a 5. But a barcode scanner doesn’t have this problem. If you don’t have a barcode scanner and have trouble reading Apple’s 4 point font, invest in one.
Using the exported asset template from Web Help Desk, one of my coworkers created an import file with all of the information we intended to import (asset numbers, room numbers, building, assigned client, etc.), except for the information from the barcodes. Two of our other coworkers handled scanning the boxes (thank you Apple for putting all three pieces of information on the outside of the MacBook Pro 5-pack boxes), applying the inventory asset tags to the machines, and labeling the boxes with the asset numbers they contained. Once they had scanned all of the machines for an individual building, they forwarded the file to me. I cleaned up some of the data (mainly taking the leading S off the scanned serial numbers, and removing spaces from and lower-casing the ethernet and wifi addresses) and imported the file into Web Help Desk.
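That cleanup step is easy to script. A sketch of the two transformations described above (the helper names are mine, not the author's actual process):

```shell
# Strip the leading S that Apple's barcode prints before the serial number
clean_serial() { printf '%s\n' "$1" | sed 's/^S//'; }

# Remove spaces from and lower-case a scanned ethernet/wifi address
clean_mac() { printf '%s\n' "$1" | tr -d ' ' | tr 'A-Z' 'a-z'; }
```

For example, clean_serial SC02ABC123 yields C02ABC123.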
Now we were ready to start “imaging” the machines in DeployStudio. That’s a topic worthy of its own article.
Deploy 1170 MacBook Pros (later amended to 1250), 60 iMacs, 214 Apple TVs, 106 (full sized) new iPads, and 30 iPad minis. Relocate 100 or so “newer” (one or two year old) machines into different locations, either in different classrooms in the same building or to a different building. Retrieve nearly 1700 old machines (iBooks, eMacs, iMacs, Mac Minis, MacBooks) back to the warehouse for disposal. And do all of this in roughly 3 weeks (once the computers started arriving.)
Under any normal circumstances, this would be a huge summer project that we’d start planning around this time of the year. But this project was on the fast track. For various reasons, we were going to do this during the school year. The vast majority of devices would be ordered in two waves, the first in mid-November and the second a week or two later (with the additional 80 MacBook Pros ordered in early December), and we’d start deploying machines ASAP because of limited warehouse space. And, we’d try to have everything deployed before winter break was over. Although I was involved with the later stages of deciding what computers we were buying and in what quantities, most of the rest of our team hadn’t been involved, and didn’t even know about the project until around the time that the first order went in. (We didn’t want too many people knowing about the project until it had official board approval.) Our team (our supervisor, 3 technicians, our administrative assistant and myself) had our work cut out for us.
Inventorying all of the new equipment, then physically removing the old computers and placing new 13” MacBook Pros would be a relatively easy process. The real challenge was getting our deployment tools updated and in place in order to get 1400+ computers ready to deploy (or redeploy.) We’d already bought into the “thin imaging” mindset (adding management tools to a new machine and using those tools to install applications, etc., versus building a whole new image with necessary applications included) via deploystudio. We had been using puppet to do our software installs, but we hadn’t been very diligent in keeping the software that was installed on existing machines updated (we had only recently pushed Firefox 15 out to machines that had been running 3.6.28, for example.)
I’ve been to several conferences and learned about munki, and wanted to start using it, but had never taken the time to get something set up beyond using puppet to install munki on a test machine. Now I had to get a fully functioning munki system set up, and I had to do it quickly. I had to change our puppet configuration to manage the state of the computers (things like ensuring remote login was enabled, certain users existed on the computers, etc.) and move software installation over to munki. I also had to change the scripts that I had written for puppet to look up information from our help desk to return valid data for munki (while still returning valid data for puppet.) Based on the responses on a recent staff survey, we wanted to give staff the ability to install and update software without needing help from someone in our department. We wanted to leverage munki’s optional software feature to make software available and let the end users decide if it was something they want to use. Because munki doesn’t require a password to do installations/updates, our end users will now be able to do their own installations and updates (at least as much as we make available to them.)
Since we hadn’t purchased very many computers recently, our deploystudio netboot set was woefully out of date (it was still built on Mac OS X 10.6.4, I believe), so it wouldn’t be able to boot any of these new machines. I’d have to get that updated, too. Fortunately, we had a few MacBook Pros from the same hardware family, so I was able to get a jump start and didn’t have to wait for the first order to arrive; it’s not a big deal to build (or update) a base deploystudio netboot image anyway. The real work is in the workflows, and I’d have to update those to start installing munki and some other stuff.
We don’t want to fall behind on updates again, so in addition to using munki to handle application updates, I’m working to get reposado set up to handle Apple software updates. This is the one part that isn’t quite done, mainly because of how I want things to work.
There were some other situations that required that some custom code be written. One of these was that some of the computers (the 80 that were added at the end of the project) needed VPN connections to be configured. In an effort to be as hands off as possible in the configuration process, I needed to have a way to do that.
So that was the project. No big deal… ha. This was by far the biggest project our department had ever undertaken in the 12+ years I’ve been here. And we had to be successful, or we’d have everyone questioning why we were doing this during the school year.
Over the next few posts, I’ll describe how I configured deploystudio, puppet, and munki to work for us. I’ll throw in some luggage discussion since I’m using munki to deploy printers and needed to repackage some applications as well. Once it’s done, I’ll describe our reposado configuration as well. So stay tuned… this was a fun project (I can say it now that it’s essentially done and we’re starting to get our heads back above water) and I’m looking forward to sharing our experience.
For our Google Apps implementation, we are using two different domains: one for staff and one for students. This was recommended to us by others, and we already owned the domains, so it made sense. The problem with this approach is that the user directory that allows users within a domain to simply type in a user’s name and then select their address from a list doesn’t work when some users are in one domain and the rest are in the other. Sure, each user could create the contacts in their personal contacts list, but for a teacher to create a new contact for each of their students would take considerable time. They’d also have to have access to the user list to know what the student email addresses are.
Google provides APIs to allow 3rd party scripts and solutions to interact with the domains. As we were setting up the domains, I remembered seeing something about a Shared Contacts API. Yesterday I started looking into what this API could do to help us solve the cross-domain contact issue. I found a Google Code project called Google Shared Contacts Client (or gscc for short for the rest of this document.) This python script lets you interact with the domain’s shared contacts.
To get started, you’ll need to follow the installation instructions. They’re simple. Be sure to install the GData Python client library, or nothing will work.
Once you’ve got things installed, you will want to export your domains’ contact directories. From the gscc directory, you’ll want to run the command
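The original command didn't survive here, but based on the surrounding text the invocation looks roughly like this (the script name and flag spellings are assumptions; check the gscc documentation for the exact usage):

```shell
# Hypothetical invocation: export one domain's shared contacts to CSV
python google_shared_contacts_client.py \
    --admin=admin@staffdomain.example --export=staff_contacts.csv
```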
(You’ll need to change the administrative account and the export file name as you see fit.)
You’ll be prompted for your account password. This will generate a CSV file containing the specified domain’s users with their various email addresses and aliases/nicknames. Now, run the same command again, but specify an administrative account in your other domain, and export it to a different file.
The first couple of fields in each record are the action and id fields. The export files are meant to update existing contacts, rather than add them to the directory. As exported, the action is always update, and the id is the username. When adding a record to a domain, the action should be add (hopefully I didn’t just lose you there.) As I discovered today, when you’re adding a contact, the id field must be empty. How you choose to change this is up to you. You could open the file in Excel (yuck) to do a find & replace to change update to add, then clear the id column out. Or you could run the command
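A sketch of such a sed one-liner (the column layout and filenames are assumptions; the real export may have more fields):

```shell
# Hypothetical export record layout: action,id,name,email
printf 'update,jsmith,John Smith,jsmith@staff.example.org\n' > contacts_export.csv

# Flip the action to "add" and blank the id (gscc requires an empty id on add).
# Assumes plain lower-case usernames; extend the character class otherwise.
sed -E 's/^update,[a-z]+,/add,,/' contacts_export.csv > contacts_add.csv
```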
(This command assumes that your usernames are all lower case and don’t have special characters or numbers. If that’s not the case for you, you’ll need to change the sed command.)
Do this for both files.
The final step is to import the contact directory into the other domain. It’s simple. From the gscc directory, run the command
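The import run is the mirror image of the export (again, the script name and flags are assumptions; verify against the gscc documentation):

```shell
# Hypothetical invocation: import domain1's cleaned-up export into domain2
python google_shared_contacts_client.py \
    --admin=admin@studentdomain.example --import=staff_contacts_add.csv
```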
(4/12/2011: fixed above command to reflect --import instead of --export)
Run the command again and switch domain1 and domain2.
Once you’ve imported the files, there’s nothing to do but sit back and wait for the changes to appear in the Contacts list. Google states it can take up to 24 hours for changes to the Shared Contacts list to show up.
What happens if you’ve already loaded your domain’s contacts list, but get new staff or students? Maybe you want to add some addresses that are completely external to your domain. The easy way would be to create a CSV file that has the updated information in it. You don’t have to import a ton of empty fields. Create your file with the following fields (add additional fields if necessary, as specified in the gscc documentation):
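The field list itself didn't survive here, but based on the text a minimal file needs the action and id columns plus the contact data, something like this (the field names are assumptions; check the gscc documentation for the exact header):

```
action,id,name,email
add,,Jane Teacher,jteacher@staffdomain.example
```

Note the empty id field on the add row, as discussed above.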
Acceptable actions are add, update, and delete. You can mix and match actions in the same file. I suspect if you use the update or delete actions, you’ll need to have the id field included. The id field is shown in the output of the add process. You can either record it at that point, or you can run the export command and pull the id field from that file for the appropriate user(s).
My current focus is moving our district from First Class email to Google Apps for Education. In First Class, we provided email accounts for most staff, but no students. Each staff member had a limited amount of disk space available for their email and a simple website. Now, staff will have tons of space for email, virtually unlimited space for documents, and the ability to have multiple websites. On top of all of that, we’re providing accounts for our students to do the same thing. All of a sudden, instead of supporting around 650 or so email users, we’re going to be supporting nearly 7000 email, docs, etc. users. You know what the best part is? Our department has learned a ton, has come together like never before, and we’re pumped.
With that focus, most of my initial posts will be related to this migration. Over time, posts will shift from one technology to another, from desktop management to network infrastructure to who knows what, and back and forth. If my work inspires even one person, or even just helps save someone some time or their job, I’ll be happy. If not, I’ll still be happy. My life is good.