MacAD.UK 2019

MacAD.UK is just a few weeks away, and I’m rather excited about the whole thing. I will be speaking on the second day about Practical CI/CD workflows for Mac Admins - a topic that I’ve wanted to speak about for quite some time. And of course I am hoping we will all be able to go for the now traditional curry one evening (I’m sure we can all agree, curry is the height of British cuisine).

Movember

Earlier this year I was diagnosed with testicular cancer. I’m one of the lucky ones: I caught it early, was fortunate enough to have excellent treatment, and am now in remission. Testicular cancer is the most common form of cancer in men under 40 years old - chances are either you will get it or you will know someone who will.

This year, I’m raising money for Movember. I’ve already raised an amazing amount, mostly due to the generosity of the Apple admin community. I’ve had to raise my target several times until I went big and set it at $10,000. I blasted through that before November even started, so thank you to everyone who donated!

If you haven’t donated yet, and also think testicular cancer is a bit shit, please find your way to donating either on the Movember site proper or on Facebook (who will very kindly eat the currency conversion fees if you don’t want to donate in USD).

Deploying a Munki repo in five minutes with Terraform

terraform-munki-repo is a Terraform module that will set up a production-ready Munki repo for you. More specifically, it will create:

  • An S3 bucket to store your Munki repo
  • An S3 bucket to store your logs
  • A CloudFront distribution so your clients will pull from an AWS endpoint near them
  • A Lambda@Edge function that will set up basic authentication (sketched below)
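
To give a feel for that last piece, here is a minimal sketch of a CloudFront viewer-request handler that enforces basic authentication. The module ships its own Lambda@Edge implementation, so treat this as an illustration of the idea rather than the module’s actual code; the credentials and realm name below are placeholders.

import base64

# Placeholder credentials - in a real deployment these come from your
# Terraform configuration rather than being hard-coded.
EXPECTED_USER = "munki"
EXPECTED_PASSWORD = "correct-horse-battery-staple"


def lambda_handler(event, context):
    # CloudFront passes the inbound request to Lambda@Edge in this structure.
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    expected = "Basic " + base64.b64encode(
        "{}:{}".format(EXPECTED_USER, EXPECTED_PASSWORD).encode()
    ).decode()

    auth = headers.get("authorization", [])
    if auth and auth[0]["value"] == expected:
        # Credentials match: let CloudFront carry on to the S3 origin.
        return request

    # Otherwise challenge the client (Munki) for basic auth credentials.
    return {
        "status": "401",
        "statusDescription": "Unauthorized",
        "headers": {
            "www-authenticate": [
                {"key": "WWW-Authenticate", "value": 'Basic realm="Munki"'}
            ]
        },
    }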

Why?

A Munki repo is just a basic web server, but you still need to worry about setting up one or more servers, patching those servers, and scaling them around the world if you have clients in more than one country.

Amazon Web Services has crazy high levels of uptime - more than we could ever manage ourselves. CloudFront powers some of the world’s busiest websites without breaking a sweat, so it can handle your Munki repo without any trouble.

So it makes sense to offload the running of these services so we can get on with our day.

Read more →

Optimizing Postgres for Sal

Over time, you may notice your Sal install getting slower and slower - and the more devices you have checking in, the faster this will happen. You may even see ridiculous amounts of disk space being used - maybe even 1 GB per hour. This can all be solved by tweaking some simple maintenance settings on your Postgres server.

Background

Before we crack on with how to stop this from happening, it will be useful to know how Postgres handles deleted data.

Take the following table (this is a representation of the facts table in Sal):

id | machine_id | fact_name | fact_data
---+------------+-----------+----------
01 | 01         | os_vers   | 10.13.6
02 | 02         | os_vers   | 10.13.6
03 | 01         | memory    | 16 GB
04 | 02         | memory    | 4 GB

When a device checks into Sal, rather than asking the database which facts are stored for the machine, iterating over each one, working out which values need updating, which facts are missing, and which need to be removed, Sal simply instructs the database to delete all of the facts for that device and then save the new ones. What could potentially be 1000 operations becomes two, which is much faster.
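
In SQL terms, the check-in boils down to the sketch below. This is not Sal’s actual code - the connection details are placeholders and the table mirrors the simplified facts table above - it just shows how the per-fact bookkeeping collapses into one DELETE and one batched INSERT.

import psycopg2

def update_facts(machine_id, new_facts):
    # Connection string is a placeholder - point it at your Sal database.
    conn = psycopg2.connect("dbname=sal host=your-db-host user=sal")
    with conn, conn.cursor() as cur:
        # One DELETE covering every fact the machine previously reported...
        cur.execute("DELETE FROM facts WHERE machine_id = %s", (machine_id,))
        # ...and one INSERT per new fact, issued as a single batch, instead of
        # hundreds of per-row comparisons and updates.
        cur.executemany(
            "INSERT INTO facts (machine_id, fact_name, fact_data) VALUES (%s, %s, %s)",
            [(machine_id, name, value) for name, value in new_facts.items()],
        )
    conn.close()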

You would expect Postgres to delete the rows from the database at this point. Unfortunately that isn’t what happens: Postgres just marks the rows as dead tuples that can be cleaned up later. There are various good reasons for this outlined in the documentation which I won’t go into here, but when an application like Sal is updating and deleting data constantly, the disk usage can skyrocket.

After machine_id 01 has checked in:

id | machine_id | fact_name | fact_data
---+------------+-----------+----------
XX | XX         | XXXXXXX   | XXXXXXX
02 | 02         | os_vers   | 10.13.6
XX | XX         | XXXXXX    | XXXXXXX
04 | 02         | memory    | 4 GB
05 | 01         | os_vers   | 10.13.6
06 | 01         | memory    | 16 GB

As time goes on, these dead tuples will mount up. This is where the database’s maintenance tasks come in: they are supposed to come along and vacuum the tables, removing the dead tuples.
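
If you want to see whether this is happening to your own database, Postgres tracks live and dead tuple counts per table in pg_stat_user_tables. A quick sketch using psycopg2 (the connection string is a placeholder):

import psycopg2

# Connection string is a placeholder - point it at your Sal database.
conn = psycopg2.connect("dbname=sal host=your-db-host user=sal")
with conn.cursor() as cur:
    cur.execute("""
        SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
        FROM pg_stat_user_tables
        ORDER BY n_dead_tup DESC
        LIMIT 10
    """)
    for relname, live, dead, last_autovacuum in cur.fetchall():
        # A dead tuple count that dwarfs the live count means autovacuum is
        # not keeping up with Sal's delete-and-insert churn.
        print(f"{relname}: {live} live, {dead} dead, last autovacuum {last_autovacuum}")
conn.close()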

So what can we do?

Unfortunately, the default maintenance settings are basically useless for a workload like Sal’s. I am not going to go in depth about why I chose the following settings - I learned a lot from this post and adjusted their recommendations to meet our needs. My Postgres server is Amazon’s RDS, so the settings are entered in the Parameter Group for the database; if you are running a bare metal install, you will be editing the Postgres configuration. I have added a few notes next to each setting about why we chose the value we did. Our general goal was to have maintenance performed more frequently, so that each run has less work to do and takes less time, and to give the maintenance worker as many resources as possible so it completes as quickly as possible.

autovacuum_analyze_scale_factor = 0.01
# This means 1% of the table needs to change to trigger an automatic ANALYZE.
autovacuum_max_workers = 1
# The default is 3. We set this to 1 to allow maximum resources for each worker, so it can complete its work quickly and move on to the next table.
autovacuum_naptime = 30
# The delay between autovacuum runs in seconds. This is half the default - we want autovacuum to run as often as possible.
autovacuum_vacuum_cost_limit = 10000
# The 'cost' of autovacuuming is calculated from several factors (see the post linked above for a good explanation) - we want autovacuum to happen as much as possible, so this is high.
autovacuum_vacuum_scale_factor = 0.1
# The fraction of a table that can be dead tuples (10%) before an autovacuum is triggered.
maintenance_work_mem = 10485760
# The amount of memory to assign to maintenance, in kB. We have assigned ~10 GB, as we have lots of memory on our RDS instance and can spare it. It should be set to the maximum amount of memory you can spare, as maintenance will run much quicker if it can load more of the table into memory rather than having to read it from disk every time.
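
Once the new parameter group values have been applied, it’s worth confirming that the running instance has actually picked them up. A quick sketch using psycopg2 and SHOW (the connection string is again a placeholder):

import psycopg2

# Connection string is a placeholder - point it at your Postgres/RDS instance.
conn = psycopg2.connect("dbname=sal host=your-db-host user=sal")
with conn.cursor() as cur:
    for setting in (
        "autovacuum_analyze_scale_factor",
        "autovacuum_max_workers",
        "autovacuum_naptime",
        "autovacuum_vacuum_cost_limit",
        "autovacuum_vacuum_scale_factor",
        "maintenance_work_mem",
    ):
        # SHOW can't take a bind parameter, but the names above are a fixed list.
        cur.execute("SHOW " + setting)
        print(setting, "=", cur.fetchone()[0])
conn.close()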

Conference Talks (Summer 2018 Edition)

It’s been three long months since I gave a talk with Brett, my lovely coworker, at MacAD.UK, so it’s time to give some talks on the side of the pond on which I currently reside.

Firstly, I will be at MacDevOps:YVR on June 7th - 8th, where I will be joined by fellow beer snob Wes Whetstone to talk about Crypt - and probably about beer in the bar afterwards.

The next stop on my summer tour of places that aren’t the Bay Area will be PSU MacAdmins on July 10th - 13th. I’m speaking on July 11th at 3:15 PM in the snappily named room “TBA2”, where I will be peering into my completely made-up crystal ball to look at where managing Apple devices is going and how that will affect our roles as Mac Admins.

I’m looking forward to seeing you all at one or both of these great conferences, so you can all tell me I’m wrong in the bar ;)

If you haven’t got your tickets yet, here is a 20% discount link for MDO:YVR, and if you register by May 15th you can get $200 off your tickets for PSU.