# Switching to nikola for blogging

## Why?

The key to me for blogging is to keep it simple. This at first meant static sites that rendered nicely, and this was a function fulfilled by Wintersmith.

## The Problems

The problem with Wintersmith was that it was a pure markdown solution. At the end of the day, it turned out that most of what I wanted to talk about probably had some type of interactive or graphical component to it, or I just wanted to be able to add photos and images easily.

None of this is easy with pure markdown.

## The Solution

The new solution for me here is Nikola, which seems to hit a new sweet spot of codeability versus low friction. Specifically, it supports rendering posts from JupyterLab notebooks, something that has just kept growing in usefulness for me.

## Perceived Benefits

Getting the transition from "jupyter notebook" to "blog" post down to as frictionless as possible feels important. When you want to share neat stuff about code, it helps to do it right next to the neat code you just wrote.

When you want to share data analysis - well that's a jupyter specialty. And when you want to show off something you're doing in your workshop, then at the very least pulling in some images with jupyter is a bit more practical and probably a lot more likely to be low enough effort that I'll do it - this last part is the bit that's definitely up in the air here.

## Workarounds

### Keeping media and posts together

The first problem I ran into with Nikola is that it and I disagree on how to structure posts.

Namely: out of the box, Nikola has the notion of separate /images, /files and /posts (or /pages) directories for determining content.

I don't like this, and it has a practical downside: when working on a Jupyter notebook for a post, suppose it requires data, or I'm using some tool which will do drag-and-drop images for me? What I'd like is to open that notebook - and just that notebook - in Jupyter or VSCode and work on just that file, including how I reference data.

Although this issue suggests a workaround, it has a number of drawbacks. But more importantly: one of the reasons we use Nikola is that it's written in Python, and better, its main configuration file conf.py is itself runnable Python.

This gives us a much better solution: we can just generate our post finding logic at config time.

To do this we need to find this stanza:

```python
POSTS = (
    ("posts/*.ipynb", "posts", "post.tmpl"),
    ("posts/*.rst", "posts", "post.tmpl"),
    ("posts/*.md", "posts", "post.tmpl"),
    ("posts/*.txt", "posts", "post.tmpl"),
    ("posts/*.html", "posts", "post.tmpl"),
)
```


This is a pretty standard stanza which configures how posts are located. Importantly: the wildcard here isn't a regular glob - all these paths act as recursive searches, with the directory names winding up in our paths (i.e. posts/some path with spaces/mypost.ipynb winds up as https://yoursite/posts/some path with spaces/mypost-name-from-metadata).

So what do we want to have happen?

Ideally we want something like this to work:

```
my-post/
|- my-post.ipynb
|- images/some-image.jpg
|- files/data_for_the_folder.tsv
```

and then on the output it should end up in a sensible location.

We can do this by calculating all these paths at config time in the conf.py file, to work around the default behavior.

So for our POSTS element we use this dynamic bit of Python code:

```python
import os

# Calculate POSTS so they must follow the convention
# <post-name>/<post-name>.<supported extension>
_post_exts = (
    "ipynb",
    "rst",
    "md",
    "txt",
    "html",
)
_posts = []
_root_dir = os.path.join(os.path.dirname(__file__), "posts")
for e in os.listdir(_root_dir):
    fpath = os.path.join(_root_dir, e)
    if not os.path.isdir(fpath):
        continue
    _postmatchers = [".".join((os.path.join("posts", e, e), ext)) for ext in _post_exts]
    _posts.extend([(p, "posts", "post.tmpl") for p in _postmatchers])

POSTS = tuple(_posts)
```

```python
PAGES = (
    ("pages/*.ipynb", "pages", "page.tmpl"),
    ("pages/*.rst", "pages", "page.tmpl"),
    ("pages/*.md", "pages", "page.tmpl"),
    ("pages/*.txt", "pages", "page.tmpl"),
    ("pages/*.html", "pages", "page.tmpl"),
)
```


Testing this - it works. It means we can keep our actual post bodies, and any supporting files, nicely organized.
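To make the convention concrete, here's a small standalone sketch (my own demo, not part of Nikola) of the same matcher-generation logic, run against a throwaway posts directory:

```python
import os
import tempfile

# Supported post extensions, matching the conf.py stanza.
post_exts = ("ipynb", "rst", "md", "txt", "html")

def build_posts(root_dir):
    """Build POSTS-style matchers following the
    <post-name>/<post-name>.<ext> convention."""
    posts = []
    for e in sorted(os.listdir(root_dir)):
        fpath = os.path.join(root_dir, e)
        if not os.path.isdir(fpath):
            continue  # stray files directly under posts/ are ignored
        matchers = [".".join((os.path.join("posts", e, e), ext))
                    for ext in post_exts]
        posts.extend((p, "posts", "post.tmpl") for p in matchers)
    return tuple(posts)

with tempfile.TemporaryDirectory() as tmp:
    os.makedirs(os.path.join(tmp, "my-post"))
    for entry, _, _ in build_posts(tmp):
        print(entry)
```

Each post directory yields one matcher per extension, so a single `my-post/` folder produces five candidate patterns and Nikola picks up whichever file actually exists.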

Now there are two additional problems: images and files. We'd like to handle images specially, because Nikola will do automatic thumbnailing and resizing for us in our posts - they're handled lossily - whereas files are not touched at all in the final output.

The solution I settled on is just to move these paths to under images and files directories adjacent to the posts respectively. This means that if the Jupyter notebook I'm using references data, it's reasonably well behaved.

For files we use this config stanza:

```python
FILES_FOLDERS = {'files': 'files'}

for e in os.listdir(_root_dir):
    fpath = os.path.join(_root_dir, e)
    if not os.path.isdir(fpath):
        continue
    FILES_FOLDERS[os.path.join(fpath, "files")] = os.path.join("posts", e, "files")
```


and for images we use this:

```python
IMAGE_FOLDERS = {
    "images": "images",
}

for e in os.listdir(_root_dir):
    fpath = os.path.join(_root_dir, e)
    if not os.path.isdir(fpath):
        continue
    IMAGE_FOLDERS[os.path.join(fpath, "images")] = os.path.join("posts", e, "images")
```


### Setting up the publish workflow

Nikola comes with an out-of-the-box publishing workflow for GitHub Pages, which is where I host this blog.

Since I've switched over to running decentralized with my git repos stored in syncthing, I wanted to ensure I only pushed the content of this blog and kept the regular repo on my local systems since it leads to an easier drafting experience.

I configure the GitHub publish workflow like so in conf.py:

```python
GITHUB_SOURCE_BRANCH = "src"
GITHUB_DEPLOY_BRANCH = "master"

# The name of the remote where you wish to push to, using github_deploy.
GITHUB_REMOTE_NAME = "publish"

# Whether or not github_deploy should commit to the source branch automatically
# before deploying.
GITHUB_COMMIT_SOURCE = False
```


and then add my GitHub repo as the remote named publish:

```shell
git remote add publish https://github.com/wrouesnel/wrouesnel.github.io.git
```

and then synchronize my old blog so nikola can take it over:

```shell
git fetch publish
git checkout master
# remove the old site contents and stage the deletions
git rm -rf .
git commit -m "Transition to Nikola"
git checkout main
```

and then finally just do the deploy:

```shell
nikola github_deploy
```

## Next Steps¶

This isn't perfect, but it's a static site and it looks okay and that's good enough.

I've got a few things I want to fix:

- presentation of Jupyter notebooks - I'd like it to look seamless with "writing things in Markdown"
- a tag line under the blog title - the old site had it, the new one should have it
- using nikola new_post with this system probably doesn't put the new file anywhere sensible - it would be nice if it did
- figure out how I want to use galleries

# Setting a separate encryption password and pattern lock on Android

If you run an older version of LineageOS (14.1 or so), then by using the cryptfs utility you can separate your device's pattern lock and boot password.

This is something you want to do. While the state of the art in security is going to belong to Apple for the foreseeable future, practical security for the everyday user can be achieved (sort of) in Android by ensuring that the password to decrypt your device's storage from a cold boot is much more complicated than the online pattern lock.

A human sitting there trying it is unlikely to break the pattern lock (or will actually power off the phone). Whereas someone looking to go farming your device for personal data might try to image it and break it offline.

For peace of mind then, we want to know that if the device is powered off, they're unlikely to break the initial login password.

Irritatingly, LineageOS makes this difficult.

Thankfully (if you trust the author) the cryptfs tool makes this easy...provided you know how to convert a pattern lock key into a password to do it.

## 3x3 Patterns

Look around the net and 3x3 patterns don't have a clear translation table.

However, there are not too many possibilities - and in fact the basic translation is simply left to right, top to bottom:

```
1 2 3
4 5 6
7 8 9
```


When using cryptfs, just convert your pattern to numbers using the above table. Simple right?

But I use a 4x4 pattern. What then?

## 4x4 Patterns

Always look at the code and think about it. Someone on StackOverflow did - but the code there is not correct for current LineageOS.

The real function in LineageOS is this:

```java
/**
 * Serialize a pattern.
 * @param pattern The pattern.
 * @return The pattern in string form.
 */
public static String patternToString(List<LockPatternView.Cell> pattern, byte gridSize) {
    if (pattern == null) {
        return "";
    }
    final int patternSize = pattern.size();
    LockPatternView.Cell.updateSize(gridSize);

    byte[] res = new byte[patternSize];
    for (int i = 0; i < patternSize; i++) {
        LockPatternView.Cell cell = pattern.get(i);
        res[i] = (byte) (cell.getRow() * gridSize + cell.getColumn() + '1');
    }
    return new String(res);
}
```


Found in the file frameworks/base/core/java/com/android/internal/widget/LockPatternUtils.java in the Android source tree.

The important line is this one: `res[i] = (byte) (cell.getRow() * gridSize + cell.getColumn() + '1');`

The key is the '1': each pattern lock cell is converted to an offset from ASCII '1', which is the byte value 49.

But the final conversion is just mapping the whole byte sequence to characters - so higher number patterns are just offsets into the ASCII lookup table past 1.

So for a 4x4 grid this gives us the following translation table:

```
1 2 3 4
5 6 7 8
9 : ; <
= > ? @
```


## 5x5 Patterns

Here's the table following the same rule for a 5x5 grid, if you use one:

```
1 2 3 4 5
6 7 8 9 :
; < = > ?
@ A B C D
E F G H I
```

# Securing CockroachDB

So I just lost about 16 hours to this, and I haven't even been able to evaluate whether it'll work for me. On the one hand I suppose I could've not secured anything, but personally I feel you want to know what the production configuration looks like before you evaluate (and in my case, I like to default my docker containers to "would not be wrong to roll this into production").

So: how does TLS work for CockroachDB? Well the problem is CockroachDB has atrocious logging for its TLS certificate errors in v2.0.1.

## The Problem

The problem was basically that CockroachDB expects a very specific format for its x509 certificate data, outlined here: https://github.com/cockroachdb/cockroach/issues/24621

I have a small utility I use for test certificates called makecerts which exists basically to have a much simpler static binary that does something like cfssl but with looser defaults. But the problem would apply to both scenarios.

In short: organization needs to be set to cockroach for node certificates, and the commonName needs to be set to node. I was generating certificates with a commonName of my docker-compose test network - 172.20.0.1 and the like, which is perfectly valid, validates correctly with the CA, and can be used to initialize the cluster - but none of the nodes will connect to each other.

And as noted in the Github issue produces no logs actually describing the problem.

## The Solution

So there you have it - with makecerts the line I needed for the test docker-compose file was:

```shell
makecerts --O=cockroach --CN=generated \
    172_20_0_1=node,172.20.0.1,localhost,127.0.0.1 \
    172_20_0_2=node,172.20.0.2,localhost,127.0.0.1 \
    172_20_0_3=node,172.20.0.3,localhost,127.0.0.1 \
    172_20_0_4=node,172.20.0.4,localhost,127.0.0.1 \
    172_20_0_5=node,172.20.0.5,localhost,127.0.0.1 \
    root
```


Note on how this works: the command above is saying "generate 172_20_0_1.crt and 172_20_0_1.pem for the certificate and key respectively, assign a commonName of node, and then generate SANs for the commonName and all comma-separated values."

Since makecerts is simple minded it also just signs the cert for all use-cases - it's very much a testing tool.
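To make that argument syntax concrete, here's an illustrative parser (my own sketch of the convention described above, not makecerts code - the helper name is hypothetical):

```python
def parse_cert_arg(arg):
    """Parse a makecerts-style "name=CN,SAN,SAN,..." argument into
    (output name, commonName, SAN list). A bare name like "root" is
    treated as both the output name and the CN."""
    name, _, spec = arg.partition("=")
    if not spec:
        # bare name: CN is the name itself, no extra SANs
        return name, name, [name]
    common_name, *extra_sans = spec.split(",")
    # the CN is also emitted as a SAN, followed by the listed values
    return name, common_name, [common_name] + extra_sans

name, cn, sans = parse_cert_arg("172_20_0_1=node,172.20.0.1,localhost,127.0.0.1")
print(name, cn, sans)
# prints: 172_20_0_1 node ['node', '172.20.0.1', 'localhost', '127.0.0.1']
```

So every node certificate here ends up with CN=node (as CockroachDB requires) while staying valid for its actual IP address.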

The final docker-compose I used to get this started was:

```yaml
version: '2'

networks:
  roachnet:
    driver: bridge
    ipam:
      driver: default
      config:
      - subnet: 172.20.0.0/24
        gateway: 172.20.0.254

services:
  roach1:
    image: cockroachdb/cockroach:v2.0.1
    command: start --host=172.20.0.1 --logtostderr=INFO --certs-dir=/certs --join=172.20.0.1,172.20.0.2,172.20.0.3,172.20.0.4,172.20.0.5
    volumes:
    - ./roach1:/cockroach/cockroach-data
    - ./172_20_0_1.crt:/certs/node.crt
    - ./172_20_0_1.pem:/certs/node.key
    - ./root.crt:/certs/client.root.crt
    - ./root.pem:/certs/client.root.key
    - ./generated.crt:/certs/ca.crt
    - ./generated.crt:/usr/local/share/ca-certificates/ca.crt
    networks:
      roachnet:
        ipv4_address: 172.20.0.1

  roach2:
    image: cockroachdb/cockroach:v2.0.1
    command: start --host=172.20.0.2 --logtostderr=INFO --certs-dir=/certs --join=172.20.0.1,172.20.0.2,172.20.0.3,172.20.0.4,172.20.0.5
    volumes:
    - ./roach2:/cockroach/cockroach-data
    - ./172_20_0_2.crt:/certs/node.crt
    - ./172_20_0_2.pem:/certs/node.key
    - ./root.crt:/certs/client.root.crt
    - ./root.pem:/certs/client.root.key
    - ./generated.crt:/certs/ca.crt
    - ./generated.crt:/usr/local/share/ca-certificates/ca.crt
    networks:
      roachnet:
        ipv4_address: 172.20.0.2

  roach3:
    image: cockroachdb/cockroach:v2.0.1
    command: start --host=172.20.0.3 --logtostderr=INFO --certs-dir=/certs --join=172.20.0.1,172.20.0.2,172.20.0.3,172.20.0.4,172.20.0.5
    volumes:
    - ./roach3:/cockroach/cockroach-data
    - ./172_20_0_3.crt:/certs/node.crt
    - ./172_20_0_3.pem:/certs/node.key
    - ./root.crt:/certs/client.root.crt
    - ./root.pem:/certs/client.root.key
    - ./generated.crt:/certs/ca.crt
    - ./generated.crt:/usr/local/share/ca-certificates/ca.crt
    networks:
      roachnet:
        ipv4_address: 172.20.0.3

  roach4:
    image: cockroachdb/cockroach:v2.0.1
    command: start --host=172.20.0.4 --logtostderr=INFO --certs-dir=/certs --join=172.20.0.1,172.20.0.2,172.20.0.3,172.20.0.4,172.20.0.5
    volumes:
    - ./roach4:/cockroach/cockroach-data
    - ./172_20_0_4.crt:/certs/node.crt
    - ./172_20_0_4.pem:/certs/node.key
    - ./root.crt:/certs/client.root.crt
    - ./root.pem:/certs/client.root.key
    - ./generated.crt:/certs/ca.crt
    - ./generated.crt:/usr/local/share/ca-certificates/ca.crt
    networks:
      roachnet:
        ipv4_address: 172.20.0.4

  roach5:
    image: cockroachdb/cockroach:v2.0.1
    command: start --host=172.20.0.5 --logtostderr=INFO --certs-dir=/certs --join=172.20.0.1,172.20.0.2,172.20.0.3,172.20.0.4,172.20.0.5
    volumes:
    - ./roach5:/cockroach/cockroach-data
    - ./172_20_0_5.crt:/certs/node.crt
    - ./172_20_0_5.pem:/certs/node.key
    - ./root.crt:/certs/client.root.crt
    - ./root.pem:/certs/client.root.key
    - ./generated.crt:/certs/ca.crt
    - ./generated.crt:/usr/local/share/ca-certificates/ca.crt
    networks:
      roachnet:
        ipv4_address: 172.20.0.5
```


and you need to run a once-off init phase to start the cluster:

```shell
#!/bin/bash
docker-compose exec roach1 ./cockroach init --certs-dir=/certs/ --host=172.20.0.1
```


## A final note - why does makecerts exist?

I really want to like cfssl, but it still just seems like too much typing for when you're setting up test scenarios. It's a production tool for Cloudflare, whereas the goal with makecerts was to make it as easy as possible to generate TLS certs for test cases on the desktop and thus force myself to always turn TLS on when developing - since obviously I'm always going to be using it in production, so I should test with it.

# Prometheus reverse_exporter

Find reverse_exporter on Github Releases

In which I talk about something I made to solve a problem I had.

## Why

I like to make my deployments of things as "appliance-like" as possible. I want them to be plug-and-play, and have sensible defaults - in fact if possible I want to make them production-ready "out of the box".

This usually involves setting up VMs or containers which include a number of components, or a quorum of either which do the same.

To take a real example - I have a PowerDNS authoritative container which uses Postgres replication for a backend. These are tightly coupled components - so tightly that it's a lot easier to run them in the same container. PowerDNS is nice because it has an HTTP REST API, which leads to a great turn-key DNS solution while retaining a lot of power - but it totally lacks an authentication layer, so we also need to throw in nginx to provide that (and maybe something else for auth later - for now I manage static password lists, but we might do LDAP or something else - who knows?)

Obviously, we want to monitor all these components, and the way I like doing that is with Prometheus.

## The Problem

Prometheus exporters provide metrics, typically on an HTTP endpoint like /metrics. For our appliance-like container, ideally, we want to replicate this experience.

The individual components in it - PowerDNS, Postgres, nginx - all have their own exporters which provide specific metrics but also generic information about the exporter itself - which means we have conflicting metric names for at least the go-runtime specific metrics. And while we're at it we probably have a bunch of random glue-code we'd like to produce some metrics about, plus some SSL certificates we'd like to advertise expiry dates for.

There's also a third factor here which is important: we don't necessarily have the liberty to just open ports willy-nilly to support this - or we'd like to be able to avoid it. In the space of corporations with security policies, HTTP/HTTPS on ports 80 and 443 is easy to justify. But good luck getting another 3 ports opened to support monitoring - oh, and you'll have to put SSL and auth on those too.

### Solution 1 - separate endpoints

In our single-container example, we only have the 1 IP for the container - but we have nginx so we could just farm the metrics out to separate endpoints. This works - it's my original solution. But instead of a nice, by-convention /metrics endpoint we now have something like /metrics/psql, /metrics/nginx, /metrics/pdns.

Which means 3 separate entries in the Prometheus config file to scrape them, and it breaks nice features like DNS-SD that would let us just discover the endpoint.

And it feels unclean: the PowerDNS container has a bunch of things in it, but they're all providing one service - they're all one product. Shouldn't their metrics all be given as one endpoint?

### Solution 2 - just use multiple ports

This is the Prometheus way. And it would work. But it still has some of the drawbacks above - we're still explicitly scraping 3 targets, and we're doing some slicing on the Prometheus side to try and group these sensibly - in fact we're requiring Prometheus to understand our architecture in detail which shouldn't matter.

i.e. is the DNS container a single job with 3 endpoints in it, or multiple jobs per container? The latter feels wrong again - if our database goes sideways, it's not really a database cluster going down, just a single "DNS server" instance.

Prometheus has the idea of an "instance" tag per scraped endpoint...we'd kind of like to support that.

## Solution 3 - combine the exporters into one endpoint - reverse_exporter

reverse_exporter is essentially the implementation of how we achieve this.

The main thing reverse_exporter was designed to do is receive a scrape request, proxy it to a bunch of exporters listening on localhost behind it, and then decode the metrics they produce so it can rewrite them with unique identifier labels before handing them to Prometheus.

Obviously metric relabelling on Prometheus can do something like this, but in this case as solution designers/application developers/whatever we are, we want to express an opinion on how this container runs, and simplify the overhead to supporting it.

The reason we rewrite the metrics is to allow namespace collisions - specifically, we want to ensure we can have multiple golang runtime metrics from Prometheus live side by side, but still be able to separate them out in our visualization tooling. We might also want to have multiples of the same application in our container (or maybe it's something like a Kubernetes pod and we want it to be monitored like a single appliance). The point is: from a Prometheus perspective, it all comes out looking like metrics from the 1 "instance", and gets metadata added by Prometheus as such without any extra effort. And that's powerful - because it means DNS-SD or service discovery works again. And it means we can start to talk about cluster application policy in a sane way - "we'll monitor /metrics on port 80 or 443 for you if it's there."
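The rewriting idea itself is mechanical. Here's an illustrative Python sketch (my own, not reverse_exporter's actual Go implementation) of injecting an identifying label into Prometheus text-format samples; it's naive and assumes label values contain no spaces:

```python
def add_label(exposition, key, value):
    """Inject key="value" into every sample line of a Prometheus
    text-format exposition, leaving # HELP/# TYPE comments alone."""
    out = []
    for line in exposition.splitlines():
        if not line or line.startswith("#"):
            out.append(line)
            continue
        name_and_labels, _, rest = line.partition(" ")
        if name_and_labels.endswith("}"):
            # existing label set: splice ours in before the closing brace
            rewritten = '%s,%s="%s"}' % (name_and_labels[:-1], key, value)
        else:
            rewritten = '%s{%s="%s"}' % (name_and_labels, key, value)
        out.append(rewritten + " " + rest)
    return "\n".join(out)

print(add_label("# TYPE go_goroutines gauge\ngo_goroutines 8",
                "exporter", "pdns"))
# prints:
# # TYPE go_goroutines gauge
# go_goroutines{exporter="pdns"} 8
```

With each backend exporter tagged this way, the colliding go_goroutines series from PowerDNS, Postgres and nginx stay distinguishable behind the single endpoint.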

## Other Problems (which are solved)

There were a few other common dilemmas I wanted a "correct" solution for when I started playing around with reverse_exporter which it solves.

We don't always want to write an entire exporter for Prometheus - sometimes we just have something tiny and fairly obvious we'd like to scrape with a text-format script. When using the Prometheus node_exporter you can do this with the text collector, which will read *.prom files on every scrape - but you need to set up cron to periodically update these, which can be a pain, and gives the metrics lag.

What if we want to have an on-demand script?

reverse_exporter allows this - you can specify a bash script, even allow arguments to be passed via URL params, and it'll execute and collect any metrics you write to stdout.

But it also protects you from the danger of the naive approach here: a possible denial of service from an overzealous or possibly malicious user sending a huge number of requests to your script. If we just spawned a process each time, we could quickly exhaust container or system resources. reverse_exporter avoids this problem by waterfalling the results of each execution - since Prometheus regards a scrape as a time-slice of state at the moment it gets results, we can protect the system by queuing up inbound scrapers while the script executes, and then sending them all the same results (provided they're happy with the wait time - which Prometheus is good about).
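One way to sketch that queue-and-share behavior (my own illustrative Python, not the actual Go implementation) is a single-flight runner:

```python
import threading

class CoalescingRunner:
    """Illustrative single-flight runner: while one execution of fn is
    in flight, later callers block and receive that execution's result
    instead of spawning their own process."""

    def __init__(self, fn):
        self._fn = fn
        self._lock = threading.Lock()
        self._inflight = None  # Event for the execution in progress

    def call(self):
        with self._lock:
            if self._inflight is None:
                # first caller becomes the leader and runs fn
                self._inflight = threading.Event()
                self._inflight.result = None
                event, leader = self._inflight, True
            else:
                # execution already running: wait and share its result
                event, leader = self._inflight, False
        if leader:
            try:
                event.result = self._fn()  # run once for the whole queue
            finally:
                with self._lock:
                    self._inflight = None
                event.set()  # wake every waiting caller
            return event.result
        event.wait()
        return event.result
```

However many scrapers pile up during one script run, the script executes once and everyone gets the same snapshot - which is exactly the semantics Prometheus expects from a scrape.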

We avoid thrashing the system resources, and we can confidently let users and admins reload the metrics page without bringing down our container or our host.

## Conclusion

This post feels a bit marketing-like to me, but I am pretty excited that, for me at least, reverse_exporter works well.

Hopefully, it proves helpful to other Prometheus users as well!

# S4-i9505 in 2018

Some notes on running a Samsung Galaxy S4 i9505 in Australia in 2018

My first high-end phone, still perfectly capable of everything I need from a smartphone, and now dirt cheap on eBay - so I'm basically going to keep buying them till there are no more to be had (or someone releases a Spectre-immune CPU phone, I guess).

Baseband: XXUGNG8. I upgraded the baseband a bunch of times, including to some alleged Telstra OTA packages, and found I lost wifi. The actual modem and APN-HLOS don't seem to matter much, but the XXUGNG8 bootloader and related files are vitally important to getting sound to work.

OS: LineageOS. I loved CyanogenMod, and I like seeing it continued. There's a patch to the SELinux config needed on newer Android to allow the proximity sensor to calibrate properly - the symptom is an apparent freeze when making/receiving calls, and it's due to SELinux only allowing the phone to use the default prox-sensor thresholds - which, if your phone meets them, great; if not, then it will appear broken.

I'm hoping to get this patched in the upstream soon.

# Structuring my Go projects

Recently I've been maintaining a GitHub repository to serve as a generic template for my Golang projects, and it's been working rather well for me.

The repository is here: Self-contained Go Project

The basic idea is that using this template, you can setup a Go project with vendored dependencies not just for the main project but also for every tool used in building and linting it (with the exception of make, git and a working Golang install).

```shell
go get <my project>
cd $GOPATH/src/<my project>
make
```

does a production build.

## How to Use It

Out of the box (i.e. `git clone https://github.com/wrouesnel/self-contained-go-project.git`) on a Linux machine it should be all set up to go. I've made some effort to try and remove Linux-specific things from it, but since I don't run Mac OS or Windows for Go development it's probably not working too well there.

Essentially, it'll build multi-platform, CGO-less binaries for any main package you place in a folder underneath the cmd directory. Running make binary will build all current commands for your current platform and symlink them into the root folder, while running make release will build all binaries and then create tarballs with the name and version in the release directory.

It also includes a bevy of other CI-friendly commands - namely make style, which checks for gofmt and goimports formatting, and make lint, which runs gometalinter against the entire project.

## Philosophy

Just looking at the commands, the main thing accomplished is a lot of use of make. It's practically used for ergonomics more than utility to some level, since make is a familiar "build whatever this is" command in the Unix world. But, importantly, make is used correctly - build dependencies are expressed and managed in a form it understands, so it only rebuilds as necessary.

But there is a more important element, and that is not just that there is a Makefile, but that the repository for the project, through govendoring, includes not just the code but also the linting and checking tools needed to build it, and a mechanism to update them all.

Under the tools directory we have a secondary Makefile which is called from the top level and is responsible for managing the tools. By running make update here we can go get a new version of gometalinter, extract the list of tools it runs, then automatically have them updated and installed inside the source directory and made available to the top-level Makefile to use to run CI tasks.
This combines to make project management extremely ergonomic in my opinion, and avoids dragging a heavier tool like Docker into the mix (which often means some uncontrolled external dependencies). Basically: you check in everything your project needs to be built, run, and tested into the one Git repository, because storage is cheap but your time is not, and external dependencies can't be trusted to always exist.

## Conclusion

It's not the be-all and end-all - in build tooling there never is one - but I'm thus far really happy with how this basic structure has turned out as I've evolved it, and it's proven relatively easy to extend when I need to (i.e. adding more testing levels, building web assets as well with npm and including them in the Go binary, etc.)

# Tracking down why my desktop fails every second resume

## Collecting a thread to pull on...

Had an interesting problem for a while now - my desktop under Linux would mostly suspend and resume just fine, except when it didn't. This is annoying, as I'm the type of person who likes to leave a big dev environment running and come back to it.

Power management problems are the worst type of problems to debug in many ways, so documenting any progress I made was fairly important.

The Ubuntu guide to kernel suspend is the most useful one I found: https://wiki.ubuntu.com/DebuggingKernelSuspend

And the important bit is this action:

```shell
sync && echo 1 > /sys/power/pm_trace && pm-suspend
```

This does some kernel magic which encodes suspend/resume progress into the system's RTC clock via a hash, which allows you - if things freeze - to reboot and grab the point at which they did. You have about 3 minutes after the next boot before the data vanishes, and you grab it from dmesg.

This led to an immediate reproduction - suspend->resume worked the first time, and then hung my system on the second time. So it works, but something gets corrupted through the process, and we need to (hopefully) just reset it on resume to avoid the problem.
```
$ dmesg | grep -A10 Magic
[    3.607642]   Magic number: 0:474:178
[    3.625900]   hash matches /build/linux-B4zRAA/linux-4.8.0/drivers/base/power/main.c:1070
[    3.644583] acpi device:0e: hash matches
[    3.663313]  platform: hash matches
```


That's the easy part. What the hell does it mean?

## Goto the Source

We get a source line out of that request, and we're running an ubuntu kernel which has a convenient source package we can grab. So let's get that so we can account for the Ubuntu packages:

```shell
$ cd ~/tmp
$ uname -a
Linux will-desktop 4.8.0-19-generic #21-Ubuntu SMP Thu Sep 29 19:39:23 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
$ apt-get source linux-image-4.8.0-19-generic
```

Which leads to this:

```c
static void async_suspend_noirq(void *data, async_cookie_t cookie)
{
    struct device *dev = (struct device *)data;
    int error;

    error = __device_suspend_noirq(dev, pm_transition, true);
    if (error) {
        dpm_save_failed_dev(dev_name(dev));
        pm_dev_err(dev, pm_transition, " async", error);
    }

    put_device(dev);
}
```

So the error line we're getting puts us right on that `if (error)` line, which hopefully means this is just some device failure we can add a PM script for. From the dmesg output above we've got two more things to look at - whatever `acpi device:0e` is, and the platform driver for it.

Some googling shows that this puts us into the category of very annoying problems: we're not even successfully getting into the resume code, so the failure on the second resume happens very early. https://lkml.org/lkml/2016/7/14/160

## Time to rebuild the kernel...

Which is often less work than it sounds, but judging from that LKML link it's pretty much the only lead we have to go on, since we don't have a Thinkpad but the problem is suspiciously similar.

# Totally static Go builds

I wouldn't make a post on my blog just so I don't have to keep googling something, would I? Of course I would. It's like...95% of the reason I keep this.

Totally static Go builds - these are great for running in Docker containers. The important part is the command line to create them - it's varied a bit, but the most thorough I've found is this (see this Github Issue):

```shell
CGO_ENABLED=0 GOOS=linux go build -a -ldflags '-extldflags "-static"' .
```

This will create an "as static as possible" binary - beware linking in things which want glibc, since pluggable name resolvers will be a problem (which you can work around in Docker quite well, but that's another question).

# Quickly configuring modelines?
Something hopefully no one should ever have to do in the far distant future, but since I insist on using old hardware till it drops, it still comes up.

Working from an SSH console on an XBMC box, I was trying to tune in an elusive 1366x768 modeline for an old plasma TV. The best way to do it these days is with xrandr, in a ~/.xprofile script which is loaded on boot up.

To quickly go through modelines I used the following shell script:

```shell
#!/bin/bash
xrandr -d :0 --output VGA-0 --mode "1024x768"
xrandr -d :0 --delmode VGA-0 "1360x768"
xrandr -d :0 --rmmode "1360x768"
xrandr -d :0 --newmode "1360x768" $@
xrandr -d :0 --addmode VGA-0 "1360x768"
xrandr -d :0 --output VGA-0 --mode "1360x768"
```


Simply passing in a modeline when running it causes that modeline to be set and applied to the relevant output (VGA-0) in my case.

i.e. `./tryout 84.750 1366 1480 1568 1800 768 769 776 800 -hsync +vsync`

# Installing the latest docker release

Somehow the installation instructions for Docker never work for me and the website is surprisingly cagey about the manual process.

It works perfectly well if you just grab the relevant bits of that script and run them manually, but usually fails if you let it be a bit too magical.

To be fair, I probably have issues due to the mismatch of LSB release since I run Mint. Still though.

So here are the commands for Ubuntu:

```shell
$ apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
$ echo deb https://apt.dockerproject.org/repo ubuntu-vivid main > /etc/apt/sources.list.d/docker.list
$ apt-get update && apt-get install -y docker-engine
```