Structuring my Go projects

Recently I've been maintaining a GitHub repository to serve as a generic template for my Go projects, and it's been working rather well for me.

The repository is here: Self-contained Go Project

The basic idea is that using this template, you can set up a Go project with vendored dependencies, not just for the main project but also for every tool used in building and linting it (with the exception of make, git and a working Go install).

go get <my project>
cd $GOPATH/src/<my project>
make

does a production build.

How to Use It

Out of the box (i.e. git clone https://github.com/wrouesnel/self-contained-go-project.git) on a Linux machine it should be all set up and ready to go. I've made some effort to try and remove Linux-specific things from it, but since I don't run macOS or Windows for Go development, it probably doesn't work too well there.

Essentially, it'll build multi-platform, CGO-less binaries for any main package you place in a folder underneath the cmd directory. Running make binary will build all current commands for your current platform and symlink them into the root folder, while running make release will build all binaries and then create tarballs with the name and version in the release directory.
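For example, with a hypothetical main package at cmd/mycmd:

make binary     # builds cmd/mycmd for the host platform, symlinked as ./mycmd
make release    # cross-compiles every command and writes versioned tarballs to release/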

It also includes a bevy of other CI-friendly commands - namely make style, which checks gofmt and goimports formatting, and make lint, which runs gometalinter against the entire project.

Philosophy

Just looking at the commands, the most obvious thing is the heavy use of make. To some extent it's used for ergonomics more than utility, since make is the familiar "build whatever this is" command in the Unix world.

But, importantly, make is used correctly - build dependencies are expressed and managed in a form it understands so it only rebuilds as necessary.
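A quick way to see this in action (cmd/mycmd again being hypothetical):

make binary                # first run: compiles everything
make binary                # immediate re-run: everything is up to date, nothing recompiles
touch cmd/mycmd/main.go    # change one source file
make binary                # only the affected binary is rebuilt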

But there is a more important element: not just that there is a Makefile, but that the repository for the project, through govendoring, includes not just the code but also the linting and checking tools needed to build it, and a mechanism to update them all.

Under the tools directory we have a secondary Makefile which is called from the top level and is responsible for managing the tools. By running make update here we can go get a new version of gometalinter, extract the list of tools it runs, then automatically have them updated, installed inside the source directory, and made available to the top-level Makefile for running CI tasks.
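As a rough sketch, the update step boils down to something like the following (simplified - the real Makefile manages GOPATH and the vendored copies around these commands):

go get -u github.com/alecthomas/gometalinter    # fetch the latest gometalinter
gometalinter --install                          # install the linters it knows how to run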

This combines to make project management extremely ergonomic in my opinion, and avoids dragging a heavier tool like Docker into the mix (which often means some uncontrolled external dependencies).

Basically: you check in everything your project needs to be built and run and tested into the one Git repository, because storage is cheap but your time is not and external dependencies can't be trusted to always exist.

Conclusion

It's not the be-all and end-all - in build tooling there never is one - but I'm thus far really happy with how this basic structure has turned out as I've evolved it, and it's proven relatively easy to extend when I need to (e.g. adding more testing levels, building web assets with npm and including them in the Go binary, etc.)

Tracking down why my desktop fails every second resume

Collecting a thread to pull on...

I've had an interesting problem for a while now: my desktop under Linux would mostly suspend and resume just fine, except when it didn't. This is annoying, as I'm the type of person who likes to leave a big dev environment running and come back to it.

Power management problems are the worst type of problems to debug in many ways, so documenting any progress I made was fairly important.

The Ubuntu guide to kernel suspend is the most useful one I found: https://wiki.ubuntu.com/DebuggingKernelSuspend

And the important bit is this action:

sync && echo 1 > /sys/power/pm_trace && pm-suspend

This does some kernel magic which encodes suspend/resume progress into the system's RTC clock via a hash, which allows you - if things freeze - to reboot and recover the point at which they did. The data is read from dmesg after the next boot, and you have about 3 minutes before it vanishes.

This led to an immediate reproduction: suspend/resume worked the first time, then hung my system the second time. So suspend itself works, but something gets corrupted along the way, and we need to (hopefully) just reset it on resume to avoid the problem.

$ dmesg | grep -A10 Magic
[    3.607642]   Magic number: 0:474:178
[    3.625900]   hash matches /build/linux-B4zRAA/linux-4.8.0/drivers/base/power/main.c:1070
[    3.644583] acpi device:0e: hash matches
[    3.663313]  platform: hash matches

That's the easy part. What the hell does it mean?

Goto the Source

We get a source line out of that trace, and we're running an Ubuntu kernel, which has a convenient source package we can grab. So let's get that so we can account for any Ubuntu-specific patches:

$ cd ~/tmp
$ uname -a
Linux will-desktop 4.8.0-19-generic #21-Ubuntu SMP Thu Sep 29 19:39:23 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
$ apt-get source linux-image-4.8.0-19-generic

Which leads to this:

static void async_suspend_noirq(void *data, async_cookie_t cookie)
{
    struct device *dev = (struct device *)data;
    int error;

    error = __device_suspend_noirq(dev, pm_transition, true);
    if (error) {
        dpm_save_failed_dev(dev_name(dev));
        pm_dev_err(dev, pm_transition, " async", error);
    }

    put_device(dev);
}

So the source line from the trace puts us right on that if (error) line, which hopefully means this is just some device failure we can add a PM script for.

From the dmesg above we've got two more things to look at: whatever acpi device:0e is, and which platform device matched. Some googling shows that this puts us into the category of very annoying problems: we're not even successfully getting into the resume code, so the failure on the second resume happens very early. https://lkml.org/lkml/2016/7/14/160
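To at least put a name to that ACPI device, sysfs can be queried. This is a hedged sketch - the device name comes from my dmesg above, and the attributes available vary by kernel:

$ ls /sys/bus/acpi/devices/device:0e/
$ cat /sys/bus/acpi/devices/device:0e/path    # prints the device's ACPI namespace path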

Time to rebuild the kernel...

Which is often less work than it sounds, but judging from that LKML link it's pretty much the only lead we have to go on, since I don't have a Thinkpad but the problem looks suspiciously similar.

Totally static Go builds

I wouldn't make a post on my blog just so I don't have to keep googling something, would I? Of course I would. It's like... 95% of the reason I keep this.

Totally static Go builds - these are great for running in Docker containers. The important part is the command line to create them - it's varied a bit over time, but the most thorough I've found is this (see this GitHub issue):

CGO_ENABLED=0 GOOS=linux go build -a -ldflags '-extldflags "-static"' .

This will create an "as static as possible" binary - beware linking in things which want glibc, since pluggable name resolvers will be a problem (which you can work around in Docker quite well, but that's another question).
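A quick way to check what you got (the binary name here is whatever your build produced):

$ file mybinary    # should report "statically linked"
$ ldd mybinary     # should print "not a dynamic executable"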

Quickly configuring modelines?

Something hopefully no one will ever have to do in the far distant future, but since I insist on using old hardware till it drops, it still comes up.

Working from an SSH console on an XBMC box, I was trying to tune in an elusive 1366x768 modeline for an old plasma TV.

These days the best way to do it is with xrandr in a ~/.xprofile script, which is loaded at X startup.
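Once a working modeline is found, the persistent version in ~/.xprofile ends up looking something like this (mode name and output taken from the script below):

xrandr --newmode "1360x768" 84.750 1366 1480 1568 1800 768 769 776 800 -hsync +vsync
xrandr --addmode VGA-0 "1360x768"
xrandr --output VGA-0 --mode "1360x768"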

To quickly go through modelines I used the following shell script:

#!/bin/bash
# Try out a modeline passed as arguments, e.g.:
#   ./tryout 84.750 1366 1480 1568 1800 768 769 776 800 -hsync +vsync
xrandr -d :0 --output VGA-0 --mode "1024x768"    # fall back to a known-good mode first
xrandr -d :0 --delmode VGA-0 "1360x768"          # detach any previous test mode
xrandr -d :0 --rmmode "1360x768"                 # ...and delete it
xrandr -d :0 --newmode "1360x768" "$@"           # create the new mode from our arguments
xrandr -d :0 --addmode VGA-0 "1360x768"          # attach it to the output
xrandr -d :0 --output VGA-0 --mode "1360x768"    # and switch to it

Simply passing in a modeline when running it causes that modeline to be set and applied to the relevant output (VGA-0 in my case).

e.g. ./tryout 84.750 1366 1480 1568 1800 768 769 776 800 -hsync +vsync

Installing the latest docker release

Somehow the installation instructions for Docker never work for me and the website is surprisingly cagey about the manual process.

The manual route works perfectly well if you just grab the relevant bits of their install script and run them yourself, but usually fails if you let it be a bit too magical.

To be fair, I probably have issues due to the LSB release mismatch, since I run Mint. Still, though.

So here are the commands for Ubuntu (run as root):

$ apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
$ echo deb https://apt.dockerproject.org/repo ubuntu-vivid main > /etc/apt/sources.list.d/docker.list
$ apt-get update && apt-get install -y docker-engine
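Then a quick sanity check that the right repository won out (hello-world is Docker's standard smoke-test image):

$ apt-cache policy docker-engine    # the candidate version should come from apt.dockerproject.org
$ docker run hello-world            # confirms the daemon is up and can pull and run images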

Using the Go playground locally

Summary

I modified Rocky Bernstein's go-play to compile with go-bindata-assetfs and run from a single executable. Get it here!

Why and How

IPython is one of the things I love best about Python. In a dynamically typed language it's a huge benefit to be able to quickly and easily paste in chunks of code and investigate what the actual output would be, or what an error situation would look like.

Go is not dynamically typed, but many of the same issues apply: when errors arise they can be tricky to introspect without diving through the code, and sometimes the syntax or result of a function call isn't obvious.

As a learning tool, Go provides the Go Playground - a web service which compiles and runs snippets of Go code within a sandbox - which has proven a huge boon to the community for sharing and testing solutions (it's very popular on Stack Overflow).

The public Go playground is necessarily limited, and it would be nice to be able to use Go in the same way client-side, or just without internet access.

Fortunately Rocky Bernstein pulled together an unrestricted copy of the Go playground which runs as a client-side HTML5 app. Unlike the web playground, this allows unrestricted Go execution on your PC and full testing of things as they would work locally. The GitHub export is found here.

The one problem I had was that this version still depended on the location of source files outside the executable, which for a tiny tool was kind of annoying. Fortunately this has been solved in Go for a long time, and a little fun with go-bindata-assetfs yielded my own version which, once built, runs completely locally.
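The embedding step itself is small. A hedged sketch, assuming the web assets live under a static/ directory:

$ go get github.com/jteeuwen/go-bindata/...
$ go get github.com/elazarl/go-bindata-assetfs/...
$ go-bindata-assetfs static/...    # generates a Go source file embedding everything under static/
$ go build .                       # the assets are now compiled into the binary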

Get it here. It's fully go-gettable, so go get github.com/wrouesnel/go-play will work too.

SSH port forwarding when port forwarding is disabled, with socat and nc

The Problem

You have a server you can SSH to. For whatever reason AllowTcpForwarding is disabled. You need to forward a port from it to your local machine.

If it's any sort of standard machine, then it probably has netcat. It's less likely to have the far more powerful socat - which we'll only need locally.

This tiny tip serves two lessons: (1) disabling SSH port forwarding is not a serious security measure, and far more of an annoyance; and (2) since it's pretty likely you still need to do whatever job you came to do, it would be nice to have a one-liner which will just forward the port for you.

The Solution

socat TCP-LISTEN:<local port>,reuseaddr,fork "EXEC:ssh <server> nc localhost <remote port>"

It's kind of obvious if you know socat well, but half the battle is simply knowing it's possible.

Obviously you can change localhost to a different remote server. And this is really handy if you want to do debugging, since socat can echo all the forwarded data to the console for you.
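For example (the ports and hostname here are illustrative), -v makes socat dump everything it forwards to stderr:

socat -v TCP-LISTEN:8080,reuseaddr,fork "EXEC:ssh myserver nc localhost 80"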

The Lesson

As I said at the start: if you have standard tools installed, or if your users can upload new tools (which, with shell access, they can), and if you don't have firewall rules or cgroup limitations on those accounts, then stuff like disabled port forwarding is not a security measure.