Forrest Jacobs

Keeping NixOS systems up to date with GitHub Actions

Keeping my NixOS servers up to date was dead simple before I switched to flakes – I enabled system.autoUpgrade, and I was good to go. Trying the same with a shared flakes-based config introduced a few problems:

  1. I configured autoUpgrade to commit flake lock changes, but it ran as root. This created file permission issues since my user owned my NixOS config.
  2. Even when committing worked, each machine piled up slightly different commits waiting for me to upstream.

I could have fixed issue #1 by changing the owner, but fixing #2 required me to rethink the process. Instead of having each machine update its own lock file, I realized it would be cleaner to update the lock file upstream first and then rebuild each server from upstream. Updating the lock file first ensures there’s only one version of history, which makes it easier to reason about what is installed on each server.

Below is one method of updating the shared lock file before updating each server:

Updating flake.lock with GitHub Actions

The update-flake-lock GitHub Action updates your project’s flake lock file on a schedule. It essentially runs nix flake update --commit-lock-file and then opens a pull request. Add it to your NixOS config repository like this:

# /.github/workflows/main.yml

name: update-dependencies
on:
  workflow_dispatch: # allows manual triggering
  schedule:
    - cron: '0 6 * * *' # daily at 1 am EST/2 am EDT

jobs:
  update-dependencies:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: DeterminateSystems/nix-installer-action@v12
      - id: update
        uses: DeterminateSystems/update-flake-lock@v23

Add this step if you want to automatically merge the pull request:

      - name: Merge
        run: gh pr merge --auto "${{ steps.update.outputs.pull-request-number }}" --rebase
        env:
          GITHUB_TOKEN: ${{secrets.GITHUB_TOKEN}}
        if: ${{ steps.update.outputs.pull-request-number != '' }}
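
Two caveats for the merge step, depending on your repository’s settings: you may need to enable “Allow auto-merge” in the repository settings, and if your default workflow permissions are read-only, the GITHUB_TOKEN will need write access to merge. One way to grant that (a sketch – adjust to your own setup) is at the top level of the workflow:

permissions:
  contents: write
  pull-requests: write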

Pulling changes & rebuilding

Next, it’s time to configure NixOS to pull changes and rebuild. The configuration below adds two systemd services:

  • pull-updates pulls config changes from upstream daily at 4:40. It has a few guardrails: it ensures the local repository is on the main branch, and it only permits fast-forward merges. You’ll want to set serviceConfig.User to the user that owns the repository. If it succeeds, it kicks off rebuild.
  • rebuild rebuilds and switches to the new configuration, and reboots if required. It’s a simplified version of autoUpgrade’s script.

systemd.services.pull-updates = {
  description = "Pulls changes to system config";
  restartIfChanged = false;
  onSuccess = [ "rebuild.service" ];
  startAt = "04:40";
  path = [pkgs.git pkgs.openssh];
  script = ''
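    # Guardrails: stay on main and only accept fast-forward merges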
    test "$(git branch --show-current)" = "main"
    git pull --ff-only
  '';
  serviceConfig = {
    WorkingDirectory = "/etc/nixos";
    User = "user-that-owns-the-repo";
    Type = "oneshot";
  };
};

systemd.services.rebuild = {
  description = "Rebuilds and activates system config";
  restartIfChanged = false;
  path = [pkgs.nixos-rebuild pkgs.systemd];
  script = ''
    nixos-rebuild boot
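    # Reboot only if the kernel, initrd, or kernel modules changed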
    booted="$(readlink /run/booted-system/{initrd,kernel,kernel-modules})"
    built="$(readlink /nix/var/nix/profiles/system/{initrd,kernel,kernel-modules})"

    if [ "''${booted}" = "''${built}" ]; then
      nixos-rebuild switch
    else
      reboot now
    fi
  '';
  serviceConfig.Type = "oneshot";
};

There are many possible variations. For example, in my real config I split the pull service into separate fetch and merge services so I can fetch more frequently. You could also replace the GitHub action with a different scheduled script, or change the rebuild service to never (or always!) reboot.
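
The fetch half of that split could look something like the sketch below (the service name and schedule here are illustrative, not copied from my actual config). A matching merge service would then run git merge --ff-only origin/main on its own schedule and trigger rebuild via onSuccess, just like pull-updates above.

systemd.services.fetch-updates = {
  description = "Fetches changes to system config";
  restartIfChanged = false;
  startAt = "hourly"; # fetch more often than we merge
  path = [pkgs.git pkgs.openssh];
  script = "git fetch";
  serviceConfig = {
    WorkingDirectory = "/etc/nixos";
    User = "user-that-owns-the-repo";
    Type = "oneshot";
  };
};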

Waiting on Tailscale

I restarted my server the other day, and I realized one of my systemd services failed to start on boot because the Tailscale IP address was not assignable:

# journalctl -u bad-bad-not-good.service
...
listen tcp 100.11.22.33:8080: bind: cannot assign requested address

This is easy enough to fix. The service should wait to start until Tailscale is online, so let’s just add tailscaled.service to the service’s Wants= and After= properties, reboot, and…

# journalctl -u bad-bad-not-good.service
...
listen tcp 100.11.22.33:8080: bind: cannot assign requested address

Huh. It turns out Tailscale comes up a bit before its IP address is available. I was tempted to add an ExecStartPre to my service to sleep for 1 second – gross! – but eventually I found systemd’s fabulous systemd-networkd-wait-online command, which exits when a given interface has an IP address. Call it with -i [interface name] and either -4 or -6 to wait for an IPv4 or IPv6 address.

Wrapping it up into a service gives you something like this:

# tailscale-online.service
[Unit]
Description=Wait for Tailscale to have an IPv4 address
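# Requisite is like Requires, but fails this unit instead of starting networkd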
Requisite=systemd-networkd.service
After=systemd-networkd.service
Conflicts=shutdown.target

[Service]
ExecStart=/usr/lib/systemd/systemd-networkd-wait-online -i tailscale0 -4
RemainAfterExit=true
Type=oneshot

Services using your Tailscale IP address can now depend on tailscale-online.
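
For example, a drop-in for the service from earlier could look like this (the unit and file names are just placeholders):

# /etc/systemd/system/bad-bad-not-good.service.d/wait-for-tailscale.conf
[Unit]
Wants=tailscale-online.service
After=tailscale-online.service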

Using Syncthing to sync coding projects

I code on a MacBook and a Windows PC, and I want to keep my coding projects in sync between them. Here are my wishes, in decreasing order of priority:

  1. Code changes on my MacBook should magically update my PC, and vice versa. (Think Dropbox.)
  2. Some files should not sync, like host-specific dependencies and targets. I want to ignore these files via patterns, a la gitignore.
  3. Ideally, this sync extends to both a headless Linux server I use for remote development, and to WSL on my Windows PC.

After experimenting with other solutions (outlined below) I discovered that Syncthing meets every requirement.

What I tried first

1. OneDrive

I use OneDrive to sync most of my files. It’d be nice to just add my coding projects to OneDrive, but it doesn’t work in practice: ignoring files is awkward, and seemingly only works on Windows. Additionally, OneDrive doesn’t run on Linux without some help.

Dropbox looks like a better fit on paper: it can ignore files on any platform (in a different, awkward way) and it has a first-party Linux client. But switching to Dropbox would be painful – my partner and I switched away from Dropbox about a year ago because we were getting more storage for less money from Microsoft, and the modern Dropbox app sucks.

2. Remote development

If the issue is syncing files across computers, why don’t I just work on one computer? Well, developing on a remote machine has its own issues:

  • Blips in internet connectivity become big problems. At best, you wait for keystrokes to appear over SSH. At worst, you can’t code at all. (And while file sync also requires connectivity, a few seconds of connectivity is enough to sync changes.)
  • Waiting for my dinky free-tier Oracle Cloud VM to compile a complex Rust project is frustrating. Sure, I could rent a better VM, but it’s silly to pay for that additional power when I have a more than capable computer in front of me.
  • Some development doesn’t work well in a remote environment. Web dev is fine, but what if I want to play around with game or mobile dev?

3. Git

Can’t I just use Git to stay up to date?

No – version control is different than file sync. I don’t want to track personal config files in version control, but I do want to sync them. And I don’t always want to check in work in progress – for example, I don’t want to check in changes that cause builds or tests to fail.

Using Syncthing

Syncthing is amazing. It does everything I outlined at the top – it syncs my projects, it ignores files based on patterns, and it runs everywhere I code (Windows, macOS, and Linux).

I resisted using it because of its high barrier to entry. It uses peer-to-peer file syncing, so two machines have to be online at the same time to exchange changes – in practice, that means setting it up on an always-on server so every computer can see the latest changes. And its configuration is more involved than something like Dropbox.

But it’s still worth it for me because it solves all my original problems. (And I want to sync these files to a server anyway.) If you’re struggling with the same issues I ran into, and you’re willing to set up a server, give Syncthing a shot.


Addendum: Syncing your ignore patterns

Syncthing does not keep your ignore patterns in sync across hosts, but there’s a way around it:

  1. Create a text file with the patterns you want to ignore.
  2. Save it to your Syncthing folder.
  3. Add #include name-of-that-file to each host’s ignore patterns.
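
For example, with a shared file named shared-ignores.txt (the file name and patterns are placeholders):

// shared-ignores.txt – synced like any other file in the folder
target
node_modules

// .stignore on each host – Syncthing never syncs this file itself
#include shared-ignores.txt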

Voilà!