ZFS-on-Root on NixOS: Working Multiple EFI Partitions

Nov 17, 2025 Updated on Nov 18, 2025

tl;dr: Multiple EFI partitions can complement ZFS-on-root, using nofail filesystem options and specialisations to handle drive failure.

DISCLAIMER: There's probably a smarter way to do this, I am not an expert in any of the following topics.

edit 2025-11-17: use rsync instead of cp to avoid orphaning generations in the EFI partitions, and drop the extraneous generationsDir.copyKernels; thanks @ElvishJerricco for pointing this out

Initial Setup

My server's storage setup is a single ZFS-on-root pool: 2x2 (two mirrored vdevs of two drives each) with a hot spare. I wanted to try ZFS on root, rather than the perhaps more standard setup of a solid-state boot/system drive plus ZFS for storage, for maximum survivability.

Instead of a single /boot EFI partition, each drive contributes its own boot partition, mounted at /boot/efis/efi1, /boot/efis/efi2, ..., /boot/efis/efi5.

My config has this snippet (cribbed from this thread):

boot = {
loader = {
  systemd-boot = {
    enable = true;

    extraInstallCommands = ''
      ${pkgs.rsync}/bin/rsync -a --delete /boot/efis/efi1/ /boot/efis/efi2
      ${pkgs.rsync}/bin/rsync -a --delete /boot/efis/efi1/ /boot/efis/efi3
      ${pkgs.rsync}/bin/rsync -a --delete /boot/efis/efi1/ /boot/efis/efi4
      ${pkgs.rsync}/bin/rsync -a --delete /boot/efis/efi1/ /boot/efis/efi5
    '';
  }; # systemd-boot

  efi = {
    canTouchEfiVariables = true;
    efiSysMountPoint = "/boot/efis/efi1";
  }; # efi
}; # loader

# kernelPackages = config.boot.zfs.package.latestCompatibleLinuxPackages;
};

And relevant filesystem config as detected by nixos-generate-config:

fileSystems."/boot/efis/efi1" =
{ device = "/dev/disk/by-uuid/41D1-D66C";
  fsType = "vfat";
  options = [ "fmask=0022" "dmask=0022" ];
};

fileSystems."/boot/efis/efi2" =
{ device = "/dev/disk/by-uuid/41D2-DAE6";
  fsType = "vfat";
  options = [ "fmask=0022" "dmask=0022" ];
};

fileSystems."/boot/efis/efi3" =
{ device = "/dev/disk/by-uuid/41D3-FEA1";
  fsType = "vfat";
  options = [ "fmask=0022" "dmask=0022" ];
};

fileSystems."/boot/efis/efi4" =
{ device = "/dev/disk/by-uuid/41D5-0933";
  fsType = "vfat";
  options = [ "fmask=0022" "dmask=0022" ];
};

fileSystems."/boot/efis/efi5" =
{ device = "/dev/disk/by-uuid/41D6-53CC";
  fsType = "vfat";
  options = [ "fmask=0022" "dmask=0022" ];
};

This was my first server; I kinda YOLO'd it and figured it would work. Can you spot the problem(s)?

Missing filesystem -> boot to emergency mode

If systemd can't mount one of those EFI partitions, it drops you into emergency mode. Without a root password set up, emergency mode is completely unworkable unless you add SYSTEMD_SULOGIN_FORCE=1 via the kernel parameters. It also starts with no networking, which is a problem for my server, colo'd at a datacenter some distance from home.
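One way to wire that in from NixOS config (a sketch, not something I've relied on in anger: systemd's emergency shell checks the SYSTEMD_SULOGIN_FORCE environment variable, and the systemd.setenv= kernel parameter is one way to get it there):

```nix
# Sketch: let the emergency shell start without prompting for a root
# password. Assumes systemd.setenv= propagates to emergency.service.
boot.kernelParams = [ "systemd.setenv=SYSTEMD_SULOGIN_FORCE=1" ];
```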

Note that the system actually will boot as long as at least one of the EFI partitions is around; you just get stuck in emergency mode after waiting 1.5 minutes for the failed mount to time out.

I was comfortable with setting all of these to nofail; if enough drives were dead that the pool was inoperable, I'd have bigger problems.

So, they now look something like

fileSystems."/boot/efis/efi1" =
{ device = "/dev/disk/by-uuid/41D1-D66C";
  fsType = "vfat";
  options = [ 
    "fmask=0022" 
    "dmask=0022" 
    "nofail"
  ];
};
...

We're still not done though...

Missing efiSysMountPoint -> problems with nixos-rebuild

As it stands, if the drive with /boot/efis/efi1 goes down, I can't run nixos-rebuild, I think because there's nothing mounted at the configured boot.loader.efi.efiSysMountPoint; the error is that no previous version of systemd-boot can be found.

My fix is a set of specialisations, one per remaining EFI partition, each specifying that partition as efiSysMountPoint; for good measure each one also syncs the bootloader out to the other EFI partitions as appropriate:

specialisation = {
  efi2.configuration = {
    system.nixos.tags = [ "efi2" ];
    boot.loader = {
      systemd-boot.extraInstallCommands = lib.mkForce ''
        ${pkgs.rsync}/bin/rsync -a --delete /boot/efis/efi2/ /boot/efis/efi1
        ${pkgs.rsync}/bin/rsync -a --delete /boot/efis/efi2/ /boot/efis/efi3
        ${pkgs.rsync}/bin/rsync -a --delete /boot/efis/efi2/ /boot/efis/efi4
        ${pkgs.rsync}/bin/rsync -a --delete /boot/efis/efi2/ /boot/efis/efi5
      '';
      efi.efiSysMountPoint = lib.mkForce "/boot/efis/efi2";
    };
  };
  ...
};
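Writing out efi2 through efi5 by hand gets repetitive; here's an untested sketch of generating them with lib.genAttrs instead, assuming the same five mount points as above:

```nix
# Sketch: generate one specialisation per non-primary EFI partition.
# Each treats its own partition as efiSysMountPoint and syncs the
# bootloader out to all the others.
specialisation = lib.genAttrs [ "efi2" "efi3" "efi4" "efi5" ] (primary: {
  configuration = {
    system.nixos.tags = [ primary ];
    boot.loader = {
      systemd-boot.extraInstallCommands = lib.mkForce (lib.concatMapStrings
        (target: ''
          ${pkgs.rsync}/bin/rsync -a --delete /boot/efis/${primary}/ /boot/efis/${target}
        '')
        (lib.filter (t: t != primary) [ "efi1" "efi2" "efi3" "efi4" "efi5" ]));
      efi.efiSysMountPoint = lib.mkForce "/boot/efis/${primary}";
    };
  };
});
```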

We can then run nixos-rebuild with the --specialisation argument to target an alternate EFI partition and get out of our pickle.
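For example, something like the following (exact invocation depends on your setup, e.g. whether you build from a flake):

```shell
# Rebuild using the configuration that treats efi2 as the primary
# EFI partition, bypassing the dead efi1.
nixos-rebuild switch --specialisation efi2
```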

When I get back to the datacenter I may try actually pulling a drive to test this; for now I've verified it on a libvirt VM with five qcow2 files simulating the pool. Seems to work!
