The reboot and poweroff commands do not work without forcing; they work fine with -f. Without it, the commands simply return and do not give any errors. Can someone point me to logs or other information about what might be causing this? I'm sure that continuing to use -f is not a sensible long-term plan.
For me, $ reboot results in a reboot. It always has. Perhaps if you shared your config, someone would be able to help.
Try systemctl poweroff
Also run realpath $(which reboot); if you've installed some funny package like toybox or busybox, it can cause issues.
Pretty much any log you could want will be in journalctl. Use -b to select which boot's logs you're looking at.
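For example (assuming a systemd journal; these flags are documented in journalctl(1)):

```shell
journalctl --list-boots              # enumerate the boots the journal knows about
journalctl -b                        # logs for the current boot
journalctl -b -1                     # logs for the previous boot (useful after a failed reboot)
journalctl -b -1 -u systemd-logind   # narrow to a single unit for that boot
```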
My config is massive, as it is flaked for a whole bunch of computers. They share a lot, but this issue only affects 2 that are on similar hosts. The work to sanitise the configuration might not be trivial, so I was hoping to just work it out from the logs, or at least get a kick-start in the right direction.
reboot and poweroff just do the same thing as this, I believe, and it was one of the first things I tried, without luck.
OK, so this might be getting close to the issue:

$ which reboot
/run/current-system/sw/bin/reboot
$ realpath $(which reboot)
/nix/store/cpjbaq5g5xyv5z9dzsxasbi1gzlbcr31-toybox-0.8.11/bin/toybox
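That realpath output is the classic multiplexer setup: toybox (like busybox) ships one binary and symlinks each applet name to it, then decides what to do from the name it was invoked as. A minimal sketch of the same wiring, using a hypothetical "mybox" stand-in rather than the real toybox:

```shell
# Hypothetical multiplexer demo: one binary, applet names symlinked to it.
tmp=$(mktemp -d)

cat > "$tmp/mybox" <<'EOF'
#!/bin/sh
# Behave according to the name we were invoked as (argv[0]).
echo "invoked as: $(basename "$0")"
EOF
chmod +x "$tmp/mybox"

# The "reboot" command is just a symlink to the multiplexer...
ln -s mybox "$tmp/reboot"

# ...which is exactly what realpath exposes:
realpath "$tmp/reboot"   # prints .../mybox
"$tmp/reboot"            # prints: invoked as: reboot
```

So when a multiplexer's applet shadows a systemd-provided command of the same name on PATH, you silently get the applet's behaviour instead.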
I use toybox on all my servers without issue. The two in question are non-interactive, so I could remove it from them. I will try to rearrange my modules to see if I can do that easily.
I can confirm that removing toybox fixes the issue. As an aside, is there an alternative to toybox that does not bring so many issues? (I can't remember why I installed it in the first place; it was probably just for 1 or 2 of the commands.)
Depends why you installed toybox in the first place.
I don't remember. Looking through what's included, I probably needed something in it and noticed it contained lots of useful things, so I just installed it rather than lots of separate packages. I might only find out by getting rid of it and waiting until a command I need is missing. It is, however, in a common module of my configuration, so fiddling could affect a lot of hosts. For now I've removed it from the 2 affected hosts, so maybe that'll be the end of this particular issue for me.
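If you do end up keeping toybox in the common module, one option might be to lower its priority so that, when binary names collide in the system profile, the systemd/coreutils versions of reboot and poweroff win. A sketch of that idea, assuming Nixpkgs' lib.lowPrio collision-priority mechanism applies here (untested):

```nix
{ pkgs, lib, ... }:
{
  environment.systemPackages = [
    # lowPrio lowers toybox's nix-env priority, so colliding names
    # (reboot, poweroff, ...) should resolve to other packages instead.
    (lib.lowPrio pkgs.toybox)
  ];
}
```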