Asked on December 12, 2020
I recently wiped my box and installed 20.10, which I love! This time I decided to live on the edge and use the experimental ZFS support, but after some time spent installing packages and updates I now have a weird problem: my boot zpool is too full.
Any time I hit "Update Now" in the Software Updater I get a message like this one:
The upgrade needs a total of 254 M free space on disk '/boot'. Please free at least an additional 194 M of disk space on '/boot'. You can remove old kernels using 'sudo apt autoremove', and you could also set COMPRESS=xz in /etc/initramfs-tools/initramfs.conf to reduce the size of your initramfs.
I've run sudo apt autoremove
and it removes nothing. I hesitate to change the compression on my initramfs because that feels more like a patch over a deeper problem (maybe I'm wrong).
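(For reference, the compression change the updater suggests would apparently look something like this; my images are currently lz4-compressed, judging by the mkinitramfs output further down, and I haven't actually tried switching:)
# in /etc/initramfs-tools/initramfs.conf, change the compression from lz4 to xz
COMPRESS=xz
# then regenerate the initramfs images for all installed kernels
sudo update-initramfs -u -k all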
I can still manually upgrade with sudo apt update && sudo apt upgrade,
but I get this error every time:
ERROR couldn't save system state: Minimum free space to take a snapshot and preserve ZFS performance is 20%.
When I run zpool list I get:
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
bpool 1.88G 1.69G 185M - - 47% 90% 1.00x ONLINE -
rpool 460G 165G 295G - - 10% 35% 1.00x ONLINE -
so I'm at 90% capacity on my boot pool.
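To see where that space actually goes (live boot files vs. snapshots), the per-dataset space breakdown seems more useful than the pool-level numbers:
# USEDSNAP is the portion of each dataset's usage held only by snapshots
zfs list -o space -r bpool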
I've also tried zsysctl service gc -a
to remove snapshots, but that didn't seem to change the bpool usage.
So maybe I need to change my bpool's allocated size? How do I do that?
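As far as I can tell, bpool sits on its own small partition that the installer created, so growing it would mean repartitioning the disk rather than changing a ZFS setting; this at least shows which device backs it:
# show the partition backing the boot pool
zpool status bpool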
I'm also getting some weird initramfs failures when I upgrade/install, but I'm not sure whether that's related:
Setting up initramfs-tools (0.137ubuntu12) ...
update-initramfs: deferring update (trigger activated)
Setting up linux-firmware (1.190.1+system76~1605123765~20.10~3894207) ...
update-initramfs: Generating /boot/initrd.img-5.8.0-29-generic
I: The initramfs will attempt to resume from /dev/nvme0n1p2
I: (UUID=05a735a7-9e82-494e-be9b-171b1c132af5)
I: Set the RESUME variable to override this.
Error 24 : Write error : cannot write compressed block
E: mkinitramfs failure cpio 141 lz4 -9 -l 24
update-initramfs: failed for /boot/initrd.img-5.8.0-29-generic with 1.
dpkg: error processing package linux-firmware (--configure):
installed linux-firmware package post-installation script subprocess returned error exit status 1
Processing triggers for initramfs-tools (0.137ubuntu12) ...
update-initramfs: Generating /boot/initrd.img-5.8.0-29-generic
I: The initramfs will attempt to resume from /dev/nvme0n1p2
I: (UUID=05a735a7-9e82-494e-be9b-171b1c132af5)
I: Set the RESUME variable to override this.
Error 24 : Write error : cannot write compressed block
E: mkinitramfs failure cpio 141 lz4 -9 -l 24
update-initramfs: failed for /boot/initrd.img-5.8.0-29-generic with 1.
dpkg: error processing package initramfs-tools (--configure):
installed initramfs-tools package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
linux-firmware
initramfs-tools
ZSys is adding automatic system snapshot to GRUB menu
E: Sub-process /usr/bin/dpkg returned an error code (1)
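My guess is that the cpio/lz4 "Error 24 : Write error" is just mkinitramfs running out of room on /boot, so presumably the half-configured packages can be retried once space is freed, with something like:
# re-run configuration for packages left half-installed by the failed trigger
sudo dpkg --configure -a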
Running zfs list -t snapshot
shows a bunch of snapshots for bpool:
NAME USED AVAIL REFER MOUNTPOINT
bpool/BOOT/ubuntu_fjp6bn@autozsys_z4aetj 72K - 237M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_mtxh3h 72K - 237M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_72y92u 105M - 357M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_lo8d22 85.2M - 337M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_y7ihca 104M - 336M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_qs6vz5 85.2M - 318M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_cyg6vx 72K - 337M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_r6e64v 56K - 218M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_nrhjqi 56K - 218M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_kgfl6b 104M - 218M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_nw3nk0 85.1M - 199M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_m1b6l9 104M - 218M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_hnt98o 85.1M - 199M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_rj8ttq 64K - 218M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_da1f4s 0B - 218M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_ljdo3n 0B - 218M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_danwfz 0B - 218M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_4sjbka 104M - 218M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_gl3ru4 0B - 218M -
bpool/BOOT/ubuntu_fjp6bn@autozsys_tdbgin 0B - 218M -
rpool/ROOT/ubuntu_fjp6bn@autozsys_z4aetj 71.7M - 5.10G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_mtxh3h 217M - 5.25G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_72y92u 33.3M - 5.43G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_lo8d22 30.2M - 5.30G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_y7ihca 224M - 5.42G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_qs6vz5 27.8M - 5.23G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_cyg6vx 56.3M - 5.51G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_r6e64v 56.6M - 5.29G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_nrhjqi 30.6M - 5.29G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_kgfl6b 7.01M - 5.25G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_nw3nk0 29.6M - 5.17G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_m1b6l9 222M - 5.32G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_hnt98o 27.7M - 5.13G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_rj8ttq 26.2M - 5.17G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_da1f4s 155M - 5.29G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_ljdo3n 24.9M - 5.61G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_danwfz 181M - 5.74G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_4sjbka 498M - 5.66G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_gl3ru4 0B - 5.92G -
rpool/ROOT/ubuntu_fjp6bn@autozsys_tdbgin 0B - 5.92G -
But I'm not familiar enough with ZFS or zsys to know whether I can just destroy snapshots.
So I think I fixed it...
I ran zfs list -t snapshot | grep bpool
to list all the snapshots for the boot pool, then sudo zfs destroy bpool/...
on a handful of snapshots, starting from the top of the list, until zpool list
showed bpool at around 60% CAP. Then I ran sudo apt upgrade
and it ran mkinitramfs successfully! And now my bpool is around 70% :shrug:
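In case it helps anyone, the clean-up boiled down to roughly this (the snapshot name is just one taken from the listing above; I only destroyed snapshots I was sure I'd never roll back to):
# list the boot-pool snapshots
zfs list -t snapshot | grep bpool
# destroy them one at a time, re-checking capacity as you go
sudo zfs destroy bpool/BOOT/ubuntu_fjp6bn@autozsys_z4aetj
zpool list bpool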
Correct answer by Kevin on December 12, 2020
Attempting to clear space on /boot/
uname -r
tells which kernel you're running; this version cannot be removed.
Simple list:
dpkg --list | grep linux-
will show kernel-related stuff, among other items.
Advanced: list installed items (ii in the leftmost column) that MIGHT be purged:
dpkg --list | grep -E '^ii.*linux-(headers|image|modules)' | grep -v $(uname -r)
Now check which items (_name_) you CAN remove, and run
sudo apt purge _name_
on them.
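For example (the purge targets below are placeholders; substitute whatever older kernel packages actually show up as installed on your machine):
uname -r                              # e.g. 5.8.0-29-generic, keep this one
dpkg --list | grep -E '^ii.*linux-(headers|image|modules)' | grep -v $(uname -r)
sudo apt purge linux-image-5.8.0-26-generic linux-modules-5.8.0-26-generic   # placeholder package names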
Answered by Hannu on December 12, 2020