fuckkkk ZFS why you do this. guess i know what i'm spending tonight doing.
any #zfs experts out there able to help me recover my pool? i apparently broke things quite significantly lol
Today, out of nowhere, my shell started to misbehave. My prompt suddenly reverted to default. Some unexpected "command not found" errors started popping up. Something was off.
I went to check my shell configuration. The directory was not there. I then went to look into ~/.config. Half of the directories inside were simply gone.
I immediately flipped into what-the-fuck-is-happening mode.
This system is an Alpine root-on-ZFS. I have a script called by cron every 20 minutes that snapshots everything.
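For the curious, a minimal sketch of what such a script might look like - the pool and snapshot names match the story, but the script body and retention scheme are my assumptions, not the actual script:

#!/bin/sh
# Rolling 20-minute snapshot: replace the previous @20m snapshot
# with a fresh recursive one across the whole pool.
POOL=zroot
zfs destroy -r "${POOL}@20m" 2>/dev/null
zfs snapshot -r "${POOL}@20m"

# crontab entry, every 20 minutes:
# */20 * * * * /usr/local/bin/snap-20m.sh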
First I went into the snapshot directory and started copying things back. I soon realized far too much was missing - how would I even map out what was gone in the first place? And copying would only get me so far anyway, since I was duplicating data.
I looked to my left at the resource monitor. I had less than 1 GB left of free space. That was not going to work.
I flip some pages, looking for an incantation...
% zfs rollback zroot/home@20m
A long second hung in the air. Then all the resource monitor's bars flipped at once to green: 53% free space.
"Blessed be the ZFS Daemons and the Authors who conjured Them."
The system was still confused, so I rebooted. It let its consciousness drift, as it is used to doing, uncertainty still heavy in the air. Then it resurfaced... every line of output unconcerned.
Back up, no trace was left of the seconds leading up to the warp. The only suspicion came from a cryptography guardian, who noticed something was wrong with the timestamps. "Please re-enter the passcodes", it asked. Every other blob was either unconcerned or unimpressed with the glitch.
Like any time travel, the only trace left was in my memory. No history anywhere has me looking for that spell. I booted 20 minutes into the past and that's from when I am writing to you.
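For anyone who needs the same spell: a minimal sketch, using the dataset and snapshot names from the story above and assuming zroot/home is mounted at /home:

# list snapshots of the affected dataset
% zfs list -t snapshot -o name,creation zroot/home
# peek at the snapshot read-only before committing to anything
% ls /home/.zfs/snapshot/20m/
# discard every change made since the snapshot (this is irreversible)
% zfs rollback zroot/home@20m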
A nice side effect of expanding my ZFS pool is that scrubs are now quite a bit faster: down from ~19 hours to under 15. Makes sense, since the pool can now read from more disks in parallel.
I'd still like to know why scrub speed decreases as the scrub progresses, though.
scrub repaired 0B in 14:27:28 with 0 errors on Mon Apr 14 14:51:33 2025
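For reference, starting a scrub and checking on it (pool name is a placeholder):

$ zpool scrub tank        # kick off a scrub in the background
$ zpool status tank       # progress, current speed, estimated completion
$ zpool status -v tank    # also list any files affected by errors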
Latest Valuable News - 2025/04/14 available.
https://vermaden.wordpress.com/2025/04/14/valuable-news-2025-04-14/
Past releases: https://vermaden.wordpress.com/news/
Wrote a backup of the last #windows10 #vm and transferred it to my #zfs #ghostbsd box, then deleted it from the #fedora host. I hadn't touched the system for a month, since I switched from #citavi to #zotero. Away with that time-consuming bullshit OS - it took more time waiting for updates to install than I ever spent working with it, never mind the stupidity of the #windows11 upgrade path. Bye, and now back to work ;)
So... I decided to use the ZFS filesystem on Ubuntu for typical laptop usage.
Good or bad decision?
Building a small #NAS file server with a #raspberrypi: an external hard drive with #RAID, or individual disks with #zfs? What do you say?
i'm looking for a new HBA for my FreeBSD file server, is the LSI SAS3416 a reasonable choice?
it seems to be supported by the mpr(4) driver (not mps(4), which covers the SAS2 parts) and does both SAS/SATA and PCIe/NVMe, with PCIe 3.1 for the host interface, so i'm assuming it's a reasonable upgrade from my current LSI SAS2008.
(i mostly just want more ports, but more performance and the ability to use NVMe disks would be nice too.)
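A quick sanity check once a card is in, to confirm which driver claims it (device strings and unit numbers here are illustrative, not actual output):

$ pciconf -lv | grep -B3 -i sas3416    # the line prefix shows the bound driver, e.g. mpr0
$ dmesg | grep -i -e '^mps' -e '^mpr'  # a SAS2008 attaches as mps(4); a SAS3416 should attach as mpr(4)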
#ProxmoxBackupServer 3.4 has been released (#Proxmox / #ProxmoxBS / #BackupServer / #ZFS / #OpenZFS / #Linux / #Debian / #Bookworm / #DebianBookworm) https://proxmox.com/
#ProxmoxVE 8.4 has been released (#Proxmox / #VirtualEnvironment / #Virtualization / #VirtualMachine / #VM / #Linux / #Debian / #Bookworm / #DebianBookworm / #QEMU / #LXC / #KVM / #ZFS / #OpenZFS / #Ceph) https://proxmox.com/
@chpietsch @marczz @Artikel5eV #ZFS also supports encryption.
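For context, native encryption on a dataset looks roughly like this (pool and dataset names are placeholders):

$ zfs create -o encryption=on -o keyformat=passphrase tank/secure   # prompts for a passphrase
$ zfs load-key tank/secure   # after a reboot/import, load the key...
$ zfs mount tank/secure      # ...then mount as usual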
welp, i may have just lost a 16TB drive - that's like 200€. i need to check whether #zfs can work around the corruption :(
I love #ZFS compression since I can be sloppy and lazy with allocating disks to my VMs
$ du -sh /esky/vm/images
5.2G	/esky/vm/images
$ du -sh --apparent-size /esky/vm/images
91G	/esky/vm/images
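ZFS also reports the achieved ratio directly (dataset name guessed from the path above):

$ zfs get compression,compressratio esky/vm/images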
First time trying to expand a ZFS raidz2...
What am I missing? Forest + trees problem?
I thought the colons in the device name might need escaping, but I've gone back and forth with escaping and quoting to no effect.
For giggles I tried adding a single-disk vdev and then 'zpool attach', but that didn't work either; the invocation I expected to work is sketched below.
I was initially following @vermaden's tutorial at https://freebsdfoundation.org/blog/openzfs-raid-z-expansion-a-new-era-in-storage-flexibility/ but working on a live system with physical devices and without a net. Ha.
"Linux" things?
Debian Bookworm, Linux 6.12.12+bpo-amd64, zfs 2.3.1. Very vanilla other than the backport kernel.
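For reference, the raidz-expansion form on OpenZFS 2.3 - pool name and device path are placeholders here; the second argument is the existing raidz vdev's label from zpool status, not the pool itself:

$ zpool status tank                                  # note the vdev label, e.g. raidz2-0
$ zpool attach tank raidz2-0 /dev/disk/by-id/ata-EXAMPLE_SERIAL
$ zpool status tank                                  # expansion progress appears here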