Well, here's my story; may it be useful to others too.

I have a home server with a 6TB RAID1 array (OS on a dedicated NVMe drive). I was playing with a BIOS update and adding more RAM, and out of the blue, after the last reboot, my RAID had been shut down uncleanly and needed a fix. I probably unplugged the power cord too soon while the system was still shutting down containers.

Well, no biggie, I thought, I'll just run fsck and mount it again. So there it goes: “mkfs.ext4 /dev/md0”

Then I quickly hit “y” when it said “the partition contains an ext4 signature blah blah”. I was in a hurry, so…

Guess what? Now read that command again, carefully.

Too late. I hit Ctrl+C, but the damage was done. I could recover some of the files, but many were corrupted anyway.

Lucky for me, I was able to recover 85% of everything from my backups (restic + backrest to the rescue!), recreate another 5% (mostly docker compose files located in odd, non-backed-up folders), and restore the last 10% from the old 4TB drive I had replaced some time ago to increase space. Luckily, that last part was never-changing old personal stuff that I would have regretted losing, but hadn't considered critical enough to back up.

The cold shivers I had before I checked my restic backups, discovering that I had indeed postponed backing up those additional folders…

Today I will add another layer of backup in the form of an external USB drive to store never-changing data like… my ISOs…

This was my backup strategy up until yesterday, with backrest automating restic:

  • one local backup of the important stuff (mostly personal data)
  • a second copy of the important stuff on a USB drive connected to an OpenWrt router on the other side of the house
  • a third copy of the important stuff on a remote VPS

And since this morning, I have added:

  • a few git repos (pushed and backed up with the important stuff) with all docker compose files, keys and such (the 5%)
  • an additional local USB drive where I will back up ALL files, even that 10% that never changes and isn't “important”, but that I would miss if I lost it.

Tools like restic and Borg are so critical that you will regret not having had them sooner.

Set up your backups like yesterday. If you haven't already, do it now.
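For anyone curious, a setup like the one above boils down to a handful of restic commands (backrest just automates them on a schedule). This is a minimal sketch; the repository location, password file, and backed-up paths are hypothetical placeholders:

```shell
# Hypothetical repo and paths; adapt to your own layout.
export RESTIC_REPOSITORY=/mnt/backup-usb/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-password

# Initialize the (encrypted) repository once.
restic init

# Back up the important stuff; runs are incremental after the first one.
restic backup /home /etc /srv/docker

# Apply a retention policy and prune unreferenced data.
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```

The second and third copies are just the same commands pointed at different `RESTIC_REPOSITORY` values (e.g. an sftp: or rest: URL for the router and the VPS).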

  • NaibofTabr@infosec.pub · 2 months ago
    • a few git repos (pushed and backed up with the important stuff) with all docker compose files, keys and such (the 5%)

    Um, maybe I’m misunderstanding, but you’re storing keys in git repositories which are where…?

    And remember, if you haven’t tested your backups then you don’t have backups!

    • Shimitar@feddit.it (OP) · 2 months ago

      All my git repos are on my server, not public, and then backed up with restic, encrypted.

      Only the public keys are backed up though; for the private ones, I'd rather have to regenerate them than have them stolen.

      I mean, like when in Forgejo you add the public keys for git push and such.

    • WhyAUsername_1@lemmy.world · 24 days ago

      Honestly, how do we test backups? I just open a folder or two on my backup drive; if it opens, I figure I'm OK. Is there a better way to do this?

      • NaibofTabr@infosec.pub · 23 days ago

        Hahhh… well, really, the only way to test backups is to try to restore from them. VMs are extremely helpful for this: you can restore a VM mirror of your production system, see if it works as expected, and wipe it and start over if it doesn't.
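        With restic specifically, a basic restore test can be scripted; this is a sketch, with the repository location and compared paths as hypothetical placeholders:

        ```shell
        # Hypothetical repo location; adapt to your setup.
        export RESTIC_REPOSITORY=/mnt/backup-usb/restic-repo
        export RESTIC_PASSWORD_FILE=/root/.restic-password

        # Verify repository integrity, re-reading a random subset of the pack data.
        restic check --read-data-subset=5%

        # Restore the latest snapshot to a scratch directory and compare it
        # against the live data.
        restic restore latest --target /tmp/restore-test
        diff -r -q /home/me/documents /tmp/restore-test/home/me/documents
        ```

        `restic check` alone only validates the repository structure; the `--read-data-subset` flag makes it actually decrypt and verify some of the stored file data, which is much closer to a real restore test.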