  • Sure, Cloudflare provides other security benefits; but that’s not what OP was talking about. They just wanted/liked the plug-and-play aspect, which doesn’t need Cloudflare.

    Those ‘benefits’ are also really not necessary for the vast majority of self-hosters. What are you hosting, from your home, that garners that kind of attention?

    The only things I host from home are private services for myself or a very limited group, which, as far as ‘attacks’ go, just get the occasional script kiddie looking for exposed endpoints. Nothing that needs mitigation.

  • I set up borg around 4 months ago using option 1. I’ve messed around with it a bit, restoring a few backups, and haven’t run into any issues with corrupt/broken databases.

    I just used the example script provided by borg, but modified it to include my docker data and to write info to a log file instead of the console (rough sketch below).

    Daily at midnight, a new backup of around 427 GB of data is taken. At the moment that takes 2-15 min to complete, depending on how much data has changed since yesterday; the initial backup was closer to 45 min. Then old backups are trimmed: backups <24 hr old are kept, along with 7 dailies, 3 weeklies, and 6 monthlies. Anything outside that scope gets deleted.

    With the compression and de-duplication borg does, the 15 backups I have so far (5.75 TB of raw data) currently take up 255.74 GB of space. 10/10 would recommend on that aspect alone.

    /edit, one note: I’m not backing up Docker volumes directly, though you could do that just fine. Anything I want backed up lives in a regular folder that’s then bind mounted into a docker container (including things like paperless-ngx’s databases).
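
    The script is roughly the following. This is a sketch, not my exact file: the repo path, passphrase, source dirs, and log location are all placeholders.

    ```bash
    #!/bin/sh
    # Nightly borg backup, modeled on borg's example script.
    # BORG_REPO lets the '::' shorthand below refer to this repository.
    export BORG_REPO=/mnt/backup/borg-repo        # placeholder
    export BORG_PASSPHRASE='changeme'             # placeholder

    # Send all output to a log file instead of the console
    exec >> /var/log/borg-backup.log 2>&1

    echo "Backup started: $(date)"

    # Create a new archive named after the host + timestamp
    borg create                     \
        --stats                     \
        --compression zlib          \
        --exclude-caches            \
        ::'{hostname}-{now}'        \
        /srv/docker-data            \
        /home/user/documents

    # Trim old archives: keep everything <24 hr old,
    # plus 7 dailies, 3 weeklies, and 6 monthlies
    borg prune                      \
        --list                      \
        --keep-within 24H           \
        --keep-daily   7            \
        --keep-weekly  3            \
        --keep-monthly 6
    # (on borg >= 1.2, 'borg compact' is needed afterwards
    #  to actually free the pruned space)

    echo "Backup finished: $(date)"
    ```

    Cron fires it at midnight: 0 0 * * * /usr/local/bin/borg-backup.sh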

  • That’s gonna be quite the bill when the device does eventually disconnect…

    I’d expect it to bill you for the extra data retroactively (otherwise people would just save all the heavy downloads for the end of the month). I’d also expect it to calculate usage regularly as well as upon disconnect, so this plan wouldn’t really work.

    If this is really what you want to try though: take an RPi, open a screen session for a persistent terminal, run ‘ping google.com’, and detach from that screen, leaving it to endlessly ping google (at least until it’s restarted).
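
    Roughly like this (assuming screen is installed; the session name is arbitrary):

    ```bash
    # Start a detached screen session named 'keepalive'
    # that pings forever in the background
    screen -dmS keepalive ping google.com

    # Reattach later to check on it / stop it
    screen -r keepalive
    ```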

  • Darkassassin07@lemmy.ca to Selfhosted@lemmy.world · Backup solutions · 4 months ago

    After reading this thread and a few other similar ones, I tried out BorgBackup and have been massively impressed with its efficiency.

    Data that hasn’t changed, is stored under a different location, or is otherwise identical to what’s already in the backup repository (both in the backup currently being created and in all historical backups) isn’t replicated. Only the information required to link that existing data to its doppelgangers is stored.

    The original set of data I’ve got being backed up is around 270 GB, and I currently have 13 backups of it. Raw, that’s 3.78 TB of data. After just compression with zlib, that’s down to 1.56 TB. But the incredible bit is after de-duplication (the part described in the above paragraph): the raw data stored on disk for all 13 of those backups is 67.9 GB.

    I can mount any one of those 13 backups to the filesystem, or extract any of that 3.78 TB of files, directly from a backup repository of just 67.9 GB.
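
    e.g. (the repo path and archive name here are just examples, not my actual setup):

    ```bash
    # Show repository totals, including the deduplicated size on disk
    borg info /path/to/repo

    # List the archives in the repository
    borg list /path/to/repo

    # Mount one archive as a read-only filesystem (needs borg's FUSE support)
    mkdir -p /mnt/borg
    borg mount /path/to/repo::myhost-2024-05-01T00:00:00 /mnt/borg
    # ...browse /mnt/borg, then:
    borg umount /mnt/borg

    # Or extract a single path straight out of an archive
    # (archives store paths without the leading slash)
    borg extract /path/to/repo::myhost-2024-05-01T00:00:00 home/user/documents
    ```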