and using DDNS
As in, running software to update your DNS records automatically based on your current system IP. Great for dynamic IPs, or just moving location.
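That updater can be as simple as a script on a timer. A minimal sketch, assuming your provider exposes an HTTP update URL (the URL below is a placeholder, and ifconfig.me is just one of several what’s-my-IP services):

```shell
update_ddns() {
    # $1 = cache file holding the last-known IP
    # $2 = provider update URL (placeholder; real providers each have their own API)
    current=$(curl -s https://ifconfig.me)    # discover our current public IP
    last=$(cat "$1" 2>/dev/null)
    if [ "$current" != "$last" ]; then
        curl -s "$2?ip=$current" > /dev/null  # push the new IP to the provider
        echo "$current" > "$1"                # remember it so we only update on change
    fi
}

# e.g. run from cron every few minutes:
# update_ddns /var/cache/ddns_last_ip "https://dns.example.com/update"
```

Checking against a cached IP first keeps the script from hammering the provider’s API when nothing has changed.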
🇨🇦
Sure, Cloudflare provides other security benefits, but that’s not what OP was talking about. They just wanted/liked the plug-and-play aspect, which doesn’t need Cloudflare.
Those ‘benefits’ are also really not necessary for the vast majority of self-hosters. What are you hosting, from your home, that garners that kind of attention?
The only things I host from home are private services for myself or a very limited group, which, as far as ‘attacks’ go, just get the occasional script kiddie looking for exposed endpoints. Nothing that needs mitigation.
Unless you are behind CGNAT, you would have had the same plug-and-play experience by using your own router instead of the ISP-supplied one, and using DDNS.
At least, I did.
Huh, usually they ask ‘jump where?’
I have one more thought for you:
If downtime is your concern, you could always use a mixed approach. Run a daily backup system like I described, somewhat haphazardly, with everything still running. Then once a month at 4am or whatever, perform a more comprehensive backup: loop through each Docker project and shut it down before running the backup, then bring it all back online.
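A hedged sketch of that monthly pass, assuming Docker Compose projects and a Borg repository (all paths and names below are placeholders):

```shell
backup_with_downtime() {
    repo=$1; shift                                # first arg: borg repository
    for dir in "$@"; do                           # remaining args: docker project dirs
        ( cd "$dir" && docker compose stop )      # quiesce databases before backup
    done
    borg create --stats "$repo::monthly-{now:%Y-%m-%d}" "$@"   # back up the idle data
    for dir in "$@"; do
        ( cd "$dir" && docker compose start )     # bring everything back online
    done
}

# e.g.: backup_with_downtime /srv/backups/borg /srv/docker/paperless /srv/docker/nextcloud
```

Using `docker compose stop`/`start` (rather than `down`/`up`) keeps the containers around and just halts them, so the restart is quick.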
I set up Borg around 4 months ago using option 1. I’ve messed around with it a bit, restoring a few backups, and haven’t run into any issues with corrupt/broken databases.
I just used the example script provided by Borg, but modified it to include my Docker data and write info to a log file instead of the console.
Daily at midnight, a new backup of around 427GB of data is taken. At the moment that takes 2-15min to complete, depending on how much data has changed since yesterday, though the initial backup was closer to 45min. Then old backups are trimmed: backups <24hr old are kept, along with 7 dailies, 3 weeklies, and 6 monthlies. Anything outside that scope gets deleted.
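That retention scheme maps directly onto `borg prune`’s keep flags; a sketch, with the repository path as a placeholder:

```shell
prune_old_backups() {
    # Keep everything <24h old, plus 7 daily, 3 weekly, and 6 monthly archives;
    # borg deletes whatever falls outside those rules.
    borg prune --list \
        --keep-within 24H \
        --keep-daily 7 \
        --keep-weekly 3 \
        --keep-monthly 6 \
        "$1"
}

# e.g.: prune_old_backups /srv/backups/borg
```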
With the compression and de-duplication Borg does, the 15 backups I have so far (5.75TB of data) currently take up 255.74GB of space. 10/10 would recommend on that aspect alone.
/edit, one note: I’m not backing up Docker volumes directly, though you could just fine. Anything I want backed up lives in a regular folder that’s then bind-mounted into a Docker container (including things like paperless-ngx’s database).
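For illustration, a minimal compose fragment using that bind-mount approach (the host path is a placeholder; the container path is paperless-ngx’s usual data directory, but double-check against its docs):

```yaml
services:
  paperless:
    image: ghcr.io/paperless-ngx/paperless-ngx
    volumes:
      # a regular host folder, easy to point a backup tool at:
      - /srv/docker/paperless/data:/usr/src/paperless/data
```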
Dirty secrets about you.
There’s a few ways to do it, but if they block based on username it can lock out legitimate users too.
This is what fail2ban is for. Too many failed auths from an IP and that whole IP is blacklisted for a day or two. This can still catch out VPN users, but it’s still less disruptive.
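For example, a minimal `jail.local` entry along those lines (the jail, times, and retry count are just illustrative values):

```ini
[sshd]
enabled  = true
maxretry = 5      ; failed auths allowed per findtime window
findtime = 10m
bantime  = 48h    ; ban the whole IP for a couple of days
```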
Na, pretty much anything that’ll run Linux will do.
Seriously, expect a bill and/or permanently cut-off service when it is discovered.
That’s gonna be quite the bill when the device does eventually disconnect…
I’d expect it to bill you for the extra data retroactively (otherwise people would just save all the heavy downloads for the end of the month). I’d also expect it to calculate usage regularly as well as upon disconnect, so this plan wouldn’t really work.
If this is really what you want to try, though: take an RPi, open a screen session for a persistent terminal, run ‘ping google.com’, and detach from that session (leaving it to endlessly ping Google, at least until the Pi restarts).
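Wrapped up as a tiny helper, assuming GNU screen is installed (‘keepalive’ is just an arbitrary session name):

```shell
start_keepalive() {
    # -d -m: start detached; -S: name the session so it's easy to find later
    screen -dmS keepalive "$@"
}

# start_keepalive ping google.com
# check on it with:  screen -ls
# reattach with:     screen -r keepalive
```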
Lmao, yeah… You can make a can so secure a bear definitely won’t get in; but will people go to the effort to use it then?
Definitely some overlap there.
Digital wizard.
I’m sure it works fine when held like a pen…
Lmao. That’s even better when you consider the Copilot button replaced the ‘show desktop’ (i.e. ‘minimize all my windows’) button.
After reading this thread and a few other similar ones, I tried out BorgBackup and have been massively impressed with its efficiency.
Data that hasn’t changed, is stored under a different location, or is otherwise identical to what’s already in the backup repository (whether in the backup currently being created or in any historical backup) isn’t replicated. Only the information required to link that existing data to its doppelgangers is stored.
The original set of data I’ve got being backed up is around 270GB, and I currently have 13 backups of it. Raw, that’s 3.78TB of data. After just compression using zlib, that’s down to 1.56TB. But the incredible bit is after de-duplication (the part described in the above paragraph): the raw data stored on disk for all 13 of those backups is 67.9GB.
I can mount any one of those 13 backups onto the filesystem, or extract any of the 3.78TB of files directly from that backup repository of just 67.9GB.
Linux?
I just use sshfs to mount SSH shares and move files between them like any other folder.
Same with Samba shares (Windows).
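A sketch of that workflow, with host and paths as placeholders (assumes sshfs and FUSE are installed):

```shell
mount_ssh_share() {
    mkdir -p "$2"        # $2: local mountpoint
    sshfs "$1" "$2"      # $1: remote spec, e.g. user@nas.local:/srv/share
}

unmount_ssh_share() {
    fusermount -u "$1"   # FUSE unmount (on macOS use: umount "$1")
}

# mount_ssh_share user@nas.local:/srv/share ~/mnt/nas
# cp ~/mnt/nas/somefile ~/Documents/   # behaves like any local folder
# unmount_ssh_share ~/mnt/nas
```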
Excellent. I was wondering why Android passkey support was taking so long, but effectively rebuilding the entire app, twice, would definitely do it…
Drink less paranoia smoothie…
I’ve been self-hosting for almost a decade now; never bothered with any of the giants. Just a domain pointed at me, and an open port or two. Never had an issue.
Don’t expose anything you don’t share with others; monitor the things you do expose with tools like fail2ban. VPN into the LAN for access to everything else.