Just an Aussie tech guy - home automation, ESP gadgets, networking. Also love my camping and 4WDing.

Be a good motherfucker. Peace.

  • 6 Posts
  • 190 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • It all depends on how you want to homelab.

    I was into low-power homelabbing for a while - half a dozen Raspberry Pis - and it was great. But I’m an incessant tinkerer. I like to experiment with new tech all the time, and I’m always cloning various repos to try out new stuff. I was reaching a limit with what I could achieve with Docker alone, and I really wanted to virtualise my firewall/router. There were other drivers too: I wanted to cut the streaming cord, and saving that monthly spend helped justify what came next.

    I bought a pair of ex-enterprise servers (HP DL360s) and jumped into Proxmox. I now have an OPNsense VM for my firewall/router, and host over 40 Proxmox CTs, running (at a guess) around 60-70 different services across them.

    I love it, because Proxmox gives me full separation of each service. Each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. On top of that, Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.

    Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.

    Let’s say there’s a new contender that competes with Immich, promising a really cool feature for a self-hosted personal photo library that no one else has thought of. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT), accessible via photos.domain on my home network.

    I can spin up a Proxmox CT from my custom Debian template, use my Ansible playbook to provision Docker and all the other bits, access it in Portainer and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.
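
    Something like this, roughly - purely a sketch, where the VMIDs, hostnames, inventory and playbook names are placeholders I’ve made up, and it assumes the script runs on the Proxmox host itself so the pct tool is available:

    ```python
    #!/usr/bin/env python3
    """Rough sketch of the "new CT in ~10 minutes" workflow.

    IDs, hostnames and paths are placeholders, not my real setup. Assumes
    the Ansible playbook takes care of Docker and the Portainer agent.
    """
    import subprocess

    TEMPLATE_VMID = 9000        # hypothetical ID of my Debian CT template
    NEW_VMID = 145              # next free CT ID
    HOSTNAME = "immich-rival"   # placeholder name for the new service

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Full clone of the template into a new container
    run(["pct", "clone", str(TEMPLATE_VMID), str(NEW_VMID),
         "--hostname", HOSTNAME, "--full", "1"])

    # 2. Boot it
    run(["pct", "start", str(NEW_VMID)])

    # 3. Provision Docker and the usual bits with Ansible
    run(["ansible-playbook", "-i", "inventory/homelab.yml",
         "playbooks/docker-host.yml", "--limit", HOSTNAME])

    # From here it's over to Portainer to deploy the actual stack.
    ```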

    I have a play with the competitor for a bit. If I don’t like it, I just delete the CT and move on. If I do, I can point my photos.domain hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shut down, maybe not - just in case I discover something I don’t like about the new kid on the block.

    That’s a simplified example, but hopefully illustrates at least what I get out of using Proxmox the way I do.

    The main con for me is the cost: the initial cost of the hardware, and the cost of powering beefier kit like this. I’m about to invest in some decent centralised storage (I’ve been surviving with a couple of li’l ARM-based NASes) so I can get true HA with my OPNsense firewall (and a few other services), so that’s more cost again.




  • It doesn’t have to be hard - you just need to think methodically through each of your services and weigh the cost of creating/storing the backups you want against the cost (in time, effort, inconvenience, etc.) of rebuilding that service from scratch if you had to.

    For me, that means my photo and video library (currently Immich) and my digital records (Paperless) are backed up using a 2N+C strategy: a copy on each of 2 NASes locally, and another copy stored in the cloud.
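
    The mechanics behind that are nothing fancy - conceptually it’s something like this nightly pass, where the paths and the rclone remote name are placeholders rather than my actual layout, and it assumes both NASes are mounted locally (e.g. over NFS) and an rclone remote is already configured:

    ```python
    #!/usr/bin/env python3
    """Sketch of a 2N+C backup pass: a copy to each of two NASes, plus cloud.

    Paths and the rclone remote name are placeholders. Assumes both NAS
    shares are mounted locally and rclone has a "cloud" remote configured.
    """
    import subprocess

    SOURCE = "/srv/exports/immich/"                        # hypothetical app export
    NAS_TARGETS = ["/mnt/nas1/immich/", "/mnt/nas2/immich/"]
    CLOUD_REMOTE = "cloud:homelab-backups/immich"          # hypothetical rclone remote

    # 2N: mirror the source onto each NAS
    for target in NAS_TARGETS:
        subprocess.run(["rsync", "-a", "--delete", SOURCE, target], check=True)

    # +C: one more copy offsite
    subprocess.run(["rclone", "sync", SOURCE, CLOUD_REMOTE], check=True)
    ```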

    Ditto for backups of my important homelab data. I have some important services (like Home Assistant, Node-RED, etc.) that push their configs into a personal GitLab instance each time there’s a change. So, I simply back that GitLab instance up using the same strategy. It’s mainly raw text files and a small database of git metadata, so it all compresses really nicely.
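
    The push itself is conceptually dead simple - something like the below runs whenever a config changes (the repo path and branch name are placeholders; each service has its own hook or add-on for actually triggering it):

    ```python
    #!/usr/bin/env python3
    """Sketch of the "push config to GitLab on change" idea.

    The repo path and branch name are placeholders; Home Assistant,
    Node-RED, etc. each have their own way of triggering something like this.
    """
    import subprocess
    from datetime import datetime

    REPO = "/config"   # hypothetical path to the service's config directory

    def git(*args):
        subprocess.run(["git", "-C", REPO, *args], check=True)

    git("add", "-A")

    # "git diff --cached --quiet" exits non-zero when there are staged changes
    staged = subprocess.run(["git", "-C", REPO, "diff", "--cached", "--quiet"])
    if staged.returncode != 0:
        git("commit", "-m", f"Config snapshot {datetime.now():%Y-%m-%d %H:%M}")
        git("push", "origin", "main")
    ```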

    For other services/data that I’m less attached to, I only back up the metadata.

    Say, for example, I’m hosting a media library that might replace my personal use of services that rhyme with “GetDicks” and “Slime Video”. I won’t necessarily back up the media files themselves - that would take way more space than I’m prepared to pay for. But I do back up that service’s databases, which tell me what media files I had, and even the exact names of the media files when I “found” them.

    In a total loss of all local data, the inconvenience factor would be quite high, but the cost of storing full media backups would far outweigh it. Using the metadata I do back up, I could theoretically just set about rebuilding the media library from there. If I were hosting something like that, that is…
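
    For a lot of these self-hosted media apps the metadata lives in a SQLite database, so the metadata-only backup can be as simple as the sketch below (paths are made up, and it assumes SQLite - adjust for whatever the app actually uses):

    ```python
    #!/usr/bin/env python3
    """Sketch of a metadata-only backup: copy the app's database, skip the media.

    Paths are placeholders, and it assumes a SQLite-backed app. Uses
    sqlite3's online backup API so the copy is consistent even while
    the app is running.
    """
    import sqlite3

    SRC_DB = "/opt/media-manager/data/library.db"         # hypothetical app DB
    DEST_DB = "/srv/backups/media-manager/library.db"

    src = sqlite3.connect(SRC_DB)
    dst = sqlite3.connect(DEST_DB)
    try:
        src.backup(dst)   # the media files themselves are deliberately skipped
    finally:
        src.close()
        dst.close()
    ```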








  • This may take us down a bit of a rabbit hole but, generally speaking, it comes down to how you route traffic.

    My firewall has an always-on VPN connected to Mullvad. When certain servers (that I specify) connect to the outside, I use routing rules to ensure those connections go via the VPN tunnel. Those routes are only for connectivity to outside (non-LAN) addresses.

    At the same time, I host a server inside that accepts incoming WireGuard client VPN connections. Once I’m connected (with my phone) to that server, my phone appears as an internal client. So the routing rules for Mullvad don’t apply - the servers are simply responding back to a LAN address.
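
    In OPNsense all of that is just firewall/gateway rules in the GUI, but if it helps, here’s the same idea translated to plain Linux policy routing - a conceptual sketch only, where the server IPs, tunnel interface, subnets and table number are all made up:

    ```python
    #!/usr/bin/env python3
    """Conceptual sketch only: the policy-routing idea expressed with Linux
    iproute2, since my actual setup is OPNsense gateway rules in the GUI.

    Server IPs, the tunnel interface, subnets and table number are placeholders.
    """
    import subprocess

    VPN_SERVERS = ["192.168.1.20", "192.168.1.21"]   # servers forced out via Mullvad
    VPN_IFACE = "wg-mullvad"                         # hypothetical Mullvad WireGuard tunnel
    LAN = "192.168.0.0/16"                           # local subnets (incl. the WG client range)
    TABLE = "100"

    def ip(*args):
        subprocess.run(["ip", *args], check=True)

    # Table 100's default route points out the Mullvad tunnel
    ip("route", "add", "default", "dev", VPN_IFACE, "table", TABLE)

    # Higher-priority rule: anything destined for the LAN (including replies to
    # my own WireGuard clients) stays on the normal routing table
    ip("rule", "add", "to", LAN, "lookup", "main", "priority", "90")

    # Lower-priority rule: everything else *from* the chosen servers uses table 100
    for server in VPN_SERVERS:
        ip("rule", "add", "from", server, "lookup", TABLE, "priority", "100")
    ```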

    I hope that explains it a bit better - I’m not aware of your level of networking knowledge, so I’m trying not to over-complicate just yet.





  • I’ve been thinking about exactly the same problem.

    We want to give our near-10yo daughter her first phone, but she’s not allowed to have it at school. She’s also getting to the point where she can be trusted at home for an hour or so before one of us gets home from work, so I also need a presence detection method that doesn’t use a mobile phone.

    My best theoretical solutions are like those already suggested here: an ESP32 BT proxy detecting a homebrew BLE beacon in her school bag, or detecting activity on her iPad/the TV. But neither of those is reliable for all scenarios - she obviously doesn’t take her school bag to her friend’s house, and doesn’t always use her iPad or the TV.

    The only other thing I’m pondering is whether I could set up facial recognition using our video doorbell. I use Frigate with a Coral TPU, so I’m hoping there’s a project out there that can do that.
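
    If I did go the BLE beacon route, the detection side is pretty simple conceptually. This is just a sketch of the idea using a Pi (or similar) with the bleak library, rather than the ESPHome Bluetooth proxy I’d actually use - and the beacon MAC, Home Assistant URL/token and entity ID are all made up:

    ```python
    #!/usr/bin/env python3
    """Sketch of the BLE-beacon presence idea, done with the 'bleak' library
    on a Pi or similar (not the ESPHome bluetooth_proxy route).

    Beacon MAC, Home Assistant URL/token and entity ID are placeholders.
    Requires the 'bleak' and 'requests' packages.
    """
    import asyncio
    import requests
    from bleak import BleakScanner

    BEACON_MAC = "AA:BB:CC:DD:EE:FF"                 # hypothetical beacon address
    HA_URL = "http://homeassistant.local:8123"       # placeholder HA instance
    HA_TOKEN = "long-lived-access-token-here"        # placeholder token
    ENTITY = "binary_sensor.school_bag_home"         # placeholder entity ID

    async def bag_is_home() -> bool:
        devices = await BleakScanner.discover(timeout=10.0)
        return any(d.address.upper() == BEACON_MAC for d in devices)

    def report(state: str) -> None:
        # Home Assistant REST API: create/update an entity's state
        requests.post(
            f"{HA_URL}/api/states/{ENTITY}",
            headers={"Authorization": f"Bearer {HA_TOKEN}"},
            json={"state": state},
            timeout=10,
        )

    if __name__ == "__main__":
        report("on" if asyncio.run(bag_is_home()) else "off")
    ```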


  • Don’t be a dick, mate. Engage just a little bit of critical thinking before calling people names like that.

    By law where I am, our kids aren’t allowed to have their phones at school. My daughter’s school’s policy, then, is that phones are left at the school office.

    We want to give our soon-to-be 10yo daughter her first phone later this year (timed with a planned family trip, so it can be her new camera as well). But if she takes it to school and has to leave it at the office, I can guarantee she’ll absolutely forget on more than one occasion to pick it up before coming home.

    So, her phone will have to stay home. But we’re also getting to the point where she can be trusted to let herself in and wait for one of us to get home (like OP, maybe an hour or so). So a presence detection option can’t be based on whether the phone has moved into the geo zone in HA.

    This is a legitimate question for modern parents. Denigrating OP without knowing or understanding all the facts certainly does shine a light on ignorance at play here. Just not OP’s ignorance.





  • Not heaps, although I should probably do more than I do. Generally speaking, on Saturday mornings:

    • Between 2am and 4am, Watchtower on all my Docker hosts pulls updated images for my containers and notifies me via Slack. Then, over coffee when I get up:
      • For containers I don’t care about, Watchtower auto-updates them as well, at which point I simply check the service is running and purge the old images (roughly what the sketch after this list automates)
      • For mission-critical containers (Pi-hole, Home Assistant, etc), I manually update the containers and verify functionality, before purging old images
    • I then check for updates on my OPNsense firewall, and do a controlled update if required (needs me to jump onto a specific wireless SSID to be able to do so)
    • Finally, my two internet-facing hosts (Nginx reverse proxy and Wireguard VPN server) auto-update their OS and packages using unattended-upgrades, so I test inbound functionality on those
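
    That check-and-purge step is trivially scriptable if I ever stop doing it by eyeball over coffee - something along these lines, where the service URLs are placeholders:

    ```python
    #!/usr/bin/env python3
    """Sketch of the post-Watchtower check: confirm each service responds,
    then purge superseded images.

    Service URLs are placeholders. Requires the 'requests' package and the
    docker CLI on the host.
    """
    import subprocess
    import requests

    SERVICES = {
        "pihole": "http://pihole.lan/admin/",              # placeholder URLs
        "homeassistant": "http://homeassistant.lan:8123/",
    }

    all_ok = True
    for name, url in SERVICES.items():
        try:
            requests.get(url, timeout=5).raise_for_status()
            print(f"{name}: OK")
        except requests.RequestException as exc:
            all_ok = False
            print(f"{name}: FAILED ({exc})")

    # Only reclaim disk space once everything checks out
    if all_ok:
        subprocess.run(["docker", "image", "prune", "-f"], check=True)
    ```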

    What I still want to do is develop some Ansible playbooks to deploy unattended-upgrades across my fleet (~40-ish Debian/Docker LXCs). I fear I have some tech debt growing on those hosts, but I’ve fallen into the convenient trap of knowing my internet-facing gear is always up to date, and I can be lazy about the rest.