Hiker, software engineer (primarily C++, Java, and Python), Minecraft modder, hunter (of the Hunt Showdown variety), biker, adoptive Akronite, and general doer of assorted things.

  • 1 Post
  • 100 Comments
Joined 11 months ago
Cake day: August 10th, 2023


  • Yes, WireGuard was designed to fix a lot of these issues. It does change the equation quite a bit. I agree with you on that (I kind of hinted at it but didn’t spell that out I suppose).

    That said, WireGuard AFAIK still only works well with static IPs/becomes a PITA once dynamic IPs are in play. I think some of that is mitigated if the device being connected to has a static IP (even if the device being connected from doesn’t). However, that doesn’t cover a lot of self hosting use cases.

    Tailscale/ZeroTier/Nebula etc. do transfer some control (Nebula can actually be run with fully internal control, and ZeroTier can be used that way as well, though you’ll have to put more work in with ZeroTier … I don’t know about Tailscale’s offering here).

    Though doing things yourself also (in most cases) means transferring some level of control to a cloud/traditional server hosting provider anyway (e.g., AWS, DigitalOcean, NFO, etc.).

    Using something like ZeroTier can cut out a cloud provider/VPS entirely in favor of a professionally managed SaaS for a lot of folks.

    A lot of this just depends on who you trust more (yourself, or the team running the service(s) you’re relying on) and how much time you can practically devote to maintenance. There’s no “one size fits all” answer, but I think most people are better off using a SaaS to form an internal mesh network and running whatever services they’re interested in inside that network. It’s a nice tradeoff.

    You can still set up device firewalls, SSH key-only authentication, fail2ban, and things of that ilk as a precaution in case their networks do get compromised. These are all things you should do if you’re self hosting … but hobbyists/novices will probably stumble through them or get something wrong, which IMO is more okay in the SaaS case because you’ve got a professional security team keeping an eye on things.



  • The company Tailscale is a giant target and has a much higher risk of getting compromised than my VPN or even accessible services.

    One must be careful with this mindset. A bunch of individually operated smart lightbulbs aren’t a particularly appealing target either. In aggregate, though, they are: if someone can write a script that abuses security flaws in them or their default configuration, then even though you’re not part of a big centralized target, you are part of a class that can be targeted automatically at scale.

    Self hosting only yields better security when you are willing to take steps to adequately secure your self hosted services and implement a disaster recovery strategy.


  • The thing about something like Tailscale or ZeroTier or Nebula is that it’s dynamic. These all behave similarly to a multiplayer game … a use case every residential firewall should “just get.”

    The ports that are “opened” can change regularly; they’re typically not some standard port that can just be checked to see if it’s open.

    Compare that to the average novice opening port 51820 for WireGuard or 22 for SSH and you start to see the difference. With those ports, you’ve got a pretty good idea what’s on the other side, and it might even be willing to talk to you and give you error messages or TCP ACK packets to confirm it’s there (e.g., SSH).

    This advice is, as you can probably imagine, more relevant to things like OpenVPN, which is notoriously hard to configure correctly, or to application protocols like SSH or HTTP.

    With these mesh VPNs you also don’t have to worry about your home dynamic IP changing and breaking your connection at inopportune times… And that’s a huge benefit (IMO). It’s also very easy to tie new devices into the network.

    A lot of it is about outsourcing labor to programs that know how to set up a VPN and make management of it easy. That ties into security because … a LOT of security issues boil down to misconfiguration.




  • Updated two machines, both AMD-based systems; no major issues.

    One fairly low impact SELinux issue with ecryptfs (an error on login about a file read being denied that doesn’t actually seem to have broken anything). A few “this pixel feels out of place” things with KDE, but generally a very smooth update, especially for one that contains a major KDE release.




  • Wow, the responses here are really off at the moment. I’m going to try and help.

    So, what you’re going to want to do is add all the subdomain A records you need to your DNS (it sounds like you’re using Cloudflare for that; not required, but that should be fine).

    Those DNS records are all going to point to the same IP address; that’s fine.

    What you need to do after that, so that you don’t have to enter ports, is a bit more complicated. For web servers, some kind of reverse proxy like nginx, HAProxy, Apache, etc. is what you need. The term you’re looking for is “virtual host.”

    A virtual host setup is basically one where a reverse proxy looks at the domain name that was used to access the server over HTTP and then uses that to decide which server running on the machine you actually talk to.

    It’s HTTP that actually passes along the domain name you used (in the Host header), so if the service isn’t HTTP you may or may not be able to do anything, depending on the underlying protocol.

    So to recap:

    1. Set up your DNS records
    2. Set up an HTTP reverse proxy
    3. Add virtual hosts to the reverse proxy for each service you added a DNS record for (so that the reverse proxy can quietly turn foo.example.com into example.com:xyz – localhost:xyz in practice – behind the scenes; see the sketch after this list)
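
    To make the “virtual host” idea concrete, here’s a minimal sketch in Python of the routing decision a reverse proxy makes. The hostnames and backend ports are made up, and a real setup would do this with nginx/HAProxy config rather than hand-rolled code; this just shows what “route by Host header” means.

    ```python
    # Toy demonstration of Host-header ("virtual host") routing.
    # Hypothetical hostnames/ports; a real reverse proxy would forward the
    # request to the chosen backend instead of just reporting the match.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    BACKENDS = {
        "foo.example.com": 8081,  # e.g. your wiki
        "bar.example.com": 8082,  # e.g. your media server
    }

    class Router(BaseHTTPRequestHandler):
        def do_GET(self):
            # Strip any ":port" suffix from the Host header, then look it up.
            host = (self.headers.get("Host") or "").split(":")[0]
            port = BACKENDS.get(host)
            if port is None:
                self.send_error(404, "unknown virtual host")
                return
            body = f"{host} would be proxied to localhost:{port}\n".encode()
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), Router).serve_forever()
    ```

    Every subdomain resolves to the same IP and the proxy listens on one port; the only thing that differs per service is the Host header the browser sends, and that’s what the proxy keys off of.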


  • Hmmm… That’s true. My rough litmus test is “can you explain what this thing does in fairly precise language without having to add a bunch of qualifiers for different cases?”

    If you meet that bar, the function is probably fine/doesn’t need to be broken up further.

    That said, I don’t particularly care how many functions I have to jump through or what their line count is because I can verify “did the function do the thing it says it’s supposed to do?” after it’s called in a debugger. If it did, then I know my bug isn’t there. If it didn’t, I know where to look.

    Just like with commits, I’d rather have more small commits to feed through git bisect than a few larger commits, because it makes identifying where/when a contract/test case/invariant was violated much more straightforward.
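
    As a toy illustration of that litmus test (names are entirely hypothetical): the first function below fails it, the two after it pass it, and the latter are also the ones that are easy to check in a debugger or pin down with git bisect.

    ```python
    # Fails the litmus test: "parses the line, and also maybe logs, and also
    # retries the upload, unless dry_run is set..." Lots of qualifiers.
    def process_record(raw, dry_run=False, log_path=None, retries=3):
        ...

    # Passes: each function has a one-sentence, qualifier-free description.
    def parse_record(raw: str) -> dict:
        """Given a raw 'id,name,amount' line, return its fields as a dict."""
        record_id, name, amount = raw.strip().split(",")
        return {"id": record_id, "name": name, "amount": float(amount)}

    def total_amount(records: list[dict]) -> float:
        """Given parsed records, return the sum of their 'amount' fields."""
        return sum(r["amount"] for r in records)
    ```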


  • This only leads to bad code when people get too afraid to refactor things in light of the new requirements, which sadly happens far too often. People seem to like to keep what was there already and follow existing patterns even well after they are no longer suitable. I have made quite a lot of bad code better by just ripping out the old patterns and putting back something that better fits the current requirements - quite often in code I have written before and others have added to over time.

    Yup, this is part of what’s led me to advocate for SRP (the single responsibility principle). If you have everything broken down into pieces where the description of the function/class is something like “given X this function does Y” (and unrelated things thus aren’t unnecessarily coupled), it makes reorganization of the higher-level logic to fit the current requirements a lot easier.

    For instance, I see this a lot in DRY code. While the rules themselves are useful to know and apply, they are too easily over-applied, removing any benefit they originally gave and resulting in overly abstract code. I’ve lost count of the number of times I have added duplication back into code to remove a layer of abstraction that was not working, only to maybe reapply it in a different way, often keeping some duplication.

    Preach. DRY is IMO the most abused/misunderstood best practice, particularly by newer programmers. DRY is not about compressing your code/minimizing line count. It’s about … avoiding things like writing the exact same general algorithm (e.g., a sort) inline in a dozen places. People are really good at finding patterns and “over fitting,” making up abstractions that make no sense.
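
    A contrived Python sketch of that distinction (all names made up): the first function is real, algorithmic duplication worth factoring out; the second fuses two things that merely look similar today, which is exactly the over-fitting being described.

    ```python
    # Worth DRYing: the same general algorithm previously written inline in a
    # dozen places.
    def top_n(items, n, key):
        """Return the n largest items according to key."""
        return sorted(items, key=key, reverse=True)[:n]

    # Over-applied DRY: users and invoices only *look* similar today, so they
    # got fused into one "reusable" function. When the requirements for one of
    # them change, the other is in the blast radius; two small, duplicated-looking
    # functions would have been cheaper.
    def format_thing(thing, kind):
        if kind == "user":
            return f"{thing['name']} <{thing['email']}>"
        if kind == "invoice":
            return f"invoice #{thing['number']}: {thing['total']:.2f}"
        raise ValueError(kind)
    ```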


  • I agree; I prefer a “hammer and chisel” strategy. I tend to leave things a little less precisely organized/factored earlier in the project and then make some incremental passes to clean things up as it becomes clearer that what I’ve done handles all the cases it needs to handle.

    It’s in the same vein as “don’t prematurely optimize.”

    Minimizing the responsibilities of individual functions/classes/components is the only thing that I take a pretty hard line on: making sure that I can reason about the code later and objectively say simple sentences like “given X this does Y.” I want all the complex pieces to be isolated into their own individual smaller pieces that can be reasoned about.

    In all of the code bases I’ve been in where I go “oh my god, why?”, the typical reason has been that that’s not true: when I’m in a function I don’t know what it does, because it does a lot of things depending on different state flags.


  • I’ll contend there is such a thing as good code. I don’t think experienced devs always do the best job of passing on what works and what doesn’t, though. Universities certainly could do more software engineering/architecture.

    My personal take is that SRP (the single responsibility principle) is the #1 thing to keep in mind. In my experience DRY (don’t repeat yourself) often takes precedence over SRP (IMO because DRY is easy to (mis-)understand), and that ends up making some major messes when good/reasonable code is rewritten into some ultra-compact (typically inheritance- or template-based) mess that’s “fewer lines of code, so better.”

    I’ve never regretted using composition (and thus having a few extra lines and a little bit more boilerplate) over inheritance. I’ve similarly never regretted breaking down a function into smaller functions (even if it introduces more lines of code). I’ve also never regretted generalizing code that’s actually general (e.g., a sum N elements function is always a sum N elements function).

    The most important thing with all of these best practices though is “apply it if it makes sense.” If you’re writing some code and you’ve got a good reason to have a function that does multiple things … just write the function, don’t bend over backwards doing something really weird to “technically” abide by the best practice.
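
    As a small sketch of the composition point (hypothetical classes): a couple of extra lines of boilerplate up front, and in exchange the high-level class never has to care how the detail is implemented.

    ```python
    # A detail that could have been a base class...
    class CsvExporter:
        def export(self, rows):
            return "\n".join(",".join(map(str, row)) for row in rows)

    # ...but composition keeps Reporter decoupled: swapping in a JSON exporter
    # (or a fake exporter in tests) never requires touching Reporter itself.
    class Reporter:
        def __init__(self, exporter):
            self.exporter = exporter

        def report(self, rows):
            return self.exporter.export(rows)

    print(Reporter(CsvExporter()).report([[1, 2], [3, 4]]))  # prints two CSV lines
    ```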