• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: June 22nd, 2023

  • In my opinion, the difference is that Google actively uses your data, and you’re giving them a lot of it. What does Cloudflare have, exactly? It depends on which services you use, but really all they get from me is the list of servers that connect to my domains. Google gets that too if you use 8.8.8.8, or if you have any of their hardware that overrides router DNS settings, like Chromecast and Google TV.


  • I mean, it depends on the intensity of the surge, but basically you’d be leaving your PSU unable to protect your devices from surges. The more sensitive the electronics, the more critical the ground is, and CPUs are pretty darned sensitive, among other things. And depending on the components in the PSU, “surges” also include things like inrush current. Basically, when you turn on a transformer or certain other devices, there is a momentary surge of sometimes as much as 10 times the rated current to establish the initial magnetic flux. Depending on the components, that excess energy may end up getting shunted to ground to avoid pushing it through your electronics. So if the PSU can’t do that, you’ll likely blow fuses a lot when switching the power on (hopefully there are fuses), or you may end up getting a jolt when you touch the case, which is supposed to be grounded.
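
    To put rough numbers on that 10x figure, here’s a back-of-the-envelope sketch; the 650 W rating and 120 V mains are hypothetical example values, not anything specific:

    ```python
    # Rough inrush estimate using the ~10x rule of thumb above.
    # The PSU rating and mains voltage are hypothetical examples.
    rated_watts = 650                          # PSU rating (hypothetical)
    mains_volts = 120                          # typical NA mains voltage
    steady_amps = rated_watts / mains_volts    # ~5.4 A steady-state draw
    inrush_amps = 10 * steady_amps             # ~54 A momentary spike
    print(f"steady: {steady_amps:.1f} A, inrush: {inrush_amps:.1f} A")
    ```

    A momentary spike like that is exactly the kind of thing that pops a fast-blow fuse even though the steady-state draw is modest.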

    Anyway, without grounded outlets, and especially if your electronics are cheaply made (many expect a ground to be present and don’t build in extra components to cope without one), you are likely to significantly shorten the life of your electronics, or your own life, or start a fire, without even considering major surges. If you have a high-end PSU, you may never have a problem until that surge happens. How stable is your power? Because even a normally small surge, combined with a cheap PSU and no ground, is pretty likely to end in damaged electronics at best.


  • Automate as much as possible. I rsync everything I host, both at home and in the cloud, to both an online and a home NAS. Updates for the OS and low-level libraries are automated. The other updates are generally manual, which lets me set aside time for fixing problems that updates might cause while still getting most of the critical security updates. And my update schedules are generally during the day, so that if something doesn’t restart properly, I can fix it.
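
    The backup side is basically a couple of rsync calls on a schedule. Here’s a minimal sketch; the paths and hostnames are made-up examples, and it assumes rsync plus SSH key auth to both targets:

    ```python
    #!/usr/bin/env python3
    """Nightly backup sketch: mirror one tree to a home NAS and an
    offsite host. All paths and hostnames are hypothetical examples."""
    import subprocess

    SOURCE = "/srv/services/"  # trailing slash: copy contents, not the dir
    TARGETS = [
        "backup@nas.local:/volume1/backups/services/",    # home NAS
        "backup@vps.example.com:/backups/services/",      # offsite
    ]

    for target in TARGETS:
        # -a preserves perms/times/links; --delete mirrors removals
        subprocess.run(["rsync", "-a", "--delete", SOURCE, target], check=True)
    ```

    Run it from cron or a systemd timer during the day, for the same reason as the updates: if something fails, you’re around to notice.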

    Also, whenever possible I budget a fair amount of time for updates, far beyond what they should actually take. That way I’m not rushed into fixing a problem badly, reverting to a backup, and finding time later to redo it. And most of the time it leaves me extra time to analyze stats and see if I can improve performance or save money with optimizations.

    I’ve never had a remote provider just suddenly vanish, though I use fairly well-known hosts. As for local hardware, I just have to do without until I can buy a replacement. Or, if that’s going to take a while, I have old hardware I could set up as a makeshift temporary replacement: old desktop computers, plus experimenting hardware like my Le Potato, which isn’t powerful enough for much but is OK for the short term.

    And finally, I’ve been moving to more container-based setups that are easier to get up and running again. I’ve been experimenting with Nomad, Docker Swarm, K3s, etc., along with Traefik and some other reverse proxies, so I can keep the workers off the public internet for security.


  • I self-host a lot, but mostly on cheap VPSs, in addition to the few services on local hardware.

    However, these comparisons also don’t take into account the time and money needed to maintain the networks and equipment. Residential electricity isn’t cheap; internet access isn’t cheap, especially if you have to get business-class internet to get upload speeds over 10 or 15 Mbps, or to avoid TOS violations for running what the ISP considers commercial services even if they’re just for you, mostly because of cable company monopolies; cooling the hardware isn’t cheap, especially if you live in a hotter climate; and then there’s maintaining the hardware and OS, upgrades, offsite backups for disaster recovery, and all the other costs. For me, VPSs work, but for others maintaining the OS and software is too much time to put in. And just figuring out what software to host, and then how to set it up and properly secure it, takes a ton of time.
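
    Just the electricity is easy to underestimate. A quick back-of-the-envelope, with made-up but plausible numbers:

    ```python
    # Monthly electricity cost for an always-on home server.
    # Wattage and rate are hypothetical; plug in your own numbers.
    watts = 100                              # average draw, server + disks
    rate_per_kwh = 0.15                      # residential rate in USD
    kwh_per_month = watts / 1000 * 24 * 30   # ~72 kWh
    cost = kwh_per_month * rate_per_kwh      # ~$10.80
    print(f"{kwh_per_month:.0f} kWh -> ${cost:.2f}/month")
    ```

    That’s already in the ballpark of a cheap VPS, before you’ve paid for the hardware, internet, or cooling.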





  • irotsoma@lemmy.world to Selfhosted@lemmy.world · Should I move to Docker? · 7 months ago

    Docker is nice for things that have complex installations where I want a very specific implementation that I don’t plan to tweak much. Otherwise, it’s more hassle than it’s worth. There are lots of networking issues, like limited/experimental support for IPv6, and too much is hidden and preconfigured, making it difficult to make adjustments that would otherwise just be a config file change.

    So it’s good for products like a mail server where you want to use exactly the software the image ships, say postfix + dovecot + roundcube + nginx + acme + MySQL + SpamAssassin + amavisd, etc. But if you want to use an existing reverse proxy and cert setup, or a different spam filter or database, it becomes a huge hassle.


    My Libre Computer Le Potato works well enough. I use it to run OctoPrint on Debian Bullseye, with the GPIO pins turning the printer on and off, reading some temperature sensors, and controlling the fan in the external electronics enclosure I built. But to be fair, it was a lot more trouble to set up than if I could have gotten my hands on a Raspberry Pi at the time.
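
    The on/off part of the GPIO side is only a few lines once you know the right chip and line. A minimal sketch using the libgpiod Python bindings (v1 API); the chip name, line offset, and consumer label here are hypothetical and vary by board:

    ```python
    import gpiod

    # Chip name and line offset are hypothetical; check `gpioinfo`
    # for your board's actual layout.
    chip = gpiod.Chip("gpiochip0")
    line = chip.get_line(17)
    line.request(consumer="octoprint-psu", type=gpiod.LINE_REQ_DIR_OUT)

    line.set_value(1)   # power the printer on
    # ... print job runs ...
    line.set_value(0)   # power it back off
    line.release()
    ```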


  • Depends on the language and platform, as well as how asynchronous things are. For example, lots of platforms have little to no debugging support for scripting languages. I write a lot of Groovy on a platform whose debugger is mostly too much trouble to connect my IDE to, since the platform can’t run locally, and even then it doesn’t debug the Groovy code at all.

    And with asynchronous stuff it’s often difficult to tell when something isn’t running in the right order without some kind of debug logging. Though in most cases I use the logger rather than printing directly to the console, so the statements can be left in and configured to print only when the logging level is set to debug, which can be set per environment.
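
    The same pattern in Python, since it’s the same idea in any language with a leveled logger (the LOG_LEVEL variable name is just an example):

    ```python
    import logging
    import os

    # Level comes from the environment, so debug output can be turned on
    # in dev/test without code changes. LOG_LEVEL is a made-up name.
    logging.basicConfig(level=os.environ.get("LOG_LEVEL", "INFO").upper())
    log = logging.getLogger(__name__)

    def handle_event(event_id: int) -> None:
        # Left in permanently; only emitted when LOG_LEVEL=DEBUG.
        log.debug("handling event %s", event_id)
    ```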


  • Hey, I contributed two of those just this month, reinstalling Windows after an update broke my RAID array by modifying the partitions on one of the drives rather than the array, and then decided the drive was corrupt. That was the last straw: I gave up and installed Linux as my primary OS. I’m only keeping Windows for the few games that require it; otherwise I’d just run a VM for the few other times I might need it.




  • Yeah, and contracting is weird, too. I worked for a company that built a regression-testing product: it verified that upgrades to the major components our systems integrated with didn’t change functionality in ways that would cause incorrect pricing or other issues. The testers at the companies that bought it loved it, but it was an annual fee, and they couldn’t justify the cost without a specific upgrade planned in advance. Instead, they all went back to spending up to 100x as much hiring contractors to manually create test data and analyze the results. The worst part is that the divisions that purchased the software could easily have convinced the other divisions to use it, and there would have been plenty of projects every year even if each division only had one project every two years or so.

    But nope. Can’t collaborate and share expenses or they’ll lose their funding. Better to have big spikes in spending so that they could look like they were saving money all the rest of the time. Otherwise, they would lose all of their permanent staff to budget cuts.



  • Yeah. I get it if it’s a market where things change quickly, so all you need is a quick-and-dirty product to get your foot in the door with customers. And sometimes it’s easier to build something more targeted rather than collaborating on a more generic solution.

    I don’t really work in that kind of industry, though, and the kinds of things I’m talking about aren’t things that take years to develop. For example, just in the last two months I built a solution that will let literally hundreds of small upcoming projects, spread across four teams, be implemented in a single two-week sprint by one or two people, depending on complexity. Previously, each of these was taking 3 to 4 people 2-3 months to implement, plus pulling people away from maintaining the existing system, so they were going to need to ramp up on engineers pretty quickly.

    Plus, this solution doesn’t require code deployments to onboard new customers, only to implement the new functionality that each of these small projects adds. The old solution could have meant waiting months for a window to deploy code just to onboard a new customer, because so many things were hard-coded. Our system is extremely high volume, and downtime can mean not just losing money but fines for missing timeliness regulations, so deployments are heavily controlled.

    And of the two months I spent on this, only about a week was research and development. The rest was winning the trust of the other tech leads, gathering their requirements, and getting them all to agree on things like naming conventions. Partly because they’d been burned too many times, and partly because I’d only been there for 2 years and wasn’t even the tech lead of my team yet when I started this, though I was about to be, since the lead was moving to a newly formed team. And sure, if you had joined one of those meetings in the first week or two, it might have seemed like a waste of time, with all the bickering and nitpicking. But that’s just because they didn’t yet believe it was possible to collaborate and still get things done.

    The company was happily going to hire a bunch of contractors to build these things in order to maintain the silos and “competition”. It’s only because of a new manager, whom I’d built trust with over the last year, that no one interfered when I started pulling people together and “wasting time” on collaboration. It’s not even that middle management is doing these things maliciously in most of the places I’ve worked. They’ve just been brainwashed to believe that making people compete makes them more productive than letting them collaborate. But it’s only the worst engineers that need that threat of losing, and only the worst ones that will stick around to play the game, since good engineers just want to build stuff.


  • And this is how you end up with five different parts of the company building pretty much the same thing, because if there was a central team creating shared components, they wouldn’t bring in any profit to justify their existence. But hey, at least there are no dependencies. And competition between teams drives innovation, right?

    So tired of this line. The first thing I do on any team I join is start building bridges, sharing information, and collaborating on shared components that have the features all the teams need, so we’re not wasting everyone’s time building ten crappy, buggy versions of the same thing with slight variations, and instead build a single, well-designed, well-tested version that suits us all. But it’s always an uphill battle. Experienced engineers are hesitant to trust, even when it’s exactly what they all want; they’ve been burned, or even punished by management policy, for collaborating.