• 1 Post
  • 13 Comments
Joined 1 year ago
Cake day: June 7th, 2023

  • Have you considered just beige boxing a server yourself? My home server is a mini-ITX board from Asus running a Core i5, 32GB of RAM and a stack of SATA HDDs all stuffed in a smaller case. Nothing fancy, just hardware picked to fulfill my needs.

    Limiting yourself to bespoke systems means limiting yourself to what someone else wanted to build. The main downside to building it yourself is ensuring hardware compatibility with the OS/software you want to run. If you are willing to take that on, you can tailor your server to just what you want.



  • No, but you are the target of bots scanning for known exploits. The time between an exploit being announced and threat actors adding it to commodity bot kits is incredibly short these days. I work in Incident Response and seeing wp-content in the URL of an attack is nearly a daily occurrence. Sure, for whatever random software you have running on your normal PC, it’s probably less of an issue. Once you open a system up to the internet and constant scanning and attack by commodity malware, falling out of date quickly opens your system to exploit.


  • Short answer: yes, you can self-host on any computer connected to your network.

    Longer answer:
    You can, but this is probably not the best way to go about things. The first thing to consider is what you are actually hosting. If you are talking about a website, this means that you are running some sort of web server software 24x7 on your main PC. This will be eating up resources (CPU cycles, RAM) which you may want to dedicate to other processes (e.g. gaming). Also, anything you do on that PC may have a negative impact on the server software you are hosting. Reboot and your server software is now offline. Install something new and you might have a conflict bringing your server software down. Lastly, if your website ever gets hacked, then your main PC also just got hacked, and your life may really suck. This is why you often see things like Raspberry Pis being used for self-hosting. It moves the server software onto separate hardware which can be updated/maintained outside a PC which is used for other purposes. And it gives any attacker on that box one more step to cross before owning your main PC. Granted, it’s a small step, but the goal there is to slow them down as much as possible.

    That said, the process is generally straightforward. Though, there will be some variations depending on what you are hosting (e.g. webserver, Nextcloud, Plex, etc.). And, your ISP can throw a massive monkey wrench in the whole thing if they use CG-NAT. I would also warn you that, once you have a presence on the internet, you will need to consider the security implications of whatever it is you are hosting. With the most important security recommendation being “install your updates”. And not just OS updates, but keeping all software up to date. And, if you host WordPress, you need to stay on top of plugin and theme updates as well. In short, if it’s running on your system, it needs to stay up to date.

    The process generally looks something like:

    • Install your updates.
    • Install the server software.
    • Apply updates to the software (the installer may be an outdated version).
    • Apply security hardening based on guides from the software vendor.
    • Configure your firewall to forward the required ports (and only the required ports) from the WAN side to the server.
    • Figure out your external IP address.
    • Try accessing the service from the outside.

    Optionally, you may want to consider using a Dynamic DNS service (DDNS) (e.g. noip.com) to make reaching your server easier. But, this is technically optional, if you’re willing to just use an IP address and manually update things on the fly.
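If you want to script that last “try accessing the service from the outside” step, a small sketch like this can check whether a TCP port answers. This is just an illustration; the commented-out address is a placeholder, not a real host.

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder address): check your forwarded port from outside.
# print(port_is_open("203.0.113.5", 443))
```

Run it from a machine outside your network (or a phone hotspot), since testing from inside won’t exercise the port forward on many routers.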

    Good luck, and in case I didn’t mention it, install your updates.


  • At the time I stood my server up, I was supporting RHEL at work and support for docker seemed a bit spotty. IIRC, it took both setting up the docker yum repo directly, along with the EPEL repo. And every once in a while, you could end up in dependency hell from something which was at different versions between EPEL and the official repos. Ubuntu, on the other hand, had better docker support in the official repos and docker seemed more targeted at .deb distributions. So, I made the choice to go Ubuntu.

    I suspect this is long since all sorted. But, I see no compelling reason to change distributions now. The base OS is solid and almost everything the server does is containerized anyway. If I were to rebuild it, I would probably use something more targeted at containerization/virtualization, like Proxmox.


  • sylver_dragon@lemmy.world to Linux@lemmy.ml · Before your change to Linux (edited 3 months ago)

    I had dabbled with Linux before, both at home and work. Stood up a server running Ubuntu LTS at home for serving my personal website and Nextcloud. But, gaming kept my main machine on Win10. Then I got a Steam Deck and it opened my eyes to how well games “just worked” on Linux. I installed Arch on a USB drive and booted off that for a month or so and again, games “just worked”. I finally formatted my main drive and migrated my Arch install to it about a week ago.

    I’m so glad that I won’t be running Windows Privacy Invasion Goes to 11.



  • My experience has been pretty similar. With Windows turning the invasive crap up to 11, I decided to try and jump to Linux. The catch has always been gaming. But, I have a Steam Deck and so have seen first hand how well Proton has been bridging that gap, and finally decided to dip my toes back in. I installed Arch on a USB 3 thumbdrive and have been running my primary system that way for about a month now. Most everything has worked well. Though, with the selection of Arch, I accepted some level of slamming my head against a wall to get things how I want them. That’s more on me than Linux. Games have been running well (except for an input bug in Enshrouded after the recent major update, which has since been fixed). I’ve had no issues with software; I was already using mostly FOSS anyway. It’s really been a lot of “it just works” all around.


  • And once you have found your specific collection of plugins that happen not to put the exact features you need behind a paywall (unlike so many others), you ain’t touching those either.

    And this is why, when I’m investigating phishing links, I’ve gotten used to mumbling, “fucking WordPress”. WordPress itself is pretty secure. Many WordPress plugins, if kept up to date, are reasonably secure. But, for some god forsaken reason, people seem to be allergic to updating their WordPress plugins and end up getting pwned and turned into malware serving zombies. Please folks, if it’s going to be on the open internet, install your fucking updates!



  • The answer to that will be everyone’s favorite “it depends”. Specifically, it depends on everything you are trying to do. I have a fairly minimal setup, I host a WordPress site for my personal blog and I host a NextCloud instance for syncing my photos/documents/etc. I also have to admit that my backup situation is not good (I don’t have a remote backup). So, my costs are pretty minimal:

    • $12/year - Domain
    • $10/month - Linode/Akamai containers

    The domain fee is obvious: I pay for my own domain. For the containers, I have 2 containers hosted by the bought-up husk of Linode. The first is just a Kali container I use for remote scanning and testing (of my own stuff and for work). So, not a necessary cost, but one I like to have. The other is a Wireguard container connecting back to my home network. This is necessary as my ISP makes use of CG-NAT. The short version of that is, I don’t actually have a public IP address on my home network and so have to work around that limitation. I do this by hosting Nginx on the Wireguard container and routing all traffic over a Wireguard VPN back to my home router. The VPN terminates on the outside interface and then traffic on 443/tcp is NAT’d through the firewall to my “server”. I have an Nginx container listening on 443 and, based on host headers, traffic goes to either the WordPress or NextCloud container, which do their magic respectively. I also have a number of services, running in containers, on that server. But, none of those are hosted on the internet. Things like PiHole and Octoprint.
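That host-header dispatch Nginx is doing can be sketched in a few lines. The hostnames and backend ports below are made up for illustration; they just stand in for the WordPress and Nextcloud containers.

```python
# Sketch of host-header routing, the way a reverse proxy decides where
# a request goes. Hostnames and backend addresses are hypothetical.
BACKENDS = {
    "blog.example.com":  ("127.0.0.1", 8080),  # WordPress container
    "cloud.example.com": ("127.0.0.1", 8081),  # Nextcloud container
}

def pick_backend(host_header: str):
    """Map an HTTP Host header to a backend, ignoring any :port suffix."""
    host = host_header.split(":")[0].strip().lower()
    return BACKENDS.get(host)  # None -> no matching server block
```

A real reverse proxy also handles TLS termination, header rewriting, and so on, but the routing decision itself really is this simple lookup.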

    I don’t track costs for electricity, but that should be minimal for my server. The rest of the network equipment is a wash, as I would be using that anyway for home internet. So overall, I pay $11/month in fixed costs and then any upgrades/changes to my server have a one-time capital cost. For example, I just upgraded the CPU in it as it was struggling under the Enshrouded server I was running for my wife and me.


  • Attempt at serious answer (warning: may be slightly offensive)

    Wow, you are a fucking moron. But, there is an interesting question buried in there, you just managed to ask it in a monumentally stupid way. So, let’s pick this apart a bit. Assuming Trump gets re-elected and speed-runs the US into global irrelevancy, what happens to the various standards and standards bodies? tl;dr: Not much.

    • FIPS - This will be the most affected. If companies no longer need to care about working with the US Government (USG), no one is going to bother with FIPS. FIPS is really only a list of cryptographic standards which are considered “secure enough” for USG use. The standards won’t actually change, and the USG may still continue to update FIPS; people would just stop noticing.
    • Unicode - Right, so Unicode is a character set maintained by the Unicode Consortium. Maybe with the US being less dominant, we see the inclusion of more stuff; but, it’s just a way to define printable characters. It works incredibly well and there’s no reason such would be abandoned. Also, there are already plenty of other code pages; Unicode is just popular because it covers so much. Maybe the headquarters for the consortium ends up elsewhere.
    • ANSI - Isn’t a standard, it’s a US standards body (a private one, despite the official-sounding name). So, assuming it stops being good at its job, other countries/organizations would likely stop listening to its ideas. The ANSI standards which exist will continue to exist; if ANSI continues to exist, it’ll probably keep publishing standards, but only the US would care about them.
    • ISO - Again, this isn’t a standard, it’s a Non-Governmental Organization, headquartered in Switzerland. Also, ISO is not an acronym, it’s borrowed from Greek. And ya, this one would almost certainly keep chugging along. Probably a bit more Euro-centric than they are now, but mostly unchanged.
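To make the Unicode point above concrete: a code point is just a number attached to a character, and that mapping doesn’t care where the consortium is headquartered. A quick illustration in Python:

```python
# Every character is a numbered code point, independent of platform,
# font, or which country hosts the Unicode Consortium.
euro = "\u20ac"               # U+20AC EURO SIGN
print(ord(euro))              # 8364 (0x20AC)
print(euro.encode("utf-8"))   # b'\xe2\x82\xac'
```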

    For this reason, and a lot of other reasons, I am in favor of libertarianism because then, it would not be a government run by octogenarians deciding standards for communication,

    It’s ok, I was young and stupid once too. The fact is that, while many telecommunications standards started off in the US, and some even in the USG, most of them have long since been handed off to industry groups. The Internet Engineering Task Force is responsible for most of the standards we follow today. They were spun off from the USG in 1993 and are mostly a consensus driven organization with input from all over the world. In a less US centric world, the makeup of the body might change some. But, I suspect things would keep humming along much as they have for the last few decades.

    Will we live in a post-standard world?

    This depends on the level of fracturing of networks. Over time, there has been a move towards standardization because it makes sense. Sure, companies resist and all of them try to own the standard, but there has been a lot of pushback against that and often from outside the US. For example, the EU’s law to require common charging ports. In many ways, the EU is now doing more for standardization than the US.

    Worse, cryptography. Well, for ‘serious shit’, people roll their own crypto because…

    Tell me you know fuck all about security without saying you know fuck all about security. There is a well-accepted maxim, called “Schneier’s law”, based on a classic essay of his. It’s often shortened to “Don’t roll your own crypto”. And this goes back to that FIPS standard mentioned earlier. FIPS is useful mostly because it keeps various bits of the USG from picking bad crypto. The algorithms listed in FIPS are all bog-standard stuff, from things like the Advanced Encryption Standard (AES) process. The primitives and standards are the primitives and standards because they fucking work and have been heavily tested and shown to be secure over a lot of years of really smart people trying to break them. Ironically, it was that same sort of open testing that resulted in the NSA being caught trying to create a crypto backdoor.
    So no, for ‘serious shit’ no one rolls their own crypto, because that would be fucking dumb.
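In practice, “not rolling your own” just means calling the standardized primitives. For instance, message authentication with HMAC-SHA256 (RFC 2104 / FIPS 198-1) is a few lines of Python’s standard library; the `sign`/`verify` names here are my own, but every piece doing cryptographic work is a vetted primitive.

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> bytes:
    # HMAC-SHA256: a standardized construction, not something we invented.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    # Constant-time comparison, to avoid leaking where the mismatch is.
    return hmac.compare_digest(sign(key, message), tag)
```

The only “custom” part is wiring the calls together, and even that follows the published construction.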

    But what about primitives? For every suite, for every protocol, people use the same primitives, which are standardized.

    And ya, they would continue to be; as said above, they have been demonstrated over and over again to work. If they are found not to work, people stop using them (see SHA-1, MD5, DES). It’s funny that, for someone who is “in favor of libertarianism”, you seem to be very poorly informed of examples where private groups and industry are actually doing a very good job of things without government oversight.
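That retirement process is even visible in language runtimes: Python still exposes MD5, but security-sensitive code is expected to reach for SHA-256, and FIPS-mode builds can disable the broken primitives outright. The hex digests below are the published test vectors for the string "abc".

```python
import hashlib

# SHA-256 is the current workhorse; MD5 survives only for non-security
# uses like checksums and cache keys.
print(hashlib.sha256(b"abc").hexdigest())
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
print(hashlib.md5(b"abc").hexdigest())
# 900150983cd24fb0d6963f7d28e17f72
```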

    Overall, you seem to have a very poor understanding of how these standards get created in the modern world. Yes, the US was behind a lot of them. But, as they have been handed over to private (and often international) organizations, they have moved further and further away from US Government control. Now, that isn’t to say that US Based companies don’t have a lot of clout in those organizations. Let’s face it, we are all at the mercy of Microsoft and Google way too often. But, even if those companies fall to irrelevance, the organizations they are part of will likely continue to do what they already do. It’s possible that we’d see a faster balkanization of the internet, something we already see a bit of. Countries like China, Iran or Russia may do more to wall their people off from US/EU influence, if they don’t have an economic interest in some communications. Though, it’s just as likely that trade will continue to keep those barriers to the flow of information as open as possible.

    The major change could really be in language. Without the US propping it up, English may lose its standing as the lingua franca of the world. As it stands right now, it’s not uncommon for two people, neither of whom speaks English as their native language, to end up conversing in English as that is the language the two of them share. If a new superpower rises, perhaps the lingua franca shifts and the majority of sites on the internet shift with it. Though, that’s likely to be a multi-generational change. And it could be a good thing. English is a terrible language; it’s less a language and more three languages dressed up in a trench coat pretending to be one.

    So yes, there would likely be changes over time. But, it’s likely more around the edges than some wholesale abandoning of standards. And who knows, maybe we’ll end up with people learning to write well-researched and thought-out questions on the internet, and not whatever drivel you just shat out. Nah, that’s too much to hope for.