• 0 Posts
  • 130 Comments
Joined 2 years ago
Cake day: June 18th, 2023

  • They should be powered on periodically if you want to retain data on them long-term. The controller should automatically check physical integrity and retire bad blocks as needed.

    I’m not sure if just connecting them to power would be enough for the controller to run error correction, or if they need to be connected to a computer. That might be model-specific.

    What server OS are you using? Are you already using some SSDs for cache drives?

    Any backup is better than no backup, but SSDs are really not a good choice for long-term cold storage. You’ll probably get tired pretty quickly of manually plugging them in to check integrity and update the backups.
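    If you do go this route, you can at least script the health checks instead of eyeballing them. A minimal sketch using smartmontools’ smartctl from Python (the device paths are placeholders; adjust for wherever your drives show up, and note it needs root):

    ```python
    import subprocess

    # Placeholder device paths - adjust to wherever your backup SSDs appear.
    DEVICES = ["/dev/sda", "/dev/sdb"]

    for dev in DEVICES:
        # `smartctl -H` asks the drive for its overall SMART health verdict.
        result = subprocess.run(["smartctl", "-H", dev],
                                capture_output=True, text=True)
        print(f"--- {dev} ---")
        print(result.stdout.strip())
        # smartctl's exit status is a bitmask; bit 3 set means the drive
        # itself reports a failing health status.
        if result.returncode & 0b1000:
            print(f"WARNING: {dev} reports FAILING health")
    ```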






  • Installing an OS will always be a hurdle. Most people don’t want to spend much time thinking about how their computer works; they just want to turn it on and have it work. For more people to use Linux, it will have to come preinstalled.

    After that, it needs to be stable. If the audio stops working, most people don’t think “maybe I need to roll back my driver” or “maybe ALSA has muted my output channel for some reason”; they just think “my computer is broken”. These kinds of problems have to go away, or at least be reduced to affecting <1% of users.

    Also, very few people are going to have any patience for any kind of difficulty like “oh, you have to add a different repository to your package manager to play common media formats” (e.g. the AUR or Ubuntu’s multiverse repository). Normal people spend exactly zero time considering what codecs they might need to install to listen to some music, where they might need to get those codecs from, or whether those codecs are open or proprietary or freeware or whatever.




  • Realistically, no organization has so many endpoints that it needs IPv6 on its internal networks; the RFC 1918 private ranges offer more address space than almost anyone can use. There’s no reason to deal with more complicated addressing schemes except on the public Internet. Only the border devices should be using IPv6.

    Hopefully, if an organization has remote endpoints connecting to the internal network over the Internet, they’re doing so through a VPN and can still just be assigned IPv4 addresses on dedicated VLANs when they connect.
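    To put some numbers on it, here’s a quick check of the RFC 1918 private ranges with Python’s standard ipaddress module:

    ```python
    import ipaddress

    # The three RFC 1918 private IPv4 ranges.
    for net in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"):
        n = ipaddress.ip_network(net)
        print(f"{net}: {n.num_addresses:,} addresses")

    # Output:
    # 10.0.0.0/8: 16,777,216 addresses
    # 172.16.0.0/12: 1,048,576 addresses
    # 192.168.0.0/16: 65,536 addresses
    ```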



  • This is an increasing problem, and I’m not sure how the open source community is going to deal with it. It’s been a big problem with NPM packages and Python libraries over the past five years. There’s a bunch of malicious typo-squatting stuff in many package repositories (say you want libcurl but you type libcrul; congratulations, it’s probably there, and it’ll probably install libcurl for you and bring a fun friend along).

    Now with AI slop code getting submitted, it’s not really possible to check every new package upload. And who’s going to volunteer for that work?
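    Package clients could at least warn about near-miss names. A toy sketch of the idea using Python’s stdlib (the popular-package list here is made up for illustration; a real check would pull download-ranked names from the index):

    ```python
    import difflib

    # Illustrative list only - a real client would use download-ranked
    # names from the package index.
    POPULAR = ["requests", "numpy", "pandas", "libcurl", "urllib3"]

    def check_typosquat(requested: str) -> None:
        """Warn if `requested` looks like a typo of a popular package."""
        if requested in POPULAR:
            return  # exact match, nothing suspicious
        close = difflib.get_close_matches(requested, POPULAR, n=1, cutoff=0.8)
        if close:
            print(f"'{requested}' isn't a known popular package, but it's "
                  f"suspiciously close to '{close[0]}' - typo?")

    check_typosquat("libcrul")  # flags the similarity to 'libcurl'
    ```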







  • Encrypting the connection is good; it means that no one should be able to capture the data and read it. But my concern is more about the holes in the network boundary you have to create to establish the connection.

    My point of view is that this isn’t something you want happening automatically unless you configured it yourself and you know exactly how it works, what it connects to, and how it authenticates (and preferably you have some kind of inbound/outbound traffic monitoring on that connection).
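    Even crude monitoring beats nothing here. A rough sketch using the psutil library (the process name is a placeholder for whatever does the connecting; a real setup would alert rather than print):

    ```python
    import psutil

    # Placeholder process name - set this to whatever does the connecting.
    WATCHED = "syncthing"

    # PIDs of matching processes.
    pids = {p.pid for p in psutil.process_iter(["name"])
            if p.info["name"] == WATCHED}

    # Every established connection owned by those PIDs (may need root).
    for conn in psutil.net_connections(kind="inet"):
        if conn.pid in pids and conn.status == psutil.CONN_ESTABLISHED:
            remote = f"{conn.raddr.ip}:{conn.raddr.port}" if conn.raddr else "-"
            print(f"pid={conn.pid} "
                  f"local={conn.laddr.ip}:{conn.laddr.port} remote={remote}")
    ```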


  • NaibofTabr@infosec.pub to Selfhosted@lemmy.world · Syncthing alternatives (4 months ago)

    Ah, just one question - is your current Syncthing use internal to your home network, or does it sync remotely?

    Because if you’re just having your mobile devices sync files when they get on your home wifi, it’s reasonably safe for that to be fire-and-forget; but if you’re syncing from public networks into a private one, that really should require some more specific configuration and active control.


  • NaibofTabr@infosec.pub to Selfhosted@lemmy.world · What do I actually need? (5 months ago)

    My main reasons are sailing the high seas

    If this is the goal, then you need to concern yourself with your network first and the computer/server second. You need as much operational control over your home network as you can manage: put this traffic in a separate tunnel from all of your normal network traffic and have it pop up on the public network from a different location. You need to own the modem that links you to your provider’s network and the router that is the entry/exit point for your network; you cannot use the combo modem/router gateway device provided by your ISP. You need to segregate the thing doing the sailing on its own network segment that doesn’t have direct access to any of your other devices.

    Beyond that, plan your internal network intentionally and understand how, when, and why each device transmits on the network. You should understand your firewall configuration (on your network boundary, not on your PC). You should also get Pi-hole up and running and start dropping unwanted inbound and outbound traffic.

    OpSec first.
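    One basic sanity check once all that plumbing is in place: confirm the segregated box actually exits onto the public network from a different address than the rest of your devices. A quick sketch using the requests library and the ipify echo service (run it from both network segments and compare):

    ```python
    import requests

    # api.ipify.org just echoes back the public IP it sees you as.
    public_ip = requests.get("https://api.ipify.org", timeout=10).text
    print(f"This host exits onto the public network as: {public_ip}")

    # Run this on the segregated box and on a normal device; if the two
    # addresses match, the tunnel isn't doing its job.
    ```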


  • In comparison with other city-builders, Wandering Village isn’t very deep; there isn’t much in the way of complex systems. The art is nice, though, and it’s fairly relaxing to play.

    Timberborn is a lot more involved, with much more depth to population management and economics, and it’s pretty fun when you get to the level of reshaping the ground to suit your purposes. My favorite challenge is arranging to keep the whole map green through a drought.

    Wandering Village is more like a story or adventure game with city-builder mechanics, so it kind of needs a proper narrative arc.