

That’s ok we’ll just refactor it with AI.
Well shit, yeah, that “MUST be accepted and parsed” is pretty explicit. That sucks. What is even the point of revising standards? How the fuck do we ever get rid of some of these bad ideas?
#18 seems really bad, like no-one-has-ever-sanity-checked-this bad.
Yeah I feel like the correct answer for anything obsoleted by a more recent RFC should be “Invalid”.
Installing an OS will always be a hurdle. Most people don’t want to spend that much time thinking about how their computer works, they just want to turn it on and have it work. For more people to use Linux, it will have to be preinstalled.
After that, it needs to be stable. If the audio stops working, most people don’t think “maybe I need to roll back my driver” or “maybe ALSA has muted my output channel for some reason”, they just think “my computer is broken”. These kinds of problems have to go away, or at least be reduced to <1% of users.
Also, very few people are going to have any patience for any kind of difficulty related to “oh you have to add a different repository to your package manager to play common media formats” or whatever (e.g. AUR, Ubuntu Multiverse, etc.). Normal people spend exactly 0 time considering what codecs they might need to install to listen to some music, or where they might need to get those codecs from, or whether those codecs are open or proprietary or freeware or whatever.
Both AD and GPO are fucking incredible pieces of software.
AD is really the only way to manage an organization with thousands of endpoints and users.
I have some hope that someone in the EU will develop a competing product now that they’re pushing to get away from Microsoft, but it doesn’t exist yet.
> We asked 100+ AI models to write code.
> The Results: AI-generated Code

no shit son

> That Works

OK this part is surprising, probably headline-worthy

> But Isn’t Safe

Surprising literally no one with any sense.
Realistically no organization has so many endpoints that they need IPv6 on their internal networks. There’s no reason to deal with more complicated addressing schemes except on the public Internet. Only the border devices should be using IPv6.
Hopefully, if an organization has remote endpoints connecting to the internal network over the Internet, they’re doing so through a VPN and can still just be assigned IPv4 addresses on dedicated VLANs when they connect.
“We have investigated ourselves and found no problem.”
This is an increasing problem and I’m not sure how the open source community is going to deal with it. It’s been a big problem with NPM packages and also Python libraries over the past five years. There’s a bunch of malicious typo-squatting stuff in many package repositories (say you want libcurl but you type libcrul: congratulations, it’s probably there, it’ll probably install libcurl for you, and it’ll bring a fun friend along).
Now with AI slop code getting submitted, it’s not really possible to check every new package upload. And who’s going to volunteer for that work?
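To give a sense of what automated screening might look like, here’s a minimal sketch of a near-name check a repo reviewer could run against new uploads. The `KNOWN` list and the 0.85 similarity threshold are illustrative assumptions, not any real repository’s policy; it just uses Python’s stdlib `difflib`.

```python
# Sketch: flag new package names suspiciously close to well-known packages.
# KNOWN list and threshold are made-up examples, not a real tool's config.
from difflib import SequenceMatcher

KNOWN = ["libcurl", "requests", "numpy", "openssl"]

def possible_typosquat(name, threshold=0.85):
    """Return the known package this name may be squatting on, or None."""
    for known in KNOWN:
        if name == known:
            return None  # exact match: it's the real package
        if SequenceMatcher(None, name, known).ratio() >= threshold:
            return known
    return None

print(possible_typosquat("libcrul"))  # close to libcurl
print(possible_typosquat("libcurl"))  # the real thing
```

Of course, a simple edit-distance check only catches the lazy squats; it does nothing about a plausibly-named package whose code is malicious, which is the part that actually needs human review.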
Actual Budget is an open-source envelope-style budgeting tool similar to YNAB. It has a self-hostable syncing service so that you can manage your budget across multiple devices.
The reason you might want to do this is that it’s probably easier to do full account review sitting at your computer, but you might want to track expenses/receipts on your smartphone while you’re away from home.
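For reference, the sync server can be run in a container. This is a hedged sketch from memory of the Actual docs; the image name, port, and data path should be verified against the current documentation before use.

```shell
# Hypothetical deployment sketch -- check the Actual Budget docs for the
# current image name, port, and volume layout before relying on this.
docker run -d \
  --name actual_server \
  -p 5006:5006 \
  -v /srv/actual-data:/data \
  actualbudget/actual-server:latest
```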
Would this mini PC be a good home server?
For what purpose?
It’s an embellishment on the above monkey’s paw comment, not actual technical information.
It doesn’t check dependencies.
You have 356 different copies of libcurl installed on your system.
Nginx, Apache and Lighttpd are all running in the background and collectively using the same port, somehow.
Wayland and X are both running with multiple sessions but none of them are on the default TTY.
If you’re just doing a quick config edit, nano is significantly easier to use and is also present in most distros.
Vi/Vim is useful as a customizable dev environment, but these days there are better, more feature-rich development tools - unless you are specifically doing a lot of development in a GUI-free system, for some reason.
Encrypting the connection is good, it means that no one should be able to capture the data and read it - but my concern is more about the holes in the network boundary you have to create to establish the connection.
My point of view is, that’s not something you want happening automatically, unless you manually configured it to do that yourself and you know exactly how it works, what it connects to and how it authenticates (and preferably have some kind of inbound/outbound traffic monitoring for that connection).
Ah, just one question - is your current Syncthing use internal to your home network, or does it sync remotely?
Because if you’re just having your mobile devices sync files when they get on your home wifi, it’s reasonably safe for that to be fire-and-forget, but if you’re syncing from public networks into your private one, that really should require some more specific configuration and active control.
My main reasons are sailing the high seas
If this is the goal, then you need to concern yourself with your network first and the computer/server second. You need as much operational control over your home network as you can manage:

- Put this traffic in a separate tunnel from all of your normal network traffic, so it pops up on the public network from a different location.
- Own the modem that links you to your provider’s network, and the router that is the entry/exit point for your network. You can not use the combo modem/router gateway device provided by your ISP.
- Segregate the thing doing the sailing on its own network segment that doesn’t have direct access to any of your other devices.
- Plan your internal network intentionally and understand how, when, and why each device transmits on the network.
- Understand your firewall configuration (on your network boundary, not on your PC).
- Get PiHole up and running and start dropping unwanted inbound and outbound traffic.
OpSec first.
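As a rough illustration of the segregation point, here’s an nftables fragment along those lines. The interface names (`wan0`, `lan0`, `vlan10`) and the tunnel port (51820, WireGuard’s default) are placeholder assumptions; adapt them to your actual topology.

```shell
# Illustrative nftables rules -- interface names and port are made up.
# Default-drop forwarding; LAN may reach the internet, the isolated VLAN
# may only reach its tunnel endpoint and never the LAN.
nft add table inet filter
nft add chain inet filter forward '{ type filter hook forward priority 0; policy drop; }'
nft add rule inet filter forward ct state established,related accept
nft add rule inet filter forward iifname "lan0" oifname "wan0" accept
nft add rule inet filter forward iifname "vlan10" oifname "wan0" udp dport 51820 accept
# belt-and-braces: explicit drop, even though the chain policy already drops
nft add rule inet filter forward iifname "vlan10" oifname "lan0" drop
```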
In comparison with other city-builders, Wandering Village isn’t very deep. There isn’t much in the way of complex systems. The art is nice, though, and it’s fairly relaxing to play.
Timberborn is a lot more involved and there is a lot more depth to population management and economics, and it’s pretty fun when you get to the level of reshaping the ground to suit your purposes. My favorite challenge is to arrange to keep the whole map green through a drought.
Wandering Village is more like a story or adventure game with city-builder mechanics, so it kind of needs a proper narrative arc.
They should be powered on if you want to retain data on them long-term. The controller should automatically check physical integrity and disable bad sections as needed.
I’m not sure if just connecting them to power would be enough for the controller to run error correction, or if they need to be connected to a computer. That might be model specific.
What server OS are you using? Are you already using some SSDs for cache drives?
Any backup is better than no backup, but SSDs are really not a good choice for long-term cold storage. You’ll probably get tired of manually plugging them in to check integrity and update the backups pretty fast.