

How many JS codebases are over 30 years old? Can you name even one?


BASIC? That’s cute.
If you don’t think EE students learn how circuits (including ICs) work, what exactly do you think they’re doing while they’re in school?


OK, but that doesn’t really answer my question, and I’m getting the sense you don’t know how deeply some engineers understand how the hardware works. Plenty of embedded programmers have EE degrees, and can write VHDL just as well (or just as badly) as they can write C and ASM.


You think people writing C(++) for baremetal systems don’t understand how their hardware works?
12 Gbps could be useful if you use port expanders to put dozens of drives on the same port, but without a port expander you’re right that you wouldn’t saturate the 6 Gbps channel.
It’s not even about the size or complexity of the project - the code in question is wrapped in an unsafe{} block, so no one should be expecting it to have the guarantees you’d normally get from rust. It’s ragebait.
LSI cards are generally easy to switch to IT mode. You should be able to find a guide on servethehome.com for your model.
This, but if you already have a SAS card in RAID mode you might be able to flash IT (AKA HBA) mode firmware instead of buying a new card.
Also, SAS cables fit SATA drives, but not vice versa. So no need to buy new cables.


If it’s your feature branch, just revert his commits (or reset the remote branch to your local branch)? Not sure why a feature branch would be shared between devs…
Both decks of the bus follow the same [code] path. That’s a lot more like increasing the buffer size.
Please stop trying to explain it to me.
I never understand people rejecting free feedback on social media posts.


Product owner is responsible for making sure the product meets customer needs. Project manager is responsible for making sure the project is completed on time and meets the requirements that are defined by the PO.
Very high latency, though. Great for some use cases, useless for others.
It wasn’t named by IT people, though. It was named by academics. And it’s not about using computers, it’s about computing. Computer science is older than digital electronics.


Besides RAM, what resources do you think you’re saving? Not CPU cycles or IO ops, because you’re processing the same number of DB queries either way. Not power consumption, since that isn’t affected by RAM utilization. Maybe disk space? But that’s even cheaper than RAM.
Or more importantly: the extent to which you can self-host out of sheer luck and ignorance, as you suggest, is very limited. If you don’t want to engage with a minimum amount of configuration, you might run into security issues (a much broader and more complex subject) long before any of the above has a material impact.
You’re mischaracterizing what I said. My point is that running multiple DB processes on a server isn’t going to have a significant impact on system load, if all other factors are kept constant.


You seem to be obsessed with optimizing one resource at the expense of others. Time is a limited resource too, and even if it only took 5 minutes to configure all of your containers to share a single DB backend (it will take longer than that even if you only have two), you’d only save a few MB of RAM. And since RAM costs roughly $2.50/GB (0.25 cents/MB), your time would have to be worth very little for this to be worthwhile.
On the other hand, if you’re doing it to learn more about computers then it might be worthwhile. This is a community of hobbyists, after all…


Neither, I’m trying to explain that you don’t need to know the implementation details of the software running on your server to back up the entire thing.


Where are you getting that from? The fastest and easiest way to back up any server is a full filesystem backup, especially if you’re using something like zfs or btrfs.
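With ZFS, for example, that’s a snapshot plus a send/receive. A sketch (the pool name `tank`, the snapshot label, and `backup-host`/`backuppool` are all placeholders for your own setup):

```shell
# Take an atomic, recursive snapshot of the whole pool.
# Every database, config file, and container volume on it is captured
# at the same instant -- no per-application backup logic needed.
zfs snapshot -r tank@backup-2024-01-01

# Replicate the snapshot stream to another machine.
# Later runs can use `zfs send -i` for fast incremental transfers.
zfs send -R tank@backup-2024-01-01 | ssh backup-host zfs receive -F backuppool/tank
```

Because the snapshot is atomic at the filesystem level, you don’t need to know or care which services are writing to which files.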


I’m saying this based on real world experience: after a certain point you start to see diminishing returns when optimizing a system, and you’re better off focusing your efforts elsewhere. For most applications, customizing containerized services to share databases is far past that point.
I interpreted their question differently. It sounds like they’re talking about Radarr having to download a movie before they can watch it, whereas streaming services have a “complete” (compared to a new *arr setup) library available to stream instantly.
Some bittorrent clients can start playing a video before it’s done downloading, and prioritize the torrent chunks in the right order so there aren’t any interruptions as long as there are seeders and you have enough bandwidth. But I don’t think Plex or Jellyfin can do it, and I don’t know of any alternatives that can.