Did pretty much the same with a new server recently - spent ages debugging why it didn’t find the SAS disks. Turns out, disks like to have power connected, and no amount of debugging at the software level will help you with that.
I was referring to work setups with the overengineering - if I had a cent for every time I had to argue with somebody at work not to make things more complex than we actually need, I’d have retired a long time ago.
Unless you are gunning for a job in infrastructure you don’t need to go into kubernetes or terraform or anything like that.
Even then, knowing when not to use k8s or similar things is often more valuable than having deep knowledge of them - a lot of the stuff where I see k8s or similar used doesn’t have the uptime requirements to warrant the complexity. If I have something that just should be up during working hours, and I have reliable monitoring plus the ability to re-deploy it via ansible within 10 minutes if it goes poof, maybe putting a few additional layers that can blow up in between isn’t the best idea.
Everything is deployed via ansible - including nameservices. So I already have the description of my infra in ansible, and the rest is just a matter of writing scripts to pull it into a more readable form, and maybe adding a few comment labels that also get extracted for easily forgettable admin URLs.
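As a rough sketch of what I mean by extracting comment labels - the `# admin-url:` convention and the `ansible/` directory here are made up for illustration, not anything ansible itself defines:

```python
#!/usr/bin/env python3
# Sketch: pull "admin URL" comment labels out of an ansible tree.
# The "# admin-url: <name> <url>" convention and the "ansible/"
# directory layout are assumptions - use whatever labels fit
# your own playbooks.
import re
from pathlib import Path

LABEL = re.compile(r"#\s*admin-url:\s*(?P<name>\S+)\s+(?P<url>\S+)")

def labels(root):
    """Yield (name, url, location) for every labelled line under root."""
    for path in Path(root).rglob("*.yml"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            match = LABEL.search(line)
            if match:
                yield match["name"], match["url"], f"{path}:{lineno}"

if __name__ == "__main__":
    for name, url, where in sorted(labels("ansible")):
        print(f"{name:25} {url:45} {where}")
```

Run that over the repo and you get a cheat sheet of all the admin interfaces without maintaining a separate document that would inevitably go stale.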
Shitty companies did it like that back then - and shitty companies still don’t properly utilize the easy tools they have available for controlled deployment nowadays. So nothing really changed, just that the number of people (and with that, the number of morons) skyrocketed.
I had automated builds out of CVS with deployment to staging, and the option to deploy to production after tests, over 15 years ago.
Had to look that lawyer bit up as it just sounded too much like Gravenreuth - and indeed it was.
I nowadays manage my private stuff with the ansible scripts I develop for work - so my own stuff is mostly a development environment for work, and therefore doesn’t need to be done on private time.
Generally yes, but you still need hardware support (mostly kernel and mesa). They do upstream their work - but currently you generally want packages built from their git for that.
Also, the installer is very specific to Mac hardware.
A lot of the Zen-based APUs don’t support ECC. The next question is whether it supports registered or unregistered modules - everything up to Threadripper is unregistered (though I think some of the Pro parts are registered), while Epycs are registered.
That makes a huge difference in how much RAM you can add, and how much you pay for it - a typical consumer board tops out at four unbuffered DIMMs for roughly 128 GB, while a single-socket Epyc board with registered modules can take terabytes.
Is it a ‘death by quantity’ thing?
Pretty much that - those companies rely on open projects to sort it out for them, so they’re pretty much scraping open databases and selling the good data they pull from there. That’s why they were complaining about the kernel stuff - the required info was already there, you just needed to put the effort in, so they were asking for CVEs. Now they got their CVEs - but to profit from them they’d still need to put in the same effort as they would have without CVEs in place.
Short version: a bunch of shitty companies have as their business model selling open databases to companies that want to track security vulnerabilities - at pretty much zero effort to themselves. So they’ve been bugging the kernel folks to start issuing CVEs and do impact analysis so they have more to sell - and the kernel folks just went “it is the kernel, everything is critical”.
tl;dr: this is pretty much an elaborate “go fuck yourself” towards shady ‘security’ companies.
It starts with them only doing initial talks about buying their hardware for a project with you if there’s a 7-figure payment on the table, and it doesn’t improve from there.
It has been a while since I touched ssmtp, so take what I’m saying with a grain of salt.
The problem with ssmtp and related tools when I was testing them was the behaviour in error conditions - due to the lack of any kind of spool they don’t fail very gracefully, and if the sending software doesn’t expect that and implement a spool itself (which it typically has no reason to, as pretty much the only situation where something like sendmail would fail is one where it also wouldn’t be able to write a spool), this can very easily lead to lost mail.
I already had a working SMTP client capable of fishing mails out of a Maildir at that point, so I ended up just writing a simple sendmail program that throws whatever it receives into a Maildir, plus a cronjob to send it onwards. This might be the most minimalistic setup for reliably sending out mail (and I’m using it on all my computers behind Emacs to do so) - but it is badly documented, so if you care about reliability postfix might be a better choice, or if you don’t, just go with ssmtp or similar. Or if you do want to dig into it, message me and I’ll help make things more user-friendly.
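For a rough idea of the shape of that sendmail shim - the queue path here is made up, and a real version would also need to deal with the usual sendmail flags (-t and friends), which I’m skipping:

```python
#!/usr/bin/env python3
# Sketch of a minimal sendmail replacement: read a complete message
# from stdin and drop it into a Maildir acting as an outgoing queue.
# /var/spool/outgoing is an assumed path; actual delivery happens
# later, from a cronjob walking the queue and speaking SMTP.
import sys
import mailbox

QUEUE = "/var/spool/outgoing"  # hypothetical queue location

def main():
    message = sys.stdin.buffer.read()
    queue = mailbox.Maildir(QUEUE, create=True)
    # Maildir delivery is a write to tmp/ plus a rename into new/,
    # so a crash never leaves a half-written mail in the queue.
    queue.add(message)
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The cronjob side is then just iterating over that Maildir and handing each message to an SMTP client (smtplib would do), removing it from the queue only once the server has accepted it.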
I see you’re not working in any industry having to deal with Qualcomm.
He probably needs a co-maintainer. We could select one of us and then try pressuring him into accepting that.
All my software can be configured using dedicated configuration files (.c)
Because it does JBOD if the controller supports it. Pretty much none of the controllers you’ll find in consumer hardware support that.
JBOD relies on an optional SATA extension, which most of your controllers won’t have.
That leaves you with RAID in the controller - which is a bad idea, as you don’t have much control over what is going on, and recovery if it fails will possibly be messy.
I nowadays typically have three outcomes in similar situations:
There are extensions for that. They are worse than they used to be because Firefox didn’t provide APIs to do that properly, about 10 fucking years after they dropped the old APIs. A lot of other feature requests from back then are still open, often filed years before they went through with dropping the old APIs. The best way of doing custom keyboard shortcuts in Firefox is still injecting Javascript into each page, with all the shortcomings that has. Usability of Firefox is way worse nowadays than it was 10 years ago - and I do understand (and agree with) the decision to dump the legacy APIs, but you can’t just break functionality lots of people use and then not provide APIs to fix that for over a decade.
I try other browsers now and then, but every single one is a dumpster fire. At least the Firefox dumpster fire is a bit less out of control - but that’s the most positive thing I can say about it nowadays.