

So you’re telling me that there was a Mac supercomputer in '05?
So, I’d argue that “frontend” and “backend” are the default modes of software engineering these days, and that embedded is a more niche field.
That said, if you’re writing encryption code, you’re doing far more advanced math than you would for backend monitoring and alerting.
You often need to be pretty good at math. But not because you’re “doing math” to write the code.
In real-world software systems, you need to handle monitoring and alerting. To do this properly, you need to understand stats: rolling averages, percentiles, probability distributions, and significance testing. At least at a basic level. Enough to recognize these problems and know where to look when you run into them.
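As a toy example of the kind of thing I mean, here’s a naive percentile over a window of latency samples (all names made up; real monitoring systems use streaming estimators rather than sorting the whole window):

```rust
/// Naive percentile: sort the window and index into it.
/// Fine for a sketch; not how you'd do it at scale.
fn percentile(samples: &mut [f64], p: f64) -> Option<f64> {
    if samples.is_empty() {
        return None;
    }
    samples.sort_by(|a, b| a.total_cmp(b));
    let rank = ((p / 100.0) * (samples.len() - 1) as f64).round() as usize;
    Some(samples[rank])
}

fn main() {
    // Hypothetical latency window, in milliseconds.
    let mut latencies_ms = [12.0, 15.0, 11.0, 250.0, 14.0, 13.0, 12.5, 13.5];
    if let Some(p99) = percentile(&mut latencies_ms, 99.0) {
        println!("p99 latency: {p99} ms"); // alert if this crosses a threshold
    }
}
```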
To be a better coder, you need to understand mathematical logic, proofs, algebra/symbolic logic, etc. in order to reason your way through tricky edge cases.
To do AI/ML, you need to know a shitton of calculus and diff eqs, plus numerical algorithms concepts like numerical stability. This is kind of a niche (but rapidly growing) engineering field.
The same point about AI/ML applies to any other domain where the thing being computed is fundamentally a math or logic solution. This is somewhat common in backend engineering.
I’m not “doing math” with pen and paper at work, but I do use all of these mathematical skills all. the. time.
I am an SRE on an ML serving platform.
Part of it is the community. I really like the OpenWRT community, but it’s harder to engage with them when you run a downstream distribution.
But also I’m a bit of a hacker (in the traditional sense). I like to experiment with custom builds of OpenWRT. (And FWIW, their build system uses the same menuconfig as Linux.)
I love my Turris Omnia!
I got the one with the WiFi 6 card. The cool thing is that you can easily open it up and replace parts.
I run the upstream OpenWRT rather than the customized version by Turris. They are good about submitting patches upstream.
+1
From an order of magnitude perspective, the max is terabytes. No “normal” users are dealing with petabytes. And if you are dealing with petabytes, you’re not using some random poster’s program from reddit.
For a concrete cap, I’d say 256 tebibytes…
TL;DR - We can now control swappiness per cgroup instead of just globally. This is something that userspace oom killers will want to use.
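If I’m reading the change right, it extends the cgroup v2 memory.reclaim interface, so a userspace reclaimer can do proactive reclaim with its own swappiness (cgroup path made up):

```sh
# Hypothetical cgroup path. Reclaim 512M from this cgroup without
# touching swap at all:
echo "512M swappiness=0" > /sys/fs/cgroup/workload.slice/memory.reclaim

# Or bias reclaim as hard as possible toward swapping anon pages
# (swappiness ranges 0-200):
echo "512M swappiness=200" > /sys/fs/cgroup/workload.slice/memory.reclaim
```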
You don’t need to provide root access just because you used GPL code, you just have to follow the GPL.
Well, to follow version 3 of the GPL, you do actually need to provide effective root access.
Specifically, version 3 of the GPL adds language to prevent Tivoization.
It’s not enough to just provide the user with the code. The user is entitled to the freedom to modify that code and to use their modifications.
In other words, in addition to providing access to the source code, you must actually provide a mechanism to allow the user to change the code on the device.
The name “Tivoization” comes from the practice of the company TiVo, which sold set-top boxes based on GPL code, but employed DRM to prevent the user from applying custom patches. V3 of the GPL remedies this bug.
For Zulip, I’ve only used it on the web. Apparently they have iOS, Android, Desktop, and Terminal clients.
For Matrix, there are many clients on all platforms, but none have ever stood out to me. Element is the official client, and it’s… fine I guess.
I love this, especially the criticism of the FSF.
For comms, Zulip seems OK. I would really like Matrix to take off, but I honestly don’t really like any of the clients.
Maybe.
Linux won because it worked. Hurd was stuck in research and development hell. They were never able to catch up.
However, Linus’s kernel was more elaborate than GNU Hurd, so it was incorporated.
Quite the opposite.
GNU Hurd was a microkernel, built on a lot of cutting-edge research and requiring a lot of additional complexity in userspace. That complexity also made it very difficult to get good performance.
Linux, on the other hand, was just a bog standard Unix monolithic kernel. Once they got a libc working on it, most existing Unix userspace, including the GNU userspace, was easy to port.
Linux won because it was simple, not elaborate.
You talk about “non-absolutist,” but this thread got started because the parent comment said “literally never.”
I am literally making the point that the absolutist take is bad, and that there are good reasons to call unwrap in prod code.
smdh
Fair. But unwrap versus expect isn’t really the point. Sure, one prints a better error message with your backtrace, but IMO that’s not what I’m looking for in a backtrace anyway. I don’t mind plain unwraps or assertions without messages.
From my experience, when people say “don’t unwrap in production code” they really mean “don’t call panic! in production code.” And that’s a bad take.
Annotating unreachable branches with a panic is the right thing to do; mucking up your interfaces to propagate errors that can’t actually happen is the wrong thing to do.
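As a toy sketch of what I mean:

```rust
// n % 2 can only ever be 0 or 1, but the compiler can't prove it.
// Returning a Result here would force every caller to handle an
// error that cannot actually happen; the panic annotation is honest.
fn parity(n: u32) -> &'static str {
    match n % 2 {
        0 => "even",
        1 => "odd",
        _ => unreachable!("n % 2 is always 0 or 1"),
    }
}
```

The panic makes the invariant visible instead of laundering it through every caller’s error handling.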
Unwrap should literally never appear in production code
Unwrap comes up all the time in the standard library.
For example, if you know you’re popping from a non-empty vector, unwrap is totally the right tool for the job. There are tons of circumstances where you know at higher levels that edge cases defended against at lower levels with Option cannot occur.
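A minimal sketch of that case:

```rust
fn main() {
    let mut stack = vec![3, 2, 1];
    while !stack.is_empty() {
        // We just checked that the stack is non-empty, so the None
        // branch of pop() is unreachable; unwrap documents that.
        let top = stack.pop().unwrap();
        println!("{top}");
    }
}
```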
orlp invented PDQSort and Glidesort. He collaborated with Voultapher on Driftsort.
Driftsort is like a successor to Glidesort.
Glidesort had some issues that prevented it from being merged into std, and which are addressed in Driftsort. IIRC it had something to do with codegen bloat.
Zsh
No plugin manager. Zsh has a builtin plugin system (autoload) and ships with most things you want (like Git integration).
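For example, the Git integration is just the built-in vcs_info module, no plugin needed (the prompt format is just my taste):

```zsh
# Lazy-load the built-in VCS integration and show the branch in the prompt.
autoload -Uz vcs_info
zstyle ':vcs_info:git:*' formats ' (%b)'
precmd() { vcs_info }
setopt prompt_subst
PROMPT='%~${vcs_info_msg_0_} %# '
```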
My config: http://github.com/cbarrick/dotfiles
Exactly.
My take is that the issue isn’t with tmpfiles.d, but rather the decision to use it for creating home directories.
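tmpfiles.d is great at what it was designed for: declaring volatile files and directories, like a runtime dir for a service (service name made up):

```
# /etc/tmpfiles.d/myservice.conf (hypothetical)
#Type  Path            Mode  User    Group   Age
d      /run/myservice  0755  myuser  myuser  -
```

Declaring something like /home in that same format is where it gets scary, because now a cleanup tool believes it owns your home directories.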
Ha, I see.
Yeah, sarcasm over text forums is sometimes difficult to pick up on.