Right. GCC -f optimizations are basically like “how hard are we going to try to be clever” and are, I believe, orthogonal to the actual instructions used. Machine-dependent args start with -m, like -march or -mavx etc.
So you’re right that this is a bit arbitrary because the line between the standard lib and the language is blurry, but someone writing Rust is going to expect Vec to work, and it doesn’t even require an extra “use” to get it.
Perhaps a better core example would be operator overloading (or really any place traits are used). When looking at “a + b” in Rust, you have to be aware that, depending on the types involved, it could mean anything.
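To make that concrete, here’s a minimal sketch (the Meters type is a made-up example, not from anything above) of how “a + b” dispatches through the Add trait and can run arbitrary user code:

    use std::ops::Add;

    // A hypothetical newtype; "a + b" on it calls whatever Add impl we provide.
    #[derive(Clone, Copy, Debug)]
    struct Meters(f64);

    impl Add for Meters {
        type Output = Meters;
        fn add(self, other: Meters) -> Meters {
            // This could just as easily log, allocate, or do something
            // entirely unrelated to arithmetic.
            Meters(self.0 + other.0)
        }
    }

    fn main() {
        let a = Meters(1.5);
        let b = Meters(2.0);
        println!("{:?}", a + b); // looks like plain addition, actually calls Meters::add
    }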
Anyway, I love Rust, it just doesn’t have the 1:1 relationship with the assembly output that C basically still has.
Huh weird, these pull requests just magically accepted themselves
Rust can create native binaries, but I wouldn’t call it close to the metal like C. It’s certainly possible to bootstrap from assembly to Rust but, unlike C, not every operation has a direct analog to an assembly operation. For example, Rust needs to be able to dynamically allocate memory for all of its syntax to work.
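As a rough sketch of that last point (just an illustration, assuming ordinary hosted Rust), even the most basic-looking syntax already implies a heap allocator behind it:

    fn main() {
        let xs = vec![1, 2, 3];        // vec! allocates on the heap
        let s = String::from("hello"); // so does String
        let b = Box::new(42);          // and Box
        // In a #![no_std] setting none of these are available until an
        // allocator is wired up, which is part of what bootstrapping from
        // assembly up to "full" Rust has to provide first.
        println!("{} {} {}", xs.len(), s, b);
    }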
For XP, the machine KVM presents may be too new, but that isn’t an issue with plain emulated (non-KVM) QEMU.
TIOBE is weighted toward languages that have existed for a long time, by virtue of counting things like total lines written and the number of skilled engineers, but the speed at which Rust is climbing that list is a better indicator. Also, a lot of the languages above it wouldn’t be appropriate for anything like a DE.
But you’re right, it’s hyped, I just think the hype is real.
This is a weird take. Rust is very popular and is the current heir apparent to C for systems level stuff. It’s a great choice to start a new DE/toolkit.
As for the rest, you’re right the end user doesn’t care about the language their graphical app is in, but the developers fielding their bug reports and making fixes/features sure do.
John Carmack, author of the Doom engine, is a long-time Linux user, and for a while the policy was to open source the idTech engines once they had moved on.
However, Doom was hugely popular on its own before this, and was actually more pivotal for making Windows a gaming platform (over DOS).
The reason it runs everywhere is a combination of its huge popularity, the fact that it’s (now) open source, and its generally low system requirements.
It does that everywhere, even on non-.deb distros.
One thing I’d like to suggest is getting most of their forward-facing apps as Flatpaks and letting them install software that way instead of using the system package manager (even if it has a GUI). This jibes with others suggesting an immutable base system.
Obviously this may be more of a concern for older kids, but my kid started on Linux and did fine… right up until Discord started breaking because it was too old and they didn’t want to tangle with the terminal. Same thing when Minecraft started requiring newer Java versions. Discord and Prism Launcher from Flatpak (along with Proton and Steam now) would have kept them happier with Linux.
As for the internet, routers come with parental controls these days too, which have the added advantage of covering phones (at least while not on mobile data). Setting the internet to be unavailable for certain devices after a certain time on school nights may be a more straightforward route than DE tools.
This isn’t a benchmark of those systems, it’s showing that the code didn’t regress on either hardware set with some anecdotal data. It makes sense they’re not like for like.
I used (u)xterm for like 20 years before discovering that Konsole is solid and beautiful. My whole tiling setup is backed up with KDE apps now.
I’m also glad to see Wayland tools maturing. The hand-wringing about the lack of X forwarding was always FUD: a nonsense reason to cling to the fiction that X works well over a network socket and to justify all the shitty compromises X made to stay compatible with that model.
The Windows scheduler is so stupid that chip manufacturers manipulate the BIOS/ACPI tables to force it to make better decisions (particularly with SMT) rather than wait on MS to fix it.
Linux just shrugs, figures out the thread topology anyway and makes the right decisions regardless.
Nobody running a FOSS third party launcher is an average end user. Also, people routinely add flags to typical games even on Windows (e.g. -skiplauncher)… It’s really not that big a deal.
Really? I use the native Arch Steam package and Proton with no problem. You either use steam-runtime (which uses the built-in Ubuntu-based runtime) or steam-native (which expects Arch packages), and there is a meta package for pulling in the runtime deps. Both have worked for me.
That said, Flatpak has come in clutch for me as well on the Steam Deck, and for things like Prism Launcher (a modded-Minecraft launcher) where you want to juggle multiple Java versions without needing to run archlinux-java when switching between packs.
I dunno about ethos, but I do know Pine can also make false claims. I bought a Rock64 years back, and they touted it as capable of 4K60 video with its integrated GPU; that wasn’t realistic at all. The software stack was still very immature at release, and per their own wiki, years later it still doesn’t work and key parts still haven’t been upstreamed.
Honestly, I use Arch (btw) but after living on Fedora for a while, when I returned I started using podman over AUR for some stuff. If a package is going to pull a bunch of weird dependencies, or I want to easily migrate it later, it’s just so much easier to keep it containerized.
The only thing Samba is really great for is interop with Windows. If that’s not an issue, Dolphin can browse SFTP directly by adding it as a network share (you may need to set up a passwordless key pair to avoid having to log in). SSHFS is a similar option and works even if the client is totally naive (it just looks like any other mounted FS).