Basically, X11/Xorg doesn’t isolate programs from one another. This is horrible for security, since malicious software can read every window and all the input from mice and keyboards just by querying the X server, but it’s also handy for screen readers, streaming, and the like. Wayland, meanwhile, isolates clients from one another by design: each program sees only its own windows and its own input. That prevents, say, a malicious browser tab from reading every keystroke and logging your root password, but it also breaks those things we like to use. To make matters worse, it looks like everyone’s answer for this and similar dilemmas wasn’t “let’s fix Wayland” but “let’s develop an extension to fix Wayland,” and we wound up with that one fucking xkcd standards comic that I won’t bother linking because everyone has seen it a zillion times.
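To make the X11 side concrete, here’s a minimal sketch using python-xlib (assuming it’s installed and you’re running an X session; not anyone’s official tooling, just an illustration): any unprivileged client that connects to the X server can walk the entire window tree and read every other program’s window titles, no permissions asked. Grabbing keystrokes works the same way via the XRecord extension.

```python
# Minimal sketch (assumes python-xlib is installed and $DISPLAY points at
# an X11 session). Any client can enumerate EVERY window the X server
# knows about and read its title -- no special privileges required.
from Xlib import display

def walk(window, depth=0):
    """Recursively print every window's title, including other programs'."""
    name = window.get_wm_name()   # freely readable, whoever owns the window
    if name:
        print("  " * depth + name)
    for child in window.query_tree().children:
        walk(child, depth + 1)

d = display.Display()             # connects to whatever $DISPLAY points at
walk(d.screen().root)             # start from the root window
```

Run that under Wayland (via XWayland) and you’ll only see your own client’s windows, which is exactly the isolation being described.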
ETA: Basically, my (layman’s) understanding is that making screen readers and similar tools work on Wayland is hard because the core Wayland developers seem to have little appetite for solving it in the core protocol themselves. Meanwhile, there are several Wayland compositor implementations (Mutter, KWin, the wlroots family, etc.) that all do things differently, so fixing it via extensions means either writing multiple backends in your program to do the same damn thing (aka a giant pain in the ass) or getting everyone to agree on the same standard implementation (good fucking luck).
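Here’s a hypothetical sketch of what that “multiple backends” tax looks like for, say, a screen-capture tool (the function and the env-var heuristics are mine, a deliberate simplification, but the underlying interfaces are real: Mutter has its own D-Bus screencast API, KWin does its own thing, and wlroots-based compositors go through xdg-desktop-portal and PipeWire):

```python
# Hypothetical illustration: one feature, four code paths, because each
# compositor family exposes screen capture differently.
import os

def pick_capture_backend():
    session = os.environ.get("XDG_SESSION_TYPE", "")
    desktop = os.environ.get("XDG_CURRENT_DESKTOP", "")
    if session == "x11":
        return "xlib"          # one code path: just ask the X server
    if "GNOME" in desktop:
        return "mutter-dbus"   # Mutter's own D-Bus screencast interface
    if "KDE" in desktop:
        return "kwin"          # KWin exposes capture its own way
    return "xdg-portal"        # wlroots & friends: xdg-desktop-portal + PipeWire
```

Every one of those strings corresponds to a backend someone has to write and maintain, which is the whole complaint.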
When IT folks say devs don’t know about hardware, in my experience they’re usually talking about the forest-level overview: how the software being developed integrates into an existing environment, and how to optimize code to fit within the bounds of reality. It may be practical to dump a database directly into memory when it’s a 500 MB testing dataset on your local workstation, but it’s insane to do that with a 500+ GB database in a production environment. Similarly, a program may run fine on an NVMe SSD, but lots of environments even today still depend on arrays of traditional electromechanical hard drives, because they offer the most capacity per dollar and aren’t as prone to suddenly tombstoning the way flash media is when it dies. Then, once the program is in production, it turns out it’s making a bunch of random I/O calls that could have been reordered into sequential requests or batched together into a single transaction, and now it runs like dogshit and drags down every other VM, container, or service sharing that array with it. And that’s before the real dumb shit I’ve read about, like “dev hard coded their local IP address and it breaks in production because of NAT” or “program crashes because it doesn’t account for network latency.”
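To put a number-free face on the batching point, here’s a toy sketch with Python’s sqlite3 (a stand-in for whatever database you’d actually be running; the table and file names are made up): the naive loop commits one implicit transaction per row, meaning one synchronous flush to disk per insert, while the batched version does the same work as a single transaction with one flush at the end. On your workstation’s SSD you can barely tell the difference; on a shared spinning-disk array, everyone can.

```python
# Toy illustration with sqlite3 (stand-in for any real database).
import sqlite3

conn = sqlite3.connect("example.db")
conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER, payload TEXT)")
rows = [(i, f"event-{i}") for i in range(10_000)]

# Naive: each INSERT gets its own transaction, i.e. one synchronous
# flush to disk per row -- thousands of small random writes.
for row in rows:
    conn.execute("INSERT INTO events VALUES (?, ?)", row)
    conn.commit()

# Batched: one transaction, one flush. Same data, a fraction of the I/O.
with conn:
    conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

conn.close()
```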
Game dev is unique because you’re either explicitly targeting a single known platform (consoles) or targeting an extremely wide range of performance specs (PC), and hitting an acceptable level of performance pre-release is (somewhat) mandatory, so this kind of mindfulness is drilled into game devs much more heavily than it is into business software devs, especially in-house ones. Business development is almost entirely focused on “does it run without failing catastrophically,” and almost everything else (performance, security, cleanliness, resource optimization) is given bare lip service at best.