

Steam port for IBM mainframes confirmed?!??!!!??!!?!!!??
Yes, the top-level domain is still just a domain. I’m not aware of any public Internet services which are reachable from a TLD directly, and it’s strongly discouraged by ICANN, but there isn’t any technical limitation preventing e.g. someone at Verisign from setting up example@com.
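You can poke at this yourself with dig, since a TLD is an ordinary DNS zone and can carry A or MX records at its apex. A quick sketch (the .ai example is from memory and may no longer resolve):

    $ dig +short com. MX   # empty: .com publishes no mail records at its apex
    $ dig +short ai. A     # the .ai apex historically had an A record, so http://ai/ worked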
I wish I’d known this was a thing before I spent 15 minutes searching the manpages and manually upgrading my sources…
I will never touch flatpak for this reason, I’d rather deal with compiling software myself and faffing around with dependency issues than have 8 copies of every system library sitting around.
Framerate above 20 in what with what settings? That’s kinda key information :P
You shouldn’t need to download any graphics drivers. Ubuntu (and pretty much every other distribution) ships with the open-source AMD driver stack by default, which is significantly better and less hassle than the proprietary drivers for pretty much all purposes. If you’re getting video out it’s almost certainly already using the internal GPU, but if you’re unsure you can open a terminal, run sudo apt install mesa-utils, and then run glxinfo -B to double-check what is being used for rendering.
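For example, to pull out just the line that matters (the output below is illustrative; yours should name your actual GPU):

    $ glxinfo -B | grep "renderer string"
    OpenGL renderer string: AMD Radeon Graphics (radeonsi, ...)

If that says llvmpipe instead, you’ve fallen back to software rendering and something is wrong with the driver setup.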
KDE user here, I still use X11 to play old Minecraft versions. LWJGL2 uses xrandr to read (and sometimes modify? wtf) display configurations on Linux, and the last few times I’ve tried it on Wayland it kept screwing the whole desktop up.
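If anyone else hits this, a cheap X11 workaround is to snapshot your layout before launching so you can put it back afterwards (the output name below is a placeholder; check xrandr --query for yours):

    $ xrandr --query > layout-before.txt   # record the current display configuration
    $ xrandr --output HDMI-1 --auto        # after the game exits, reset a scrambled output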
In the future, you can generally solve these sorts of build errors by just installing the development package for whatever library is missing. On Debian-based systems, that would be something along the lines of sudo apt install libecm<tab><tab>, then seeing what appears and choosing the one which looks reasonable with a -dev suffix.
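If tab completion isn’t cooperating, apt can list the candidates directly (the package names in the output line are illustrative):

    $ apt search libecm 2>/dev/null | grep -- -dev
    libecm-dev/stable ...
    $ sudo apt install libecm-dev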
It takes like half a second on my Fairphone 3, and the CPU in this thing is absolute dogshit. I also doubt that the power consumption is particularly significant compared to the overhead of parsing, executing and JIT-compiling the 14MiB of JavaScript frameworks on the actual website.
Nouveau is dead, it’s been replaced with Zink on NVK.
True, but there are also some legitimate applications for 100s of gigabytes of RAM. I’ve been working on a thing for processing historical OpenStreetMap data and it is quite a few orders of magnitude faster to fill the database by loading the 300GiB or so of point data into memory, sorting it in memory, and then partitioning and compressing it into pre-sorted table files which RocksDB can ingest directly without additional processing. I had to get 24x16GiB of RAM in order to do that, though.
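For anyone curious, the ingestion side is RocksDB’s SstFileWriter plus DB::IngestExternalFile; the sort step itself can be sketched with plain GNU sort, assuming a flat text dump and enough RAM (file names and sizes here are made up):

    $ sort -S 300G --parallel=24 -k1,1 nodes.tsv > nodes.sorted.tsv   # sort entirely in memory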
In my experience, nouveau is painfully slow and crashes constantly, to the point of being virtually unusable for anything. The developers seem to agree: over the last couple of months, nouveau’s OpenGL driver has been phased out of Mesa entirely. More recent Mesa versions now implement OpenGL on Nvidia using Zink on NVK, and the result is quite a bit faster and FAR more stable.
If your distribution currently still ships a Mesa version which uses nouveau, I would personally recommend you just stick with the Intel graphics for now.
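Checking which Mesa you’re on is the same glxinfo -B trick as above, since the Mesa version is embedded in the OpenGL version strings:

    $ glxinfo -B | grep -i mesa   # version strings include e.g. "Mesa 24.x" (illustrative)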
Aside from checking the kernel log (sudo dmesg) and system log (sudo journalctl -xe) for any interesting messages, I might suggest simply watching for any processes whose resource usage is abnormally high while the system is running slow. My initial approach would be to run htop (disable “Hide Kernel Threads” and enable “Detailed CPU Time”) and see which processes, if any, are eating up your CPU time. The colored core utilization bars at the top show how much CPU time is being spent on what: gray for disk wait, red for kernel, green for regular user processes, etc. That information will be a good starting point.
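If you just want a quick snapshot to paste into a reply, a plain ps one-liner also works:

    $ ps -eo pid,user,%cpu,%mem,comm --sort=-%cpu | head -n 15   # top CPU consumers right now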
Again, that would be TIFF. TIFF images can be encoded either with strips of rows compressed separately or with rectangular tiles compressed separately, and the independently compressed blocks can be read and decompressed in parallel. I have some >100GiB TIFFs containing elevation maps for entire countries, and my very old laptop can happily zoom and pan around in them with virtually no delay.
There is a reason why TIFF is one of the most popular formats for raster geographic datasets :)
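With GDAL installed you can check how a file is laid out and re-encode it as tiled if it isn’t (file names are placeholders):

    $ gdalinfo dem.tif | grep -i block   # e.g. "Block=256x256" means tiled, "Block=10000x1" means strips
    $ gdal_translate -co TILED=YES -co COMPRESS=DEFLATE -co BIGTIFF=YES dem.tif dem_tiled.tif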
I have tried hosting a Tor relay on a VPS in the past, and it was bottlenecked by the CPU at barely 20MB/s, although to be fair this was without hardware AES. More importantly for you, the server’s IP started getting DDoSed constantly, and a whole bunch of big internet services just immediately blocked the address (the list of relay IPs is public, and many things block every address on that list instead of only exit nodes). So any of your machines are probably at least somewhat up to the task (ideally ones with hardware AES support), but this is definitely not something I’d do on my home network.
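Checking for hardware AES before committing is a one-liner on Linux, since the CPU flag is literally called aes:

    $ grep -m1 -ow aes /proc/cpuinfo   # prints "aes" if the CPU has AES acceleration, nothing otherwise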
I would be very hesitant to run sed on a bunch of files consisting primarily of highly compressed binary data.
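If you do need to mass-edit a mixed tree, letting grep filter out binary files first is a decent safety net (pattern and replacement are placeholders):

    $ grep -rlI 'oldstring' . | xargs -d '\n' sed -i 's/oldstring/newstring/g'   # -I makes grep skip binary files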
Okay, but to be fair you should divide that by at least 2^64, because ISPs are handing out huge blocks left and right (a /64 holds 2^64 addresses, so there are only 2^128 / 2^64 = 2^64 possible /64 prefixes). My home plan with Swisscom gives me a single dynamic IPv4 address and an entire /64 IPv6 prefix, and I’m pretty sure it was a /60 at one point.
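Spelled out, since the exponents are easy to mangle:

    $ python3 -c 'print(2**128 // 2**64, 2**64)'
    18446744073709551616 18446744073709551616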
shit