I wouldn’t be so sure about the lifetime - spinning up and spinning down put far more stress on the drive components than simply spinning at a constant rate.
No, it was compiled by the team which maintains my distro’s package repository, and cryptographically verified to have come from them by my package manager. That’s a lot different than downloading some random executables I pulled from a website I’d never heard of before and immediately running them as root.
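(For what it’s worth, the most you can realistically do yourself with a random download is check it against whatever checksum the site publishes; a rough Python sketch, with the file name and digest obviously being placeholders. The package manager does the equivalent automatically and additionally verifies the repo maintainers’ signature, which is the part a bare checksum can’t give you.)

```python
import hashlib

# Placeholders: the file you grabbed off some website and the SHA-256 digest
# that same website publishes. Note that a matching checksum only proves the
# download wasn't corrupted or swapped in transit relative to that page - it
# says nothing about who actually built the binary.
DOWNLOADED_FILE = "some-random-tool"
EXPECTED_SHA256 = "<digest copied from the download page>"

with open(DOWNLOADED_FILE, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

if digest != EXPECTED_SHA256:
    raise SystemExit("checksum mismatch - definitely do not run this as root")
print("checksum matches (authorship still unverified)")
```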
…are you trying to imply that all neurodivergent people are LGBT+?
+1 for Debian. If you just want a stable, reliable system and don’t care about the latest and greatest features, there is no better choice
Downside: it’s entirely manual and not scalable whatsoever.
no that just sounds like a bug
Personally I’d be somewhat nervous using dd to edit parts of a text file, but you do you :)
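For anyone curious, that kind of in-place patching boils down to a blind overwrite at a byte offset; roughly this in Python terms (file name, offset, and contents made up for illustration):

```python
# Rough equivalent of: dd if=patch.bin of=config.txt bs=1 seek=128 conv=notrunc
# File name and offset are made up for illustration.
OFFSET = 128
NEW_BYTES = b"replacement text"

with open("config.txt", "r+b") as f:   # r+b: modify in place, no truncation
    f.seek(OFFSET)
    f.write(NEW_BYTES)                 # overwrites exactly len(NEW_BYTES) bytes
# There is no insert or delete, only overwrite - if the replacement isn't
# exactly the same length as the text it covers, the file is silently mangled.
```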
Not sure why you’re getting downvoted. As a graphics programmer, I can say AMD’s proprietary drivers are unquestionably the buggiest I have to work with on a regular basis. Seemingly innocent stuff which works perfectly fine on every other vendor (and on the same GPU using the open-source drivers) will cause the proprietary drivers to break horribly or run slower by multiple orders of magnitude.
they’re still pretty RISC, using fixed-width instructions and fairly simple encoding. certainly a hell of a lot simpler than the mess that is x86-64
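rough illustration (toy Python, not a real decoder): with fixed 32-bit instructions, finding instruction boundaries is just an offset and a stride, whereas on x86-64 you have to partially decode each instruction (prefixes, opcode, ModRM, SIB, displacement, immediates) before you even know where the next one starts

```python
import struct

def iter_fixed_width(code: bytes, width: int = 4):
    """Toy walk over a fixed-width instruction stream: every instruction is
    `width` bytes, so boundaries fall out of simple arithmetic."""
    for off in range(0, len(code) // width * width, width):
        (word,) = struct.unpack_from("<I", code, off)
        yield off, word

# x86-64 offers no such shortcut: instruction length varies from 1 to 15
# bytes and depends on the bytes themselves, so a decoder has to be involved
# just to step to the next instruction.
for off, word in iter_fixed_width(bytes(16)):
    print(f"{off:#06x}: {word:#010x}")
```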
Michelangelo’s David is a well-known marble statue which was carved using a chisel.
My point was more that the SSD will likely have lower latency than an Ethernet link in any case, as you’ve got the extra delay of data having to traverse both the local and remote network stacks, as well as any switches that may be in the way. Additionally, in order to deal with that bandwidth you’ll need to kit out not only the local machine but also the remote one with expensive 400GbE hardware and transceivers, plus switches. And in order to actually store anything, the remote machine will also have to have either a ludicrous amount of RAM (resulting in a setup which is vastly more complex and expensive than the original RAIDed SSDs while offering presumably similar performance) or RAIDed SSD storage (which would put us right back at square one, but with extra latency). Maybe there’s something I’m missing here, but I fail to see how this could possibly be set up in a way which outperforms locally attached swap space.
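Some back-of-the-envelope numbers to illustrate (these are all assumed ballpark figures, not measurements, so take them with a grain of salt):

```python
# Assumed ballpark latencies in microseconds - not measurements.
nvme_read_us      = 100   # local NVMe random read
nic_plus_stack_us = 5     # NIC + kernel network stack, per end
switch_hop_us     = 1     # per switch in the path
remote_ssd_us     = 100   # the far end still has to hit its own SSDs

remote_swap_us = 2 * nic_plus_stack_us + 2 * switch_hop_us + remote_ssd_us
print(f"local NVMe swap : ~{nvme_read_us} us")
print(f"remote swap     : ~{remote_swap_us} us")
# Even with generous numbers the remote path only breaks even if the far end
# serves pages straight from RAM - which is the expensive option above.
```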
Well, assuming you’ve already gone through the effort to write a custom kernel module to offload your swap pages to Google Drive, it doesn’t seem like that much of a stretch to have it encrypt the data before transmitting it.
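Something like this for the encryption half, anyway; a minimal sketch using AES-GCM from Python’s cryptography package, where the 4 KiB page and the page index are stand-ins for whatever the hypothetical module would actually shuffle around:

```python
# Minimal sketch: encrypt a 4 KiB "swap page" before shipping it off; the
# resulting blob is what would get uploaded, the key never leaves the machine.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # kept locally, never uploaded
aead = AESGCM(key)

def encrypt_page(page: bytes, page_index: int) -> bytes:
    nonce = os.urandom(12)                               # unique per write
    ct = aead.encrypt(nonce, page, page_index.to_bytes(8, "little"))
    return nonce + ct                                    # store nonce alongside

def decrypt_page(blob: bytes, page_index: int) -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return aead.decrypt(nonce, ct, page_index.to_bytes(8, "little"))

page = os.urandom(4096)                                  # pretend swap page
assert decrypt_page(encrypt_page(page, 42), 42) == page
```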
Yeah, although the neat part is that you can configure how much replication it uses on a per-file basis: for example, you can set your personal photos to be replicated three times, but have a tmp directory with no replication at all on the same filesystem.
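(Assuming we’re talking about bcachefs here.) If I remember right those per-file settings are exposed as extended attributes; the attribute name in the sketch below is from memory and might not be exact, so check the docs rather than trusting me:

```python
# Sketch: per-file replication via extended attributes (Linux-only os.setxattr).
# The "bcachefs.data_replicas" attribute name is from memory - verify it
# against the bcachefs documentation before relying on it.
import os

os.setxattr("Photos", "bcachefs.data_replicas", b"3")  # keep three copies
os.setxattr("tmp",    "bcachefs.data_replicas", b"1")  # just the one copy
print(os.getxattr("Photos", "bcachefs.data_replicas"))
```

Setting it on a directory applies to files created under it afterwards, as far as I remember.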
What exactly are you referring to? It seems to me to be pretty competitive with both ZFS and btrfs, in terms of supported features. It also has a lot of unique stuff, like being able to set drives/redundancy level/parity level/cache policy (among other things) per-directory or per-file, which I don’t think any of the other mainstream CoW filesystems can do.
The recommendation for ECC memory is simply because you can’t be totally sure stuff won’t go corrupt with only the safety measures of a checksummed CoW filesystem; if data can silently be corrupted in memory, it can still go bad before getting written out to disk or while sitting in the read cache. I wouldn’t really say that’s a downside of those filesystems; rather, it’s simply a requirement if you really care about preventing data corruption. Even without ECC memory they’re still far less susceptible to data loss than conventional filesystems.
I considered a KVM or something similar, but I still need access to the host machine in parallel (ideally side-by-side so I can step through the code running in the guest from a debugger in my dev environment on the host). I’ve already got a multi-monitor setup, so dedicating one of them to a VM while testing stuff isn’t too much of a big deal - I just have to keep track of whether or not my hands are on separate keyboard+mouse for the guest :)
Functionally it’s pretty solid (I use it everywhere, from portable drives to my NAS, and have yet to hit any breaking issues), but I’ve seen a number of complaints from devs over the years about how hopelessly convoluted and messy the code is.
I do this for testing graphics code on different OS/GPU combos - I have an AMD and Nvidia GPU (hoping to add an Intel one eventually) which can each be passed through to Windows or Linux VMs as needed. It works like a charm, with the only minor issue being that I have to use separate monitors for each because I can’t seem to figure out how to get the GPU output to be forwarded to the virt-manager virtual console window.
I can assure you that before I set up Cloudflare, I was getting hit by SYN floods filling up the entire bandwidth of my home DSL2 connection multiple times a week.