You mount them to /proc for extra spiciness
WTF, for the past 25 years, I thought /usr was short for /user, partially because of FreeBSD’s preference for having user homes in /usr/home/*
Also, fuck /media. All of my (middle-aged) homies hate /media
By promoting the distros that have this as a goal, such as Mint.
I would suggest Ubuntu in this category, but… eww…
Right around the time you posted that, I found this in my dmesg:
[ 715.744332] e1000e 0000:00:1f.6: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[ 715.965683] e1000e 0000:00:1f.6: The NVM Checksum Is Not Valid
[ 716.008541] e1000e: probe of 0000:00:1f.6 failed with error -5
Just for the record, I compared modinfo against lspci, and the PCI ID matches, so the driver should work. Is it possible to ignore the NVM checksum and try anyway? Every tool I can find that communicates with the EEPROM on a hardware level is made for MS-DOS.
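For reference, this is roughly how I did the comparison (the [8086:15d8] device ID below is just an example; match whatever your lspci actually prints):

lspci -nn -s 00:1f.6
# -> 00:1f.6 Ethernet controller [0200]: Intel Corporation ... [8086:15d8]
modinfo e1000e | grep -i 15d8
# the alias lines look like pci:v00008086d000015D8...; a hit means the driver claims this device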
Derp, I don’t think I ever did a modprobe. Anyway, I did an rmmod, as I found out that there’s a newer version out, and I’m currently building it.
UPDATE: Newer version built, installed, and loaded.
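For anyone finding this later, the out-of-tree Intel driver builds roughly like this (the version number is an example):

tar xf e1000e-3.8.7.tar.gz
cd e1000e-3.8.7/src
sudo make install                         # builds against the running kernel’s headers
sudo rmmod e1000e && sudo modprobe e1000e
dmesg | tail                              # check whether the probe succeeds this time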
I don’t know, we usually buy in bulk. I tried digging up the invoice from when we bought a pallet of 15 TB tapes, but I can’t seem to find it.
There are also different tape types depending on which capabilities you need, which of course affects the price as well. We use a few variations on the IBM 3592 tape, but most of them are WORM, and in a tape format that “anyone” can read.
Ask geophysicists or people dealing with geophysical data. Storing on tape is pretty much industry standard, and drives are upgraded now and then for better speed and data density.
Source: We sold off a bunch of TS1150 drives a couple of years ago after upgrading.
The ones making the comment are on your block list, either personally, or by instance.
Seconding this. I work with a lot of geophysical data, and there’s a reason why our library is stored on LTO.
Once you have the infrastructure and supply chain for it, there’s simply no cheaper long-term storage per TB. The drives can be pricey, depending on which you use, but the standard IBM tapes are pretty cheap.
I have four identical machines, each with the following set of disks:
2x NVMe
2x 2.5" SSD
4x 3.5" HDD in hardware RAID6
Now, the device node assignment for the SSDs and the RAID seems random. They populate /dev/sda through /dev/sdc, but which is which varies between the machines.
Is it possible to somehow reassign the device nodes so that I have the RAID show up as sdc on all machines?
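From what I can tell, you can’t force the kernel’s sdX lettering itself, since those letters are handed out in probe order, but udev can give each device a stable alias that works just as well in fstab and scripts. A minimal sketch, assuming the RAID volume exposes a serial to udev (the rule file name, serial, and symlink name are all made up):

udevadm info --query=property --name=/dev/sda | grep ID_SERIAL
# put the serial into /etc/udev/rules.d/60-raid-alias.rules:
# KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_SERIAL}=="LSI_MR9361_0123456789", SYMLINK+="raidvol"
sudo udevadm control --reload && sudo udevadm trigger
ls -l /dev/raidvol                        # same name on all four machines, regardless of sdX order

Or skip the custom rule entirely and point fstab at the existing /dev/disk/by-id/ symlinks.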
I was thinking along the same lines. Use the online version available via portal.office.com to convert everything to something more FOSS-friendly.
Not sure if access is free, though.
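If it isn’t, LibreOffice can do the same conversion locally in batch mode, something like this (target formats and paths are just examples):

soffice --headless --convert-to odt --outdir ./converted *.docx
soffice --headless --convert-to ods --outdir ./converted *.xlsx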
I actually like that name, but it might be too close to the original for trademark comfort.
I for one really appreciate the effort of supporting non-AT drives despite the initial skepticism.
The alternative was 5 Mbit/s VSAT. 4G was a luxury at that time.
I don’t remember how many files, but typically these geophysical recordings clock in at 10-30 GB each. What I do remember, though, was the total transfer size: 4 TB. It was a bunch of .segd files stored in a server cluster mounted in a shipping container, for easy transport and lifting onboard survey ships. Some geophysics processors needed the data on the other side of the world, and nobody was physically heading in the same direction as the transfer, so we figured it would be easier to just rsync it over 4G. It took a little over a week to transfer.
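For the curious, what made a week-long 4G transfer survivable was rsync’s resume handling; roughly like this (paths and host are made up):

rsync -aP --partial-dir=.rsync-partial /cluster/segd/ user@procsite:/incoming/segd/
# -a preserves metadata, -P shows progress and keeps partial files,
# --partial-dir lets a dropped connection resume mid-file instead of restarting a 30 GB .segd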
Normally when we have transfers of a substantial size going far, we ship them on LTO. For short-distance transfers we usually run a fiber, and I have no idea how big the largest transfer job has been that way. Must be in the hundreds of TB. The entire cluster is 1.2 PB, but I can’t recall ever having to transfer everything in one go, as the receiving end usually has a lot less space.
Counterpoint: https://xkcd.com/910/
Also, both hostname and username might follow a company naming scheme that you want to anonymize.
For me this typically involves doing a search & replace for my username, hostname, and IP address(es).
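Something along these lines (the replacement tokens and file names are arbitrary):

sed -e "s/$USER/user/g" \
    -e "s/$(hostname)/host/g" \
    -e "s/[0-9]\{1,3\}\(\.[0-9]\{1,3\}\)\{3\}/x.x.x.x/g" session.log > scrubbed.log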
Not necessarily. It can be, but it all depends on which nodes you get when you connect. If I end up on slow nodes I usually just reconnect, and it’s fine.
In my book, WSL and VMs share the same downside: Linux only gets an abstraction of the hardware rather than the hardware itself.
Linux really shines when it has full access to the actual hardware, as opposed to asking its environment nicely if it’s allowed to do something.
For example, I routinely need to change my IP address to talk to specific networks and hosts, but having to step over the virtualisation or interpretation layer to do so is just another hurdle, which removes the advantage of running Linux in the first place.
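On bare metal that’s a single command either way (address and interface name are just examples):

sudo ip addr add 192.168.50.10/24 dev enp3s0   # add an address on the relevant network
sudo ip addr del 192.168.50.10/24 dev enp3s0   # and drop it again when done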
Sure, VMs and dual booting have their uses, but the same uses can be served by an actual Linux install while being infinitely more powerful.
I played around with WSL for a while, but you notice really quickly that it is not the real thing. I’ve used VirtualBox for some use cases, but that too feels limiting, as all of the hardware you want to fully control is only available through an abstraction.
I would say that unless he has a really good reason not to dual boot, he should do just that.