• 1 Post
Joined 9 months ago
Cake day: October 4th, 2023

  • tal@lemmy.todaytoSelfhosted@lemmy.worldHDD or SSD for a home server?
    3 days ago

    For any computer today, server or not, I’d probably default to an SSD unless I expected to be making use of a large store of files accessed serially, like a large movie collection, or maybe a backup server that can play well with rotational drives.

    The only thing there that looks like it could be doing that is the Samba server, depending upon what the remote clients are doing with it (could be a movie server).

    In general, if you can fit your stuff on an SSD today, I’d get an SSD.

    You can also add a rotational drive down the line if you run low on space and need inexpensive storage for something that you’re going to access serially, and use both; just move the bulk stuff to the rotational drive then.

  • Probably not as authentic as getting an old-school CRT as a secondary monitor and plugging it in via an HDMI/DP/USB-C-to-VGA adapter, but I’m on a laptop in a restaurant and can’t screenshot that anyway.

    You can still get the keyboard, too:


    In 1996, Lexmark International was prepared to shut down their Lexington keyboard factory where they produced Model M buckling-spring keyboards. IBM, their principal customer and the Model M’s original designer and patent holder, had decided to remove the Model M from its product line in favor of cost-saving rubber-dome keyboards.

    Rather than seeing its production come to an end, a group of former Lexmark and IBM employees purchased the license, tooling and design rights for buckling-spring technology, and, in April 1996, reestablished the business as Unicomp.


    I have one of their Endura Pros at home, which is an old-school IBM buckling-spring keyboard. That has the IBM Trackpoint nipple mouse. The buckling-spring keyswitches will last forever, as far as I can tell, but I wore out the mouse-button switches. They might have fixed that over the years, but I would probably just get one without the Trackpoint if I got another.

    If you have one of those and are typing away (“click ping! click ping! click ping!”), looking at a CRT, that’s probably about as close as you get to the BBS era.

    Probably a way to rig up simulated 9600 baud too.

  • Right now when updates get applied to the NAS, if it gets powered off during the update window, that would be really bad and inconveniently require manual intervention.

    You sure? I mean, sure, it’s possible; there are devices out there that can’t deal with power loss during update. But others can: they’ll typically have space for two firmware versions, write out the new version into the inactive slot, and only when the new version is committed to persistent storage, atomically activate it.

    Last device I worked on functioned that way.
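    The two-slot scheme is easy to sketch. Here's an illustrative Python version (the file layout and the JSON "active" record are made up for the example; real firmware uses raw flash partitions and a boot-config block, but the ordering is the point: the new image is fully durable before the active-slot pointer flips, so a crash at any moment leaves either the old or the new version bootable):

```python
import json
import os
import tempfile

def write_durably(path, data):
    # Write to a temp file, fsync it, then atomically rename over the target.
    d = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=d)
    try:
        os.write(fd, data)
        os.fsync(fd)  # data is on persistent storage before we proceed
    finally:
        os.close(fd)
    os.rename(tmp, path)  # atomic on POSIX filesystems

def apply_update(statedir, new_image):
    # Find the inactive slot.
    with open(os.path.join(statedir, "active.json")) as f:
        state = json.load(f)
    inactive = "slot_b" if state["active"] == "slot_a" else "slot_a"
    # 1. Write the new firmware into the inactive slot.
    write_durably(os.path.join(statedir, inactive), new_image)
    # 2. Only once it's committed, atomically flip the pointer.
    write_durably(os.path.join(statedir, "active.json"),
                  json.dumps({"active": inactive}).encode())
```

Power loss during step 1 leaves the old slot active and untouched; power loss during step 2 leaves either the old or the new pointer in place, never a half-written one.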

    you might lose data in flight if you’re not careful.

    That’s the responsibility of the application, if it relies on the data being persistent at some point: it needs to be written to deal with the fact that there may be in-flight data that doesn’t make it to the disk, and it needs to call fsync() (or whatever its OS provides) if it expects the data to be on-drive before taking other actions that depend on that.

    Normally, there will always be a period where some data being written out is partial: the write() could complete after handing the data off to the OS’s buffer cache. The local drive could report completion while the data is only in its cache. The app could perform multiple write() calls, and the first could have completed without the second. With a NAS, the window might be a little longer than it otherwise would be, but something like a DBMS will do the fsync(); at any point, it’d be hypothetically possible for the OS to crash, or power loss or something to happen.

    The real problem, that I need an nas for, is not the loss of some data, it’s when the storms hit and there’s flooding, the power can go up and down and cycle quite rapidly. And that’s really bad for sensitive hardware like hard disks. So I want the NAS to shut off when the power starts getting bad, and not turn on for a really long time but still turn on automatically when things stabilize

    Like I said in the above comment, you’ll get that even without a clean shutdown; you’ll actually get a bit more time if you don’t do a clean shutdown.

    Because this device runs a bunch of VMs and containers

    Ah, okay, it’s not just a file server? Fair enough – then that brings case #2 back up again, which I didn’t expect to apply to the NAS itself.

  • I’m assuming that your goal here is automatic shutdown when the UPS battery gets low so you don’t actually have the NAS see unexpected power loss.

    This isn’t an answer to your question, but stepping back and getting a big-picture view: do you actually need a clean, automatic shutdown on your Synology server if the power goes out?

    I’d assume that the filesystems that the things are set up to run are power-loss safe.

    I’d also assume that there isn’t server-side state that needs to be cleanly flushed prior to power loss.

    Historically, UPSes providing a clean shutdown were important on personal computers for two reasons:

    • Some filesystems couldn’t deal with power loss and could produce a corrupted filesystem – FAT, for example, or HFS on the Mac. That’s not much of an issue today, and I can’t imagine that a Synology NAS would be doing that unless you’re explicitly choosing to use an old filesystem.

    • Some applications maintain state and, when told to shut down, will dump it to disk. So maybe someone’s writing a document in Microsoft Word and hasn’t saved it for a long time; a few minutes of warning provides them time to save it (or the application to do an auto-save). Auto-save usually partially mitigates this anyway. I don’t have a Synology system, but AFAIK they don’t run anything like that.

    Like, I’d think that the NAS could probably survive a power loss just fine, even with an unclean shutdown.

    If you have an attached desktop machine, maybe case #2 would apply, but I’d think that hooking the desktop up to the UPS and having it do a clean shutdown would address the issue – I mean, the NAS can’t force apps on computers using the NAS to dump state out to the NAS, so hooking the NAS up that way won’t solve case #2 for any attached computers.

    If all you want is more time before the NAS goes down uncleanly, you can just leave the USB and RS-232 connection out of the picture and let the UPS run until the battery is exhausted and then have the NAS go down uncleanly. Hell, that’d be preferable to an automated shutdown, as you’d get a bit more runtime before the thing goes down.

  • Yes. I wouldn’t be preemptively worried about it, though.

    Your scan is going to try to read and maybe write each sector and see if the drive returns an error for that operation. In theory, the adapter could respond with a read or write error even if a read or write worked or even return some kind of bogus data instead of an error.

    But I wouldn’t expect this to actually arise, and I wouldn’t be particularly worried about the prospect. It’s sort of a “could my grocery store checkout person murder me” thing. Theoretically yes, but I wouldn’t worry about it unless I had some reason to believe that it was the case.

  • I haven’t used it recently, but last time I did, I used MO2 with vanilla WINE, just setting my WINE prefix to the Skyrim Proton prefix. WINE and Proton would convert the registry in the WINE prefix back and forth each time one launched. I haven’t used SteamTinkerLaunch.

    Prior to that, I used Wrye Bash, which was a mess to get working in Linux – but could run natively, at least at one point, with some prodding. I’ve also run it under WINE. It took a lot of massaging. I don’t recommend that route unless you can program, know Python and are willing to get your hands dirty.

    And I also had a stint where I wrote my own scripts to reconstruct the modded environment from scratch.

    My most-recent attempt at Bethesda modding was in Starfield, with a much-simpler CLI mod manager, this. I have gotten some mods working but not others; I don’t know if it’s a case-folding issue. Will need more experimentation. It doesn’t have the conflict-diagnosis tools that Wrye Bash does, or that I assume MO2 probably does (though I haven’t run into them). I don’t think it supports Skyrim, Fallout 4, or Fallout 76; that probably matters at least insofar as mod managers for those need to merge leveled lists. My (brief) impression is that the Starfield modding community is heading in the direction of avoiding needing the mod manager to do that, with a mod that merges that stuff dynamically at game runtime.

    the performance is not great.

    Uh. The performance of MO2 or Skyrim?

    MO2…I don’t recall, it might not have been snappy, but I don’t recall it being especially unusable. Certainly not at the level that I wouldn’t use the software. I was using a reasonably high-end system, but I don’t think that it’s particularly resource-intensive. I was running off SSD, and maybe some of the stuff might have been I/O intensive.

    Skyrim was fine from a performance standpoint. I mean, you can obviously kill performance with the right mods, but I assume that you mean “modding at all”.

    EDIT: If you put a lot of mods into Skyrim, like, hundreds, it can take a while to launch. IIRC, one problem – not Linux-specific – there is that loose files aggravate launch performance issues. My understanding is that, where possible, use mods that merge files into a .BSA rather than loose files. A number of mods have multiple versions; pick the .BSA one.

    EDIT2: The Skyrim, Fallout 4, and Fallout 76 versions of Bethesda’s engine don’t really take much advantage of multiple cores the way the Starfield version does. I get buttery-smooth performance in Starfield; Fallout 76 invariably is a bit jerky when loading resources in a new cell. I don’t get a consistent framerate at 165 Hz in Fallout 76 the way I can in Starfield. But I don’t know if that’s what you’re running into, without specifics of the performance issues. And that’s not gonna be a Linux-specific issue or anything that can realistically be resolved short of forward-porting the Skyrim, Fallout 4, and Fallout 76 games to the Starfield engine.

  • Are you talking about browser cache? Sure.

    Hit about:cache in your URL bar and you can see what it’s caching.

    Firefox has a memory cache, that lasts for the life of the browser session, and a disk cache that persists from session to session.

    Note that by default, if you have Firefox set to delete data in your browser on exit (which is a sensible thing to do from a privacy standpoint), it’ll also wipe the browser cache (which is sensible behavior, since you can be identified by what your browser has cached in the past, same way cookies work). So if you have that privacy setting on, you may have no persistent cache.

    The browser disk cache size used to be exposed in the GUI preferences. At some point, IIRC, they switched to a “smart cache size” based on available disk space, which is IMHO excessively conservative. You can bump it up in about:config with the browser.cache.disk.capacity setting, and you’ll probably have to flip off browser.cache.disk.smart_size.enabled.
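    If you’d rather pin it from a user.js file in your profile directory than click through about:config, the equivalent looks something like this (pref names as I recall them from recent Firefox versions; the capacity value is in KiB, and 1048576 here is just an example meaning ~1 GB):

```js
// user.js sketch -- disable the "smart" auto-sizing and pin the disk cache.
// browser.cache.disk.capacity is in KiB; 1048576 KiB = 1 GiB.
user_pref("browser.cache.disk.smart_size.enabled", false);
user_pref("browser.cache.disk.capacity", 1048576);
```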

  • Yeah, I use Steam as a deb too.

    I haven’t done it, but as long as Steam itself is isolated – as I expect flatpak Steam is – anything it launches will be too, and you can add arbitrary binaries. AFAIK, that works with Windows binaries in Proton.


    Referring to your response to dillekant, I’m not sure how much Steam buys you in terms of security, though, unless you’re buying from Valve. The flatpak might provide some isolation by virtue of being flatpak (though I dunno how many permissions the Steam flatpak is granted…I assume that at bare minimum, it has to grant games access to stuff like your microphone to let VoIP chat work).


    Steam itself, as of today, doesn’t provide isolation at all.

    Adding a non-Steam game to Steam lets you launch it from Steam, which might be convenient, and maybe use Proton, which has a few compatibility patches.

    If I wanted to run an untrusted Windows binary game today on my Linux box, if it needs 3d acceleration, I don’t have a great answer. If it doesn’t, then running it in a Windows VM with qemu is probably what I’d do – I keep a “throwaway” VM for exactly that. It has read access to a shared directory, and write access to a “dropbox” directory. I wouldn’t bring Steam into the picture at all. I don’t want it near my Steam credentials (Steam credentials have been a target of malware in the past) or a big software package like Steam that may-or-may-not have been well-hardened.

    It does get network access to my internal network – I haven’t set up an outbound firewall on the bridge, so a hostile binary could get whatever unauthenticated access it could get from my LAN. And it could use my Internet connection, maybe participate in a DDoS of someone or such. But it doesn’t otherwise have access to the system. It isn’t per-app isolation, but if the VM vanished today, it wouldn’t be a problem – there’s nothing sensitive on it. It doesn’t know my name. It can’t talk to my hardware, outside of what’s virtualized. It doesn’t have access to my data. There are no credentials that enter that VM. Unless qemu itself has security holes, software in the thing is limited to the VM.

    I have used firejail to sandbox some Linux-native apps, but while it’s a neat hack and has a lot of handy tools to isolate software, I have a hard time recommending it as a general solution for untrusted binaries. I don’t know how viable it is to use with WINE, which it sounds like is what you want. It has a lot of “default insecure” behavior, where you need to blacklist a program’s access to a resource rather than whitelisting it. From a security standpoint I’d much rather have something more like Android, where firejail starts a new app with no permissions, warns me if it’s trying to use a resource (network, graphical environment, certain directories), and asks me if I want to whitelist that access. It requires some technical and security familiarity to use. I think the most-useful thing I’ve used it for is isolating Ren’Py games: it can mostly cut network access and disk write access, and a number of games (though not all; arbitrary Python libraries can be bundled) can work with a reasonably-generic restrictive firejail renpy profile. It just requires too much fiddling and knowledge to be a general solution for all users, and “default insecure” is trouble, IMHO.
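    For what it’s worth, the Ren’Py-style lockdown described above looks roughly like this as a firejail profile (a sketch from memory, not a tested profile; the directives are standard firejail ones, but the game path is an example):

```
# Sketch of a restrictive firejail profile for a Ren'Py-style game.
# Deny network access entirely.
net none
# Give the process a throwaway, empty home directory.
private
# Make the game's own files visible but not writable.
read-only ${HOME}/games/mygame
# No root escalation, no audio.
noroot
nosound
```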

    I do wish that there was some kind of reliable, no-fiddling, lighter-weight per-game isolation available for both Windows binaries and Linux binaries out-of-box. Like, that Joe User can use and I could recommend.

    I did see something the other day when reading about an unrelated Proxmox issue, talking about Nvidia apparently having some kind of GPU virtualization support. And searching, it looks like AMD has some kind of “multiuser GPU” thing that they’re billing. I don’t know how hardened either’s drivers are, but running VMs with 3d games may have become more practical since last I looked.

    EDIT: Hmm, yeah, sounds like QEMU does have some kind of GPU virtualization these days:


    Need native performance, but multiple guests per card: Like with PCI passthrough, but using mediated devices to shard a card on the host into multiple devices, then passing those:

    -display gtk,gl=on -device vfio-pci,sysfsdev=/sys/bus/pci/devices/0000:00:02.0/4dd511f6-ec08-11e8-b839-2f163ddee3b3,display=on,rombar=0

    You can read more about vGPU at kraxel and Ubuntu GPU mdev evaluation. The sharding of the cards is driver-specific and therefore will differ per manufacturer – Intel, Nvidia, or AMD.

    I haven’t looked into that before, though. Dunno what, if any, issues there are.

    EDIT2: Okay, I am sorry. I am apparently about four years out of date on Steam. Steam didn’t have any form of isolation, but apparently in late 2020, they added Pressure Vessel, a container-based system for per-game isolation.

    I don’t know what it isolates, though. I may need to poke more at that. Pretty sure that it doesn’t block network access, and I dunno what state the container gets access to.

  • Mans is driving vanilla chrome just taking ads and trackers to the face.

    I know that the context here is that you’re saying that he’s presumably supposed to have familiarity with the browser, but…honestly, I’m not sure that encouraging people who don’t have some level of technical familiarity with their browser to use ad-blockers is a fantastic idea.

    As obnoxious as ads can be, it’s also true that I’ve seen various ad-blocker and tracker-blocker (and now auto-EU-cookie-agreement-accept) extensions periodically break websites over the years. If people are able and willing to troubleshoot the browser when things break, then sure, but I know a bunch of people who I don’t think would be able to do that, and the irritation of ads may be less serious than the inability to access a website, even if the ads are a lot more frequent than the website breaking.

    There are also websites that will intentionally detect ad-blockers and block access to the website. Those do generally provide instructions to turn off ad-blockers – like, they want to have people viewing the ads, not just avoiding the website – but I dunno if it’s a great idea to get people in the habit of happily following instructions to grant additional browser permissions to sites that tell them to do so.

    There’s an additional problem – that some widely-installed extensions have been purchased by other companies. I don’t do a great job of following who-acquired-what-addon on the system that I use. I suspect that most people don’t. If you’re giving a browser addon access to see and twiddle all sorts of data across all sorts of websites – banks, whatever – you’re extending a lot of trust to whoever can push updates to that addon.

    I think that what Firefox might benefit from is a “troubleshoot webpage” feature that – among other things – tries doing a binary search, disabling addons, to see if some active addon is causing breakage. That won’t solve the “some company bought a browser extension and is now maybe doing less-than-salubrious-things” problem, but it’d make me more comfortable recommending that not-super-technically-savvy users use ad-blockers in terms of website breakage.
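    The bisection itself is simple. A sketch, where page_breaks() is a hypothetical callback that reloads the page with only the given addons enabled and reports whether it still breaks (this assumes a single culprit addon; multiple interacting culprits would need more passes):

```python
def find_culprit(addons, page_breaks):
    # Binary search: repeatedly test half the suspects. If the page still
    # breaks with only that half enabled, the culprit is in it; otherwise
    # it's in the other half. O(log n) page reloads instead of O(n).
    suspects = list(addons)
    while len(suspects) > 1:
        half = suspects[:len(suspects) // 2]
        if page_breaks(half):
            suspects = half
        else:
            suspects = suspects[len(suspects) // 2:]
    # Confirm the last suspect actually breaks the page on its own;
    # otherwise no addon is the cause.
    if suspects and page_breaks(suspects):
        return suspects[0]
    return None
```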

    Might also do some things like check Internet connectivity (like, is some Javascript-heavy page breaking because the Javascript is trying to fetch things and silently failing), check for VPNs being active or inactive, etc. I have a Bluetooth adapter that occasionally wedges after being woken up from sleep, and it’s not always obvious to me that that’s the cause. I know a lot of people who would spend a lot of time frustrated with that.

  • tal@lemmy.todaytoLinux Gaming@lemmy.worldInfected games under Proton.
    17 days ago

    I was just wondering what would happen if I downloaded a game that was infected by a computer virus and ran it in Linux using Proton.

    Depends on the mechanism. Some viruses will target stuff that WINE doesn’t emulate – like, if it tries to fiddle with Windows system files, it’s just not going to work. But, sure, a Windows executable could look for and infect other Windows executables.

    Has this happened to anyone?

    I don’t know specifically about viruses or on Proton. But there has been Windows malware that works under WINE. Certainly it’s technically possible.

    How would the virus behave?

    Depends entirely on the virus in question. Can’t give a generic answer to that.

    What files, connections or devices would it have access to?

    WINE itself doesn’t isolate things (which is probably reasonable, given that it’s a huge, often-changing system and not the best place to enforce security restrictions). On a typical Linux box, it would have access to anything that you, as a user, would, since Linux user-level restrictions would be the main place where security restrictions come into play.

    I do think that there’s a not-unreasonable argument that Valve should default to having games – not just Proton stuff – run in some kind of isolation by default. Basically, games generally are gonna need 3d access, and some are gonna need access to specialized input devices. But Steam games mostly don’t need general access to your system. But as things stand, Steam doesn’t do any kind of isolation either.

    You can isolate Steam as a whole – you can look at installing Steam via flatpak, for one popular option. I don’t use flatpaks, so I’m not terribly familiar with the system, but I understand that those isolate the filesystem that Steam and its games have access to. That being said, it doesn’t isolate games from each other, or from Steam (e.g. I can imagine a Steam-credentials-stealing piece of malware making it into the Steam Workshop). On the other hand, I’m not totally sure how much I’d trust Valve to do a solid job of having the Steam API be really hardened against a malicious game anyway – that’s not easy – so maybe isolating Steam too is a good idea.

    Could it be as damaging as running it in Windows?

    Sure. If it’s not Linux-aware, it probably isn’t going to do anything worse than deleting all the files that your user has access to, but in general, that’d be about as bad anyway. If it is Linux-aware, it could probably do something like intercept your password next time you invoke sudo, then make use of it to act as root and do anything.

  • tal@lemmy.todaytoSelfhosted@lemmy.worldServer for a boat
    17 days ago

    What hardware and Linux distro would you use in this situation?

    The distro isn’t likely to be a factor here. Any (non-super-specialized) distro will be able to solve issues in about the same way.

    I mean, any recommendation is going to just be people mentioning their preferred distro.

    I don’t know whether saltwater exposure is a concern. If so, that may impose some constraints on heat generation (if you have to have it and storage hardware in a waterproof case).

  • If there’s a better way to configure Docker, I’m open to it, as long as it doesn’t require rebuilding everything from scratch.

    You could try using lvmcache (block device level) or bcachefs (filesystem level caching) or something like that, have rotational storage be the primary form of storage but let the system use SSD as a cache. Dunno what kind of performance improvements you might expect, though.
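    A rough sketch of what the lvmcache route looks like, assuming the data LV already lives in a volume group on the rotational drive (the volume group name, LV name, device path, and cache size here are all examples, not a tested recipe):

```
# Add the SSD partition to the existing volume group.
vgextend vg0 /dev/sdb1
# Carve out a cache volume on the SSD.
lvcreate -L 100G -n cache0 vg0 /dev/sdb1
# Attach it as a cache for the rotational data LV.
lvconvert --type cache --cachevol cache0 vg0/data
```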

  • I would suggest, unless you have a very unusual situation, that you’re going to have an easier time of it with a keyboard and display.

    If your computer can do HDMI out, you can use a television as display.

    In all seriousness, unless this is some kind of super-exotic situation (like, you’re on a sailboat in the middle of the Pacific and are suddenly needing to set up a Debian server) I would probably get an inexpensive USB keyboard to keep around. Even if you don’t normally need it (like, you use a laptop or something) there are a number of situations that it solves, like “one of my laptop keys has just stopped working” or “I actually need to work on some kind of computer that doesn’t have an integrated keyboard”.



    That’s not gonna be a very pleasant typing experience, but it’s under $4 for two, if you’re determined to spend as little as possible.

    If you can’t get access to a television, here’s a small, 640x480 USB/HDMI display under $50:


    I’d probably get a larger display, maybe used – I mean, maybe you think that you’re never gonna need to look at a computer’s output again, but you might find yourself troubleshooting a machine like this one, and 640x480 is a kind of significant limitation – but that’s at least a baseline.

    If you specifically don’t want a keyboard, and if you have some other device with a display and text input and USB (well, or serial) support, I’d bet that the Debian installer can probably handle an RS-232 serial console install.




    But I’m guessing that you don’t have the serial hardware. A USB-to-serial adapter is another thing that I keep around, because every now and then I need to work on headless devices that have a serial interface, but I’ll concede that the serial port is getting pretty elderly.

    I’d probably get a USB-to-serial male adapter and a USB-to-serial female adapter if neither end has an existing serial port (which, these days, with desktop hardware, is quite possible). Something like this:




    But then you have to be sure that you can get your machine to boot into the Debian install media. On machines that are designed to be run headless, routers and such, it’s common for the BIOS to support a serial interface. On desktop machines…not so much. So if it’s already configured to boot off USB, that may be fine, but if it’s not, well…
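    For reference, once there is an installed system (or install media you can edit), getting the bootloader and kernel talking on the serial port usually looks something like this in /etc/default/grub (a sketch; the unit number and speed are examples):

```
# Kernel console on the first serial port at 115200 8N1, with tty0 as a
# secondary console so an attached display still works.
GRUB_CMDLINE_LINUX="console=ttyS0,115200n8 console=tty0"
# Have GRUB itself use both the serial terminal and the local console.
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
```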

    Debian also has a fully-automated installer, as long as you can set your machine up to boot into it without a keyboard or display:


    That kind of thing is normally more used to set up VMs or manufacture hardware.
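    A preseed file is just a list of debconf answers. A minimal sketch might look like this (the keys are from Debian’s example preseed file, the values are placeholders, and a real unattended install needs considerably more than this):

```
# preseed.cfg sketch -- answers the installer would otherwise prompt for.
d-i debian-installer/locale string en_US.UTF-8
d-i netcfg/get_hostname string headless-box
# Automated partitioning of the whole disk -- this is the dangerous part.
d-i partman-auto/method string regular
d-i partman/confirm boolean true
d-i passwd/root-password password changeme
d-i passwd/root-password-again password changeme
```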

    I would be very careful with that thing and probably wipe it after you use it, since it’s gonna be a USB key that wipes computers if you reboot and they’re set to boot off USB.

    It almost certainly isn’t a great fit for your use case – like, the time you’re probably going to expend setting it up isn’t going to be worth whatever you’d save spending on hardware – but mentioning it for completeness.