• 5 Posts
  • 65 Comments
Joined 1 year ago
Cake day: October 20th, 2023

  • … mostly the other way around?

    Theoretically it is possible that a compromised machine could compromise a USB stick. If you are at the point where you are having to worry about government or corporate entities setting traps at the local library? You… kind of already lost.

    Which is the thing to understand. Most of what you see on the internet is, to borrow a phrase, Privacy Theatre. It is so that people can larp and pretend they are Steve Rogers fighting a global conspiracy while necking with a hot co-worker at an Apple store. The reality is that if you are actually in a position where this level of privacy and security matters, then you need to actually change your behaviors. Which often involves keeping VERY strong disconnects between any “personal” device and any “private” device.

    There have been a lot of terrible (but wonderfully written) articles about journalists needing to do this because a government or megacorporation was after them. Stuff like having a secret laptop that they never even take out of a Faraday cage unless they are at least an hour or so away from wherever they are staying that night.


  • I think any “privacy oriented OS” is inherently a questionable (kneejerk: Stupid and reeks of stale honey) strategy in the first place.

    A very good friend of mine is a journalist. The kind of journalist where… she actually deals with the shit the average person online larps, and then some. And what her colleagues and I have suggested is the following:

    Two flash drives:

    • One that is a live image of basically any Linux distro. If you are able to reboot the machine you are using and boot to this, do it. That helps with software keyloggers but obviously not hardware ones.
    • One that is just a folder full of portable installs of the common “privacy oriented” software (like the Tor Browser), supporting a few different OS types.

    Given the option? Boot the public computer to the live image. Regardless, use the latter to access whatever chat or email accounts you need (accounts that are NEVER logged into on any machine you “own” or anywhere near your home).
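
    For the live-image stick, the prep is nothing exotic. Here is a minimal sketch of the idea in Python, assuming Linux, an already-downloaded ISO, and that /dev/sdX really is the flash drive (both paths are placeholders; dd or your distro’s imaging tool does the same job):

    ```python
    # Minimal sketch: write a live ISO to a flash drive, byte for byte.
    # The ISO path and /dev/sdX are placeholders -- triple-check the device with lsblk
    # before running anything like this, and expect to need root.
    import os
    import shutil

    ISO = "any-linux-live.iso"   # placeholder path to the live image
    DEV = "/dev/sdX"             # placeholder device node for the flash drive

    with open(ISO, "rb") as src, open(DEV, "wb") as dst:
        shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # 4 MiB chunks, same idea as dd bs=4M

    os.sync()  # make sure everything actually hits the stick before unplugging
    ```

    The second stick is even simpler: it is literally just a folder tree of portable builds per OS, nothing that needs scripting.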


  • It isn’t about being reasonable.

    If you are expected to track your time to this degree (and, to make it clear, the majority of employers actively don’t want you to), there is a reason. That reason is usually different funding sources, generally a mix of grants and clients.

    And if a client or grant source finds out you are lying about those? Maybe you only had enough work to do 34 hours instead of 40 hours in one week. Would you be cool paying extra because the guy repairing your muffler had a slow week?

    And if people think being proud of a tool that openly talks about what everyone else silently does isn’t a red flag for employers? Hey, it’s a great job market so I am sure none of that will matter.


  • Not sure on ApeLegs but they have increasingly been disabling Linux “support” for the Battlefields because of their anti-cheat.

    I don’t know how popular ApeLegs actually is. But for a lot of live games? Those are making MASSIVE bank and anything that can hurt the economy can kill the game. So a lot of studios actively just disable/block Linux support because the added effort of making sure everything works in Proton is too big of a risk. Because nothing would increase Linux marketshare quite like free vdollars in Fortnite.

    It really fucking sucks. But I find that many studios (like Digital Extremes) are really good about making it clear that even though they don’t officially support Linux, they are very much fans of Proton (and Warframe has had a lot of bugfixes specifically FOR Proton support). Whereas EA has spent the past few months systematically disabling Linux “support” for every game they develop.


  • If you need an “off the shelf, low effort” IDE then you pick whether you are using VSCode or Vim/Emacs and then go to youtube and google “best plugins for ${LANGUAGE} in ${EDITOR}”. And then it is basically a minute of copy-pasting to have it set up to about the same level of optimization.

    Aside from that? The reality is that everything takes time to learn. It took you time to learn your preferred emacs config. It took me time to learn default vim and then what my preferred vim config should be and how to take advantage of it. Just like it took time to learn the editor that came with Python on Windows for years (still might?).

    Which gets back to this being a boomer ass article.


  • Yes. A much less boomer-coded article would be better.

    But as someone who has actually used a lot of the various IDEs over the decades and keeps coming back to vim (and is already expecting to go back to vim within a year because of invasive copilot shit…): Those niche editors? They are either genuinely bad ideas (think TempleOS levels of insanity) or they became plugins for every other IDE. I like vim a lot but emacs is the same (actually emacs is an OS with a text interface but…). And many of those plugins ALSO exist for vscode and atom/sublime and so forth.

    Because good (uncopyrighted…) ideas propagate. That is development and design.


  • I REALLY hate articles like this

    Saying we “lost” this software just shows that people don’t understand what software design/engineering is.

    Basically every screenshot of the “lost” TUIs looks like a normal emacs/vim session for anyone who has learned about splits and :term (guess which god I believe in?). And people still use those near constantly. Hell, my workflow is generally a mix between vim and vscode depending upon what machine and operation I am working on. And that is a very normal workflow.

    And that is what we want out of software development. The good ideas move forward. The less good ideas become plugins for sickos. Because everyone loves vscode right now but… Microsoft is shitting that up REAL fast with copilot, and just wait until every employer on the planet realizes that and bans it.

    And the rest just ignores the point of an IDE. Yes, taking your hand off the keyboard to touch the mouse LOWERS YOUR EFFICIENCY*. But it also means you can switch between languages or even environments trivially. Yes, it is often more annoying to dig through twelve menus to find what you want, or to talk a co-worker through how to do basic git operations that would be three commands. But holy crap I hate the “can’t work without my settings” people who are incapable of doing any “live” debugging or any pair programming where they aren’t driving.

    Back in the day we had plenty of people who were angry that not everyone was using vi and a bunch of tcsh scripts to develop because it clearly meant they didn’t understand what they were doing and were too dependent on compilers and debuggers. And it was just as stupid then as it is now.



  • More drives is always better. But you need to understand how you are making it better.

    https://en.wikipedia.org/wiki/Standard_RAID_levels is a good breakdown of the different RAID levels. Those are slightly different depending on whether you are doing “real”/hardware RAID or software RAID (e.g. ZFS), but the principle holds true and the rest is just googling the translation (for example, Unraid is effectively RAID4 with some extra magic to better support mismatched drive sizes).

    That actually IS an important thing to understand early on. Because, depending on the RAID model you use, it might not be as easy as adding another drive. Have three 8 TB drives and want to add a 10 TB? That last 2 TB won’t be used until EVERY drive has at least 10 TB. There are ways to set this up in ZFS and Ceph and the like but it can be a headache.
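
    If it helps to see that in numbers, here is a rough sketch, assuming a classic single-parity layout (RAID5/RAIDZ1-style) where every member is capped at the size of the smallest drive:

    ```python
    # Rough capacity math for single-parity RAID (RAID5/RAIDZ1-style).
    # Assumption: every member drive is limited to the size of the smallest drive.
    def usable_tb(drive_sizes_tb, parity_drives=1):
        return (len(drive_sizes_tb) - parity_drives) * min(drive_sizes_tb)

    print(usable_tb([8, 8, 8]))      # 16 TB usable out of 24 TB raw
    print(usable_tb([8, 8, 8, 10]))  # 24 TB usable -- the extra 2 TB on the 10 TB drive sits idle
    ```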

    And the issue isn’t the cloudflare tunnel. The issue is that you would have a publicly accessible service running on your network. If you use the cloudflare access control thing (login page before you can access the site) you mitigate a lot of that (while making it obnoxious for anything that uses an app…) but are still at the mercy of cloudflare.

    And understand that these are all very popular tools for a reason. So they are also things hackers REALLY care about getting access to. Just look up all the MANY MANY MANY ransomware attacks that QNAP had (and the hilarity of QNAP silently re-enabling online services with firmware updates…). Because using a botnet to just scan a list of domains and subdomains is pretty trivial and more than pays for itself after one person pays the ransom.

    As for paying for that? I would NEVER pay for nextcloud. It is fairly shit software that is overkill for what people use it for (file syncing and document server) and dogshit for what it pretends to be (google docs+drive). If I am going that route, I’ll just use Google Docs or might even check out the Proton Docs I pay for alongside my email and VPN.

    But for something self hosted where the only data that matters is backed up to a completely different storage setup? I still don’t like it being “exposed” but it is REALLY nice to have a working shopping list and the like when I head to the store.


  • A LOT of questions there.

    Unraid vs Truenas vs Proxmox+Ceph vs Proxmox+ZFS for NAS: I am not sure if Unraid is ONLY a subscription these days (I think it was going that way?) but for a single machine NAS with a hodgepodge of drives, it is pretty much unbeatable.

    That said, it sounds like you are buying dedicated drives. There are a lot of arguments for not having large spinning disk drives (I think general wisdom is 12 TB is the biggest you should go, for speed reasons?), but at 3x18 TB you aren’t really going to be upgrading any time soon. So Truenas or just a ZFS pool in Proxmox seems reasonable. Although, with only three drives you are in a weird spot regarding “raid” options. Seeing as I am already going to antagonize enough people by having an opinion, I’ll let someone else wage the holy war of RAID levels.
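
    Just for the capacity side of that “weird spot” (no opinions here on which layout to pick), a quick sketch assuming three 18 TB drives:

    ```python
    # Capacity-only comparison for three 18 TB drives; says nothing about which layout is "right".
    drives = [18, 18, 18]  # TB

    raidz1_usable = (len(drives) - 1) * min(drives)  # single parity across all three
    mirror3_usable = min(drives)                     # three-way mirror: one copy's worth usable
    stripe_usable = sum(drives)                      # plain stripe: all of it, and no redundancy

    print(raidz1_usable, mirror3_usable, stripe_usable)  # 36, 18, 54
    ```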

    I personally run Proxmox+Ceph across three machines (with one specifically set up to use Proxmox+ZFS+Ceph so I can take my essential data with me in an evacuation). It is overkill and Proxmox+ZFS is probably sufficient for your needs. The main difference is that your “NAS” is actually a mount that you expose via SMB and something like Cockpit. Apalrd did a REALLY good video on this that goes step by step and explains everything, and it is well worth checking out: https://www.youtube.com/watch?v=Hu3t8pcq8O0

    Ceph is always the wrong decision. It is too slow for enterprise and too finicky for home use. That said, I use ceph and love it. Proxmox abstracts away most of the chaos, but you still need to understand enough to set up pools and cephfs (at which point it is exactly like the ZFS setup above).

    And I love that I can set redundancy settings for different pools (folders) of data. So my blu ray rips are pretty much YOLO with minimal redundancy, while my personal documents have multiple full backups (and then get backed up to a different storage setup entirely).

    Just understand that you really need at least three nodes (“servers”) for Ceph to make sense. But also? If you are expanding, it is very possible to set up the ceph pool in parallel to your initial ZFS pool (using separate drives/OSDs), copy stuff over, and then cannibalize the old OSDs. Just understand that this makes the initial upgrade more expensive because you need to be able to duplicate all of the data you care about.
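
    To put a number on the per-pool redundancy point, here is a toy sketch (pool names and replica counts are made up, and it assumes plain Ceph-style replication where a pool’s “size” is just the number of copies kept; erasure coding changes the math):

    ```python
    # Toy sketch: what different per-pool replication settings cost in raw capacity.
    # Assumes plain replication (pool "size" = number of copies); the numbers are hypothetical.
    raw_tb = 3 * 18  # e.g. three 18 TB OSDs

    pools = {
        "blu-ray-rips": 2,        # minimal redundancy
        "personal-documents": 3,  # full three-way replication
    }

    for name, copies in pools.items():
        print(f"{name}: each TB stored consumes {copies} TB raw; "
              f"~{raw_tb / copies:.0f} TB usable if the whole cluster ran at this setting")
    ```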

    I know some people want really fancy NASes with twenty million access methods. I want an SMB share that I can see when I am on my local network. So… barebones Cockpit exposing an SMB share is nice. And I have syncthing set up against the same share to sync video game saves and so forth.

    Unraid vs Truenas vs Proxmox for Services: Personally? I prefer to just use Proxmox to set up a crapton of containers/VMs. I used Unraid for years, but the vast majority of tutorials and wisdom out there assume something closer to Proxmox, and it is often a struggle to replicate that in the Unraid GUI (although I think Level1Techs have good resources on how to access the real interface, which is REALLY good?).

    And my general experience is that truenas is mostly the worst of all worlds in every aspect and is really just there if you want something but are afraid of/smart enough not to use proxmox like a sicko.

    Processor and Graphics: It really depends on what you are doing. For what you listed? Only Frigate will really take advantage of it, and I just bought a Coral accelerator, which is a lot cheaper than a GPU and tends to outperform one for the kind of inference that Frigate does. There is an argument for having a proper GPU for transcoding in Plex but… I’ve never seen a point in that.

    That said: A buddy of mine does the whole vlogger thing and some day soon we are going to set up a contract for me to sit down and set her up an exporting box (with likely use as a streaming box). But I need to do more research on what she actually needs and how best to handle that and she needs to figure out her budget for both materials and my time (the latter likely just being another case where she pays for my vacation and I am her camera guy for like half of it). But we probably will grab a cheap intel gpu for that.

    External access: Don’t do it, that is a great way to get hacked.

    That out of the way: my nextcloud is exposed to the outside world via a cloudflare tunnel. It fills me with anxiety, but as long as you regularly update everything it is “fine”.

    My plex? I have a lifetime plex pass so I just use their services to access it remotely. And I think I pay an annual fee for homeassistant because I genuinely want to support that project.

    Everything else? I used to use wireguard (and openvpn before it) but actually switched to tailscale. I liked the control that the former provided but much prefer the model where I expose individual services (well, VMs). Because it is nice to have access to my cockpit share when I want to grab a file in a hotel room. There is zero reason that anything needs access to my qbittorrent or calibre or opnsense setup. Let alone even seeing my desktop that I totally forgot to turn off.

    But the general idea I use for all my selfhosted services is: The vast majority of interactions should happen when I am at home on my home network. It is a special case if I ever need to access anything remotely and that is where tailscale comes in.

    Theoretically you can also do the same via wireguard and subnetting and vlans, but I always found that to be a mess for providing access both locally and remotely, and the end result is that I get lazy. Also, Tailscale is just an app on basically any machine, whereas wireguard tends to involve some commands or weird phone interactions.


  • Naomi Wu is one of the OGs of maker youtube and a lot of consumer-grade 3D printing can be traced right back to her.

    Teaching Tech have talked about this a fair amount over the past year or two. But the fact that Naomi is basically trying to walk a fine line and not get CCP’d is pretty well known at this point. The issue is that she isn’t seeking help (because any help is likely to get her and her partner in trouble) and the major “gossip” youtubers just want to say “Stupid girl has tits”.

    Real shit situation all around but hopefully she and her partner are safe-ish and happy.


  • Agreed.

    But take a look at computing and UX in general. There is a reason that a common refrain at the college and entry-level-job level is “These kids don’t know what a folder is because they are used to everything being at the top level”. And… there are very good reasons to not deal with folders in google drive or whatever. Hell, everyone lamented the loss of the start menu, but how many of us still just do “winkey, ‘makemkv’” or whatever to launch something? Which is how you get thought processes that hide those features until they are outright gone.

    And the same with error messages. Hell, I was in a meeting not too long ago where we had a very serious discussion about whether we should even still emit error data to the console for an application when NOBODY ever thinks to copy and paste it. So what are we gaining, when the first day of any support ticket is “Okay… can you get me this file from this folder? Okay, open up explorer and click this box and type c colon slash…”


  • It’s honestly a REALLY good idea. Still pisses me off that Windows has had a QR code on its crash screen for years but it just goes to a generic support page.

    That said: There are plenty of environments where a QR code is not viable. Secure environments where you cannot have a camera are one. But also most server rooms, where the KVM has been abused for years and is covered in filth. What you can squint at and scratch down on a piece of paper and what your phone can process are two very different things.

    Linux is easy enough to have both the code and the text, but I do have concerns about the broader impact of this being normalized.
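
    For what it’s worth, the “both code and text” version is trivial to sketch. This assumes the third-party qrcode package (pip install qrcode[pil]) and a made-up panic string; it is just the idea, not how any distro actually does it:

    ```python
    # Sketch: render an error both as a QR code (for phones) and as plain text
    # (for the squint-and-scribble-on-paper case). The panic string below is made up.
    import qrcode

    panic = "example panic: BUG at fs/ext4/inode.c:1234 in example_function+0x42"

    qrcode.make(panic).save("panic_qr.png")  # scannable version
    print(panic)                             # plain-text fallback you can copy by hand
    ```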