• 2 Posts
  • 324 Comments
Joined 1 year ago
Cake day: June 6th, 2023


  • Assuming C:S2 uses DirectX and you’re running it through Proton/DXVK, it’s ultimately the Vulkan driver’s job to page to system memory correctly. This honestly sounds like you’re hitting a bug: in that situation it shouldn’t crash, it should just hurt performance from all the paging. I found a couple of older issues where people were seeing exactly this kind of problem with DXVK + Nvidia.

    • This old Witcher 3 one, where they blamed it on Nvidia’s memory allocator not playing well with Linux THP (transparent huge pages). Disabling THP was a workaround (there’s a quick way to check your current THP mode sketched below).
    • This other issue covering several titles that were hitting memory allocation failures despite having tons of system memory, just as you describe. They tried several workarounds, but ultimately it seems to have been fixed by a driver update.

    One other thing to try: I don’t know if you’re running the game in DX11 or DX12 mode, but apparently both exist. If it’s currently running in DX11 mode, try the launch flag -force-d3d12; if you’re already using DX12, maybe try swapping back to DX11. Good luck!
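
    For the THP workaround, here’s a minimal sketch of checking what mode your kernel is currently using before you bother changing anything. It assumes the usual sysfs layout; actually disabling THP requires root.

    ```python
    # Minimal sketch: report the current Transparent Huge Pages (THP) mode.
    # Assumes the standard sysfs path. Disabling THP (the workaround from the
    # Witcher 3 issue) needs root, e.g.:
    #   echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
    from pathlib import Path

    THP = Path("/sys/kernel/mm/transparent_hugepage/enabled")

    def thp_mode() -> str:
        # The file looks like "always [madvise] never"; brackets mark the active mode.
        text = THP.read_text()
        return text[text.index("[") + 1 : text.index("]")]

    if __name__ == "__main__":
        print("THP mode:", thp_mode())
    ```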


  • Shared GPU memory (as described in that article) is just how Windows decided to solve the problem of oversubscribing VRAM. Linux solves it differently (it looks like it just allocates what it needs on demand and uses GART to address it, but I would like to know more). There’s a rough way to peek at where allocations land sketched below.

    So I’m curious what you mean when you say you miss it. Are you having programs crash OOM when running on Linux? Because that shouldn’t be happening.

    It’s not ideal to be relying on shared GPU memory anyway (at least in a dGPU scenario). Kinda like saying you have a preference on which crutches to use.
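
    If you want to see where allocations are actually landing on Linux, here’s a rough sketch. It assumes an amdgpu card that exposes the usual sysfs memory counters (paths can differ, and Nvidia doesn’t expose these; you’d reach for nvidia-smi there instead).

    ```python
    # Rough sketch: report VRAM vs GTT (system memory addressed through GART) usage.
    # Assumes an amdgpu device at card0 exposing these sysfs counters; adjust paths as needed.
    from pathlib import Path

    DEV = Path("/sys/class/drm/card0/device")

    def mib(name: str) -> float:
        # The counters are reported in bytes.
        return int((DEV / name).read_text()) / 2**20

    for kind in ("vram", "gtt"):
        used = mib(f"mem_info_{kind}_used")
        total = mib(f"mem_info_{kind}_total")
        print(f"{kind.upper()}: {used:.0f} MiB used of {total:.0f} MiB")
    ```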


  • he’s big into the clickbait game

    Don’t hate the player, hate the game.

    Smarter Every Day did a video on using clickbait titles and thumbnails. The data is clear: everyone complains about it, but it performs far better than anything else on YT. And if the goal is to most efficiently spread educational videos to the largest number of people, then unfortunately, it’s really the only option.

    TBH, the tone isn’t that different from Bill Nye. Wacky colors, loud obnoxious personality, gotta get kids excited about science somehow.


  • Let’s Encrypt is good practice, but IMO if you’re just serving the same static webpage to all users, it doesn’t really matter.

    Given that it’s a tiny raspi, I’d recommend cutting the overhead that WordPress brings and just statically serving your site from a directory (minimal example below). Whether that means using WP’s static-site options or moving away from WP entirely is up to you.

    The worst-case scenario would be someone finding a vulnerability in the publicly exposed services (Apache), getting persistence on the device, and using that to pivot to other devices on your network. If possible, consider putting the Pi in a DMZ on your router: make sure it can only see the internet and whatever device you plan to maintain it with. That way, even if someone somehow owns it completely, they won’t be able to find any other devices to hack.
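
    Just to illustrate how little a static site actually needs once the pages are exported to plain files, here’s a bare-bones sketch. The ./public_html directory name is made up, and I’d still keep Apache or nginx in front for anything public-facing.

    ```python
    # Bare-bones static file server, just to show a static site is "only files".
    # Not hardened for the public internet; Apache/nginx are the better choice there.
    from functools import partial
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Serve the exported site from ./public_html (hypothetical directory name).
    handler = partial(SimpleHTTPRequestHandler, directory="public_html")
    HTTPServer(("0.0.0.0", 8080), handler).serve_forever()
    ```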


  • I disagree that it’s the same, for multiple reasons. First off, the project and its telemetry were never profit-driven: their goal was always to use modern methods of software development to make the software better.

    The fact is, these days practically every for-profit project gathers a ton of info without asking, and then uses that data to inform its development and debugging (and sells it, but that’s irrelevant to my point). To deny open source software even the option of reporting telemetry is to ask its developers to make a better product than the for-profit competition, with fewer tools at their disposal, and at a fraction of the pay (often on a voluntary basis). That’s just unreasonable.

    That’s why the pushback wasn’t that they were using telemetry at all; it was that they were going to use Google Analytics and Yandex, which are “cheap” options but are obviously for-profit and can’t be trusted as middlemen. They heard the concern over that and steered toward a non-profit solution instead.

    But as a software dev and a Linux user, I often wish I could easily create bug reports using open source, appropriately anonymized telemetry reporting tools. I want to make it as easy as possible for the saints who volunteer their time to build a better system for me to use.

    As for the issues in Tenacity, they were likely specific to what I was doing: rapidly opening and closing a lot of small audio clips and saving them to network-mounted dirs under different names. I remember having issues with simple stuff like keyboard shortcuts to open files; I had to use the mouse to select a redundant option every single time (I don’t recall what it was). I also think it would just crash trying to save to the network-mounted dir, so I always had to save locally and copy over manually. So I switched back and continued my work.


  • Afaik, back when it all went down, they heard the public reaction about the telemetry thing and completely reversed course. On top of that, many distros would be sure to never distribute a build with telemetry enabled anyway. So there has never been any cause for concern. Would love to be proven wrong, though.

    Also, Audacity is handy, but it’s not perfect, and I’ll gladly use a better alternative. But the last time I tried Tenacity, it had a bunch of little differences that made the tool just a bit harder to use, so I still default to Audacity.


  • Yeah, but I think it can feel too much like a circle jerk around here sometimes. I get that people want to win over new users, but some of it goes too far. The fact is Linux isn’t perfect, and while no OS is, there are some critical things you can do on Windows that are still a pain in the ass on Linux. Some of that is a vendor/proprietary software problem, but a good chunk of it is just people being willing to overlook a thin layer of jank in their normal workflows.

    I think we’d all be better off acknowledging and cleaning up the jank rather than pretending it’s fine as is.


  • There was a time when there was an annual “Linux Sucks” presentation that I liked because it was a roundup of candid, yet constructive criticism of Linux (and then at some point the person running that went off the deep end and started yelling about woke agendas).

    I wouldn’t mind there being a whole community devoted to pointing out things that are poorly designed or just broken when running Linux, which we as a community could then try to fix or find workarounds for.

    But as others have pointed out, that community isn’t a community, it’s literally just one account hanging out by themselves.


  • On top of all the other informative comments answering the plethora of questions you understandably have when entering the Linux ecosystem, I want to add: don’t feel like you need to learn all this stuff if it doesn’t interest you, or otherwise turns you off the idea of Linux.

    It’s perfectly fine to ignore all the terminology, install whatever new-user friendly version of Linux you can, and just start using it. If it’s not to your taste, or it asks too much of you, maybe try a different one. But I’m of the firm belief that immediately inundating a new user with a bunch of new vocab and unfamiliar workflows is the mark of a bad new user experience, and you shouldn’t feel required to put up with that.

    The fact is, unlike MSFT, which keeps a bunch of terminology internal to the Windows dev teams, Linux is developed in the open, so all the terminology leaks into the user world too. You just need to get good at saying, “if this doesn’t help me use my PC better for what I need it to do, I don’t care”.


  • In the last 10 years there has been a seemingly noteworthy uptick in hardware bugs found in both Intel and AMD CPUs. Security researchers find and figure out potential attack vectors that rely on these bugs (e.g. Spectre/Meltdown). Then operating systems have to put workarounds in their kernel code to ensure these hypothetical attack vectors are accounted for, at the cost of performance and more complicated code.

    Linus is saying how annoyed he is with all this extra work they have to do, resulting in worse performance, all to plug vulnerabilities that we’ve never actually seen any real attackers use. He’s saying instead we should just write the code how it should be, and if the hardware is insecure, let it be the hardware company’s problem when customers don’t use the hardware.

    The problem is, customers will continue to use the hardware and companies who need a secure OS (all of them) will opt to not use Linux if it doesn’t plug these holes.
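
    If you’re curious which of these workarounds your own kernel is applying, it reports them under sysfs. A quick sketch, assuming a kernel recent enough to expose that directory:

    ```python
    # Quick sketch: list the CPU vulnerabilities the kernel knows about and the
    # mitigation status it reports for each one.
    # Assumes /sys/devices/system/cpu/vulnerabilities exists (recent kernels).
    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")
    ```

    (Booting with mitigations=off is basically the “let it be the hardware company’s problem” stance Linus is describing, with the obvious security trade-off.)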


  • Agreed with using KeePass. If you’re one person accessing your passwords, there’s no reason you need a service running all the time to access your password db. It’s just an encrypted file that needs to be synced across devices (see the sketch below).

    However, if you make frequent use of the secure password-sharing features of LastPass/Bitwarden/etc., then that’s another story. Trying to orchestrate that using separate files would be a headache. Use a service (even if self-hosted).
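
    To illustrate the “it’s just a file” point, here’s a rough sketch of reading the database with no daemon involved. It assumes the third-party pykeepass package, and the file name and entry title are made up.

    ```python
    # Rough sketch: open a KeePass database directly; no always-on service needed,
    # just the encrypted .kdbx file and its master password/key file.
    # Assumes the third-party `pykeepass` package (pip install pykeepass);
    # the file name and entry title below are hypothetical.
    from pykeepass import PyKeePass

    kp = PyKeePass("passwords.kdbx", password="correct horse battery staple")
    entry = kp.find_entries(title="example-router", first=True)
    if entry:
        print(entry.username, entry.password)
    ```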