• 5 Posts
  • 471 Comments
Joined 2 years ago
Cake day: June 12th, 2023



  • I have just run into an insane number of problems with atomic distros. The thing is that you don’t know something will be a problem until you start needing the functionality.

    I still daily drive Bazzite, but embedded programming, Wireshark (which constantly breaks on upgrades on atomic Fedora), any VM that has to connect to the LAN, any sort of document signing, key management, any government ID software like Belgium’s eID browser login, and much more are very difficult, with most of those examples dead in the water and apparently never going to be fixed.

    It works great for most people, until they need to do one thing outside the mainstream and it falls apart. Hell, there is literally no documentation at all on how adding a user to a group is fundamentally broken (Fedora’s fault, not Bazzite’s): you have to copy groups manually from an undocumented file to /etc/group.
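    For anyone hitting the same wall, a sketch of the usual workaround, assuming the group you need (wireshark here is just an example) is shipped in the image’s /usr/lib/group:

    ```shell
    # On image-based Fedora, /etc/group only lists locally created groups;
    # the package-shipped ones live in /usr/lib/group, so usermod -aG fails
    # until the matching entry is copied over.
    grep -E '^wireshark:' /usr/lib/group | sudo tee -a /etc/group
    sudo usermod -aG wireshark "$USER"
    ```

    Log out and back in afterwards for the new group membership to apply.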


  • And for any of the people saying “he changed”.

    One of his most recent “philanthropic” ventures was to partner with Nestle (good start) to “modernize and increase yields” of the dairy industries in impoverished countries.

    The two organizations then sold modern (likely non-serviceable) equipment and entrenched the farmers in corporate supply chain systems geared towards export, making it much harder to trade locally (I’m not sure how that part worked, but it was in what I read).

    For a grand total of… 1% increased dairy yields.

    Then 3-4 years later they pulled out, leaving heavily indebted farmers without the corporate supply chains and delivery systems they were forced to switch to, and making it very difficult to switch back to the old ways of working, so they can’t sell nearly as much locally.

    Who do you think will buy up those farms when the farmers go bankrupt and have to sell at rock-bottom prices?


  • Similar goal, different function.

    There aren’t install scripts like Lutris has, which occasionally makes it harder to install certain games that might need a modification.

    What makes it special is that it puts each program in a “container” (hence the name) that is sandboxed from your system. E.g. if you were trying to run a program infected with malware, it would have a very hard time infecting the rest of your system, whereas with Lutris and Heroic that separation doesn’t exist, so it would have full access.

    It is less targeted at games and more at general programs.

    That is about it. The interface is much worse than Lutris or Heroic, but it is still a useful program.
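    If you want to see or tighten what the sandbox actually allows, Flatpak’s own tooling works on it like any other app. A sketch, using Bottles’ Flatpak ID (the override line is just an example of narrowing access, not something you must run):

    ```shell
    # Show the permissions the Bottles flatpak currently has
    flatpak info --show-permissions com.usebottles.bottles
    # Example: drop access to the whole home directory for this app only
    flatpak override --user --nofilesystem=home com.usebottles.bottles
    ```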








  • What other people haven’t quite touched on is that the built-in system certainly won’t be powerful enough to run demanding VR games at good frame rates and resolutions.

    I also have my doubts about the 6GHz WiFi connection being enough for it, I hope there is also a wired option.

    But it will be awesome to be able to do normal tasks like coding and writing outside in the garden, for example. I think for people that don’t have a dedicated VR space, this could be awesome with 6GHz WiFi outside without needing base stations.


  • Hey, something I can maybe help with.

    Flatpak IDEs on the main system are not very useful for development; I got rid of mine entirely. I develop firmware, so it might be a bit different from your case, but what I did was have a single Arch distrobox where I could install everything embedded-dev-related that had to work together (J-Link, Nordic tools, code-oss, etc.). A few standalone debugging tools like ST-Link and Saleae Logic 2 could be installed to the home folder by default, and Code could still find them from the distrobox (though they could also be installed in the distrobox). It doesn’t even need an init system, but I ran into a few problems, like having to manually chmod USB devices to give ST-Link access. Udev rules in /etc/udev/rules.d are also hit or miss: the STM rules just don’t work, but the Nordic ones do.
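    For the manual chmod problem, a udev rule usually fixes it permanently. A sketch, assuming the common ST-Link/V2 IDs (0483:3748; check yours with lsusb):

    ```
    # /etc/udev/rules.d/49-stlink.rules (example; verify VID/PID with lsusb)
    SUBSYSTEM=="usb", ATTRS{idVendor}=="0483", ATTRS{idProduct}=="3748", MODE="0666", TAG+="uaccess"
    ```

    Then reload with sudo udevadm control --reload-rules && sudo udevadm trigger instead of rebooting.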

    High storage consumption is likely negligible (or at least nitpicky) since storage is so cheap nowadays. Your SSD doesn’t care if it has 15GB or 20GB of system programs, especially when development codebases and SDKs, games, and media will likely make up 90% of space and almost never share libraries even on traditional systems.


  • But actual results and bugs have very little to do with corporate firings or open positions, as 30 years of history show us.

    If corporations “think” they can fire people, with AI as an excuse, and put that cost in their pockets, they will do it. We are already seeing it in the US tech-bro sphere.

    Companies will tank themselves in the medium-long term to make short term profits. Which I think is the “dev market” that OP is talking about. It shouldn’t affect the market, but it will because you have MBAs making technical decisions. I could be wrong, but the tech market is very predictable as far as behavior. They will hire a skeleton crew and work them to burnout to fix the AI slop. (Tech industry needs unions now)


  • That only solves maybe one of the listed problems. Whatever instance you have, you still have to fetch and serve media to other viewers and instances. The only problem this solves is potentially CSAM spam/moderation.

    Let’s say it was a cell phone: it could handle maybe 2 concurrent transcoding streams before stalling out and people running into buffering (which makes them leave).

    If every person had their own tiny, low-powered server, then you could have at most about 5 concurrent transcodes on any instance in all of PeerTube for an old laptop or desktop computer. Assuming people have on average a 100/30 Mbps connection (true in much of the world outside major cities, or even lower), that caps out at 10 concurrent viewers if everyone runs AV1-compatible clients (which is not the case), and more like 6 concurrent viewers per video at h.264. Those estimates also assume low bitrates (so low quality), no slowdown from your ISP, and no other general home or work-from-home use. In reality it would be closer to 3-6 concurrent viewers per instance (not even per video).
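    The arithmetic behind those viewer counts is simple division of upload bandwidth by per-stream bitrate; a sketch, where the 3 and 5 Mbps figures are my assumptions for modest-quality 1080p streams:

    ```python
    upload_mbps = 30       # upstream side of a typical 100/30 home connection
    av1_mbps = 3           # assumed modest-quality 1080p AV1 stream
    h264_mbps = 5          # assumed comparable h.264 stream

    # Best-case concurrent viewers = upload bandwidth / per-viewer bitrate
    print(upload_mbps // av1_mbps)    # 10 concurrent AV1 viewers
    print(upload_mbps // h264_mbps)   # 6 concurrent h.264 viewers
    ```

    Real numbers come in lower once ISP throttling and other household traffic eat into the 30 Mbps.
    
    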

    And that’s still not counting storage, which is massive for anyone who creates more than a couple of videos per year.

    My point is just that it is an extremely difficult and costly problem that is not as simple as “more federation” like in text and image-based social media because of the nature of video, the internet, and viral video culture. Remember, federation replicates all viewed and subscribed content on the instance (so the home instance has to serve the data and both instances have to store it)


  • Yep. I have posted on Stack Overflow exactly 3 times. One time it was marked as a duplicate and pointed at something that was not even the same topic. One time I had done too much detail and debugging for the classic know-it-alls to come make a smartass remark, and I was completely ignored. The final time I got one comment, addressed it, and that person was never heard from again lol.



  • Just a few thoughts as to why it hasn’t taken off:

    Video is multiple orders of magnitude more difficult and expensive to serve than text or even audio.

    • Your server needs a great upload speed which is not achievable for on-site home servers for most people in the world

    • Your server has to have at least one dedicated encoding GPU (no Raspberry Pis or Intel NUCs if you want any meaningful traffic)

    • Your server has to have a ton of storage, especially if you allow 4k content to be uploaded, which while much cheaper than before, is still expensive. Here in the EU, reliable storage is around 300€/12TB for drives, which fills up very fast with 4k videos or if you try to store different resolutions to reduce transcoded loads.

    • Letting random people upload video onto your instance is significantly harder to moderate than text or photos. Think of the CSAM spam that hit Lemmy when it started taking in many new users…

    • The power usage (and bill) of the server will also be much higher than without PeerTube because of constant transcoding
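    To put the storage bullet in numbers: the 300€/12TB figure is from above, but the 25 Mbps 4k bitrate is my assumption, so treat this as a rough sketch:

    ```python
    bitrate_mbps = 25                             # assumed 4k upload bitrate
    gb_per_hour = bitrate_mbps * 3600 / 8 / 1000  # Mbit/s -> GB per hour of video
    hours_per_drive = 12_000 / gb_per_hour        # hours of 4k video per 12 TB drive

    print(round(gb_per_hour, 2))    # 11.25 GB per hour at the original bitrate
    print(round(hours_per_drive))   # ~1067 hours before the next 300 € drive
    ```

    Storing extra transcoded resolutions alongside the original multiplies that consumption further.
    
    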

    The cost, both financial and in server load, is simply too great for me, and many others, to set up a PeerTube instance.

    Regardless of how easy it is for people to create on PeerTube, someone has to bear the cost of hosting it. That is cheap-ish for Lemmy or Mastodon, but there is a reason YouTube was a loss leader for Google for a long time, and many streaming services restrict 4k video.

    That isn’t even getting into compensation for the content makers.



  • Yes, but I am also of the opinion that not one single acronym should be used without the section spelling it out at least once. Many programming docs will say what an acronym means exactly once, somewhere in the docs, and then never again.

    Also, if they use more complex concepts that they don’t explain, there should be a link to a good explanation (so one doesn’t have to sift through mountains of crap to find out what the hell it does). The Arch Wiki does this very well: every page is literally full of links, so you can almost always brush up on concepts you are unfamiliar with.

    There seem to be 10 extremely low-quality, badly written, low-effort docs for every 1 good documentation source out there. It is hard to RTFM when the manual skips 90% of the library and gives an auto-generated API reference with missing or cryptic explanations of parameters, for example.