• 0 Posts
  • 18 Comments
Joined 11 months ago
Cake day: August 16th, 2023


  • Sort of. If you're receiving a notification from a remote server on iOS or standard Android, it goes through Apple's or Google's servers. That said, rather than sending your device the actual notification (which is where this vulnerability comes from), some apps will instead send a type of invisible notification that basically tells the app to check for a new message or whatever; the app then displays a local notification, so the actual message stays on-device and inside the hosting service's servers (like a self-host).
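
    To make that second pattern concrete, here's a rough, hypothetical sketch of the iOS side. It assumes the push payload carries only "content-available": 1 plus a message ID and that the app has the remote-notification background mode enabled; fetchMessage, Message, and the "message_id" key are made up for illustration.

    ```swift
    import UIKit
    import UserNotifications

    struct Message {
        let sender: String
        let body: String
    }

    // Hypothetical: fetch the full message from your own (possibly self-hosted) server.
    func fetchMessage(id: String, completion: @escaping (Message) -> Void) {
        // ...network call to the hosting service would go here...
        completion(Message(sender: "alice", body: "hi"))
    }

    class AppDelegate: UIResponder, UIApplicationDelegate {
        // Called when a silent push ("content-available": 1) wakes the app. The push
        // payload only carries an ID; the message body never passes through Apple's servers.
        func application(_ application: UIApplication,
                         didReceiveRemoteNotification userInfo: [AnyHashable: Any],
                         fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
            guard let messageID = userInfo["message_id"] as? String else {
                completionHandler(.noData)
                return
            }
            fetchMessage(id: messageID) { message in
                // Build and show a *local* notification with the content we just fetched.
                let content = UNMutableNotificationContent()
                content.title = message.sender
                content.body = message.body
                let request = UNNotificationRequest(identifier: messageID,
                                                    content: content,
                                                    trigger: nil) // nil trigger = deliver immediately
                UNUserNotificationCenter.current().add(request)
                completionHandler(.newData)
            }
        }
    }
    ```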




  • I am skeptical of Bluesky. First, it's led by Jack, and we've already seen how that goes. Second, there isn't really a good technical reason for it to exist as its own protocol beyond the fact that they want to control it: the Fediverse/Mastodon was already there, and they could just as easily have contributed the things they wanted to that; they just wouldn't have had full control. Much like Threads' promise to federate, I will be somewhat surprised if they ever do it.

    Were Bluesky/Threads not corporate efforts, I have a feeling they would have followed a similar pattern to the fediverse: build the protocol and release that, then the clients will follow. Bluesky still isn't federating even with its own protocol, and Threads isn't either. Given that this is stuff tiny teams with far, far fewer resources than the corps have accomplished, it's a little wild that neither has gotten there.

    Especially with Bluesky, there doesn't seem to be a stated plan for how it's going to make money. And we're talking about a lot of the same people who destroyed the Twitter API and started locking things down even before Elon killed it completely, and now they're trying to convince us that they're pushing for an open environment.



  • I don't think the languages themselves are even the problem; it's the toolchain. Sure, if you went back to C or whatever you could design more performant systems, but I think the problem overall stems from modern toolchains being kinda ridiculous. It is entirely common in any language to load in massive libraries that suck up hundreds of MB of RAM (if not gigabytes) just to get a slightly nicer function to lowercase text or something.

    The other confounding factor is "write once, run anywhere," which in practice means there is a lot of shared code and such that does nothing on your machine. The most obvious example is Electron. Pretty much all of the Electron apps I use on the reg (which are mostly just Discord and Slack) are conceptually simple apps whose analogues used to run in a few hundred MB of storage and tens of MB of RAM.

    Oh, one other sidenote: how many CPUs are wasting cycles on things that no one wants, like extremely complex ad-tracking, data mining, etc.?

    I know why this is the case, and ease of development does enable us to have software that we probably otherwise wouldn't, but I think this is a real blight on modern computing, and I think it's solvable. This is probably the dumbest idea, but translation layers that produce platform-native code could be vastly improved. Especially in a world where we have generative AI, there has to be a way to say "hey, I've got this JavaScript function, I need this to work in Kotlin, Swift, C++, etc."
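
    To make that concrete, the kind of mechanical translation I'm imagining is below; the JavaScript original and its Swift counterpart are just a made-up toy example, not any real tool's output.

    ```swift
    import Foundation

    // Hypothetical JavaScript original:
    //   function slugify(title) {
    //     return title.trim().toLowerCase().replace(/\s+/g, "-");
    //   }
    //
    // One possible hand-translation to platform-native Swift. The point isn't that
    // this is hard; it's that doing it mechanically across Kotlin/Swift/C++ would let
    // conceptually simple apps ship native code instead of a bundled browser runtime.
    func slugify(_ title: String) -> String {
        return title
            .trimmingCharacters(in: .whitespacesAndNewlines)
            .lowercased()
            .replacingOccurrences(of: "\\s+", with: "-", options: .regularExpression)
    }

    // Example: slugify("  Hello   World ") == "hello-world"
    ```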


  • Lots of stuff -

    On the internet, more open standards and community-driven stuff. It's currently really, really annoying that on my Mastodon feed there are a lot of people sharing Bluesky invite codes, as if that's not just punting the ball for another couple of years. Although this will hopefully be a better outcome than the straight-up silos of the old social media, the fediverse (or something like it; the underlying tech isn't really that important) should still be the default way we think about connecting humanity. Also, far more things should just be, like, a dollar a month or whatever instead of carrying a massive amount of privacy-invading, user-experience-destroying ads.

    In software in general, more privacy. It should be assumed that unless I explicitly opt in, my data is just that: mine. This one is tricky because, while I remain hopeful about generative AI and it needs data to improve the models, I'm leery of sharing my data with it, because so far the more pedestrian uses of data mining have not been put toward things I can really support. I remain extremely leery of GAI that isn't explicitly open source and can't be generally understood.

    On the hardware side, computers have mostly been good enough for a while now. Tech will always get better, but I would like to see more of a focus on keeping working devices useful. Like, at some point a device stops being practically useful because it can't run modern software, but we're leaving a lot of shit behind where that's not the case. Just about any device with an SSD and a processor from the last 10 years (including phones!) should be easily repairable, supported longer, and, once support ends, opened up for community support.


  • The vast majority of comments here complaining about Mac and macOS specifically seem to stem from really, really not understanding much about them. This comment is unfortunately not any different.

    I’ve seen developers working for FAANGs unironically praise the M1 Macbooks as work machines.

    So the best people, the ones FAANG companies fight tooth and nail to hire and who can basically work wherever they want because of their skill, like Macs? Surely they're the dumb ones.

    I have one and the damn thing has an option to change the “modifier key” for the fucking mouse

    Originally, and for quite a while (probably into the early 2000s), Macs shipped with a one-button mouse, and there was no concept of a "right-click." Apple was pretty dogmatic that the OS should be simple enough that one button was enough: you shouldn't need to hide functionality in a context menu, it should be available through the standard UI. Eventually that lost out, but they decided to make context menus* (and other "right-click" actions) a power-user feature rather than a default. So, to make it work on all of the machines that had always shipped with one-button mice, you could hold ctrl and then click an item and you'd get the context menu. They've supported right click for decades now, but if you built up years of muscle memory around ctrl+clicking instead, you still can.

    like press the meta key

    You like the meta key? You should probably thank Apple. Apple has had a "meta" key basically forever, only it's been called "command." I'm old enough to remember when more manufacturers started adding their own meta keys. If you go grab an older keyboard, you'll probably find it also has a "context menu" button, which is basically a "right-click" key, and you almost def won't find one now.

    you want to do basic window manager things

    Lots of people in this thread seem to really, really like being able to window snap, which I kind of get but also generally disagree with. macOS (again, going back a thousand years) has a different philosophy when it comes to managing windows. On [MS] Windows, pretty much all software aims for full screen, and users def do the same. Window snapping now means you have a convenient way to see 2 whole things. If you really, really want window snapping similar to how MS does it, there are a hojillion ways to accomplish this with very simple app installs. macOS has instead tried to make it so that you can manage multiple apps/windows easily without full screen, going back to tiny, tiny screens.

    But let's talk about "basic window manager things" for a sec. Windows has easily, and I mean easily, had the worst window management generally for like two decades. Windows 10 and 11 helped it catch up to things I switched off of Windows and to Linux for in, like, 2004. Exposé, or "Task View" as it's now called in Windows, started on macOS and was adopted on Linux in the mid-2000s; not until Windows 10, and not even its first version, do we get that. Ditto for virtual desktops. In Windows, I can press alt+tab and switch between any open app. In macOS, I can press cmd+tab and switch between any open app, but I can also press cmd+` and switch between an app's windows. In Windows, I can minimize windows to the task bar just as I can in macOS; on macOS, however, I can also just choose to hide all of an app's windows, or hide every window except the app I'm looking at. And on macOS I can use hot corners (which Windows barely touches with its "show desktop" hot corner, sort of), which I can configure however I want. I can throw my mouse into any corner of the screen and get more "basic window manager things" than exist on Windows.

    Its keyboard is that weird, unresponsive, flat form factor that makes it a nightmare to actually use as a portable device

    If you have one of the bad butterfly keyboards, yes. If not, this is nonsense. All laptop keyboards are bad; the Mac versions (with the very large caveat that the butterfly keyboards were insanely stupid/bad) are generally better.

    I get that it’s a relatively powerful computer for the ludicrous amount of battery life it gives you, but that’s purely because it’s an extremely optimized ARM based processor that’s only designed to work with this specific operating system.

    How is this supposed to be a negative? If we zoom out a little, this comment might as well be "oh sure, you can get your fancy graphics effects when you use a, what did you call it? graphics processing unit?" And even then, it still doesn't really accurately capture why Apple has absolutely dominated mobile CPUs and is now crushing it in the class of laptop/desktop processors it competes in.**

    But Apple is practically an antonym for FOSS at this point.

    Aside from Darwin (the open-source core macOS runs on), WebKit (the browser engine Chrome forked from), and passkeys (the thing that might replace passwords), you're still really wrong.

    Beyond those complaints, it’s got good speakers and never produces any heat. Honestly, the only good things about the machines are those hardware elements: the speakers, battery life, and lack of heat.

    How about screens? Trackpad? Physical material, etc?

    I also have a Thinkpad X1 Carbon, which is physically a worse machine: it gets hot, has a fraction of the battery life, etc.

    “I can get vastly less done, and it’s going to be more uncomfortable the entire time.”

    I wonder if the people that really like the M1s like them because it’s the laptop equivalent of an iPhone.

    Lots of misunderstanding here, but I’m already a phone book in.

    * Really, they probably never would have added right-click, but as more software adopted right-click actions, especially cross-platform stuff like Adobe software, they pretty much had to.
    ** They've basically ceded the extreme high end. If you really want the most performant CPU and power/heat aren't a concern, it's not Apple.


  • There are a few things I’d consider:

    • How many users are going to be on the MC server? MC is pretty notorious for eating RAM, and since my home server adventures usually include multiple VMs, I would look for something with at least 32 GB of RAM.
    • For Plex (I'm guessing the same goes for Jellyfin): how many users do you expect to support concurrently, and how good are you at downloading in formats your clients can direct play? Most remote Plex users are going to require transcoding because of bandwidth limits, but if most of your local clients can direct play, or you have a good upload and don't have to transcode 3+ streams at a time, you're probably fine with just about any CPU from the last 10 years (there's a rough sketch of the direct-play-vs-transcode decision after this list).
    • Also re: Plex, do you have any idea of your storage requirements? Again, if you're just getting started with < 10 TB of storage in mind, you can get by with most computers.
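
    As promised above, here's a very rough sketch of that per-stream decision; this is illustrative logic only, not Plex's actual code, and the types, fields, and thresholds are made up.

    ```swift
    // Illustrative only: roughly the decision the server makes for each stream.
    struct Stream {
        let videoCodec: String      // e.g. "h264", "hevc"
        let bitrateMbps: Double     // bitrate of the source file
    }

    struct Client {
        let supportedCodecs: Set<String>   // what this client can direct play
        let availableMbps: Double          // bandwidth between server and client
    }

    enum Playback { case directPlay, transcode }

    func playbackMode(for stream: Stream, on client: Client) -> Playback {
        let codecOK = client.supportedCodecs.contains(stream.videoCodec)
        let bandwidthOK = client.availableMbps >= stream.bitrateMbps
        return (codecOK && bandwidthOK) ? .directPlay : .transcode
    }

    // Every stream that comes back as .transcode costs real CPU (or a hardware encode
    // slot); direct play is close to free, which is why "how many simultaneous
    // transcodes?" is the sizing question that actually matters.
    ```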

    Anyway, to give you an idea, I run both of these and quite a few other things besides on a Dell R710 I bought like 4 years ago and never really have any issue.

    My suggestion would be to grab basically any old computer lying around, or hit up eBay for a ~$100-$200 used server (be careful about 1Us, or rack mounts in general, if noise is a concern; you can get normal tower-case servers as well), and start by running your services on that. That's probably just about what all of us have done at some point. Honestly, your needs are pretty slim unless you're talking about hosting those services for hundreds of people; if you're just hosting for you and a few friends or immediate family, pretty much any computer will do.

    I wanted to keep things very budget conscious, so I have the R710 paired with a Rackable 3016 JBOD bay. The R710 and the Rackable were both about $200, and then I had to buy an HBA card to connect them, so another $90 there. The R710 has 64 GB of RAM and, I think, dual Xeons plus eight 2.5" slots. The Rackable has sixteen 3.5" slots, which means I basically don't have to decommission drives until they die. I run unRAID on the server, which also means I can easily get a decent level of protection against drive failure without having to worry about matching up drives and all that. I put a couple of cheap SSDs in the 710 for cache drives and to run things I wanted to be a little more performant (the MC server, though tbh I never really had an issue running it on spinning disks), and this setup has been more or less rock solid for about 5 years now, hosting these services for about 10 people.


  • Maybe if the Linux community decided on one default there would be more progress on inroads with desktop Linux.

    Well, Linus at least agrees with you. I just watched a talk he gave the other day in which he described one of the biggest problems with the Linux desktop as being that the distros can't even decide on a default package manager/way to package applications, and all of the difficulties that creates.

    It's funny, because even for simple stuff, like when I used to update my Plex install manually, I'd go to the Plex website and the download list is:

    Windows
    Mac
    Linux:
      Debian x 32-bit
      Debian x.1 32-bit
      Debian x 64-bit
      Debian x.1 64-bit
      Fedora …
      Ubuntu …
      CentOS …

    and god help you if you’re not on one of those versions or you don’t use one of those distros.


  • whofearsthenight@lemm.ee to Linux@lemmy.ml: "Microsoft causes learned helplessness"

    I'm going to have to interject even on the first point. FWIW, I'm a person who vastly prefers to use a keyboard when possible, can totally live on a CLI-only system, etc. Anyway:

    It’s faster and easier than using a GUI. This is because you can type a lot faster than you can click-click-drag with a mouse.

    This is just not true for the vast majority of people. Have you ever watched normies type?

    The other thing is that even with simple stuff like file operations, normal users get lost, and a GUI makes it far easier to visualize what is actually happening. If they get a few basic mechanics (click+drag, right click, double click), that's about all they have to remember to move files around, compared to learning ls, cd, mv, cp, the directory tree, symbols like . and .., and so forth. Or, perhaps my favorite example: quick, name a valid tar command. On a GUI system like Windows/Mac, they just need to remember they can do things to files by selecting them and right-clicking. On a CLI-only system, how the fuck are you supposed to get a regular user to remember that to deal with an archive they start by typing tar, much less remember the flags (my flavor of choice is usually -xvf)? How many people who regularly use Linux even know why it's called tar?

    And that's forgetting things like the defaults often being much harder to recover from. On Mac/Windows (and I think even most distros, though I haven't daily-driven a GUI Linux in a while), deleting a file the default way is a safe, easily recoverable operation, because by default the GUI is designed to be more user-safe.

    Though I don't think anyone will disagree that the CLI is an immensely powerful tool a lot of us can't do without, it was never really designed to be accessible to normal users, and I'd be willing to bet that if you were designing a CLI today in a vacuum, it wouldn't look anything like the one we're familiar with. It's also why I'd guess that very few of us who use the command line all the time are without a mile-long list of aliases and scripts, a switch to a shell like zsh with zsh-autosuggestions or zsh-syntax-highlighting, colorls, a specific terminal emulator we swear by, and so on and so forth.



  • I also think it's not a great take that the OS vendor shouldn't include decent default apps for most people. I mean, I know we're in c/linux, but the vast majority of people don't want to start with a terminal and build their system out from there. Hell, even the vast majority of Linux users don't, so then it's just nitpicking over where the line of which defaults should be included falls.

    I have to believe the person who uses Apple Notes, Reminders, Safari, Calendar, etc

    I am that person now. Your example about Reminders is basically exactly why. I used to try, and then pay for, a ton of services to cover reminders/todos because I too was looking for that perfect app that worked just the way I wanted, and really the only thing I got out of it was a slightly different trade-off that, in quite a lot of cases, I was then paying for. It also happens that nearly all of the default apps have been closing the gaps that made me move away from them to begin with. The average user likely won't even look much past the defaults, because the defaults are actually pretty good, so if you don't have an advanced use case, your needs are covered. Like, I used Trello and Todoist for kanban for larger projects, and that's now native in Reminders.



  • It's not really that the developers are cheaper; it's that the vast reduction in complexity is cheaper. Let's say you've got a great general app idea and you're going to build a startup. Your app is going to have to be on mobile and desktop. To do that well, natively, this means:

    • You're going to need a backend dev, who is probably going to be building APIs that touch on web tech.
    • You're going to need a developer team that can target Apple platforms, Android, and Windows. I lump the Apple platforms together here because, although it's not entirely fair to say it's as simple as they promise (click a box and your iOS app works on macOS), you're at least able to work in the same general toolset (Swift, SwiftUI, Xcode, etc.).
    • You’re going to need designers who can design to the specific needs of the platforms, which is also going to mean more domain expertise.
    • testing for each of those platforms.
    • This is true regardless, but you're going to have to deal with more platform-specific support, more platform-specific documentation, etc. How do you do thing x on platform y? Where is the button on this platform vs. that one?
    • Maintaining feature parity as you continue to build is going to be much more difficult, and you're going to have to decide whether to maintain parity and slow the whole process, or give up and launch on some platforms first (hopefully no one uses a Mac and an Android phone, or Windows and an iPhone, or an iPhone and a Samsung tablet, or that gets annoying real fast).

    In short, moving from one platform to two natively doesn't just double complexity and cost; it's far, far worse than that. It's not that a good web dev costs $70k vs. an iOS dev who makes $90k; it's that a good iOS dev costs $90k, a good Android dev costs $85k, and a good Windows dev costs $80k, and hopefully one of those people is familiar enough with each platform to be the team lead, so you can tack on another $20k for them…

    And all the while you're building that team and your 3 different platform-native apps, a competitor or several will launch on Electron and web tech and take the market, because no one except us nerds gives a shit about whether something uses the right platform idioms or even knows what they are, and fewer still have any idea how to check RAM usage and the like.



  • Linus = LMG. Company/brand is dead without him. Besides that, his wife is CFO, and his friends are the management team, basically.

    Also, the Chief Vision Officer title and bringing in a real CEO is what they needed to do a while ago; had they, this probably could have been avoided. I think during the apology video Linus said something like "As Chief [air quotes] Vision Officer, we have to … in order to execute my vision." I think it's very clear the rationale is that everyone knew/knows Linus isn't CEO material, doesn't want to do boring CEO shit, but also doesn't want to give up control, so they brought in an adult to do the boring business parts. But it's clear that he still sees LMG and LTT as his thing, and he wanted to retain control.

    Tbh, the GN stuff is exactly the type of thing I found entirely unsurprising, because LMG isn't really run by business people; it's run by people who lucked into having a business and are trying to fill business-people roles. If it had stopped there, with the apology video, I would have bet that in a couple of weeks it was largely forgotten. The Madison allegations, though, and the timing of them, are going to stick around. People sharpened their pitchforks after GN, and the Madison stuff is not going to be placated by an announcement that they're hiring investigators.

    Even if it's proven that everything Madison said is complete bullshit (which seems extremely unlikely), this will be extremely difficult to come back from.