• 0 Posts
  • 31 Comments
Joined 3 years ago
Cake day: June 9th, 2023

  • A minimal but powerful language can feel like magic. Like, literally. The whole appeal of magic in stories is that you can step out of the normal rules and do something that defies belief, and who hasn’t fantasized about that in real life?

    If the language you’re using has a lot of magic built into it, things that the language can do but you can’t do, you feel mundane, like the language is letting you look at the cool things it can do, but doesn’t let you do them yourself. A more minimal language, where the important things are in the library, means that the language designers haven’t kept that stuff to themselves. They built the language such that that power is available to everyone. If the language gives its standard library authors the power to do things beautifully and elegantly without special treatment, then your library is getting those benefits too.

    It’s also a sign of good design, because just saying “well, this thing magically works differently” tends to be a shortcut, a hack, an indication that something isn’t right and couldn’t be fixed nicely.


  • Most people don’t need a file to be in two places at once, it’s more confusing than convenient. And if they do want two of a file at all, they almost certainly want them to be separate copies so that the original stays unmodified when they edit the second one. Anyone who really wants a hard link is probably comfortable with the command line, or should get comfortable.

    The Mac actually kind of gets the best of both worlds: APFS can clone a file so that the two aren’t hard links but still share the same blocks of data on disk. The second file takes up no extra space, and only when a block gets edited does it diverge from the other copy and use more space, while the unmodified blocks remain shared. This happens when copy-pasting or duplicating a file in the Finder, as well as with cp on the command line. I’m sure other modern file systems have this as well.
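    The hard-link behavior described above is easy to see from the command line (a minimal sketch; the filenames are made up):

```shell
# Two names, one inode: there is really only one file, so an edit made
# through either name is visible through the other.
printf 'hello\n' > original.txt
ln original.txt hardlink.txt       # hard link, not a copy
printf 'edited\n' >> hardlink.txt  # "edit the second one"
cat original.txt                   # the edit shows up here too
ls -i original.txt hardlink.txt    # same inode number under both names
```

    A clone or a plain copy would have left original.txt untouched, which is what most people actually expect.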


  • It doesn’t have to be a big baroque thing. When there’s a dotfile I configure regularly, I move it to a Git repo and use stow to put it “back” into place with a symlink. On new machines, it isn’t long before I try something that doesn’t work or see the default shell prompt and go “oh yeah, I want my dotfiles”, check out the repo, run a script that initializes a few things (some stuff is machine-specific so the script makes files for that stuff with helpful comments for me to remember the differences between login shells or whatever) and then I’m off to the races.
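    The setup described above, sketched with plain ln -s to show the effect stow produces (the repo layout and filenames here are hypothetical):

```shell
# One "package" directory per tool inside the dotfiles repo; stow would
# symlink its contents into the home directory. Here ln -s stands in for
# something like `stow -d ~/dotfiles -t ~ zsh`.
mkdir -p dotfiles/zsh
printf 'export EDITOR=vim\n' > dotfiles/zsh/.zshrc  # tracked in Git
ln -s "$PWD/dotfiles/zsh/.zshrc" .zshrc             # the "back into place" symlink
readlink .zshrc                                     # edits to the dotfile land in the repo
```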


  • I don’t understand why this works, but it does

    What was happening before was this: Git received your commits and ran the shell script, directing the script’s output stream back to Git so that it could relay it over the connection and display it on your local terminal with remote: stuck in front. Backgrounding the npx command was working; the shell was quitting without waiting for npx to finish. However, Git wasn’t waiting for the shell script to finish; it was waiting for the output stream to close, and npx was still writing to it. You backgrounded the task but didn’t give npx a new place to write its output, so it kept using the same output stream as the shell.

    Running it via bash -c means “don’t run this under this bash shell, start a new one and have it just run this one command rather than waiting for a human to type a command into it.”

    The & inside the quotes is doing what you expect: telling the subshell to background the task. As before, the subshell will quit once the command is running, since you told it not to wait.

    The last bit is &> /dev/null, which tells your original, first shell to send this command’s output (both stdout and stderr; that’s what the & in &> means) somewhere else. Specifically, to the special file /dev/null, which, like basically everything else in /dev/, is not really a file; it’s special kernel magic. Its trick is that when you write to it, everything works as expected except that all the data is thrown away. Great for output like this that you don’t want to keep.

    So, the reason this works is that you’re redirecting the npx output into a black hole where nothing has to wait for it. The subshell isn’t waiting for the command to finish, so it quits almost immediately, and then the top-level shell moves on once the subshell has finished.

    I don’t think the subshell is necessary, if you do &> /dev/null & I think it’d be the same effect. But spawning a useless shell for a split second happens all the time anyway, probably not worth worrying about too much.
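    The waiting-on-the-stream behavior is easy to reproduce without Git. A command substitution, like Git’s remote side, reads until EOF rather than until the shell exits; here sleep stands in for the long-running npx process:

```shell
# A backgrounded child that inherits stdout keeps the pipe open for its
# whole lifetime, so the reader of $( ) blocks until it exits.
t0=$(date +%s)
out=$(bash -c 'sleep 2 & echo started')               # waits ~2 seconds
t1=$(date +%s)
echo "inherited stdout: waited $((t1 - t0))s"

# Redirecting the child's output first means nothing holds the pipe open,
# so the substitution finishes as soon as the shell does.
t0=$(date +%s)
out=$(bash -c 'sleep 2 &> /dev/null & echo started')  # returns immediately
t1=$(date +%s)
echo "redirected to /dev/null: waited $((t1 - t0))s"
```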


  • In Haskell, that’s “unit”, written (): the empty tuple. It’s basically a value with no contents, behavior, or particular meaning, useful for representing “nothing”. It’s a solid thing that is never a surprise, unlike undefined or other languages’ nulls, which are holes in the language or errors waiting to happen.

    You might argue that it’s a value and not a function, but Haskell doesn’t really differentiate the two anyway:

    value :: String
    value = "I'm always this string!"
    
    funkyFunc :: String -> String
    funkyFunc name = "Rock on, " ++ name ++ ", rock on!"
    

    Is value a value, or is it a function that takes no arguments? There’s not really a difference, Haskell handles them both the same way: by lazily replacing anything matching the pattern on the left side of the equation with the right side of the equation at runtime.


  • Opt out means “we will be doing this, without permission, unless you tell us not to” and opt in means “if you give us permission we will do this.” Codebases can contain important and sensitive information, and sending it off to some server to be shoved into an LLM is something that should be done with care. Getting affirmative consent is the bare minimum.


  • The right thing is to make it opt-in for everyone, simple as that. The entire controversy goes away immediately if they do. If they really believe it’s a good value proposition for their users, and want to avoid collecting data from people who didn’t actually want to give it, they should have faith that their users will agree and affirmatively check the box.

    If free users are really such a drain on them, why have they been offering a free version for so long before it became a conduit to that sweet, sweet data? Because it isn’t a drain; it’s a win-win. They want people using their IDE, even for free. They don’t get money from it, but they get market share, broad familiarity with their tool amongst software engineers, a larger user base that can support each other on third-party sites and provide free advertising, and more.


  • My characterization would be that there’s a spectrum here:

    • 100% yes code: compilers, IDEs, scripting environments, databases, you wanna get something done, you are going to be specifying it in something that at the very least looks like traditional source code.
    • Completely on the other side of the spectrum, traditional consumer-oriented software: word processor, web browser, accounting/bookkeeping (not spreadsheets though, we’ll get to those), photo/video/audio editor, maps, music player, etc.

    That first side of the spectrum is pretty easy to pin down. It has little to no metaphor or abstraction, and the pointy tip of this side is no metaphor at all, just writing machine code and piping it directly into the CPU. A higher level language will let you gloss over some details like registers, memory management, multithreading, maybe pretend you’re manipulating little objects or mathematical functions instead of bits on a wire, but overall you are directing the computer to do computer things using computer language, and forced to think like a computer and learn what computers can and cannot do. This is, of course, the most powerful way to use a computer but is also completely inaccessible to almost everybody.

    The second, I’d link together as all being software with a metaphor that is not particularly related to computing itself, but to something more real world. People edited music by physically splicing tapes together, an audio editor does an idealized version of that. Typewriters existed, and a word processor basically simulates that experience. Winamp wasn’t much more than a boom box and a sleeve of CDs. There is usually a deliberate physicality and real-world grounding to the user’s mental model of the software, even if it is doing things that would be impossible if the metaphor were literal. You don’t need to use code, but you also don’t get anything code-like out of it.

    No-code is in between. It’s intended for a similar audience as the latter category, who want a clear, easy-to-understand mental model that doesn’t require a computer science degree, but it tries to enable that audience to perform code-like tasks. Spreadsheets are the original example of this; although they originate as a metaphor for paper balance sheets, the functions available in formulas fundamentally alter the metaphor to basically “imagine if you had a sheet of paper that could do literal magic” and at that point you’re basically just describing a computer with a screen. Everything in a spreadsheet is very tactile, it’s easy to see where your data is, but when you need to, you can dip into a light programming environment that regular people can still make work. In general, this is the differentiator for “no code” apps: enabling non-coders to dip their toes into modifying program behavior, scripting tasks, and building software. They’re limited to what the tool provides, but the tool is trying to give them the power that actual coding would provide.

    I’d never thought of WordPress as low-code, but I think that fits. Websites go beyond paper or magazines, and WordPress allows people to do things that would otherwise require code and databases and web servers and so on.