Person interested in programming, languages, culture, and human flourishing.

  • 4 Posts
  • 23 Comments
Joined 1 year ago
Cake day: June 17th, 2023

  • I switched from Zsh to Nushell almost two years ago and I have never looked back. If you need POSIX compliance, Nushell is a no-go. But it sounds like your real problem was just that Zsh was familiar whereas fish was not. Nushell strikes the perfect balance between offering the commands you’re used to and letting everything just make intuitive sense. Plus, its help command is so far above and beyond other shells. I rarely need to open the Nushell docs (even though they’re really good), and I never have to go to the community (even though it’s awesome), because I can figure pretty much everything out just from interacting with help right in the terminal.
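
    For example, roughly (the search string is just whatever you happen to be looking for):

    ```nu
    # fuzzy-search the built-in commands by name
    help commands | where name =~ "date"

    # full help, with usage examples, for a specific command
    help http get
    ```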


  • I think the point is that they don’t want to have to use a full JS framework (which is what HTMX is) for this behavior.

    And this is where HTMX fits in. It’s an elegant and powerful solution to the front-end/back-end split, allowing more of the control logic to operate on the back-end while dynamically loading HTML into their respective places on the front-end.

    But for a tech-luddite like me, this was still a bit too much. All I really want to do is swap page fragments using something like AJAX while sticking to semantically correct HTML.

    EDIT: Put another way, if you look at HTMX’s stated “motivation”:


    • Why should only <a> & <form> be able to make HTTP requests?
    • Why should only click & submit events trigger them?
    • Why should only GET & POST methods be available?
    • Why should you only be able to replace the entire screen?

    By removing these constraints, htmx completes HTML as a hypertext

    It seems the author only cares about the final bullet, and thinks the first three are reasonable/acceptable limitations.
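
    For reference, lifting all four constraints at once looks something like this (the attributes are real htmx; the endpoint and ids are made up):

    ```html
    <!-- any element (not just <a>/<form>), on any event (not just click/submit),
         with any method (not just GET/POST), swapping a fragment (not the whole page) -->
    <button hx-put="/contacts/42"
            hx-trigger="mouseenter"
            hx-target="#contact-card"
            hx-swap="outerHTML">
      Update contact
    </button>
    ```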



  • There are several things I disagree with in this article, although I see where the author is coming from. I will never be on board with “I’ll take my segfaults and buffer overflows,” and I fundamentally disagree about concurrency. I also think that cargo is fantastic, and a lack of standard build tools is one thing that holds Rust’s predecessors back.

    However, a majority of the author’s points can be boiled down to “C is more mature”, which doesn’t tell us much about the long-term viability and value of these languages. For example, in the author’s metric of stability and complexity, they use C99 as the baseline, but C99 is the state of a language that had already had almost three decades of development, whereas Rust has been stable for less than a decade. Touting superior portability, stability, and even the spec, implementations, and ABI is in some real sense just saying “C is older”.

    That’s not to say those things aren’t valuable, but rather that they aren’t immutable characteristics of either language. And given that safety is playing an ever more important role in software, especially systems software, I think Rust will catch up in all the ways that are meaningful for real projects more quickly than most of us realize. I certainly don’t think it’s going anywhere anytime soon.




  • Ah ok, I think I get you now. To be clear, fall through is implicit only when the case being fallen through is empty. I forgot that, if you want to execute some statements in one case and then go to another case, you need goto case. To be fair, I’ve never needed that behavior before.
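
    Something like this, if I remember the rules right (the values are made up):

    ```csharp
    int n = 2; // hypothetical input

    switch (n)
    {
        case 0:
        case 1:                 // an empty case label may fall through implicitly
            Console.WriteLine("small");
            break;
        case 2:
            Console.WriteLine("two first");
            goto case 1;        // running statements first requires an explicit goto
        default:
            Console.WriteLine("big");
            break;
    }
    ```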

    I absolutely see your point on break not being the default. It is sad, although I will say I don’t mind a little extra explicitness in code I’m sharing with a large team.


  • I’m not sure I understand your point about fall through having to be explicit, but I agree that switch statements are lacking ergonomics - which makes some sense considering they were added a looooong time ago. Luckily, they recently added the switch expression, which uses pattern matching and behaves more like Rust’s match expression. It’s still lacking proper exhaustiveness checks for now, but that’s a problem with the core design of composition in C#’s type model and one they are looking to solve (alongside Discriminated Unions in all likelihood).
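
    A rough sketch of the newer form (hypothetical shape types):

    ```csharp
    // a switch expression with property patterns, much closer to Rust’s match
    double Area(object shape) => shape switch
    {
        Circle { Radius: var r } => Math.PI * r * r,
        Rectangle { Width: var w, Height: var h } => w * h,
        _ => throw new ArgumentException("unknown shape"),
    };

    record Circle(double Radius);
    record Rectangle(double Width, double Height);
    ```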



  • I certainly see where you’re coming from, but I think the designers of C# have done a fairly good job evolving the language to balance backwards compatibility, simplicity (in terms of having “only one way” to do things), and the ergonomics expected of modern languages. I think C++ and JS are great comparisons, because C++ has at this point added everything and the kitchen sink to its language and standard library, whereas C# has gone a route much more like JS, introducing features that evolve best practices but still feel and read like essentially the same language. For example, primary constructors still look just like regular C#; they’re just a nicer way to define simple POCOs when desired.
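
    E.g., something like this (a hypothetical type), which still reads like ordinary C#:

    ```csharp
    // C# 12 primary constructor on a simple POCO
    public class Person(string name, int age)
    {
        public string Name { get; } = name;
        public int Age { get; } = age;
        public override string ToString() => $"{Name} ({Age})";
    }
    ```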

    As far as important language features, I think it’s easy to pick on discriminated unions because it seems like C#’s users unanimously want them. However, if you read through the proposals and discussions, it’s obvious that there’s a lot of nuance and trade-offs in deciding how and in what form discriminated unions should exist in C# (and the designers are very active in working through that nuance - I believe they said they have a working group that meets weekly to discuss it*). And to be fair, they have introduced a LOT of other important features (like records and the vastly improved pattern matching) in just the last few years. Without those features, discriminated unions wouldn’t be nearly as appealing, and those features are great for the language even without DUs.

    *Edit: Source for my claim is the recent Languages & Runtime Community Standup on the official dotnet YouTube channel. Mads talks about the working group at 21:05, but the discussion of discriminated unions begins at 7:09.




  • I’ve been daily driving Nushell for right around a year now. There have been fewer breaks and difficulties than I expected from pre-1.0 software, and it has made my shell experience so delightful!

    I find that when I want to do something simple quickly, nushell enables me to do it with no context switching, little to no friction, and no googling. I can just open/http get my data, pipe it through a really straightforward pipeline that practically writes itself with how clear the commands are, and save it in whatever format is convenient to me. I don’t have to monkey around with Python and packages and virtual environments, and I don’t have to spend 75% of my time googling and debugging insane bashisms. Nushell just works, and the help is so convenient I almost never have to go to the docs.
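
    Something like this, roughly (the endpoint and columns are made up):

    ```nu
    # fetch JSON, shape it, and save it as CSV in one pipeline
    http get https://example.com/api/releases |
        where stable == true |
        select name version date |
        sort-by date |
        save releases.csv
    ```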

    My absolute favorite feature is that it’s truly cross-platform. I don’t have to install a compatibility layer like MinGW on Windows; I can just make it my default shell and it works great. Then I can use it the exact same way in WSL, macOS, and Linux.

    The reasons to not be interested in nushell imo are:

    1. You’re already comfortable to the point of mastery with bash/zsh/fish, so the ease of use and quality of life improvements from nushell won’t be as valuable to you compared to the cost of switching.
    2. You spend more time in the shell on random servers you don’t want to customize than you do in your own shell. Obviously we are (infinitely?) far away from nushell becoming a default on any platform, so if you aren’t gonna be able to install it in the places you would want it most, you’ll just end up infuriated that nothing else is as good as it.


  • “the only primary difference was that one happens before the aggregation and the other happens after, and all the other implications stem from that fact.”

    This is correct. The biggest implication of that difference is that, when you filter rows via a HAVING clause, the query will first select all the rows and aggregate them, and only then begin to filter them. That can be a massive performance hit if you thought the filter would prevent filtered rows from ever being selected. Of course this makes perfect sense (there’s no logical way to filter an aggregate without first aggregating), but it’s not obvious.
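
    To illustrate with a made-up orders table:

    ```sql
    SELECT customer_id, SUM(total) AS spend
    FROM orders
    WHERE status = 'paid'        -- rows are excluded before grouping
    GROUP BY customer_id
    HAVING SUM(total) > 1000;    -- groups are excluded only after aggregating
    ```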

    “PRQL’s simplification, rather than obscuring, seems like a more clear and reasonable way to express that distinction.”

    My main point is that PRQL makes no distinction. If you didn’t inspect the SQL output and already know about the difference between WHERE and HAVING, you would have no idea, because in PRQL they’re both just “filter”. Hence, PRQL is not simplifying the complexity (you still need to learn the full SQL syntax and the specifics of how it works), but it does obscure it (you have no hints that one of your filter statements will behave completely differently from the other).

    As far as removing arbitrary SQL features, I agree that that is its main advantage. However, I think either the developers or else the users of PRQL will discover that far fewer of SQL’s complexities are arbitrary than you might first assume.


  • Because at the end of the day, SQL is what’s being run by the database. For example, in the Showcase on the front page, they have an “Orthogonality” example that demonstrates filtering both before and after an aggregation, which compiles to a WHERE clause and a HAVING clause respectively. WHERE and HAVING have very different impacts on SQL queries, and vastly different performance implications, but the simplification in PRQL obscures that complexity.
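
    Roughly like this (my PRQL may be slightly off, and the table is made up, but it mirrors their example):

    ```prql
    from orders
    filter status == "paid"             # compiles to WHERE: runs before aggregation
    group {customer_id} (
        aggregate {spend = sum total}
    )
    filter spend > 1000                 # compiles to HAVING: runs after aggregation
    ```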

    At the end of the day, the transpiled language will have to either support only a subset of SQL’s features, or else be at least as complex as SQL. It cannot support all of SQL’s features and yet be less complex, because it is just a wrapper around SQL.

    I suppose for the right crowd, possibly people who run queries only once and do not care about performance implications, data integrity, etc., this could be a really useful tool. And in all fairness, they mention exactly that on their homepage:

    "PRQL’s focus is analytical queries

    PRQL was originally designed to serve the growing need of writing analytical queries, emphasizing data transformations, development speed, and readability. We de-emphasize other SQL features such as inserting data or transactions."

    But for developers who need to maintain an application database, I don’t foresee this becoming a useful substitute for SQL.


  • I don’t know anything about the newsletter. The core of the article seems to be observing a shift in AI/ML/LLM opportunities. Where before, most people in the field were developing the base models and doing the arduous and highly complex work of training them (what the author calls ML Engineers), now the majority of the field will be people who use those pre-made, pre-trained models, tweaking and applying them to more and more specific and quantifiable uses (what the author calls AI Engineers). He drew a malleable line between the two based on whether you’re interacting with the model directly or via an API.


  • This seems nice in theory, but the tradeoffs make me question its real-world viability. It has to be a transpiled language, because SQL is so ubiquitous it may never die. And yet, because it’s a transpiler, I’m skeptical that it will actually be easier to write than SQL, because you’ll still need to know all the gotchas and eccentricities of SQL.

    Maybe for users who are already experts in SQL this would be a quaint alternative syntax. However, I personally have already invested so much time developing familiarity with SQL that I see no advantage in moving to a new syntax that would take more time to become deeply familiar with, and that my co-workers won’t understand.