• 0 Posts
  • 404 Comments
Joined 3 years ago
Cake day: June 21st, 2023

  • It looks like this was briefly touched on in the article, but LLMs don’t learn shit.

    If I tell you your use of a list is dumb, and that switching to a set turns an O(n) lookup into O(1) and cuts out 15 lines of code, you probably won’t use a list next time. You might even look into using a deque or a heap.
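
    To illustrate with a made-up Python snippet (the names are hypothetical, not from any real review):

        user_ids = list(range(100_000))
        banned_list = [uid for uid in user_ids if uid % 7 == 0]  # list: O(n) membership test
        banned_set = set(banned_list)                            # set: O(1) average membership test

        def is_banned_slow(uid):
            return uid in banned_list  # scans up to the whole list on every call

        def is_banned_fast(uid):
            return uid in banned_set   # one hash lookup

    The point of a review comment like that is that you internalize the data structure trade-off, not just the one-line fix.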

    If your code was written by an LLM? You’ll “fix” it this time (by telling your LLM to do it), and then you’ll do it again next time.

    I’m sorry, but in the latter case, not only are you mentally handicapping yourself, you’re actively making the project worse in the long term. And you’ve got me sending out resumes, because, and I mean this in the politest way possible, go fuck yourself for wasting my time with that review.




  • 8GB of RAM isn’t a small amount (though it’s by no means a lot). RAM usage scales with the size of your project and its dependencies, so for smaller projects it shouldn’t be a problem at all.

    8GB of RAM doesn’t tell us about the rest of your system, though. What CPU do you have? Is your storage slow? Performance is affected by a lot of factors: a slow CPU naturally runs programs slower, fewer hardware threads means less runs in parallel, and slow storage means reading and writing incremental build data can become a bottleneck.


  • Right now it’s no big deal to any AI company, because more code means more training data for the AI, but will we get to the point where they’re happy enough with the code output and then turn around claiming they own it?

    At least in the US:

    The vast majority of commenters agreed that existing law is adequate in this area and that material generated wholly by AI is not copyrightable.

    So it seems unlikely that they would be able to claim any ownership.

    As for the rest of your comment (the parts around ownership): you always own the copyright to any copyrightable work you create, including code. When you post on a website, you license your comment/code/whatever to that site under its ToS (you have to, or the site couldn’t legally publish your work).

    Some websites (many, or even most, depending on what you use) overlicense your work and use it for other purposes as well, GitHub being one example. But in the US, courts have basically ruled that AI companies can pirate whatever works they want, without any attempt to license them, and still be fine, so the “overlicense” bit is more of a formality at this point anyway.


  • there should be a fork of dotnet.

    Dotnet is maintained by the .NET Foundation and is entirely open source. There are thousands of forks and local clones of the repos under that organization. Rather than hoping someone else does this, it’d actually be a huge benefit to everyone for you to create a local clone of the repo and update it now and then, if you’re worried it might go down.

    telemetry being totally removed

    DOTNET_CLI_TELEMETRY_OPTOUT=1, though it’s lame that it’s opt-out rather than opt-in. At least the CLI gives a fat warning on first use (which hilariously spams CI output), but opt-in by default would be so much better.

    an alternative to nuget.org

    You can specify other package sources as well, so nothing technically stops someone from running their own alternative. That said, you’d have to configure it for each project/solution that wants to use that registry.
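
    For example, a per-project nuget.config pointing at an alternative source might look like this (a sketch; the key name and URL are made up):

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <packageSources>
            <!-- drop nuget.org and any inherited sources -->
            <clear />
            <!-- hypothetical alternative registry -->
            <add key="my-mirror" value="https://nuget.example.org/v3/index.json" />
          </packageSources>
        </configuration>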

    Setting such a thing up could be insurance in case they pull anything in the future, too.

    The main thing I’d be worried about here is nuget.org itself getting pulled. As far as I can tell, it’s run by MS, not the foundation, and losing it would mean basically the entire ecosystem gone all at once. Fortunately, it’s super easy to create private registries that mirror packages from nuget.org, and doing so is standard practice at many companies. That means at the very least some of the registry could be recovered if this happened.


    For a fork, these would be the main goals I’d look for:

    • Default to opt-in for telemetry, or make it local-only. Telemetry should go to a sink owned by the forking organization if sending telemetry is even possible at all.
    • Default package registry should be one owned and maintained by the forking organization. This would be incredibly expensive though, so they’d need funding for this.
    • The organization should be independent of MS and not funded by it at all. Alternatively, MS could provide funds, but not a majority share; sponsors would be capped so that no single sponsor can fund enough of the fork to control it. Either way, the goal is that no single company has enough leverage to meaningfully shift the project’s direction on its own.

  • Please cite one example of Microsoft ever giving a fuck about users.

    There aren’t many examples, but one that comes to mind is the Xbox Adaptive Controller. It’s not cheap, but it’s also presumably low volume, and it’s unbelievably configurable.

    Outside of that, I’m out of ideas. In my experience, every good change comes in response to user backlash. I’ve moved over to Linux by now because I’m tired of dealing with what Windows has become.







  • Can’t speak for Git, but caching responses is a common enough problem that support for it is built into standard HTTP headers (Cache-Control, ETag, and so on).

    As for building a cache, you’d want to know a few things:

    • What is a cache entry? In your case, it seems to be an API response.
    • How long do cache entries live? Do they live for a fixed time (TTL cache)? Do you have a max number of cached entries before you evict entries to make space? How do you determine which entries to evict if so?
    • What will store the cache entries? It seems like you chose Git, but I don’t see any reason you couldn’t start simple just by using the filesystem (and depending on the complexity, optionally a SQL DB).

    You seem locked into using Git, and if that’s the case, you still need to consider the second point above. Do you plan to evict cache entries? Git repos can grow unbounded in size, and Git doesn’t give you many options for deciding which entries to keep.
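
    If you went the plain-filesystem route instead, a minimal TTL cache sketch could look like this (Python, with made-up names; a real one would want locking and error handling):

        import hashlib
        import json
        import time
        from pathlib import Path

        CACHE_DIR = Path("cache")   # hypothetical cache location
        TTL_SECONDS = 15 * 60       # entries expire after 15 minutes

        def _entry_path(url: str) -> Path:
            # Hash the URL so it’s safe to use as a filename.
            return CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()

        def get_cached(url: str):
            """Return the cached response body, or None if missing/expired."""
            path = _entry_path(url)
            if not path.exists():
                return None
            if time.time() - path.stat().st_mtime > TTL_SECONDS:
                path.unlink()       # evict the expired entry
                return None
            return json.loads(path.read_text())

        def put_cached(url: str, body) -> None:
            CACHE_DIR.mkdir(exist_ok=True)
            _entry_path(url).write_text(json.dumps(body))

    Expiry here is checked lazily on read; with a Git-backed store, you’d have to design that eviction policy yourself.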



  • This has always been an issue. From my experience, the best way to get in was through internships, co-ops, and other kinds of programs. Those tend to have lower requirements and count as experience.

    Of course, things are a lot different today. It’s far more competitive, and people don’t care about actual software dev skills anymore, just who can churn out SLOC the fastest.


  • If the version ranges of the dependencies that depend on vulnerable packages also cover the fixed versions, then just updating your Cargo.lock should pull in the fixed versions. You can do this with cargo update.

    If the ranges don’t cover the fixes, you have a couple options:

    • If the vulnerability doesn’t affect you, do nothing.
    • If it does affect you, you can patch the dependencies to a local version or one in a git branch.

    If you choose to patch a dependency, the version of the patched package still needs to be compatible with what your dependencies request. If foo v2.1.1 depends on bar = "3", it can’t use a patched bar v4.1.2, for example, but it can use bar v3.3.4. In some cases you may need to backport a fix to an earlier version of a package; you can do that locally and point your patch at it with a path specifier.
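
    Concretely, the patch section in your Cargo.toml might look like this (the names and path are made up for illustration):

        # foo depends on bar = "3", so the patched bar must still be a 3.x
        [dependencies]
        foo = "2.1"

        # Use a local checkout of bar with the fix backported
        [patch.crates-io]
        bar = { path = "../bar-3.3.4-patched" }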

    In most cases, the vulnerability probably won’t affect you, but you should still check on a case-by-case basis to make sure.


  • I keep trying to manually write code that I’m proud of, but I can’t. Everything always needs to be shipped fast and I need to move on to the next thing. I can’t even catch my breath. The only thing allowing me to keep up with the team is Cursor, because they all use it as well. The last guy that refused to use AI was just excluded from the team.

    This is the problem. It’s not new that a company rushes its devs to deliver features at a pace that results in garbage code. What’s new is that devs who are willing to use an LLM can deliver those features fast. This obviously looks great to the imbecilic C-suites: deliver features fast, get to market quickly, and spend less on devs!

    This is just short-term thinking, and it looks like you’ve noticed as much. The team you’re on won’t change, because your company’s culture is to deliver the next feature ASAP and focus on the short term. This is common with startups, for example, where it’s a constant race to get more funding. But it always results in some half-assed product that inevitably needs a rewrite at some point, and with LLMs in the mix, you’ll also have a team of people who don’t even understand their own code, making it take even longer to fix or rewrite later.

    Anyway, if you hate it, start applying to other places now. At least in the US (where I am), the job market is ass, and the more time you give yourself to search, the better your chances of finding an option you like.



  • If you’re writing a script that’s more than 269 lines long, you shouldn’t be using Bash.

    Jokes aside, the point isn’t the lines of code; it’s complexity. Higher-level languages reduce that complexity by giving you better tools for complex logic: what’s one line of code in Python can be dozens of lines of Bash (or a long, convoluted pipeline of awk and sed, at which point my eyes just glaze over). Using another language also means better access to dev tooling, like tools for testing, linting, and formatting the scripts.
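
    As a contrived comparison (file and column names made up): summing a CSV column grouped by another column is a few obvious lines of Python, versus an awk pipeline you’d have to puzzle out six months later.

        # Sum the "bytes" column of access.csv, grouped by "host".
        import csv
        from collections import defaultdict

        totals = defaultdict(int)
        with open("access.csv", newline="") as f:
            for row in csv.DictReader(f):
                totals[row["host"]] += int(row["bytes"])

        for host, total in sorted(totals.items()):
            print(host, total)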

    While I’m not really a fan of hostility, it annoys me a lot when I see these massive Bash scripts at work. Nobody maintains them, and no single person understands one from start to end. When a script inevitably starts to fail, debugging it is a nightmare, and adding to it means constantly looking up the syntax that specific commands want. Using a higher-level language at least keeps the scripts maintainable later on.