• 1 Post
  • 37 Comments
Joined 1 year ago
Cake day: July 10th, 2023

  • I love the term “write-only code”; it’s perfect. I used to love Perl, as it felt like it flowed straight from my brain into the keyboard. What a free and magical language.

    So it turned out I had ADHD. Took meds, went back to C/C++ with renewed appreciation, and haven’t touched Perl since, as it horrifies me to look at it. What a nightmare of dangling references and questionable typing. Any language that lets you cast a string to a function and call it really needs to sit down and think about what it’s doing.


  • If you don’t want memory-safe buffer overruns, don’t write C/C++.

    Fixed further?

    It’s perfectly possible to write C++ code that won’t fall prey to buffer overruns. C is a lot harder. However, yes, it’s far from memory safe; you can still do stupid things with pointers and freed memory if you want to.

    I’ll admit that, as I grew up with C, I still have a love for some of its oh-so-simple features like structs. For embedded work, give me a packed struct over a complex serialization library any day (rough sketch at the end of this comment).

    I tend to write a hybrid of the two languages for my own projects, and I’ll be honest I’ve forgotten where exactly the line lies between them.
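
    To illustrate the packed-struct point, here is a rough sketch of what I mean. The message layout and field names are made up for the example, and it assumes a GCC/Clang-style toolchain for __attribute__((packed)); the idea is just that the struct itself is the wire format, with no serialization library in sight.

        #include <stdint.h>
        #include <string.h>

        /* Hypothetical sensor report; field names are invented for illustration.
           packed removes padding so the bytes on the wire match the struct. */
        typedef struct {
            uint8_t  msg_id;
            uint16_t sensor_raw;
            int16_t  temp_decicelsius;
            uint32_t uptime_s;
        } __attribute__((packed)) sensor_report_t;

        /* "Serialize" by copying the struct bytes straight into a TX buffer.
           Returns 0 rather than overrunning a too-small buffer. */
        size_t pack_report(const sensor_report_t *r, uint8_t *buf, size_t buflen)
        {
            if (buflen < sizeof(*r))
                return 0;
            memcpy(buf, r, sizeof(*r));
            return sizeof(*r);
        }

    The obvious caveat is that sender and receiver have to agree on endianness and layout, which is usually a given when both ends are your own firmware.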


  • LLM AI is a fad, but not the same kind of fad as VR. It doesn’t need to be integrated into everything, but the technology has genuine utility and will not be going away.

    I think the trend of “AI in everything” is stupid, yet I’m running a VSCode plugin that integrates local LLM models, and it’s very useful.

    This is the same sort of thing that can be useful in a browser too. The web is so spammy these days that feeding it to an LLM to summarize and filter it is a legitimate use case.




  • “I wouldn’t try parametric models in freecad”

    I would clarify that you’re talking about a specific use case, one that OpenSCAD does indeed handle better. However, for most CAD tasks I find OpenSCAD is overkill and less intuitive.

    “Parametric design” usually refers to the workflow used in the Part Design workbench, as well as in SolidWorks etc., where geometry is defined by constraints.

    The Part Design workbench does work well and, despite the topological naming issue, is sufficient for most hobbyist and many light industrial tasks. If I need to draw up an arbitrary bracket or bushing or similar, I don’t even bother using a workflow that guards against the issue; I just use it casually like I would SolidWorks. Only if the part is complex, or if I know it will need to be tweaked, do I bother doing everything on datum planes etc., because it’s a lot slower and more hassle.

    It’s very good news that the topological naming issue is being solved, though. It’s the #1 issue with FreeCAD IMO and the one that holds it back from serious industry use.


  • A million tiny decisions can be just as damaging. In my limited experience with several different local and cloud models, you have to review basically all output, as they can confidently introduce small errors. Often the code will compile and run, but it has small errors that cause output to drift, or the aforementioned long-run overflow type errors.

    Those are the errors that junior or lazy coders will never notice and will just walk away from, causing hard-to-diagnose failures down the road. And the code “looks fine”, so reviewers would need to really go over it with a fine-toothed comb, which only happens in critical industries.

    I will only use AI to write comments and documentation blocks, and to get jumping-off points for algorithms I don’t keep in my head (“write a function to sort this array”). It’s better than Stack Exchange for that, IMO.
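
    To make that concrete, here is an invented example of the class of error I mean (not from any real model output): a rolling average that compiles, runs, and looks fine in a quick review, but quietly breaks on a long-running system.

        #include <stdint.h>

        /* Rolling average of a sensor value. The bug: 'sum' is never reset or
           widened, so after enough samples it overflows and the "average"
           silently drifts or flips sign. Nothing crashes, nothing warns. */
        static int32_t sum;
        static uint32_t count;

        int32_t update_average(int32_t sample)
        {
            sum += sample;              /* quietly overflows eventually */
            count++;
            return sum / (int32_t)count;
        }

    A reviewer skimming this sees a textbook average; only someone thinking about how long the device actually runs will catch it.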


  • I tried using AI tools to do some cleanup and refactoring of some legacy embedded C code and was curious if it could do any optimization or knew any clever algorithms.

    It’s pretty good at figuring out the function of the code and adding comments, and it did some decent refactoring of a few sections to make them more readable.

    It has no clue how to work in a resource-constrained environment, or about the main concepts that separate embedded from everything else: namely, that the code has to be able to run “forever”, that it has to operate in realtime on a constant flow of sensor data, and that nobody else is taking care of your memory management.

    It even explained to me that we could do input filtering by using big arrays to do simple averaging, on a device with only 1 kB of RAM, or use a long long for a never-reset accumulator without worrying about what will happen, because “it will be years before it overflows”. (The kind of constant-memory alternative I’d actually use is sketched at the end of this comment.)

    AI buddy, some of these units have run for decades without a power cycle. If lazy coders start dumping AI output into embedded systems the whole world is going to get a lot more glitchy.
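
    For contrast, here is the sort of constant-memory filter I would reach for instead. It’s a rough sketch with made-up constants, and it assumes an arithmetic right shift for negative values (true of the usual embedded compilers): a fixed-point exponential moving average that needs four bytes of state, no sample array, and a bounded accumulator, so it can genuinely run forever.

        #include <stdint.h>

        #define ALPHA_SHIFT 4   /* smoothing factor of 1/16; tune to taste */

        /* Filter state, kept scaled by 2^ALPHA_SHIFT so the fractional part
           isn't thrown away. It converges toward sample << ALPHA_SHIFT, so its
           magnitude stays bounded (~524k for int16_t input) and never overflows. */
        static int32_t ema_state;

        int16_t filter_sample(int16_t sample)
        {
            /* ema += (sample - ema) / 16, done in scaled integer math */
            ema_state += (int32_t)sample - (ema_state >> ALPHA_SHIFT);
            return (int16_t)(ema_state >> ALPHA_SHIFT);
        }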


  • evranch@lemmy.ca to Linux@lemmy.ml · “Is DNS Bloat too?” · 6 months ago

    It’s really annoying when recent devices neither respect the DNS you’re advertising nor allow you to configure it yourself (Android…)

    My site is behind CGNAT on IPv4, with recently added fully routed IPv6. There are legacy control devices all over it that don’t speak IPv6, with local DNS records that allow them to be readily accessed while walking around with a mobile device… allowed them to be accessed, that is, until IPv6.

    The Android IPv6 stack ignores the RA advertising my local DNS and also resolves via v6 by default, forwarding local queries upstream and returning no results. Then it doesn’t bother to fall back to v4. Unrooted Android exposes no IPv6 configuration of any sort to modify this behaviour, no hosts file to override, and no other way I can see to fix it. I can’t even disable IPv6 on my phone.

    So to access my local devices from Android I need to use their full IPv4 addresses, or VPN back into my own network… oh wait, the stack is so broken that despite setting DNS in WireGuard, it still tries to resolve through upstream v6 first!

    Apparently recent smart TVs are doing something similar even on IPv4, hard-coded to 1.1.1.1 or 8.8.8.8 to dodge ad blocking, which is plain malicious and ignores all standards…

    So anyways this is why DNS is dragon #3



  • We’re talking about replacing lost content here, though, and as such you can use the streaming services as a “backup” by re-ripping your whole collection if you lose it.

    I’m actually doing this now as part of a library cleanup. Zotify + beets are a great combo to pull down vast quantities of music and properly sort and tag it.

    Then I stream it to my phone in my truck using ampache and ultrasonic, which does have a local buffering option.

    However, if you have some exotics that you ripped from rare discs, demos or prereleases, live recordings with sentimental value, etc., I would suggest keeping those properly backed up. I don’t have many of these, but the ones I do have are backed up both to the cloud and offsite.