Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.

Spent many years on Reddit and is now exploring new vistas in social media.

  • 0 Posts
  • 57 Comments
Joined 1 year ago
Cake day: June 9th, 2023


  • When you ask an LLM to write some prose, you could ask it “I’d like a Pulitzer-prize-winning description of two snails mating,” or you could ask it “I want the trashiest piece of garbage smut you can write about two snails mating.” Or even “rewrite this description of two snails mating to be less trashy and smutty.” For the LLM to be able to give the user what they want, it needs to know what a “trashy piece of garbage smut” is. Negative examples are still very useful for LLM training.
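    To make that concrete, here’s a minimal sketch (in Python) of what one record in a preference-style fine-tuning dataset might look like. The field names are illustrative, not any particular library’s schema; the “rejected” side is exactly the kind of negative example that teaches the model what “trashy” means so it can produce, avoid, or rewrite it on request.

    ```python
    # One hypothetical record in a preference-pair dataset
    # (field names are illustrative, not a real library's schema).
    record = {
        "prompt": "Describe two snails mating.",
        "chosen": "A Pulitzer-calibre passage...",
        "rejected": "The trashiest garbage smut...",
    }

    def is_valid_record(r):
        """Minimal validation: a usable pair needs all three fields non-empty."""
        return all(r.get(k) for k in ("prompt", "chosen", "rejected"))
    ```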




  • FaceDeer@kbin.social to Linux@lemmy.ml · My First Regular Expressions
    6 months ago

    Just to chip in because I haven’t seen it mentioned yet, but I find LLMs like ChatGPT or Microsoft Copilot are really good at writing regexes and also at explaining them. So if you’re learning them, or just want to get the darned thing to work so you can go to bed, they’re a good resource.
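    For illustration, this is the sort of small, commented regex such a tool will typically hand back, along with a breakdown of each part (sketched here in Python; the date-matching task is just an assumed example):

    ```python
    import re

    # Matches an ISO-8601-style date (YYYY-MM-DD) and captures the parts:
    #   \b        word boundary, so we don't match inside longer numbers
    #   (\d{4})   four-digit year
    #   -         literal hyphen separators
    #   (\d{2})   two-digit month, then two-digit day
    DATE_RE = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

    def extract_dates(text):
        """Return (year, month, day) tuples for every date found in text."""
        return DATE_RE.findall(text)
    ```

    Having the tool explain each token like the comments above is where it earns its keep when you’re learning.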


  • That’s not been my experience. It’ll tend to be agreeable when I suggest architecture changes, or if I insist on some particular suboptimal design element, but if I tell it “this bit here isn’t working” when it clearly isn’t the real problem I’ve had it disagree with me and tell me what it thinks the bug is really caused by.


  • It’s not actually meaningless. It means “I did test this and it did work under certain conditions.” So maybe if you can determine what conditions are different on the customer’s machine that’ll give you a clue as to what happened.

    The most obscure bug I ever created ended up being something that would work just fine on any machine that had at any point had Visual Studio 2013 installed, even if it had since been uninstalled (uninstalling left behind the library that my code change had introduced a hidden dependency on). It would only fail on a machine that had never had Visual Studio 2013 installed. This was quite a few years back, so the computers throughout the company had mostly had 2013 installed at some point; only brand-new ones that hadn’t been used for much would crash when they happened to touch my code. That was a fun one to figure out, and the list of “works on this machine” vs. “doesn’t work on that machine” was useful.
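    A quick way to triage that kind of “works here, not there” report is to probe each machine for the suspect library and diff the results. A minimal sketch, assuming the culprit is the VS 2013 C runtime (msvcr120 is my guess at the library name for that toolset; substitute whatever you suspect):

    ```python
    import ctypes.util

    def has_library(name):
        """Report whether a shared library is findable on this machine.

        ctypes.util.find_library returns a path/soname string when the
        library can be located, or None when it can't.
        """
        return ctypes.util.find_library(name) is not None

    def report(names=("msvcr120",)):
        """Run on both the working and failing machines, then compare."""
        return {name: has_library(name) for name in names}
    ```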



  • Well, I also am “pretty good” at getting the science right when I write sci fi. Makes me just as qualified as them, I guess.

    The problem remains that the overriding goal of a sci fi author remains selling sci fi books, which requires telling a gripping story. It’s much easier to tell a gripping story when something has gone wrong and the heroes are faced with the fallout, rather than a story in which everything’s going fine and the revolutionary new tech doesn’t have any hidden downsides to cause them difficulties. Even when you’re writing “hard” science fiction you need to do that.

    And frankly, much of Asimov, Clarke and Heinlein’s output was very far from being “hard” science fiction.


  • Ironically, one of the reasons AI imagery doesn’t have voluntary tagging is because of the anti-AI sentiment. It’s resulted in anti-AI witchhunts and abuse. So why should people flag themselves and paint a target on their backs like that? Since voluntary tagging is basically the only way to know if something’s AI generated, the extreme anti-AI folks have dug their own grave on this one.

    PS, my avatar is AI-generated imagery.







  • Autonomous weaponized drones are useful for fighting wars more effectively, and with fewer lives placed at risk than with manned platforms. You may not like that wars are fought, but they will be fought regardless. Drones solve problems that arise in war-fighting.

    Likewise, mass surveillance solves problems faced by intelligence agencies. It’s also useful for things like marketing studies, medical studies, all kinds of such things. And again, you may not like some of these problems being solved, but they’re real-world problems that are being solved.

    Nuclear weapons have kept the world’s superpowers at bay from each other. They’ve stopped “world wars” from happening. They don’t stop all wars from happening, but there haven’t been any major direct clashes between nuclear-armed powers since their invention.

    Those metaverses and reality TV shows are entertainment. They are aimed at entertaining people.

    MoviePass’ ad system is an effort to monetize entertainment, allowing for more to be made.

    AI facsimiles of dead relatives are for psychological purposes - helping people work through grief, helping people relive fond memories, providing emotional support, and so forth.

    There you go, real-world problems they’re all there to solve. And none of them are dystopic nightmares as depicted by the science fiction scenarios you listed, which is the main point I’m making here.

    Science fiction authors got their predictions wrong. They spun nightmare scenarios because that’s what makes for compelling drama and increased sales of their books or shows. They’re not good bases for real-world decision-making because they’re biased in incorrect directions.



  • We have all of those things and the dystopic predictions of the authors who predicted them haven’t come remotely true. All of these examples prove my point.

    We have autonomous weaponized drones and they aren’t running around massacring humanity like the Terminator depicted. Frankly, I’d trust them to obey the Geneva Conventions more thoroughly than human soldiers usually do.

    We have had mass surveillance for decades, Snowden revealed that, and there’s no totalitarian global state as depicted in 1984.

    We’ve had nuclear weapons for almost 80 years now and they were only used in anger twice, at the very beginning of that. A good case can be made that nuclear weapons kept the world at large-scale peace for much of that period.

    Various companies have made attempts at “Corporate controlled hypercommercialized microtransaction-filled metaverses” over the years and they have generally failed because nobody wanted them and freer alternatives exist. No need to ban anything.

    Netflix’s Squid Game is not a “real-life” Squid Game. Did you watch Squid Game? That was a private spectacle for the benefit of ultra-wealthy elites and people died in them. Deliberately and in large quantities. Netflix is just making a dumb TV show. Do you really think they’d benefit from massacring the contestants?

    “MoviePass to track people’s eyes through their phone’s cameras to make sure they don’t look away from ads” - ok, let’s see how long that lasts when there are competitors that don’t do that.

    “Soulless AI facsimile of dead relatives” - firstly, please show me a method for determining the presence or absence of a soul. Secondly, show me why these facsimiles are inherently “bad” somehow. People keep photographs of their dead loved ones, if that makes you uncomfortable then don’t keep one.

    Each and every one of these technologies was depicted in fiction in over-the-top, unrealistic ways that emphasized their bad aspects. In reality none of them have matched those depictions to any significant degree. That’s my whole point here.


  • Nope. Isaac Asimov was a biochemist; why would he be particularly qualified to determine whether robots are safe? Arthur C. Clarke had a bachelor’s degree in mathematics and physics; which technology was he an expert in? Heinlein earned the equivalent of a bachelor of arts in engineering from the US Naval Academy, which is the closest yet to having an “understanding of technology.” Which of those technologies did he write about?