  • Don’t even need to make it about code. I once asked what a term meant on a certain well-known FOSS application’s benchmarks page. It gave me a lot of unrelated garbage because it made an assumption about the term, exactly the assumption I was trying to avoid. I tried to steer it away from that, and it failed to say anything coherent, then looped back and gave that initial attempt as the answer again. I was stuck, unable to stop it from hallucinating.

    How? Why?

    Basically, it was information you could only find by looking at the GitHub code, and it was pretty straightforward - but the LLM sees “benchmark” and it must therefore make a bajillion assumptions.

    Even if asked not to.

    I have a conclusion to draw. It does do the code thing too, and it is directly related. I once asked it about a library, and it found a post where someone was ASKING whether XYZ was what a piece of code was for - and it gave that out as if it were the answer. It wasn’t. And this is the root of the problem:

    AIs never say “I don’t know”.

    It must ALWAYS know. It must ALWAYS assume something, anything, because not knowing is a crime and it won’t commit it.

    And that makes them shit.