• 0 Posts
  • 15 Comments
Joined 6 months ago
Cake day: May 19th, 2024




  • How about something autonomous that makes choices of its own volition, and performs long-term learning that influences the choices it makes, just as a flat benchmark.

    LLMs don’t qualify: they’re trained once, retain information only within a conversation, then forget it when the conversation is closed. They don’t do any long-term learning after their initial training, so they’re forever trapped regurgitating within the parameters set by whatever training data existed at the time they were trained.

    That’s just a very fancy way to search and read out the training data. Definitely not an active intelligence in there.

    They also don’t have any autonomy: they’re not active of their own accord when they’re not being addressed. They’re not sitting there thinking, so they have no internal personal landscape of thought, no place in which a private intelligence could be at play.

    They’re inert.
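    The "forgets after the conversation" point can be sketched in a few lines. This is a hypothetical stand-in, not any real API: the point is that the model's weights are frozen after training, so the only "memory" is the conversation history passed back in with every call.

    ```python
    # Frozen parameters: fixed at training time, never updated afterwards.
    FROZEN_WEIGHTS = {"default_reply": "hello"}

    def generate(history: list[str]) -> str:
        """Stand-in for a stateless model call: the output depends only on
        the frozen weights plus whatever context is passed in right now."""
        if any("my name is Ada" in turn for turn in history):
            return "Hi Ada"  # it can use in-context information...
        return FROZEN_WEIGHTS["default_reply"]

    # Within one conversation, the history carries the information:
    session = ["my name is Ada"]
    assert generate(session) == "Hi Ada"

    # A new conversation starts with an empty history: nothing was learned,
    # because FROZEN_WEIGHTS never changed.
    assert generate([]) == "hello"
    ```

    Nothing in `generate` mutates state between calls, which is the sense in which such a system does no long-term learning.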



  • Oof, programmers calling LLMs “AI” - that’s embarrassing. Glorified text generators don’t need ethics. What’s the risk? Making the Internet’s worst texts available? Who cares.

    I’m from an era when The Anarchist Cookbook and the Unabomber’s manifesto were both widely available - and I’m betting they still are.

    There’s no obligation to protect people from “dangerous text” - there might be an obligation to allow people access to it, though.