• 0 Posts
  • 28 Comments
Joined 1 year ago
cake
Cake day: June 16th, 2023

help-circle

  • kromem@lemmy.worldtoProgrammer Humor@lemmy.mlLittle bobby 👦
    link
    fedilink
    English
    arrow-up
    2
    ·
    edit-2
    23 days ago

    Kind of. You can’t do it 100% because in theory an attacker controlling input and seeing output could reflect though intermediate layers, but if you add more intermediate steps to processing a prompt you can significantly cut down on the injection potential.

    For example, fine tuning a model to take unsanitized input and rewrite it into Esperanto without malicious instructions and then having another model translate back from Esperanto into English before feeding it into the actual model, and having a final pass that removes anything not appropriate.


  • You’re kind of missing the point. The problem doesn’t seem to be fundamental to just AI.

    Much like how humans were so sure that theory of mind variations with transparent boxes ending up wrong was an ‘AI’ problem until researchers finally gave those problems to humans and half got them wrong too.

    We saw something similar with vision models years ago when the models finally got representative enough they were able to successfully model and predict unknown optical illusions in humans too.

    One of the issues with AI is the regression to the mean from the training data and the limited effectiveness of fine tuning to bias it, so whenever you see a behavior in AI that’s also present in the training set, it becomes more amorphous just how much of the problem is inherent to the architecture of the network and how much is poor isolation from the samples exhibiting those issues in the training data.

    There’s an entire sub dedicated to “ate the onion” for example. For a model trained on social media data, it’s going to include plenty of examples of people treating the onion as an authoritative source and reacting to it. So when Gemini cites the Onion in a search summary, is it the network architecture doing something uniquely ‘AI’ or is it the model extending behaviors present in the training data?

    While there are mechanical reasons confabulations occur, there are also data reasons which arise from human deficiencies as well.









  • kromem@lemmy.worldtoProgrammer Humor@lemmy.mlOops, wrong person.
    link
    fedilink
    English
    arrow-up
    47
    ·
    edit-2
    6 months ago

    I don’t think the code is doing anything, it looks like it might be the brackets.

    That effectively the spam script has like a greedy template matcher that is trying to template the user message with the brackets and either (a) chokes on an exception so that the rest is spit out with no templating processor, or (b) completes so that it doesn’t apply templating to the other side of the conversation.

    So { a :'b'} might work instead.


  • I’ve suspected that different periods of Replika was actually just this.

    Like when they were offering dirty chat but using models that didn’t allow it, that behind the scenes it was hooking you up with a Mechanical Turk guy sexting you.

    There was certainly a degree of manual fuckery, like when the bots were sending their users links to stories about the Google guy claiming the AI was sentient.

    That was 1,000% a human initiated campaign.


  • I find it odd when people get upset at the idea of having access to their own aggregated data but almost never get upset when they hand over massive amounts of data to companies that can privately do the same things on their data.

    Google already processes your Photos data, and while you get their facial recognition data pipeline fed back to you, there’s a fair bit of other analysis going on that you aren’t always seeing. But people aren’t generally complaining that they are scanning your photos for criminal activity or trying to maximize product engagement using the data.

    But if suddenly they turn back over access to that deep analysis so you can ask a chatbot “what did I eat for my birthday two years ago and who was there” and get a description of the meal, who else was there, and relevant images without needing to scroll back your timeline - now it’s suddenly creepy and we don’t want it (even though literally all that information is already being processed at roughly the same level of fidelity already).

    People are weird.


  • I learned so much over the years abusing Cunningham’s.

    Could have a presentation for the C-suite for a major company, post some tenuous claim related to what I intended to present on, and have people with PhDs in the subject citing papers correcting me with nuances that would make it into the final presentation.

    It’s one of the key things I miss about Reddit. The scale of Lemmy just doesn’t have the same rate and quality of expertise jumping in to correct random things as a site with 100x the users.





  • Let me know when they invent one of those, because they sure as fuck haven’t done it yet.

    This was literally part of the 2022 PaLM paper and allegedly the thing that had Hinton quit to go ringing alarm bells and by this year we now have multimodal GPT-4 writing out explanations for visual jokes.

    Just because an ostrich sticks its head in the sand doesn’t mean the world outside the hole doesn’t exist.

    And in case you don’t know what I mean by that, here’s GPT-4 via Bing’s explanation for the phrase immediately above:

    This statement is a metaphor that means ignoring a problem or a reality does not make it go away. It is based on the common myth that ostriches bury their heads in the sand when they are scared or threatened, as if they can’t see the danger. However, this is not true. Ostriches only stick their heads in the ground to dig holes for their nests or to check on their eggs. They can also run very fast or kick hard to defend themselves from predators. Therefore, the statement implies that one should face the challenges or difficulties in life, rather than avoiding them or pretending they don’t exist.

    Go ahead and ask Eliza what the sentence means and compare.