I think that makes sense. I am 100% a layman with this stuff, but if the “AI” is just predicting what should be said by studying things humans have written, then it makes sense that actual people were more likely to give serious, solid answers when the asker is putting forth (relatively) heavy stakes.
Who knew that training in carpet salesmanship would help with a job as a prompt engineer.
Yep, exactly that. A fascinating side effect is that models become better at logic when you tell them to talk like a Vulcan.
Hmm… It’s only logical.