I first heard from @anilkseth that:
AI models confabulate fake outputs,
while human brains hallucinate reality ;)
My suggestion for better terminology than "hallucinate":
AI defects or limitations
@ByrdNick indeed, I've been saying this for a while now (I'm not a neurologist though) https://www.kevinmarks.com/confabulation.html
@KevinMarks @ByrdNick that's a great list of toots. I like this one:
"This is my deeper worry about AI - as we demand that the AI justify its decision to us, we create an incentive model for confabulation, just as we have for natural intelligence. Personalised reassuring salesmanship and gaslighting."
Especially: "Personalised reassuring salesmanship and gaslighting" Ouch...
I was familiar with the concept, so when I saw LLMs do it, that immediately struck me. But it looks like you were predicting something like this years in advance.
I see some of this in the context of machine understandability, with plausible yet false explanations for other processes. That would almost be nice, as I suspect a lot of human speech is a bit like that. LLMs don't typically do that - they just explain themselves badly.
@ByrdNick Or just "bullshit", which has fewer syllables and means more to more people.
I’ve argued that language models often function like bullshitters, but people who publish on the topic of bullshit didn’t buy it.
The thread, in case you want to see their objections: https://x.com/byrd_nick/status/1773757919353286664
@ByrdNick I think that @ct_bergstrom and Jevin West have been quite eloquent on this. I'd love to read the discussion but am reluctant to go back to the dreaded X. (Oops - forgot the link: https://thebullshitmachines.com/index.html)
Research by Alex Wiegmann et al. suggests that the concept of 'lie' employed by people in many cultures often involves psychological capacities and nuances that probably do not apply to language models.
@Kristofferabild @ByrdNick @nerdd "lie" generally signifies an intent
@Kristofferabild @ByrdNick @nerdd i guess you could argue the intent is to fool the user into thinking it's an accurate answer but i think it's different
@peterbutler @ByrdNick @nerdd Well, a big problem with LLMs is that they are designed to lie, and you have no way of telling when they are lying to you, because they don't know. So I guess it comes down to semantics, but I would still call it lying.
@Kristofferabild @peterbutler @nerdd
Again, I recommend Alex Wiegmann’s and colleagues’ research on how people actually apply the concept of lying.
It shows that an intent to deceive is often not necessary for people to label something a lie.
Moreover, people often label true statements as lies if they imply something false.
Wiegmann et al. have already pre-registered experiments about how people think about lying in the context of AI systems; follow Wiegmann on Google Scholar to learn what they find (when the results are published).
@ByrdNick
I think they blather and bullshit.
@ByrdNick You don't want to be a sick child destined to be treated at Boston Children's Hospital.
Doctors willfully fail to understand what it means to keep data private.
They use Zoom/MS Teams meetings for exchanges, showing patient data, pictures, radiology material, lab values - you name it. They send unencrypted emails with patient data - to Google, Apple, etc. mail servers. They use WhatsApp...
They mostly do not understand IT security the way they should! Beware!
@ByrdNick Or, as others say, they're "bullshitting": they make things up and seem very convinced that they're actually true...
@ByrdNick There is even a scientific article on it:
https://link.springer.com/article/10.1007/s10676-024-09775-5
Right. That's all part of the thread I linked to on X (in reply to the earlier comment).