
Nick Byrd, Ph.D.

Overheard at a conference:

Speaker: "I hear neurologists prefer we say that generative AI systems 'confabulate' and not that they 'hallucinate'."

Neurologist [shouting from the back of the room]: "CORRECT!"

@ByrdNick

I first heard from @anilkseth that:
AI models confabulate fake outputs,
while human brains hallucinate reality ;)

My take on better terminology than "hallucinate":
AI defects or limitations

@KevinMarks @ByrdNick that's a great list of toots. I like this one:
"This is my deeper worry about AI - as we demand that the AI justify its decision to us, we create an incentive model for confabulation, just as we have for natural intelligence. Personalised reassuring salesmanship and gaslighting."

Especially: "Personalised reassuring salesmanship and gaslighting" Ouch...

@KevinMarks

@ByrdNick

I was familiar with the concept, so when I saw LLMs do it, that immediately struck me. But it looks like you were predicting something like this years in advance.

I see some of this in the context of machine understandability, with plausible yet false explanations for other processes. That would almost be nice, as I suspect a lot of human speech is a bit like that. LLMs don't typically do that - they just explain themselves badly.

@ByrdNick Or just "bullshit" which has fewer syllables and means more to more people.

@Kristofferabild @ByrdNick @nerdd i guess you could argue the intent is to fool the user into thinking it's an accurate answer but i think it's different

@peterbutler @ByrdNick @nerdd Well, a big problem with LLMs is that they are designed to lie, and you have no way of telling when one is lying to you, because it doesn't know. So I guess it comes down to semantics, but I would still call it lying. 🙂

@Kristofferabild @peterbutler @nerdd

Again, I recommend Alex Wiegmann’s and colleagues’ research on how people actually apply the concept of lying.

It shows that an intent to deceive is often not necessary for people to label something a lie.

Moreover, people often label true statements as lies if they imply something false.

Wiegmann et al. have already pre-registered experiments about how people think about lying in the context of AI systems; follow Wiegmann on Google Scholar to learn what they find (when the results are published):

nerdculture.de/@ByrdNick/11425

@ByrdNick
I think they blather and bullshit.

@ByrdNick You don't want to be a sick child destined to be treated at Boston Children's Hospital.
Doctors deliberately fail to understand what it means to keep data private.
They use Zoom/MS Teams meetings for exchange, showing patient data, pictures, radiology material, lab values - you name it. They send unencrypted mails with patient data - to Google, Apple, etc. mail servers. They use WhatsApp...
Most of them do not understand IT security the way they should! Beware!

@ByrdNick Or, as others say, they're "bullshitting", as they make things up and are very convinced they are actually true...

@ligasser

Right. That's all part of the thread I linked to on X (in reply to the earlier comment).

@ByrdNick @Mabande I would say that SALAMIs always confabulate. Always. Occasionally, those confabulations correspond to reality, but that is merely a statistical fluke.