I agree that confabulation/hallucination/lying is a huge problem with LLMs like ChatGPT, Bard etc

But I think a lot of people are underestimating how difficult it is to establish "truth" around most topics

High quality news publications have journalists, editors and fact checkers with robust editorial processes... and errors still frequently slip through

Expecting an LLM to perfectly automate that fact checking process just doesn't seem realistic to me

What does feel realistic is training these models to be MUCH better at providing useful indications as to their confidence levels

The impact of these problems could be greatly reduced if we could somehow counteract the incredibly convincing way these confabulations are presented
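To illustrate the kind of signal I'm talking about: some APIs already expose per-token log probabilities (OpenAI's chat completions endpoint does, via the logprobs option). Here's a minimal Python sketch that flags tokens the model assigned low probability to. The 0.6 threshold is an arbitrary assumption for illustration, and per-token probability is only a crude proxy for factual confidence, but it's the kind of raw material a better UI could build on:

```python
import math
from openai import OpenAI  # assumes the openai>=1.0 Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Who invented the telescope?"}],
    logprobs=True,  # return per-token log probabilities
)

# Surface tokens the model itself assigned low probability to.
# The 0.6 cutoff is an arbitrary illustration, not a calibrated
# confidence measure - token probability is not factual accuracy.
LOW_CONFIDENCE = 0.6
for token_info in response.choices[0].logprobs.content:
    prob = math.exp(token_info.logprob)
    marker = "  <-- uncertain" if prob < LOW_CONFIDENCE else ""
    print(f"{token_info.token!r}: p={prob:.2f}{marker}")
```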

I also think there's a lot of room for improvement here in the way the UI presents this output, independent of the models themselves

@simon but can we also please reflect on how inconsistent it is to ban a technology because some people fail to understand the disclaimer that LLMs sometimes generate false information, while not banning, I don't know, Facebook or YouTube, which carry no warning that much of their content spreads misinformation.

But also the notion of "banning" technology without a public hearing is a bit cringe, to say the least. Kinda anti-democratic and censor-y.

I really appreciate how moderate your response is to the ongoing fearmongering, I'll keep being more direct because freedoms are at stake.
