
Hallucinations - A most human trait

  • Writer: Vinay Payyapilly
  • 2 min read

It's interesting how GenAI's hallucinations are often brought up as an argument against using it. But more than anything else, hallucinations are GenAI's most human trait.

From a LinkedIn post, I learned that a friend of mine was in town. He hadn't told me he would be visiting, and he didn't meet me either. Hurt, I assumed he didn't want to keep in touch. Later, I learned he had been in town for an interview that involved multiple rounds. He hadn't wanted to keep me on hold, so he never mentioned it, planning to reach out if he found the time. He never did: he cleared every round, which left him just enough time to get to the airport for his flight back. My assumption was a hallucination.

Humans do this all the time. We make assumptions about another person's intentions from their actions.

A friend of mine passionately described to me how women in Kerala were once taxed to be able to cover their breasts. For her, it was yet another example of the sexualization of the female form. Her assumption was stitched together from disparate bits of data: there was something called a breast tax; women in Kerala didn't cover their breasts; men are always looking at women's breasts. She put these together to reach a plausible conclusion: she hallucinated. The breast tax had nothing to do with sexuality or a lecherous patriarchy.

A political leader looked out of his car window and was confronted by the sight of people defecating along the side of the road. He ordered that toilets be built for everyone. He put together the bits of data he had: people who defecate in the open are poor, and the poor can't build toilets for themselves. From that, he arrived at his solution: build toilets for everyone. The solution was a hallucination.

Our mistakes are called assumptions. GenAI's mistakes are called hallucinations.

Just like humans, GenAI is built such that it is compelled to give an answer even when the answer is unverified.

We have seen how GenAI can look at historical data and arrive at completely wrong answers, such as predicting that the best person for a role would be white, male, evangelical, and in his mid-thirties. I once asked ChatGPT what it knew about my brother, a priest. It confidently mixed fact with assumption and told me that my brother had written books even my brother didn't know he'd written.

The real danger with GenAI's answers is the same as the danger from WhatsApp forwards: they sound confident and researched, so we tend to believe them, especially when they reinforce our personally held beliefs on the matter.

We often tell each other to verify our assumptions against reliable sources. It may soon come to pass that GenAI will be trained to check its responses against verified sources, and, when it gives an unverified response, to flag that the response is an assumption rather than a verified fact.

The best way to treat GenAI is how you treat that friend who is always making tall claims that seem too good to be true. Verify!


