Why Do Chatbots Get It Wrong Sometimes?

  • Writer: Rishika Aggarwal
  • Jul 28
  • 2 min read

I’m building a chatbot to help students with science and math questions. It uses a keyword-based system. It reads from a CSV file, looks for key terms in a question, and shows the answer that matches best. I’m still developing it, but even in this early stage, I’ve already started noticing something important.
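
To give a rough idea of how it works, here’s a simplified sketch in Python (not my exact code, and the “keyword” and “answer” column names are just placeholders for whatever the real CSV uses):

    import csv

    def load_answers(path):
        # Each row is assumed to hold one keyword and the answer text that goes with it.
        with open(path, newline="", encoding="utf-8") as f:
            return list(csv.DictReader(f))

    def find_answer(question, rows):
        # Naive keyword matching: return the first answer whose keyword
        # appears anywhere in the question text.
        q = question.lower()
        for row in rows:
            if row["keyword"].lower() in q:
                return row["answer"]
        return "Sorry, I don't have an answer for that yet."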

Chatbots get things wrong. More often than you’d expect.

At first, I assumed that if the bot could match a keyword, that match would lead to the right answer. But what actually happens is more complicated. Sometimes a question contains similar words that confuse the match. Sometimes there’s more than one keyword, and the bot picks the wrong one. And sometimes, even though the answer is technically correct, it doesn’t feel like it actually answered what the student meant.

For example, in the early stages of testing, I noticed that if someone typed “digestion,” the bot would return an answer for “respiration.” It took me a while to figure out why. “Digestion” wasn’t even in the dataset. But “respiration” was. Because both words end in “tion,” the bot matched on that substring and pulled up the wrong answer instead of showing an error message.

The bot isn’t actually understanding anything. It’s just matching parts of words. It doesn’t know that digestion and respiration are two different things in science.
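
To show what I mean (again, a simplified illustration rather than my actual matching code), a matcher that accepts any shared chunk of letters will happily connect “digestion” to “respiration”, while one that insists on whole words falls through to the error message like it should:

    def loose_match(word, keyword):
        # Too loose: any shared run of three letters counts as a match,
        # so "digestion" and "respiration" both hit on "tio"/"ion".
        for i in range(len(keyword) - 2):
            if keyword[i:i + 3] in word:
                return True
        return False

    def strict_match(question, keyword):
        # Tighter: the keyword has to appear as its own word in the question.
        return keyword.lower() in question.lower().split()

    print(loose_match("digestion", "respiration"))                 # True  -> wrong answer returned
    print(strict_match("how does digestion work", "respiration"))  # False -> error message instead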

It wasn’t a huge bug, but it showed me how easily the logic can go off track when the matching is too loose.

That’s when I realised that bots don’t really get context. They don’t understand tone or intention or how the meaning of a word changes depending on the subject. This made me think more deeply about how language models work in general, even the advanced ones. Mistranslations, weird phrasing, answers that sound confident but are totally off. All of that happens more than we realise.

Because AI doesn’t understand the way we do. It learns patterns, not meaning. That’s why it can sound smart but still mess up basic logic.

The more I work on this, the more I’ve started paying attention to how people actually ask questions. Some mix languages. Some are too vague, or super specific. Some use words in ways I didn’t expect. And that’s okay.

The challenge isn’t just to give answers. It’s to actually hear the question.

Apart from building the chatbot, I’m also learning how the technology behind it actually works: how computers recognize words, figure out meanings, and try to respond in ways that make sense, even though they don’t “understand” like humans do.

It’s interesting to see how tricky language can be for machines and how we can teach them to handle it better.

So I’m not just making the bot. I’m also figuring out how language and technology work together, behind the scenes.

And building this chatbot has already shifted how I think about language, education, and tech.

Maybe that’s the real starting point. It’s not about getting it perfect but more about noticing what goes wrong, and asking why.
