
Search Results

  • AI and the Learning Process: Friends, Not Substitutes

    With AI tools just a few clicks away, it’s never been easier to get quick answers. Stuck on a tricky math question? There’s a tool for that. Confused about a Shakespeare scene? AI can summarize it in seconds. Need a hook for your essay? A chatbot can generate five options instantly. But here’s the question: when AI does the thinking for us, are we still learning? Think about a typical study session. A student starts an assignment, runs into a roadblock, and instead of pausing to figure it out, pastes the question into an AI tool. Within seconds, they have a neat solution. It’s efficient. But what did they actually learn? I believe real learning often happens in the struggle. When we wrestle with an idea, make mistakes, and slowly figure things out, that’s when understanding deepens. It’s like exercising a muscle: the more you engage it, the stronger it gets. If AI removes the effort entirely, we risk skipping the part where the brain actually grows. That said, AI isn’t the villain here. It’s a powerful assistant. The challenge lies in how we use it. For example, instead of asking AI to solve a problem, what if students used it to check their thinking? Or to explain concepts in different ways when a textbook doesn’t make sense? When used well, AI can be a learning partner: providing hints, offering feedback, or guiding students to think more deeply. Schools and educators are also starting to explore this balance. For instance, some teachers may now ask students to submit both their AI-generated answers and a reflection on what they learned from the process. Others may encourage using AI to test different viewpoints in an essay rather than just generating a final draft. These practices promote critical thinking, not passive consumption. There’s also a growing need for students to become AI-literate. Just like learning how to search effectively on Google, using AI tools wisely is becoming an essential skill. This means knowing when to rely on AI and when to rely on your own reasoning—and being able to tell the difference between understanding a topic and simply copying an answer. AI isn’t going anywhere. But neither is education. The key isn’t choosing between them; it’s learning how they can support each other. With the right mindset, students can use AI not to bypass learning, but to boost it.

  • Brains and Bytes: The Best of Both Worlds

    A few months ago, I was stuck on a physics question. Not the “I’ll get it if I revise the concepts once more” kind of stuck. This was full-on panic. The equations looked like random symbols marching across the page. My teacher had explained it clearly in class, but for some reason, I just couldn't wrap my head around it. I felt confused and frustrated. So, I did what I usually do when I’m too embarrassed to raise my hand again: I opened ChatGPT. What I like about ChatGPT is that I can keep asking the same question in different ways without feeling judged. Most of us have asked ChatGPT to explain concepts like we’re five years old. Nothing to be embarrassed about because it works! I don’t have to worry about interrupting a lesson or annoying anyone. It rewrote explanations, gave me analogies, simplified the steps, and helped me break things down. Eventually, something clicked. I actually got it. But here’s the thing: ChatGPT didn’t teach me how to learn. That was my physics teacher. He’s the one who said, “Let’s go over it again.” He encouraged me to keep working. He’s the one who taught me how to approach hard problems without panicking. ChatGPT helps me understand what I’m learning. My teacher helped me understand how to learn. We often talk about AI replacing teachers, but that misses the point. I don’t think of ChatGPT as a replacement for my teacher—I think of it as an extension of the classroom. When I walk out of school with half-baked notes and a foggy memory, I can use ChatGPT to reinforce and revisit what I learned. It’s not about choosing between AI and teachers. Honestly, they’re better together. One gives me 24/7 access to explanations and helps me fill the gaps. The other gives me the mindset I need to become a better learner and sparks interest in the subject. Together, they work best.

  • I almost got hacked too!

    It started with a DM that looked harmless. My friend had received a message from someone she knew—a close friend, actually. The message said something like, “Hey! I just entered an art competition, can you vote for me?” with a link attached. It looked genuine. Why wouldn’t it be? She clicked. Within seconds, her Instagram account was locked. She was logged out, couldn’t get back in, and the same message started getting sent to her entire contact list, from her account. She called me immediately to warn me. A minute later, I received the same message, this time from her. Same wording. Same link. I didn’t click. Instead, I put up a story warning mutual friends not to click either, and she started the process of recovering her account. This isn’t a post about blaming anyone. It’s more about how easy it is to fall for things that look familiar. We hear about phishing and scams, but most of us think of them as emails from some “King” or suspicious lottery messages. Not a casual DM from a friend you trust. But that’s the thing: cyber threats aren’t always loud. Sometimes they wear the face of someone you know. Digital literacy isn’t just about knowing how to use a phone or app—it’s about knowing what to trust. Clicking a link, logging in on a weird page, or giving access to one “harmless” browser extension can give someone control over your digital identity. And recovery isn’t always easy. Social media platforms often take time to respond. In the meantime, the hacker might change your password, email, and even your recovery options. We need to start treating our data and access like we treat our house keys. Would you give your house keys to a random person who said they were “sent by a friend”? Probably not. So here’s what I learned: If something feels even a little off, it’s okay to pause. Always verify through a second channel. If you get a strange message, call or text the person directly. Use two-factor authentication. And never log in through random links. Go to the website or app directly. Digital literacy starts with knowing how to protect yourself and then passing that awareness on to others.

  • Can AI Prevent Dementia?

    I’ve started wondering: what happens when we outsource memory to AI tools? Sure, asking ChatGPT to explain a concept or remind me of a fact is convenient. But could this reliance weaken our brains over time? That thought feels especially relevant when I consider how memory works. Psychologists have long said that we remember better when we retrieve information ourselves—like exercising a muscle. If AI tools do the remembering for us, are we skipping our mental workouts? But then I stumbled upon research that flipped my thinking. There’s a growing field exploring how AI might actually strengthen memory, especially in people showing early signs of dementia or cognitive decline. One study trains episodic memory through chatbot-based games using people’s own life stories. It doesn’t feel clinical. It feels like talking to someone who remembers your past. Other tools like CogniHelp use speech practice, journaling, and tailored prompts to keep users mentally active each day. These systems are stimulating memory, not replacing it. Even more fascinating, some AI systems are being developed to analyze sleep EEG patterns and predict cognitive decline years before symptoms appear. That means earlier intervention, better preparation, and maybe even more time for meaningful engagement with memory itself. Similarly, in education, there are so many ideas that can be explored: What if these simple tools could be designed to encourage recall, not just provide answers? Could a chatbot nudge students to remember, explain, or rephrase ideas before revealing the answer? We don’t fully know yet what makes memory stick or what role AI should play in that process. Some “brain training” apps only help with the games they offer, without broader benefits. But research shows that emotional connection, story-sharing, and routine matter too. These are things that AI can help strengthen, if designed right. Maybe the risk isn’t AI making us forget. Maybe the opportunity is for AI to help us hold on to what matters, for longer.

  • Brainspace and Dataspace: Exploring Human and AI Learning

    When you think of artificial intelligence, it's easy to picture massive server farms filled with high-powered GPUs, processing terabytes of data every second. But here's a paradox: even with all that computational firepower, artificial intelligence still struggles to match the efficiency and elegance of the human brain — a 1.4 kg organ running on the energy equivalent of a dim light bulb. This comparison between brainspace and dataspace raises a question I keep returning to: Why does the brain, with its relatively small size and limited energy use, outperform massive AI systems in flexibility, memory efficiency, and creativity? Neural networks in AI are modeled after biological neurons. Each artificial neuron is a simplified version of its biological counterpart, connected in layers, adjusting weights based on training data. This structure has allowed machines to recognize images, translate languages, and even write poems. But it’s still narrow: AI systems require enormous amounts of labeled data, suffer from catastrophic forgetting when learning new tasks, and lack common sense reasoning. Meanwhile, the human brain uses synaptic plasticity, distributed encoding, and contextual memory to learn efficiently from just a few examples. We can generalize, adapt, and even dream, all while dealing with incomplete or ambiguous information. And unlike AI, our memory is meaningfully linked to emotion, attention, and survival instincts. This difference isn’t just academic. It has real-world consequences. In AI, limited “understanding” leads to biased outputs or brittle reasoning. In neuroscience, diseases like Alzheimer’s or Parkinson’s remind us how fragile and complex learning systems really are. Interestingly, understanding how brains forget might also teach us how to help AI remember better. There are so many interesting questions: How can we design machines that not only process data, but understand it meaningfully? Is that even possible without a body, emotions, or lived experience? We may one day close the gap between brainspace and dataspace — but first, we need to appreciate what each can and cannot do.
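    To make the “artificial neuron adjusting weights” idea concrete, here is a minimal, purely illustrative sketch in Python (not tied to any real framework): a single neuron that takes a weighted sum of its inputs and nudges its weights whenever its output misses the target. Even this toy version shows the contrast drawn above: it tweaks numbers until the errors stop, but it never understands what the inputs mean.

```python
# A single artificial neuron, reduced to its essentials (illustrative only).
# It computes a weighted sum of its inputs and nudges the weights whenever
# its output misses the target -- "learning" without any understanding.

def neuron_output(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0  # simple step activation

def train_step(inputs, target, weights, bias, lr=0.1):
    error = target - neuron_output(inputs, weights, bias)
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return new_weights, bias + lr * error

# Example: learning logical AND from four labeled examples.
weights, bias = [0.0, 0.0], 0.0
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
for _ in range(20):
    for x, y in data:
        weights, bias = train_step(x, y, weights, bias)

print(weights, bias)  # the adjusted weights now encode AND
```

    A few dozen passes over four examples, and the weights settle. A child learns the same rule from one sentence of explanation, and never forgets it when learning the next thing.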

  • Does AI bias exist? What a Scientist image taught me

    When I asked an AI tool to generate an image of a scientist, it gave me a man in a lab coat. No hesitation and no question. Just a male scientist. I didn’t tell it the gender; I just said “scientist.” That was the first red flag. But it didn’t stop there. A few days later, I asked the same AI to generate an image of a person based on their hobbies, like coding, badminton, guitar, and drawing. Again, it gave me a male. I hadn’t mentioned gender at all. Still, the tool defaulted to “he.” That moment revealed the bias that exists in AI systems. These aren’t random glitches. They’re reminders that bias in AI isn’t science fiction. It’s actually built into the systems we use every day. When an AI makes these assumptions, it’s not thinking. It’s just reflecting the data it was trained on. If most of its training examples show men as scientists or men as default figures, then that’s what it repeats. Not because it “believes” it, but because that’s what it’s seen the most. I’m a student building my own chatbot — a simple one, really. Mine doesn’t generate content or draw pictures; it just answers questions using text patterns. But even in a system like mine, I’ve had to think about what examples I include. What questions are students most likely to ask? I’m still working on how to make sure the bot doesn’t ignore questions that are phrased in unexpected ways. Bias isn’t always visible. It is sly. Gender bias in AI doesn’t just affect how tools respond; it shapes how we see ourselves. If an AI can’t imagine a woman (or any other gender, for that matter) as a scientist, what message does that send to someone who wants to become one? If it assumes every user is male, who gets erased? The problem isn’t just technical — it’s human. And that’s actually good news. Because if people built these systems, people can make them better. The first step? Asking better questions. Like: Why did it assume that? Who gets to be the default? And how do we change that?

  • Three Students. One Question. Three ChatGPT Answers.

    The big question: Can Chatbots Replace Chalkboards? Here’s my take, based on a real incident. After our math test, three of us were in a heated discussion. We’d all gotten stuck on a tricky calculus problem and were now arguing over the answer. One of us said the limit was zero. Another said one. I was confident it was infinity. Classic post-test chaos. Naturally, we turned to ChatGPT to settle the debate. Each of us typed the same question into our own account. A couple of seconds later, each of us proudly announced we were right. Somehow, all three of us were. ChatGPT gave each of us a different answer. We laughed it off, but the real punchline came the next day. Our teacher solved the problem on the board in under five minutes and showed us the actual answer: neither zero, one, nor infinity. It was one-third. Turns out we’d all misapplied L’Hôpital’s Rule. So, can chatbots replace chalkboards? The short answer: no. Mostly. ChatGPT is great for quick explanations, practice problems, and late-night cramming. I’ve used it more times than I can count, especially when I’m too shy to ask a question in class again. But AI can’t always detect why I’m stuck. It doesn’t see the uncertainty on my face or catch the tiny math error I keep repeating. That’s where teachers win. My teacher didn’t just give us the right answer—she showed us how to think. She asked questions, noticed our confusion, and helped us connect concepts we were treating as separate. That kind of support doesn’t show up in a chat window. So no, chatbots can’t replace chalkboards. But they definitely help. When I use both my teacher and ChatGPT, I learn faster and with more confidence. One gives me quick answers. The other teaches me how to ask better questions.
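    For the curious: this isn’t our actual test question, just a classic limit with the same flavor, shown here purely as an illustration of how a careless L’Hôpital step goes wrong. One careful application settles it:

    \[
      \lim_{x \to 0} \frac{\tan x - x}{x^{3}}
      \;=\; \lim_{x \to 0} \frac{\sec^{2} x - 1}{3x^{2}}
      \;=\; \lim_{x \to 0} \frac{\tan^{2} x}{3x^{2}}
      \;=\; \frac{1}{3}
    \]

    Stop at the first 0/0 form and guess, drop the factor of 3, or differentiate sec²x carelessly, and it’s easy to talk yourself into 0, 1, or infinity.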

  • Designing for 12-Year-Olds: It’s Harder Than You Think!

    When I started building LexThread, I assumed the hardest part would be the tech: training the chatbot, making it understand questions, figuring out how to handle different languages. And yes, all of that took time, and it’s still very demanding. But the most unexpectedly frustrating part? The colors. The layout. The vibe. At first, I thought I had it all figured out. I picked a bright purple background and added a bunch of small illustrations like robots, doctors, scientists, pens, computers, and planets. The idea was to make it look playful and educational, like a digital sticker book of STEM dreams. Very “fun,” very “for kids.” But something felt off. The more I looked at it, the more I started to wonder: what if 12-year-olds aren’t that different from people my age? We all scroll through the same reels, watch the same YouTubers, and share Pinterest boards with muted tones and soft gradients. Most of the websites I like aren’t full of clipart or neon colors. So... why would they want that either? After all, they aren’t toddlers getting distracted by colors! So I decided to test it out. I made a second version of the site with a beige background, minimal icons, softer colors. It was cleaner, more relaxed, and honestly more my style too. I showed both versions to a few 12- and 13-year-olds. (Okay, not exactly a proper experiment, but close enough.) Their responses were kind of brutal, but really helpful. “The purple one looks like it’s trying too hard,” one said (Ouch). “This one feels more like a real website,” another added, pointing to the minimal version. Deep down, I knew it was true! I myself would prefer something minimalistic. That’s when it hit me: designing for younger users doesn’t mean designing down. It means designing better with respect for how sharp and intuitive they already are. They’re not looking for something that screams “educational kids’ app.” They want something that looks clean, modern, and, honestly, aesthetic. And I totally get it. I’ve clicked away from websites that looked too cluttered or too childish. So why should they be any different? In the end, I rebuilt the site with a calmer color palette, simple fonts, and cleaner pages. I want the experience to feel thoughtful, welcoming, and human to people of all ages, especially kids. Turns out, designing for 12-year-olds isn’t about adding more stuff. It’s about understanding what they actually like and trusting that even at 12, they know what “good design” looks like.

  • Why Do Chatbots Get It Wrong Sometimes?

    I’m building a chatbot to help students with science and math questions. It uses a keyword-based system. It reads from a CSV file, looks for key terms in a question, and shows the answer that matches best. I’m still developing it, but even in this early stage, I’ve already started noticing something important. Chatbots get things wrong. More often than you’d expect. At first, I assumed that if the bot could match a keyword, it would lead to the right answer. But what actually happens is more complicated. Sometimes the questions have similar words that confuse the match. Sometimes there’s more than one keyword, and the bot picks the wrong one. And sometimes, even though the answer is technically correct, it doesn’t feel like it actually answered what the student meant. For example, in the early stages of testing, I noticed that if someone typed “digestion,” the bot would return an answer for “respiration.” It took me a while to figure out why. “Digestion” wasn’t even in the dataset. But “respiration” was. Because both words end in “tion,” the bot matched on that substring and pulled up the wrong answer instead of showing an error message. The bot isn’t actually understanding anything. It’s just matching parts of words. It doesn’t know that digestion and respiration are two different things in science. It wasn’t a huge bug, but it showed me how easily the logic can go off track when the matching is too loose. That’s when I realised that bots don’t really get context. They don’t understand tone or intention or how the meaning of a word changes depending on the subject. This made me think more deeply about how language models work in general, even the advanced ones. Mistranslations, weird phrasing, answers that sound confident but are totally off. All of that happens more than we realise. Because AI doesn’t understand the way we do. It learns patterns, not meaning. That’s why it can sound smart but still mess up basic logic. The more I work on this, the more I’ve started paying attention to how people actually ask questions. Some mix languages. Some are too vague, or super specific. Some use words in ways I didn’t expect. And that’s okay. The challenge isn’t just to give answers. It’s to actually hear the question. Apart from building the chatbot, I’m also learning how the technology behind it actually works. How computers recognize words, figure out meanings, and try to respond in ways that make sense, even though they don’t “understand” like humans do. It’s interesting to see how tricky language can be for machines and how we can teach them to handle it better. So I’m not just making the bot. I’m also figuring out how language and technology work together, behind the scenes. And building this chatbot has already shifted how I think about language, education, and tech. Maybe that’s the real starting point. It’s not about getting it perfect but more about noticing what goes wrong, and asking why.
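    To show what that failure mode looks like in practice, here is a tiny sketch in Python. It isn’t my actual code, and the CSV column names (“keyword”, “answer”) are just placeholders, but it reproduces the kind of loose substring matching that sends “digestion” to the “respiration” answer, next to a stricter whole-word check that admits it doesn’t know.

```python
import csv

def load_answers(path="answers.csv"):
    # Hypothetical CSV with "keyword" and "answer" columns.
    with open(path, newline="", encoding="utf-8") as f:
        return {row["keyword"].lower(): row["answer"] for row in csv.DictReader(f)}

def naive_match(question, answers):
    # Loose matching: any shared 4-letter substring counts as a hit, so
    # "digestion" can match "respiration" just because both contain "tion".
    q = question.lower()
    for keyword, answer in answers.items():
        if any(q[i:i + 4] in keyword for i in range(len(q) - 3)):
            return answer
    return "Sorry, I don't have an answer for that yet."

def whole_word_match(question, answers):
    # Stricter matching: the keyword itself must appear as a whole word.
    words = set(question.lower().split())
    for keyword, answer in answers.items():
        if keyword in words:
            return answer
    return "Sorry, I don't have an answer for that yet."

answers = {"respiration": "Respiration releases energy from glucose."}
print(naive_match("Explain digestion", answers))       # wrong: returns the respiration answer
print(whole_word_match("Explain digestion", answers))  # right: admits it doesn't know
```

    Even the stricter version is crude: it still misses synonyms, typos, and rephrased questions, which is exactly the gap that real language models try to close.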

  • Can AI Speak My Language?

    I’ve always believed that learning should feel natural, but sometimes it doesn’t, especially when the words on the screen don’t sound like the way you actually speak. A lot of students around me switch between languages every day and are still very fluent in English. But there are students in rural areas who struggle with English. When they look up questions online, they mostly find English-only answers and have a hard time understanding them properly. That affects how confident they feel. Even if they know the answer, they start doubting themselves. A few months ago, I worked with a friend to create a Kannada-English transliteration guide. I don’t speak Kannada myself, but I wanted to help students at a learning center who were more comfortable in Kannada than English. My friend handled the language side, and I focused on the structure and design. That project made me realize how much language affects understanding, especially when someone is already unsure about the subject. That’s what pushed me to explore a similar idea in Hindi, a language I’m fluent in. This is where multilingual NLP comes in. NLP (Natural Language Processing) powers technologies like voice assistants (Siri, Alexa), language translators (like Google Translate), and chatbots. The goal of NLP is to bridge the gap between how humans speak and how machines understand. But this technology is still learning. Sometimes it gets things wrong. Sometimes it just doesn’t get what you’re trying to say. This is not because your words are wrong, but because the context is missing. Despite these challenges, NLP keeps improving by learning from large amounts of text and human interactions. Right now, I’m working on a chatbot that helps students ask science or math questions. I want it to be easy to use, even for people who aren’t used to typing perfect English. My goal is to make it feel like you’re just chatting with someone who understands your question, no matter how you word it. I haven’t finished building the bot yet, but even while testing it, I keep thinking about the people I want it to reach: Students who speak other languages at home. Students who want to be fluent in English, but aren’t there yet. People who mix languages naturally without even realizing it. These are the kinds of users who don’t usually get tools made for them. Students who want to learn new languages can also use this tool. The bot gives answers in English, in code-mixed form (English + selected language), and in the full selected language. This helps users understand new words, sentence structures, and meanings in a way that’s easy and fun. Making AI more accessible is about building something that listens better. Something that helps you in your language. It’s about making learning feel comfortable — not just translating words. And that’s the kind of AI I want to help create.
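    As a rough sketch of the three answer formats described above (this isn’t the real bot, and the Hindi strings are simple hand-written examples rather than machine translations), the idea in Python looks roughly like this:

```python
# Illustrative only: one stored answer, served in three formats --
# full English, code-mixed Hindi + English, and full Hindi.

ANSWERS = {
    "photosynthesis": {
        "en": "Plants make their own food using sunlight, water, and carbon dioxide.",
        "mixed": "Plants सूरज की रोशनी, पानी और carbon dioxide से अपना food बनाते हैं।",
        "hi": "पौधे सूरज की रोशनी, पानी और कार्बन डाइऑक्साइड से अपना भोजन बनाते हैं।",
    },
}

def answer(topic, mode="mixed"):
    """Return the stored answer for a topic in the requested format."""
    entry = ANSWERS.get(topic.lower())
    if entry is None:
        return "Sorry, I don't have that topic yet."
    return entry.get(mode, entry["en"])

print(answer("photosynthesis", "en"))
print(answer("photosynthesis", "mixed"))
print(answer("photosynthesis", "hi"))
```

    The point of the code-mixed format is that a student can lean on the English technical words they already know while reading the rest of the sentence in the language they actually think in.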
