You Are Not Reading About AI. You Are Reading About a Mirror.
- IndianMHS
- Apr 19
- 4 min read

This is the most AI-esque title I could have put in: the classic construction, ‘It is not A, it’s B.’
We know some of what AI writes is redundant, some of it sounds too formal or strangely unreal, and yet it has quietly become an alternative to Google. Most searches are curiosities about ourselves, other people, or the world. You open your phone and start typing: ‘What does pain in my neck mean?’ ‘Where should I invest my money?’ ‘Give me research articles on ....’
Millions of people are doing it. It’s not an edge case anymore; it’s not just you and me.
Now here’s where it gets interesting. The moment you read that, something happened in your mind. A reaction. Maybe a flicker of concern, a quiet resistance, or even recognition.
That reaction is what this piece is really about.
We are in the middle of a story about AI and mental health. But like all stories mid-telling, most of us have only heard fragments. A headline here, a viral case study there, a webinar or podcast episode someone forwarded. And fragments, when they're emotionally charged, tend to calcify fast. They stop being data and start feeling like truth.
The conviction, right now, runs something like this: AI in mental health is either a saving grace or a catastrophe. It will either fix the broken system or break people who are already fragile. Both of these are stories. But if you step back, something else starts to come into view. Let’s do that for a moment.
Mental health care in most parts of the world is a system under extraordinary strain, and India is not exempt. Professionals are leaving the field faster than new ones are trained. Access to a therapist remains a privilege of geography and income. Wait times in public systems stretch into weeks. And the need for mental health care keeps growing. In many Indian cities, even finding an available therapist can take weeks; in smaller towns, there may not be one at all.
Into this gap, people are reaching for whatever is available. That, increasingly, turns out to be an AI on their phone. In surveys across multiple countries, nearly half of people with mental health conditions report having used a large language model for support. Of those, the overwhelming majority cite one reason above all others: it was there. At 2 a.m., when no one else was.
This does not make AI a therapist. But it does make the conversation far more complicated than ‘AI is dangerous.’ The question we need to ask is what kind of story we’re telling about it, and whether that story is actually helping anyone think more clearly.
Right now, the loudest narratives about AI in mental health are built on fragments. A small study, a simulated conversation, a tragic case, each treated as representative. These spread because they are emotionally legible. They fit the shape of a warning.
But a warning is only useful if it leads somewhere. If it only amplifies fear, it shuts down the very thinking we need. At the same time, enthusiasm built on hype is just as distorting. The AI-will-fix-mental-health narrative tends to flatten complexity in the other direction, turning a nuanced problem into a product pitch. Both reactions make sense, but neither is the whole truth.
So what changes when you zoom out and see the system?
You stop asking ‘Is AI good or bad for mental health?’, because that question is structured like a verdict, and we are nowhere near the evidence needed for one.
You start asking better questions instead. Questions like: Under what conditions does AI support become harmful? What kinds of interactions seem to genuinely help, and why? How do we build feedback loops between clinicians and the systems people are already using? Who is responsible when something goes wrong? These are design, policy, and clinical questions. They are hard and specific and ugly. But they are the questions that actually lead somewhere. They move things forward.
The people who will shape how AI integrates into mental health care are not just the technologists. They are the practitioners noticing patterns in their clients. The researchers willing to study what’s actually happening rather than what’s theoretically possible. The policymakers deciding what counts as care. The users who know what helped and what didn’t.
You are probably one of these people. Which means you are not just a reader of this story. You’re a participant in it. AI is a mirror. It reflects what we put into it: our fears, our loneliness, our coping patterns, our questions at 2 a.m. It also reflects what we, as a society, put into building it: our assumptions about what care looks like, who deserves it, and what counts as help.
That's not a reason to fear it. It’s a reason to look at it more carefully. The story isn't finished yet. But the people reading this piece are among those holding the pen.
Author Bio:
Vinamra Vasudeva is a psychologist and clinical strategist working at the intersection of mental health and technology. She consults on products, educates therapists on AI, and builds tools for modern clinical practice through TinT. Her work explores agency, ethics, and what it means to stay human in an AI-shaped world.


