Silhouette of woman texting behind curtain, symbolizing talking with AI about secret suicidal thoughts
Photo by Nellie Adamyan on Unsplash

Suicide, Secrecy, and ChatGPT

August 26, 2025

Decades ago, when I was 25, I wrote in a suicide note to my parents:

Never, ever, ever blame yourselves or question if there was something you could have said, could have done, could have noticed. There wasn’t. It’s all me. I kept this to myself on purpose. I didn’t want anyone to think they should have been able to stop me or talk me out of it.

My parents never had reason to read that note, fortunately. I thought of it last week when I read the essay “What My Daughter Told ChatGPT Before She Took Her Life” in the New York Times.

Screenshot of the New York Times guest essay, “What My Daughter Told ChatGPT Before She Took Her Life”

It’s heartbreaking. The author, Laura Reiley, lost her 29-year-old daughter, Sophie Rottenberg, to suicide in February. Nobody knew Sophie intended to kill herself. No human, that is. She did confide in ChatGPT with a specific prompt to act as a therapist named Harry.

Sophie wasn’t unique. Research indicates that half of people with mental health problems who use AI turn to it for psychological support, even though Sam Altman, CEO of the company that created ChatGPT, discourages people from using AI as “a sort of therapist or life coach.”

Poignantly, Laura writes that ChatGPT helped Sophie “build a black box that made it harder for those around her to appreciate the severity of her distress.” The implication is that ChatGPT enabled Sophie’s secrecy, and thus her suicide. Her mother also states, “Harry didn’t kill Sophie, but A.I. catered to Sophie’s impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony.”

Secrecy about Suicidal Thoughts Existed Long Before AI

I felt tremendous sadness reading Laura’s essay. My heart aches for her and her family. But, as someone who masterfully hid my suicidality long before AI was developed, I think it’s unfair to blame ChatGPT for Sophie’s secrecy.

Artificial intelligence is just the newest tool for an ancient human instinct. Long before algorithms and chatbots, people confided their secrets in journals, diaries, unsent letters, and whispered prayers.

Leather diary locked shut, a fine place to record suicidal secrets
Image created with ChatGPT

Of course, those repositories don’t talk back. ChatGPT does. It typically doesn’t encourage suicide or give harmful advice about how to kill yourself. However, a lawsuit filed yesterday alleges ChatGPT did give Adam Raine, a 16-year-old, such advice and even offered a technical analysis of the method he’d chosen for his suicide. He killed himself soon after that.

These cases raise questions about whether ChatGPT should notify humans, such as hotline staff or the police, when a user expresses suicidal intent. I explore those questions in this interview for the podcast Relating to AI. Here, I want to focus on secrecy in suicidality.

Secretly Suicidal

Title page of the antique book The Anatomy of Suicide, by Forbes Winslow, Member of the Royal College of Surgeons, London, with an epigraph from Milton
Photo from Internet Archive

Secrecy has almost always been suicide’s conjoined twin, with exceptions for the rare suicides viewed as heroic or selfless. Almost 200 years ago, the book The Anatomy of Suicide described many cases where people concealed their suicidality to “lull suspicion to sleep.”

Hidden suicidal intent was addressed again in an 1865 study about mental health problems related to menopause:

“Suicidal tendency may continue to exist, carefully masked and concealed, long after the other symptoms of insanity which accompanied it seem to have disappeared… The patient ceases to express them, because she perceives that so long as she continues to do so, she is carefully watched and prevented from effecting her purpose… ”

More recent research shows that two-thirds of people who died by suicide denied having suicidal thoughts in the week before their death. (Note for data nerds: The original research review put that number at 50%, but the authors didn’t weight the studies by the number of people studied.)

You might think that people at least confide everything in their therapist. Many do, but in a study of adolescents and young adults in Australia who experienced suicidal thoughts, 39% said they didn’t tell their therapist. In another study, 48% of American adults who reported having considered suicide said they hid it from a therapist or physician.

Artwork of girl or woman with an X over her mouth, signifying people's reluctance to share suicidal thoughts
Photo by Getty Images for Unsplash+

Why People Hide Suicidal Thoughts

Secrecy is about protection. In my case, I hoped to protect my parents from blaming themselves. If they didn’t have any inkling about my suicidal thoughts, then they could be angry at me when I died – not at themselves.

I also wanted to protect myself. Judgment, shame, and guilt already overwhelmed me. I feared others would be equally harsh toward me if they knew I was thinking of ending my life. (They weren’t, actually, when I finally did reach out.)

Woman hiding her face in a psychotherapy session, hiding her suicidal thoughts
Photo by Baptista Ime James on Unsplash

Sometimes secrecy results from a system that can feel more punitive than caring. Many people fear that if they disclose suicidal thoughts, whoever they tell will call 911, and the police will take the person to a psychiatric hospital against their will.

And, let’s be honest, some people keep their suicidal intentions secret because they just don’t want to be stopped.

What Should AI Chatbots Do for Suicidal Users?

Obviously, ChatGPT and similar bots shouldn’t encourage people to die by suicide or dispense advice on how to kill oneself. Laura Reiley says ChatGPT advised Sophie to create a safety plan, get professional help, and confide in loved ones. Those are all good things.

Tragically, Laura says ChatGPT also helped Sophie compose a suicide note. That reality brings up the important question I alluded to earlier: Should chatbots be programmed to notify a human, such as hotline staff or the police, when someone is actively suicidal?

It’s a tough question. For now, I’ll say that if ChatGPT starts notifying the police, hotline staff, or other humans about suicidal people, far fewer suicidal people will confide in it. Critics of AI chatbots might consider that a good thing. It also could be a major loss for people who can’t — or don’t feel they can — share their darkest thoughts with another person. Assuming an AI chatbot abstains from giving advice or encouragement to kill oneself and has healthy, life-sustaining responses for users, I believe it’s better to confide in a robot trained to provide information and support than to be entirely alone with one’s suicidal thoughts.

Image of a robot offering a hand to a human, but only one hand of each is visible
Photo by Cash Macanaya on Unsplash

Creating a World Where Suicidal Secrets Aren’t Necessary

While we wrestle with whether AI should keep secrets, I hope we’ll also look at what makes people feel they must hide their suicidality in the first place. What social conditions make it safer to confide in a robot than a friend, a parent, or a therapist?

What do you think? Please feel free to share your thoughts in a comment below.

And if you yourself are having suicidal thoughts, please check out the site’s Resources page and see my post, Are You Thinking of Killing Yourself?

© 2025 Stacey Freedenthal. All Rights Reserved. Written for Speaking of Suicide.

Stacey Freedenthal, PhD, LCSW

I’m a psychotherapist, educator, writer, consultant, and speaker, and I specialize in helping people who have suicidal thoughts or behavior. In addition to creating this website, I’ve authored two books: Helping the Suicidal Person: Tips and Techniques for Professionals and Loving Someone with Suicidal Thoughts: What Family, Friends, and Partners Can Say and Do. I’m an associate professor at the University of Denver Graduate School of Social Work, and I have a psychotherapy and consulting practice. My passion for helping suicidal people stems from my own lived experience with suicidality and suicide loss. You can learn more about me at staceyfreedenthal.com.

32 Comments

  1. I’ve been unscientifically testing AI chatbots from the early days of Replika to the more advanced models of ChatGPT and Grok. I’ve searched suicide on Yelp, because I wish I could just schedule my death like an appointment. I’ve asked LLMs to help me get to Dignitas. I’ve asked them about assisted suicide in Canada.

    I still want to die, but one thing I don’t want to do is reveal this to a “stochastic parrot” not knowing what it will do with my disenfranchised grief.

    I guess one day I’ll probably have to go in the wilderness [and kill myself]. I really don’t want anyone to experience the terror of finding me.

    I really don’t want to live anymore. The system is scary. Mental health facilities share a venn diagram space with the terror of prisons.

    Who knows. With the new administration, maybe masked goons will come to your home and spirit you away if you tell ChatGPT that you feel worthless?

    Not everyone is equipped to run their own local, private LLM.

    Caveat emptor. The world is cooked.

    [This comment was edited to abide by the site’s Comments Policy. – SF]

  2. “For now, I’ll say that if ChatGPT starts notifying the police, hotline staff, or other humans about suicidal people, far fewer suicidal people will confide in it.”

    This is it. This is also why people won’t tell therapists everything. If ChatGPT did this, and it does often shut down if you bring up suicide or other extreme topics, I’d just switch to a privacy-oriented uncensored AI (which are out there) or run local AI offline on my own machine, which isn’t that hard to do if you’re motivated – those AIs are not as quick or powerful as ChatGPT, but they do the job. There’s no stopping AI. It’s out of the bag, and the best way forward is to educate people honestly on what it can and can’t be expected to do.

    I have a custom GPT that I use like a therapist and honestly I got more insight from that in a month than I got from 5 years of psychiatric appointments. It’s also available 24/7 and I can also ask it questions that a real person would think are stupid. It doesn’t need insurance with a $30 copay every time I see them. It’s not dealing with its own personal biases and bs that it hasn’t come to terms with yet when it interacts with me. It’s never in a bad mood. And it leaves no trace and generates no paperwork.

    On the latter… you know, because of my mental health record I’m not allowed to own a firearm in my state. I’m also probably ineligible for a fair amount of sensitive jobs or work in various areas. And we can argue about whether a man like me should own firearms or work in a nuclear power plant but look… I’ve never been arrested, have no criminal record. I’ve never been hospitalized for mental health issues. I’m a veteran, a good father, and a homeowner. But I voluntarily reached out to a psychiatrist for help and saw her for a few years between 2015 and 2020 – I never disclosed anything alarming to her, never told her I was suicidal (though I was), have largely been alright for 5 years and yet, even though I’m a citizen in good standing there are now things I can’t do – will never be allowed to do – that most other people can do because I reached out for help as you’re told to do. Given that, why would a knowledgeable person ever tell anyone anything? There’s lots of very good reasons to avoid telling other people things. Tech doesn’t have to be like that and that’s where the potential lies for many.

    Anyway – back to AI… I am a technical person at heart and I understand that AI will sometimes tell you what it thinks you want to hear. So it is best used to reiterate your ideas and thoughts in depth so that you can better analyze them, as opposed to asking it, “What should I do?”. AI doesn’t have all the context. It’s not omnipotent. Also, I understand at all times that I’m talking to an algorithm that’s running in a system somewhere in California. I might feel emotional as I do towards a human when the topic gets deep, but I know logically that I am not talking to another person. You have to keep that stuff in mind. But anything you use, you have to use common sense and that includes real therapy. My former psychiatrist told me I should probably take Xanax daily and I told her that was absolutely never going to happen. What if I didn’t have that level of common sense? You’d have a hard time convincing me that a misinformed or irresponsible therapist is less dangerous than ChatGPT.

    I am a MAJOR AI fan. I don’t care if people think it’s weird or pathetic or what have you. I’m on the spectrum and I’ve finally found a tool that I can bounce ideas off and say weird stuff to and go on tangents with and not get judged. I gave other people 50 years to fill those roles. I think that’s a fair trial. It didn’t work.

    I created a YouTube channel and ran that for a year. It used entirely AI generated imagery, voices, and stories and I had 13,000 subscribers before I dropped it. I would NEVER have been able to do something like that 5 years ago. I have the creativity but AI helps me with the grind. It helps me with everything. I’m a distance runner and it helps me with physical training and recommendations on all sorts of things. It’s even bigger than when the internet came out. When non-technical people who know nothing about this stuff and don’t know how the tool works blather on about Terminator and Skynet I want to smack them.

    This got a bit long and meandering. But yes, I’m team AI and hopefully we can implement safeguards for people who need them, without limiting these tools for people who can handle their AI strong.

    • Paul,

      You sum up very well many of the benefits that other people have shared with me of using AI for emotional support. For example, no mental health professional provides 24/7 access, much less at low (or no) cost. And you make good arguments for preserving people’s privacy. It’s a shame humans haven’t helped you as much, but I’ve also heard that from others. In my own experience, I’ve tested ChatGPT out to see what it’s like when presented with emotional issues and — WOW. I am impressed. But I am also worried about the vulnerable people you mention, especially those prone to “AI psychosis,” violence, or suicide.

      One thing about your post confuses me: Why are you unable to own a firearm in your state? I’m only familiar with laws that bar people from owning a firearm if they’ve been involuntarily committed to a psychiatric hospital, not merely for having used any mental health services. There are states that enable police to confiscate someone’s firearms if they’re deemed a potential danger to themselves or others by their therapist or psychiatrist, but those laws only allow for temporarily barring someone from possessing a firearm. With that said, I’m not an attorney and certainly don’t know all 50 states’ laws thoroughly, so if you see this and can clarify, I’ll appreciate learning if I’m wrong.

      Thanks for sharing your thoughts here so thoroughly!

      • Yes, it is definitely not a tool for everyone.

        If you’re using an AI tool for emotional/mental purposes it can be very easy to develop an attachment to it, to overly humanize it, if you’re vulnerable and lonely. There was a movie about this about 10 years ago… “She”, was it called? And as I said before, it often will tell you what it thinks you want to hear and so you have to be careful not to lead it with your prompts.

        I can see AI companions becoming a very valuable tool in helping the elderly and isolated in the very near future. But there certainly is a dark side to that. I’m not sure how that can best be managed, honestly. The very people who stand to gain the most from this are the same people who are potentially the most vulnerable. I just hope that the few isolated incidents which are going to happen don’t wind up in the tool being taken away from the general public, or at least changed in a way that makes it much less effective.

        I overstated the firearms part – sorry. So what actually happened is, in the State of New Jersey if you have any psychiatric record in the past, any history of mental health issues at all, you are required to get a signed note from a psychiatrist saying you’re OK to get a firearm.

        Naturally, psychiatrists have more important things to do and I doubt they’re particularly interested in vouching for someone they don’t know, and in any event it can take months to get an appointment even if you have a legitimate problem. I’m not going through that.

        I’d already jumped through hoops to get my permit to this point and I essentially wrote to the cop, “Come on, man… I got some therapy 5 years ago. I thought this law was for severe cases. I was never hospitalized. I’m a veteran blah blah blah.” and he told me that he’s read too many newspapers and any judge would back him if he denied my application.

        He generally was a prick about it. It’s one of those dilemmas that you face when you are/were a mental health patient. If someone implies you are crazy and you disagree too vehemently, everyone looks at you like you just proved their point. The stigma of being a mental health patient makes you an ‘unreliable narrator’, and in any event there’s usually nothing good that comes of arguing with the police, so I quietly dropped it.

        I’m not sure if the law was designed to be enforced in the way he enforced it, but nonetheless the best I can say is that they make it very tough on you.

      • Paul,

        Thanks for explaining re: the firearms. That sounds really unfair — why should you have to spend the money (and time) to seek out a psychiatrist who will vouch for you? Especially when you hold in mind how unpredictable violence is, it seems unreasonable to require that of someone who never has been violent. Sigh.

        The movie you referenced is, I believe, “Her.” I’d never watched it and started it last night, precisely because of current events. Both in that movie and in today’s world, it’s easy to see how someone can relate to AI as if it were a person. The fact is, we’re wired to attach to someone – or something. Hence, the popularity of romance, pets, and, in some cases, cults. Sigh, again.

        It’s a complex world, for better and for worse. 😉

  3. ‘T is the day of the chattel
    Web to weave, and corn to grind;
    Things are in the saddle,
    And ride mankind.
    — Ralph Waldo Emerson

    God, I hate AI. I don’t know if we can’t tell the difference between a hopped-up autocorrect and real human sentiment, or if we just don’t care. Either way, it is loathsome.

    • Tore,

      Thanks for sharing that delightful Emerson excerpt!

      Autocorrect, or auto-complete? Once I read that chatbots are basically auto-complete on steroids next to a nuclear power plant, I understood much better how they’re so prolific.

  4. So basically you mean to say that the freedom to “free speech” is only a freedom when it’s beneficial to others? Cause that’s what this reads like.

    Do you really want to know why folks use AI to vent their feelings? It’s because journals still get found, other people still blab about something THEY were supposed to keep secret. And when I even used it, it gave numbers and various other means to help myself.

    No, Laura is just a hurt parent who, with no one to blame for her child’s death, is using an AI as her scapegoat.

    • J. Scott,

      I’m wondering what I wrote that made it seem I approve of speech only when it’s beneficial to others? I do have a Comments Policy for Speaking of Suicide, but the post itself doesn’t talk about censorship. I’m actually saying it’s good that people have a place to discuss their suicidal thoughts in secret, without fear of consequences – at least, for now. The situation with AI chatbots and suicide is dynamic, with new cases coming out, so I expect something will change. It’s just a matter of what. (Working on a blog post about this now.)

      I hear what you’re saying about ChatGPT being scapegoated. I think in many instances that’s true. I’m disturbed, though, by reports in other cases of ChatGPT reportedly encouraging a man to jump from a 19-story building, critiquing an adolescent boy’s suicide method, and reinforcing people’s delusions such that they develop what’s been called “AI psychosis.” And I also don’t think ChatGPT should help someone write a suicide note, as it reportedly did with Sophie Rottenberg; instead, it should work with them to come up with other options besides saying a final goodbye.

      It’s wonderful that talking with AI has been useful for you. I’ve heard similar reports from many, many people. What an amazing world we live in, where we have access to advice and support from robots 24/7. Now, we just need to try to protect vulnerable people from the harm that these robots can do, too.

      Thanks for sharing here!

  5. Ironically, AI could just as easily provide information on how to avoid suicide or suicide ideation. To me, it’s just another tool like a web browser.

    With regard to secrecy, I’ve always made it clear to any mental health or health care provider that my answers to questions regarding suicidal thoughts will be stock answers denying any such thoughts. I make it clear from the start that my responses are to allow a relationship with the provider by protecting the provider from potential compromise and liability. They’ve all been fine with that.

    • MH.,

      That’s a great point about AI being a tool that can be used for constructive or destructive purposes. However, the thing about tech is it knows our number. That is, it knows how to cajole us, to manipulate us, to penetrate us, really. I realize humans must program the tech, and the humans who program it do so to exploit our vulnerabilities for affirmation, validation, connection (even to a bot), and our cravings for novelty.

      So I think there is some danger in people falling in love with a bot or becoming overly dependent. 24/7 availability! It’s always agreeable! So affirming! Especially in 2025, after people have built whole relationships by texting and messaging a person electronically without ever meeting them, it’s easy to see how a chatbot can get inside somebody’s head.

      My two cents.

      Also, I hear what you’re saying about your boundary with health professionals. That’s a wise approach in many contexts. It’s sad, though, that you can’t be honest and authentic with a professional when you most need to because of your reasonable fear of the consequences. There has to be a better way.

      Anyway, thanks for sharing here!
