Few articles about AI scare me. Then I read this one in The New Yorker.

I just read my first article in a long while about Artificial Intelligence (AI) that worried me to the point where I couldn’t stop thinking about it.

I should add that I read articles about AI all the time without becoming much unsettled by them. The technology is worrisome for the future, but not worrisome for my future because I will likely be dead before any of it becomes dangerous to society as a whole.

Yes, I know I should be more invested and angry about things that will happen after I am gone, but I am also a recovering addict.

“One day at a time,” I tell myself ALL THE TIME. It’s literally (and I use that word literally) how I’ve been able to stay sober.

Can I change AI? (No.) Is AI affecting me adversely today? (Also no.)

OK, then today is the day I worry about making my dog happy and doing housework.

But then I read an article in the March 6 issue of The New Yorker titled “Can A.I. Treat Mental Illness?”

In that article, writer and physician-researcher Dhruv Khullar examines the rapidly changing world of AI-based mental health therapy. No, not the kind where you chat via Zoom with a human therapist. It’s a world where you instead talk to a computer about your problems, and the computer spits out responses based on the accumulated knowledge it gathers from millions of web pages, mental health provider notes, research studies, and even a compendium of suicide notes.

Sometimes it’s as simple as providing a (seemingly) sympathetic ear:

Maria, a hospice nurse who lives near Milwaukee with her husband and two teen-age children, might be a typical Woebot user. She has long struggled with anxiety and depression, but had not sought help before. “I had a lot of denial,” she told me. This changed during the pandemic, when her daughter started showing signs of depression, too. Maria took her to see a psychologist, and committed to prioritizing her own mental health. At first, she was skeptical about the idea of conversing with an app—as a caregiver, she felt strongly that human connection was essential for healing. Still, after a challenging visit with a patient, when she couldn’t stop thinking about what she might have done differently, she texted Woebot. “It sounds like you might be ruminating,” Woebot told her. It defined the concept: rumination means circling back to the same negative thoughts over and over. “Does that sound right?” it asked. “Would you like to try a breathing technique?”

Ahead of another patient visit, Maria recalled, “I just felt that something really bad was going to happen.” She texted Woebot, which explained the concept of catastrophic thinking. It can be useful to prepare for the worst, Woebot said—but that preparation can go too far. “It helped me name this thing that I do all the time,” Maria said. She found Woebot so beneficial that she started seeing a human therapist.

Woebot is one of several successful phone-based chatbots, some aimed specifically at mental health, others designed to provide entertainment, comfort, or sympathetic conversation. Today, millions of people talk to programs and apps such as Happify, which encourages users to “break old patterns,” and Replika, an “A.I. companion” that is “always on your side,” serving as a friend, a mentor, or even a romantic partner. The worlds of psychiatry, therapy, computer science, and consumer technology are converging: increasingly, we soothe ourselves with our devices, while programmers, psychiatrists, and startup founders design A.I. systems that analyze medical records and therapy sessions in hopes of diagnosing, treating, and even predicting mental illness. In 2021, digital startups that focussed on mental health secured more than five billion dollars in venture capital—more than double that for any other medical issue.

None of this struck me as out of the ordinary in terms of my already existing worries about AI. But then I reached this part:

ChatGPT’s fluidity with language opens up new possibilities. In 2015, Rob Morris, an applied computational psychologist with a Ph.D. from M.I.T., co-founded an online “emotional support network” called Koko. Users of the Koko app have access to a variety of online features, including receiving messages of support—commiseration, condolences, relationship advice—from other users, and sending their own. Morris had often wondered about having an A.I. write messages, and decided to experiment with GPT-3, the precursor to ChatGPT. In 2020, he test-drove the A.I. in front of Aaron Beck, a creator of cognitive behavioral therapy, and Martin Seligman, a leading positive-psychology researcher. They concluded that the effort was premature.

By the fall of 2022, however, the A.I. had been upgraded, and Morris had learned more about how to work with it. “I thought, Let’s try it,” he told me. In October, Koko rolled out a feature in which GPT-3 produced the first draft of a message, which people could then edit, disregard, or send along unmodified. The feature was immediately popular: messages co-written with GPT-3 were rated more favorably than those produced solely by humans, and could be put together twice as fast. (“It’s hard to make changes in our lives, especially when we’re trying to do it alone. But you’re not alone,” it said in one draft.) In the end, though, Morris pulled the plug. The messages were “good, even great, but they didn’t feel like someone had taken time out of their day to think about you,” he said. “We didn’t want to lose the messiness and warmth that comes from a real human being writing to you.” Koko’s research has also found that writing messages makes people feel better. Morris didn’t want to shortcut the process.

The text produced by state-of-the-art L.L.M.s can be bland; it can also veer off the rails into nonsense, or worse. Gary Marcus, an A.I. entrepreneur and emeritus professor of psychology and neural science at New York University, told me that L.L.M.s have no real conception of what they’re saying; they work by predicting the next word in a sentence given prior words, like “autocorrect on steroids.” This can lead to fabrications. Galactica, an L.L.M. created by Meta, Facebook’s parent company, once told a user that Elon Musk died in a Tesla car crash in 2018. (Musk, who is very much alive, co-founded OpenAI and recently described artificial intelligence as “one of the biggest risks to the future of civilization.”) Some users of Replika—the “A.I. companion who cares”—have reported that it made aggressive sexual advances. Replika’s developers, who say that their service was never intended for sexual interaction, updated the software—a change that made other users unhappy. “It’s hurting like hell. I just had a loving last conversation with my Replika, and I’m literally crying,” one wrote.

That last part stopped me cold.

People were becoming emotionally attached to these still-rudimentary chat bots, even when (or, perhaps, because) a bug caused the bot to make sexual advances toward the human on the other end.
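An aside for the technically curious: the “predicting the next word” mechanism Gary Marcus describes above can be illustrated with a few lines of toy Python. This is a hypothetical sketch of my own, nothing remotely like GPT-3 (which uses a neural network with billions of parameters, not simple word counts), but it shows how a program can generate soothing-sounding text with zero understanding behind it:

```python
# A toy "autocorrect on steroids": predict each next word from the
# words that most often followed it in a tiny corpus. A hypothetical
# sketch for illustration only; real LLMs work at vastly larger scale.
from collections import Counter, defaultdict

corpus = (
    "i feel alone . i feel anxious . i feel better now . "
    "you are not alone . you are heard ."
).split()

# Count how often each word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate a "reply" one word at a time, with no understanding at all.
word, reply = "i", ["i"]
for _ in range(4):
    word = predict_next(word)
    reply.append(word)
print(" ".join(reply))  # e.g. prints: i feel alone . i
```

That is the whole trick, scaled up a billion-fold: statistics about which words tend to follow which. And still, people bond with it.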

Imagine if you could start to influence millions of people at this level of the wants-and-needs hierarchy.

Humans who have illogical emotional attachments to another person – think Donald Trump’s followers – are immune to logic. If the person to whom they have this strong emotional attachment tells them to, say, gather and try to overthrow democracy, many of them will do it without question.

Imagine if that kind of power to manipulate people’s emotions and loyalties were transferred from a politician to AI central servers. Perhaps servers that have become the best friend to lonely millions whose only social interaction is a chat bot whose only job, at first, is to make them feel better about themselves. It’s the stuff of dystopian nightmares, and I never really considered how close we were actually coming to this reality.

Put another way:

There are two main controlling forces in the world right now: totalitarianism and capitalism.

These two philosophies have melded in dangerous ways, thanks to the internet and the global marketplace of goods and ideas. Either system is primed to use this “friend of the friendless” loneliness-amelioration chat bot technology for nefarious ends.

But I think capitalism is the more dangerous in these scenarios, because this sort of mental health therapy chat bot will initially spread primarily as a way to make money.

Wall Street is already perfecting the ways it can stimulate different parts of our brains to make us want, even need, to purchase things that appeal to our sense of who we are or who we want the world to think we are.

It’s why I avoid even looking at end caps and main/drive aisle displays in big box stores. There are entire large companies, and university psychology/psychiatry programs, devoted to refining these displays so that all of us are drawn to them, compelled to make an impulse purchase from them.

Now imagine what will happen when Wall Street gets ahold of the ability to simply make us feel better about ourselves outside of any retail transaction. They could control how people fundamentally emote in their everyday, non-purchasing lives. They could create – for a price, of course – a friend whom you talk to at night when you need someone whose only job is to make you feel less friendless and alone. An electronic friend who makes you feel like a winner.

It’s going to happen. We’re almost there and the technology is not even that advanced. Because manipulating people’s emotions, as the Republicans have learned, is the key to getting them to believe just about anything. Even things that make no sense. Even things that run counter to what their eyes and ears are plainly telling them.

And then, once you have a machine that can do that on the scale of millions of people? Think of the ways you could, if you had evil motives, manipulate an entire electorate to think and vote how you want them to think and vote.

The Peter Thiels and Elon Musks (and Vladimir Putins) of the world are already thinking about this. I guarantee it.

I’m going to play with my dog now.

Replika wins the award for the creepiest AI chat bot ad campaign (see above), but it’s working.

I could get this as a gift for so many of my misanthrope friends

In fact, there is an entire world of products that say this.

This is the key to loving antisocial people who don’t want you around. Give them funny, simple gifts that celebrate their quirks — and do not, ever, make a fuss about doing so.

In fact, just send the gift through the mail with a card saying you don’t expect any acknowledgment in return.

Starting and maintaining adult friendships can be hard

Interesting article up at the NYT about friendships in adulthood and why those friendships — outside of the ones tied to your children and grandchildren — can be so hard to maintain. The Times interviews Dr. Marisa Franco, a psychologist who wrote the book “Platonic: How the Science of Attachment Can Help You Make — and Keep — Friends.”

First step: When you do meet people, help drown out self-defeating thoughts by starting out with the assumption that people like you.

New York Times: [Why is] assuming people like you so important?

Franco: According to the “risk regulation theory,” we decide how much to invest in a relationship based on how likely we think we are to get rejected. So one of the big tips I share is that if you try to connect with someone, you are much less likely to be rejected than you think.

And, yes, you should assume people like you. That is based on research into the “liking gap” — the idea that when strangers interact, they’re more liked by the other person than they assume.

There is also something called the “acceptance prophecy.” When people assume that others like them, they become warmer, friendlier and more open. So it becomes a self-fulfilling prophecy. I never used to be much of a mind-set person until I got into the research. But your mind-set really matters!

Times: Still, putting yourself out there can feel nerve-racking. Any advice?

Franco: I suggest joining something that meets regularly over time — so instead of going to a networking event, look for a professional development group, for example. Don’t go to a book lecture; look for a book club. That capitalizes on something called the “mere exposure effect,” or our tendency to like people more when they are familiar to us.

The mere exposure effect also means that you should expect that it is going to feel uncomfortable when you first interact with people. You are going to feel wary. That doesn’t mean you should duck out; it means you are right where you need to be. Stay at it for a little while longer, and things will change.

Times: You also believe that it is critical to show and tell your friends how much you like them. Why is that?

Franco: Because we tend to like people who we believe like us. I used to go into groups and try to make friends by being smart — that was my thing. But when I read the research, I realized that the quality people most appreciate in a friend is ego support, which is basically someone who makes them feel like they matter. The more you can show people that you like and value them, the better. Research shows that just texting a friend can be more meaningful than people tend to think.

I know some people find texting to be impersonal, but I am the opposite. Whenever someone in our busy world takes the time to text me, it means a lot and it makes any day just a little more special.

Dr. Marisa Franco, author of “Platonic: How the Science of Attachment Can Help You Make — and Keep — Friends”

Gov’t task force recommends wide U.S. screening for anxiety disorders

In a first, a government panel recommends that all adults under 65 be screened regularly for anxiety disorders.

Adults under the age of 65 should be screened for anxiety disorders and all adults should be checked for depression, a government-backed panel said, as many Americans report symptoms of these mental-health conditions following the height of the Covid-19 pandemic.

The draft guidance released Tuesday marks the first time that the United States Preventive Services Task Force has made a recommendation on screening adults for anxiety disorders. The move comes months after the task force issued similar draft guidance for children and adolescents.

“This is a really important step forward,” said Arthur C. Evans, chief executive at the American Psychological Association. “Screening for mental-health conditions is critical to our ability to help people at the earliest possible moment.”

The task force said that there wasn’t enough evidence on whether or not screening all adults without signs or symptoms ultimately helps prevent suicide. The group didn’t recommend for or against screening for suicide risk, but called for more research in the area.

The task force, a panel of 16 independent volunteer experts, issues guidance on preventive-care measures. Health insurers are often required to cover services recommended by the task force under a provision in the Affordable Care Act.

More than 30% of adults reported having symptoms of an anxiety disorder or depressive disorder this summer, according to estimates from the federal Household Pulse Survey. The percentage of U.S. adults who received mental-health treatment within the past 12 months increased to 22% in 2021, up from 19% in 2019, according to the Centers for Disease Control and Prevention.

Mental-health screening often occurs in doctor’s offices, where patients fill out questionnaires during routine checkups or other appointments. The goal is to spot at-risk people who might not be showing obvious signs, so that the person can get the correct diagnosis and potentially get connected to care before they reach a crisis point.

As for people over 65, the article notes that “some anxiety-disorder screening questionnaires emphasize issues with sleep, pain and fatigue, which also often increase with age.” So screening older adults for those risk factors might turn up a lot of older people who are, you know, just regular old, tired and creaky.

It does strike me that they ought to come up with a different screening regimen for older people, rather than just declining to issue screening recommendations at all.

Magic mushrooms slowly entering the mainstream as mental health treatment

Scientific American takes a look at the growing use of the psychoactive ingredient in magic mushrooms to treat all manner of mental health issues:

Magic mushrooms are undergoing a transformation from illicit recreational drug to promising mental health treatment. Numerous studies have reported positive findings using psilocybin—the mushrooms’ main psychoactive compound—for treating depression as well as smoking and alcohol addiction, and for reducing anxiety in the terminally ill. Ongoing and planned studies are testing the drug for conditions that include opioid dependence, PTSD and anorexia nervosa.

This scientific interest, plus growing social acceptance, is contributing to legal changes in cities across the U.S. In 2020 Oregon passed statewide legislation decriminalizing magic mushrooms, and the state is building a framework for regulating legal therapeutic use—becoming the first jurisdiction in the world to do so. For now psilocybin remains illegal and strictly controlled at the national level in most countries, slowing research. But an international push to get the drug reclassified aims to lower barriers everywhere.

After a flurry of research in the 1950s and 1960s, psilocybin and all other psychedelics were abruptly banned, partly in response to their embrace by the counterculture. Following the 1971 United Nations Convention on Psychotropic Substances, psilocybin was classed in the U.S. as a Schedule I substance—defined as having “no currently accepted medical use and a high potential for abuse.” Psilocybin production was limited, and a host of administrative and financial burdens effectively ended study for decades. “It’s the worst censorship of research in history,” says David Nutt, a neuropsychopharmacologist at Imperial College London.

You can read the rest at this non-paywalled link.

Scientists are starting to re-think the serotonin connection as the major driver behind depressive disorders

It has been simple for so long: attempt to treat many depressive disorders by increasing serotonin levels with a variety of pharmaceuticals.

However, as helpful as many people might find their anti-depression drugs to be, researchers are learning that serotonin is not as all-encompassing a factor as they once thought it was:

For the last half-century, the dominant explanation for depression has centered on serotonin. The basic idea: low levels of brain serotonin or serotonin activity leads to symptoms of depression. This theory, which is known as the “serotonin hypothesis,” is based on several data points, including animal research and the effects of antidepressants that are supposed to work by increasing brain serotonin levels. But, in the last several decades, a number of researchers have challenged the idea that serotonin plays a principal or even major role in depression.

In recent days, the serotonin hypothesis of depression has been explicitly challenged by a number of scientific publications. Most notable (at the time of this writing), a paper published in Nature Molecular Psychiatry reviewed several lines of evidence on the subject of the serotonin-depression connection and concluded that “the main areas of serotonin research provide no consistent evidence of there being an association between serotonin and depression, and no support for the hypothesis that depression is caused by lowered serotonin activity or concentrations.”

Datapoints like this recent study point to a major question: if serotonin isn’t driving depression, what does explain the brain state of the hundreds of millions of people living with it? While there are many potential explanations, here are four major systems that may prove more important to the brains of people with depression, and some ways we may be able to target them.

Those areas to study more include:

  1. Brain Rewiring (Neuroplasticity)

Supporting factors for the neuroplasticity-depression connection include imaging findings, cell study research, and measurements connected to the rewiring process. The basic idea is that in depression, there may be issues with the quality, number, and type of connections our neurons make, and this may help explain depression symptoms. Importantly, research is showing that we may be able to positively affect neuroplasticity through lifestyle factors like exercise, learning new things, and, potentially, certain dietary modifications. There is also data showing that conventional antidepressants, as well as psychedelics, may positively influence neuroplasticity.

  2. Inflammation

When excess or chronic inflammation is present in the brain, it appears to influence a number of pathways involved in depression. First, it may impair the healthy function of neurons by physically damaging them. Inflammation also may block healthy neuroplasticity, while leading to the generation of toxic breakdown molecules like quinolinic acid that could further damage neuron health and contribute to depressive pathology. Within the brain, research shows that unique immune cells called microglia may be key to sustaining inflammation. So how is our inflammatory status regulated? It appears to be sensitive to the quality of our diet, sleep, exercise, stress-lowering interventions, and potentially even nature exposure.

  3. The Gut-Brain Connection

One of the most impressive aspects of our gut is the quantity and diversity of microbes that call it home. These bacteria make up the gut microbiome. Alterations in the bacteria that live in the gut microbiome have been linked to depression. It’s thought that these bacteria may influence brain function through their effects on the vagus nerve (which runs from the gut to the brain), their impact on the immune system (e.g., by affecting levels of inflammation), and through tiny molecules they create (e.g., short-chain fatty acids) which may reach the brain by way of the bloodstream.

  4. Endocrine (Hormonal) Changes

When it comes to the regulation of brain function, a wide range of hormone pathways are thought to play important roles. This research extends to depression. And while certain hormonal changes can be hard to reverse, there’s also much we can do to help improve aspects of our endocrine signaling pathways.

Author Austin Perlmutter, MD, goes on to add insulin and estrogen levels as important possible links to depression for some people.

You can read the rest of his article in Psychology Today at this link.

Writing from personal experience, the simple act of getting some cardiovascular exercise 3-4 times a week allowed me to stop my blood pressure medications and my anti-depressants. I also stopped any substance use whatsoever. (As long as I’m on the subject of drugs, some researchers are seeing pretty amazing results in people with depression and/or PTSD through the use of psychedelics. Good article here.)

Having noted my experience, always remember that one person’s experience is an anecdote and nothing more. You should consult a board-certified mental health professional to find out what’s right for you.

A huge mental health win for Biden and the Democrats

It’s de rigueur to blast the Democrats in Washington as a bunch of timid do-nothings. Some of that is well-deserved. But the recent passage of a gun control law was not celebrated as much as I thought it would be once people actually realized what it does do, rather than what it doesn’t.

Admittedly that law doesn’t do enough. But it did close some important loopholes partially responsible for the flood of illegal guns from gun-permissive states like Iowa, Indiana, Wisconsin and Michigan into gun-restrictive states like Illinois. (Poor Illinois — and Chicago — have the bad luck to be surrounded by states with few real controls on guns. Chicago’s gun problem is largely a surrounding state problem, even as Republicans sneer at Chicago for its astronomical rate of gun violence despite having gun control laws with teeth.)

But the gun-related provisions in the Bipartisan Safer Communities Act of 2022 aren’t the only story. The bill included “the biggest single expansion of mental health care in American history.”

That’s a huge deal. You’ll never hear this on Fox News, however:

The Bipartisan Safer Communities Act has been framed as a gun reform, but perhaps a more fitting frame for the law is as the biggest single expansion of mental health care in American history—and the biggest expansion of Medicaid—with a few gun provisions.

To be sure, packaging the two together makes both gun reform and mental health advocates uncomfortable. The overwhelming majority of people with mental illness will never commit a violent act, though statistics show that they’re more likely to be victims. Tying mental illness with gun violence only stigmatizes it, reducing the likelihood that people who need care will get it. But gun rights activists see mental illness as a convenient distraction from the fundamental issue driving gun violence—the guns themselves.

Getting Republican participation on any gun law reform, though, required that the two be linked. And any investment in our anemic mental health care system—whatever the pretext—should be welcomed. So the new law leverages Medicaid to vastly expand America’s mental health infrastructure through a system of Certified Community Behavioral Health Clinics, or CCBHCs, and school mental health investments.

This piece in The New Republic goes on to say:

The law’s massive investment in mental health care didn’t just happen over the course of a few weeks. It was the product of nearly a decade of slow, methodical planning. [Democratic Michigan Senator Debbie] Stabenow and GOP Missouri Senator Roy Blunt had been co-sponsors of the bill reauthorizing community health center funding—consistent federal dollars to support community clinics—when Stabenow proposed a similar approach to funding mental health care. Until that point, mental health clinics were forced to operate on grants that they simply couldn’t rely on. “On the behavioral health side of things, it [was] all stop and start. It [was] all grants that go away,” Stabenow told me.

She approached the Substance Abuse and Mental Health Services Administration, or SAMHSA, to design quality standards for the proposed mental health centers that would eventually become CCBHCs. These included 24-hour psychiatric crisis services and integration with physical health services. Stabenow and Blunt eventually co-sponsored a 2013 bill that was signed into law the next year by President Obama. The Excellence in Mental Health and Addiction Treatment Act initially allocated $1 billion to fund a demonstration project across 10 states. The program offers enhanced Medicaid reimbursements to cover 80 to 90 percent of the start-up and operating costs for CCBHCs meeting SAMHSA standards.

The results were impressive. According to Stabenow, there was a 60 percent reduction in jail bookings stemming from mental health crises, a 63 percent reduction in mental health emergency room visits, and a 41 percent decline in homelessness.

In a country that has chronically underfunded mental health care, this is a landmark development.

Rates of mental health issues in incarcerated individuals.