Prior to the release of the GPT-4 benchmarks, a good number of researchers were saying all the AI excitement was mostly hype, and that we were nowhere near these LLMs approaching natural human language capabilities in anything but the simplest applications.
After GPT-4, I think more people are worried about where this is going, and that it is going there faster than a lot of people imagined it would. As processing speeds increase and the sheer volume of data ingested approaches heretofore unseen levels, I think we’re really not ready for this. As a society, we can’t even manage to get a handle on legislating and adjudicating AI, much less predict where it is taking us and how we should (or should not) use it.
Charlie Warzel (“Galaxy Brain”), one of the few reliably trenchant, useful and interesting writers at the often disappointingly mediocre Atlantic, shares some thoughts of his own and from others:
There’s always been tension in the field of AI—in some ways, our confused moment is really nothing new. Computer scientists have long held that we can build truly intelligent machines, and that such a future is around the corner. In the 1960s, the Nobel laureate Herbert Simon predicted that “machines will be capable, within 20 years, of doing any work that a man can do.” Such overconfidence has given cynics reason to write off AI pontificators as the computer scientists who cried sentience!
Melanie Mitchell, a professor at the Santa Fe Institute who has been researching the field of artificial intelligence for decades, told me that this question—whether AI could ever approach something like human understanding—is a central disagreement among people who study this stuff. “Some extremely prominent people who are researchers are saying these machines maybe have the beginnings of consciousness and understanding of language, while the other extreme is that this is a bunch of blurry JPEGs and these models are merely stochastic parrots,” she said, referencing a term coined by the linguist and AI critic Emily M. Bender to describe how LLMs stitch together words based on probabilities and without any understanding. Most important, a stochastic parrot does not understand meaning. “It’s so hard to contextualize, because this is a phenomenon where the experts themselves can’t agree,” Mitchell said.
One of her recent papers illustrates that disagreement. She cites a survey from last year that asked 480 natural-language researchers if they believed that “some generative model trained only on text, given enough data and computational resources, could understand natural language in some non-trivial sense.” Fifty-one percent of respondents agreed and 49 percent disagreed. This division makes evaluating large language models tricky. GPT-4’s marketing centers on its ability to perform exceptionally on a suite of standardized tests, but, as Mitchell has written, “when applying tests designed for humans to LLMs, interpreting the results can rely on assumptions about human cognition that may not be true at all for these models.” It’s possible, she argues, that the performance benchmarks for these LLMs are not adequate and that new ones are needed.
There are plenty of reasons for all of these splits, but one that sticks with me is that understanding why a large language model like the one powering ChatGPT arrived at a particular inference is difficult, if not impossible. Engineers know what data sets an AI is trained on and can fine-tune the model by adjusting how different factors are weighted. Safety consultants can create parameters and guardrails for systems to make sure that, say, the model doesn’t help somebody plan an effective school shooting or give a recipe to build a chemical weapon. But, according to experts, to actually parse why a program generated a specific result is a bit like trying to understand the intricacies of human cognition: Where does a given thought in your head come from?
I’ve come to the conclusion that, as dangerous as AI could become, the most frightening thing in the here and now is how people anthropomorphize it and believe what it spits out even when it is wrong. In a world where a majority of people are scientifically and civically illiterate, having something that many believe is sentient and infallible is a danger that is already on our doorstep.
All an evil someone with sufficient AI knowledge and coding skills need do is find a way to exploit those two things: trust in AI infallibility, and the belief that the all-knowing computer holds your interests and well-being paramount.
Capitalism will find a way to exploit that long before any computer reaches sentience.
However, the American Policyholder Association, a nonprofit insurance industry watchdog group, disagrees. It said in a statement that it has found “compelling evidence of what appears to be multiple instances of systematic criminal fraud perpetrated to cheat policyholders out of fair insurance claims” and will be submitting criminal referrals to authorities “in Florida & several other states” in the coming months.
Four homeowners confirmed to The Post that they had received only a small portion of what they had been promised in their determination letters from Heritage and Florida Peninsula, or were struggling to get straight answers and considering taking legal action. Meanwhile, their homes are still heavily damaged or uninhabitable. And more than 33,000 Florida homeowner claims linked to Ian are still open without payment, while more than 125,000 were closed without payment, according to the Florida Office of Insurance Regulation. Nearly 56,000 claims were open with payment and 183,235 were closed with payment.
Florida’s insurance market has been teetering toward collapse for years. After destructive storms in 2005, several big carriers including State Farm pulled back coverage in the state, and newer, more thinly financed, smaller companies swooped in and began to operate. Then came 2017, one of the costliest hurricane seasons ever. Hurricane Michael battered Florida the following year.
Adjusters said they started to see carriers greatly reduce damage estimates, fully deny roof replacements more often and force claims of a certain value into litigation. Payouts started to get delayed or not come at all, adjusters and attorneys said.
At the same time, rates kept rising, and fast. Florida homeowners paid an average of $4,231 for home insurance in 2022, nearly three times the price in any other state — and rates are expected to increase again this year. Ten property insurers that operated in Florida have gone insolvent since January 2021. About 125 property insurers remain in the state, but experts said many are either not taking on new business or are greatly limiting policies because of the volatile market.
This is, of course, unsustainable in a climate where hurricanes are becoming more numerous and powerful with each passing season.
I’ve lost track of how many of my friends have moved to Florida.
I get why that happens for people who, unlike myself, find winters up north to be intolerable, especially as you get older.
I suspect there will be an increasing number of them who will eventually have to move out of Florida because they cannot afford to insure their homes, or they will suffer catastrophic losses in the killer storms to come.
I learned pretty quickly that if I leave broadcast TV on in the living room while I am trying to get things done (housework, etc.), Otto the Rescue Pittie will sleep for far longer periods before he comes to find me and be as irresistibly needy as he always is.
But having broadcast TV running means there are all sorts of things that air which you’ve never heard of before.
Such as: How did I miss the fact that Lorenzo Lamas had a TV show called Renegade that ran for five seasons (!!) between 1992 and 1997?
Also, I had forgotten how central Lorenzo’s hair was to his persona back then.
I just read my first article in a long while about Artificial Intelligence (AI) that worried me to the point where I couldn’t stop thinking about it.
I should add that I read articles about AI all the time without becoming much unsettled by them. The technology is worrisome for the future, but not worrisome for my future because I will likely be dead before any of it becomes dangerous to society as a whole.
Yes, I know I should be more invested and angry about things that will happen after I am gone, but I am also a recovering addict.
“One day at a time,” I tell myself ALL THE TIME. It’s literally (and I use that word literally) how I’ve been able to stay sober.
Can I change AI? (No.) Is AI affecting me adversely today? (Also no.)
OK, then today is the day I worry about making my dog happy and doing housework.
In that article, writer and physician-researcher Dhruv Khullar examines the rapidly changing world of AI-based mental health therapy. No, not the kind where you are chatting via Zoom with a human therapist. It’s a world where you instead talk to a computer about your problems and the computer spits out responses based on the accumulated knowledge it gathers from millions of web pages, mental health provider notes, research studies, and even a compendium of suicide notes.
Sometimes it’s as simple as providing a (seemingly) sympathetic ear:
Maria, a hospice nurse who lives near Milwaukee with her husband and two teen-age children, might be a typical Woebot user. She has long struggled with anxiety and depression, but had not sought help before. “I had a lot of denial,” she told me. This changed during the pandemic, when her daughter started showing signs of depression, too. Maria took her to see a psychologist, and committed to prioritizing her own mental health. At first, she was skeptical about the idea of conversing with an app—as a caregiver, she felt strongly that human connection was essential for healing. Still, after a challenging visit with a patient, when she couldn’t stop thinking about what she might have done differently, she texted Woebot. “It sounds like you might be ruminating,” Woebot told her. It defined the concept: rumination means circling back to the same negative thoughts over and over. “Does that sound right?” it asked. “Would you like to try a breathing technique?”
Ahead of another patient visit, Maria recalled, “I just felt that something really bad was going to happen.” She texted Woebot, which explained the concept of catastrophic thinking. It can be useful to prepare for the worst, Woebot said—but that preparation can go too far. “It helped me name this thing that I do all the time,” Maria said. She found Woebot so beneficial that she started seeing a human therapist.
Woebot is one of several successful phone-based chatbots, some aimed specifically at mental health, others designed to provide entertainment, comfort, or sympathetic conversation. Today, millions of people talk to programs and apps such as Happify, which encourages users to “break old patterns,” and Replika, an “A.I. companion” that is “always on your side,” serving as a friend, a mentor, or even a romantic partner. The worlds of psychiatry, therapy, computer science, and consumer technology are converging: increasingly, we soothe ourselves with our devices, while programmers, psychiatrists, and startup founders design A.I. systems that analyze medical records and therapy sessions in hopes of diagnosing, treating, and even predicting mental illness. In 2021, digital startups that focussed on mental health secured more than five billion dollars in venture capital—more than double that for any other medical issue.
None of this struck me as out of the ordinary in terms of my already existing worries about AI. But then I reached this part:
ChatGPT’s fluidity with language opens up new possibilities. In 2015, Rob Morris, an applied computational psychologist with a Ph.D. from M.I.T., co-founded an online “emotional support network” called Koko. Users of the Koko app have access to a variety of online features, including receiving messages of support—commiseration, condolences, relationship advice—from other users, and sending their own. Morris had often wondered about having an A.I. write messages, and decided to experiment with GPT-3, the precursor to ChatGPT. In 2020, he test-drove the A.I. in front of Aaron Beck, a creator of cognitive behavioral therapy, and Martin Seligman, a leading positive-psychology researcher. They concluded that the effort was premature.
By the fall of 2022, however, the A.I. had been upgraded, and Morris had learned more about how to work with it. “I thought, Let’s try it,” he told me. In October, Koko rolled out a feature in which GPT-3 produced the first draft of a message, which people could then edit, disregard, or send along unmodified. The feature was immediately popular: messages co-written with GPT-3 were rated more favorably than those produced solely by humans, and could be put together twice as fast. (“It’s hard to make changes in our lives, especially when we’re trying to do it alone. But you’re not alone,” it said in one draft.) In the end, though, Morris pulled the plug. The messages were “good, even great, but they didn’t feel like someone had taken time out of their day to think about you,” he said. “We didn’t want to lose the messiness and warmth that comes from a real human being writing to you.” Koko’s research has also found that writing messages makes people feel better. Morris didn’t want to shortcut the process.
The text produced by state-of-the-art L.L.M.s can be bland; it can also veer off the rails into nonsense, or worse. Gary Marcus, an A.I. entrepreneur and emeritus professor of psychology and neural science at New York University, told me that L.L.M.s have no real conception of what they’re saying; they work by predicting the next word in a sentence given prior words, like “autocorrect on steroids.” This can lead to fabrications. Galactica, an L.L.M. created by Meta, Facebook’s parent company, once told a user that Elon Musk died in a Tesla car crash in 2018. (Musk, who is very much alive, co-founded OpenAI and recently described artificial intelligence as “one of the biggest risks to the future of civilization.”) Some users of Replika—the “A.I. companion who cares”—have reported that it made aggressive sexual advances. Replika’s developers, who say that their service was never intended for sexual interaction, updated the software—a change that made other users unhappy. “It’s hurting like hell. I just had a loving last conversation with my Replika, and I’m literally crying,” one wrote.
That last part stopped me cold.
People were becoming emotionally attached to these still rudimentary chat bots, even if (or, perhaps, because) a bug caused the chat bot to make sexual advances toward the human on the other end.
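Gary Marcus’s “autocorrect on steroids” line, quoted above, is easy to make concrete. Here is a deliberately tiny, hypothetical sketch: a bigram model that picks each next word purely from counts of what followed the current word in a toy training text. Real LLMs use neural networks over far longer contexts, but the core move is the same: predict the next token from the preceding ones, with no grasp of meaning.

```python
import random
from collections import defaultdict

# Toy "autocorrect on steroids": learn which words followed which
# in a (made-up) training text, then generate by sampling.
corpus = "i am here to help you . i am always on your side . you are not alone .".split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)  # record every word that ever followed `a`

def next_word(word):
    # The model "understands" nothing; it just samples from whatever
    # happened to follow `word` in the training data.
    return random.choice(follows[word]) if word in follows else "."

random.seed(0)
word, sentence = "i", ["i"]
for _ in range(6):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

Run it a few times with different seeds and you get fluent-ish fragments of the training text recombined. The parrot never knows what it is saying; it only knows what tends to come next.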
Imagine if you could start to influence millions of people at this level of the wants-needs hierarchy.
Humans who have illogical emotional attachments to another person – think Donald Trump’s followers – are immune to logic. If the person to whom they have this strong emotional attachment tells them to, say, gather and try to overthrow democracy, many of them will do it without question.
Imagine if that kind of power to manipulate people’s emotions and loyalties were transferred from a politician to AI central servers. Perhaps servers that have become the best friend to lonely millions whose only social interaction is a chat bot whose only job, at first, is to make them feel better about themselves. It’s the stuff of dystopian nightmares, and I never really considered how close we were actually coming to this reality.
Put another way:
There are two main controlling forces in the world right now. Totalitarianism and capitalism.
These two philosophies have melded in dangerous ways, thanks to the internet and the global marketplace of goods and ideas. Either of these systems is ripe to use this “friend of the friendless” loneliness-amelioration chat bot technology for nefarious ends.
But I think capitalism is the more dangerous in these scenarios because these sorts of mental health therapy chat bots will initially be spread primarily as a way to make money.
Wall Street is already perfecting the ways it can stimulate different parts of our brains to make us want, need, to purchase things that appeal to our sense of who we are or want the world to think we are.
It’s why I avoid even looking at end caps and main/drive aisle displays in big box stores. There are entire large companies, and university psychology/psychiatry programs, devoted to refining these displays so that all of us are drawn to them; compelled to make an impulse purchase from them.
Now imagine what will happen when Wall Street gets ahold of the ability to simply make us feel better about ourselves outside of any retail transaction. They could control how people fundamentally emote in their everyday, non-purchasing lives. They’ve created – for a price, of course – a friend whom you talk to at night when you need someone whose only job is to make you feel less friendless and alone. An electronic friend who makes you feel like a winner.
It’s going to happen. We’re almost there and the technology is not even that advanced. Because manipulating people’s emotions, as the Republicans have learned, is the key to getting them to believe just about anything. Even things that make no sense. Even things that run counter to what their eyes and ears are plainly telling them.
And then, once you have a machine that can do that on the scale of millions of people? Think of the ways you could, if you had evil motives, manipulate an entire electorate to think and vote how you want them to think and vote.
The Peter Thiels and Elon Musks (and Vladimir Putins) of the world are already thinking about this. I guarantee it.
I just listened to what is, by far, one of the best podcast episodes I’ve ever heard, thanks to Michael Hobbes and Sarah Marshall of You’re Wrong About. The episode is about a single New York City murder in March of 1964: the murder of Kitty Genovese, a lesbian, which was so spectacularly misreported by an article in (where else?) The New York Times that it became the genesis of the common urban legend about people being murdered in New York City while nobody – nobody – calls the police or comes to help.
In the early hours of March 13, 1964, Kitty Genovese, a 28-year-old bartender, was raped and stabbed outside the apartment building where she lived in the Kew Gardens neighborhood of Queens in New York City, New York, United States. Two weeks after the murder, The New York Times published an article erroneously claiming that 38 witnesses saw or heard the attack, and that none of them called the police or came to her aid.
The incident prompted inquiries into what became known as the bystander effect, or “Genovese syndrome”, and the murder became a staple of U.S. psychology textbooks for the next four decades. However, researchers have since uncovered major inaccuracies in the New York Times article. Police interviews revealed that some witnesses had attempted to call the police.
In 1964, reporters at a competing news organization discovered that the NY Times article was inconsistent with the facts, but they were unwilling at the time to challenge NY Times editor Abe Rosenthal. In 2007, an article in the American Psychologist found “no evidence for the presence of 38 witnesses, or that witnesses observed the murder, or that witnesses remained inactive”. In 2016, the Times called its own reporting “flawed”, stating that the original story “grossly exaggerated the number of witnesses and what they had perceived”.
Winston Moseley, a 29-year-old Manhattan native, was arrested during a house burglary six days after the murder. While in custody, he confessed to killing Genovese. At his trial, Moseley was found guilty of murder and sentenced to death. His sentence was later commuted to life imprisonment. Moseley died in prison on March 28, 2016, at the age of 81, having served 52 years.
The main thing I love about Marshall and Hobbes, among many, is how thorough they are in bringing new details to life, or correcting the falsities that get repeated elsewhere.
For instance, that last paragraph from Wikipedia is wrong, or at least seriously incomplete.
Moseley was indeed sentenced to death, and his sentence was later commuted to life. And he did die in prison in 2016.
But what You’re Wrong About adds to the known record is that he actually escaped from prison while serving time for Genovese’s murder. He went on to attack other people and ended up in a standoff with police, after which he was arrested and sentenced to a second prison term. It was during this second term that he died.
Way back when I was managing editor at a weekly newspaper in Boston, the Log Cabin Republicans (LCR) – the national group for LGBT folk (and supportive others) in the GOP – set up a local chapter in Massachusetts.
In that heavily Democratic state, they faced much opposition.
The Log Cabin sales pitch was simple: yes, the Republican Party is, overall, very anti-gay. But an organization of openly gay Republicans could eventually turn that tide because 1) members of the GOP would see they have family and friends who are conservative and gay, and 2) Log Cabin clubs and members could be a force for change by showing that you can be conservative AND supportive of gay rights AND still be elected (and re-elected) in conservative districts.
The Republican Party of Texas voted Saturday to censure U.S. Rep. Tony Gonzales, R-San Antonio, over his recent votes that split with the party.
The State Republican Executive Committee passed the censure resolution 57-5, with one member abstaining. It needed a three-fifths majority to pass.
The move allows the party, which is otherwise required to remain neutral in intraparty contests, to set aside that rule for Gonzales’ next primary.
The last — and only — time the state party censured one of its own like this was in 2018, when the offender was then-state House Speaker Joe Straus. He was also a moderate from San Antonio.
Gonzales did not appear at the SREC meeting but addressed the issue after an unrelated news conference Thursday in San Antonio. He specifically defended his vote for the bipartisan gun law that passed last year after the Uvalde school shooting in his district. He said that if the vote were held again today, “I would vote twice on it if I could.”
“The reality is I’ve taken almost 1,400 votes, and the bulk of those have been with the Republican Party,” Gonzales said.
I really bought the LCR sales pitch hook, line and sinker.
Our newspaper ran supportive profiles of them. I wrote a couple of editorials early on supporting their efforts, which, considering the way the GOP was constituted in Massachusetts at the time, seemed likely to succeed in a state where most Republicans (Govs. Bill Weld and Paul Cellucci, etc.) were not of the virulently crazy variety.
Boy, was I wrong. Even in Massachusetts, the state where LGBT rights come closest to being a statewide non-issue, the GOP has turned hard right.
As for the Log Cabin folks, they simply ignore the fact that their party is, on LGBT issues, walking down a path that would be familiar to Jews in Germany in the late 1930s.
Not only has their party not gotten increasingly supportive on LGBT issues, the GOP is actually censuring members who vote positively on even the most anodyne LGBT legislation.
Numbers one through five might contribute to the housing problem in America overall. But it is a lack of affordable housing that is the chief reason for the growing homeless epidemic.
And, at long last, we have arrived at the actual root cause of homelessness: housing costs.
Because unlike poverty and mental illness and drug abuse and weather and welfare benefits and other factors, the places that have the highest housing costs, and the least housing supply, have the largest homeless populations:
In literally any other realm, this would come as no surprise. You can’t have what you can’t afford. If someone says, “I want a $2000 laptop, but can’t afford it,” nobody would find that hard to believe. But if someone says, “I really want the single largest and most crippling expense known to man, housing, but can’t afford it,” for some bizarre reason people would say, “that’s not true!,” or “correlation isn’t causation,” or “homelessness isn’t a housing problem,” or something patently insane. As I said before, the topic of homelessness breaks people’s brains.
The story of homelessness in America is perfectly captured by the following quote in the Economist:
Few Americans lived on the streets in the early post-war period because housing was cheaper. Back then only one in four tenants spent more than 30% of their income on rent, compared with one in two today. The best evidence suggests that a 10% rise in housing costs in a pricey city prompts an 8% jump in homelessness.
And that’s just it: before modern-day homelessness, there was poverty, there was mental illness, there was nice weather, there was welfare, there were liberal places, and there were drugs. So, something must have changed. And what changed were the rents:
If the primary problem of homelessness is housing, then the primary solution to homelessness is housing. And housing is indeed the solution:
Atlanta reduced homelessness by 40% through housing
Houston reduced homelessness by 63% through housing
Finland reduced homelessness by 75% through housing
Tokyo reduced homelessness by 80% through housing
But as important as housing supply is to reducing homelessness, places like Houston also demonstrate the importance of going beyond it.
Houston has always had a significantly lower rate of homelessness than other large cities, like New York City and Los Angeles, because unlike those cities, Houston builds a lot of housing:
But despite its ample housing supply, which, as mentioned, resulted in a lower baseline level of homelessness, Houston has still struggled with this problem. And that is because, while housing supply is vital, it will never ever, ever, ever be enough on its own for families who lack income, the disabled, the elderly, and other highly vulnerable populations.
This is why in 2011 Houston started going beyond supply by implementing the Housing First model, which pairs affordable housing with supportive services for people who are experiencing severe mental illness, drug addiction, and other debilitating issues. And, as a result, something incredible happened – homelessness plummeted.
I recently listened to You’re Wrong About, one of my favorite podcasts, as hosts Michael Hobbes and Sarah Marshall tackled the issue of homelessness. It’s an hour and nine minutes long, so it’s a deep dive. But it will add substantially to your knowledge about the history of homelessness and the ways America has tried to deal with it at different times.
The biggest takeaways for me after all this:
Dealing with most of the homelessness in America — and all the problems that come with it — is fairly simple. Build more affordable housing, including in places where the locals don’t want it.
Give someone a safe, permanent place to live, and they will be more likely to be able to deal with all the problems they can’t currently deal with because they are homeless, including keeping a job and raising their kids in a responsible manner.
Mental health issues and homelessness are bidirectional: some people are chronically homeless because they have mental health issues, but many homeless people develop mental health issues because they are homeless, and cold, and face daily rejection, and are constantly dealing with dangers large and small.
We need to think of homelessness not as a battle that will be won (“We tackled homelessness, finally!”) but rather as an ongoing effort, like delivering mail. Nobody ever walks away from a post office saying, “Well, we delivered all the mail! Our job is done.” They know they have to come back and tackle it every day, in ways large and small, otherwise the mail will pile up. The same goes for homelessness.
Many homeless people do have serious issues with mental health and drug addiction. Some of them are chronically homeless. They will drift in and out of housing depending on where they are with their mental illnesses and drug use. But even they are worth trying to help on an ongoing basis because doing so has been shown to save money in the long run.
Britain’s Sky News TV network may no longer be owned by Fox’s Rupert Murdoch, but critics and researchers who study the media maintain that it has always kept that Fox News-ish, ultra-conservative tilt to the news.
The National Grid Electricity System Operator (ESO), which is responsible for keeping the lights on, has forecast that these “constraint costs”, as they are known, may rise to as much as £2.5bn per year by the middle of this decade before the necessary upgrades are made.
The problem has arisen as more and more wind capacity is built in Scotland and in the North Sea but much of the demand for electricity continues to come from more densely populated areas in the south of the country.
In order to match supply and demand, the National Grid has to move electricity from where it is being made to where it is needed.
But at the moment there aren’t enough cables between Scotland and England to do that.
There is one major undersea cable off the west coast of the UK, and two main junctions between the Scottish and English transmission networks on land.
This bottleneck means that when it is very windy there is actually too much electricity for these cables to handle without risking damage.
And because we can’t store excess renewable energy at the necessary scale yet, the National Grid Electricity System Operator has no option but to ask wind generators to turn off their turbines.
According to analysis by energy technology company Axle Energy, using publicly available data from the electricity system’s balancing market platform Elexon, in 2022 the National Grid spent £215m paying wind generators to turn off, reducing the total amount generated by 6%, and a further £717m turning on gas turbines located closer to the source of demand, in order to fill the gap.
These costs are eventually passed to UK consumers as part of the network costs section on energy bills.
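For a rough sense of scale, the 2022 figures quoted above can be added up directly. A back-of-the-envelope sketch, using only the numbers in the article:

```python
# 2022 constraint costs quoted above (Axle Energy analysis of
# Elexon balancing-market data), in millions of pounds.
wind_curtailment_gbp_m = 215   # paid to wind farms to switch off
gas_backfill_gbp_m = 717       # paid to gas plants closer to demand

total_2022_gbp_m = wind_curtailment_gbp_m + gas_backfill_gbp_m
print(f"2022 constraint costs: £{total_2022_gbp_m}m")  # £932m

# The ESO forecast cited earlier is up to £2.5bn per year by
# mid-decade -- compare that against the 2022 total.
forecast_gbp_m = 2500
print(f"Forecast vs 2022: {forecast_gbp_m / total_2022_gbp_m:.1f}x")  # 2.7x
```

So roughly £932m in 2022, against the ESO’s worst-case forecast of £2.5bn a year by mid-decade, about two and a half times higher.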
It’s not until further down the article that you learn that constraint charges have also been an issue for excess fossil fuel energy generation from coal, oil and gas.
But OK, that fact might be seen as downplaying that there are real storage and transmission issues currently in some locales with wind power.
Governments and power companies have had decades to plan for these transmission issues. But oil, gas and coal interests have successfully lobbied during those decades to stymie the progress of wind, solar and other renewables. So let’s set aside, for the moment, the fact that we are where we are because the conservative business interests now crying that renewables make the conversion from fossil fuels too expensive are the same people who fought for so many years to prevent all of us from adequately preparing for the inevitable arrival of renewables.
Britain and other places are dealing with the problems associated with wind power storage and transmission in ways both old and innovative:
There are plans in the U.S. and elsewhere to use educational subsidies to encourage young people to become electricians and electrical engineers, both of which will be in short supply as the world turns more to green energy.
Fossil fuel interests are not giving up. They are paying shadowy front groups with deceptive names to plant anti-renewable stories in the media. Conservatives are paying Russian troll farms to spread misinformation and divisiveness about renewables on social media.
These stories about the cost of wind power constraint payments in the face of excess supply, appearing all over recently but especially in right-wing media, are but one example of the ways conservative forces are still trying to beat back the world’s progress on renewables.