Prior to the GPT-4 benchmarks being released, a good number of researchers were saying that all the AI excitement was mostly hype, and that these LLMs were nowhere near approaching natural human language capabilities in anything but the simplest applications.
After GPT-4, I think more people are worried that this is moving faster than most of us imagined it would. As processing speeds increase and the sheer volume of data ingested reaches heretofore unseen levels, I don't think we're ready for this. As a society, we can't even get a handle on legislating and adjudicating AI, much less predict where it is taking us and how we should (or should not) use it.
Charlie Warzel ("Galaxy Brain"), one of the few reliably trenchant, useful, and interesting writers in the often disappointingly mediocre Atlantic, shares some thoughts of his own and from others:
There’s always been tension in the field of AI—in some ways, our confused moment is really nothing new. Computer scientists have long held that we can build truly intelligent machines, and that such a future is around the corner. In the 1960s, the Nobel laureate Herbert Simon predicted that “machines will be capable, within 20 years, of doing any work that a man can do.” Such overconfidence has given cynics reason to write off AI pontificators as the computer scientists who cried sentience!
Melanie Mitchell, a professor at the Santa Fe Institute who has been researching the field of artificial intelligence for decades, told me that this question—whether AI could ever approach something like human understanding—is a central disagreement among people who study this stuff. “Some extremely prominent people who are researchers are saying these machines maybe have the beginnings of consciousness and understanding of language, while the other extreme is that this is a bunch of blurry JPEGs and these models are merely stochastic parrots,” she said, referencing a term coined by the linguist and AI critic Emily M. Bender to describe how LLMs stitch together words based on probabilities and without any understanding. Most important, a stochastic parrot does not understand meaning. “It’s so hard to contextualize, because this is a phenomenon where the experts themselves can’t agree,” Mitchell said.
One of her recent papers illustrates that disagreement. She cites a survey from last year that asked 480 natural-language researchers if they believed that “some generative model trained only on text, given enough data and computational resources, could understand natural language in some non-trivial sense.” Fifty-one percent of respondents agreed and 49 percent disagreed. This division makes evaluating large language models tricky. GPT-4’s marketing centers on its ability to perform exceptionally on a suite of standardized tests, but, as Mitchell has written, “when applying tests designed for humans to LLMs, interpreting the results can rely on assumptions about human cognition that may not be true at all for these models.” It’s possible, she argues, that the performance benchmarks for these LLMs are not adequate and that new ones are needed.
There are plenty of reasons for all of these splits, but one that sticks with me is that understanding why a large language model like the one powering ChatGPT arrived at a particular inference is difficult, if not impossible. Engineers know what data sets an AI is trained on and can fine-tune the model by adjusting how different factors are weighted. Safety consultants can create parameters and guardrails for systems to make sure that, say, the model doesn’t help somebody plan an effective school shooting or give a recipe to build a chemical weapon. But, according to experts, to actually parse why a program generated a specific result is a bit like trying to understand the intricacies of human cognition: Where does a given thought in your head come from?
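To make the "stochastic parrot" image a little more concrete, here is a minimal toy sketch in Python. It is entirely my own illustration, nothing like the scale or machinery of GPT-4: it just counts which word followed which in a scrap of text and then samples continuations from those counts. A real LLM replaces the counting with a neural network over subword tokens, but the generation step is the same kind of probabilistic next-word draw, with no meaning anywhere in sight.

```python
# Toy "stochastic parrot": continue text by sampling the next word purely
# from observed follow-on probabilities. No grammar, no meaning.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which word follows which (a tiny bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    words, weights = zip(*follows[prev].items())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation, one probabilistic step at a time.
word, output = "the", ["the"]
for _ in range(6):
    if not follows[word]:  # dead end: nothing in the corpus ever followed this word
        break
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Run it a few times and you get different, plausible-sounding strings like "the dog sat on the rug" or "the cat sat on the mat the"; the program has no idea what a cat or a rug is, which is exactly Bender's point.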
I’ve come to the conclusion that, as dangerous as AI could become, the most frightening thing happening in the here and now is that people anthropomorphize it and believe what it spits out even when it is wrong. In a world where a majority of people are scientifically and civically illiterate, having something that many people believe is sentient and infallible is a danger already on our doorstep.
All an evil someone with sufficient AI knowledge and coding skills need do is find a way to exploit those two things: trust in AI infallibility, and the belief that the all-knowing computer holds your interests and well-being paramount.
Capitalism will find a way to exploit that long before any computer reaches sentience.
