Humans are Large Language Models, only better. Reflecting on a couple of recent blog posts on the topic …

Andreas Ortmann
May 12, 2024


As I have been reading and discussing large language models, I find I’ve learned as much about us humans as I have about the AI that replicates some of what we do. Introspecting, am I really that much more than a large language model? …

I recognize that I have around a thousand stories. Most of my conversation and writing, especially for this blog, opeds, interviews, and discussions, consists of prompts which lead to one of those prepackaged stories. …

The Grumpy Economist (John Cochrane) recently had an interesting blog post on AI, humans, and the tools of economics.

He makes the intriguing claim that a) most of his stories are not original but the result of his interactions with other people, and b) learning and education are largely formal training to learn more stories in response to prompts. …

He then ponders what the ever higher sophistication of Large Language Models (LLMs) means for the business of blogging, op-eds, and so forth:

Your natural instinct might be, this business is toast and will be totally displaced by automation. Not so fast. Here is a new story for you. Look at supply and demand: …

The upper supply curve (rising to the right) shows the supply of commentary, and its intersection with current demand for commentary (the light blue dot). LLMs push the supply curve down and to the right, along the demand curve, to a new transitional equilibrium indicated by the dark green point. That is intuitive because much of one’s (blog) writing can be farmed out to AI tools (and clearly many of my students understand that point just fine). At the dark green point, us humans produce more for the same cost in time, or we write faster.
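To make these comparative statics concrete, here is a minimal sketch with linear supply and demand curves; the functional forms and all the numbers are my own illustrative assumptions, not Cochrane’s. Lowering the supply curve’s intercept (AI making commentary cheaper to produce) moves the equilibrium to a higher quantity and a lower price; whether total revenue rises, the question the next paragraph turns to, depends on the elasticity of demand.

```python
# Illustrative sketch (my assumptions, not Cochrane's): linear supply and
# demand for "commentary", with AI lowering the supply curve's intercept.

def equilibrium(a, b, c, d):
    """Demand: P = a - b*Q.  Supply: P = c + d*Q.  Returns (Q*, P*)."""
    q = (a - c) / (b + d)
    p = a - b * q
    return q, p

a, b = 100.0, 1.0               # demand intercept and slope (made up)
d = 1.0                         # supply slope (made up)
c_before, c_after = 20.0, 5.0   # AI lowers the supply curve's intercept

q0, p0 = equilibrium(a, b, c_before, d)   # the light blue dot
q1, p1 = equilibrium(a, b, c_after, d)    # the dark green dot

print(f"before AI: Q = {q0:.1f}, P = {p0:.1f}, revenue = {q0 * p0:.1f}")
print(f"after  AI: Q = {q1:.1f}, P = {p1:.1f}, revenue = {q1 * p1:.1f}")

# Price elasticity of demand at the old equilibrium, (dQ/dP) * (P/Q):
eps = (-1.0 / b) * (p0 / q0)
print(f"demand elasticity at the old equilibrium: {eps:.2f}")
# Whether total revenue rises or falls after the supply shift depends on
# this elasticity, which is exactly the question taken up below.
```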

Also at the dark green point, quantity demanded has increased, possibly so much that total revenue increases. It all depends on the elasticities of the demand and supply curves. Since blogging seems a fairly competitive industry, I conjecture that the supply curve will be less steep than indicated in the graph. It is not clear from this framework what the net effect on total revenue is, or whether demand will also shift and lead to yet another transitional equilibrium. Says Cochrane:

This has happened many times before. Moveable type (Gutenberg) lowered the price of books. Did bookselling crash, and the monks starve? No, demand at a lower price was so strong that bookselling took off, and more people made more money doing it. (Though, as always, perhaps different people. The monks went on to other pursuits than copying manuscripts.) The printing press, radio, TV, movies, and the internet each had the same effect.

Cochrane follows up with a discussion of the demand for commentary, its elasticity, and how it could be moderated (e.g., by using AI to condense the flood of blog posts and interesting readings that he comes across; again, that’s certainly something many of my students have become pretty good at).

Current algorithms are said to be pretty good, often too good, at feeding you what you like, but I want new things that expand my set of stories, and best of all the rare ones that successfully challenge my beliefs. …

In sum, perhaps AI will also help on the demand side, shifting demand to the right as well.

Cochrane then goes on to state, correctly in my view, that commentary is both a quality question and a quantity question. Most commentary sux, especially on twitter/X etc. Cochrane expresses the hope that AI editing might dramatically improve the quality of commentary. Maybe. Maybe not.

He then goes on to more intertemporal questions, and ultimately to the claim in the title of the Medium post you are reading:

I also learn a lot by writing. When I write a blog post like this one, I have to think things through, and often the underlying story gets clearer, or I realize it was wrong. If the AI writes all by itself, neither of us is going to get any better. But perhaps the editing part will be just as useful as my slow writing. I learn a lot by conversations my writing sparks with readers, where I often find my ideas were wrong or need revising. Once the comments are taken over by bots I’m not sure that will continue to work, at least until I get a comment-reading bot going.

I am quite sympathetic to that point. While I worked on this Medium post, I had online conversations with several of the students who got flagged by an AI detection tool as having made undeclared use of AI during the case-study part of a final exam.

There seem to be two main standard operating procedures: one is foreign students whose English is wanting, who answer the questions in their own language (possibly using AI tools) and then use translation software to produce the English answers they ultimately submit; the other is students who simply rely on AI tools to give them the answer. (Our marking rubric lists the answers, typically false ones, that ChatGPT would give to our questions, so as to alert markers.)

One of the students we talked with was born in AU and had used AI tools in intriguing ways: she took meticulous AI-assisted notes from the pre-lecture individual core learnings. She showed us the impressive folder with a file for each lecture and walked us through one of the files. I was (we were) in awe. We are about to hire her to help us better understand which parts of the individual pre-lecture learning materials are particularly challenging, and we hope to be able to use the students’ notes to simplify/streamline those materials.

There are all kinds of interesting IP issues here (e.g., the student who took the AI-assisted reading notes was not supposed to feed the pre-lecture materials into an AI tool), although I suspect that horse has bolted and there is no bringing it back. There is also the interesting issue of what over-reliance on AI tools does to critical thinking skills. Arguably, us humans are Large Language Models, only better. And we are better because we are more than large sets of stories that get trained on historic corpora of text data. Which I think is the point that Cochrane made. While in awe, I worry about the student who took those AI-assisted reading notes and what the practice does to her critical thinking skills. A useful parallel might be the folks at cash registers who grew up with readily available calculators but who get stumped when, on paying a bill of $8.30, you give them $10.30. Of course, under-developed critical thinking skills are much more consequential.

(Over at Facebook, a friend commented on my statement that “we are better because we are more than large sets of stories that get trained on historic corpora of text data”: “I hope so, but the statement does require proof that billions of stories contain less information than a finite amount of human reasoning plus a relative handful of stories.” Fair enough, but I did qualify my statement with “arguably”, and it’s not clear to me how that proof could be engineered.)

In The Pinocchio Effect, Economies of Scale and Market Concentration in AI (substack.com), Scott Cunningham briefly reminds us of AI’s potential to augment workers’ productivity, causing demand for some types of work to rise and demand for others to decline or even disappear entirely. The rise in productivity and the accompanying efficiency gains have self-evidently significant implications for economic growth and wage distribution.

In his Substack post, Cunningham focusses on the market concentration and market power that AI will bring about. He predicts that on the supply side, because of the staggering fixed costs of AI and the economies of scale they imply, at most a handful of natural monopolies will emerge. It seems an incontestable point, and Cunningham provides some quite interesting examples and numbers.
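To illustrate the economies-of-scale logic, here is a minimal sketch; the fixed and marginal cost figures are hypothetical stand-ins, not Cunningham’s numbers. With a huge fixed cost and a tiny cost per additional query, the average cost per query keeps falling as volume grows, which is the textbook condition under which natural monopolies emerge.

```python
# Illustrative sketch (the cost figures are my assumptions, not Cunningham's):
# why staggering fixed costs plus tiny per-query costs push average cost
# down with scale and thus favour a handful of very large suppliers.

FIXED_COST = 1_000_000_000   # hypothetical cost of training a frontier model, in $
MARGINAL_COST = 0.002        # hypothetical cost of serving one extra query, in $

def average_cost(queries: float) -> float:
    """Average cost per query: fixed cost spread over volume, plus marginal cost."""
    return FIXED_COST / queries + MARGINAL_COST

for queries in (1e8, 1e9, 1e10, 1e11):
    print(f"{queries:>15,.0f} queries -> average cost ${average_cost(queries):.4f}")

# Average cost keeps falling as volume grows, so the largest supplier can
# always price below a smaller entrant's average cost: the economies of
# scale point toward "at most a handful" of natural monopolies.
```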

Cunningham then proceeds to a discussion of a demand-side effect that he calls the “Pinocchio Effect”:

With OpenAI’s latest update introducing “memory” in ChatGPT-4, these AI systems now keep track of past interactions, significantly enhancing their ability to act as continuous conversational partners. This feature might be primarily viewed as a productivity tool, but it also intensifies the psychological bond users form with their chatbots, leading to what I term the “Pinocchio Effect” — where a non-living program begins to “feel real.” This phenomenon isn’t just a curious side effect; it has profound implications for market dynamics. As these AI entities become more personalized and retain more about their interactions with users, the psychological barrier to discontinuing use grows, akin to ending a relationship with a living being.

In other words, there will be significant financial and psychological switching costs, which will reinforce suppliers’ market power.

Experimental research by Matthew Jackson and colleagues at Stanford highlights this anthropomorphic tendency further, showing that AI can exhibit traits such as high openness and altruism — often outscoring humans in these areas. These traits make AI appear not just helpful, but empathetic and friend-like, reinforcing the psychological realism and attachment users feel towards these digital entities (read more about Jackson’s research).

This blending of sophisticated AI behavior with human-like memory and interaction capabilities suggests a future where AIs serve not just functional roles but become integrated into the social fabric of users’ lives. OpenAI’s policy that deleting an account erases all accumulated memories only deepens the perceived significance of these digital relationships. This setup implies a high cost of switching, not just in losing a service but in “erasing” a friend or confidant, a concept that is becoming less far-fetched as AI continues to evolve.

As AI technologies like ChatGPT become more embedded in daily life, their role expands from tools to companions. This shift, combined with the significant costs associated with developing these technologies, suggests a future dominated by a few large players capable of sustaining such investments. The blurring line between AI as a tool and AI as a psychological companion will challenge traditional notions of interaction, attachment, and even regulation in the tech space, making it an essential area for ongoing economic and ethical exploration.

That’s an interesting point, although not particularly new. AI “friends” have been a thing for a while, and the distress that they can bring about is well-documented. Academic papers have been written about it. Here is a good one.

Consider making your opinion known by applauding, or commenting, or following me here, or on Facebook.
