ChatGPT, the generative artificial intelligence processor found in a growing number of applications, uses “natural language processing” to estimate the sequence of words that users want next in phrases, sentences, and paragraphs. In other words, it’s a calculator. Deal with it.

Cranks and crotchets in high dudgeon over calculators are nothing new. We made our kids learn long division, and their multiplication tables, because… well, because we did. In 1990, Jerry Adler published an article in Newsweek, titled “Creating Problems: It’s time to minimize rote learning and concentrate on teaching children how to think.” The article starts this way:

Let us consider two machines, each capable of dividing 1,128 by 36. The first is a pocket calculator. You punch in the numbers, and in a tenth of a second or so, the answer appears in a digital display, with an accuracy of, for all ordinary purposes, 100 percent.

The second is a seventh grader. You give him or her a pencil and a sheet of paper, write out the problem, and in 15 seconds, more or less, there is a somewhat-better-than-even chance of getting back the correct answer.

As between them, the choice is obvious. The calculator wins hands down, leaving only the question of why the junior high schools of America are full of kids toiling over long division, an army of adolescents in an endless trudge, carrying digits from column to column.

Later in that article, Thomas Romberg, of the University of Wisconsin, Madison, is quoted: “There isn’t anyone out there anymore who makes his living doing long division.”

This argument was unpersuasive to many. Luddites argued that the point for educators was not to obtain the correct answer in the fastest and most reliable way. Rather, learning to do the long division problem “by hand” meant that the student actually understood the process of calculation, rather than simply producing an answer mysterio-mechanically. Still, a more persuasive argument, made by Professor Romberg, is that doing long division is archaic and inefficient: you can’t get paid for it because there is a better and faster way. At some point, we all switch to using a calculator.

That wasn’t always true, of course. The original calculators were just people, called “computers.” They actually did “make their living doing long division,” and computing square roots, and so on. Those people were put out of business by mechanical, and then electronic, calculators and computers of the sort we take for granted today. It was not easy to get a job as a computer, because you had to be smart and quick, and able to focus for long periods. A modern spreadsheet program, installed on an off-the-shelf $700 laptop, can do the work of 1,000 person-hours or more in a few seconds.

The advent of machine/electronic “computing” had two effects. First, it cost thousands of people their jobs. But second, because the cost of computing fell by more than 99.9 percent, there was a massive burgeoning of economic activity. Things became faster, cheaper, and more convenient on a scale that would have seemed like science fiction as recently as 1955.

Old Whines in New Bottleneckers

Note that there are three separate arguments:

(a)   People need to learn how to think, and understand deeply!

(b)   Protect the jobs! People have worked hard to do this!

(c)   New tech is disruptive, and the effects are hard to predict!

On a larger time scale, we have seen exactly the same argument play out over centuries in the case of many new technologies. It is hard to imagine how disruptive the introduction of the printing press was for society, but think about it: There were thousands of people who were highly accomplished scribes, and “illuminators.” An illuminated text, done by artists who had practiced their craft for decades, was a work of art. The cost of such a book was the equivalent of decades of salary for the average worker, well beyond the ability of any but the richest elites to own. The printing press was capable of producing text, and illustrations, at a cost that was (comparatively) so low that skilled manuscript copiers became obsolete within less than a decade.

But, of course, the democratization of books, both because of the reduction in cost and the decision to print in the vernacular instead of only Latin, transformed the European world. As Andrew Pettegree has written in Brand Luther, the net effect was an enormous increase in the number of jobs in the printing industry, and upward trends in literacy, reading, and the ability to reach mass publics. One could argue that the effects, including the Reformation and the shockingly violent wars that it provoked, were disruptive, and of course that’s right. But very few of us, other than Patrick Deneen, want to go back.

More recently, but just as catastrophically for the “workers” involved, we saw the disruptive impact of universal access to GPS on phones using apps such as Waze. London’s famous “black cabs” (originally short for “cabriolet”) could only be operated by licensed drivers. And the most formidable part of the licensing process was simply called “The Knowledge.” Established in 1865, this required that applicants acquire a mental map of all 25,000 streets, lanes, and alleys (London is a maze, not a grid). But ride-share companies, such as Uber, need not require “The Knowledge” because they have “the app.”

Which is better? In large measure (except for cost!) the two are indistinguishable when operating properly. Waze has the advantage of real-time updates on congestion, accidents, and construction, of course. Human drivers who know the shortest route, but don’t know there’s an accident, are at a disadvantage. But all of us have had the experience of Waze, or Google or Apple Maps, telling us to turn into a building, or sending us on a bizarre route just because the AI is confused. Drivers who have paid the costs of acquiring “The Knowledge,” just like the book copyists before them, protest that the new technology should be banned.

But as Mellor and Carpenter argued in their book Bottleneckers, such movements are trying to count benefits as costs. It is good that people no longer have to waste years to acquire “The Knowledge,” just as it is good that people can now spend their time on more productive activities rather than use pencil and paper to compute solutions to long division problems. It is difficult for those who currently find themselves displaced, but in just a few years the dramatic increase in productivity and decline in costs will dwarf those difficulties. These old whines in new bottlenecker form must not be acted on by policymakers.

ChatGPT

And so we come, finally, to ChatGPT. I’m assuming the reader is familiar with the technology, and I want to suggest that the analogy to the printing press, to calculators, and to GPS, is apt. In January 2023, I wrote a piece for Reason that I then considered satire. Now, I’m not so sure. There is nothing conceptually difficult about using natural language processing to create all possible word sequences, for documents ranging from haikus to enormous tomes. Of course, storing and indexing this trove would not be physically possible, but that limitation is at least in principle one that can be overcome. It would be Borges’ “Library of Babel,” only more comprehensive.
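To get a feel for why storage is the binding constraint, here is a rough back-of-the-envelope count. The 50,000-word vocabulary and 300-word essay length are illustrative assumptions, not figures from any particular model:

```python
# Back-of-the-envelope count of the "universal library."
# Illustrative assumptions: a 50,000-word vocabulary and a
# fixed essay length of 300 words.
vocab_size = 50_000
essay_length = 300

# Every position can hold any vocabulary word, so the number of
# distinct essays is vocab_size ** essay_length.
sequences = vocab_size ** essay_length

# Counting decimal digits shows the scale: about 1,410 digits,
# versus roughly 80 digits for the estimated number of atoms in
# the observable universe.
print(len(str(sequences)))  # 1410
```

Even at one essay stored per atom in the universe, only a vanishing fraction of the library would fit; the conclusion holds for any reasonable choice of vocabulary and length.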

And then that is the end of that. There is no more writing to do, it’s done. All we need is to find the right text from the universal library and use it. No writer’s block, no staring at that mocking-blinking cursor, it’s all there.

Of course, I can hear the traditionalists lining up for the old whines. Just like for the calculator: better to learn to think, no shortcuts, good for you to acquire the skill, just because you should, and so on. Further, people actually do “make a living” by writing. But then people made a living by spending years learning to be a human “computer” before calculators came along.

Look, folks. ChatGPT is happening. People are rapidly learning how to use it. For many routine tasks — and, honestly, most writing is routine, not creative — it is faster and actually better to have the AI create the text, at least for the first draft. Or to have the AI create five or more versions of a text so that you can pick one and then edit it.

Does this mean that we as a society will value writing less? Does it mean that the people — and I’d include myself, writing this right now! — who “make a living” writing are going to have to rethink our choices? Does it mean that 20 years from now we will look back, with 20/20 hindsight, and say that the opposition to AI natural language applications was misplaced? I think the answer to all these questions may be “yes.” Deal with it.

Michael Munger

Michael Munger is a Professor of Political Science, Economics, and Public Policy at Duke University and Senior Fellow of the American Institute for Economic Research.

His degrees are from Davidson College and Washington University in St. Louis.

