Nicholas Carr summarizes some thoughts of Stephen Wolfram. For a long time we have believed that natural language processing was just around the corner. But what if we had it backwards? What if the future was not computers talking like people, but people talking like computers? And how do we feel about that?
Carr’s treatment of this possibility tends towards the dystopic:
If we’re going to depend on computers to fulfill our purposes, we’re going to need a shared language. We’re going to need to describe our purposes, our desires, in a code that can run successfully through a machine. Most of those who advocate teaching programming skills to the masses argue that learning to code will expand our job prospects. Wolfram’s view is more interesting. He argues that we need to learn to code in order to expand our ontological prospects.
In adopting a new language, a machine language, to describe our purposes, we will also, necessarily, change those purposes. That is the price of computer automation. “What do the humans do” in a world where “things can get done automatically?” Wolfram asks. The answer, of course, is that we compose the instructions for the machines to follow to fulfill our wishes. Will it compile? is the iron law of programming. Either the machine can follow the instructions written for it, or it can’t. Will we compile? would seem to be the great ontological question that lies ahead of us in our automated future. Have we formulated our purposes in such a way that machines can carry them out? (Source)
While there are dystopic aspects to this formulation, it's helpful to see this shift through the lens of history. Because this is not the first time that external forces have changed the way we think.
The most obvious shift is the evolutionary accident that allowed us to produce meaningful language. If you're like me, you can quite often hear, as you walk around, a running voice in your head — thinking thoughts, making plans, reacting to experience. We think of that voice as the stream of our thought, as if our thought was just there, independent of language. But of course what has happened is that our minds have evolved to make better use of the technology of speech, and created consciousness in the process.
Did consciousness, based on our acquired tendency to filter experience through language, reduce our capacity for thought? In a small way, yes. In almost all meaningful ways, no.
The textual revolution is another example. Scientific thought became possible with the invention of writing, which could separate the utterance from the author, externalizing speech as ideas. That ability was made broadly available with the introduction of cheap, mass-produced texts. But we did not end up with texts that could more accurately capture the human voice. Instead, we learned to formulate thoughts in the medium of text, and to think thoughts not possible without text. See Age of the Incunable
Did the invention of text, and the necessity of putting our thoughts on the page, decrease our capacity for thought? In some limited ways, yes. In almost all meaningful ways, no.
A smaller example: consider music notation. Music notation was originally a record of music as played. But the direction of musical notation was not that it became more like performance. The direction was that composers began to think in notes, movements, harmonies, seeing relationships impossible without the notation. Similar changes happened with the introduction of recording, which started out as a way to capture performance, but in the hands of folks like Brian Wilson became a new way to conceptualize music, a way to think thoughts impossible before the multitrack. See Music Notation as a Medium for Thought
Did recording technology and musical notation decrease our musical capacity, or increase it?
The point here is not really “will we compile?”, as the above examples show. That assumes we have a vast and broad intent that we are trying to shove through a narrow linguistic domain.
In fact, time and time again throughout history our interactions with machines and new methods of notation have shown not that the tools are too limited for our expansive intent, but that our intent is too limited for the expansive tools at our disposal. This was Alan Kay's (and Ivan Sutherland's, and Doug Engelbart's, and Ted Nelson's) point about computing. Rather than building a "Star Trek computer", we could start to think in new modes — we could discuss issues by altering simulations of them, think in terms of models, see a sort of intertextuality impossible in daily life. Working with machines, we would think new sorts of thoughts not expressible without them.
So Wolfram is right, but not in the way that Carr implies. Look at Carr’s presentation:
The answer, of course, is that we compose the instructions for the machines to follow to fulfill our wishes.
Look at this through the lens of a medium like musical notation. In cultures without musical notation, there is no conception of movements within a song, just as a pre-literate culture will have no sense of paragraphs as a unit of thought. It has been noted that in pre-notation cultures it is not even possible to start a song in the middle for the purposes of practice. See Absence of Notation
Is it really fair to say that the person who writes music is more constrained than the person who practices by rote? That the modern writer's thoughts are more limited than those of the 7th-century B.C.E. person who engaged in nothing but the spoken word? Certainly, we lose a certain perspective as we evolve. But, on the whole, the notations we develop for our technologies expand our capacity for thought in the long run, making a broader variety of experience comprehensible to us, whether that notation is mathematical, musical, literary, or computational.
The problem, in the end, is that too many of us — including many in Silicon Valley — see the world the way Carr does. For these people, computation is just a low-pass filter on some predetermined set of tasks. In this formulation, the task is to increase the fidelity of the translation, to create the Star Trek computer so we can communicate our 20th century thoughts to our 21st century technology with more ease. That’s fine as it goes. But it’s not, and has never been, where the truly exciting story lies.