What we gain, and lose, when machines read for us

Prof. Lutz Koepnick, Vanderbilt University

Computers do more and more of our reading for us today. In most cases, however, we are unaware of this fact. Search engines such as Google read billions of web pages in the blink of an eye to facilitate our quest for knowledge. Servers and routers constantly read incoming blocks of data to reconstruct coherent information from scattered transmission packets. Computers read us whenever we type words on a keyboard, displaying our thoughts on our monitors. GPS devices read satellite positions and preprogrammed maps to provide us with accurate information about our locations and desired directions. Machine voices read prefigured bits and pieces of text to all of us, in all corners of contemporary life.

We have come to rely on machines to read for and to us largely because of their reliability. Computational reading might produce results that strike users as informative and unexpected, but only because it executes code and functions that leave little room for error or deviation. We might not understand exactly what happens inside the machine when Google satisfies our thirst for knowledge. But our response to its mode of reading is largely driven by our belief in the objectivity of rule-bound numerical operations, the assumption that any computational process is not only predictable but could also be made entirely transparent.

Human readers, on the other hand, are notoriously fickle and unpredictable. No one really knows exactly what it takes for the human mind to understand and process text. But, then again, isn’t this sense of fuzziness, subjectivity, and unpredictability what reading is all about? We may want to stop and ask whether we should call machine reading, precisely because of its utter objectivity and consistency, reading at all.