Our computational capacity grows, but does that mean we are smart?
After sleepwalking through high school, I landed in a senior English class that caught my fancy, and one of the first things I did was shell out thirty-five cents for a Roget’s Pocket Thesaurus. After years of disaffection and tomfoolery, I wanted to be smart. And one way to be smart, by my adolescent reckoning, was to pepper my essays with words like disaffection and tomfoolery. I was being a smarty-pants rather than smart, perhaps, but new words represented for me a fresh world of books and writing and using my mind.
That old thesaurus, crumbling a little more every time I open it, has a wealth of synonyms and antonyms for a word like smart, and I would roam through it, as I am now, finding threads and connections. To be smart bears similarity to being intelligent, ingenious, or resourceful, and to being clever, witty, or quick. And then there are book smarts versus street smarts. And working smart. And, of course, a smart aleck with a smart mouth. Humanity, lofty and flawed, lives in these definitions.
Five or six years before I bought that thesaurus, the brilliant computer scientist John McCarthy coined the term artificial intelligence to describe the simulation of intelligent behavior by computers: recognizing visual shapes, solving well-defined problems, and improving performance through repeated trials. These capabilities, combined with the computer’s extraordinary processing capacity, would, within a few decades, deeply embed the computer in our lives. By the 1980s, Harvard Business School professor Shoshana Zuboff would be writing about the world of the smart machine. Now we have smart phones, smart cars, smart classrooms, even—heaven help us—smart bombs.
Language is complex and adaptable—smart is also a verb that means “to sting,” and an adjective that means “stylish”—so it’s no surprise or big deal when smart is used to describe the added cognitive processing power computer technology brings to objects and places. Unlike the term artificial intelligence, however, terms like smart phone and smart classroom carry no qualifier to signal their anthropomorphism. Semantically, we cede intelligence to the machines we make, enhancing our era’s faith that there are technological solutions to just about any problem and, in the process, narrowing our definition of what it means to be smart.
Consider the smart classroom, or, in the more materially accurate phrasing, the wired classroom. This is a setting equipped with computers that potentially enable students to do research on the Internet, work with all sorts of digital media, and network with others in and beyond their classrooms. Students can also receive individualized instruction and have their progress monitored by teachers. Marvelous things can happen in the wired classroom.
Although obvious, it needs to be said that all this potential will be realized only if the computer technology is well designed and its programming is done by people with a deep understanding of teaching and learning. The technology also has to be integrated into a rich curriculum, and the teacher should be pedagogically astute about the instructional uses of the computer. Only then can smart things happen in the smart classroom. Conversely, cognitive sparks can fly in the absence of laptops and tablets. The most sophisticated piece of classroom technology that ever lit me up about language was a chalkboard.
This distortion of the word smart, and of our understanding of intelligence in general, is also found in the commonplace distinction between the “new economy,” driven by the emerging work of the smart machine and the smart worker, and the “old economy,” based on heavy manufacturing and traditional services. There’s no denying that the last few decades have seen significant changes in the organization and technologies of work; many blue- and white-collar occupations require new skills and knowledge, particularly from the realm of computer technology. But these developments have been reduced in endless opinion pieces and popular management books to a simplistic and cognitively loaded separation between the economy of previous generations and that of our own. “Whereas organizations operating in the Industrial Age required a contribution of employees’ hands alone,” write Sandra Burud and Marie Tumolo in their award-winning Leveraging the New Human Capital (2004), “in the Information Age intellect and passion—mind and heart—are also essential.” Before the era of the smart machine, it seems, you didn’t have to draw on much of your intelligence to build airplanes, keep the books, or care for the sick.
The word smart is being appropriated into a broad technocratic worldview that includes not only the schoolhouse and the workplace but the very space in which we live. Some urban theorists are promoting a vision of smart cities that are extensively wired and connected, driven by business development with an emphasis on high tech and entrepreneurship and populated by critical masses of highly educated people. Smart schools produce smart citizens who work in smart corporations in the smart city. To be sure, there are many advantages to such a city, from flexible work hours to traffic control. But there are potential downsides, including the danger of expanded surveillance and lopsided, business-dominated development, for in the smart city, high-tech and managerial smarts carry great power.
This is not a Luddite’s lament; I used Google and word-processing software while writing this essay. My worry is that our digital rapture will blind us to the fact that smart, as my old thesaurus reminds us, is a big tent of a word. The guys delivering a truckload of stoves and refrigerators are working smart. The class clown is smart. So, too, is the kid who deflects the clown’s barb. The Mars rover team is smart, and Sojourner Truth was smart. To lose sight of this intellectual abundance would be simple, unwise, and, well, anything but smart.