THR Web Features   /   October 26, 2022

Twenty Five Years After Imagined Worlds, What World Are We Living In?

Our surprisingly Napoleonic twenty-first century.

Erik J. Larson

(Microsoft CEO Bill Gates speaks on a television above a Windows 95 display at a store in Vienna, Virginia.)

This year marks the twenty-fifth anniversary of famed scientist and author Freeman Dyson’s Imagined Worlds. The book, fashioned from a series of lectures Dyson gave in Jerusalem in 1995, is partly a historical discussion about why technologies—some familiar, like nuclear power, others not, like airships—succeed or fail in what he called a Darwinian process of selection. It’s also an enjoyable piece of futurism. He delighted in the possible, and in Imagined Worlds he speculated boldly about space colonization and an entirely new species evolved from future humans. Dyson was aware of the difficulties of prediction—Imagined Worlds fails to anticipate the rise of the Internet or World Wide Web—yet like H.G. Wells, whom Dyson admired, he leaves us with a sense of having encountered important ideas on a journey led by someone who knows the terrain.  

Imagined Worlds was Dyson’s attempt to explore, as he put it, “the interaction of technology with human affairs.” Like the philosopher of science Thomas Kuhn, he thought science “moved ahead along old directions” until a conceptual revolution redirected it. Unlike Kuhn, he thought scientific revolutions could also be driven by tools, by technology. He explained that science and technology entered “Napoleonic” phases, when big institutions with deep pockets set research agendas, and “Tolstoyan” periods, when scientists engaged more in tinkering and exploration. Napoleonic meant “rigid organization and discipline”; Tolstoyan meant “creative chaos and freedom.” Where are we now?

Science and technology today are Napoleonic. Silicon Valley is now Big Tech, the age of garage start-ups long behind us. Neuroscience is pursued with “exascale” supercomputers and big data. Ditto physics, which also relies on billion-dollar particle accelerators like CERN’s seventeen-mile-long Large Hadron Collider. Consumers—you and me—now provide data to cloud servers, centralized repositories (“cloud” is a misnomer) of massive datasets owned by a relatively small number of governments and organizations. If anyone is “tinkering” with science and technology these days, they are not making the news. We live in Napoleonic times.

The world circa 2000 was not Napoleonic. Little more than two decades ago, people were experimenting madly with technologies, business models, and seemingly everything else. Business theorists predicted that industries would demassify and disintermediate, that old media gatekeepers would fall, that products like encyclopedias would simply disappear, and that whole sectors would revamp, collapse, reshape, and emerge. Not just the tech world but the entire world seemed to be in a constant state of flux. Venture capital flowed out of a cornucopia. The NASDAQ hit 5,000 in March 2000, shortly before the dot-com bubble burst.

In the years before that setback, when Imagined Worlds was published, ideas about the future direction of science and technology were diverse, interesting, and abundant. Kevin Kelly, founding executive editor of Wired magazine, predicted in his 1998 hit New Rules for the New Economy that computer networks would replace the PC as the dominant feature of information technology. He missed the iPhone, understandably, but he saw the future of the Web not as PCs with modems but as a vast network of devices. In essence, he predicted the Internet of Things. Kelly envisioned the coming networked world, the new web, as “ground up,” amounting to a kind of revolution in business and society in which, instead of corporate bosses sitting atop hierarchies executing plans, networks of people would conjure and promote ideas in waves of unpredictable innovation. New Rules for the New Economy reads like a paean to creativity and human freedom, all made possible by liberating society from stodgy old big-business models and yesterday's tech and ideas. Kelly was foretelling a Tolstoyan future, Dyson’s “creative chaos and freedom.”

But, surprising to many, big business made a reappearance in the “new economy.” In her now-famous 2002 book Technological Revolutions and Financial Capital, business theorist Carlota Perez argued that investment frenzies and stock market crashes precede periods of technology maturation, in which the promise and fruits of a tech revolution become evident. Technology revolutions have an installation phase, she wrote, followed by a deployment phase. Then (if conditions are right) comes a “golden age” marked by growth, employment, and the successful consolidation of new businesses and industries. Like Google. And Facebook. Perez's analysis—applicable to earlier golden ages, such as those of steel and electricity, oil, and mass production—accurately predicted what would happen to the Web as it matured in the new century.

What Perez didn't see—or didn't discuss—was the connection between the Napoleonic changes in society and culture and the maturing phases of a tech revolution. She described this calcification in terms of the economy—dwindling profits, unemployment. Dyson described it in terms of the minds of scientists and practitioners—a loss of creative chaos and freedom. A rigidified status quo. Today’s status quo is data-driven AI and Big Tech. Google, Facebook, Instagram, and Twitter. If Kelly’s “new economy” at the turn of the century was a soft drink, the world we inhabit today is a 7-Eleven Big Gulp. Kelly and others of his ilk assumed that networking meant Tolstoyan freedom, that a “power-to-the-people” movement would derail big corporations and gatekeepers and empower everyday Joes. Big Brother was supposed to disappear, not return on steroids.

Enter the new Napoleonic. Predictably, tech pundits and critics have largely abandoned bottom-up rhetoric for worries about top-down big data collection, housed in server farms owned by big tech companies. Our imagined world has become a kind of bureau of statistics for government and big business (and science), which treat digital data as intelligence and value, not as something connected to billions of humans and their ideas.

We can’t lay the blame on Big Tech alone. The data-centric model was an irresistible path to profits and growth. The Web was bound to mature commercially one way or the other, and large—not small—companies were the likely result.

But the “bureau of statistics” mindset is now a problem. It dominates thinking everywhere, not just in technology businesses aiming for sticky ads and more captive users. Nearly every institution one can point to today, in government, science, media, medicine, insurance, and beyond, embraces a centralized, data-capture model that requires massive computing resources and actively downplays human ingenuity in favor of number crunching and prediction. More troubling, perhaps, is the way this has shaped the zeitgeist. Confidence in human smarts and imagination seems at an all-time low. Entire books are now written on how people are, in effect, cognitively biased, limited, and indeed stupid. Given this cultural climate, Dyson’s time of “creative chaos and freedom” seems not only distant but beyond recovery.

Dyson called the Cold War science of the 1950s and 1960s Napoleonic because research occurred mostly in huge organizations like RAND and involved teasing out the implications of earlier scientific results from brilliant Tolstoyan tinkerers like Max Planck or Albert Einstein. As in our present time, results were achieved through the investment of huge sums of money and were typically conservative in scope, reflecting already-formed interests and agendas. Much of the money during that time was spent on making larger fission, then fusion, bombs. The math was already done. That time and ours both correspond to Perez’s depiction of a fully matured technology revolution showing signs of slowdown and decay. We seem to have wandered into the 1950s again, this time with Web companies instead of IBM and General Motors.

Artificial intelligence has become thoroughly Napoleonic as well. It is a textbook case of calcification. Large, central repositories of data now power ubiquitous artificial intelligence algorithms, which are great for self-navigating drones and automated surveillance cameras but frustratingly poor at basic conversation and other cherished facets of human intelligence. Among other worries, data-centric AI today requires massive amounts of old-fashioned electricity, still largely supplied by conventional fossil fuels. And the central-data version of AI is adept at various forms of malfeasance, as everyone now knows. We are increasingly awash in fake news and deepfakes, the facial and other unreal images generated by today’s Big Data AI. Depressingly, the seventy-plus-year program of artificial intelligence is today largely equated with centralized data repositories and statistical number crunching. Younger generations probably don’t know that Napoleonic, Big Data AI is only one approach, one way of conceiving machine intelligence. Big Data AI makes sense in a fully matured technology world, with big players like Google and Facebook. It doesn’t make sense for Tolstoyan tinkerers, who have no access to supercomputers or petabytes of others’ personal data.

Fortunately, the twenty-first century is still young. A little over two decades into the last century, the world stood, it is true, midway between two catastrophic wars. But scientists were enjoying Tolstoyan freedom. Relativity and then quantum mechanics were discovered, without the supervision or control of big science or big business. Henry Ford was mass-producing automobiles, but automobiles would enjoy decades of further Tolstoyan tinkering. Tellingly, computers had not arrived in 1922, and no one anticipated the revolution they would bring. We can only wonder what imagined, and unimagined, worlds still await us in this century.