In March 2019, heavy rains in California led to a brilliant carpet of orange poppies in Walker Canyon, part of a 500,000-acre habitat reserve in the Temescal Mountains southeast of Los Angeles. Run by a state conservation agency, the reserve was mainly a local attraction until a twenty-four-year-old Instagram and YouTube influencer with tens of thousands of followers posted two selfies amid the poppies. The result, as technology critic Nicholas Carr explains in his book Superbloom (named after the viral hashtag #superbloom), was a Woodstock-size influx of selfie-seekers who “clogged roads and highways,” “trampled the delicate flowers,” and in general “offered a portrait in miniature of our frenzied, farcical, information-saturated time.”
Today, Walker Canyon is closed until further notice. But the frenzied farce continues, as Donald Trump’s electoral victory sets the Big Tech companies against one another in a new, more politically visible way. If you’re wondering how we arrived at this pass, Carr is your man. An eloquent, levelheaded writer, he has been sticking pins into the hot-air optimism of Big Tech since 2001, when, as editor of the Harvard Business Review, he published several politely skeptical articles on the uses of what was then the new “information technology.” Less politely, and with sharper pins, Superbloom appraises the past and present of that technology and issues a warning about its future.
Regarding the past, Carr offers a cogent and lucid overview of America’s unique system of media regulation. Starting with the time-honored “secrecy-of-correspondence” doctrine adopted by the British postal service in 1660, and a century later by the leaders of the American Revolution, he shows how that doctrine’s insistence on protecting the privacy of one-to-one communication transferred quite easily to the telegraph, and then—after a legal dispute over the presumed difference between “tangible” (written) and “intangible” (spoken) communication—to the telephone.
More challenging was radio, a radical departure from all previous modes of communication. Like newspapers and other print media, radio was one-to-many, concerned not with privacy but with reaching the largest possible audience. But radio was also uncannily swift, able to reach masses of people simultaneously. In passing, Carr mentions the contrast between America’s reaction to radio and that of most other nations—namely, that their governments hastened to take control of the new medium while ours tarried, letting it develop as a business in the private sector. This contrast deserves more attention than Carr gives it, because while the US government’s initial response to radio was hands-off, the success of radio depended on subsequent responses that were decisively hands-on.
Radio in America began as a free-for-all, with everyone from brilliant hobbyists to mischievous meddlers transmitting whatever struck their fancy. This posed a problem for seaborne vessels and especially the US Navy, which had come to rely on radio communications. Things came to a head on April 14, 1912, when the transatlantic ocean liner Titanic collided with an iceberg off the coast of Newfoundland. The initial distress call got through, but the efforts of nearby vessels to respond were disrupted by a flood of useless chatter and, worse, a fake news report saying the ship was still afloat and on its way to Halifax, Nova Scotia. In response, Congress passed the Radio Act of 1912, which required licenses for all operators, outlawed “malicious” messages, and confined amateurs to shortwave. A decade later, Secretary of Commerce Herbert Hoover held a series of conferences on the best way to ensure that “the ether,” defined as “a public medium,” would be used for the “public benefit.”
Rather than stunt the growth of the commercial broadcast media, the regulations administered by what would eventually become the Federal Communications Commission (FCC) encouraged the major television networks (NBC, CBS, ABC) to develop a now familiar business model. Costly news and public-affairs divisions, which were mandated by the FCC but rarely profitable, were to be paid for by more lucrative sports and entertainment divisions. No such model emerged from the early phase of the Internet, because while it, too, was chaotic, the government chose not to follow its own precedent and develop a reasonable system of regulation.
In Carr’s view, that work should have been undertaken in the early 1990s, when, as he writes, the “general-purpose, wide-area computer network…morphed into…a vast library-cum-agora, open to anyone with a modem and a web browser.” But as he goes on to remind us, the early 1990s were also when the triumphalism of America’s Cold War victory, combined with the utopianism of Silicon Valley, convinced a generation of decision-makers that “an unfettered market seemed the best guarantor of growth and prosperity” and “defending the public interest now meant little more than expanding consumer choice.” So rather than try to anticipate the dangers and excesses of commercialized digital media, Congress gave it free rein in the Telecommunications Act of 1996, which, as Carr explains,
erased the legal and ethical distinction between interpersonal communication and broadcast communications that had governed media in the twentieth century. When Google introduced its Gmail service in 2004, it announced, with an almost imperial air of entitlement, that it would scan the contents of all messages and use the resulting data for any purpose it wanted. Our new mailman would read all our mail.
As for the social-media platforms, Section 230 of the Act shields them from liability for all but the most egregiously illegal content posted by users, while explicitly encouraging them to censor any user-generated content they deem offensive, “whether or not such material is constitutionally protected” (emphasis added). Needless to say, this bizarre abdication of responsibility has led to countless problems, including what one observer calls a “sociopathic rendition of human sociability.” For Carr, this is old news, but he warns us once again that the compulsion “to inscribe ourselves moment by moment on the screen, to reimagine ourselves as streams of text and image…[fosters] a strange, needy sort of solipsism. We socialize more than ever, but we’re also at a further remove from those we interact with.”
Dismayed as ever by this old bad news, I kept reading Superbloom in the hope that its final chapter would offer some remedy that passes muster with Big Tech’s most exacting critic. So when I saw Carr describe “frictional design”—a scheme to re-engineer the existing platforms to slow down the process of posting and re-posting—as “the boldest and most creative” way “to encourage civil behavior” online, my spirits lifted. But then Carr pulls an especially long pin from his pincushion and judges this solution too little, too late, because the frictionless efficiency of social media “has burrowed its way too deeply into society and the social mind.”
Meanwhile, the view from Silicon Valley remains sunny. In June 2023, at the height of public concern about the misuse of artificial intelligence (AI), the prominent venture-capital influencer Marc Andreessen issued a statement called “Why AI Will Save the World,” which ran in part as follows:
This isn’t just about intelligence! Perhaps the most underestimated quality of AI is how humanizing it can be. AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas. Talking to an empathetic AI friend really does improve their ability to handle adversity. And AI medical chatbots are already more empathetic than their human counterparts. Rather than making the world harsher and more mechanistic, infinitely patient and sympathetic AI will make the world warmer and nicer.
Call me a Luddite, but I do not think it wise to wait until these kindly bots are in place before deciding how effective they are. Better to roll them off the nearest cliff today, in the spirit of Carr’s final words: “Maybe it’s not too late to change ourselves.”