THR Web Features   /   May 14, 2024

How to Survive and Thrive in the AI Apocalypse

Living in the ecosystem of the machine.

Eric B. Schnurer


For thousands of years, humankind has fancied itself the apex of creation and the dominant force in the world. Yet humans are now gripped by the fear that a species of our own creation—artificially intelligent machines—will presently displace us from our position of unchallenged dominance, perhaps even enslaving us.

This is simultaneously a misplaced conceit and a misconceived fear: There are varied environmental niches to exploit and dominate even in ecosystems with an apex-predator competitor. After all, various species, from bacteria to dogs to maize, have gotten humans to do their work for them, sometimes fatally addicting us to doing so. Still more have survived the onslaught of the Anthropocene and even devised strategies to turn the advent of mankind to their advantage. Human society can do the same even in the most extreme vision of an “AI Apocalypse.”

Doing so requires understanding the essence of the era’s dominant technology (as I have discussed more fully in my article for this magazine’s summer 2022 issue, “Democracy Disrupted”). Contrary to what many believe, the essence of today’s digital technologies is disaggregative, deconstructive, decentralizing, and destabilizing. Digital technologies (especially the Internet) not only destroy any notion of authoritativeness but also have a profoundly destabilizing effect on all forms of authority, especially the authority of objective reality. Indeed, alternative realities are the reality of the digital era. Combine that with the economic displacement occurring because of this technological change, and you get a good explanation for the social, political, and economic disruptions in which we now find ourselves.

So what is the “essence” of artificial intelligence? That’s the question we must ask in addressing what effects AI will have on society. In the history of technology, however, the essence of a new technology is rarely obvious at the outset. A good example is literacy: Originally accessible only to the elite and serving primarily as a force for state formation and social control, literacy came to be possibly the most democratizing force in the world.

None of us, AI experts included, knows yet how this revolutionary technology will come to reshape our world, but we can make some guesses. Here, then, are four hypotheses as to what might prove to be the essence of AI, which, for shorthand, I’ll label “more-of-the-same,” “prediction,” “evolution,” and “authoritarianism.” As in the natural world, even if AI assumes the most powerful and predatory of these alternatives, there are modes of survival and even avenues of counterattack.

More of the Same

In many ways, AI might be thought of as simply an extension of existing digital technologies, different in degree but not in kind: a more social medium, perhaps, with even bigger data. After all, one major emerging use of AI is to produce deepfakes, particularly in political or propaganda campaigns, which are already ripe venues for fraud and false information. In most ways, this is simply the moving-picture version of photoshopping or airbrushing opponents into or out of photos of the Kremlin leadership. In short, AI, at least in its current iterations, merely produces more sophisticated versions of what current digital technologies already produce: alternative (and false) representations of reality.

The news business, in all its forms, is collapsing; the “fake news” business, meanwhile, is flourishing. When it comes to local matters, most people know first-hand either the relevant facts or the individuals involved, whose truthfulness and reliability they can assess. Such proximity makes it possible for differences to be resolved and compromises reached, because there is a shared reality among the participants. Because most people lack first-hand knowledge of the events and people involved in national and international developments, however, it is much harder to verify what happens at those levels. The lack of first-hand knowledge, and thus of shared reality, is driving the world apart and bringing humanity to the brink of a new state of nature.

But this information-pollution environment has already created a new market need for skills in separating fact from fiction. Software now exists to spot whether photos or videos have been manipulated. As AI becomes more sophisticated at creating false realities, however, defenses against it will be harder to mount. It may become harder to distinguish truth from falsehood, making falsity increasingly profitable, but when prevaricators are unmasked, the damage to their enterprise is high: Credibility is a bridge that can be burned only once. Truth-tellers, truth-identifiers, and trust engines thus will become tremendous value-producers.

Of course, one person’s truth is another person’s falsehood, and many people believe falsehoods today because they trust false trust-engines. Still, there will be a growing market for reliable ones. Systems for rating the reliability of information and its sources already exist (see, e.g., PolitiFact’s Truth-O-Meter or Media Bias/Fact Check), and people rely on a range of such systems, of varying quality. Over time, however, facts are stubborn things: The belief that you can fly will meet a hard landing with reality when you jump out of a window. Perhaps trust-engines with untrustworthy results will eventually be flushed out of the market. Obviously, we have not yet reached that point.

Prediction

Formidable as it is, AI does not think, possess knowledge, or even answer questions. Rather, it responds to prompts by scouring the data on which it was trained and predicting, based on word choice and sentence structure, what answer will best satisfy the prompt. AI gives the illusion of understanding and responding; in reality, it only draws on its repertoire of known building blocks and then quickly calculates the assemblage of words most likely to provide a satisfactory response.
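To make the point concrete, here is a deliberately toy sketch in Python (the corpus and function names are my own invention, and real language models operate over learned representations rather than raw word counts): it “answers” a prompt simply by emitting whichever word most often followed the prompt’s last word in its tiny training set.

```python
# Toy sketch of prediction-without-understanding: given the last word of a
# prompt, emit whichever word most often followed it in a tiny "training"
# corpus. Real models predict over learned representations, not raw bigram
# counts, but the principle is the same: likeliest continuation, not thought.
from collections import Counter, defaultdict

corpus = ("the doctor saw the patient . the nurse read the chart . "
          "the doctor wrote the chart .").split()

# For each word, count what followed it in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically likeliest continuation of `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("doctor"))  # likeliest word after "doctor" in this corpus
print(predict_next("the"))     # ties are broken by order of first appearance
```

However fluent the output of a real system looks, nothing in a loop like this understands doctors or charts; it is frequency lookup all the way down.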

Some fear, however, that as AI gets better at predicting outcomes, it could eventually eliminate every job known to mankind that doesn’t involve the simple employment of brute strength. Will there be anything left for humans to do or think about?

The answer—I think—lies in two problems with prediction: AI is only as good as the database on which it draws; and, even in a world of complete information, there cannot be complete knowledge.

The first of these is easy enough to see, and it has already become a significant problem in artificially intelligent systems: AI is only as good as the datasets on which it has been trained. Even when these datasets encompass billions of information sources, they are necessarily incomplete (even the biggest of Big Data sets cannot incorporate all possible information), and because the selection of sources is not randomized, they are biased. We know that the creation and selection of data inputs by humans leaves these sets, and the resulting AI systems, polluted by underlying human biases. Indeed, these biases are often exaggerated by the systems we create. In widely reported research last year, linguists showed that large language models display gender bias, assuming stereotypes about the professions of men and women. ChatGPT, for example, which has been trained on existing English-language sources, inferred that combining the concepts “doctor” and “woman” yields “nurse.” Obviously, many people today would find that both incorrect and offensive, and AI companies are struggling to ensure that chatbots do not simply reflect the biases of society.
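This kind of stereotype is easy to probe for yourself in older, simpler language technology. The sketch below (an illustration, not a reproduction of the ChatGPT result; it assumes the gensim library and a one-time download of the pretrained GloVe word vectors named in the code) runs the classic analogy test: man is to doctor as woman is to what? Published audits of such embeddings have reported stereotyped completions like “nurse,” though exact results vary with the vectors used.

```python
# Probing occupational gender bias in pretrained word embeddings.
# Assumes: pip install gensim, plus network access for a one-time
# download (roughly 130 MB) of the GloVe vectors named below.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained GloVe vectors

# Vector arithmetic for the analogy: doctor - man + woman = ?
results = vectors.most_similar(positive=["doctor", "woman"],
                               negative=["man"], topn=5)
for word, score in results:
    print(f"{word:12s} similarity={score:.3f}")
```

Whatever the top answers turn out to be, they are read straight out of statistical regularities in the training text, which is precisely how societal bias gets into these systems.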

Moreover, current social media algorithms tend to push conversations and viewing choices in increasingly confrontational and extreme directions, because doing so tends to produce the most reactions. Similarly, conversational AI systems have so far proved very prone to being steered by humans, or simply tipping over on their own, into highly racist and misogynistic rants. Much effort will be expended to cleanse AI systems of these biases, but so far this has resulted only in overcorrections that produce sanitized and politically correct formulations reflecting other built-in biases.

Even beyond this, though, it is not clear that a system of perfect knowledge can exist, even one based on total and complete information, which itself probably does not exist. To grossly simplify Gödel’s incompleteness theorem: No system of knowledge can encompass complete and perfect knowledge of itself. Only something exogenous—such as, say, God—can do that. Even the biggest Big Database will be incomplete. Furthermore, because modern physics suggests a certain randomness at the most fundamental levels of physical reality, we can predict, but not with total accuracy. Even the best predictions—and the best prediction machines—will sometimes be wrong.

Succeeding at the prediction game doesn’t require being right all the time, however, or even most of the time. As any horse bettor or stock picker can tell you, success is being right when it matters; that is, when others are wrong. No matter how good AI becomes at making predictions, it will never be perfect, and that leaves the door open to competitors.

Those who can succeed in such an environment are those who can divine outcomes that AI misses. As we have discussed, one major opening is overcoming the biases that creep into AI’s “thinking.” This is already a profitable niche in the human-only economy, where those who hire based on talent rather than race, ethnicity, gender, nepotism, or the like tend to outperform those who restrict their talent pools through non-performance-related biases. This competitive advantage holds at the societal level as well. To the extent that human biases infect, and will continue to infect, AI systems, humans who can identify and avoid such biases will make better choices and better predictions than AI, at least at the margins. (And all outcomes are determined at the margin.)

The same is true of predictors willing—and emotionally disposed—to balance efficient and accurate risk calculations with a greater taste for risk-taking. In decision theory, the correct choice is not necessarily the one determined by strictly multiplying probabilities by payouts: It depends, rather, on a utility function for risk. One would expect a perfect AI to have a risk-neutral utility function; a gambler with a different tolerance for risk, therefore, can beat such a system.
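A minimal worked example in Python (the numbers and the three utility functions are illustrative assumptions, not anything a real system uses) shows why the utility function, not the raw expected value, decides the choice: faced with the same fair coin flip, a risk-averse agent declines, a risk-neutral one is indifferent, and a risk-seeking one accepts.

```python
# The same gamble evaluated under three utility functions. A decision-
# theoretic agent takes a gamble when the expected utility of taking it
# exceeds the utility of standing pat, not when the expected value is high.
import math

wealth = 100.0
# A fair coin flip: win 50 or lose 50, so the expected change in wealth is 0.
outcomes = [(0.5, wealth + 50.0), (0.5, wealth - 50.0)]

def expected_utility(u):
    """Probability-weighted utility of the gamble's outcomes."""
    return sum(p * u(x) for p, x in outcomes)

utilities = {
    "risk-averse (sqrt)":    math.sqrt,        # concave: declines fair bets
    "risk-neutral (linear)": lambda x: x,      # linear: indifferent to fair bets
    "risk-seeking (square)": lambda x: x * x,  # convex: accepts fair bets
}

for name, u in utilities.items():
    bet, stand = expected_utility(u), u(wealth)
    verdict = ("accepts" if bet > stand
               else "declines" if bet < stand else "indifferent")
    print(f"{name:22s} E[u(bet)]={bet:9.2f}  u(no bet)={stand:9.2f}  {verdict}")
```

A machine tuned to one utility function will systematically pass on wagers that a differently disposed human is happy, and sometimes right, to take.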

In short, individuals with superior rationality, a low susceptibility to bias, and a keen sense of where existing data points over- or understate probabilities and expected values can prevail in an environment of artificially intelligent prediction machines. Like trustworthiness and truth-determining abilities, these are skills that we should strive to inculcate and nourish in the next generation of human beings.

Evolution

What is perhaps most distinctive about AI is not its ability to make predictions. After all, existing “dumb” calculating machines can do so already. Rather, AI gets better at making predictions as it goes. This ability to improve on its own—to “learn,” as the terminology has it, or, perhaps more accurately, to evolve—sets AI apart from all previous technologies. As AI exercises this fundamental property of learning and evolving, it will challenge, surpass, and render obsolete anyone and anything that cannot do so, including human beings.

Is it, then, hopeless for us? Not if we too can learn new things and evolve in our thinking. Humans may be poor competitors both in the size of our knowledge base and in our processing power, but the holes and imperfections in AI can be exploited by even “dumber” and less powerful competitors.

A classic example of that potential is Muhammad Ali’s strategy in the famous 1974 “Rumble in the Jungle” title fight against George Foreman. Foreman—the unbeaten defending champion, considerably younger than Ali, and the overwhelming betting favorite—was perhaps the most powerful puncher in boxing history. Ali couldn’t possibly outslug him. Instead, the Ali camp devised the famed “rope-a-dope” strategy: Ali spent the early rounds leaning against the ropes and letting Foreman wear himself out pummeling him while the ropes absorbed most of the impact. With one ferocious flurry late in the fight, the smaller, older Ali knocked out an exhausted Foreman, a feat of intelligence even more than of strength or skill.

Human history is replete with similar examples of triumphs over seemingly invincible forces. We would do well to train today’s young in such resilience and creativity so they acquire the skills to anticipate, learn, and adapt.

Authoritarianism

AI’s great capacity for knowledge-collection and prediction could result in the total authoritativeness of the machine, a consummation that would reverse the trend toward the dissolution of all authority that I mentioned earlier. Many commentators fear that AI will constitute an intelligence so far beyond our own that it will assert mastery over humanity—a greatly extrapolated version of the sentient computer HAL’s attempt to usurp humans and commandeer their spaceship in 2001: A Space Odyssey. Alternatively, some expect that the total surveillance AI makes possible will lead to a societal nightmare beside which George Orwell’s 1984 looks tame. We can already see such a world emerging in Beijing’s totalitarian supervision of the Uyghur minority in the Xinjiang region.

But even more important are the implications for authority in the more fundamental sense of establishing truth: The trend today is to splinter truth, so that everyone can and does have his or her own reality. If a machine eventually commands authoritative mastery of all knowledge, however, what room will there be to argue with its singular assertion of truth? We already see this emerging in the increasing use of AI to make unappealable decisions as to (somewhat ironically) which Silicon Valley employees should be laid off in the current profit-maximizing retrenchment of the IT industry. We also see it at work in probation and even sentencing decisions in criminal justice systems around the United States. What hope will there be for us in the face of an all-knowing authoritarian power?

Beyond the limits to absolute knowledge already discussed, however, there are important forms of “knowing” not based on information as conventionally understood—or as likely to be understood by AI. We can call this “gnostic” knowledge, from the Greek gnosis (knowledge), a term used today mainly in relation to early Christian religious and philosophical movements that distinguished intellectual knowledge from gnosis: personal knowledge that can be thought of as mystical in some way. Whether such wisdom-related phenomena spring from our reptilian brain, some oddly firing synapse, an emergent reality, or a spiritual source, the human mind is capable of all sorts of supra-rational, intuitive, and epiphanic insights and revelations.

In a world dominated by an all-knowing authoritarianism, the knowledge, insight, or wisdom of gnosis—inaccessible through mere logic and data-processing—might offer a competitive alternative. It may be that artificial intelligence systems can be made to think in such ways (or, at least, to reverse-engineer them). Such systems can already not only defeat the world’s best human Go players (Go being a game far more complex and abstract than chess) but do so through moves of such transcendent insight that Go experts are unable to trace the logic backward from the result. AI may be capable of taking flight from “fixed-route” information processing and thereby creating, or at least mimicking, gnostic thought. Whether AI will prove able to do so—and thereby to produce supra-rational religious and metaphysical systems that resonate deeply with human perceptions (as have the teachings of Plato, Moses, Jesus, Muhammad, Zoroaster, Buddha, Confucius, the Baal Shem Tov, and others)—is, at the very least, debatable.

If not, there will exist a niche that only gnostic thinkers can fill. In any event, an authoritarian regime of all-knowing machines will find, if not its Achilles’ heel, then at least a challenge from anyone or anything able to manufacture knowledge and beliefs outside its monopoly on facts and deductive processes.

But whether or not people can “outsmart” an omni-intelligent AI through varieties of gnostic knowledge that AI systems cannot access, they can certainly outsmart—and outcompete—other people. Such gnostic knowledge may provide real value. Religious belief, for instance, has been shown to increase survival odds in the face of cancer, and there is a valid argument that the reductionist approach of Western science—of which AI is one fruit—misses or obscures larger truths and emergent realities (including, perhaps, reality itself). In fact, the Go example may demonstrate the superiority of transcendent, emergent, non-linear, or integrative thinking—or even of something beyond “thinking.” Even if AI becomes capable of that, humans certainly are. And whether it ultimately produces truth or merely the illusion of it for those who believe, it will have inestimable value and appeal. Throughout human history, seers, shamans, and prophets have consistently gained followings, large and small, especially in times of upheaval, confusion, or doubt. Their numbers are likely only to increase if AI takes the oppressive form envisioned by its detractors.

The ironic result of the triumph of the machine, Big Data, and their grinding logic, then, may be the creation of an ecosystem in which supra-rational gnostic appeals flourish. Humanity’s future thus may lie in the triumph of emotion, intuition, and faith—all of which, from the machine’s perspective, and to those who find the world already too full of unreason and alternative facts, might seem absurd. But, of course, the “absurd” has often been the strongest weapon of those resisting oppressive regimes aimed at reducing humans to mere cogs in the machine.