The US national security profession must reform how it conducts deterrence to protect the nation in the AI age. The problem is our naive adherence to coercion theory’s narrow conception of coercive deterrence and compellence. Coercion theory is a Cold War strategy designed to deter nuclear conflict, a survival imperative of any responsible actor given the catastrophic global consequences of nuclear winter. The theory’s logic offers coercive compellence and coercive deterrence via punishment or denial as alternatives to brute force. These options do not expand strategy because they are designed to prevent global nuclear-triggered destruction. So the theory constrains strategy by defining compellence and deterrence as subsets of coercion.
We still face the ultimate danger of nuclear brute force, and of other threats escalating into nuclear war and massive material destruction, so we must continue to deter that. However, we must also prevent and influence other forms of warfare waged by human and artificially intelligent agents. Expanding strategy beyond coercive compellence and deterrence is vital to combining coercion, or its absence, with other effects. That creates more options.
On-Off Fight Switch
This on-off fight switch constrains integrating competitive effects because combat is confined to “breaking things and killing people” and applies only when deterring violence fails. As lethality and access to it increase, warrior identities must adapt beyond won-and-done battles to battle rhythms across the competition continuum. Winning wars has always required political victory, but now that victory is contested everywhere with information. Lethal force also relies on and produces information.
So, adversaries have expanded warfare while denying they are doing so. Take “deterrence.” China’s concept, weishe (威慑), literally combines deterrence and coercion. Russia’s concepts are sderzhivanie and ustrashenie, which include deterrence and intimidation by any means. Iran’s extremist interpretation of Islam weaponizes a sacred narrative to justify attacks while deterring combat operations against Iran proper. North Korea’s concepts of dunojeon (wars of brains) and jihyejeon (wars of wisdom) actively use deterrence and force to create policy advantages in military-diplomatic campaigns (see Narushige Michishita’s excellent analysis).
Theory Lags Reality
Armed with new joint concepts, information forces and schoolhouses across the interagency already exceed coercion theory’s limited vocabulary. For instance, the highly credible interdisciplinary study and book ISIS in Iraq: The Social and Psychological Foundations of Terror explains how ISIS gained social support by influencing basic human needs. Information-oriented communities of practice know that competition involves coercive and non-coercive forms of persuasion, inducement, compellence, dissuasion, security, deterrence, and defense. Despite this reality, the US national security, defense, and military strategies limit their efforts. They call for integrated deterrence and then fighting “when deterrence fails.”
Moreover, the top US strategies limit the integration of effects in doctrine. For instance, “information-related capabilities” (IRC) is a doctrinally accepted term, but “information-related effects” (IRE) is not. IRE is broader because it includes influencing will, not just employing capability. Will and capability are interrelated in neural networks, so a superior strategy must consider both.
The need for change is more compelling when we listen critically to experienced leaders calling for change and see that it cannot happen with coercive effects alone.
Insights from Three Leaders Calling for Change
Consider insights from three leaders whose ideas demand a broader strategy language that coercion theory’s hold on our vocabulary chokes off. The leaders are David Spirk (former Chief Data Officer at the Pentagon), General Glen VanHerck (Commander of North American Aerospace Defense Command and US Northern Command), and Christian Brose (author of The Kill Chain: Defending America in the Future of High-Tech Warfare). Collectively, their arguments emphasize public-private partnerships, integrated forces, and information dominance.
In a Breaking Defense article, David Spirk rethinks conflict in “the age of exponential data.” He describes data and information as warfighting tools. They have three characteristics—evolving, enduring, and expendable. Spirk wants four significant changes. First, human talent to build data fluency for data-driven warfighting. Second, a flexible data architecture to leverage the edge of commercial capabilities. Third, agile investments to strengthen government-commercial partnerships. Fourth, continuous learning that includes commercially available technology.
General VanHerck’s article in War on the Rocks argues for new tools to proactively create the time and information to deter cyber, hypersonic, and other global threats. His call for information dominance reflects coercion theory’s “when-deterrence-of-violence-fails” restraint on warfare. The article focuses on forces that integrate data and information at the speed of AI. We can be more proactive by presenting better options before deterring armed conflict fails.
Christian Brose’s masterful book, The Kill Chain, focuses on one end of the competition continuum—killing the enemy. He argues for more weapon systems that are superior in quality to what we currently have. The US acquisition system and its politics produce small numbers of exquisite-technology weaponry that’s hard to integrate and vulnerable to asymmetric warfare. We need practical, resilient, and adaptive kill chains unencumbered by the slow pace of government-centric development.
Expanding Strategy–Narrow v. Broad
All three authors equate conflict and warfare with violence. When deterrence fails, Spirk’s data and info, VanHerck’s proactive deterrence, and Brose’s kill chains are tools or weapons of war. Deterrence of what? US national strategies’ answer is armed conflict. Naturally, authoritarian competitors wage warfare broader than that. The idea is to exploit weak strategies, not just forces.
To lead change, the US government must develop a broader perspective on strategy to promote whole-of-government and private-sector collaboration. Properly restrained by democratic accountability and civil-military relations, a government profession of effects can compete and wage warfare when and where appropriate. How?
An expanded strategy should appreciate three critical aspects: what’s not changing and what is changing, the role of information and operations, and the limitations of offense and defense.
Expanding Strategy #1–Ask What Is Not Changing and What Is Changing?
First, think about what is not changing and what is. The way we work with the basic elements of strategy has not changed. We still break it down into ends, ways, and means. Those elements interrelate and change rapidly due to technology, but we still use them to analyze and organize how to compete and wage warfare.
What is changing is that strategy’s ends, ways, and means are rapidly interconnecting. For instance, social media is a means to an end involving many influential ways. Strategists must account for a complex competition with multiple access points, accessible preferences, chatrooms, chatbots, intended audiences, unintended audiences, and more. So, when we make analytical choices about ends, ways, and means, we must also combine, rearrange, and synthesize them. Their interactions are instantaneous, dynamic, and vast.
Examples
For instance, we see diverse actors using various platforms for different ends: profit, market share, radicalization and recruitment, election interference, social activism, and other advantages. This competitive space provides authoritarians plenty of attack surfaces. They can use unaccountable ways and means, such as fake social media accounts, to influence public policy in porous democracies. Chinese party-government agents freely operate to co-opt politicians and coerce other nations’ citizens abroad. This aggressive approach exploits our separated lanes of legally denoted responsibility.
A strategically significant example is provoking social opposition to rare earth mineral facilities in Australia and the United States. China’s party-government openly seeks to dominate that market. State-owned enterprises and private firms might exercise what agency they can, but the Chinese Communist Party (CCP) rules. CCP-ruled China combines rare earth mineral dependence with predatory loans, non-transparent business practices favoring Chinese firms, and military and paramilitary operations in disputed territory to create synergistic effects.
Meanwhile, democracies use a narrower when-deterrence-fails strategy. We develop warfighting capabilities for a state of war shaped by authoritarians’ broad warfare. While we prepare for our type of war, the interplay of narrow and broad strategies chronically cedes competitive space to our adversaries. US national security strategy that relies on deterring only armed conflict is leaving citizens open to predatory operations. If you think technology is the answer, you’re partly right and wholly wrong.
Technology creates specific opportunities for holistic, agile, and asymmetric integration. Brose clarifies that the demand for sensors has increased as they have become ubiquitous, including quantum and genetically altered plant sensors. General VanHerck states, “The United States needs more time and better options to deter conventional threats to the homeland. The key to developing these options is data.” An expanded perspective on strategy is vital to recognizing more exploitable opportunities.
Expanding Strategy #2–Make Information Operate & Make Operations Inform
As democracies establish ministries, departments, and commands to create information advantage, we need a strategy language that facilitates integration, not separation. We must recognize data and information as operating to shape operations. Information is integral to operations (“information in operations”), and operations generate information. Taken together, information operates & operations inform.
Instead of treating data or information in support of operations (“supporting”) or supported by operations (“supported”), commanders should lead in arranging data, information, intelligence, and knowledge in operating cycles. That’s operationalizing learning to understand data, information, intelligence, and knowledge for advantage. Information in operations should influence targets.
Leading must be collaborative, with centralized command, distributed control, and decentralized operations. Ops are still commander-centric. For instance, a commander must integrate operational and information design to sense and shape conditions. Whether we label the environment as an information environment, operational environment, or strategic environment, we can at least recognize that information operates in it. An operation without information is useless.
Expanding Strategy by Integrating Information in Operations
FIGURE 1: Info-in Ops Cycle of Understanding

This process of sense-making and shaping conditions in the environment involves three interrelated processes.
- Putting data into context and attributing meaning to that information—which creates information
- Processing that information—which creates intelligence
- Gaining acceptance of the intelligence—which creates knowledge
All that is contested—it’s fighting to win. How? By imposing an understanding, such as surrender, a narrative, or a rules-based world order.
The highly contested cycle of understanding can skip steps and go in reverse to destroy understanding. Some victims accept information as knowledge, especially if the information’s context is a compelling narrative. Intelligent agents manipulate the meanings and contexts of data to generate information, process that information as intelligence, and create and destroy knowledge. Censorship and a toxic narrative, for instance, are cognitive maneuvers that fight against free thought.
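The forward and reverse motion of this cycle can be sketched in code. This is a minimal illustration of the stages named above, assuming a simple linear ordering; the function names and structure are hypothetical, not doctrine:

```python
# Stages of the Info-in-Ops cycle of understanding (names from the article).
STAGES = ["data", "information", "intelligence", "knowledge"]

def advance(stage: str, skip: int = 1) -> str:
    """Move forward through the cycle; skip > 1 models a victim accepting
    information directly as knowledge without processing it as intelligence."""
    i = min(STAGES.index(stage) + skip, len(STAGES) - 1)
    return STAGES[i]

def degrade(stage: str) -> str:
    """Run the cycle in reverse to model destroying understanding."""
    return STAGES[max(STAGES.index(stage) - 1, 0)]

# Normal forward step: contextualized data becomes information.
assert advance("data") == "information"
# Skipped step: information accepted straight away as knowledge.
assert advance("information", skip=2) == "knowledge"
# Reverse motion: knowledge degraded back to unaccepted intelligence.
assert degrade("knowledge") == "intelligence"
```

The point of the sketch is only that the same four stages support both construction and destruction of understanding, depending on which direction an agent drives them.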
A fully equipped strategy must recognize the ongoing competition and warfare over understanding. This fight is not limited to data or information-centric kinetic operations. Cognitive battles over context and meaning in narrative warfare can drive understanding. Relevant agents are those that try to shape thought and behavior.
Expanding Strategy #3–Effects Beyond Offense and Defense
Consequently, offense and defense, the two most basic and widely accepted effects, are too narrow to compete against all-effect strategies.
Authoritarians are less domestically constrained, so they can organize all-effect operations: effects well beyond “offense” and “defense.” Their domestic sphere is their greatest vulnerability, but we do not use offensive military capability to target it because that would cross our redline of war. Authoritarians wage such warfare all the time with military forces and civilian agencies. They do not restrain themselves to violent warfare. They prosecute domestic and international operations to coerce, compel, induce, and persuade compliant behavior while defending against, deterring, securing from, and dissuading democratic freedoms.
Expanding strategy with only more offense and defense is too narrow to counter or defeat China’s nuclear and long-range strike capabilities and its AI with networked sensors at scale. If we apply our when-deterrence-fails strategy to AI, nth G, and quantum computing, we lose the war China has waged since the mid-1990s. Why? Because China will avoid armed conflict and defeat us by other means while developing a superior capability for armed conflict. Armed conflict is still essential but most effective when holistic competition shapes its terms.
Our strategies are fixated on offense and defense as the most relevant effects. They are not. Offense tends to be self-justifying and vague as an effect. Defense is limited to when deterrence fails. Yet, more agents influence the behavior of many more people in many more ways. On top of that, democracies are vulnerable because representative government presents many more opportunities for less domestically accountable authoritarians to attack. Democratic politics favors immediate needs and wants, but we can do better by speaking a broader, more precise strategy language.
Combined Effect Influence Strategy
In the combined effect influence strategy framework, the competition for advantage includes more effects than coercive compellence and coercive deterrence. Expanding strategy with this framework involves eight distinct, combinable effects. One of those effects is defense, but instead of “offense,” its polar opposite effect is “coercion.” How is this better?
Coercion theory and popular strategy do not have this definitional precision. We are particularly sloppy in defining “security” or “stability.” These terms connote values with many meanings and contexts. To capture this complexity relatively simply, we have eight basic effects defined by the convergence of three dimensions of strategy: causative-preventive, cooperative-confrontational, and psychological-physical:
Expanding Strategy with Eight Basic Combinable Effects
Coercion (Cr): causative, confrontational, and physical
Defense (Df): preventive, confrontational, and physical
Compellence (Cp): causative, confrontational, and psychological
Deterrence (Dt): preventive, confrontational, and psychological
Inducement (In): causative, cooperative, and physical
Security (Sc): preventive, cooperative, and physical
Persuasion (Pr): causative, cooperative, and psychological
Dissuasion (Ds): preventive, cooperative, and psychological
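Because each effect is one choice on each of the three dimensions, the eight effects enumerate exactly as 2 × 2 × 2 combinations. A minimal sketch, using the dimension and effect labels above (the Python structure itself is illustrative, not part of the framework):

```python
from itertools import product

# The three dimensions of strategy from the article; each basic effect
# is one choice from each axis, so 2 x 2 x 2 = 8 effects.
TIMING = ("causative", "preventive")
POSTURE = ("cooperative", "confrontational")
MODE = ("psychological", "physical")

# Mapping from (timing, posture, mode) to the article's effect names.
EFFECTS = {
    ("causative",  "confrontational", "physical"):      "Coercion (Cr)",
    ("preventive", "confrontational", "physical"):      "Defense (Df)",
    ("causative",  "confrontational", "psychological"): "Compellence (Cp)",
    ("preventive", "confrontational", "psychological"): "Deterrence (Dt)",
    ("causative",  "cooperative",     "physical"):      "Inducement (In)",
    ("preventive", "cooperative",     "physical"):      "Security (Sc)",
    ("causative",  "cooperative",     "psychological"): "Persuasion (Pr)",
    ("preventive", "cooperative",     "psychological"): "Dissuasion (Ds)",
}

# Every combination of the three dimensions names exactly one effect.
for timing, posture, mode in product(TIMING, POSTURE, MODE):
    assert (timing, posture, mode) in EFFECTS
assert len(EFFECTS) == 8
```

The enumeration also makes the polar opposites visible: flipping all three dimensions turns Coercion into Dissuasion, Defense into Persuasion, and so on.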
Expanding Strategy with Different Meanings
Effects consist of information, so all eight effects have polar opposite and partial opposite meanings. What’s the use of expanding strategy this way–just to be more complex? No: these definitions create more alternatives than coercion theory offers. We need more alternatives in different contexts to make more effective decisions. Granted, the most effective decisions are not the easiest as issues become politicized.
Dimensions and Elements of Strategy
A straightforward way to think about options in context is with three basic dimensions and elements of strategy. No matter how complex the sensors and sense-making are, we can analyze them as combinations of preventive and causative, psychological and physical, and cooperative and confrontational effects. These apply to all strategy elements: ends, ways, and means. Expanding strategy with this analysis, we must synthesize combined effects, too. Notice that cooperation is competitive but based on mutually accepted rules, not authoritarian or non-transparent behavior.
For instance, an active defense might also coerce—coercive defense of people, facilities, or data. Defense also can persuade, such as a diplomat negotiating a dispute. Defense can deter as a porcupine or induce terms as an insurance policy against the fear of unsustainable loss. We know all this, but instead of expanding strategy, we limit warfare to brute force and “peacetime” strategy to coercion. That’s narrowing strategy in a world with an unprecedented information deluge.
This model offers 28 possible combinations of effects (if n is the number of discrete effects and x is the number of effects in a combined effect, there are Σ(n − x) possible combinations). Those combinations consist of seven double combinations of the eight effects, six triples, five quadruples, four quintuples, three sextuples, two septuples, and one octuple. That’s good enough for starters in any context. It’s also sufficient to train a generative AI.
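The count of 28 can be reproduced programmatically. A minimal sketch, assuming (as the tallies of seven doubles down to one octuple imply) that each combined effect is a contiguous run of two or more effects in a fixed ordering of the eight:

```python
from collections import Counter

# The eight basic effects in a fixed order (abbreviations from the article).
EFFECTS = ["Cr", "Df", "Cp", "Dt", "In", "Sc", "Pr", "Ds"]
n = len(EFFECTS)

# Assumption: a combined effect is a contiguous run of 2+ effects, which
# yields the article's tallies: 7 doubles, 6 triples, ..., 1 octuple.
runs = [tuple(EFFECTS[i:i + size])
        for size in range(2, n + 1)
        for i in range(n - size + 1)]

tally = Counter(len(r) for r in runs)
# The closed form Sigma(n - x) for x = 1..n-1 gives the same total.
assert sum(n - x for x in range(1, n)) == len(runs) == 28
```

Under this assumption, `tally` maps each combination size to its count (`tally[2] == 7`, …, `tally[8] == 1`), matching the article’s enumeration.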
These basic interactions simplify reality. There is so much more to grasp. AI can identify at least six dimensions of relationships, while we humans struggle with visualizing three or four. We must contextualize them, too. Fortunately, humans are ahead there. We determine the context that makes sense to us as AI threatens to outstrip our capacity for control.
Expanding Strategy to Control AI
Humans are on it, but we need an expanding strategy to control AI. Current research on “superalignment” seeks human control of superintelligence. Technology is changing faster than ever, so strategy must stay ahead.
Here are four quick examples.
Human-directed AI explodes strategy into inexhaustible combinations of potential effects, good and bad. Medical researchers use AI to develop anti-cancer drugs, some inexplicably. Malicious humans use AI to weaponize FIGURE 1 in reverse. The race to discover biochemical compounds that attack nervous systems is always on in authoritarian regimes, while democracies develop guidelines for ethical AI.
Autonomous weapons in kill chains are necessary but narrow applications of AI. Compare them to influence networks. Virtual influencers outperform humans using hyper-realistic images and text with quickly developed code.
Generative AI can detect anomalous behaviors and suspicious code. AI that scans code for vulnerabilities can help increase productivity or find ways around defenses.
Openly accessible AI like ChatGPT produces “hallucinations” and other low-cost false information. It’s an asymmetric strategy, like the Houthis firing relatively low-cost, high-tech missiles against more expensive targets. Why? To influence US policy supporting Israel. Chatbots and botnets also spread believable or difficult-to-verify false images and text with influential effects.
The call for public-private partnerships, integrated forces, and information dominance must include a broader strategic approach that considers all effects in all domains, all the time. Here are three recommendations.
Three Recommendations for Information Forces
- Information forces should always use influence cycles that subsume and shape kill chains, not just when deterrence of armed conflict fails.
To compete effectively in different contexts, information forces must consider forms of influence in combination with coercive compellence and coercive deterrence. Desired effects should be combinable to create and present more potential dilemmas for our opponents. The Info in Ops Cycle diagram helps visualize the fight over understanding and influence.
- Information forces should recognize confrontation and cooperation as competitive processes.
Confrontation is broad fighting that subsumes conflict by including non-violent strategies that democracies don’t recognize as warfare. Cooperation is a legitimate competition based on rules of behavior that authoritarian regimes routinely break and ignore or claim to follow. A competition continuum with cooperation and confrontation (including conflict) at each end can promote collaboration to create superior effects.
- Information forces should consider combined effects and develop concepts of influence to generate them.
A concept of influence explains how an activity, task, or operation intends to affect the will or capability of a selected audience or target for a desired effect. This addition to a CONOP also promotes anticipating and managing unintended effects.