AI in Defence and Security: Capability, Constraints, and Ethical Reality
AI is already embedded in defence and security systems. It is not speculative, and it is not optional. What is optional — and frequently mishandled — is how these systems are designed, constrained, and governed.
Public discussion of AI in defence tends to oscillate between two extremes: exaggerated fear of autonomous weapons, and uncritical optimism about technological superiority. Neither is useful. Real-world systems operate in between, constrained by physics, law, ethics, and organisational reality.
This article focuses on what AI can actually do in defence and security contexts, where it consistently fails, and what responsible deployment requires.
Capability: Where AI Genuinely Adds Value
AI performs well in defence and security when it is used to compress information, not replace judgement.
Effective applications include:
- Sensor fusion across multiple modalities
- Target detection and classification under time pressure
- Anomaly detection in large data streams
- Prioritisation of threats or alerts
- Logistics forecasting and maintenance planning
- Intelligence triage and pattern discovery
In these domains, the value comes from scale and speed. AI can process volumes of data no human team could manage and surface candidates for attention.
Crucially, these systems support decisions rather than make them.
When AI is positioned as an accelerant for human cognition, it tends to be effective. When it is positioned as a substitute, it becomes dangerous.
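To make the decision-support pattern concrete, here is a minimal sketch of alert triage in Python, assuming a hypothetical Alert structure and a model-supplied priority score. The system only ranks and surfaces candidates; every decision remains with a human analyst.

```python
# Minimal sketch of decision-support triage: the model only ranks and
# surfaces candidates for attention; a human analyst makes every decision.
# The Alert structure and triage function are illustrative, not taken from
# any real system.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # sensor or feed that produced the alert
    description: str   # human-readable summary
    score: float       # model-estimated priority in [0, 1]

def triage(alerts: list[Alert], capacity: int) -> list[Alert]:
    """Return the highest-priority alerts, capped at what the human
    team can realistically review. Nothing is auto-actioned."""
    ranked = sorted(alerts, key=lambda a: a.score, reverse=True)
    return ranked[:capacity]

if __name__ == "__main__":
    queue = [
        Alert("radar-07", "unidentified track, low altitude", 0.91),
        Alert("netflow",  "unusual outbound traffic volume", 0.42),
        Alert("imagery",  "vehicle cluster near checkpoint", 0.77),
    ]
    for alert in triage(queue, capacity=2):
        print(f"[review] {alert.source}: {alert.description} ({alert.score:.2f})")
```

The important design choice is what the function does not do: it never triggers an action, it only compresses a large queue into something a human team can work through.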
Constraints: Physics, Environment, and Adversaries
Defence systems do not operate in controlled environments.
They face:
- Denied or degraded communications
- Power and compute constraints
- Harsh physical conditions
- Incomplete or deceptive data
- Active adversaries attempting manipulation
These constraints shape what AI can realistically do.
For example:
- Large, cloud-dependent models are often unusable in contested environments
- Training data rarely reflects adversarial behaviour accurately
- Models are probed deliberately for weaknesses
- Sensors are jammed, spoofed, or destroyed
Any AI system that assumes clean data, stable connectivity, or cooperative inputs is unsuitable by default.
Capability without resilience is not capability.
Adversarial Pressure Changes the Design Rules
In civilian systems, edge cases are accidental. In defence and security, they are often intentional.
Adversaries will:
- Manipulate inputs to induce false positives or negatives
- Exploit confidence thresholds
- Observe and adapt to system behaviour
- Target brittle assumptions
This forces a different design posture:
- Conservative defaults
- Explicit uncertainty handling
- Redundant sensing where possible
- Avoidance of single points of failure
- Continuous red-teaming and stress testing
AI systems must be designed on the assumption that someone is actively trying to break them. Systems designed without that assumption will eventually be broken.
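As one illustration of this posture, here is a minimal Python sketch of conservative sensor fusion under adversarial pressure, with illustrative thresholds and labels of my own choosing: two independent sensors must agree with adequate confidence before a detection is reported at all, and anything else falls back to a safe default.

```python
# Minimal sketch of a conservative fusion rule: redundant sensing plus
# explicit uncertainty handling. Disagreement or low confidence falls back
# to the safe default rather than a detection. Thresholds, labels, and the
# Reading type are illustrative assumptions.
from typing import NamedTuple

class Reading(NamedTuple):
    label: str         # e.g. "threat" or "clutter"
    confidence: float  # calibrated confidence in [0, 1]

SAFE_DEFAULT = "refer_to_operator"
MIN_CONFIDENCE = 0.9

def fuse(primary: Reading, secondary: Reading) -> str:
    """Report a detection only when redundant sensors agree confidently;
    otherwise return the conservative default."""
    agree = primary.label == secondary.label
    confident = min(primary.confidence, secondary.confidence) >= MIN_CONFIDENCE
    if agree and confident:
        return primary.label
    return SAFE_DEFAULT  # disagreement or uncertainty: do not act on it

print(fuse(Reading("threat", 0.95), Reading("threat", 0.93)))   # threat
print(fuse(Reading("threat", 0.95), Reading("clutter", 0.97)))  # refer_to_operator
```

A spoofed or jammed sensor then degrades the system to "ask a human" rather than to a confident wrong answer, which is the point of avoiding single points of failure.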
Autonomy Is Not a Binary Choice
Much of the ethical debate around AI in defence focuses on “autonomous weapons” as a binary category. In reality, autonomy exists on a spectrum.
Key dimensions include:
- Speed of action
- Degree of human oversight
- Reversibility of decisions
- Scope of authority
- Contextual awareness
Most deployed systems today operate in constrained autonomy:
- AI proposes, humans approve
- AI acts within narrow bounds
- AI disengages under uncertainty
- Humans retain escalation authority
The ethical and operational challenge is not eliminating autonomy, but bounding it tightly and transparently.
Unbounded autonomy is rarely defensible. Zero autonomy is often impractical.
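A bounded-autonomy gate can be sketched directly from the list above. The sketch below assumes a hypothetical upstream model that emits a proposed action with a confidence estimate; the gate encodes three of the constraints named in the text: disengage under uncertainty, act only within a narrow pre-authorised and reversible scope, and route everything else to a human. Names and thresholds are illustrative.

```python
# Minimal sketch of a bounded-autonomy gate. The Proposal type, the
# authorised action set, and the confidence floor are illustrative
# assumptions, not drawn from any fielded system.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    DISENGAGE = "disengage"        # uncertainty too high: stand down
    EXECUTE = "execute"            # inside pre-authorised, reversible scope
    AWAIT_HUMAN = "await_human"    # anything else needs explicit approval

@dataclass
class Proposal:
    action: str
    confidence: float
    reversible: bool

# Narrow scope defined by humans in advance, not learned by the model.
AUTHORISED_ACTIONS = {"track", "illuminate"}
CONFIDENCE_FLOOR = 0.85

def gate(p: Proposal) -> Decision:
    if p.confidence < CONFIDENCE_FLOOR:
        return Decision.DISENGAGE
    if p.action in AUTHORISED_ACTIONS and p.reversible:
        return Decision.EXECUTE
    return Decision.AWAIT_HUMAN

print(gate(Proposal("track", 0.95, reversible=True)))    # Decision.EXECUTE
print(gate(Proposal("engage", 0.97, reversible=False)))  # Decision.AWAIT_HUMAN
print(gate(Proposal("track", 0.40, reversible=True)))    # Decision.DISENGAGE
```

Note that high confidence alone never widens the system's authority: scope is bounded by the human-defined set, not by the model's self-assessment.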
Explainability Is an Operational Requirement
In defence and security, explainability is not about public transparency. It is about command responsibility.
Operators and commanders must be able to answer:
- Why did the system behave this way?
- What information did it rely on?
- How confident was it?
- What alternatives existed?
This does not require every model to be simple. It requires the system to be interrogable.
Black-box behaviour undermines trust, slows decision-making, and creates unacceptable legal exposure. Systems that cannot explain themselves will be bypassed or disabled — often at the worst possible time.
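One way to make a recommendation interrogable is to record, at decision time, exactly the things commanders will later ask about: what the system relied on, how confident it was, what alternatives existed, and why. The structure below is a minimal sketch under those assumptions; the field names and example values are illustrative.

```python
# Minimal sketch of a decision record supporting after-the-fact
# interrogation and audit. The DecisionRecord fields are illustrative.
from dataclasses import dataclass, field
import json
import time

@dataclass
class DecisionRecord:
    recommendation: str
    confidence: float
    inputs_used: list[str]           # sensors/feeds the output relied on
    alternatives: dict[str, float]   # rejected options and their scores
    rationale: str                   # short human-readable justification
    timestamp: float = field(default_factory=time.time)

    def to_log(self) -> str:
        """Serialise for an append-only audit log."""
        return json.dumps(self.__dict__, sort_keys=True)

record = DecisionRecord(
    recommendation="classify track 214 as hostile",
    confidence=0.81,
    inputs_used=["radar-07", "iff-interrogation", "flight-plan-db"],
    alternatives={"neutral": 0.12, "unknown": 0.07},
    rationale="no IFF response; kinematics inconsistent with filed plans",
)
print(record.to_log())
```

The model itself can remain complex; what matters is that every output arrives with enough recorded context to answer the four questions above.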
Ethical Reality: Constraints Are Enforced by Design
Ethics in defence AI is often discussed as policy. In practice, it is an engineering problem.
Ethical constraints must be enforced through:
- Explicit limits on autonomous action
- Hard-coded rules of engagement
- Conservative behaviour under ambiguity
- Mandatory human authorisation for escalation
- Comprehensive logging and auditability
Relying on training data or post-hoc review is insufficient.
If the system can violate ethical constraints under pressure, eventually it will.
Responsible design assumes failure and constrains its consequences.
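In engineering terms, that means the constraints live outside the model and cannot be overridden by its outputs. The sketch below illustrates the idea with placeholder rules, not real rules of engagement: hard-coded checks that ignore model confidence entirely, require explicit human authorisation for escalatory actions, and log every evaluation.

```python
# Minimal sketch of ethics enforced by design: hard-coded checks sit
# outside the model, take no account of its confidence, and log every
# evaluation. The specific rule sets are illustrative placeholders.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("constraint-layer")

PROHIBITED_TARGET_CLASSES = {"civilian", "protected-site"}
ESCALATORY_ACTIONS = {"engage", "jam-wide-area"}

def permitted(action: str, target_class: str, human_authorised: bool) -> bool:
    """Return True only if the proposal passes every hard constraint.
    Model confidence is deliberately not an input to this function."""
    if target_class in PROHIBITED_TARGET_CLASSES:
        log.info("blocked: prohibited target class %s", target_class)
        return False
    if action in ESCALATORY_ACTIONS and not human_authorised:
        log.info("blocked: %s requires explicit human authorisation", action)
        return False
    log.info("permitted: %s against %s", action, target_class)
    return True

permitted("engage", "military-vehicle", human_authorised=False)  # blocked
permitted("track", "military-vehicle", human_authorised=False)   # permitted
```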
Accountability Cannot Be Delegated to Systems
One of the most dangerous narratives around AI in defence is that responsibility can be shifted to the system.
It cannot.
Legal and moral accountability always rests with human actors:
- Commanders
- Operators
- Designers
- Decision-makers
AI systems must therefore be designed to support accountability, not obscure it. This means:
- Clear ownership of decisions
- Traceable system behaviour
- Defined override mechanisms
- Documented assumptions and limits
If accountability becomes ambiguous, the system is not deployable — regardless of capability.
Data Limitations Are Structural, Not Temporary
Unlike commercial AI, defence systems cannot rely on abundant, well-labelled data.
Challenges include:
- Rare events
- Classified or sensitive data
- Rapidly changing threat profiles
- Adversarial adaptation
This limits what purely data-driven approaches can achieve.
As a result, defence AI often relies on:
- Hybrid systems combining rules and learning
- Strong priors and domain knowledge
- Simulation and synthetic data
- Conservative generalisation
Expecting defence AI to behave like consumer AI is a category error.
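A hybrid rules-plus-learning system can be sketched in a few lines. In the example below the learned model, its features, and the thresholds are all illustrative stand-ins: a rule-based prior runs before the model, and a crude training-envelope check limits how far the model is allowed to generalise, defaulting to "unknown" outside it.

```python
# Minimal sketch of a hybrid rules-plus-learning classifier with
# conservative generalisation. The stand-in model, features, envelope,
# and thresholds are illustrative assumptions.
def learned_score(speed_mps: float, altitude_m: float) -> float:
    """Stand-in for a trained model's threat score in [0, 1]."""
    return min(1.0, speed_mps / 700.0)

TRAINING_ENVELOPE = {"speed_mps": (0.0, 900.0), "altitude_m": (0.0, 15000.0)}

def classify(speed_mps: float, altitude_m: float) -> str:
    # Rule-based prior: domain knowledge applied before any learned output.
    if speed_mps < 1.0:
        return "stationary-object"
    # Conservative generalisation: refuse to extrapolate outside the
    # envelope the model was trained on.
    lo_s, hi_s = TRAINING_ENVELOPE["speed_mps"]
    lo_a, hi_a = TRAINING_ENVELOPE["altitude_m"]
    if not (lo_s <= speed_mps <= hi_s and lo_a <= altitude_m <= hi_a):
        return "unknown-review-required"
    score = learned_score(speed_mps, altitude_m)
    return "threat-candidate" if score > 0.8 else "benign"

print(classify(650.0, 8000.0))   # threat-candidate
print(classify(1200.0, 8000.0))  # unknown-review-required
```

The learned component does the pattern recognition it is good at; the rules and the envelope check carry the domain knowledge and the caution that scarce, adversarial data cannot supply.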
Where AI Should Not Be Used
Restraint matters most in defence.
AI should not be used when:
- Errors are irreversible and catastrophic
- Context dominates pattern recognition
- Legal judgement is required
- The system cannot be meaningfully constrained
- Failure modes are poorly understood
The pressure to deploy “because others are doing it” leads to fragile systems and strategic risk.
Choosing not to automate certain decisions is often the most responsible choice.
The Strategic Risk of Overconfidence
The greatest danger of AI in defence is not malfunction. It is overconfidence.
Systems that appear authoritative can:
- Narrow human judgement prematurely
- Create automation bias
- Mask uncertainty
- Encourage risk-taking based on false precision
Designing against this requires humility:
- Visible uncertainty
- Encouragement of challenge
- Training that emphasises limits
- Cultural reinforcement that AI is advisory, not authoritative
Technology does not remove responsibility. It amplifies its consequences.
AI in defence and security is neither salvation nor catastrophe. It is a tool — powerful, limited, and dangerous when misunderstood.
The organisations that use it well focus less on capability and more on constraints. They design for failure, assume adversarial pressure, and enforce ethical boundaries through engineering, not rhetoric.
In this domain, intelligence is not the primary virtue.
Control is.