ADT-4 Pro Model Release: The definitive threat intelligence for the AI era

ADT Technology

Reasoning-driven defense.

ADT is a purpose-built security model class that reasons continuously over infrastructure state, maintains competing threat hypotheses, and executes policy-bounded containment without a human in the loop.

ADT Pipeline · 5 Layers

L1 Context Ingestion - canonical event · IAM delta · network flow · CI/CD
L2 Threat Interpretation - hypothesis: lateral_movement (0.76) · recon (0.41)
L3 Action Proposal - candidates: block_ip · quarantine · suspend_user
L4 Constraint Validation - blast-radius: LOW · reversible: YES · policy: PASS
L5 Actuation + Audit - executed: quarantine · evidence sealed · 0.8 min

Closed-loop elapsed: 0.8 min · fully autonomous
Mean time to detect (MTTD): 0.8 min · vs. 287 min industry average
Mean time to respond (MTTR): 2.1 min · vs. 420 min industry average
False positive rate: 1.2% · vs. 23.5% in SIEM environments
Intent classification accuracy: 97%
Glemad Research · March 2026

The Model Architecture

Not a rule engine. Not a retrofitted LLM.

ADT models are pretrained explicitly for infrastructure security reasoning - on cloud audit logs, IAM state transitions, network flow semantics, and attacker tradecraft. Retrofitting a general-purpose language model to security logs addresses a fundamentally different problem: predicting text, not modelling attacker behaviour.

Security-Native Reasoning

Trained to model attacker intent, not text patterns.

General-purpose transformer models are trained to predict the next token in a text sequence. ADT models are trained to reason about infrastructure state transitions, authentication event sequences, network flow anomalies, and kill-chain progression - as structured security objects with known semantics. The difference is not fine-tuning. It is a different pretraining objective.

Pretraining corpus: cloud audit logs, IAM events, network flows, vulnerability data, attack simulations
Learns security-native semantics: what a lateral movement hop looks like across heterogeneous signals
Produces confidence-scored hypotheses at the intent level - not binary threat/no-threat classifications
Sub-2ms inference latency per event at production load
97% intent classification accuracy across six kill-chain hypothesis types (Glemad Research · March 2026)

Continuous State Reasoning

Hypotheses that persist and decay over time.

ADT does not process events independently. It maintains a persistent belief state per asset - a set of competing kill-chain hypotheses with confidence scores that update as new signals arrive. Hypotheses decay exponentially when no new evidence reinforces them (2-hour half-life). When three events arrive at the same asset in 90 minutes, all related hypotheses update simultaneously, not once per event.

Per-asset belief state maintained in a 24-hour rolling window
Bayesian-style update: each signal applies a confidence delta scaled by prior
Temporal decay prevents false escalation from stale evidence
is_escalating flag raised when multiple high-confidence hypotheses coexist
Belief State · asset-004 · 24h window · ESCALATING

lateral_movement: 76%
credential_abuse: 52%
data_exfiltration: 31%
privilege_escalation: 18%
reconnaissance: 9%

2 high-confidence hypotheses - active attack pattern detected
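The mechanics described above - a per-asset set of competing hypotheses, confidence deltas scaled by the prior, a 2-hour exponential half-life, and an escalation flag when multiple hypotheses run hot - can be sketched as follows. This is an illustrative model, not the production implementation; the class and method names are hypothetical.

```python
import time

HALF_LIFE_S = 2 * 3600  # 2-hour half-life, as stated in the spec


class BeliefState:
    """Per-asset set of competing kill-chain hypotheses (illustrative sketch)."""

    def __init__(self):
        self.confidence = {}   # hypothesis -> score in [0, 1]
        self.last_update = {}  # hypothesis -> unix timestamp of last evidence

    def _decayed(self, hypo, now):
        # Exponential decay: confidence halves every HALF_LIFE_S
        # when no new evidence reinforces the hypothesis.
        c = self.confidence.get(hypo, 0.0)
        dt = now - self.last_update.get(hypo, now)
        return c * 0.5 ** (dt / HALF_LIFE_S)

    def apply_signal(self, hypo, delta, now=None):
        # Bayesian-style update: the delta is scaled by the room left
        # above the prior, so repeated weak signals saturate toward 1.0
        # instead of exceeding it.
        now = time.time() if now is None else now
        prior = self._decayed(hypo, now)
        self.confidence[hypo] = prior + delta * (1.0 - prior)
        self.last_update[hypo] = now
        return self.confidence[hypo]

    def is_escalating(self, threshold=0.5, now=None):
        # Flag raised when multiple high-confidence hypotheses coexist.
        now = time.time() if now is None else now
        high = [h for h in self.confidence if self._decayed(h, now) >= threshold]
        return len(high) >= 2
```

With this rule, three related signals on one asset inside 90 minutes push two hypotheses above the threshold and raise the escalation flag, while an unreinforced hypothesis falls to half its score after two hours.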

The Five-Layer Pipeline

A closed loop from observation to audit.

Every event that enters the ADT system traverses a defined sequence of layers with explicit interfaces and no implicit state sharing between them. The result is a reproducible, auditable, and independently verifiable decision record for every threat response.

Layer 1 - Context Ingestion

Every signal source. One canonical model.

The ingestion layer normalises heterogeneous security signals into a single canonical event schema before any reasoning occurs. Cloud audit logs, IAM policy deltas, network flow summaries, identity session records, CI/CD events, workload metadata, and configuration drift signals are all converted to the same typed object representation. Downstream reasoning operates on this normalised form - it never touches raw log format.

Supported sources: AWS CloudTrail, Azure Activity Log, GCP Audit Log, syslog, Windows Event Log, Kubernetes audit
Canonical event schema: event_type, source, asset_id, entity graph, timestamp, severity, raw payload hash
Agent-based (on-premise) and agentless (cloud API) ingestion paths
Schema validation rejects malformed or schema-violating events at the boundary
45K+ events ingested and normalised per second across active deployments · single ingest schema across on-premise and all three hyperscalers
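The boundary behaviour above - one typed canonical object, with malformed events rejected before any reasoning occurs - can be sketched as a minimal normaliser. The field names follow the canonical schema listed above (event_type, source, asset_id, entity graph, timestamp, severity, raw payload hash); the function itself and its validation rules are hypothetical.

```python
import hashlib
import json
from dataclasses import dataclass


@dataclass(frozen=True)
class CanonicalEvent:
    # Mirrors the canonical event schema described above.
    event_type: str
    source: str
    asset_id: str
    timestamp: float
    severity: str
    entities: tuple        # flattened entity-graph edges
    raw_payload_hash: str  # hash of the raw record, for the audit trail


ALLOWED_SEVERITIES = {"low", "medium", "high", "critical"}


def normalise(source: str, raw: dict) -> CanonicalEvent:
    """Convert a raw log record to canonical form; reject malformed input at the boundary."""
    for key in ("event_type", "asset_id", "timestamp", "severity"):
        if key not in raw:
            raise ValueError(f"schema violation: missing {key!r}")
    if raw["severity"] not in ALLOWED_SEVERITIES:
        raise ValueError(f"schema violation: bad severity {raw['severity']!r}")
    # Hash the raw payload so downstream layers never need the original format.
    digest = hashlib.sha256(json.dumps(raw, sort_keys=True).encode()).hexdigest()
    return CanonicalEvent(
        event_type=raw["event_type"],
        source=source,
        asset_id=raw["asset_id"],
        timestamp=float(raw["timestamp"]),
        severity=raw["severity"],
        entities=tuple(map(tuple, raw.get("entities", []))),
        raw_payload_hash=digest,
    )
```

Downstream layers only ever see `CanonicalEvent` instances; a record missing a required field never enters the pipeline.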

Layer 2 - Threat Interpretation

Convert observed state into intent-level hypotheses.

The interpretation layer applies the ADT-Signal classifier (fine-tuned transformer, ONNX runtime) and ADT-Detect anomaly model (Isolation Forest) in parallel to each normalised event, then assembles the per-asset belief state update. The output is not a label - it is a set of updated hypothesis confidence scores with the observed evidence attached.

ADT-Signal: 6-class intent classifier producing per-class probability scores
ADT-Detect: behavioural baseline anomaly score generated from per-asset feature history
Belief state update: Bayesian delta applied to all relevant kill-chain hypotheses simultaneously
Event timeline maintained per asset for the last N events (kill-chain context for Layer 3)
6 kill-chain hypothesis types tracked simultaneously per asset: lateral movement · credential abuse · exfiltration · persistence · recon · privilege escalation
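One plausible way to fuse the two model outputs into belief-state deltas - per-class intent probabilities weighted by the behavioural anomaly score, with a noise floor so weak signals do not accumulate - is sketched below. The fusion rule and the floor value are illustrative assumptions, not the published algorithm.

```python
# The six kill-chain hypothesis types tracked per asset.
KILL_CHAIN = ("lateral_movement", "credential_abuse", "data_exfiltration",
              "persistence", "reconnaissance", "privilege_escalation")


def hypothesis_deltas(class_probs, anomaly_score, floor=0.05):
    """Fuse intent-classifier probabilities with an anomaly score.

    class_probs:  dict hypothesis -> probability from the 6-class classifier.
    anomaly_score: 0..1 deviation from the per-asset behavioural baseline.
    Returns a confidence delta for every tracked hypothesis, so all of them
    update simultaneously rather than one label per event.
    """
    deltas = {}
    for hypo in KILL_CHAIN:
        p = class_probs.get(hypo, 0.0)
        # Weight the intent probability by how anomalous the behaviour is;
        # zero out sub-floor deltas so noise never drifts the belief state.
        d = p * anomaly_score
        deltas[hypo] = d if d >= floor else 0.0
    return deltas
```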

Layer 3 - Action Proposal

Constraint-aware reasoning over candidate actions.

ADT-Reason (a compact quantised LLM operating under a structured security reasoning prompt) evaluates the assembled context - event, belief state, applicable policies, recent event timeline - and produces a JSON-structured action recommendation with confidence, reasoning chain, and policy mappings. The system prompt enforces JSON-only responses and requires explicit threat hypothesis scoring.

Input: belief state summary, Layer 1 event, matched security policies, last-N event timeline
Output: recommended_action, confidence (0–1), policy_matches[], threat_hypotheses[]
Temperature 0.1 - low-variance, consistent output for the same input context
JSON schema validation on every response - malformed output falls back to manual_review
768 maximum output tokens per reasoning decision - bounded and deterministic. No open-ended generation: all outputs conform to a fixed JSON schema.
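The schema-validation-with-fallback behaviour described above can be sketched like this: any response that fails to parse, misses a required field, or carries an out-of-range confidence collapses to `manual_review` rather than being acted on. The field names come from the output contract listed above; the function itself is an illustrative assumption.

```python
import json

# Safe default returned whenever the model output fails validation.
MANUAL_REVIEW = {"recommended_action": "manual_review", "confidence": 0.0,
                 "policy_matches": [], "threat_hypotheses": []}


def parse_reasoning_output(raw: str) -> dict:
    """Validate ADT-Reason output against the fixed schema; never act on malformed JSON."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return dict(MANUAL_REVIEW)
    conf = obj.get("confidence") if isinstance(obj, dict) else None
    ok = (
        isinstance(obj, dict)
        and isinstance(obj.get("recommended_action"), str)
        and isinstance(conf, (int, float))
        and not isinstance(conf, bool)
        and 0.0 <= conf <= 1.0
        and isinstance(obj.get("policy_matches"), list)
        and isinstance(obj.get("threat_hypotheses"), list)
    )
    return obj if ok else dict(MANUAL_REVIEW)
```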

Layer 4 - Constraint-Validated Gating

Every action passes five checks before execution.

The gating layer prevents unauthorised, unsafe, or disproportionate actions. It validates the proposed action against policy admissibility, confidence threshold for the action class, blast radius (asset scope and downstream dependencies), reversibility (can the action be rolled back?), and whether human approval is required. An action that fails any single gate is rejected and escalated - it is never partially executed.

Policy admissibility: action type must be in the permitted class for the organisation tier
Confidence threshold: Class 1 requires ≥0.60 · Class 2 requires ≥0.75 · Class 3 always requires human
Blast-radius estimation: scope of affected assets and downstream dependencies evaluated
Rate-limiting guardrail: maximum N actions per asset per time window to prevent runaway actuation
Constraint Validation Gate · L4

Policy admissibility (action in permitted class for org tier) → PASS
Blast-radius (scope: 1 asset · 0 downstream dependencies) → LOW
Reversibility (full rollback available within 15 min) → YES
Confidence threshold (required: 0.70 · actual: 0.76) → MET
Human approval required (Class 1 action · autonomous gate cleared) → NO

ALL GATES PASSED - action authorised for execution
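The all-or-nothing gate semantics can be sketched as a single function: every check is evaluated, and one failure rejects the whole action. The thresholds follow the values stated in this document (Class 1 ≥ 0.60, Class 2 ≥ 0.75, Class 3 never autonomous); the type and field names are hypothetical.

```python
from dataclasses import dataclass

# Illustrative thresholds from the action taxonomy described in this document.
CLASS_THRESHOLDS = {1: 0.60, 2: 0.75}


@dataclass
class Proposal:
    action_class: int             # 1, 2, or 3
    confidence: float
    permitted_classes: frozenset  # from the organisation-tier policy
    blast_radius_assets: int
    reversible: bool


def gate(p: Proposal, max_blast: int = 3):
    """Run the five checks; any single failure rejects the whole action."""
    checks = {
        "policy_admissibility": p.action_class in p.permitted_classes,
        # Class 3 gets an unreachable threshold: no confidence clears it.
        "confidence_threshold": p.confidence
            >= CLASS_THRESHOLDS.get(p.action_class, float("inf")),
        "blast_radius": p.blast_radius_assets <= max_blast,
        "reversibility": p.reversible,
        "human_not_required": p.action_class < 3,
    }
    failed = [name for name, ok in checks.items() if not ok]
    # Authorised only when every gate passes; failures are returned for escalation.
    return len(failed) == 0, failed
```

A rejected proposal carries the list of failed gates with it, so the escalation record explains exactly why execution was refused.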

Layer 5 - Actuation and Audit

Execution with an immutable chain of custody.

The actuation layer executes the validated action against the target environment - host quarantine, IP block, credential rotation, process termination, session revocation - and writes an immutable evidence bundle to the audit ledger. The bundle contains the full reasoning chain: what was observed, what was inferred, what constraints were checked, what action was taken, and post-action verification results.

Executors: on-premise iptables/process kill, AWS/Azure/GCP firewall and IAM APIs, cross-platform credential rotation
Post-execution verification: threat signal checked after containment to confirm reduction
Evidence bundle: timestamped, SHA-256 hashed, and chain-linked to prior events in the ledger
Rollback mechanism: Class 1 and Class 2 actions fully reversible within the platform
ADT Evidence Ledger · Immutable · SEALED

14:07:03.012 OBSERVATION - exec chain anomaly on host-004 · confidence 0.83
14:07:03.048 HYPOTHESIS - lateral_movement raised to 0.76 · belief updated
14:07:03.091 GATE_PASS - constraint validation passed · Class 1 action
14:07:03.144 ACTUATION - quarantine(host-004) executed · API confirmed
14:07:03.201 EVIDENCE - bundle sealed · SHA-256 hash anchored
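The chain-linking described above - each bundle timestamped, SHA-256 hashed, and bound to the prior entry - is the standard hash-chain construction, sketched below. The class and method names are illustrative, not the platform's API.

```python
import hashlib
import json


class EvidenceLedger:
    """Append-only ledger: each entry's hash covers the previous entry's hash,
    so altering any sealed bundle breaks every hash after it."""

    GENESIS = "0" * 64  # anchor for the first entry

    def __init__(self):
        self.entries = []

    def seal(self, bundle: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        payload = json.dumps(bundle, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"bundle": bundle, "prev": prev, "hash": digest})
        return digest

    def verify(self) -> bool:
        # Recompute every link; any tampered bundle or broken link fails.
        prev = self.GENESIS
        for e in self.entries:
            payload = json.dumps(e["bundle"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True
```

Because verification needs only the bundles and the genesis value, an auditor can independently re-derive the entire chain without trusting the platform that wrote it.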

Action Taxonomy

Every action is classified before it is executed.

ADT defines four formal action classes with increasing impact and decreasing autonomy. The system cannot execute a higher-class action than the confidence level and policy configuration permit. This is enforced at the gating layer - not as a soft UI setting.

Class 0 - Observational

Evidence collection. Always permitted.

Class 0 actions do not modify any system state. They collect, log, and record. Every detection event automatically generates Class 0 actions regardless of confidence level or policy configuration. These cannot be disabled or gated - the system always observes and always records.

Hypothesis logging and confidence score updates to the belief state
Evidence capture: raw event, normalised event, and reasoning output recorded
Audit entry written: timestamped, hashed, and committed to the evidence ledger
Risk score updated in the asset inventory for dashboard visibility
No confidence threshold or policy gate - Class 0 always executes. Every event → evidence. No exceptions.

Class 1 - Reversible Containment

Low-impact temporary restrictions. Autonomous when confidence is met.

Class 1 actions are temporary, scoped, and fully reversible. They restrict access or capability without permanently altering system state. The default confidence threshold is 0.60. Each action includes a defined rollback mechanism that restores the prior state without data loss. These are the most frequently executed autonomous actions in production.

Session revocation - active authentication sessions terminated for a specific identity
API throttling - rate limiting applied to an identity or service account
Step-up authentication enforcement - MFA challenges injected into active sessions
Privilege suspension - temporary removal of elevated permissions pending investigation
Network isolation - inbound/outbound traffic restriction for a specific host or IP
95% of production containment actions fall in Class 1 - all autonomous (Glemad production benchmark · 680,000 assets)

Class 2 - Semi-Reversible Enforcement

Higher-impact actions. Require elevated confidence and explicit policy.

Class 2 actions affect system availability or access in ways that require deliberate effort to reverse. The default confidence threshold is 0.75. These actions require a matching policy rule that explicitly grants the action for the event type and severity combination. Without a matching policy, the system falls back to Class 1 plus a human escalation.

Host quarantine - full network isolation of a workload (reversible via policy release)
Credential rotation - forced regeneration of API keys, certificates, and service account secrets
Workload suspension - container or VM suspend pending investigation
Policy enforcement lock - write-lock applied to IAM policies in a specific scope
Configuration rollback - infrastructure configuration reverted to last known-good state
0.75 minimum ADT confidence score required to execute a Class 2 action. Configurable per organisation tier - the floor cannot be lowered below 0.65.

Class 3 - Irreversible / Human Required

Permanent or broad actions. Never automated.

Class 3 actions cannot be reversed or have an unacceptably broad blast radius. They are never executed autonomously regardless of confidence level, policy configuration, or organisational tier. The ADT system generates a fully documented escalation package - event context, reasoning chain, belief state, and suggested action - but human approval is the only gate that can release execution.

Resource deletion - permanent removal of cloud resources, storage, or compute
Account disablement - permanent lock of a user account or service identity
Wide-scope policy rewrite - changes to IAM policies affecting more than one asset
Data destruction - secure erasure or purge operations
Broad network isolation - blocking entire subnets or accounts
0 Class 3 actions executed without explicit human approval - in any deployment. Hard constraint. Not a configuration. Not bypassable.
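The four-class taxonomy and its autonomy rule reduce to a small decision function: Class 0 always runs, Classes 1 and 2 run at their stated thresholds (Class 2 only with an explicit policy match), and Class 3 is excluded unconditionally. A minimal sketch, with hypothetical names:

```python
from enum import IntEnum


class ActionClass(IntEnum):
    OBSERVATIONAL = 0    # evidence collection - always permitted, never gated
    REVERSIBLE = 1       # autonomous at confidence >= 0.60
    SEMI_REVERSIBLE = 2  # autonomous at confidence >= 0.75 with a matching policy
    IRREVERSIBLE = 3     # never autonomous - human approval is the only gate


def may_auto_execute(cls: ActionClass, confidence: float, policy_match: bool) -> bool:
    """Autonomy rule per the taxonomy above; Class 3 is unconditionally excluded."""
    if cls == ActionClass.OBSERVATIONAL:
        return True
    if cls == ActionClass.REVERSIBLE:
        return confidence >= 0.60
    if cls == ActionClass.SEMI_REVERSIBLE:
        return policy_match and confidence >= 0.75
    return False  # Class 3: no confidence level or policy can release execution
```

Note the final branch is a constant: there is no parameter combination under which a Class 3 action returns True.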

Design principles

Five foundational decisions

These are architectural constraints, not product differentiators. They define what ADT will and will not do - and why the system behaves the way it does in production.

Defense-first pretraining
ADT models are trained to internalise infrastructure semantics, attacker tradecraft, and policy constraints as first-class concepts - not language fluency. The model is not adapted from a general-purpose base.
Continuous state reasoning
The core unit of reasoning is not an event - it is a state transition over infrastructure. ADT maintains competing hypotheses over time rather than producing a verdict per event.
Integrated actuation under constraints
The reasoning layer and the actuation layer are part of the same pipeline. The constraint gate runs before execution, not as an afterthought. There is no way to skip validation.
Zero implicit trust
No model output, external tool result, or retrieved context is acted on without explicit validation. Every action is justified, checked, logged, and reversible where possible.
Guardrailed learning
Model and policy changes are tested, staged, monitored, and rollbackable. The retraining pipeline feeds from analyst-reviewed incident data - not raw event feedback.
Faster detection than legacy SIEM: 359×
Mean time to detect (MTTD): 0.8 min
False positive rate in production: 1.2%
MITRE ATT&CK coverage: 100%

Read the full architecture paper.

The peer-reviewed ADT research paper covers the full pipeline, action taxonomy, evaluation methodology, and benchmark results. Published on Zenodo, March 2026.

ADT-4 Pro - Reinforced Architecture for Signal Interpretation, Drift Detection, and System-Level Reasoning