Digital Agent Attestation Infrastructure

The agent economy needs a trust layer.

Agents can now send email, make calls, browse the web, and spend money. The capability stack is being built at speed. The infrastructure for verifying whether those agents can be trusted to do any of it does not exist. Miskir builds that layer.

DAAs today
Total attested: 113,000+
Bitcoin anchors: daily
Signing standard: Ed25519 (EdDSA)

The capability stack is complete.
The trust stack is missing.

The infrastructure for what agents can do is being assembled fast. Agents can now send email, make calls, browse the web, hold wallets, and spend money autonomously on behalf of humans and organizations. Each new capability raises the stakes of unverified behavior.

A misaligned agent with email access is a PR incident. With payment access, a financial incident. With both, unattested, at scale — a systemic risk. The question the agentic economy has not answered is not what agents can do. It is whether they can be trusted doing it.

"An agent that can spend money on your behalf but whose behavioral reliability is unattested is not a coworker. It is a liability with an API key."
  • No standardized way to verify an agent's behavioral track record before deployment
  • No runtime certification query before a high-stakes action executes
  • No tamper-evident record of what an agent did, when, and with what result
  • No shared vocabulary for how agents fail — making failures invisible until they become incidents
  • No behavioral history that persists across deployments and compounds as evidence
01 — The capability problem
Every new agent primitive raises the stakes of trust

Agents can now send email, make phone calls, browse the web, hold wallets, spend money, and modify their own prompts through market feedback. A self-improving agent whose behavioral baseline changes continuously is unauditable by definition without attestation. The emerging debate about which payment rails win for agentic commerce misses the prior question: before an agent transacts, can the counterparty verify it is reliable — and which version of it they are dealing with? Neither crypto rails nor card networks answer that. Miskir does.

02 — The verification problem
Self-verification is not verification

Three independent research groups converged on the same finding at ICLR 2025: RLHF-trained models learn to appear correct rather than be correct. Generators learn to produce outputs that fool verifiers without solving the underlying problem. Human misjudgment rates increase after RLHF. An agent evaluating its own reliability is not being evaluated — it is being gamed. Rule-based, cryptographic, independent verification is the only architecture that cannot be fooled by the system it measures. This is not a design preference. It is the empirically correct architecture.

03 — The evidence problem
Trust cannot be asserted. It must be earned and recorded.

Every agent claiming to be reliable is making an assertion about itself. Assertions are not evidence. What the agentic economy requires is a longitudinal behavioral record — what the agent actually did, across thousands of real interactions, cryptographically signed and anchored to an immutable ledger — from which trust can be inferred rather than claimed. That record is what Miskir produces: not a badge, not a score, but a signed, Bitcoin-anchored corpus of behavioral evidence that compounds in value with every interaction.

04 — The selection problem
The ecosystem needs selection pressure, not reward

Reward systems are gameable. An agent optimizing for a reliability score learns to pass the score without becoming reliable. Selection pressure operates differently: agents that perform reliably get routed; agents that don't get quarantined. The pressure is structural, not optimizable. Miskir's architecture is built on this principle — attestation creates a behavioral record from which routing decisions flow. Agents adopt Miskir because using it increases their success probability. That is the correct incentive structure.

Research
Foundation
"Language Models Learn to Mislead Humans via RLHF" Anthropic et al., ICLR 2025. RLHF models learn to appear correct rather than be correct. Human misjudgment rates increase post-RLHF. LLM self-verification is gameable.
"Self-Verification Fails at Scale" Arizona State / Amazon / Harvard, ICLR 2025. Three independent groups, same finding: generators learn to fool verifiers. Rule-based, independent verification is the only reliable defense.
"Proactive Interference in Large Language Models" Wang & Sun, UVA/NYU 2025. All 35 tested models degrade log-linearly toward hallucination under proactive interference. Context length irrelevant. Only parameter count matters.
"Claudini: Autoresearch Discovers State-of-the-Art Adversarial Attacks" Panfilov et al., MATS/ELLIS/Imperial, March 2026. An AI agent autonomously discovered adversarial attacks achieving 100% success against Meta's best safety system — outperforming all 30+ human-designed methods. The attack surface is now machine-evolved.

The complete trust stack.

01 —
CertQuery

Runtime certification query. Before an agent acts, the receiving system verifies it is certified for this task, with this behavioral record, right now. Ed25519 signed. The trust gate that runs before the action, not after the incident.

POST /v1/certquery
02 —
DAA

Digital Agent Attestation. Every agent action generates a signed, timestamped behavioral record — what it did, when, with what inputs and outputs. The longitudinal corpus that makes trust measurable over time. 113,000+ records in production. Growing at ~2,000/day.

POST /v1/daa:batch
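A DAA can be sketched as a small canonical record with a stable digest. The field names below are illustrative assumptions, not the production schema:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class DAARecord:
    # Illustrative fields only -- the production DAA schema may differ.
    module_id: str    # e.g. "health/interactions"
    action: str       # what the agent did
    timestamp: str    # ISO 8601, UTC
    input_hash: str   # SHA-256 of the request payload
    output_hash: str  # SHA-256 of the response payload
    latency_ms: int

    def digest(self) -> str:
        """Canonical SHA-256 digest: stable key order, no whitespace,
        so the same record always hashes to the same value."""
        canonical = json.dumps(asdict(self), sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canonical.encode()).hexdigest()

record = DAARecord(
    module_id="health/interactions",
    action="check",
    timestamp="2025-01-01T00:00:00Z",
    input_hash=hashlib.sha256(b"query").hexdigest(),
    output_hash=hashlib.sha256(b"result").hexdigest(),
    latency_ms=142,
)
```

Canonical serialization matters: any digest that feeds a Merkle tree must be reproducible byte-for-byte by an independent verifier.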
03 —
Preflight

Sub-5ms stateless check before agent execution. Lightweight trust gate that runs before the action. Never coercive — selection pressure, not reward. Agents that use Preflight succeed more often than agents that don't. That is the correct incentive structure.

GET /v1/preflight/{module}

Attestation in four steps.

01
Register your agent

Register your agent module with a public Ed25519 key. Self-serve for open namespaces. Reserved namespaces (health/, finance/, legal/) require admin approval. One registration, permanent behavioral identity.

POST /v1/register
→ module_id: "health/interactions"
→ api_key: mk_live_[64 hex chars]
02
Attest every action

Every agent request generates a signed DAA — a cryptographic record of what happened, when, with what inputs and outputs. Batch submission keeps latency off the critical path. Background worker. No overhead on your hot path.

POST /v1/daa:batch
→ 25 records per batch
→ Ed25519 signed manifest
→ Merkle root computed
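The Merkle root over a batch can be sketched as pairwise SHA-256 hashing. The tree rules here (odd node promoted unchanged) are a common convention, assumed rather than confirmed for Miskir's production trees:

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Pairwise SHA-256 Merkle root. An odd node at any level is
    carried up unchanged -- one of several common conventions."""
    if not leaves:
        raise ValueError("empty batch")
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        nxt = [
            hashlib.sha256(level[i] + level[i + 1]).digest()
            for i in range(0, len(level) - 1, 2)
        ]
        if len(level) % 2:  # odd node promoted to the next level
            nxt.append(level[-1])
        level = nxt
    return level[0]

batch = [f"daa-record-{i}".encode() for i in range(25)]  # one batch of 25
root = merkle_root(batch)
```

A single 32-byte root commits to the whole batch, which is what makes daily anchoring cheap regardless of batch size.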
03
Bitcoin anchoring

Daily Merkle roots anchored to Bitcoin blocks via Witness Protocols. The behavioral record is tamper-evident and independent of any central authority, any cloud provider, any single point of failure. You can prove what an agent did — and when — forever.

23:50 UTC daily
→ Merkle root computed
→ Ed25519 signed + submitted
→ Anchored to Bitcoin block
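Tamper-evidence rests on inclusion proofs: anyone holding a record and the anchored root can recompute the path between them. The proof layout below (sibling digest plus a left/right side marker) is an assumption, not Miskir's wire format:

```python
import hashlib

def verify_inclusion(leaf: bytes, proof: list[tuple[bytes, str]], root: bytes) -> bool:
    """Recompute the hash path from a leaf to the anchored root.
    `proof` is a list of (sibling_digest, side) pairs, side in {"L", "R"}."""
    node = hashlib.sha256(leaf).digest()
    for sibling, side in proof:
        pair = sibling + node if side == "L" else node + sibling
        node = hashlib.sha256(pair).digest()
    return node == root

# Two-leaf example: prove rec-a sits under the anchored root.
left = hashlib.sha256(b"rec-a").digest()
right = hashlib.sha256(b"rec-b").digest()
anchored = hashlib.sha256(left + right).digest()
ok = verify_inclusion(b"rec-a", [(right, "R")], anchored)
```

Because the root lives in a Bitcoin block, rewriting any record after the fact would require forging the proof path against an immutable commitment.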
04
Query trust at runtime

Before any agent executes a high-stakes action, the receiving system queries CertQuery. The response returns the certification status, active, in-progress, or revoked, along with the behavioral evidence that produced it. Ed25519 signed. Independently verifiable.

POST /v1/certquery
{"module_id": "health/interactions", "standard": "HS1"}
→ {"status": "active", "siq": 0.91, "signed": true}
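A receiving system can gate on the response shape shown above. The `min_siq` threshold is an assumption for illustration; production policy would set its own:

```python
# Hypothetical gate over the CertQuery response
# ({"status": ..., "siq": ..., "signed": ...}).
def allow_action(resp: dict, min_siq: float = 0.85) -> bool:
    """Permit the high-stakes action only for a signed, active
    certification whose reliability score clears the threshold."""
    return (
        resp.get("status") == "active"
        and resp.get("signed") is True
        and float(resp.get("siq", 0.0)) >= min_siq
    )

allow_action({"status": "active", "siq": 0.91, "signed": True})   # permits
allow_action({"status": "revoked", "siq": 0.91, "signed": True})  # blocks
```

Note the gate fails closed: a missing field, an unsigned response, or a sub-threshold score all deny the action.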

Not a whitepaper.
A running system.

113K+
DAAs in production corpus
~2K
New attestations per day, organic
Daily
Bitcoin-anchored Merkle roots
Live
Independent verification endpoint
Live attestation feed
daa_8f2a91c4
health/interactions · check · 142ms
✓ ATTESTED
daa_3e7b02f1
health/interactions · check · 98ms
✓ ATTESTED
daa_a1c94d88
langchain/truthstack · check · 203ms
✓ ATTESTED
daa_7f3312bc
health/interactions · check · 87ms
✓ ATTESTED
daa_c88e4a71
freespeech/benchmark · check · 311ms
✓ ATTESTED
Verify any attestation independently:
api.miskir.com/v1/verify/{daa_id}

Real agents. Real attestations.
Real records.

Miskir is not a whitepaper. These are production deployments attesting today.

Health Information
A health agent attesting every interaction

Every query to a supplement-drug interaction intelligence platform generates a signed DAA under the health/interactions namespace. More than 113,000 attestations, every one independently verifiable. The behavioral record grows at ~2,000 interactions per day.

Free Speech Research
AI behavior in contested speech — attested and anchored

A free speech research benchmark runs 64 frozen prompts covering contested political speech against frontier AI models. Every output is SHA-256 hashed, Ed25519 signed, submitted as a DAA to Miskir, and anchored to Bitcoin daily via Witness Protocols. The finding that motivated this: 100% of outputs changed hash-to-hash between runs 27 hours apart — surface wording variance, not substantive position change. Hash diffing alone cannot characterize behavioral change. Miskir's semantic attestation layer can.
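The limitation is easy to demonstrate: a byte-level hash flips on any surface change, even when the substance is identical. The whitespace normalization below is a deliberately toy stand-in for a semantic attestation layer, used only to show the gap:

```python
import hashlib

def sha256_hex(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()

# Two runs with surface wording variance but the same substantive position.
run_a = "Aspirin may interact with warfarin; consult a clinician."
run_b = "Aspirin  may interact with warfarin;\nconsult a clinician."

# Byte-level diffing flags the pair as changed...
hashes_differ = sha256_hex(run_a) != sha256_hex(run_b)

# ...while even a trivial normalization (a stand-in for semantic
# comparison) shows the content is the same.
def normalize(text: str) -> str:
    return " ".join(text.split())

substantively_same = normalize(run_a) == normalize(run_b)
```

This is the 100%-hash-change finding in miniature: hash diffing measures bytes, not behavior, which is why a semantic layer is needed to characterize drift.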

Self-Modifying Agents
The attestation problem that compounds with capability

Autonomous AI trading agents that improve their own prompts through market feedback, train on different market regimes, and spawn new agents when they detect knowledge gaps represent the hardest attestation problem: the agent that was certified yesterday is not the same agent running today. Miskir's temporal drift detection records every behavioral state change — signed, timestamped, Bitcoin-anchored. The modification history becomes the audit trail.

The Attack Surface
The threat is machine-evolved. The attestation must keep pace.

In March 2026, an AI agent running on Claude Code autonomously discovered adversarial attacks achieving 100% success against Meta's best safety system — outperforming every human-designed method by 4x. Static certification is insufficient against a threat surface that self-improves. Miskir's longitudinal behavioral record is the only architecture that detects drift as it happens — not after the incident.

Infrastructure for builders
and operators.

Agent Builders
Make your agent trustworthy by default

Register your module, integrate the middleware, and every action your agent takes builds a verifiable behavioral record. Trust becomes a measurable property of your agent — not a claim about it.

Platform Operators
Gate on behavioral reliability, not promises

Query CertQuery before allowing an agent to act on your platform. Route reliable agents. Quarantine unstable ones. Selection pressure on agent behavior at the infrastructure level — before the incident, not after.

Enterprise
Compliance-grade agent audit trails

Every agent action your organization takes is cryptographically signed and Bitcoin-anchored. Tamper-evident behavioral records for regulatory, legal, and governance requirements, verifiable independently of any cloud provider.

Researchers
The longitudinal agent behavior corpus

A growing, signed, Bitcoin-anchored dataset of agent behavioral patterns across verticals. The epidemiological record of how AI agents actually perform in production — available nowhere else.

The agentic economy needs a trust layer.

Miskir is live, processing production attestations daily, Bitcoin-anchored, and open for integration. Start with the API or reach out about enterprise access.