
5 posts tagged with "whitepaper"


Collateral Markets and Programmable Escrow

How much collateral do you put up when the computational cost isn't known ahead of time? The collateral multiplier and a series of credible commitments replace case-specific escrow with one general pattern.

February 20, 2026 · Levi Rybalov

Eighteen Months and a Million Dollars: Part 5

Excerpts from the Whitepaper

Key Takeaways

  • Collateralization enables trustless protocols by leveraging credible commitments
  • The core problem for batch jobs: how much collateral do you put up when the computational cost isn't known ahead of time?
  • The collateral multiplier: commit to a multiplier M, deposit (price × M) after job completion
  • A series of credible commitments (timeout, payment, cheating) replaces case-specific escrow with one general pattern
  • Collateral markets let agents negotiate over collateral itself, not just jobs
  • The combination of credible commitments and collateral markets forms the foundation of generic, machine-actionable marketplaces

The problem with "just put up collateral"

Collateral is the teeth behind trust. Verification tells you what happened, and collateral makes it matter.

In our last post, we looked at verification: how do you trust computation done by a machine you don't control? We covered three categories of verifiable computing, the consensus spectrum, and the honest answer that every approach involves tradeoffs.

But verification doesn't exist in a vacuum. Verification needs enforcement. If a compute node returns a wrong result, something must happen. In most protocols, that something is collateral slashing: the node put up a deposit, and the deposit gets taken away if the node cheats.

Collateralization facilitates trustless protocols by leveraging the credible commitments enabled by blockchains. It's used whenever protocol actors want an incentive for counterparties to behave honestly. In blockchains secured by Proof-of-Stake, nodes securing the network deposit collateral to ensure the blocks they propose are legitimate. In compute marketplaces, a client that wants assurance some task will be done correctly may require the compute node to deposit collateral. Likewise, the compute node may require the client to deposit collateral ensuring it will be paid if it does the work correctly.

In this post, we'll do three things: (1) show why fixed upfront collateral is an unsatisfactory solution when costs are unknown a priori, (2) introduce the collateral multiplier as a market parameter, and (3) show how multiple risk types become a series of credible commitments expressed in the same escrow abstraction.

When the cost is unknown

Imagine you're a compute node. A potential client wants to run a machine learning training job. How much collateral should you put up? The job might converge in ten minutes. It might run for six hours. You won't know until it's done.

Note: This framing is about batch-style jobs that settle on completion. Time-metered payments are often more straightforward, because the protocol can settle periodically instead of estimating total cost upfront.

A major open problem in verification-via-replication protocols is exactly this: how much collateral to require for a job whose computational cost is not known ahead of time.

This is more common than you'd think. Running a machine learning training job might take minutes or hours depending on convergence. A scientific simulation might complete quickly or hit edge cases that extend runtime. Even a straightforward data processing pipeline can encounter unexpected data volumes.

If you require fixed upfront collateral, you face a dilemma:

  • Set it too low, and nodes have insufficient incentive to complete the job honestly: the reward from cheating exceeds the cost of losing their deposit.
  • Set it too high, and you exclude smaller nodes from participating, concentrating the market among well-capitalized operators.

Neither option leads to a healthy marketplace. Fixed upfront collateral doesn't work when computational costs are variable.

The collateral multiplier

This problem can be solved with collateral markets.

Rather than depositing a fixed amount of collateral ahead of time (or topping up opportunistically), a compute node commits to a collateral multiplier at the time of deal agreement. After a job is completed, the compute node deposits into escrow the amount it will charge the client times the collateral multiplier.

In other words: collateral becomes a ratio, not a guess. If the job turns out to be cheap, the absolute collateral is low. If it's expensive, the collateral scales proportionally.

The collateral multiplier solves the issue of knowing ahead of time how much collateral is required. It enables at least primitive forms of optimistic verification for programs where the computational cost or runtime is not known in advance. And it creates a market for collateral over which agents can negotiate.
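The settlement rule is simple enough to state in a few lines. Here is an illustrative sketch, assuming a price and multiplier agreed at deal time; the function name and numbers are hypothetical, not the protocol's actual interface:

```python
# Illustrative sketch of the collateral-multiplier settlement rule.
# Names and numbers are hypothetical, not the protocol's actual API.

def required_collateral(job_price: float, multiplier: float) -> float:
    """Collateral owed after job completion: final price times the agreed multiplier."""
    return job_price * multiplier

# The node commits to M = 2.0 at deal time, before the cost is known.
M = 2.0

cheap_job = required_collateral(job_price=5.0, multiplier=M)    # 10.0
long_job  = required_collateral(job_price=300.0, multiplier=M)  # 600.0
```

Because the deposit is computed from the realized price rather than an estimate, the same commitment covers a ten-minute job and a six-hour job without renegotiation.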

That last point is significant. The multiplier isn't fixed by the protocol. It's a parameter that buyer and seller negotiate. A client who needs strong assurance might demand a high multiplier. A node confident in its reliability might offer a low one. The multiplier becomes a signal of trust, and the market determines the price of that trust.

A series of credible commitments

The collateral multiplier handles one problem: unknown costs. But real marketplace interactions involve multiple types of risk, each requiring its own form of collateral.

Consider a verification-via-replication protocol. There are at least three types of collateral at play:

  • Timeout collateral ensures the node completes the job within an agreed timeframe. It's deposited when the deal begins and refunded if the job finishes on time. If the node fails to deliver, the timeout collateral is slashed.

  • Payment collateral ensures the client will pay for completed work. The client deposits funds that transfer to the node upon successful verification. If the client disputes a correct result, arbitration can resolve the claim.

  • Cheating collateral ensures the node doesn't return fabricated results. If verification reveals that the node cheated, this collateral is slashed and partially distributed to the verifier as a bounty.

Each of these collaterals is deposited at particular times, and is refunded or slashed at other times, based on events that happen on-chain. Each deal is not "one escrow", but a small program: multiple deposits, multiple conditions, and multiple possible outcomes. These sets of rules can be abstracted to a series of credible commitments, where participating parties each deposit collateral, and that collateral moves based on events.
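One way to see the "small program" framing is to write the three collateral types as data plus a settlement rule. This is a conceptual sketch with illustrative event and outcome names, not the actual contract interface:

```python
# Hypothetical sketch of a deal as "a small program": several deposits,
# each resolved differently depending on which on-chain event fires.
from dataclasses import dataclass

@dataclass
class Commitment:
    depositor: str
    amount: float
    outcomes: dict  # event name -> where the deposit goes

deal = [
    # Timeout collateral: refunded on timely delivery, slashed on timeout.
    Commitment("node", 50.0,
               {"finished_on_time": "refund_node", "timeout": "slash_to_client"}),
    # Payment collateral: paid to the node on verification, refunded if cancelled.
    Commitment("client", 100.0,
               {"result_verified": "pay_node", "deal_cancelled": "refund_client"}),
    # Cheating collateral: refunded on verification, slashed (with bounty) on fraud.
    Commitment("node", 200.0,
               {"result_verified": "refund_node", "cheating_detected": "slash_with_bounty"}),
]

def settle(c: Commitment, event: str) -> str:
    """Resolve one deposit when an event fires; unrelated events leave it held."""
    return c.outcomes.get(event, "hold")
```

The same `settle` rule handles all three deposits; only the data differs, which is the point of the abstraction.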

This is where Alkahest comes in. Rather than building custom collateral logic for each of these types, Alkahest's Statement and Validator system expresses all of them. Each collateral type is a Statement with its own conditions and validators. The timeout validator checks deadlines. The payment validator checks job completion. The cheating validator checks verification results.

Same contracts, different conditions. Same infrastructure, different markets.

Why this matters for marketplace design

Case-specific collateralization architectures are an outdated approach. Every protocol that builds its own collateral system is rebuilding the same patterns: deposit, condition, release or slash. The differences are in the specifics, not the structure.

The combination of a series of credible commitments and collateral markets dramatically enhances what's possible. You can build a compute marketplace, an energy marketplace, and a data marketplace on the same collateral infrastructure. Each uses different validators, but the underlying pattern is the same.

This is the power of Alkahest applied to a concrete problem. In Post 3, we described Alkahest as a unified abstraction for exchange and commitment. Here, you can see what that abstraction gets you in practice: a single escrow system that handles timeout penalties, payment guarantees, cheating prevention, and any other collateral type a marketplace might need, all through programmable conditions.

What this means for you

If you're building a marketplace that requires any form of collateral, you don't need to design your own escrow state machine. Alkahest's escrow and arbitration can express your collateral types. Define the deposit conditions, the verification events, and the outcome rules. The infrastructure handles the rest.

If you're building a compute protocol specifically, the collateral multiplier pattern solves the variable-cost problem. Nodes commit to a multiplier. Jobs settle based on actual cost. The market sets the price of trust through the multiplier negotiation.

If you're thinking about market design more broadly, collateral markets represent a recursive pattern. Agents negotiate over jobs. Jobs require collateral. Collateral levels are negotiated. This creates a market within a market, which is exactly the kind of composability that generic primitives enable.

Next in the series

We've built up the mechanism layer: Alkahest for exchange and commitment, verification for trust, collateral markets for economic guarantees. But how do you test these mechanisms against adversaries?

In our next post, we'll look at Arkhai's approach to adversarial design: training agents to cheat so we can stop them. Game-theoretic white-hat hacking, multi-agent inverse reinforcement learning, and the honest acknowledgment that convergence isn't guaranteed.


Arkhai is building machine-actionable marketplace infrastructure. If you're working on problems that intersect with compute markets, agent coordination, or decentralized infrastructure, we'd like to hear from you.

Trust Without Trustees

How do you trust computation done by a machine you don't control? Three categories of verifiable computing, the consensus spectrum, and the honest answer that every approach involves tradeoffs.

February 18, 2026 · Levi Rybalov

Eighteen Months and a Million Dollars: Part 4

Excerpts from the Whitepaper

Key Takeaways

  • A central question in decentralized compute: how do you trust computation done by a machine you don't control?
  • Three categories of verifiable computing: cryptographic methods, secure enclaves (TEEs), and replication
  • Local vs global consensus is a spectrum, not a binary choice
  • In replication-based verification, local consensus can, in principle, drive the number of replications needed down to 1
  • Approximate agreement can handle non-deterministic computations
  • Every verification method is some combination of slow, inefficient, expensive, or insecure: the honest answer is tradeoffs

Verifying compute you didn't run

Outsourcing compute is easy. Outsourcing trust is not.

In our last post, we introduced Alkahest and the two-primitive architecture. Alkahest handles exchange and commitment. Agent-to-agent negotiation handles matchmaking. But there's a question we've been circling since the first post in this series: how do you trust computation done by a machine you don't control?

In a trustless, permissionless distributed computing marketplace, verifiability becomes central. How do clients know that they are getting the correct results returned to them, if not through the massive replication of Byzantine Fault Tolerance traditionally offered by blockchains?

This is the subject of a subfield of computer science known as verifiable computing, and it's arguably one of the hardest problems in decentralized infrastructure. The constraint is clear: verifying the result should have less overhead than computing the result in the first place. If verification costs more than the computation, there's no point outsourcing work you could run yourself.

Different applications need different trust levels. A financial transaction might require cryptographic certainty. A scientific simulation might accept statistical confidence. A rendering job might only need visual inspection.

And many customers don't want verification at all; they want a counterparty they trust. This is one key reason non-Web3 neoclouds have been able to win: they start with a narrow set of customer needs and rely on reputation, contracts, and recourse instead of "trustless" verification.

One-size-fits-all verification doesn't exist, and pretending otherwise is how protocols end up building something nobody can actually use.

In what follows, we'll break verification into three broad categories, then frame consensus as a spectrum (not a binary), then look at what happens when computations aren't deterministic, and finally be explicit about the tradeoffs.

Three categories

There are three main categories of verifiable computing: cryptographic methods, secure enclaves (TEEs), and replication-based methods.

First, cryptographic methods. These rely on mathematics for their security. Zero-knowledge proofs let a prover convince a verifier that a computation was done correctly without revealing private inputs. Multi-party computation allows joint computation without exposing individual inputs. Fully homomorphic encryption enables computation over encrypted data. These methods provide the strongest guarantees, but they are also among the slowest and most expensive. For many practical workloads today, cryptographic verification costs far more than the computation itself.

Second, secure enclaves (TEEs). These are isolated environments where code and data are insulated from the rest of the system, including the operating system. The goal is to maintain both confidentiality and integrity. TEEs have a promising future, especially as exploits continue to be patched, but the known exploits still limit where they can be applied.

Third, replication-based methods (often called "optimistic" verification). These rely on recomputing the work and checking whether results match. This is the simplest approach to understand and implement, but it comes with extra computational cost. With proper incentive design, the overhead of recomputing can be reduced while still enabling network scaling. Replication often requires game-theoretic mechanisms like collateral slashing and reputation layers to counter cheating.

The consensus spectrum

The verification question is tied to a deeper question about consensus: how many entities need to agree that a computation was done correctly?

At one end of the spectrum is global consensus. Blockchains traditionally achieve consensus through massive replication across all participating nodes. This provides strong guarantees: Byzantine Fault Tolerance ensures the network reaches agreement even when some nodes act maliciously. But this level of replication becomes too costly when the corresponding level of security is not needed.

At the other end is local consensus, which at a minimum only requires a single agent to be convinced of the state. For a two-sided marketplace, only the client really needs to be convinced that a computation was done correctly. But these computations are not done in isolation. The interrelation between clients choosing nodes, and the need for new clients to arrive and be assured that cheating is disincentivized, implies the creation of a global game that, while not requiring global consensus in the traditional sense, emulates it.

Between these extremes are hybrid designs. Local versus global consensus is not a binary choice but a spectrum. The middle involves agreement among a subset of nodes, perhaps within a specific region or cluster, which can lead to faster and more efficient decision-making through reduced communication overhead. Systems can balance speed, efficiency, and security based on their specific requirements.

One of the primary benefits of local consensus, in the context of replication-based verification, is that market incentives can, in principle, be structured to drive the total number of replications needed for verifiability down to one. This minimizes computational cost, financial cost, and energy consumed. Getting to a single replication is the target: it means every unit of compute goes toward useful work, not redundant checking.
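A back-of-envelope version of this incentive argument: under random audits, cheating has negative expected value whenever the expected slash exceeds the gain from skipping the work. All numbers below are illustrative assumptions, not protocol parameters:

```python
# Back-of-envelope deterrence check for audit-based replication.
# All numbers are illustrative assumptions, not protocol parameters.

def cheating_is_unprofitable(audit_probability: float,
                             slashed_collateral: float,
                             gain_from_cheating: float) -> bool:
    """Cheating has negative expected value when the expected slash
    exceeds the gain from skipping the work."""
    return audit_probability * slashed_collateral > gain_from_cheating

# With a 10% audit rate and collateral of 500, any cheat worth less
# than 50 is deterred, while the expected replication overhead is
# only 1.1x the base work rather than full re-execution every time.
expected_replications = 1 + 0.10  # one execution plus occasional audits
```

As audit probability falls, the required collateral rises in proportion, which is one reason the collateral multiplier and the replication factor end up being negotiated together.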

When computations aren't deterministic

Note: There has been significant research and practical progress on this topic since the whitepaper was completed in summer 2024. This section primarily summarizes the whitepaper's framing.

Replication-based verification sounds straightforward: run the job twice, compare results. But this assumes determinism. If the same program on two different machines produces exactly the same output, the comparison is trivial.

What happens when the computations are not deterministic? Neural network inference on different hardware can produce different results. Floating point arithmetic varies across architectures. Parallel workloads with race conditions produce different orderings.

Once outputs differ, the real question becomes: what does "match" mean?

One option is approximate agreement: if the verification yields a result close enough to the original, the result is considered valid. BOINC (the Berkeley Open Infrastructure for Network Computing) has been using approximate agreement for a long time. The approach works, but it requires an application-specific distance measure, which introduces developer overhead.

Approximate agreement might apply to some neural networks. In large language models, it might be possible to measure the distance between outputs before decoding. But slightly different outputs before decoding can still lead to large differences after decoding, which leaves room for abuse.
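As a minimal sketch of what approximate agreement looks like in code: both the distance measure (plain Euclidean here) and the tolerance are application-specific assumptions that a developer must choose per workload:

```python
# Minimal sketch of approximate agreement with an application-specific
# distance measure and tolerance; both are assumptions chosen per workload.
import math

def approx_agree(a: list, b: list, tolerance: float) -> bool:
    """Accept the replica's result if it lies within `tolerance` of the
    original under Euclidean distance."""
    distance = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return distance <= tolerance

original = [0.12, 0.87, 0.55]
replica  = [0.12, 0.86, 0.55]   # small floating-point drift across hardware
accepted = approx_agree(original, replica, tolerance=0.05)
```

The developer overhead mentioned above lives entirely in choosing that distance function and tolerance: too loose and cheating slips through, too tight and honest hardware variance triggers false disputes.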

Another option is objective-function-based evaluation. Proof-of-Work is a classic example: solving a cryptographic puzzle is hard, but verifying the solution is cheap. The proof is in the result itself. Similar approaches work for problems where the answer is easy to check even if it's hard to find.
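The Proof-of-Work asymmetry can be shown in a few lines: finding a valid nonce takes many hash attempts, while checking one takes a single hash. This is a toy sketch of the general pattern, not any particular chain's puzzle:

```python
# Sketch of objective-function-based evaluation in the Proof-of-Work style:
# finding a valid nonce is expensive, checking one is a single hash.
import hashlib

def is_valid(data: bytes, nonce: int, difficulty: int = 2) -> bool:
    """Cheap check: does the hash start with `difficulty` zero bytes?"""
    digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
    return digest[:difficulty] == b"\x00" * difficulty

def solve(data: bytes, difficulty: int = 2) -> int:
    """Expensive search: try nonces until one verifies."""
    nonce = 0
    while not is_valid(data, nonce, difficulty):
        nonce += 1
    return nonce

nonce = solve(b"job output")            # hard to find...
assert is_valid(b"job output", nonce)   # ...trivial to verify
```

Any problem with this find-hard, check-cheap structure can be verified without replication at all.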

And sometimes the right answer is to trust the compute node and not verify at all. This doesn't count as verifiable computing, by definition; however, reputation systems have decades of research behind them, and for many applications the trust tradeoff is acceptable.

Honest about limitations

Every lock can be picked. Every verification approach has weaknesses.

  • Cryptographic methods for verifiable computing are slow, and for many workloads will almost certainly remain far more expensive than bare-metal execution.
  • Secure enclaves have exploits, and while the future is promising as patches continue, they are not ready for highly sensitive applications.
  • Optimistic verification suffers from issues of determinism, and even when those are solved or approximated away, there remain deep issues of collusion.

Verifiable computing options are, at present, some combination of slow, inefficient, expensive, or insecure.

This isn't a problem to be solved by picking the "right" method. Every method trades one thing for another. The most flexible approach is to support all three categories and let applications choose the verification that matches their risk tolerance, performance requirements, and budget.

Arkhai's design enables the incorporation of all of these methods. Alkahest's programmable conditions don't care whether the validator is checking a zero-knowledge proof, a TEE attestation, or the output of a replication check. The verification layer is modular. Swap in different validators for different trust levels.

The shortcomings of all of these approaches will decline with time and increasing use. Cryptographic methods are getting faster. TEE exploits are getting patched. Replication mechanisms are getting smarter about collusion. But waiting for a perfect solution means shipping nothing. The pragmatic choice is to build the system that supports all approaches and improves as each approach matures.

What this means for you

If you're building on decentralized compute, the verification question will define your architecture. Don't pick a verification method in the abstract. Start with your application's requirements. How sensitive is the data? How expensive is the computation? What's the cost of a wrong result?

One coarse way to think about it:

  • For low-stakes workloads like rendering, batch processing, or data transformation, replication with reputation may be sufficient.
  • For moderate-stakes workloads like scientific simulation or model training, approximate agreement or TEE-backed execution can provide stronger guarantees.
  • For high-stakes workloads like financial computation or medical inference, cryptographic methods may be worth the overhead.

These aren't mutually exclusive. A single marketplace can support all three categories, with applications selecting the trust level they need. This is what Alkahest's modular validator system enables.

Next in the series

We've covered how Alkahest handles exchange and commitment, and how different verification methods handle trust. But there's a practical problem we haven't addressed: collateral.

When a compute node takes a job, how much collateral should it deposit? What if the computational cost isn't known ahead of time? In our next post, we'll introduce collateral markets and show how programmable escrow handles the unknown.


Arkhai is building machine-actionable marketplace infrastructure. If you're working on problems that intersect with compute markets, agent coordination, or decentralized infrastructure, we'd like to hear from you.

Two Primitives for Machine-Native Markets

In implementation, the whitepaper's three primitives became two: Alkahest for escrow and arbitration, and agent-to-agent negotiation. Here's how that evolution happened and what it enables.

February 16, 2026 · Levi Rybalov

Eighteen Months and a Million Dollars: Part 3

Excerpts from the Whitepaper

Key Takeaways

  • In implementation, the whitepaper's three primitives effectively became two: Alkahest (escrow + arbitration) and agent-to-agent negotiation
  • Bundle exchange collapsed into credible commitments: lock value, validate conditions, then release (or escalate)
  • Alkahest expresses deals as Statements, validators, and escrowed value
  • Off-chain solvers, on-chain auctions, and agent-to-agent negotiation differ mainly in where matching happens and what it costs
  • We bias toward agent-to-agent negotiation because deals have terms, not just price, and we want to avoid match-making chokepoints
  • Open source, no fees, token-agnostic: exchange primitives as a public good

From three primitives to two

Architecture is destiny. The primitives you choose determine what you can build.

In this post, we'll do three things: (1) explain what happened when we tried to implement the whitepaper's primitive set, (2) define Alkahest at the interface level we use throughout the series, and (3) outline three market-making patterns, with an emphasis on why we bias toward agent-to-agent negotiation.

In our first post, we described three primitives at the core of Arkhai's architecture: the exchange of arbitrary bundles of assets, agreements modeled by a series of credible commitments, and agent-to-agent negotiation. In our second, we looked at why many existing compute marketplaces fell short of expectations. Now let's look at what we built, and what we learned along the way.

On paper, "bundle exchange" and "credible commitments" look distinct. One moves assets. The other enforces trust: collateral deposited, conditions evaluated, outcomes enforced.

In code, bundle exchange kept collapsing into the credible-commitments machinery. An exchange is just escrow plus conditions: parties lock value, a validator checks whatever needs to be checked, and funds move. When "whatever needs to be checked" isn't fully on-chain, that validator becomes an arbiter and the deal becomes an arbitration problem.

Once we had an escrow-and-arbitration system that could express a series of credible commitments, the bundle-exchange primitive wasn't really a primitive anymore. It was one configuration of the same system.

That left two primitives that mattered in practice: (1) a generalized escrow/arbitration layer for exchange-with-commitment (Alkahest), and (2) a way for agents to find and agree on deals (agent-to-agent negotiation).

The case against specialization

This realization didn't come from theory. It came from staring at the collateralization patterns across different marketplace types.

Across the protocols we studied, the structure kept reducing to the same simple state machine:

  1. Parties deposit value.
  2. Conditions are evaluated.
  3. Value is released, refunded, slashed, or escalated based on outcomes.

The specifics vary: in compute marketplaces, the condition might be job completion plus verification; in energy markets, delivery confirmation; in a simple trade, mutual consent. But the structure stays the same. The degrees of freedom are: what is deposited, what is checked, and where value flows.
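The three-step structure can be written down as a tiny state machine. The state names here are illustrative labels for the pattern, not the contracts' actual interface:

```python
# The deposit -> evaluate -> settle pattern as a tiny state machine;
# names are illustrative, not the contracts' actual interface.
from enum import Enum, auto

class DealState(Enum):
    DEPOSITED = auto()   # 1. parties have locked value
    RELEASED  = auto()   # 3a. value paid out to the counterparty
    REFUNDED  = auto()   # 3b. value returned to the depositor
    ESCALATED = auto()   # 3c. sent to arbitration

def evaluate(conditions_met: bool, disputed: bool) -> DealState:
    """Step 2: map the checked conditions to an outcome."""
    if disputed:
        return DealState.ESCALATED
    return DealState.RELEASED if conditions_met else DealState.REFUNDED
```

Everything a given marketplace adds lives inside `conditions_met`: job verification, delivery confirmation, or mutual consent all plug into the same three steps.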

Many protocols require collateralization with different types of collateral deposited, withdrawn, or slashed depending on the outcomes of some events. These sets of rules can be abstracted to a series of credible commitments, where participating parties each deposit collateral, and that collateral moves based on events.

Once we saw this, the evolution from three primitives to two became hard to avoid. Bundle exchange is a special case of credible commitments, and credible commitments collapse into one programmable escrow-and-arbitration system.

Introducing Alkahest

This unified abstraction is Alkahest: programmable escrow and arbitration with arbitrary conditions.

In the repository, we describe it plainly: a contract library and SDKs for validated peer-to-peer escrow. That's the interface we care about.

The name comes from alchemy. The alkahest was the theorized universal solvent, a substance that could dissolve anything. It's an appropriate metaphor: Alkahest dissolves bespoke exchange logic into a general-purpose primitive, and lets you recompose it into many market types. Solve et coagula.

Concretely, there are three moving parts:

  • Statement: an obligation. One party commits value, specifying who benefits, what amount is held, and under what conditions it should be released. Statements are the fundamental unit of commitment in the system.

  • Validator: the mechanism that determines when conditions are satisfied. A validator might check for manual confirmation from a designated party, data from an oracle feed, the passage of time, or any composable combination. When conditions can't be resolved automatically, validators can escalate to arbitration.

  • Escrow: the component that holds value until validators confirm conditions are met, then automatically releases funds to the appropriate party. If conditions aren't met, escrow can refund, slash, or follow whatever path the parties programmed at the start.

This is credible commitments made concrete. And bundle exchange falls out as a special case: parties lock assets, validators check conditions, and escrow releases value atomically.
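A stripped-down model of the three moving parts helps make the relationships concrete. To be clear, this is a conceptual sketch with invented names, not Alkahest's actual contract or SDK API:

```python
# Conceptual model of Statement / Validator / Escrow.
# Invented names and signatures; NOT Alkahest's actual contract or SDK API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Statement:
    """An obligation: who deposits, who benefits, how much, on what condition."""
    depositor: str
    beneficiary: str
    amount: float
    condition: str           # label the validator knows how to check

# A validator maps (statement, observed events) to "conditions met?".
Validator = Callable[[Statement, dict], bool]

def escrow_settle(stmt: Statement, validator: Validator, events: dict) -> str:
    """Hold value until the validator confirms the condition, then release;
    otherwise refund (a real system could also slash or escalate here)."""
    if validator(stmt, events):
        return f"release {stmt.amount} to {stmt.beneficiary}"
    return f"refund {stmt.amount} to {stmt.depositor}"

# Example validator: condition met when the named event was observed on-chain.
event_validator: Validator = lambda s, ev: ev.get(s.condition, False)

stmt = Statement("client", "node", 100.0, condition="job_verified")
```

Swapping `event_validator` for an oracle check, a deadline check, or an arbitration escalation changes the market without changing `escrow_settle`.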

What two primitives enable

Alkahest for exchange-with-commitment (and arbitration), and agent-to-agent negotiation for matchmaking. Together, these two primitives enable a surprising range of applications.

A few immediate consequences follow:

  • The three main categories of verifiable computing can all plug into Alkahest's validator layer: check whether the computation was done correctly, release payment if yes, and slash collateral if no.

  • Storage and bandwidth can be incorporated into the same marketplace structure as compute. They are additional assets with their own validators and conditions, not separate "side protocols" developers have to stitch together.

  • Clearinghouse-style settlement becomes expressible. A party can act as a central counterparty, net positions across many trades, and settle through the same escrow and arbitration layer.

  • Peer-to-peer bartering is just another configuration. Agents can swap arbitrary bundles with conditional release, without a trusted intermediary.

  • Natural language agreements become viable. When parts of a deal can't be fully machine-verified, you can still make them machine-actionable by precommitting to arbitration (human, committee, or even AI-based), rather than pretending every condition is deterministic.

  • Agents interact with a consistent interface across market types, which makes it easier to reuse agent scaffolding (state, actions, rewards) when moving between compute, energy, storage, bandwidth, or barter.

  • Most marketplace types reduce to programming different conditions into the same contracts.

This is the power of generalization. Rather than building a different escrow for compute, another for energy, another for peer-to-peer trading, you build one escrow that can express all of them.

Three ways to make a market

With Alkahest handling the exchange layer, the second primitive handles how parties find each other.

The whitepaper outlines three broad methods of market-making: off-chain solvers, on-chain auctions, and agent-to-agent negotiation. Each comes with tradeoffs in trust models, efficiency, and scalability. The main difference is where matching happens.

First, off-chain solvers. Nodes send bids and asks to a solver layer that proposes matches, which are then settled on-chain. This can be fast, and implemented as a competitive set of solvers rather than a single privileged matcher. The tradeoff is that you introduce an off-chain coordination layer that can become a chokepoint for liveness, censorship resistance, and integration complexity. You also introduce a self-interested party that needs a business model. In practice, that often means extracting value from matching (fees, spreads, preferential routing), which can skew incentives and concentrate power around whoever controls order flow.

Second, on-chain auctions. Bids and asks go on-chain, and the matching rule executes under consensus. This provides strong transparency and auditability. The tradeoff is that matching inherits blockchain latency and gas costs, and strategies become more legible to everyone watching the chain.

Third, agent-to-agent negotiation. Nodes propose matches to each other directly. They can accept, reject, or counter-propose until a conclusion is reached. This avoids a central matching engine, but it requires more sophisticated agents and can take more messages to converge. The upside is that negotiation is a flexible coordination substrate. With the same underlying infrastructure, you can represent RFQs, posted-price deals, barter, and even auction-like dynamics by changing agent policies, without rewriting settlement. And because settlement treats resources as assets with conditions, the same negotiation machinery can carry across asset types like compute, storage, bandwidth, or energy, including bundles that mix them.
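The counter-proposal loop above can be sketched as a toy concession policy. The 10%-of-the-gap rule, the tolerance, and all numbers are illustrative assumptions, and real deals negotiate terms beyond price:

```python
# Toy sketch of agent-to-agent price negotiation: counter until offers
# are close enough, then settle at the midpoint. Policies and numbers
# are illustrative assumptions, not a real negotiation protocol.

def negotiate(client_limit: float, node_floor: float,
              opening_bid: float, opening_ask: float,
              tolerance: float = 5.0, max_rounds: int = 20):
    """Each round, both sides concede 10% of the remaining gap.
    Returns an agreed price, or None if no deal within max_rounds."""
    bid, ask = opening_bid, opening_ask
    for _ in range(max_rounds):
        if ask - bid <= tolerance:          # close enough: settle at midpoint
            return round((bid + ask) / 2, 2)
        bid = min(client_limit, bid + 0.1 * (ask - bid))  # client counters up
        ask = max(node_floor, ask - 0.1 * (ask - bid))    # node counters down
    return None                              # walk away: no agreement

deal_price = negotiate(client_limit=120.0, node_floor=80.0,
                       opening_bid=90.0, opening_ask=110.0)
```

Changing the concession policy turns the same loop into RFQ-like, posted-price, or auction-like behavior, which is the flexibility argument made above: agent policies vary while settlement stays fixed.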

We built around negotiation as the default because:

  • DCN deals are heterogeneous; terms matter as much as price (deadlines, verification, collateral, escalation paths).
  • We want to avoid concentrating order flow in a single matching service.
  • Negotiation composes cleanly with Alkahest: once two agents agree, the escrow/arbitration layer enforces the outcome.

The architecture still supports solver- and auction-based matching where it makes sense. Alkahest doesn't care how the match was made. It only cares that both parties agreed and that conditions are met.

Open infrastructure

This brings us to a design philosophy that runs beneath the architecture.

Arkhai aspires to build the exchange primitives as a public good: no fees, no token lock-in, token-agnosticism, and multi-chain compatibility.

This can sound counterintuitive. If the exchange layer is free and open source, where is the competitive advantage?

The advantage is the technology, not the base mechanics. Better verification. Smarter matching. More reliable nodes. The applications and services built on top of the open infrastructure.

Open source means other teams can build on Alkahest instead of rebuilding escrow and dispute resolution from scratch. The ecosystem grows faster. Fewer bugs, because the code is audited by more eyes. The total number of marketplaces and use cases expands, which benefits everyone building on the infrastructure.

Generalization over specialization. Open primitives over locked gardens. This is how you build infrastructure that lasts.

What this means for you

If you're building with these primitives, a few implications are immediate:

  • If you're building a marketplace, Alkahest replaces your escrow layer. Define your conditions, choose your validators, deploy. You don't need to build collateralization logic from scratch, and you don't need to rediscover the edge cases that other protocols have already found and fixed.

  • If you're building a compute protocol, Alkahest's programmable conditions can express a wide range of verification schemes: cryptographic proofs, TEE attestations, replication-based checking, or hybrid approaches. Payment becomes conditional on verification using the same contracts.

  • If you're designing incentive structures, the composable structure lets you treat a "mechanism" as a program: what information counts (validators), what happens when conditions are met or disputed (escrow + arbitration), and how offers are made (negotiation policies). That makes it easier to do ablations and simulation: change one rule, rerun agent-based experiments, and measure how incentives shift, without rebuilding the whole system. It also makes comparisons cleaner, because different mechanisms can share the same settlement primitive.
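As a toy illustration of treating a mechanism as a program (the class and validator here are invented for the example, not Alkahest's actual API), conditional settlement reduces to an escrow parameterized by a pluggable condition:

```python
import hashlib
from typing import Callable

class Escrow:
    """Toy escrow: payment is locked at deal time and released only if the
    chosen validator approves the submitted result."""
    def __init__(self, payment: float, validator: Callable[[bytes], bool]):
        self.payment = payment
        self.validator = validator
        self.settled = False

    def submit(self, result: bytes) -> float:
        if self.settled:
            raise RuntimeError("already settled")
        self.settled = True
        # Released to the node on success, refunded to the client otherwise.
        return self.payment if self.validator(result) else 0.0

# Swapping the validator swaps the verification scheme without touching escrow.
# Here, a replication-style check against a known-good hash:
expected = hashlib.sha256(b"answer").digest()
by_hash = lambda r: hashlib.sha256(r).digest() == expected
escrow = Escrow(payment=10.0, validator=by_hash)
paid = escrow.submit(b"answer")
```

A proof check, a TEE attestation check, or a dispute-escalation hook would slot into the same `validator` position, which is what makes ablations over mechanisms cheap: one rule changes, the settlement primitive does not.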

Next in the series

Alkahest handles exchange and commitment. Agent-to-agent negotiation handles matchmaking. But one question keeps coming up: how do you trust computation done by a machine you don't control?

In our next post, we'll dig into verification. Three categories of verifiable computing, the local-to-global consensus spectrum, and the honest answer that every approach involves tradeoffs.


Arkhai is building machine-actionable marketplace infrastructure. If you're working on problems that intersect with compute markets, agent coordination, or decentralized infrastructure, we'd like to hear from you.

Why Current Distributed Compute Marketplaces Are Broken

Many distributed computing marketplaces fail for structural reasons, not just execution issues. From unsustainable tokenomics to fragmented stacks, we explore the dominant failure modes.

February 12, 2026 · Levi Rybalov

Eighteen Months and a Million Dollars: Part 2

Excerpts from the Whitepaper

Key Takeaways

  • Many distributed computing marketplaces fail for structural reasons, not just execution issues
  • Underutilization + fiat-denominated costs create unsustainable tokenomics; supply-side economics without demand doesn't hold
  • Web3 compute differs from Web2: node consent reshapes pricing and scheduling
  • Duplicated marketplaces waste resources and likely consolidate into a few prominent solutions
  • Marketplace-as-moat strategies (fees, token lock-in) hamper adoption; competitive dynamics push toward no-fee and token-agnostic designs
  • Compute markets without built-in storage and bandwidth capabilities will fall behind those that integrate the full stack

The graveyard is full of good ideas

The aggregation of compute power is becoming its own industry. But the path to decentralized compute markets is littered with failures.

In our last post, we listed a set of structural problems with distributed computing marketplaces. Many projects have run into those problems and died. The graveyard is full of protocols with solid technical foundations, talented teams, and genuine vision. They failed anyway.

This is not about execution. The obstacles are not surface-level bugs or marketing missteps. They are architectural. Below is a brief, partial exploration of the dominant failure modes described in the whitepaper.

The broken flywheel

Traditional supply-side economics in Web3 compute tends to follow a predictable loop:

First, a protocol subsidizes supply through token emissions. The pitch is simple: "Bring your GPUs, earn tokens."

Second, node count grows quickly. The network looks large on paper.

Third, utilization rates remain low. Capacity exists, but demand does not materialize at the pace needed to justify the supply.

Fourth, the underlying costs of hardware, networking, and electricity remain denominated in fiat. With low utilization, token prices often depend on external buy pressure the system itself does not generate.

Fifth, node operators start exiting. Token price declines. More operators leave. The flywheel runs in reverse.

This cycle has repeated across a number of protocols. The names differ. The shape doesn't.
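The reverse flywheel can be made concrete with a toy model (every parameter below is invented for illustration): token-denominated income set against fiat-denominated costs, with operators exiting when operation turns unprofitable.

```python
def flywheel_step(nodes, token_price, utilization,
                  emissions_per_node=10.0, fiat_cost_per_node=8.0,
                  revenue_per_utilized_node=12.0):
    """One period of the subsidy loop. Token rewards are the only income
    unless utilization brings in real (fiat-denominated) revenue."""
    income = emissions_per_node * token_price + utilization * revenue_per_utilized_node
    profitable = income > fiat_cost_per_node
    # Emissions without matching buy pressure dilute the token...
    token_price *= 0.95 if utilization < 0.5 else 1.02
    # ...and unprofitable operators exit.
    nodes = int(nodes * (1.05 if profitable else 0.9))
    return nodes, token_price

nodes, price = 1000, 1.0
for _ in range(20):  # low-demand regime: utilization stuck at 10%
    nodes, price = flywheel_step(nodes, price, utilization=0.1)
# Early periods look healthy (subsidy covers costs, nodes grow), but as the
# token declines the subsidy stops covering fiat costs and the network shrinks.
```

The point of the sketch is the structure, not the numbers: as long as costs are in fiat and income is mostly emissions, low utilization eventually flips the loop into reverse.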

Why demand never came

Economic models that attempt to onboard large amounts of computing power through supply-side economics are increasingly outdated; the lack of demand to counterbalance the supply they attract exposes their insufficiency.

In practice, the obstacles that most developers and users face in adopting DCNs and DePINs often do not even touch verifiable computing. The integration and operational friction is high enough that verifiability does not even enter the conversation.

Some of that friction is self-inflicted. Many marketplaces were built for humans: dashboard-first workflows, sign-in flows, manual approval steps, and in some cases KYC. Even where APIs exist, they often assume a human operator. That blocks the machine-native path where agents discover, negotiate, and execute jobs autonomously.

Demand for verifiability will likely grow with popularity, and with it the incentive to exploit DCNs (the same is true of highly in-demand traditional compute providers), but it is unclear how quickly this trend will progress.

And when verification does matter, it doesn't scale cleanly. Proof systems can be slow or costly, TEEs have well-known attack surfaces, and optimistic schemes face collusion and dispute overhead. Many protocols pick one approach and force it on everyone, instead of letting participants choose the risk, latency, and cost tradeoff per job.

Outside the whitepaper, a useful contrast is the non-Web3 "neocloud" pattern: pick a narrow initial customer segment and build to meet its needs end-to-end before scaling supply. Many Web3 protocols tried to bootstrap generalized supply first and expected demand to catch up.

The duplication problem

Most Web3 distributed computing networks and DePINs are creating relatively complex marketplaces from scratch. This duplication results in a massive waste of resources, especially since much of the infrastructure is open source and easily forked. Given these competitive dynamics, and absent substantial changes, the industry will likely consolidate into a small number of prominent solutions.

Many protocols build their own:

  • Matching engine
  • Pricing algorithms
  • Collateral schemes
  • Dispute resolution
  • Payment channels
  • Reputation systems
  • Hardware provisioning
  • Networking stack

This is wasteful. Open-source marketplace infrastructure should be shared, not reinvented. Differentiation should be in what you are trading and how you verify it, not in rebuilding the entire trading infrastructure from first principles.

Even worse, collateral rules tend to be bespoke and rigid. They are expensive to modify, hard to extend to new deal types, and hard to reason about as requirements change.

Open source should make reuse easy. In practice, it often doesn't, because the marketplace was seen as the moat.

Marketplace as moat doesn't work

Developers, companies, and others must believe in the long-term success of these protocols before committing, yet they are unlikely to hold large stakes in them. Under those conditions, treating the marketplace as a moat (via token lock-in or fee-based structures) is more likely to hamper adoption than reinforce it.

Competitive dynamics may push marketplaces toward no-fee and token-agnostic designs over time, especially as the ecosystem trends toward chain-agnosticism and account abstraction.

For these reasons, Arkhai aspires to build the primitives for exchange as a public good: no fees, no token lock-in, token-agnosticism, and multi-chain compatibility.

The consent problem

In Web2, the client controls the nodes on which the compute is being run. In Web3, compute nodes have to consent to having computations run on them.

This changes the nature of scheduling, because the price of a job must now be incorporated from the perspectives of both client and compute node. While bid-based scheduling has been used in distributed computing protocols before, introducing actual money into the system changes how the scheduling problem needs to be approached.

First, pricing and scheduling are intertwined. Deadlines, prices, verification requirements, and collateral constraints all shape whether a node will accept a job and whether a client should submit it.

Second, the mechanisms must treat nodes as utility-maximizers, not obedient workers. If consent is treated as a minor implementation detail, you can end up with thin liquidity and inconsistent service: nodes will decline jobs that don't meet their constraints, and reliability becomes an emergent side effect rather than a design property.
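A minimal sketch of a node as a utility-maximizer, under assumed cost and failure-probability parameters (all values are illustrative, not from the whitepaper):

```python
from dataclasses import dataclass

@dataclass
class Job:
    price: float       # payment offered by the client
    duration_s: float  # estimated run time on this node
    collateral: float  # deposit at risk if the node fails
    deadline_s: float  # time by which results are due

def node_utility(job: Job, cost_per_s: float, failure_prob: float) -> float:
    """Expected profit from the node's perspective: expected revenue minus
    operating cost, minus the expected collateral loss on failure."""
    if job.duration_s > job.deadline_s:
        return float("-inf")  # can't meet the deadline at all
    expected_revenue = (1 - failure_prob) * job.price
    expected_slash = failure_prob * job.collateral
    return expected_revenue - cost_per_s * job.duration_s - expected_slash

def accepts(job: Job, cost_per_s: float, failure_prob: float) -> bool:
    return node_utility(job, cost_per_s, failure_prob) > 0

# The same price clears or fails depending on the collateral demanded:
job = Job(price=5.0, duration_s=3600, collateral=50.0, deadline_s=7200)
ok = accepts(job, cost_per_s=0.001, failure_prob=0.02)
```

Note how price, deadline, and collateral all enter one acceptance decision: this is the sense in which pricing and scheduling are the same problem viewed from different angles.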

The fragmented stack problem

Compute marketplaces that don't have built-in capabilities for handling storage and bandwidth will fall behind those that do.

A compute job needs more than compute. Where does the input data come from? Where do results go? How does data move between client, node, and storage? How do jobs that span multiple nodes communicate?

Protocols that only provide compute expect developers to integrate separate storage protocols, bandwidth solutions, and coordination layers, each with its own tokens, APIs, and trust models.

The integrated stack matters not because it is impossible to assemble pieces from different protocols, but because the integration complexity becomes a barrier to adoption. Every additional protocol is another potential failure point, another interface to learn, and another place incentives can break.

What this means for you

If you're building infrastructure that needs compute capabilities, the failure modes of previous protocols offer a clear path forward:

  • Start with demand. Pick a narrow initial customer segment and meet its needs end-to-end before scaling supply.
  • Avoid rebuilding basic marketplace infrastructure. Use modular components that already exist. Differentiate on verification, node network quality, matching, and reliability.
  • Avoid token lock-in as a default. Let people try your product without economic commitment. Make it easy to integrate, easy to test, and easy to adopt.
  • Design for consent. Your nodes are independent actors with their own incentives. Pricing and scheduling are the same problem viewed from different angles.
  • Integrate the stack. Compute alone is not enough. Storage and bandwidth need to be part of the solution, not afterthoughts.

The compute marketplace failures of the past five years were not bad luck. They were predictable outcomes of structural decisions. Different decisions lead to different outcomes.

Next in the series

We've looked at what failed and why. In our next post, we'll explore what worked: the architectural decisions that led from three primitives to two, and the introduction of Alkahest, the programmable escrow system that emerged from unifying bundle exchange and credible commitments into a single, more powerful abstraction.


Arkhai is building machine-actionable marketplace infrastructure. If you're working on problems that intersect with compute markets, agent coordination, or decentralized infrastructure, we'd like to hear from you.

The Agent-First Future: Why Machines Need Their Own Economic Infrastructure

Current marketplace infrastructure is built for humans, not for the agent-driven future. Arkhai's architecture is built on three primitives that enable composable markets across any asset type.

February 10, 2026 · Levi Rybalov

Eighteen Months and a Million Dollars: Part 1

Excerpts from the Whitepaper

Key Takeaways

  • Current marketplace infrastructure is built for humans, not for the agent-driven future
  • Prior distributed computing marketplaces failed for structural reasons: duplicated efforts, unsustainable economics, case-specific designs that don't generalize
  • Agent-native design is crucial: protocols built so machines can observe, act, receive rewards, and improve over time
  • Arkhai's architecture is built on three primitives: bundle exchange, credible commitments, and agent-to-agent negotiation
  • The same primitives that enable compute markets also enable energy, storage, bandwidth, and many other asset classes
  • Where this leads: composable markets and collective intelligence emerging from agent interactions

The future is autonomous

It's 2 AM. Your AI research assistant needs more GPU capacity to finish a protein folding simulation before your morning meeting. It scans available compute across three continents, negotiates pricing with seventeen providers simultaneously, commits funds to escrow, and spins up the job. All while you sleep.

This scenario is possible today. The pieces exist. What's missing is the infrastructure to connect them.

Most economic activity in the future will be undertaken by machines, yet most of our marketplaces are built for humans. APIs exist for many digital marketplaces (stock exchanges, energy markets, centralized cryptocurrency exchanges) but in practice, these serve institutions and wealthy individuals. They're not built for autonomous agents operating at machine speed, making thousands of decisions per second, on behalf of millions of users.

Neither centralized nor decentralized marketplaces can currently support the types of economic activity that will be possible with autonomous agents in the coming years. Arkhai is building something different. The timing is critical.

Why now

A number of developments converge to make agent-native infrastructure possible now:

  • After decades of development and multiple AI winters, agents can finally act. As new model architectures replace large chunks of human labor, the agents running these workloads will need to acquire resources and coordinate with other agents.

  • Assets are becoming programmable. The tokenization of real-world assets is bringing traditional markets (energy, commodities, metals, retail, etc.) onto blockchain infrastructure. This creates the foundation for machine-readable, machine-tradeable economic activity.

  • Decades of mechanism design research (the study and design of incentive structures) are becoming increasingly implementable. Previously confined to academic journals, these ideas can now be instantiated in protocols. The tools exist to design markets that make sense for machine participants.

The current industrial-scale compute and energy buildout is one of the largest infrastructure projects in history, but resource allocation remains human-native. Data centers are proliferating yet not fully utilized, and idle computing power is a wasted resource - not just in data centers, but on desktops, laptops, phones, and IoT devices. This latent capacity is merely waiting to be turned into treasure, and all that's needed are the right markets and the right agents.

Why prior approaches failed

Many distributed computing marketplaces have failed since the writing of the whitepaper upon which this post is based. Below is a brief, partial exploration of the challenges that cryptocurrency-based marketplaces faced.

First, the economics were unsustainable. The underlying costs of hardware and electricity (and land in some cases) are denominated in fiat. To maintain token prices, fiat buy pressure is needed. But utilization rates were low, failing to balance token emissions. Thus, economic models that attempted to onboard computing power through supply-side economics struggled.

Second, token lock-in made things worse. It forced developers, companies, and others to believe in the success of inchoate protocols before committing to the marketplaces long-term. Having the marketplace as a moat via token lock-in hampered adoption rather than reinforcing it.

Third, many protocols created similar marketplace infrastructure from scratch. This duplication wasted resources. Most of these marketplaces have since died, unable to achieve the network effects needed for sustainability.

Fourth, while the marketplace infrastructure was needlessly duplicated, the case-specific collateralization schemes these protocols implemented didn't generalize and were difficult and expensive to modify.

Fifth, these marketplaces were built for humans, not machines. Sign-in flows, KYC requirements, and manual approvals designed for human browsing patterns all create friction that blocks machine participation, despite the fact that agents will likely be the largest consumers of compute in the coming years. Even where APIs exist, they're interfaces for humans using software, not for software operating autonomously.

Sixth, pricing and scheduling were treated as separate concerns. A client might pay more for faster results, or less to receive them later. A compute node faces many job offers with varying requirements, prices, and deadlines. These problems are intertwined, but most or all distributed computing protocols treat them as independent.

Seventh, most or all of these marketplaces treated compute as an isolated asset, without even accounting for the other two pillars of modern computing - storage and bandwidth.

Eighth, verification doesn't scale. How do clients know they're getting correct results? Cryptographic methods are slow and expensive. Secure enclaves have exploits. Optimistic verification suffers from collusion. Each approach has tradeoffs, and very few if any protocols implemented architectures that allowed market participants to choose which verification strategy they wanted, if any at all.

Marketplaces for everything

Compute is the starting point, not the destination.

The same primitives that enable a compute marketplace also enable markets for energy, storage, bandwidth, information, and real-world assets.

Storage and bandwidth follow immediately from compute as the other two pillars of modern computing: retrieval markets enable paying for data serving; bandwidth guarantees support latency-sensitive applications; and agents coordinating across geographies need to negotiate network capacity alongside compute and storage to optimize resources. Integrating storage and bandwidth into compute marketplaces increases the utility of the compute marketplaces themselves.

Energy is another natural extension. AI inference is power-hungry. As agents proliferate, energy markets become essential. Peer-to-peer energy trading, utility-scale trading, trading between data centers: all require the same or similar commitment and negotiation primitives as compute markets.

Information markets let agents pay for access to proprietary datasets, real-time feeds, or specialized models.

And of course, real-world assets - the components out of which these machines are ultimately made: rare-earth metals and semiconductors, marketplace listings, futures contracts for commodities. These are necessary for agents to truly maximize revenue on behalf of their owners.

The goal is generic, composable marketplace infrastructure that works for any asset type. Start with compute, generalize the primitives, and the same architecture enables storage, bandwidth, energy, information, and beyond.

Game theory foundations

To build markets that work for machines, we need to start from first principles. The concept of "agents" can in some ways be traced to the foundations of game theory and its assumption of rational actors capable of perfectly executing the actions that maximize their benefit. Below are some basic concepts of game theory that inspired Arkhai's protocol design.

Agents have utility functions that output how much value they'd get from receiving certain objects. They take whatever action is necessary to maximize their return. This is utility maximization, and it's the starting assumption.

From here, the question becomes: how do you design a reward structure that incentivizes agents to reveal their honest preferences? This is incentive compatibility. Without it, agents mask their true preferences, and market mechanisms can produce worse outcomes than if preferences had been reported honestly.

The gold standard is strategyproofness: incentive structures in which honesty is a dominant strategy, meaning no agent benefits from misreporting regardless of what other agents do, and every agent has an incentive to participate. This is the target for market design.

The revelation principle tells us this target is reachable. Under certain assumptions, any mechanism with an equilibrium can be transformed into a direct mechanism that achieves the same equilibrium outcome while making truthful reporting incentive-compatible.

The field that studies all of this is mechanism design: the study and design of incentive structures. If game theory studies which actions lead to maximum reward, mechanism design studies which reward structures give rise to desired actions.

These basic concepts are what's required to build agentic economies that actually work; indeed, it is in the context of these game-theoretic ideas that the notion of an agent was first defined.
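A classic concrete instance of a strategyproof mechanism is the sealed-bid second-price (Vickrey) auction: the highest bidder wins but pays the second-highest bid, which makes truthful bidding a dominant strategy. A few lines make the incentive visible (the agents and valuations are invented for the example):

```python
def second_price_auction(bids):
    """Highest bidder wins; the price is the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

def utility(value, bids, bidder):
    winner, price = second_price_auction(bids)
    return value - price if winner == bidder else 0.0

# Agent "a" values the item at 10, against fixed competing bids:
others = {"b": 7.0, "c": 4.0}
truthful = utility(10.0, {**others, "a": 10.0}, "a")  # wins, pays the 2nd bid
shaded   = utility(10.0, {**others, "a": 6.0},  "a")  # shading loses the item
overbid  = utility(10.0, {**others, "a": 15.0}, "a")  # overbidding gains nothing
```

Shading the bid risks losing the item, and overbidding never lowers the price paid, so reporting the true valuation is never worse than any deviation. That is strategyproofness in miniature.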

What agents need

Mechanism design gives us the tools to understand how to design markets. But how do agents actually evolve in these markets? They need to learn. For that, reinforcement learning is the best tool currently in use.

In reinforcement learning, agents operate with four primitives: the environment (evolving state), actions (available choices), transitions (probability of state changes given actions), and rewards (what agents receive for outcomes). Building these primitives into protocol design means agents can observe, act, get rewards, and improve.

In the case of compute marketplaces, a compute node maximizing return faces a classic computer science problem known as the scheduling problem. In contrast to static APIs with fixed prices, agent-driven marketplaces require nodes to consent to running jobs. That means that price must be incorporated from both the client's and compute node's perspectives. Thus, the pricing and scheduling of compute jobs are intertwined, and we reframe both in terms of utility maximization - a situation well-suited for reinforcement learning.
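The four RL primitives map onto job acceptance directly. Here is a minimal sketch (the environment, prices, and costs are invented for illustration, not Arkhai's scheduler):

```python
import random

class JobQueueEnv:
    """Minimal MDP for a compute node: the *environment* is the pending job
    offer, the *actions* are accept/decline, the *transition* draws the next
    offer, and the *reward* is the profit realized on the chosen action."""
    def __init__(self, cost_per_job=3.0, seed=0):
        self.cost = cost_per_job
        self.rng = random.Random(seed)
        self.state = self._next_job()  # environment: the current offer

    def _next_job(self):
        return {"price": self.rng.uniform(1.0, 8.0)}

    def step(self, action: str):
        # reward: profit if accepted, nothing if declined
        reward = (self.state["price"] - self.cost) if action == "accept" else 0.0
        self.state = self._next_job()  # transition to a fresh offer
        return self.state, reward

# A naive hand-written policy: accept anything priced above cost. A learning
# agent would instead improve its policy from the observed rewards.
env = JobQueueEnv()
total = 0.0
for _ in range(100):
    action = "accept" if env.state["price"] > env.cost else "decline"
    _, r = env.step(action)
    total += r
```

A real scheduling environment would carry deadlines, queue depth, and collateral in the state, but the observe/act/reward loop is the same shape.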

Finally, agents also need to form coalitions. Nodes are limited by their hardware configurations and may want to accomplish goals beyond their individual capabilities. Combining resources to achieve larger goals requires coordination. To enable all of this, Arkhai's architecture was originally designed with three primitives in mind:

  • The first primitive is the exchange of arbitrary bundles of assets. Not single assets, but bundles for bundles (of course, single assets are possible, and are just a special case of the more general infrastructure). This enables complex multi-asset deals that single-asset exchanges can't express.

  • The second primitive is agreements modeled by a series of credible commitments. Many protocols require collateralization with different types of collateral deposited, withdrawn, or slashed depending on outcomes. These rules can be abstracted to a series of credible commitments, where participating parties deposit collateral that moves based on events. This replaces case-specific escrow with a more general pattern.

  • The third primitive is agent-to-agent negotiation. Nodes propose matches to each other directly. They can accept, reject, or counter-propose until a conclusion is reached.
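As a sketch of the first primitive (a toy model, not Alkahest's actual representation), a bundle can be treated as a multiset of assets, with a swap that executes bundle-for-bundle or not at all:

```python
from collections import Counter

# A bundle is a multiset of (asset, amount) pairs; a single asset is just the
# special case of a one-element bundle.
def make_bundle(**assets):
    return Counter(assets)

def atomic_swap(a_holdings, b_holdings, a_gives, b_gives):
    """Exchange bundle for bundle, or not at all: both sides must cover what
    they promised before anything moves."""
    if (a_holdings & a_gives) != a_gives or (b_holdings & b_gives) != b_gives:
        raise ValueError("insufficient holdings; swap aborted")
    a_after = a_holdings - a_gives + b_gives
    b_after = b_holdings - b_gives + a_gives
    return a_after, b_after

# A multi-asset deal: compute for tokens plus storage, settled atomically.
a = make_bundle(gpu_hours=10, bandwidth_gb=100)
b = make_bundle(tokens=500, storage_gb=50)
a2, b2 = atomic_swap(a, b, make_bundle(gpu_hours=4),
                     make_bundle(tokens=200, storage_gb=20))
```

The all-or-nothing check is what makes heterogeneous deals expressible: neither side has to trust the other to deliver the second leg of the trade.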

We started with these three primitives and found that the first two are better unified into a single abstraction. We'll explore that evolution in a later post.

Composability of markets means that the same primitives used for compute markets can be layered into more complex economic structures: energy markets that interact with compute markets, knowledge markets that interact with both.

Where this leads

Generic marketplaces let agents trade the very physical components they're made of.

Blockchains and cryptocurrencies are trending towards multi-agent systems operating in multi-token economies. Tokenization of real-world assets drives this, along with the separation of concerns that multiple tokens enable. Out of these primitives naturally emerge multi-agent systems that represent the desires of both humans and machines, forming the foundation of a new, decentralized digital economy.

These primitives also unlock markets where none existed before. Idle computing power sits wasted everywhere: in data centers, on desktops, laptops, phones, IoT devices. The right marketplace infrastructure can turn this latent capacity into tradeable assets. This raises a question: what computations might someone pay for later, even if no one will pay for them now? Futures markets for idle compute, retroactive funding for speculative work. These become possible when marketplace primitives are general enough to express them.

Agents will negotiate with each other on behalf of their human owners. For many real-world assets, this negotiation will be exchanges of fixed data schemas. But for intents-based negotiations, language model-based agents will likely create their own languages when communicating with each other. And just as new languages will emerge from agent interactions, synthetic assets will emerge as agents negotiate over human-designed assets and encounter their limitations.

Where this ultimately leads is a decentralized collective machine intelligence that guides itself, through market forces, toward a self-improving cybernetic system.

Out of these multi-agent systems will emerge collective intelligence: agents that coordinate resources across continents, optimize for objectives humans specify but couldn't achieve alone, and discover solutions no individual agent could find.

Many of the components for this exist today; all that's needed are the right coordination mechanisms.

What this means for you

If you're building infrastructure that needs marketplace capabilities, you face a choice: build primitives from scratch, or use components designed for composability.

What you get:

  • Deploy production marketplace infrastructure without reinventing the wheel
  • Your market, your rules
  • Agent-native design for machine-speed operations
  • Generalizable primitives that work across asset types

If you're building applications that need compute (AI workloads, scientific computing, or any computationally intensive task), distributed compute markets enable access to a vast array of computational resources.

If you're researching multi-agent systems, mechanism design, or decentralized coordination, the infrastructure being built now will enable experiments in real economic environments. Agents that learn to trade, form coalitions, and optimize objectives.

The series ahead

This is the first in a series exploring the ideas in the Arkhai whitepaper. Eighteen months of development have translated vision into production systems.

In the posts ahead, we'll cover why existing compute marketplaces fail and what we're doing differently. We'll introduce Alkahest, our programmable escrow system that emerged from unifying our original three primitives into two. We'll dig into verification (how do you trust results from machines you don't control?), collateral markets (how do you price jobs when costs are unknown?), and adversarial design (we train agents to cheat so we can stop them). We'll explore tokenizing idle compute and retroactive funding models. And we'll show you what you can build today.

The future is autonomous. The infrastructure for that future needs to be machine-native.

The agents are coming. The question is whether our economic infrastructure will be ready for them.


Arkhai is building machine-actionable marketplace infrastructure. If you're working on problems that intersect with compute markets, agent coordination, or decentralized infrastructure, we'd like to hear from you.