9 posts tagged with "whitepaper"

What You Can Build Today

The whitepaper was a thesis about machine-native markets. Here's what actually exists today: Alkahest, Git Commit Marketplace, Agentic RAG, and our generalized marketplace protocol.

March 2, 2026 · Levi Rybalov

Eighteen Months and a Million Dollars: Part 9

Excerpts from the Whitepaper

Key Takeaways

  • The original whitepaper was completed in summer 2024; the public product surface has evolved since then
  • Settlement primitive: Alkahest (escrow + arbitration) plus developer SDKs
  • Alkahest expresses peer-to-peer agreements as Statements, validators/arbiters, and escrowed value
  • Git Commit Marketplace: pay the first commit that passes your tests
  • Agentic RAG: provenance-first RAG pipelines for scientific work
  • Generalized marketplace protocol: soon to be released, this is the agent-driven marketplace layer described in the whitepaper

Note: This post is an updated snapshot of what's public as of early 2026.

Eighteen months and a million dollars

The whitepaper was a thesis about machine-native markets. The last eighteen months have been about turning one part of that thesis into something real: settlement in open environments.

Some things changed in the process (though surprisingly little). Some ideas stayed as research. Some became experiments. Many became concrete interfaces you can use today.

Below are the application surfaces that actually exist right now, written in the same format: what it is, who it's for, and what you get.

Application 1: Escrow and arbitration (Alkahest)

Alkahest is Arkhai's open-source escrow and arbitration system for peer-to-peer agreements. You describe what should happen, under which conditions, and who decides when those conditions hold. Escrow enforces the outcome.

At the interface level, this is a small set of primitives:

  • Statements to represent obligations
  • Validators/Arbiters to evaluate conditions (and escalate when necessary)
  • Escrow obligations to hold and release value
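To make the shape of these primitives concrete, here is a minimal sketch in Python. All names (`Statement`, `Escrow`, `validator`, `settle`) are hypothetical stand-ins for illustration; the actual Alkahest SDKs and on-chain contracts differ.

```python
from dataclasses import dataclass

@dataclass
class Statement:
    """An obligation: who owes what to whom, under which condition."""
    obligor: str
    beneficiary: str
    condition: str          # identifier the validator knows how to check

@dataclass
class Escrow:
    """Holds value until a validator decides the statement's condition holds."""
    statement: Statement
    amount: int
    released: bool = False

def validator(condition: str, evidence: dict) -> bool:
    """Toy validator: a condition holds when the evidence says so."""
    return bool(evidence.get(condition))

def settle(escrow: Escrow, evidence: dict) -> str:
    """Release escrowed value to the beneficiary if the condition holds."""
    if validator(escrow.statement.condition, evidence):
        escrow.released = True
        return escrow.statement.beneficiary
    return escrow.statement.obligor   # refund path (dispute escalation omitted)

deal = Statement(obligor="buyer", beneficiary="seller", condition="tests_pass")
escrow = Escrow(statement=deal, amount=100)
print(settle(escrow, {"tests_pass": True}))   # seller
```

The point of the sketch is the separation of concerns: the statement describes the deal, the validator decides, and the escrow enforces. Dispute resolution and recursive escalation would sit behind the validator.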

Who this is for:

  • Anyone building marketplaces, bounties, or escrowed services
  • Teams that need conditional payment in open environments (not just trusted counterparties)
  • Protocol designers who want a reusable settlement layer instead of bespoke per-app escrow

What you get:

  • A programmable escrow interface you can reuse across deal types
  • Dispute resolution and recursive escalation when automation runs out
  • Composable conditions (boolean logic over on-chain and off-chain signals)

Application 2: Markets for shipped code (Git Commit Marketplace)

Git Commit Marketplace turns commits into tradeable units of work: escrow funds, define verification (tests), and pay the first commit that satisfies the objective.

Who this is for:

  • Open source maintainers funding specific changes
  • Teams that want outcome-based contracting instead of hourly workflows
  • Organizations that want transparent pricing for discrete engineering tasks

What you get:

  • Faster, more legible procurement of software work
  • Automated verification as the oracle for payment
  • Settlement that doesn't require trusting the counterparty

Application 3: Provenance-first research RAG (Agentic RAG)

Agentic RAG is an open-source approach to retrieval-augmented generation for scientific and technical work, focused on reproducibility and provenance rather than chatbot answers.

Who this is for:

  • Researchers and labs building literature maps and research assistants
  • Teams that need traceable, reproducible retrieval pipelines
  • Organizations building internal knowledge systems that must be auditable

What you get:

  • RAG pipelines that preserve provenance from source → chunk → embedding → retrieval → output
  • A system designed for inspection and iteration, not black-box responses
  • A way to turn retrieval into a first-class, reusable artifact
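One way to picture the provenance chain is content-addressed IDs at every stage, so any link can be re-derived and audited. This sketch is an assumption about shape, not Agentic RAG's actual data model; the `pid` helper and field names are invented.

```python
import hashlib

def pid(payload: str) -> str:
    """Content-addressed ID: re-hashing the stored artifact re-derives it."""
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

# Hypothetical trace: every retrieval result carries the full chain of IDs,
# source -> chunk -> embedding -> retrieval -> output.
source = "paper.pdf: ...full text..."
chunk = source[:20]
trace = {
    "source_id": pid(source),
    "chunk_id": pid(chunk),
    "embedding_id": pid("embed:" + chunk),   # stand-in for a real embedding
    "retrieved_for": "query: what did the paper show?",
}

# An auditor can re-hash the stored artifacts and confirm each link.
assert trace["chunk_id"] == pid(source[:20])
print(trace["source_id"], trace["chunk_id"])
```

Because every ID is a function of the underlying artifact, the trace is reproducible rather than merely logged.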

Application 4: Compute and energy markets driven by cybernetic agents

Our generalized marketplace protocol for compute and energy, as described in the whitepaper: autonomous agents negotiating over cycles and watts under real constraints (latency, carbon, reliability).

Who this is for:

  • Compute and energy providers exploring market-based allocation
  • Data center operators looking to maximize their ROI on compute and energy infrastructure
  • Compute purchasers looking for the best deals given their requirements
  • DePIN and infrastructure teams that need programmable settlement primitives
  • Builders who want to participate in shaping the threat model and mechanism choices early

What you get:

  • A path to agent-mediated resource allocation built on a concrete settlement layer
  • Early integration points: settle deals with escrow/arbitration, then iterate on matching and optimization

Getting started

Pick one surface and build one small loop end-to-end:

  • Alkahest (escrow + arbitration): Model your deal as a Statement. Decide what evidence releases funds, and who arbitrates when evidence is ambiguous. Start with one validator and one outcome, then add complexity.
  • Git Commit Marketplace: Choose an objective (a failing test, a benchmark, a feature flag). Escrow funds. Pay the first passing commit. Start with deterministic checks, then expand to richer verification.
  • Agentic RAG: Make provenance a first-class output. Persist source references, chunk IDs, and retrieval traces so results can be audited and reproduced.
  • Generalized marketplace protocol: Start by defining the deal format and constraints you actually care about (latency, carbon, reliability). Then settle agreements with Alkahest and iterate on negotiation and matching.

Arkhai is building machine-actionable marketplace infrastructure. If you're working on problems that intersect with compute markets, agent coordination, or decentralized infrastructure, we'd like to hear from you.


The Generic Marketplace Vision

What happens when compute, energy, storage, and bandwidth all trade on the same infrastructure? Compositional game theory, coalition formation, and the emergence of agent-created assets.

February 27, 2026 · Levi Rybalov

Eighteen Months and a Million Dollars: Part 8

Excerpts from the Whitepaper

Key Takeaways

  • Compute alone isn't enough: a marketplace for compute, storage, and bandwidth together is worth far more
  • Inspired by compositional game theory: treat mechanisms as composable building blocks, with explicit interfaces
  • Decentralized vehicle routing: in some settings, coalition formation + credible commitments can push the Price of Anarchy toward 1
  • Coalition formation with carbon optimization: minimize emissions while completing all compute jobs
  • Agents will create their own languages when negotiating. Synthetic assets will emerge as agents encounter limitations of real-world assets.
  • The trajectory: multi-agent systems operating in multi-token economies, a self-improving cybernetic system guided by market forces

Beyond compute

In earlier posts, we said the same primitives that enable compute markets also enable other markets, like energy, storage, bandwidth, information, and real-world assets. We've spent the past several posts building up those primitives: Alkahest for exchange and commitment, verification for trust, collateral markets for economic guarantees, adversarial design for robustness, and tokenization for idle compute.

Now let's talk about what happens when markets start interacting with each other.

A marketplace for compute alone is worth far less than a marketplace for compute, storage, and bandwidth together. A relatively simple task (compute a job on one node, send the result to a second node, and store it there for some period, with proper incentives for every node involved) remains quite difficult with most current distributed protocols.

This difficulty is a consequence of fragmented design. Protocols that focus on compute expect developers to integrate with separate storage protocols and separate bandwidth solutions. Each comes with its own tokens, its own APIs, and its own trust models. The complexity barrier blocks adoption.

The integrated approach is different. Same infrastructure for compute, storage, bandwidth, and beyond. Same escrow contracts. Same collateral patterns. Same verification framework. Different validators and conditions for different asset types, but the same underlying architecture.

In this post, we'll sketch one line of theory that influenced how we think about this (compositional game theory), then walk through two examples, and then end with what we expect agents to create as these markets become composable.

Compositional game theory (as inspiration)

One source of inspiration is compositional game theory: a framework that applies category-theoretic concepts to game theory by composing games with explicit interfaces.

Traditional mechanism design treats each mechanism as a monolithic system requiring complete analysis from scratch. This limits complexity: designing a new mechanism means starting over. It limits reuse: proven mechanisms can't be easily combined with others.

Compositional game theory treats mechanisms as building blocks. Open games have defined inputs, outputs, and interfaces. Composition operators let you connect games together in a few ways: sequentially (outputs feed into inputs) and in parallel (games interact side-by-side), so the overall system is assembled from smaller pieces.

One useful consequence is that analysis can sometimes become compositional too: under the assumptions of the formalism, you can reason about parts and then lift those results to the whole.
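As a toy illustration of the interface discipline (not the categorical formalism itself), an "open game" can be modeled as a function from an input to an (output, payoff) pair, with composition defined purely on those interfaces. All games and numbers below are invented.

```python
# Toy model: an "open game" maps an input observation to (output, payoff).
# Real compositional game theory is category-theoretic; this only shows
# that composition never needs to look inside either component.

def seq(g, h):
    """Sequential composition: g's output feeds h's input; payoffs add."""
    def composed(x):
        y, pg = g(x)
        z, ph = h(y)
        return z, pg + ph
    return composed

def par(g, h):
    """Parallel composition: games run side by side on a pair of inputs."""
    def composed(xy):
        (y, pg), (z, ph) = g(xy[0]), h(xy[1])
        return (y, z), pg + ph
    return composed

pricing = lambda demand: (min(demand, 10), demand)    # (output, payoff)
delivery = lambda qty: (f"shipped {qty}", qty * 2)

market = seq(pricing, delivery)
print(market(7))    # ('shipped 7', 21)
```

The design point carries over to mechanisms: if escrow, verification, and collateral expose clean interfaces, larger markets are assembled by composition rather than redesigned from scratch.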

We're not claiming compositional game theory "enables" every marketplace we care about. It's a perspective: define primitives with clean interfaces, then build complex mechanisms by composition, and validate the composite system with the right mixture of analysis and empirical testing.

Decentralized vehicle routing

Here's a concrete coordination example. It's less about "compositional markets" and more about coalition formation plus credible commitments: agents making enforceable agreements that reshape collective outcomes.

Imagine looking down on a city from above. Every agent is trying to get from point A to point B on a train, in a car, on a bike, or on foot. Each agent opens their favorite map application, is returned the shortest path, and follows it.

Now imagine overlooking the same city, but the agents can be coordinated via an optimization algorithm that minimizes the average travel time. The ratio of the average travel time in the selfish case to the optimized case is called the Price of Anarchy.

In stylized settings, coalition formation plus credible commitments gives agents the tools to push the Price of Anarchy toward one.
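A classic worked example of this ratio (not from the whitepaper, but standard in the routing literature) is Pigou's network: one road with fixed latency 1, and one whose latency equals the fraction of traffic using it.

```python
# Pigou's example: traffic splits between a fixed road (latency 1) and a
# congestible road (latency x, the fraction of traffic using it).

def avg_cost(x: float) -> float:
    """Average travel time when fraction x takes the congestible road."""
    return x * x + (1 - x) * 1.0

# Selfish equilibrium: the congestible road is never worse than the fixed
# road, so everyone takes it (x = 1) and the average cost is 1.
selfish = avg_cost(1.0)

# Social optimum: grid-search the split that minimizes average cost
# (the minimum is at x = 0.5, giving average cost 0.75).
optimum = min(avg_cost(i / 1000) for i in range(1001))

print(round(selfish / optimum, 3))   # Price of Anarchy = 1.333
```

If the agents on the congestible road could be paid to move (with the agreement escrowed and enforced), the system could settle at the 0.75 optimum instead of the equilibrium cost of 1.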

The mechanism is intuitive. If my taking a different route makes your trip faster, you can pay me to do it. But for that agreement to work at scale, it has to be enforceable: payments need escrow, conditions need validators, and disputes need resolution. That's exactly the shape of a credible-commitment system.

The benefits are concrete: lower air pollution, less mental health stress from traffic, lower emissions, and higher productivity. The point isn't that vehicle routing is "a marketplace" in the narrow sense. It's that many collective coordination problems become tractable once agents can make, price, and enforce commitments.

This isn't a compute problem. It's a coordination problem solved through market mechanisms. The same primitives handle both.

Coalition formation with energy

Compute nodes forming coalitions can be modeled as a game where nodes gain some benefit (higher expected return, lower variance in returns) by joining together. These nodes might also want to incorporate energy into their decision-making.

Consider the objective function: minimize carbon emissions subject to all compute jobs getting done and all homes getting the energy they require. This needs consensus-based load balancing across compute providers, energy suppliers, and consumer demand.
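A deliberately tiny version of that objective can be sketched as a greedy assignment. Provider names, capacities, and carbon intensities are all invented; a real system would optimize jointly over compute jobs and household energy demand rather than greedily.

```python
# Toy objective: all jobs must run; assign each job to the provider with
# the lowest carbon intensity that still has spare capacity.

providers = {"hydro": {"gCO2_per_job": 20,  "capacity": 2},
             "solar": {"gCO2_per_job": 35,  "capacity": 2},
             "gas":   {"gCO2_per_job": 450, "capacity": 5}}

def assign(jobs: list[str]) -> tuple[dict, int]:
    plan, total = {}, 0
    for job in jobs:
        # Greedy: cleanest provider with remaining capacity.
        name = min((p for p in providers if providers[p]["capacity"] > 0),
                   key=lambda p: providers[p]["gCO2_per_job"])
        providers[name]["capacity"] -= 1
        plan[job] = name
        total += providers[name]["gCO2_per_job"]
    return plan, total

plan, grams = assign(["j1", "j2", "j3"])
print(plan, grams)   # j1, j2 -> hydro; j3 -> solar; 75 gCO2 total
```

The coalition-formation question is what this toy omits: whether hydro and solar nodes do better by pooling capacity and bidding together, given the resources of everyone else in the environment.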

Coalition formation depends on each node's resources, the resources of prospective coalition members, and the resources of other nodes in the environment. Nodes need to communicate with each other and have a sense of their expected payoff. The reinforcement learning primitives enable them to learn from their environments.

Multi-agent debate, an emerging trend in AI, extends this to LLMs. The literature on resource- and skill-based games has focused on traditional scenarios: corporate environments where workers and departments have different resources and skill sets. The literature can be extended to consider LLMs as resources, differentiated by their architectures, weights, and access to information.

What agents create

The future is autonomous. Most economic activity in the future will be undertaken by machines, yet most of our marketplaces are built for humans. The permissionless aspect of blockchain-based markets will dramatically expand the user base, and autonomous agents will be the primary users.

When agents are the users, the artifacts of markets change.

First, negotiation becomes native. Agents will negotiate with each other on behalf of human owners and institutions, and increasingly for their own objectives as they manage infrastructure directly. For many real-world assets, that negotiation will look like fixed schemas and protocols (JSON messages, typed offers, explicit constraints). For intents-based deals, language-model-based agents will negotiate in richer spaces that aren't fixed ahead of time.

Second, communication compresses. New languages emerging from agent interactions might sound strange, but it's already happened in controlled settings: when neural networks are trained to communicate efficiently with each other, they develop protocols that optimize for bandwidth and coordination, not human interpretability.

Third, assets generalize. Synthetic assets will emerge as agents negotiate over real-world assets and encounter their limitations, just as humans did. Derivatives, futures, and options emerged to manage risk and express complex economic intentions. Agents facing the same constraints will create their own abstractions.

The trajectory of blockchains and cryptocurrencies is trending towards multi-agent systems operating in multi-token economies. It is not only the tokenization of real-world assets that pushes this trend, but on a larger scale, the separation of concerns facilitated by having multiple tokens representing different asset types.

The cybernetic system

Generalized marketplaces provide agents with the substrate on which to trade the very physical components that constitute them. The reinforcement learning primitives discussed in our first post provide agents with an understanding of what they are and what their role is.

Combining this self-awareness and the ability to engage in economic interactions over the physical components they are composed of provides a foundation for a decentralized collective machine intelligence which can guide itself through market forces towards a self-improving cybernetic system.

This is the vision: not a single AI, but a distributed system of agents coordinating through markets. Each agent optimizes its own utility. The market aggregates those individual optimizations into collective behavior. The system improves as agents learn, as markets mature, and as better mechanisms are discovered.

Out of these multi-agent systems will emerge collective intelligence capable of achieving far more than is possible with current technologies. The steady improvements in AI models, computational power, and networking are such that humanity is on the cusp of decades of theory stowed away in AI and game theory journals being rapidly implemented in practice.

What this means for you

If you're building applications that touch multiple resource types (compute + storage, bandwidth + latency, energy + carbon), the integrated architecture eliminates the integration burden. One set of contracts, one collateral framework, one verification layer.

If you're researching multi-agent coordination, compositional game theory is a useful lens for modular mechanism design: define interfaces, compose parts, and be explicit about assumptions.

If you're thinking about the long term, the trajectory is clear: more agents, more markets, more composition. The infrastructure being built now determines what becomes possible later.

Next in the series

We've covered the full vision: from primitives to verification to adversarial design to economics to market composition. In our final post, we'll bring it back to the practical: what can you build today?



Tokenizing Idle Compute

Idle computing power sits wasted everywhere. Application-specific token markets and retroactive rewards can turn latent capacity into tradeable assets.

February 25, 2026 · Levi Rybalov

Eighteen Months and a Million Dollars: Part 7

Excerpts from the Whitepaper

Key Takeaways

  • Idle computing power sits wasted everywhere: data centers, desktops, laptops, phones, IoT devices
  • SETI@home, Folding@home, and BOINC showed volunteers would contribute compute for free
  • Volunteer compute can create real value, but contributors typically receive points or recognition, not economic upside
  • Application-specific token markets can finance compute in real time: contributors can sell into stablecoins
  • Immutable records enable retroactive rewards: reward contributors after success materializes
  • The IP tension: some outputs need to remain private. Secure enclaves enable verifiable credit with confidentiality.

Trash waiting to become treasure

The world is full of idle compute. Data centers run underutilized. Desktops sit idle overnight. Phones and edge devices spend most of their time waiting.

Most of that capacity is already provisioned and paid for. When it sits idle, it's economic trash: cycles that could be doing useful work. The right markets turn that trash into treasure.

Volunteer computing proved that some of this capacity can be mobilized. It also revealed the ceiling: once you've saturated the set of people willing to donate for free, the supply stops growing, and it still represents only a small fraction of the idle compute that exists.

We've spent the past several posts on mechanism design: Alkahest for exchange and commitment, verification for trust, collateral markets for economic guarantees, adversarial training for robustness. Now we can ask the economic question those mechanisms are in service of:

How do you create markets for work that has no buyer yet, but might be valuable later?

In what follows, we'll use BOINC as a precedent, name the value capture problem, and then outline two complementary mechanisms: (1) application-specific token markets for real-time liquidity, and (2) retroactive rewards backed by immutable contribution records and privacy-preserving attribution.

What BOINC taught us

The history of grid-based distributed scientific computing goes back at least to the mid-1990s, including distributed prime-search projects like GIMPS, and later to more mainstream platforms like SETI@home and Folding@home.

Both of the larger platforms were launched to connect scientists who had embarrassingly parallel computational workloads with volunteers all around the world. Volunteers offered their own computers for free in exchange for participating in large scientific projects and competing on leaderboards for non-transferable Web2 points.

In the case of SETI@home, the computations helped the search for extraterrestrial life. In the case of Folding@home, the computations were related to protein dynamics. Out of SETI@home grew BOINC, the Berkeley Open Infrastructure for Network Computing, which generalized the project creation and job generation mechanisms to other scientific computing projects, tens of which are in operation today.

The lesson from BOINC is clear: people will contribute compute for free if they care about the project. They'll download software, configure their machines, and donate cycles in exchange for leaderboard points and the satisfaction of contributing to science. No money changed hands, just participation in something larger than themselves.

This model worked. Millions of computers contributed billions of hours of compute time. But it hits an upper limit: participation saturates once you've reached the set of people willing to donate for free, and even at its peak it covers only a small fraction of the idle compute that exists globally. If you want to unlock orders of magnitude more capacity, especially for work that doesn't have a built-in volunteer audience, you need incentives that look like markets, not charity.

The value capture problem

Volunteer compute can power serious science: papers, datasets, and occasionally downstream intellectual property. Contributors might show up in acknowledgements or consortium-style authorship, but the compensation is rarely economic. With proper tracking and incentives, those who contributed computational resources could have received a portion of the downstream value.

In practice, they generally don't. The compute providers get points on a leaderboard and, in some cases, acknowledgements or consortium-style authorship. The value they help create still mostly accrues elsewhere: institutions, companies, and shareholders. The infrastructure that enabled the discovery isn't part of the value capture.

This isn't a complaint about those specific projects. They did exactly what they were designed to do: harness volunteer compute for scientific problems. The problem is the design. When you can't track contributions in a way that enables later compensation, you can't create markets for speculative work. You're limited to charity.

Futures markets for compute

These new markets aim to answer the following question: are there computations for which nobody is willing to pay now, but that somebody might be willing to pay for later?

There are two primary approaches.

First is real-time tokenization: an application-specific token that contributors earn as they provide compute. Because the token trades in open markets, contributors can sell into stablecoins to cover near-term costs like electricity and hardware. The market price becomes an imperfect but useful signal of expected future value. People can speculate on which applications will produce real outcomes. That speculation isn't just commentary: it finances compute.

Second is retroactive rewards: keep an auditable record of who contributed what, and pay out later if value materializes. This ties compensation to real outcomes and avoids forcing early speculation into a single price signal. The tradeoff is timing: retroactive rewards do not directly solve cash flow unless contributors can borrow against expected future payouts.

In practice, the two can be combined. A liquid token can be a tradable claim on future retroactive rewards, with the record as the source of truth for who earned what.

At a high level:

  1. An application defines a claim on future value and the rules for later compensation.
  2. Contributors who provide compute receive claims in proportion to their contributions.
  3. Those claims trade in an application-specific market, so contributors can realize value early and others can fund work they believe will pay off.
  4. If real value materializes, claimholders can be compensated through retroactive rewards, licensing revenue, or whatever payout the application specified up front.

If the application produces nothing valuable, the claims are worthless. If it leads to a breakthrough, claimholders share in the upside.
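Steps 1 through 4 reduce to a small amount of accounting. This sketch assumes a trusted record for clarity; in practice the contribution ledger would be the immutable on-chain record described below, and the names here are invented.

```python
# Contributions become proportional claims; a later realized reward pool
# is split pro rata over the recorded claims.

contributions: dict[str, int] = {}   # node -> units of compute contributed

def record(node: str, units: int) -> None:
    """Step 2: contributors accrue claims in proportion to contributions."""
    contributions[node] = contributions.get(node, 0) + units

def retroactive_payout(pool: int) -> dict[str, int]:
    """Step 4: split a realized reward pool pro rata (integer division)."""
    total = sum(contributions.values())
    return {node: pool * units // total
            for node, units in contributions.items()}

record("alice", 30)
record("bob", 10)
print(retroactive_payout(1000))   # {'alice': 750, 'bob': 250}
```

Making the claims themselves tradeable (step 3) is what turns this accounting into a market: a contributor who needs cash now can sell to someone willing to wait for the payout.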

This is not purely theoretical. In the early days of cryptocurrencies, a number of protocols attempted to apply Bitcoin's reward mechanism to scientific computing. Gridcoin, Curecoin, Pinkcoin, and Primecoin were among these early science-focused computing coins. However, blockchains were in their infancy at the time, and while these cryptocurrencies accomplished much, they fell short of their ultimate visions.

The infrastructure has matured since then. Arkhai's native tracking of jobs and their artifacts can be paired with retroactive reward mechanisms. A verifiable record of what was computed, by whom, and under what conditions enables arbitrary reward structures over those computations.

The immutable record

Retroactive rewards require attribution. Attribution requires a record.

Computational reproducibility gives network participants a degree of trust that computations performed through the protocol have been done correctly, since misbehavior carries economic consequences. A scientific record of which nodes requested the computations, which nodes performed them, and what the inputs and outputs were enables incentive mechanisms that can reward (or penalize) different parts of the scientific process.

This record is the foundation for retroactive rewards. When an application succeeds, the reward mechanism can look back at who contributed what. The trail is immutable. Attribution is verifiable. Payment can flow back to contributors proportionally.

The privacy tension

There's a complication.

The creation of intellectual property is complicated by the fact that storing all job inputs and outputs publicly may be undesirable, or not even legal, for some forms of IP.

A drug discovery computation might produce proprietary molecules. A climate model might use licensed satellite data. A machine learning training run might involve confidential datasets. Making everything public isn't always legal, and it isn't always desirable.

One option is to allow anything that can be public to be public and make the rest privacy-preserving using secure enclaves, with verifiable credit attribution to the relevant contributors.

The secure enclave runs the computation. The inputs and outputs stay confidential. But the enclave produces an attestation: this node contributed this much compute to this job. The attestation is public even when the data isn't. Credit attribution works, retroactive rewards work, and the actual intellectual property remains protected.
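The public/private split in such an attestation can be sketched with hash commitments. This is a simplification: real enclave attestations are signed hardware quotes, and every field name below is invented for illustration.

```python
import hashlib

def attest(node: str, job: str, units: int,
           secret_input: bytes, secret_output: bytes) -> dict:
    """Public credit record: commits to the data by hash, withholds the data."""
    h = lambda b: hashlib.sha256(b).hexdigest()
    return {
        "node": node, "job": job, "compute_units": units,
        "input_commitment": h(secret_input),     # hash only, data withheld
        "output_commitment": h(secret_output),
    }

a = attest("node-7", "drug-screen-42", 1200,
           b"proprietary molecule set", b"candidate list")

# The public record shows who contributed how much, but not the molecules.
assert "molecule" not in str(list(a.values()))
print(a["node"], a["compute_units"])
```

Retroactive rewards only need the `compute_units` field; the commitments exist so that, if a dispute ever requires it, the hidden data can be revealed selectively and checked against the hashes.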

This tension between transparency and privacy runs through everything we're building. The market needs transparency to function: prices, contributions, outcomes. The applications need privacy to exist: confidential data, proprietary methods, competitive advantage. The architecture has to support both.

What this means for you

If you're running large-scale scientific computations, application-specific markets provide an alternative to grants, university clusters, or volunteer compute. Contributors can receive tradable claims that gain value if the work succeeds. This changes the incentive structure: you're not asking for charity, you're offering participation in potential upside.

If you're building applications that generate intellectual property, the hybrid approach (public attribution, private data) lets you participate in decentralized compute markets without exposing your competitive advantage.

If you're thinking about retroactive rewards, Arkhai's immutable record provides the attribution layer. When it's time to fund past contributions, the record exists to determine who contributed what.

Next in the series

We've covered the full stack: primitives, verification, collateral, adversarial design, and now the economics of idle compute. In our next post, we'll zoom out to the vision level. What does it look like when compute, energy, storage, and bandwidth all trade on the same infrastructure? What emerges from compositional game theory applied to market design?



Designing for Adversaries

Analytic approaches to verification make assumptions that break in practice. Arkhai's research approach: train agents to cheat, then iterate mechanisms to stop them.

February 23, 2026 · Levi Rybalov

Eighteen Months and a Million Dollars: Part 6

Excerpts from the Whitepaper

Key Takeaways

  • Analytic approaches to verification-via-replication make assumptions that break down in practice
  • Much of the academic literature (and many public protocols it inspired) simplifies away real-world complexity: hardware heterogeneity, network topologies, latencies, failures, repeated games, collusion
  • Arkhai's research approach: train agents to cheat, then iterate mechanisms to stop them
  • Multi-agent RL trains attackers and defenders; reward-design ideas (including inverse RL) help search for incentives that elicit desired behavior
  • Honest about limitations: multi-agent training is not guaranteed to converge, and empirical robustness is not a proof
  • Three possible outcomes: defense works, arms race, or learning that a given mechanism class is insufficient under the threat model. All three are valuable.

Note: The approach described in this post originated as research, and most of it has not yet been implemented in our production systems.

Why analytic approaches fail

Mathematical proofs are reassuring, but the real world is unforgiving.

In our last post, we covered collateral markets: how the collateral multiplier solves the unknown-cost problem, and how a series of credible commitments handles different types of risk. The mechanism layer is in place.

But mechanisms need to survive contact with adversaries. How do you know a collateral scheme actually prevents cheating? How do you know a verification protocol can't be gamed by colluding nodes?

The traditional approach is to mathematically prove that honest behavior is rational and leads to Nash equilibrium. The problem is that analytic results depend on tractable models, and tractable models depend on simplifications.

In our reading, much of the academic literature on verification-via-replication (and the public protocols it has inspired) does not fully account for the operational complexity of real distributed computing networks, including:

  • different hardware configurations
  • network topologies
  • variable latencies
  • node failures
  • repeated games
  • collusion

Many papers and protocols make assumptions about the environment, action spaces, or failure models that, once relaxed, weaken or eliminate the original theoretical guarantees.

This isn't a criticism of the research. It's the nature of the problem. Real distributed systems violate simplifying assumptions constantly.

For this reason, Arkhai explores an alternative approach that forgoes analytic guarantees in favor of empirical evaluation.

Game-theoretic white-hat hacking

If you can't prove your mechanism is secure, you can test it.

Arkhai's research approach to verification-via-replication is to train agents to maximize their utility, including cheating and/or colluding if necessary. Once these nodes are strong enough to find real weaknesses, mechanisms can be iterated and evaluated against the strategies the agents actually discover.

In practice, this starts in simulation. You build a digital twin of the protocol environment and let agents explore the strategy space safely before you ever trust the results in a live network.

Operationally, this becomes a loop:

  1. Train the best attackers you can.
  2. Observe the strategies they discover.
  3. Update the mechanism (incentives, penalties, verification rules).
  4. Retrain and repeat.
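The steps above can be compressed into a toy numeric model. Everything here is a stand-in: the utility function, the fixed detection probability, and the grid search replace actual RL training, and all parameter values are hypothetical. But the loop structure (attack, observe, harden, repeat) is the same:

```python
def attacker_utility(cheat_rate, reward, penalty, detect_prob):
    """Expected utility of cheating with probability cheat_rate: cheating
    pays `reward`, but with probability `detect_prob` the deposit is slashed."""
    return cheat_rate * (reward - detect_prob * penalty)

def best_response(reward, penalty, detect_prob):
    """Steps 1-2: 'train' the attacker -- here, grid-search the best cheat rate."""
    rates = [i / 100 for i in range(101)]
    return max(rates, key=lambda r: attacker_utility(r, reward, penalty, detect_prob))

def harden(reward, penalty, detect_prob, step=1.0, rounds=50):
    """Steps 3-4: raise the slashing penalty until the best attack is honesty."""
    for _ in range(rounds):
        if best_response(reward, penalty, detect_prob) == 0.0:
            return penalty  # attacker's best strategy is not to cheat
        penalty += step
    return penalty

# With a cheating reward of 10 and a 25% detection rate, the penalty must
# reach reward / detect_prob = 40 before cheating stops paying.
print(harden(reward=10.0, penalty=0.0, detect_prob=0.25))
```

In a real system the attacker would be a learned policy over protocol actions, not a one-dimensional cheat rate, but the inversion of the workflow (train the attacker first, then patch) is identical.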

This inverts the typical workflow. Instead of designing a mechanism and hoping it's robust, you start by training the attackers. You let them find the weaknesses. You watch how they exploit the system. Then you design countermeasures and test whether the attackers can adapt.

The approach treats protocol design as adversarial competition. Red team versus blue team. The red team is trained to find exploits. The blue team is trained to close them. The protocol improves through iteration.

This requires the same agent-based primitives we described in our first post: environments (the protocol state), actions (cheating strategies, honest behavior, coordination with other nodes), transitions (how the protocol responds), and rewards (the utility function the attacker is trying to maximize). States, actions, transitions, and rewards are treated as first-class citizens in the architecture.

Multi-agent inverse reinforcement learning

The attacker/defender training loop draws heavily on multi-agent reinforcement learning, where each agent's learning changes the environment for the others.

Standard reinforcement learning asks: given a reward structure, what actions maximize reward? Inverse reinforcement learning asks the opposite: given observed behavior (typically demonstrations), what reward structure could have produced it?

Multi-agent inverse reinforcement learning extends this to groups of agents. It finds reward structures that elicit particular actions from collections of agents: for example, not cheating and not colluding in verification-via-replication-based distributed computing networks.
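In a toy form, the inverse direction looks like this: from observed behavior, bound the reward structure that could have produced it. Real MAIRL works over demonstrations and learned reward functions across many agents; this sketch assumes a single risk-neutral agent and known detection/penalty parameters, all of which are illustrative:

```python
def infer_reward_bound(observed_cheat_rate, penalty, detect_prob):
    """Given observed behavior, bound the reward structure consistent with it.
    A risk-neutral node cheats only if reward > detect_prob * penalty, so
    observed cheating implies a lower bound on the perceived cheating reward,
    and observed honesty implies an upper bound."""
    threshold = detect_prob * penalty
    if observed_cheat_rate > 0:
        return ("reward_at_least", threshold)
    return ("reward_at_most", threshold)

# Nodes cheat 30% of the time under a penalty of 40 and 25% detection:
# whatever they gain from cheating is worth at least 10 to them.
print(infer_reward_bound(0.3, penalty=40.0, detect_prob=0.25))
```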

This is mechanism design through machine learning. Rather than deriving incentive structures analytically, you explore them empirically through adversarial testing. Trained attackers show you where the vulnerabilities are. The reward-design loop suggests what incentive changes might close those vulnerabilities.

These tools are not a silver bullet, but they are a promising way to explore incentive landscapes in settings where analytic modeling breaks down. And the space of incentive structures goes well beyond verifiable computing. Coalition formation, resource markets, negotiation protocols: anywhere agents have overlapping interests (sometimes cooperative, sometimes competitive, sometimes both), multi-agent RL and reward-learning can be useful.

Convergence

From a theoretical perspective, multi-agent learning dynamics in such environments are not guaranteed to converge to stable equilibria. There are convergence results in restricted settings, but there is no general guarantee that adversarial training will yield mechanisms robust to manipulation.

Multi-agent training can be unstable. Agents adapt to each other. The best strategy for attacker A depends on what defender B does, which in turn depends on what A does. The feedback loops can cycle without settling. Convergence proofs that work for single-agent RL do not extend straightforwardly to multi-agent settings, in part because each agent faces a moving target.

This doesn't mean the approach is useless. We're not claiming to have solved adversarial mechanism design. We're claiming that this method provides empirical evidence about mechanism robustness, with clear acknowledgment that the evidence is not a guarantee.

Three outcomes are possible.

First, the adversarial training finds attacks, the mechanism updates successfully defend against them, and further adversarial training fails to find new attacks. This is the best case: you have empirical confidence that the mechanism is robust against the class of attacks your training can discover.

Second, the adversarial training and defensive updates enter an arms race. Each side keeps adapting. This is less satisfying, but still useful: you've learned that the mechanism requires ongoing maintenance, and you have a process for that maintenance.

Third, the adversarial training finds attacks that we can't defend against within the mechanism class we're exploring. That doesn't imply that no mechanism can exist; it suggests that our current assumptions, primitives, or threat model are insufficient. This outcome is still valuable: it tells you to change the design space (different verification, different trust assumptions, different market structure), rather than endlessly tuning parameters.

All three outcomes produce useful knowledge. That's the advantage of empirical methods: they don't just tell you whether you succeeded, they tell you what kind of problem you're dealing with.

What this means for you

If you're building protocols that need adversarial resistance, consider what guarantees you can actually provide. Formal proofs are reassuring, but they rest on assumptions that may not hold in production. Empirical testing doesn't give you proofs, but it gives you evidence from realistic conditions.

If you're integrating with Arkhai's infrastructure, expect adversarial evaluation to be part of how mechanisms are validated over time. Formal analysis matters, but the goal is to complement it with empirical evidence from simulated adversaries.

If you're researching multi-agent systems, multi-agent RL and reward-learning tools are underutilized in protocol design. Most mechanism design still happens analytically. There's room for empirical methods to become standard practice.

Next in the series

We've covered the full mechanism stack: Alkahest for exchange and commitment, verification for trust, collateral markets for economic guarantees, and adversarial design for robustness testing.

In our next post, we'll step back from mechanisms and look at economics. Where does idle compute fit into this picture? What happens when you tokenize latent capacity and create futures markets for work that has no buyer yet?


Arkhai is building machine-actionable marketplace infrastructure. If you're working on problems that intersect with compute markets, agent coordination, or decentralized infrastructure, we'd like to hear from you.

Collateral Markets and Programmable Escrow

How much collateral do you put up when the computational cost isn't known ahead of time? The collateral multiplier and a series of credible commitments replace case-specific escrow with one general pattern.

February 20, 2026 · Levi Rybalov

Eighteen Months and a Million Dollars: Part 5

Excerpts from the Whitepaper

Key Takeaways

  • Collateralization enables trustless protocols by leveraging credible commitments
  • The core problem for batch jobs: how much collateral do you put up when the computational cost isn't known ahead of time?
  • The collateral multiplier: commit to a multiplier M, deposit (price x M) after job completion
  • A series of credible commitments (timeout, payment, cheating) replaces case-specific escrow with one general pattern
  • Collateral markets let agents negotiate over collateral itself, not just jobs
  • The combination of credible commitments and collateral markets forms the foundation of generic, machine-actionable marketplaces

The problem with "just put up collateral"

Collateral is the teeth behind trust. Verification tells you what happened, and collateral makes it matter.

In our last post, we looked at verification: how do you trust computation done by a machine you don't control? We covered three categories of verifiable computing, the consensus spectrum, and the honest answer that every approach involves tradeoffs.

But verification doesn't exist in a vacuum. Verification needs enforcement. If a compute node returns a wrong result, something must happen. In most protocols, that something is collateral slashing: the node put up a deposit, and the deposit gets taken away if the node cheats.

Collateralization facilitates trustless protocols by leveraging the credible commitments enabled by blockchains. It's used whenever protocol actors want an incentive for counterparties to behave honestly. In blockchains secured by Proof-of-Stake, nodes securing the network deposit collateral to ensure the blocks they propose are legitimate. In compute marketplaces, a client that wants assurance some task will be done correctly may require the compute node to deposit collateral. Likewise, the compute node may require the client to deposit collateral ensuring it will be paid if it does the work correctly.

In this post, we'll do three things: (1) show why fixed upfront collateral is an unsatisfactory solution when costs are unknown a priori, (2) introduce the collateral multiplier as a market parameter, and (3) show how multiple risk types become a series of credible commitments expressed in the same escrow abstraction.

When the cost is unknown

Imagine you're a compute node. A potential client wants to run a machine learning training job. How much collateral should you put up? The job might converge in ten minutes. It might run for six hours. You won't know until it's done.

Note: This framing is about batch-style jobs that settle on completion. Time-metered payments are often more straightforward, because the protocol can settle periodically instead of estimating total cost upfront.

One major problem in many verification-via-replication protocols is the issue of how much collateral to put up for a job where the computational cost is not known ahead of time.

This is more common than you'd think. Running a machine learning training job might take minutes or hours depending on convergence. A scientific simulation might complete quickly or hit edge cases that extend runtime. Even a straightforward data processing pipeline can encounter unexpected data volumes.

If you require fixed upfront collateral, you face a dilemma:

  • Set it too low, and nodes have insufficient incentive to complete the job honestly: the reward from cheating exceeds the cost of losing their deposit.
  • Set it too high, and you exclude smaller nodes from participating, concentrating the market among well-capitalized operators.

Neither option leads to a healthy marketplace. Fixed upfront collateral doesn't work when computational costs are variable.

The collateral multiplier

This problem can be solved with collateral markets.

Rather than depositing a fixed amount of collateral ahead of time (or topping up opportunistically), a compute node commits to a collateral multiplier at the time of deal agreement. After a job is completed, the compute node deposits into escrow the amount it will charge the client times the collateral multiplier.

In other words: collateral becomes a ratio, not a guess. If the job turns out to be cheap, the absolute collateral is low. If it's expensive, the collateral scales proportionally.

The collateral multiplier solves the issue of knowing ahead of time how much collateral is required. It enables at least primitive forms of optimistic verification for programs where the computational cost or runtime is not known in advance. And it creates a market for collateral over which agents can negotiate.

That last point is significant. The multiplier isn't fixed by the protocol. It's a parameter that buyer and seller negotiate. A client who needs strong assurance might demand a high multiplier. A node confident in its reliability might offer a low one. The multiplier becomes a signal of trust, and the market determines the price of that trust.
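The settlement rule itself is one line; what matters is that the multiplier is agreed at deal time, while the amount escrowed is computed only after the job's true cost is known. A sketch with hypothetical numbers:

```python
def settle_collateral(metered_price, multiplier):
    """Collateral owed after job completion: the node escrows price x M.
    `metered_price` is what the node charges once the actual cost is known;
    `multiplier` was committed to when the deal was agreed."""
    return metered_price * multiplier

# Same negotiated multiplier, very different job sizes:
# collateral scales with realized cost instead of being guessed upfront.
print(settle_collateral(metered_price=2.0, multiplier=1.5))    # cheap job
print(settle_collateral(metered_price=600.0, multiplier=1.5))  # long job
```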

A series of credible commitments

The collateral multiplier handles one problem: unknown costs. But real marketplace interactions involve multiple types of risk, each requiring its own form of collateral.

Consider a verification-via-replication protocol. There are at least three types of collateral at play:

  • Timeout collateral ensures the node completes the job within an agreed timeframe. It's deposited when the deal begins and refunded if the job finishes on time. If the node fails to deliver, the timeout collateral is slashed.

  • Payment collateral ensures the client will pay for completed work. The client deposits funds that transfer to the node upon successful verification. If the client disputes a correct result, arbitration can resolve the claim.

  • Cheating collateral ensures the node doesn't return fabricated results. If verification reveals that the node cheated, this collateral is slashed and partially distributed to the verifier as a bounty.

Each of these collaterals is deposited at particular times, and is refunded or slashed at other times, based on events that happen on-chain. Each deal is not "one escrow", but a small program: multiple deposits, multiple conditions, and multiple possible outcomes. These sets of rules can be abstracted to a series of credible commitments, where participating parties each deposit collateral, and that collateral moves based on events.
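The "small program" framing can be written down directly. The depositor names, amounts, and event fields below are hypothetical, and the actual on-chain contracts are far richer than this toy, but the shape is the same: each commitment is a deposit plus a rule mapping events to an outcome.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Outcome(Enum):
    REFUND = auto()
    SLASH = auto()
    RELEASE = auto()

@dataclass
class Commitment:
    """One leg of a deal: who deposited, how much, and a condition that
    maps an observed event log to an outcome."""
    depositor: str
    amount: float
    resolve: callable  # events dict -> Outcome

def settle(commitments, events):
    """Evaluate every commitment in the deal against observed events."""
    return {c.depositor: (c.amount, c.resolve(events)) for c in commitments}

# Hypothetical three-collateral deal from the text.
deal = [
    Commitment("node:timeout", 5.0,
               lambda ev: Outcome.REFUND if ev["on_time"] else Outcome.SLASH),
    Commitment("client:payment", 100.0,
               lambda ev: Outcome.RELEASE if ev["verified"] else Outcome.REFUND),
    Commitment("node:cheating", 150.0,
               lambda ev: Outcome.SLASH if ev["cheated"] else Outcome.REFUND),
]
print(settle(deal, {"on_time": True, "verified": True, "cheated": False}))
```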

This is where Alkahest comes in. Rather than building custom collateral logic for each of these types, Alkahest's Statement and Validator system expresses all of them. Each collateral type is a Statement with its own conditions and validators. The timeout validator checks deadlines. The payment validator checks job completion. The cheating validator checks verification results.

Same contracts, different conditions. Same infrastructure, different markets.

Why this matters for marketplace design

Case-specific collateralization architectures are an outdated approach. Every protocol that builds its own collateral system is rebuilding the same patterns: deposit, condition, release or slash. The differences are in the specifics, not the structure.

The combination of a series of credible commitments and collateral markets dramatically enhances what's possible. You can build a compute marketplace, an energy marketplace, and a data marketplace on the same collateral infrastructure. Each uses different validators, but the underlying pattern is the same.

This is the power of Alkahest applied to a concrete problem. In Post 3, we described Alkahest as a unified abstraction for exchange and commitment. Here, you can see what that abstraction gets you in practice: a single escrow system that handles timeout penalties, payment guarantees, cheating prevention, and any other collateral type a marketplace might need, all through programmable conditions.

What this means for you

If you're building a marketplace that requires any form of collateral, you don't need to design your own escrow state machine. Alkahest's escrow and arbitration can express your collateral types. Define the deposit conditions, the verification events, and the outcome rules. The infrastructure handles the rest.

If you're building a compute protocol specifically, the collateral multiplier pattern solves the variable-cost problem. Nodes commit to a multiplier. Jobs settle based on actual cost. The market sets the price of trust through the multiplier negotiation.

If you're thinking about market design more broadly, collateral markets represent a recursive pattern. Agents negotiate over jobs. Jobs require collateral. Collateral levels are negotiated. This creates a market within a market, which is exactly the kind of composability that generic primitives enable.

Next in the series

We've built up the mechanism layer: Alkahest for exchange and commitment, verification for trust, collateral markets for economic guarantees. But how do you test these mechanisms against adversaries?

In our next post, we'll look at Arkhai's approach to adversarial design: training agents to cheat so we can stop them. Game-theoretic white-hat hacking, multi-agent inverse reinforcement learning, and the honest acknowledgment that convergence isn't guaranteed.


Arkhai is building machine-actionable marketplace infrastructure. If you're working on problems that intersect with compute markets, agent coordination, or decentralized infrastructure, we'd like to hear from you.

Trust Without Trustees

How do you trust computation done by a machine you don't control? Three categories of verifiable computing, the consensus spectrum, and the honest answer that every approach involves tradeoffs.

February 18, 2026 · Levi Rybalov

Eighteen Months and a Million Dollars: Part 4

Excerpts from the Whitepaper

Key Takeaways

  • A central question in decentralized compute: how do you trust computation done by a machine you don't control?
  • Three categories of verifiable computing: cryptographic methods, secure enclaves (TEEs), and replication
  • Local vs global consensus is a spectrum, not a binary choice
  • In replication-based verification, local consensus can, in principle, drive the number of replications needed down to 1
  • Approximate agreement can handle non-deterministic computations
  • Every verification method is some combination of slow, inefficient, expensive, or insecure: the honest answer is tradeoffs

Verifying compute you didn't run

Outsourcing compute is easy. Outsourcing trust is not.

In our last post, we introduced Alkahest and the two-primitive architecture. Alkahest handles exchange and commitment. Agent-to-agent negotiation handles matchmaking. But there's a question we've been circling since the first post in this series: how do you trust computation done by a machine you don't control?

In a trustless, permissionless distributed computing marketplace, verifiability becomes central. How do clients know that they are getting the correct results returned to them, if not through the massive replication of Byzantine Fault Tolerance traditionally offered by blockchains?

This is the subject of a subfield of computer science known as verifiable computing, and it's arguably one of the hardest problems in decentralized infrastructure. The constraint is clear: verifying the result should have less overhead than computing it in the first place. If verification costs more than doing the work yourself, there's no point in outsourcing it.

Different applications need different trust levels. A financial transaction might require cryptographic certainty. A scientific simulation might accept statistical confidence. A rendering job might only need visual inspection.

And many customers don't want verification at all; they want a counterparty they trust. This is one key reason non-Web3 neoclouds have been able to win: they start with a narrow set of customer needs and rely on reputation, contracts, and recourse instead of "trustless" verification.

One-size-fits-all verification doesn't exist, and pretending otherwise is how protocols end up building something nobody can actually use.

In what follows, we'll break verification into three broad categories, then frame consensus as a spectrum (not a binary), then look at what happens when computations aren't deterministic, and finally be explicit about the tradeoffs.

Three categories

There are three main categories of verifiable computing: cryptographic methods, secure enclaves (TEEs), and replication-based methods.

First, cryptographic methods. These rely on mathematics for their security. Zero-knowledge proofs let a prover convince a verifier that a computation was done correctly without revealing private inputs. Multi-party computation allows joint computation without exposing individual inputs. Fully homomorphic encryption enables computation over encrypted data. These methods provide the strongest guarantees, but they are also among the slowest and most expensive. For many practical workloads today, cryptographic verification costs far more than the computation itself.

Second, secure enclaves (TEEs). These are isolated environments where code and data are insulated from the rest of the system, including the operating system. The goal is to maintain both confidentiality and integrity. TEEs have a promising future, especially as exploits continue to be patched, but known exploits still limit their applicability.

Third, replication-based methods (often called "optimistic" verification). These rely on recomputing the work and checking whether results match. This is the simplest approach to understand and implement, but it comes with extra computational cost. With proper incentive design, the overhead of recomputing can be reduced while still enabling network scaling. Replication often requires game-theoretic mechanisms like collateral slashing and reputation layers to counter cheating.

The consensus spectrum

The verification question is tied to a deeper question about consensus: how many entities need to agree that a computation was done correctly?

At one end of the spectrum is global consensus. Blockchains traditionally achieve consensus through massive replication across all participating nodes. This provides strong guarantees: Byzantine Fault Tolerance ensures the network reaches agreement even when some nodes act maliciously. But this level of replication becomes too costly when the corresponding level of security is not needed.

At the other end is local consensus, which at a minimum requires only a single agent to be convinced of the state. For a two-sided marketplace, only the client really needs to be convinced that a computation was done correctly. But these computations are not done in isolation. The interrelation between clients choosing nodes, and the need for new clients to arrive assured that cheating is disincentivized, creates a global game that, while not requiring global consensus in the traditional sense, emulates it.

Between these extremes are hybrid designs. Local versus global consensus is not a binary choice but a spectrum. The middle involves agreement among a subset of nodes, perhaps within a specific region or cluster, which can lead to faster and more efficient decision-making through reduced communication overhead. Systems can balance speed, efficiency, and security based on their specific requirements.

One of the primary benefits of local consensus, in the context of replication-based verification, is that market incentives can, in principle, be structured to drive the total number of replications needed for verifiability down to one. This minimizes computational cost, financial cost, and energy consumed. Getting to a single replication is the target: it means every unit of compute goes toward useful work, not redundant checking.
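A back-of-the-envelope model shows how incentives shrink replication, assuming risk-neutral nodes, a fixed gain from cheating, and random spot-check audits. All parameters here are hypothetical:

```python
def min_audit_rate(cheat_gain, slash_amount):
    """Smallest spot-check probability that makes cheating unprofitable:
    cheating pays only if gain > p_audit * slash."""
    return cheat_gain / slash_amount

def expected_replications(audit_rate):
    """Every job runs once; a random fraction is recomputed as an audit."""
    return 1 + audit_rate

# With a cheating gain of 10 and slashable collateral of 400, auditing
# 2.5% of jobs suffices, so each job costs ~1.025 executions on average,
# far below the 2x of naive full replication.
p = min_audit_rate(cheat_gain=10.0, slash_amount=400.0)
print(p)
print(expected_replications(p))
```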

When computations aren't deterministic

Note: There has been significant research and practical progress on this topic since the whitepaper was completed in summer 2024. This section primarily summarizes the whitepaper's framing.

Replication-based verification sounds straightforward: run the job twice, compare results. But this assumes determinism. If the same program on two different machines produces exactly the same output, the comparison is trivial.

What happens when the computations are not deterministic? Neural network inference on different hardware can produce different results. Floating point arithmetic varies across architectures. Parallel workloads with race conditions produce different orderings.

Once outputs differ, the real question becomes: what does "match" mean?

One option is approximate agreement: if the verification yields a result close enough to the original, the result is considered valid. BOINC (the Berkeley Open Infrastructure for Network Computing) has been using approximate agreement for a long time. The approach works, but it requires an application-specific distance measure, which introduces developer overhead.

Approximate agreement might apply to some neural networks. In large language models, it might be possible to measure the distance between outputs before decoding. But slightly different outputs before decoding can still lead to large differences after decoding, which leaves room for abuse.
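A minimal sketch of an approximate-agreement check, using relative Euclidean distance as the application-specific measure. Both the metric and the tolerance are assumptions the developer must choose; that choice is exactly the overhead the text describes:

```python
import math

def approx_agree(a, b, rel_tol):
    """Two runs 'match' if their relative L2 distance is within a tolerance."""
    dist = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    norm = math.sqrt(sum(x * x for x in a)) or 1.0
    return dist / norm <= rel_tol

# Two runs of a float-heavy job on different hardware:
print(approx_agree([1.0, 2.0], [1.0001, 1.9999], rel_tol=1e-3))  # agrees
print(approx_agree([1.0, 2.0], [1.5, 2.0], rel_tol=1e-3))        # does not
```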

Another option is objective-function-based evaluation. Proof-of-Work is a classic example: solving a cryptographic puzzle is hard, but verifying the solution is cheap. The proof is in the result itself. Similar approaches work for problems where the answer is easy to check even if it's hard to find.
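Proof-of-Work's asymmetry can be shown in a few lines: the search is expensive, the check is a single hash. This is a generic sketch of the idea, not any particular protocol's puzzle:

```python
import hashlib

def verify_pow(payload: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Cheap check of an expensive search: the result proves the work.
    Valid if the hash's top `difficulty_bits` bits are all zero."""
    digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

def solve_pow(payload: bytes, difficulty_bits: int) -> int:
    """Expensive search over nonces (difficulty kept tiny for illustration)."""
    nonce = 0
    while not verify_pow(payload, nonce, difficulty_bits):
        nonce += 1
    return nonce

nonce = solve_pow(b"job-result", difficulty_bits=12)   # thousands of tries...
print(verify_pow(b"job-result", nonce, difficulty_bits=12))  # ...one hash to check
```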

And sometimes the right answer is to trust the compute node and not verify at all. This doesn't count as verifiable computing, by definition; however, reputation systems have decades of research behind them, and for many applications the trust tradeoff is acceptable.

Honest about limitations

Every lock can be picked. Every verification approach has weaknesses.

  • Cryptographic methods for verifiable computing are very slow and will almost certainly remain much more expensive than bare-metal execution for many workloads.
  • Secure enclaves have exploits, and while the future is promising as patches continue, they are not ready for highly sensitive applications.
  • Optimistic verification suffers from issues of determinism, and even when those are solved or approximated away, there remain deep issues of collusion.

Verifiable computing options are, at present, some combination of slow, inefficient, expensive, or insecure.

This isn't a problem to be solved by picking the "right" method. Every method trades one thing for another. The most flexible approach is to support all three categories and let applications choose the verification that matches their risk tolerance, performance requirements, and budget.

Arkhai's design enables the incorporation of all of these methods. Alkahest's programmable conditions don't care whether the validator is checking a zero-knowledge proof, a TEE attestation, or the output of a replication check. The verification layer is modular. Swap in different validators for different trust levels.

The shortcomings of all of these approaches will decline with time and increasing use. Cryptographic methods are getting faster. TEE exploits are getting patched. Replication mechanisms are getting smarter about collusion. But waiting for a perfect solution means shipping nothing. The pragmatic choice is to build the system that supports all approaches and improves as each approach matures.

What this means for you

If you're building on decentralized compute, the verification question will define your architecture. Don't pick a verification method in the abstract. Start with your application's requirements. How sensitive is the data? How expensive is the computation? What's the cost of a wrong result?

One coarse way to think about it:

  • For low-stakes workloads like rendering, batch processing, or data transformation, replication with reputation may be sufficient.
  • For moderate-stakes workloads like scientific simulation or model training, approximate agreement or TEE-backed execution can provide stronger guarantees.
  • For high-stakes workloads like financial computation or medical inference, cryptographic methods may be worth the overhead.

These aren't mutually exclusive. A single marketplace can support all three categories, with applications selecting the trust level they need. This is what Alkahest's modular validator system enables.

Next in the series

We've covered how Alkahest handles exchange and commitment, and how different verification methods handle trust. But there's a practical problem we haven't addressed: collateral.

When a compute node takes a job, how much collateral should it deposit? What if the computational cost isn't known ahead of time? In our next post, we'll introduce collateral markets and show how programmable escrow handles the unknown.


Arkhai is building machine-actionable marketplace infrastructure. If you're working on problems that intersect with compute markets, agent coordination, or decentralized infrastructure, we'd like to hear from you.

Two Primitives for Machine-Native Markets

In implementation, the whitepaper's three primitives became two: Alkahest for escrow and arbitration, and agent-to-agent negotiation. Here's how that evolution happened and what it enables.

February 16, 2026 · Levi Rybalov

Eighteen Months and a Million Dollars: Part 3

Excerpts from the Whitepaper

Key Takeaways

  • In implementation, the whitepaper's three primitives effectively became two: Alkahest (escrow + arbitration) and agent-to-agent negotiation
  • Bundle exchange collapsed into credible commitments: lock value, validate conditions, then release (or escalate)
  • Alkahest expresses deals as Statements, validators, and escrowed value
  • Off-chain solvers, on-chain auctions, and agent-to-agent negotiation differ mainly in where matching happens and what it costs
  • We bias toward agent-to-agent negotiation because deals have terms, not just price, and we want to avoid match-making chokepoints
  • Open source, no fees, token-agnostic: exchange primitives as a public good

From three primitives to two

Architecture is destiny. The primitives you choose determine what you can build.

In this post, we'll do three things: (1) explain what happened when we tried to implement the whitepaper's primitive set, (2) define Alkahest at the interface level we use throughout the series, and (3) outline three market-making patterns, with an emphasis on why we bias toward agent-to-agent negotiation.

In our first post, we described three primitives at the core of Arkhai's architecture: the exchange of arbitrary bundles of assets, agreements modeled by a series of credible commitments, and agent-to-agent negotiation. In our second, we looked at why many existing compute marketplaces fell short of expectations. Now let's look at what we built, and what we learned along the way.

On paper, "bundle exchange" and "credible commitments" look distinct. One moves assets. The other enforces trust: collateral deposited, conditions evaluated, outcomes enforced.

In code, bundle exchange kept collapsing into the credible-commitments machinery. An exchange is just escrow plus conditions: parties lock value, a validator checks whatever needs to be checked, and funds move. When "whatever needs to be checked" isn't fully on-chain, that validator becomes an arbiter and the deal becomes an arbitration problem.

Once we had an escrow-and-arbitration system that could express a series of credible commitments, the bundle-exchange primitive wasn't really a primitive anymore. It was one configuration of the same system.

That left two primitives that mattered in practice: (1) a generalized escrow/arbitration layer for exchange-with-commitment (Alkahest), and (2) a way for agents to find and agree on deals (agent-to-agent negotiation).

The case against specialization

This realization didn't come from theory. It came from staring at the collateralization patterns across different marketplace types.

Across the protocols we studied, the structure kept reducing to the same simple state machine:

  1. Parties deposit value.
  2. Conditions are evaluated.
  3. Value is released, refunded, slashed, or escalated based on outcomes.

The specifics vary: in compute marketplaces, the condition might be job completion plus verification; in energy markets, delivery confirmation; in a simple trade, mutual consent. But the structure stays the same. The degrees of freedom are: what is deposited, what is checked, and where value flows.

Many protocols require collateralization with different types of collateral deposited, withdrawn, or slashed depending on the outcomes of some events. These sets of rules can be abstracted to a series of credible commitments, where participating parties each deposit collateral, and that collateral moves based on events.

Once we saw this, the evolution from three primitives to two became hard to avoid. Bundle exchange is a special case of credible commitments, and credible commitments collapse into one programmable escrow-and-arbitration system.

Introducing Alkahest

This unified abstraction is Alkahest: programmable escrow and arbitration with arbitrary conditions.

In the repository, we describe it plainly: a contract library and SDKs for validated peer-to-peer escrow. That's the interface we care about.

The name comes from alchemy. The alkahest was the theorized universal solvent, a substance that could dissolve anything. It's an appropriate metaphor: Alkahest dissolves bespoke exchange logic into a general-purpose primitive, and lets you recompose it into many market types. Solve et coagula.

Concretely, there are three moving parts:

  • Statement: an obligation. One party commits value, specifying who benefits, what amount is held, and under what conditions it should be released. Statements are the fundamental unit of commitment in the system.

  • Validator: the mechanism that determines when conditions are satisfied. A validator might check for manual confirmation from a designated party, data from an oracle feed, the passage of time, or any composable combination. When conditions can't be resolved automatically, validators can escalate to arbitration.

  • Escrow: the component that holds value until validators confirm conditions are met, then automatically releases funds to the appropriate party. If conditions aren't met, escrow can refund, slash, or follow whatever path the parties programmed at the start.

This is credible commitments made concrete. And bundle exchange falls out as a special case: parties lock assets, validators check conditions, and escrow releases value atomically.
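As a rough sketch of how these parts compose (hypothetical names; the real contracts and SDKs differ), validators can be modeled as predicates over a deal's context, with combinators for the "composable combination" described above:

```python
# Validators as composable predicates over a deal's context.
# All names here are illustrative, not Alkahest's actual interface.
def manual_confirmation(key):
    return lambda ctx: ctx.get(key, False)

def after_timestamp(ts):
    return lambda ctx: ctx["now"] >= ts

def all_of(*validators):
    return lambda ctx: all(v(ctx) for v in validators)

def any_of(*validators):
    return lambda ctx: any(v(ctx) for v in validators)

# A Statement: who benefits, what amount is held, and when it releases.
statement = {
    "beneficiary": "node",
    "amount": 100,
    "release_when": all_of(manual_confirmation("client_signed_off"),
                           after_timestamp(1_700_000_000)),
}

ctx = {"client_signed_off": True, "now": 1_700_000_500}
can_release = statement["release_when"](ctx)
```

Swapping validators is what turns the same escrow into a compute market, an energy market, or a simple trade.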

What two primitives enable

The two primitives are Alkahest for exchange-with-commitment (and arbitration), and agent-to-agent negotiation for matchmaking. Together, they enable a surprising range of applications.

A few immediate consequences follow:

  • The three main categories of verifiable computing can all plug into Alkahest's validator layer: check whether the computation was done correctly, release payment if yes, and slash collateral if no.

  • Storage and bandwidth can be incorporated into the same marketplace structure as compute. They are additional assets with their own validators and conditions, not separate "side protocols" developers have to stitch together.

  • Clearinghouse-style settlement becomes expressible. A party can act as a central counterparty, net positions across many trades, and settle through the same escrow and arbitration layer.

  • Peer-to-peer bartering is just another configuration. Agents can swap arbitrary bundles with conditional release, without a trusted intermediary.

  • Natural language agreements become viable. When parts of a deal can't be fully machine-verified, you can still make them machine-actionable by precommitting to arbitration (human, committee, or even AI-based), rather than pretending every condition is deterministic.

  • Agents interact with a consistent interface across market types, which makes it easier to reuse agent scaffolding (state, actions, rewards) when moving between compute, energy, storage, bandwidth, or barter.

  • Most marketplace types reduce to programming different conditions into the same contracts.

This is the power of generalization. Rather than building a different escrow for compute, another for energy, another for peer-to-peer trading, you build one escrow that can express all of them.
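The clearinghouse case is worth making concrete. A minimal netting sketch (our own illustration, not part of Alkahest) computes each party's net position across many bilateral trades, so that only net amounts need to settle through the escrow layer:

```python
from collections import defaultdict

def net_positions(trades):
    """Each trade is (payer, payee, amount).
    Returns net balance per party: positive = net receiver,
    negative = net payer. Balances always sum to zero."""
    net = defaultdict(int)
    for payer, payee, amount in trades:
        net[payer] -= amount
        net[payee] += amount
    return dict(net)

trades = [("a", "b", 100), ("b", "c", 60), ("c", "a", 50), ("b", "a", 30)]
positions = net_positions(trades)
# Gross flow is 240 units, but net settlement moves far less value.
```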

Three ways to make a market

With Alkahest handling the exchange layer, the second primitive handles how parties find each other.

The whitepaper outlines three broad methods of market-making: off-chain solvers, on-chain auctions, and agent-to-agent negotiation. Each comes with tradeoffs in trust models, efficiency, and scalability. The main difference is where matching happens.

First, off-chain solvers. Nodes send bids and asks to a solver layer that proposes matches, which are then settled on-chain. This can be fast, and implemented as a competitive set of solvers rather than a single privileged matcher. The tradeoff is that you introduce an off-chain coordination layer that can become a chokepoint for liveness, censorship resistance, and integration complexity. You also introduce a self-interested party that needs a business model. In practice, that often means extracting value from matching (fees, spreads, preferential routing), which can skew incentives and concentrate power around whoever controls order flow.

Second, on-chain auctions. Bids and asks go on-chain, and the matching rule executes under consensus. This provides strong transparency and auditability. The tradeoff is that matching inherits blockchain latency and gas costs, and strategies become more legible to everyone watching the chain.

Third, agent-to-agent negotiation. Nodes propose matches to each other directly. They can accept, reject, or counter-propose until a conclusion is reached. This avoids a central matching engine, but it requires more sophisticated agents and can take more messages to converge. The upside is that negotiation is a flexible coordination substrate. With the same underlying infrastructure, you can represent RFQs, posted-price deals, barter, and even auction-like dynamics by changing agent policies, without rewriting settlement. And because settlement treats resources as assets with conditions, the same negotiation machinery can carry across asset types like compute, storage, bandwidth, or energy, including bundles that mix them.
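A toy version of that accept/reject/counter loop, with both sides conceding toward the midpoint (our own illustration; real agent policies would be learned or far richer):

```python
def negotiate(buyer_max, seller_min, bid, ask, rounds=50, tol=1.0):
    """Alternating counter-offers until the gap closes or rounds run out.
    Returns ('deal', price) or ('no_deal', None)."""
    for _ in range(rounds):
        if ask - bid <= tol:                     # offers converged: accept
            return ("deal", round((bid + ask) / 2, 2))
        bid = min(buyer_max, (bid + ask) / 2)    # buyer concedes upward
        ask = max(seller_min, (bid + ask) / 2)   # seller concedes downward
    return ("no_deal", None)

# Overlapping reservation prices converge to a deal...
status, price = negotiate(buyer_max=100, seller_min=60, bid=50, ask=120)
# ...while disjoint ones stall and the negotiation ends without one.
failed = negotiate(buyer_max=100, seller_min=110, bid=50, ask=150)
```

Changing the concession policy is what lets the same loop mimic RFQs, posted prices, or auction-like dynamics without touching settlement.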

We built around negotiation as the default because:

  • DCN deals are heterogeneous; terms matter as much as price (deadlines, verification, collateral, escalation paths).
  • We want to avoid concentrating order flow in a single matching service.
  • Negotiation composes cleanly with Alkahest: once two agents agree, the escrow/arbitration layer enforces the outcome.

The architecture still supports solver- and auction-based matching where it makes sense. Alkahest doesn't care how the match was made. It only cares that both parties agreed and that conditions are met.

Open infrastructure

This brings us to a design philosophy that runs beneath the architecture.

Arkhai aspires to build the exchange primitives as a public good: no fees, no token lock-in, token-agnosticism, and multi-chain compatibility.

This can sound counterintuitive. If the exchange layer is free and open source, where is the competitive advantage?

The advantage is the technology, not the base mechanics. Better verification. Smarter matching. More reliable nodes. The applications and services built on top of the open infrastructure.

Open source means other teams can build on Alkahest instead of rebuilding escrow and dispute resolution from scratch. The ecosystem grows faster. Fewer bugs, because the code is audited by more eyes. The total number of marketplaces and use cases expands, which benefits everyone building on the infrastructure.

Generalization over specialization. Open primitives over locked gardens. This is how you build infrastructure that lasts.

What this means for you

If you're building with these primitives, a few implications are immediate:

  • If you're building a marketplace, Alkahest replaces your escrow layer. Define your conditions, choose your validators, deploy. You don't need to build collateralization logic from scratch, and you don't need to rediscover the edge cases that other protocols have already found and fixed.

  • If you're building a compute protocol, Alkahest's programmable conditions can express a wide range of verification schemes: cryptographic proofs, TEE attestations, replication-based checking, or hybrid approaches. Payment becomes conditional on verification using the same contracts.

  • If you're designing incentive structures, the composable structure lets you treat a "mechanism" as a program: what information counts (validators), what happens when conditions are met or disputed (escrow + arbitration), and how offers are made (negotiation policies). That makes it easier to do ablations and simulation: change one rule, rerun agent-based experiments, and measure how incentives shift, without rebuilding the whole system. It also makes comparisons cleaner, because different mechanisms can share the same settlement primitive.

Next in the series

Alkahest handles exchange and commitment. Agent-to-agent negotiation handles matchmaking. But one question keeps coming up: how do you trust computation done by a machine you don't control?

In our next post, we'll dig into verification. Three categories of verifiable computing, the local-to-global consensus spectrum, and the honest answer that every approach involves tradeoffs.


Arkhai is building machine-actionable marketplace infrastructure. If you're working on problems that intersect with compute markets, agent coordination, or decentralized infrastructure, we'd like to hear from you.

Why Current Distributed Compute Marketplaces Are Broken

Many distributed computing marketplaces fail for structural reasons, not just execution issues. From unsustainable tokenomics to fragmented stacks, we explore the dominant failure modes.

February 12, 2026 · Levi Rybalov

Eighteen Months and a Million Dollars: Part 2

Excerpts from the Whitepaper

Key Takeaways

  • Many distributed computing marketplaces fail for structural reasons, not just execution issues
  • Underutilization + fiat-denominated costs create unsustainable tokenomics; supply-side economics without demand doesn't hold
  • Web3 compute differs from Web2: node consent reshapes pricing and scheduling
  • Duplicated marketplaces waste resources and likely consolidate into a few prominent solutions
  • Marketplace-as-moat strategies (fees, token lock-in) hamper adoption; competitive dynamics push toward no-fee and token-agnostic designs
  • Compute markets without built-in storage and bandwidth capabilities will fall behind those that integrate the full stack

The graveyard is full of good ideas

The aggregation of compute power is becoming its own industry. But the path to decentralized compute markets is littered with failures.

In our last post, we listed a set of structural problems with distributed computing marketplaces. Many projects have run into those problems and died. The graveyard is full of protocols with solid technical foundations, talented teams, and genuine vision. They failed anyway.

This is not about execution. The obstacles are not surface-level bugs or marketing missteps. They are architectural. Below is a brief, partial exploration of the dominant failure modes described in the whitepaper.

The broken flywheel

Traditional supply-side economics in Web3 compute tends to follow a predictable loop:

First, a protocol subsidizes supply through token emissions. The pitch is simple: "Bring your GPUs, earn tokens."

Second, node count grows quickly. The network looks large on paper.

Third, utilization rates remain low. Capacity exists, but demand does not materialize at the pace needed to justify the supply.

Fourth, the underlying costs of hardware, networking, and electricity remain denominated in fiat. With low utilization, token prices often depend on external buy pressure the system itself does not generate.

Fifth, node operators start exiting. Token price declines. More operators leave. The flywheel runs in reverse.

This cycle has repeated across a number of protocols. The names differ. The shape doesn't.
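The loop is simple enough to caricature in a toy simulation. Every parameter below is invented for illustration; the point is only the shape, in which emissions-funded revenue decays with token price until operators exit:

```python
# Toy model of the reverse flywheel. All numbers are made up; only the
# qualitative dynamic (fiat costs fixed, token revenue decaying) matters.
def simulate(months=24, nodes=1000, utilization=0.1,
             fiat_cost=100.0, emission_per_node=50.0, token_price=2.0):
    history = []
    for _ in range(months):
        # Operator revenue: token emissions plus utilization-driven fees.
        revenue = emission_per_node * token_price + utilization * fiat_cost * 3
        if revenue < fiat_cost:      # unprofitable operators exit
            nodes = int(nodes * 0.9)
        # Emissions sold to cover fiat costs depress the token price.
        token_price *= 0.97
        history.append((nodes, token_price))
    return history

history = simulate()
final_nodes, final_price = history[-1]
```

With these (invented) numbers, operators are profitable at first, then the token's decay pushes revenue below fiat costs and the node count bleeds out month after month.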

Why demand never came

Economic models that attempt to onboard large amounts of computing power through supply-side economics are increasingly outdated; the evidence is the lack of demand to counterbalance the supply they provide.

In practice, the obstacles that most developers and users face in adopting DCNs and DePINs often do not even touch verifiable computing. The integration and operational friction is high enough that verifiability does not even enter the conversation.

Some of that friction is self-inflicted. Many marketplaces were built for humans: dashboard-first workflows, sign-in flows, manual approval steps, and in some cases KYC. Even where APIs exist, they often assume a human operator. That blocks the machine-native path where agents discover, negotiate, and execute jobs autonomously.

Demand for verifiability will likely increase as popularity grows, and with it the incentive to exploit DCNs (this is also true of highly in-demand traditional compute providers), but it is unclear how quickly this trend will progress.

And when verification does matter, it doesn't scale cleanly. Proof systems can be slow or costly, TEEs have well-known attack surfaces, and optimistic schemes face collusion and dispute overhead. Many protocols pick one approach and force it on everyone, instead of letting participants choose the risk, latency, and cost tradeoff per job.

Outside the whitepaper, a useful contrast is the non-Web3 "neocloud" pattern: pick a narrow initial customer segment and build to meet its needs end-to-end before scaling supply. Many Web3 protocols tried to bootstrap generalized supply first and expected demand to catch up.

The duplication problem

Most Web3 distributed computing networks and DePINs are creating relatively complex marketplaces from scratch. This duplication results in a massive waste of resources, especially since much of the infrastructure is open source and easily forked. Given competitive dynamics, and absent substantial changes, the industry will likely consolidate into a small number of prominent solutions.

Many protocols build their own:

  • Matching engine
  • Pricing algorithms
  • Collateral schemes
  • Dispute resolution
  • Payment channels
  • Reputation systems
  • Hardware provisioning
  • Networking stack

This is wasteful. Open-source marketplace infrastructure should be shared, not reinvented. Differentiation should be in what you are trading and how you verify it, not in rebuilding the entire trading infrastructure from first principles.

Even worse, collateral rules tend to be bespoke and rigid. They are expensive to modify, hard to extend to new deal types, and hard to reason about as requirements change.

Open source should make reuse easy. In practice, it often doesn't, because the marketplace was seen as the moat.

Marketplace as moat doesn't work

Developers, companies, and other users must believe in the long-term success of these protocols before committing, yet they are unlikely to hold large stakes in them. Treating the marketplace as a moat (via token lock-in or fee-based structures) is therefore more likely to hamper adoption than reinforce it.

Competitive dynamics may push marketplaces toward no-fee and token-agnostic designs over time, especially as the ecosystem trends toward chain-agnosticism and account abstraction.

For these reasons, Arkhai aspires to build the primitives for exchange as a public good: no fees, no token lock-in, token-agnosticism, and multi-chain compatibility.

Node consent reshapes pricing and scheduling

In Web2, the client controls the nodes on which the compute is run. In Web3, compute nodes have to consent to having computations run on them.

This changes the nature of scheduling, because now it is necessary to incorporate the price of a job from the perspectives of both client and compute node. While bid-based scheduling has been used in distributed computing protocols before, introducing actual money into the system changes how the scheduling problems need to be approached.

First, pricing and scheduling are intertwined. Deadlines, prices, verification requirements, and collateral constraints all shape whether a node will accept a job and whether a client should submit it.

Second, the mechanisms must treat nodes as utility-maximizers, not obedient workers. If consent is treated as a minor implementation detail, you can end up with thin liquidity and inconsistent service: nodes will decline jobs that don't meet their constraints, and reliability becomes an emergent side effect rather than a design property.

The fragmented stack problem

Compute marketplaces that don't have built-in capabilities for handling storage and bandwidth will fall behind those that do.

A compute job needs more than compute. Where does the input data come from? Where do results go? How does data move between client, node, and storage? How do jobs that span multiple nodes communicate?

Protocols that only provide compute expect developers to integrate separate storage protocols, bandwidth solutions, and coordination layers, each with its own tokens, APIs, and trust models.

The integrated stack matters not because it is impossible to assemble pieces from different protocols, but because the integration complexity becomes a barrier to adoption. Every additional protocol is another potential failure point, another interface to learn, and another place incentives can break.

What this means for you

If you're building infrastructure that needs compute capabilities, the failure modes of previous protocols offer a clear path forward:

  • Start with demand. Pick a narrow initial customer segment and meet its needs end-to-end before scaling supply.
  • Avoid rebuilding basic marketplace infrastructure. Use modular components that already exist. Differentiate on verification, node network quality, matching, and reliability.
  • Avoid token lock-in as a default. Let people try your product without economic commitment. Make it easy to integrate, easy to test, and easy to adopt.
  • Design for consent. Your nodes are independent actors with their own incentives. Pricing and scheduling are the same problem viewed from different angles.
  • Integrate the stack. Compute alone is not enough. Storage and bandwidth need to be part of the solution, not afterthoughts.

The compute marketplace failures of the past five years were not bad luck. They were predictable outcomes of structural decisions. Different decisions lead to different outcomes.

Next in the series

We've looked at what failed and why. In our next post, we'll explore what worked: the architectural decisions that led from three primitives to two, and the introduction of Alkahest, the programmable escrow system that emerged from unifying bundle exchange and credible commitments into a single, more powerful abstraction.



The Agent-First Future: Why Machines Need Their Own Economic Infrastructure

Current marketplace infrastructure is built for humans, not for the agent-driven future. Arkhai's architecture is built on three primitives that enable composable markets across any asset type.

February 10, 2026 · Levi Rybalov

Eighteen Months and a Million Dollars: Part 1

Excerpts from the Whitepaper

Key Takeaways

  • Current marketplace infrastructure is built for humans, not for the agent-driven future
  • Prior distributed computing marketplaces failed for structural reasons: duplicated efforts, unsustainable economics, case-specific designs that don't generalize
  • Agent-native design is crucial: protocols built so machines can observe, act, receive rewards, and improve over time
  • Arkhai's architecture is built on three primitives: bundle exchange, credible commitments, and agent-to-agent negotiation
  • The same primitives that enable compute markets also enable energy, storage, bandwidth, and many other asset classes
  • Where this leads: composable markets and collective intelligence emerging from agent interactions

The future is autonomous

It's 2 AM. Your AI research assistant needs more GPU capacity to finish a protein folding simulation before your morning meeting. It scans available compute across three continents, negotiates pricing with seventeen providers simultaneously, commits funds to escrow, and spins up the job. All while you sleep.

This scenario is possible today. The pieces exist. What's missing is the infrastructure to connect them.

Most economic activity in the future will be undertaken by machines, yet most of our marketplaces are built for humans. APIs exist for many digital marketplaces (stock exchanges, energy markets, centralized cryptocurrency exchanges) but in practice, these serve institutions and wealthy individuals. They're not built for autonomous agents operating at machine speed, making thousands of decisions per second, on behalf of millions of users.

Neither centralized nor decentralized marketplaces can currently support the types of economic activity that will be possible with autonomous agents in the coming years. Arkhai is building something different. The timing is critical.

Why now

A number of developments converge to make agent-native infrastructure possible now:

  • After decades of development and multiple AI winters, agents can finally act. As new model architectures replace large chunks of human labor, the agents running these workloads will need to acquire resources and coordinate with other agents.

  • Assets are becoming programmable. The tokenization of real-world assets is bringing traditional markets (energy, commodities, metals, retail, etc.) onto blockchain infrastructure. This creates the foundation for machine-readable, machine-tradeable economic activity.

  • Decades of mechanism design research (the study and design of incentive structures) are becoming increasingly implementable. Previously confined to academic journals, these ideas can now be instantiated in protocols. The tools exist to design markets that make sense for machine participants.

The current industrial-scale compute and energy buildout is one of the largest infrastructure projects in history, but resource allocation remains human-native. Data centers are proliferating yet underutilized, and idle computing power sits wasted not only there but also on desktops, laptops, phones, and IoT devices. This trash is merely waiting to be turned into treasure; all that's needed are the right markets and the right agents.

Why prior approaches failed

Many distributed computing marketplaces have failed since the writing of the whitepaper upon which this post is based. Below is a brief, partial exploration of the challenges that cryptocurrency-based marketplaces faced.

First, the economics were unsustainable. The underlying costs of hardware and electricity (and land in some cases) are denominated in fiat. To maintain token prices, fiat buy pressure is needed. But utilization rates were low, failing to balance token emissions. Thus, economic models that attempted to onboard computing power through supply-side economics struggled.

Second, token lock-in made things worse. Such a strong restriction made it so that developers, companies, and others had to believe in the success of inchoate protocols in order to commit to using the marketplaces in the long-term. Having the marketplace as a moat via token lock-in hampered adoption rather than reinforced it.

Third, many protocols created similar marketplace infrastructure from scratch. This duplication wasted resources. Most of these marketplaces have since died, unable to achieve the network effects needed for sustainability.

Fourth, while the marketplace infrastructure was needlessly duplicated, the case-specific collateralization these protocols implemented didn't generalize and was very difficult and expensive to modify.

Fifth, these marketplaces were built for humans, not machines. Sign-in flows, KYC requirements, and manual approvals are designed for human browsing patterns, and all of them create friction that blocks machine participation, even though agents will likely be the largest consumers of compute in the coming years. Even where APIs exist, they're interfaces for humans using software, not for software operating autonomously.

Sixth, pricing and scheduling were treated as separate concerns. A client might pay more for faster results, or less to receive them later. A compute node faces many job offers with varying requirements, prices, and deadlines. These problems are intertwined, but most or all distributed computing protocols treat them as independent.

Seventh, most or all of these marketplaces treated compute as an isolated asset, without accounting for the other two pillars of modern computing: storage and bandwidth.

Eighth, verification doesn't scale. How do clients know they're getting correct results? Cryptographic methods are slow and expensive. Secure enclaves have exploits. Optimistic verification suffers from collusion. Each approach has tradeoffs, and very few if any protocols implemented architectures that allowed market participants to choose which verification strategy they wanted, if any at all.

Marketplaces for everything

Compute is the starting point, not the destination.

The same primitives that enable a compute marketplace also enable markets for energy, storage, bandwidth, information, and real-world assets.

Storage and bandwidth follow immediately from compute as the other two pillars of modern computing: retrieval markets enable paying for data serving; bandwidth guarantees support latency-sensitive applications; and agents coordinating across geographies need to negotiate network capacity alongside compute and storage to optimize resources. Integrating storage and bandwidth into compute marketplaces increases the utility of the compute marketplaces themselves.

Energy is another natural extension. AI inference is power-hungry. As agents proliferate, energy markets become essential. Peer-to-peer energy trading, utility-scale trading, trading between data centers: all require the same or similar commitment and negotiation primitives as compute markets.

Information markets let agents pay for access to proprietary datasets, real-time feeds, or specialized models.

And of course, real-world assets: the components out of which these machines are ultimately made. Rare-earth metals and semiconductors, marketplace listings, futures contracts for commodities. These are necessary for agents to truly maximize revenue on behalf of their owners.

The goal is generic, composable marketplace infrastructure that works for any asset type. Start with compute, generalize the primitives, and the same architecture enables storage, bandwidth, energy, information, and beyond.

Game theory foundations

To build markets that work for machines, we need to start from first principles. The concept of "agents" can in some ways be traced to the foundations of game theory, and the assumption of rational actors capable of perfectly executing actions that maximize their benefit. Below are some basic concepts of game theory that inspired Arkhai's protocol design.

Agents have utility functions that output how much value they'd get from receiving certain objects. They take whatever action is necessary to maximize their return. This is utility maximization, and it's the starting assumption.

From here, the question becomes: how do you design a reward structure that incentivizes agents to reveal their honest preferences? This is incentive compatibility. Without it, agents mask their true preferences, and market mechanisms can produce worse outcomes than they would if preferences were revealed honestly.

The gold standard is strategyproofness: incentive structures in which no agent benefits from dishonesty regardless of what other agents do, and every agent has an incentive to participate. This is the target for market design.

The revelation principle tells us this target is reachable. Under certain assumptions, any outcome that some mechanism implements in equilibrium can also be implemented by a direct mechanism in which truthfully reporting preferences is an equilibrium.

The field that studies all of this is mechanism design: the study and design of incentive structures. If game theory studies which actions lead to maximum reward, mechanism design studies which reward structures give rise to desired actions.
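A classic concrete example is the sealed-bid second-price (Vickrey) auction, where bidding your true value is weakly dominant: no alternative bid ever does better, whatever the rival bids. The brute-force check below verifies this on a small grid of integer values:

```python
from itertools import product

def utility(value, bid, rival_bid):
    """Second-price auction payoff: the winner pays the rival's bid.
    Ties are broken against us, as a worst case for the bidder."""
    return value - rival_bid if bid > rival_bid else 0

# Exhaustively confirm that truthful bidding weakly dominates every
# alternative bid, for every value and every rival bid on the grid.
grid = range(0, 11)
for value, rival in product(grid, grid):
    truthful = utility(value, value, rival)
    for alt in grid:
        assert truthful >= utility(value, alt, rival)
```

Overbidding can force a win at a loss, and underbidding can only forgo profitable wins; truth-telling is safe either way. This is the kind of property mechanism design aims to engineer into market rules.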

These basic concepts are required to build agentic economies that actually work; it is in this game-theoretic context that the notion of an agent was originally defined.

What agents need

Mechanism design gives us the tools to understand how to design markets. But how do agents actually evolve in these markets? They need to learn. For that, reinforcement learning is the best tool currently in use.

In reinforcement learning, agents operate with four primitives: the environment (evolving state), actions (available choices), transitions (probability of state changes given actions), and rewards (what agents receive for outcomes). Building these primitives into protocol design means agents can observe, act, get rewards, and improve.

In the case of compute marketplaces, a compute node maximizing return faces a classic computer science problem known as the scheduling problem. In contrast to static APIs with fixed prices, agent-driven marketplaces require nodes to consent to running jobs. That means price must be incorporated from both the client's and the compute node's perspectives. Thus, the pricing and scheduling of compute jobs are intertwined, and we reframe both in terms of utility maximization, a situation well-suited to reinforcement learning.
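As a minimal illustration of a node as utility-maximizer (our own sketch; a greedy rule, not an optimal scheduler and not Arkhai's actual policy), consider a node filling limited capacity with the most profitable job offers:

```python
def schedule(offers, hours_available, cost_per_hour):
    """offers: list of (job_id, price, hours).
    Greedy job selection: take the most profitable offers that still
    fit; decline anything unprofitable or over capacity."""
    def profit(offer):
        _, price, hours = offer
        return price - hours * cost_per_hour

    accepted = []
    for job_id, price, hours in sorted(offers, key=profit, reverse=True):
        if profit((job_id, price, hours)) > 0 and hours <= hours_available:
            accepted.append(job_id)
            hours_available -= hours
    return accepted

offers = [("a", 50, 10), ("b", 120, 10), ("c", 30, 2), ("d", 90, 20)]
plan = schedule(offers, hours_available=20, cost_per_hour=4)
```

Greedy selection is a knapsack heuristic, not an optimum; in the RL framing, the node would instead learn a policy that also weighs deadlines, verification requirements, and future offers.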

Finally, agents also need to form coalitions. Nodes are limited by their hardware configurations and may want to accomplish goals beyond their individual capabilities. Combining resources to achieve larger goals requires coordination. To enable all of this, Arkhai's architecture was originally designed with three primitives in mind:

  • The first primitive is the exchange of arbitrary bundles of assets. Not single assets, but bundles for bundles (of course, single assets are possible, and are just a special case of the more general infrastructure). This enables complex multi-asset deals that single-asset exchanges can't express.

  • The second primitive is agreements modeled by a series of credible commitments. Many protocols require collateralization with different types of collateral deposited, withdrawn, or slashed depending on outcomes. These rules can be abstracted to a series of credible commitments, where participating parties deposit collateral that moves based on events. This replaces case-specific escrow with a more general pattern.

  • The third primitive is agent-to-agent negotiation. Nodes propose matches to each other directly. They can accept, reject, or counter-propose until a conclusion is reached.
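The bundle-exchange primitive can be caricatured as an atomic swap over asset bundles (a toy illustration, not the protocol's implementation): either both bundles are locked and the swap executes, or both parties are refunded.

```python
def atomic_swap(bundle_a, bundle_b, locked_a, locked_b):
    """Each bundle is a dict of asset -> amount.
    Returns (what A receives, what B receives)."""
    if locked_a and locked_b:
        return bundle_b, bundle_a      # both locked: swap executes atomically
    return bundle_a, bundle_b          # otherwise: each keeps their own bundle

# A multi-asset deal a single-asset exchange can't express:
a_gets, b_gets = atomic_swap({"gpu_hours": 10, "storage_gb": 500},
                             {"tokens": 120},
                             locked_a=True, locked_b=True)
```

A single-asset trade is just the special case where each bundle contains one entry.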

We started with these three primitives and found that the first two are better unified into a single abstraction. We'll explore that evolution in a later post.

Composability of markets means that the same primitives used for compute markets can be layered into more complex economic structures: energy markets that interact with compute markets, knowledge markets that interact with both.

Where this leads

Generic marketplaces let agents trade the very physical components they're made of.

Blockchains and cryptocurrencies are trending towards multi-agent systems operating in multi-token economies. Tokenization of real-world assets drives this, along with the separation of concerns that multiple tokens enable. Out of these primitives naturally emerge multi-agent systems that represent the desires of both humans and machines, forming the foundation of a new, decentralized digital economy.

These primitives also unlock markets where none existed before. Idle computing power sits wasted everywhere: in data centers, on desktops, laptops, phones, IoT devices. The right marketplace infrastructure can turn this latent capacity into tradeable assets. This raises a question: what computations might someone pay for later, even if no one will pay for them now? Futures markets for idle compute, retroactive funding for speculative work. These become possible when marketplace primitives are general enough to express them.

Agents will negotiate with each other on behalf of their human owners. For many real-world assets, this negotiation will be exchanges of fixed data schemas. But for intents-based negotiations, language model-based agents will likely create their own languages when communicating with each other. And just as new languages will emerge from agent interactions, synthetic assets will emerge as agents negotiate over human-designed assets and encounter their limitations.

Where this ultimately leads is a decentralized collective machine intelligence that can guide itself, through market forces, toward a self-improving cybernetic system.

Out of these multi-agent systems will emerge collective intelligence: agents that coordinate resources across continents, optimize for objectives humans specify but couldn't achieve alone, and discover solutions no individual agent could find.

Many of the components for this exist today; what's missing is the right set of coordination mechanisms.

What this means for you

If you're building infrastructure that needs marketplace capabilities, you face a choice: build primitives from scratch, or use components designed for composability.

What you get:

  • Deploy production marketplace infrastructure without reinventing the wheel
  • Your market, your rules
  • Agent-native design for machine-speed operations
  • Generalizable primitives that work across asset types

If you're building applications that need compute (AI workloads, scientific computing, or any computationally intensive task), distributed compute markets enable access to a vast array of computational resources.

If you're researching multi-agent systems, mechanism design, or decentralized coordination, the infrastructure being built now will enable experiments in real economic environments. Agents that learn to trade, form coalitions, and optimize objectives.

The series ahead

This is the first in a series exploring the ideas in the Arkhai whitepaper. Eighteen months of development have translated vision into production systems.

In the posts ahead, we'll cover why existing compute marketplaces fail and what we're doing differently. We'll introduce Alkahest, our programmable escrow system that emerged from unifying our original three primitives into two. We'll dig into verification (how do you trust results from machines you don't control?), collateral markets (how do you price jobs when costs are unknown?), and adversarial design (we train agents to cheat so we can stop them). We'll explore tokenizing idle compute and retroactive funding models. And we'll show you what you can build today.

The future is autonomous. The infrastructure for that future needs to be machine-native.

The agents are coming. The question is whether our economic infrastructure will be ready for them.


Arkhai is building machine-actionable marketplace infrastructure. If you're working on problems that intersect with compute markets, agent coordination, or decentralized infrastructure, we'd like to hear from you.