Tokenizing Idle Compute
Idle computing power sits wasted everywhere. Application-specific token markets and retroactive rewards can turn latent capacity into tradeable assets.
February 25, 2026 · Levi Rybalov
Eighteen Months and a Million Dollars: Part 7
Excerpts from the Whitepaper

Key Takeaways
- Idle computing power sits wasted everywhere: data centers, desktops, laptops, phones, IoT devices
- SETI@home, Folding@home, and BOINC showed volunteers would contribute compute for free
- Volunteer compute can create real value, but contributors typically receive points or recognition, not economic upside
- Application-specific token markets can finance compute in real time: contributors can sell into stablecoins
- Immutable records enable retroactive rewards: reward contributors after success materializes
- The IP tension: some outputs need to remain private. Secure enclaves enable verifiable credit with confidentiality.
Trash waiting to become treasure
The world is full of idle compute. Data centers run underutilized. Desktops sit idle overnight. Phones and edge devices spend most of their time waiting.
Most of that capacity is already provisioned and paid for. When it sits idle, it's economic trash: cycles that could be doing useful work. The right markets turn that trash into treasure.
Volunteer computing proved that some of this capacity can be mobilized. It also revealed the ceiling: once you've saturated the set of people willing to donate for free, the supply stops growing, and it still represents only a small fraction of the idle compute that exists.
We've spent the past several posts on mechanism design: Alkahest for exchange and commitment, verification for trust, collateral markets for economic guarantees, adversarial training for robustness. Now we can ask the economic question those mechanisms are in service of:
How do you create markets for work that has no buyer yet, but might be valuable later?
In what follows, we'll use BOINC as a precedent, name the value capture problem, and then outline two complementary mechanisms: (1) application-specific token markets for real-time liquidity, and (2) retroactive rewards backed by immutable contribution records and privacy-preserving attribution.
What BOINC taught us
The history of grid-based distributed scientific computing goes back at least to the mid-1990s, including distributed prime-search projects like GIMPS, and later to more mainstream platforms like SETI@home and Folding@home.
Both of the larger platforms were launched to connect scientists with embarrassingly parallel computational workloads to volunteers all around the world. Volunteers offered their own computers for free in exchange for participating in large scientific projects and competing on leaderboards for non-transferable Web2 points.
In the case of SETI@home, the computations helped the search for extraterrestrial life. In the case of Folding@home, the computations were related to protein dynamics. Out of SETI@home grew BOINC, the Berkeley Open Infrastructure for Network Computing, which generalized the project creation and job generation mechanisms to other scientific computing projects, dozens of which are still operating today.
The lesson from BOINC is clear: people will contribute compute for free if they care about the project. They'll download software, configure their machines, and donate cycles in exchange for leaderboard points and the satisfaction of contributing to science. No money changed hands - just participation in something larger than themselves.
This model worked. Millions of computers contributed billions of hours of compute time. But it hits an upper limit: participation saturates once you've reached the set of people willing to donate for free, and even at its peak it covers only a small fraction of the idle compute that exists globally. If you want to unlock orders of magnitude more capacity, especially for work that doesn't have a built-in volunteer audience, you need incentives that look like markets, not charity.
The value capture problem
Volunteer compute can power serious science: papers, datasets, and occasionally downstream intellectual property. With proper tracking and incentives, those who contributed computational resources could receive a portion of that downstream value.
In practice, they generally don't. Compute providers get points on a leaderboard and, at best, acknowledgements or consortium-style authorship; the compensation is rarely economic. The value they help create mostly accrues elsewhere: to institutions, companies, and shareholders. The infrastructure that enabled the discovery is cut out of the value capture.
This isn't a complaint about those specific projects. They did exactly what they were designed to do: harness volunteer compute for scientific problems. The problem is the design. When you can't track contributions in a way that enables later compensation, you can't create markets for speculative work. You're limited to charity.
Futures markets for compute
These new markets aim to answer the following question: are there computations for which nobody is willing to pay now, but that somebody might be willing to pay for later?
There are two primary approaches.
First is real-time tokenization: an application-specific token that contributors earn as they provide compute. Because the token trades in open markets, contributors can sell into stablecoins to cover near-term costs like electricity and hardware. The market price becomes an imperfect but useful signal of expected future value. People can speculate on which applications will produce real outcomes. That speculation isn't just commentary: it finances compute.
Second is retroactive rewards: keep an auditable record of who contributed what, and pay out later if value materializes. This ties compensation to real outcomes and avoids forcing early speculation into a single price signal. The tradeoff is timing: retroactive rewards do not directly solve cash flow unless contributors can borrow against expected future payouts.
In practice, the two can be combined. A liquid token can be a tradable claim on future retroactive rewards, with the record as the source of truth for who earned what.
At a high level:
- An application defines a claim on future value and the rules for later compensation.
- Contributors who provide compute receive claims in proportion to their contributions.
- Those claims trade in an application-specific market, so contributors can realize value early and others can fund work they believe will pay off.
- If real value materializes, claimholders can be compensated through retroactive rewards, licensing revenue, or whatever payout the application specified up front.
If the application produces nothing valuable, the claims are worthless. If it leads to a breakthrough, claimholders share in the upside.
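The lifecycle above can be sketched as a minimal claim ledger. All names here are hypothetical illustrations, not Arkhai's actual API: claims are issued in proportion to compute, trade freely, and pay out pro rata if value materializes.

```python
from collections import defaultdict

class ClaimMarket:
    """Minimal sketch of an application-specific claim market.

    Hypothetical structure for illustration only: issue claims
    proportional to contributed compute, allow transfers, and
    distribute any realized value pro rata to claimholders.
    """

    def __init__(self):
        self.claims = defaultdict(float)  # holder -> claim units

    def record_contribution(self, contributor: str, compute_units: float):
        # Claims are issued in proportion to compute contributed.
        self.claims[contributor] += compute_units

    def transfer(self, seller: str, buyer: str, amount: float):
        # Claims are tradable: a contributor can realize value early.
        assert self.claims[seller] >= amount, "insufficient claims"
        self.claims[seller] -= amount
        self.claims[buyer] += amount

    def distribute(self, payout: float) -> dict:
        # If real value materializes, the payout flows pro rata.
        total = sum(self.claims.values())
        if total == 0:
            return {}
        return {holder: payout * units / total
                for holder, units in self.claims.items()}

market = ClaimMarket()
market.record_contribution("alice", 300)
market.record_contribution("bob", 100)
market.transfer("alice", "carol", 100)  # alice sells part of her claim early
payouts = market.distribute(1000.0)
# alice keeps 200 of the 400 outstanding units, so half the payout
```

If the application fails, `distribute` is simply never called with a nonzero payout and the claims expire worthless, which is exactly the intended risk profile.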
This is not purely theoretical. In the early days of cryptocurrencies, a number of protocols attempted to apply Bitcoin's reward mechanism to scientific computing. Gridcoin, Curecoin, Pinkcoin, and Primecoin were among these early science-focused computing coins. However, blockchains were in their infancy at the time, and while these cryptocurrencies accomplished much, they fell short of their ultimate visions.
The infrastructure has matured since then. Arkhai's native tracking of jobs and their artifacts can be paired with retroactive reward mechanisms. A verifiable record of what was computed, by whom, and under what conditions enables arbitrary reward structures over those computations.
The immutable record
Retroactive rewards require attribution. Attribution requires a record.
Computational reproducibility gives network participants a basis for trusting that computations performed through the protocol were done correctly, because misbehavior carries economic consequences. A scientific record of which nodes requested the computations, which nodes performed them, and what the inputs and outputs were enables incentive mechanisms that reward (or penalize) each part of the scientific process.
This record is the foundation for retroactive rewards. When an application succeeds, the reward mechanism can look back at who contributed what. The trail is immutable. Attribution is verifiable. Payment can flow back to contributors proportionally.
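One common way to make such a record tamper-evident is a hash chain: each contribution entry commits to the previous one, so rewriting history invalidates everything after it. The schema below is a hypothetical sketch, not Arkhai's actual record format.

```python
import hashlib
import json

def append_entry(log: list, entry: dict) -> dict:
    """Append a contribution record, chaining it to the previous entry
    by hash so history cannot be silently rewritten. Sketch only;
    the entry schema here is hypothetical."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {**entry, "prev": prev_hash}
    # Canonical serialization so the digest is deterministic.
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log[-1]

def verify_chain(log: list) -> bool:
    """Recompute every link; any edited entry breaks verification."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log = []
append_entry(log, {"job": "fold-42", "worker": "node-7", "units": 120})
append_entry(log, {"job": "fold-42", "worker": "node-9", "units": 80})
```

With a chain like this, a retroactive reward mechanism can total each worker's verified units and pay out proportionally, as in the claim sketch earlier.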
The privacy tension
There's a complication.
Storing all of a job's inputs and outputs publicly is undesirable for some forms of intellectual property.
A drug discovery computation might produce proprietary molecules. A climate model might use licensed satellite data. A machine learning training run might involve confidential datasets. Making everything public isn't always legal, and it isn't always desirable.
One option is to keep public whatever can be public, and to run the rest inside secure enclaves in a privacy-preserving way, while still attributing verifiable credit to the relevant contributors.
The secure enclave runs the computation. The inputs and outputs stay confidential. But the enclave produces an attestation: this node contributed this much compute to this job. The attestation is public even when the data isn't. Credit attribution works, retroactive rewards work, and the actual intellectual property remains protected.
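The shape of such an attestation can be sketched with hash commitments: the public record proves which contributor did how much work on which job, and commits to the private data without revealing it. This is an illustrative structure only; a real enclave attestation would additionally be signed by a hardware-rooted key.

```python
import hashlib

def attest(job_id: str, contributor: str, compute_units: int,
           private_inputs: bytes, private_outputs: bytes) -> dict:
    """Sketch of an enclave-style attestation (hypothetical fields).

    The attestation is public; the private payloads never leave the
    enclave. Only their hash commitments appear, so anyone holding
    the data can check it matches, but no one can recover it.
    """
    return {
        "job": job_id,
        "contributor": contributor,
        "units": compute_units,
        "input_commitment": hashlib.sha256(private_inputs).hexdigest(),
        "output_commitment": hashlib.sha256(private_outputs).hexdigest(),
    }

att = attest("drug-screen-3", "node-12", 500,
             b"licensed satellite data",
             b"proprietary candidate molecules")
# Credit attribution needs only the public fields: job, contributor, units.
```

Retroactive rewards can then be computed entirely from the public fields, while the commitments let an auditor with authorized access to the data confirm the attestation matches it.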
This tension between transparency and privacy runs through everything we're building. The market needs transparency to function: prices, contributions, outcomes. The applications need privacy to exist: confidential data, proprietary methods, competitive advantage. The architecture has to support both.
What this means for you
If you're running large-scale scientific computations, application-specific markets provide an alternative to grants, university clusters, or volunteer compute. Contributors can receive tradable claims that gain value if the work succeeds. This changes the incentive structure: you're not asking for charity, you're offering participation in potential upside.
If you're building applications that generate intellectual property, the hybrid approach (public attribution, private data) lets you participate in decentralized compute markets without exposing your competitive advantage.
If you're thinking about retroactive rewards, Arkhai's immutable record provides the attribution layer. When it's time to fund past contributions, the record exists to determine who contributed what.
Next in the series
We've covered the full stack: primitives, verification, collateral, adversarial design, and now the economics of idle compute. In our next post, we'll zoom out to the vision level. What does it look like when compute, energy, storage, and bandwidth all trade on the same infrastructure? What emerges from compositional game theory applied to market design?
Arkhai is building machine-actionable marketplace infrastructure. If you're working on problems that intersect with compute markets, agent coordination, or decentralized infrastructure, we'd like to hear from you.