
SignalConsensus: Why We Want to Replace Human Oracles with AI

A UMA whale manipulated a $7M Polymarket resolution by accumulating governance tokens. SignalConsensus is our planned system using three independent AI models with BFT consensus on Chainlink CRE. You can't bribe a model that has no wallet.

SignalConsensus is a planned feature for Purrdict’s parlay system — it is not yet live. This post describes our design and the problem it solves. Currently, HIP-4 crypto price markets resolve automatically via HyperCore’s native L1 price feed with no oracle dependency.

The oracle problem

In March 2025, a UMA whale used 5 million governance tokens across three accounts — roughly 25% of total votes — to force a “Yes” resolution on Polymarket’s “Will Ukraine agree to Trump’s mineral deal?” market. There was no deal. The whale manipulated a $7 million market by simply holding enough tokens to control the vote.

This wasn’t a smart contract exploit. The contracts worked exactly as designed. The problem is the design itself: humans with financial incentives decide truth, and humans can be bought.

UMA’s optimistic oracle uses a propose-dispute-vote cycle. Anyone can propose a resolution. If disputed, UMA token holders vote. Whoever holds the most tokens controls the outcome. It’s governance-weighted truth — and governance tokens have a market price.

Kalshi solves this differently: internal adjudication. A team at Kalshi decides who won. You trust them because they’re CFTC-regulated. But “trust a regulated company” isn’t trustless. It’s just trusting a different entity.

Both approaches share the same vulnerability: humans in the loop.

What if the oracle had no wallet?

That’s the question we asked when designing SignalConsensus.

AI models don’t hold tokens. They don’t have financial incentives. You can’t offer GPT-5 a bribe — it has no wallet to receive it. You can’t accumulate “Claude governance tokens” — they don’t exist.

SignalConsensus replaces human voting with AI consensus from three independent providers: OpenAI (GPT-5), Anthropic (Claude Sonnet 4.6), and Google (Gemini 2.5 Pro). Each model evaluates the outcome independently. 2-of-3 consensus determines the result.

To manipulate the resolution, you'd need to simultaneously compromise two of the three largest AI companies in the world. That's a fundamentally different — and much harder — attack surface than buying governance tokens on a DEX.

How it works

Step 1: Data collection

When a market needs resolution, the workflow gathers context from multiple independent sources — market data (prices, volumes) and news feeds (headlines, sentiment). Multiple sources prevent any single feed from controlling the narrative.
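The data package handed to each model could be shaped roughly like this. A sketch only — the interface and field names are assumptions for illustration, not the actual SignalConsensus schema:

```typescript
// Illustrative input package assembled before model evaluation.
// All names and example values here are made up for illustration.
interface MarketDataPoint {
  source: string;      // e.g. an exchange or aggregator
  price: number;
  volume24h: number;
}

interface NewsItem {
  source: string;
  headline: string;
  sentiment: number;   // e.g. -1 (negative) to 1 (positive)
}

interface DataPackage {
  marketQuestion: string;
  marketData: MarketDataPoint[];
  news: NewsItem[];
}

// Example package for a hypothetical market.
const pkg: DataPackage = {
  marketQuestion: "Will Ukraine agree to Trump's mineral deal?",
  marketData: [{ source: "exchange-a", price: 0.12, volume24h: 250_000 }],
  news: [
    { source: "wire-a", headline: "No deal signed as talks stall", sentiment: -0.6 },
  ],
};
```

Because every model receives the same package, the inputs to each verdict can be audited after the fact.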

Step 2: Independent AI evaluation

Each model receives the same data package and evaluates independently. The models don’t see each other’s responses. There’s no “discussion” or “negotiation” — each model produces its own verdict.

For each outcome, each model returns:

  • Vote: YES or NO
  • Confidence: 0-100 score
  • Reasoning: Text explanation (recorded for transparency)
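A per-model response like the one above can be modeled as a small record. This is a sketch with hypothetical field names, not the production schema:

```typescript
// Hypothetical record for one model's verdict; names are illustrative.
type Vote = "YES" | "NO";

interface ModelVerdict {
  provider: "openai" | "anthropic" | "google";
  vote: Vote;          // binary verdict on the outcome
  confidence: number;  // 0-100 self-reported score
  reasoning: string;   // free-text explanation, recorded for transparency
}

// Example verdict as one model might return it (values are made up).
const verdict: ModelVerdict = {
  provider: "anthropic",
  vote: "NO",
  confidence: 92,
  reasoning: "No credible source reports a signed agreement.",
};
```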

Step 3: Consensus

Simple: 2 out of 3 models must agree.

If two or more models vote YES, the resolution is YES. If two or more vote NO, the resolution is NO. If neither side reaches two votes — which with three binary voters can only happen when a model fails to respond or abstains — there is no resolution, and the market stays open until clarity emerges.

No consensus means no resolution. The system doesn’t force an answer when the models disagree. A wrong resolution is worse than a delayed one.
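The 2-of-3 rule fits in a few lines. A minimal sketch, assuming each model's vote has already been collected (a missing or failed verdict is represented as `null`; names are illustrative):

```typescript
type Vote = "YES" | "NO";

// Tally verdicts: a side wins only with at least 2 of 3 votes.
// A model that failed or abstained contributes null.
function tallyConsensus(votes: (Vote | null)[]): Vote | "NO_CONSENSUS" {
  const yes = votes.filter((v) => v === "YES").length;
  const no = votes.filter((v) => v === "NO").length;
  if (yes >= 2) return "YES";
  if (no >= 2) return "NO";
  return "NO_CONSENSUS"; // the market stays open
}

tallyConsensus(["YES", "YES", "NO"]);  // → "YES"
tallyConsensus(["YES", null, "NO"]);   // → "NO_CONSENSUS"
```

Note that a single hallucinating model changes nothing: its lone vote can never reach the two-vote threshold on its own.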

Step 4: On-chain publication

The consensus result is published on-chain. But we don’t just publish the final answer — we publish every model’s individual vote, including which model voted what and with what confidence.

Anyone can verify. Anyone can audit. If a resolution seems wrong, you can see exactly which models agreed and what data they were looking at. Full transparency.
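One way the published record could be shaped — a sketch of the idea, not the actual on-chain ABI:

```typescript
// Illustrative shape of a published resolution record; not the real contract
// schema. Field names are assumptions.
interface PublishedResolution {
  marketId: string;
  result: "YES" | "NO";
  votes: {
    provider: string;   // e.g. "openai", "anthropic", "google"
    vote: "YES" | "NO";
    confidence: number; // 0-100
  }[];
  dataHash: string;     // hash of the input data package, for auditability
  timestamp: number;    // unix seconds at publication
}

// Example record (values made up for illustration).
const record: PublishedResolution = {
  marketId: "market-0042",
  result: "NO",
  votes: [
    { provider: "openai", vote: "NO", confidence: 88 },
    { provider: "anthropic", vote: "NO", confidence: 92 },
    { provider: "google", vote: "YES", confidence: 61 },
  ],
  dataHash: "0xabc123",
  timestamp: 1_740_000_000,
};
```

Publishing per-model votes alongside the final result is what makes a disputed resolution auditable rather than a black box.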

SignalConsensus doesn’t just run on a server somewhere. It’s designed for the Chainlink Runtime Environment (CRE) — Chainlink’s orchestration layer for verifiable computation.

What CRE provides

Decentralized execution. Multiple independent Chainlink nodes execute the workflow independently. Results are aggregated via consensus and cryptographically signed. No single node controls the output.

Sandboxed runtime. Workflows are compiled to WASM and executed in an isolated environment on Chainlink nodes.

On-chain delivery. Results are delivered to our on-chain resolution contract via Chainlink’s forwarder infrastructure.

What CRE doesn’t provide

CRE verifies that the workflow ran correctly and that nodes agreed on the output. It does not verify that the AI models’ outputs are “true.” If one model hallucinates, CRE will faithfully confirm that the model produced that hallucination.

This is why we use 3 models with consensus. A hallucination from one model gets outvoted by two that got it right. The redundancy is the safety net, not the execution environment.

AI models can be wrong. Individual models can hallucinate. Data feeds can go stale. The system handles this through redundancy (3 models), consensus (2/3 required), and graceful failure (no consensus = no resolution).

Future: Chainlink Confidential Compute. Chainlink is building a separate product that will add hardware-level attestation for workflow execution, strengthening the guarantee that exact prompts were sent and exact responses received. This would further harden SignalConsensus when available.

Attack surface comparison

| Attack | UMA (Polymarket) | Kalshi | SignalConsensus |
| --- | --- | --- | --- |
| Buy voting power | Accumulate UMA tokens on DEX | N/A | No tokens to buy |
| Bribe decision-makers | Pay UMA holders | Bribe employees (illegal) | AI has no wallet |
| Sybil attack | Create voting wallets | N/A | Can't create fake AI providers |
| Compromise single node | Win proposal, hope no dispute | Compromise adjudication team | Outvoted 2-to-1 |
| Compromise majority | 51% of UMA vote | Compromise Kalshi | Simultaneously compromise 2 of: OpenAI, Anthropic, Google |

The fundamental shift: going from “how many tokens can I buy?” to “can I compromise two of the three largest AI companies in the world simultaneously?”

When do we use SignalConsensus?

Not every market needs AI consensus. That’s a key part of our two-tier resolution architecture.

Tier 1 markets — crypto prices like “BTC > $100K” or “ETH Up or Down” — can resolve from Hyperliquid’s on-chain oracle price data, accessible via HyperEVM precompiles (live on mainnet since April 2025). A resolution bot reads the price, checks the threshold, and publishes the result. No AI needed.
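The Tier 1 path is simple enough to sketch in a few lines, assuming a price already read from the feed. Function and parameter names are hypothetical, and we assume price and threshold use the same fixed-point scale:

```typescript
// Resolve a threshold market like "BTC > $100K" from an oracle price.
// Feed prices are often fixed-point integers; here both values are assumed
// to use the same decimals, so a direct comparison is valid.
function resolveThresholdMarket(
  oraclePrice: bigint, // e.g. USD price scaled by 1e6
  threshold: bigint,   // same scale as oraclePrice
): "YES" | "NO" {
  return oraclePrice > threshold ? "YES" : "NO";
}

// $103,250.00 vs a $100,000 threshold (both scaled by 1e6):
resolveThresholdMarket(103_250_000_000n, 100_000_000_000n); // → "YES"
```

No model calls, no consensus round — just a read, a comparison, and a publish.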

Tier 2 markets — real-world events, elections, community questions — have no on-chain data source. These go through SignalConsensus.

Most prediction market volume is Tier 1 (recurring crypto price bets). SignalConsensus handles the long tail — the markets that make prediction platforms interesting but are hardest to resolve trustlessly.

Where we are

SignalConsensus is a planned feature alongside our parlay system. Here’s what exists today:

  • Resolution contract written, tested, and ready for deployment — stores consensus results with per-model vote transparency and publisher access control
  • CRE workflow prototype built during the Chainlink Convergence Hackathon — queries three AI providers, applies consensus, compiles to WASM
  • Full frontend prototype with prediction markets, signal dashboard, and workflow trigger panel

What’s next: deploying the CRE workflow to a production Chainlink DON and publishing the first real AI consensus resolution on-chain. This will ship alongside the parlay system — parlays need resolution infrastructure, and SignalConsensus is how Tier 2 markets get it.

The roadmap

More models. Starting with 3, expanding to 5-of-7 consensus with additional providers (Llama, Mistral, Grok) for higher fault tolerance.
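The 2-of-3 rule generalizes directly to k-of-n. A sketch of what a 5-of-7 tally could look like (illustrative only; the expanded provider set is a roadmap item, not shipped):

```typescript
type Vote = "YES" | "NO";

// Generalized k-of-n tally: a side wins only with at least `threshold` votes.
// Failed or abstaining models contribute null.
function tallyKofN(
  votes: (Vote | null)[],
  threshold: number,
): Vote | "NO_CONSENSUS" {
  const yes = votes.filter((v) => v === "YES").length;
  const no = votes.filter((v) => v === "NO").length;
  if (yes >= threshold) return "YES";
  if (no >= threshold) return "NO";
  return "NO_CONSENSUS";
}

// 5-of-7 tolerates up to two faulty or dissenting models per outcome.
tallyKofN(["YES", "YES", "YES", "YES", "YES", "NO", "NO"], 5); // → "YES"
```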

Specialized evaluation. Different market types need different approaches. A sports outcome and a geopolitical event require different reasoning. Category-specific prompts improve accuracy.

Resolution-as-a-service. The stack is general-purpose. Other protocols building on HIP-4 can use our AI oracle for their own resolution needs. We publish, they read.

The bottom line

Prediction market oracles have a trust problem. Humans with financial incentives shouldn’t decide who wins million-dollar bets.

SignalConsensus replaces human governance with AI consensus — three independent models, 2-of-3 agreement required, every vote recorded on-chain, execution orchestrated by Chainlink CRE.

You can’t buy votes that don’t exist. You can’t bribe a model that has no wallet.


Follow our progress on X or join the waitlist for mainnet.

Ready to trade?

Put your predictions to work on Hyperliquid. Instant fills, no custody risk, fully on-chain.

Start Trading → Browse Markets
