Questions & Answers
Understanding Your Results
Confidence % is the chair model's estimate of how well-supported each claim is, based on the debate. It reflects model agreement — not an absolute measure of truth.
- 100% — all participating models agreed on this point without needing debate.
- 70–99% — most models agreed, or initial disagreement was resolved during debate.
- Below 70% — significant disagreement remained after all debate rounds.
A high confidence score means the AI models converged — it does not guarantee the claim is factually correct. Always verify important claims independently.
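For illustration only, the bands above can be expressed as a small lookup. The function name and wording are hypothetical, not part of Consensable:

```python
def interpret_confidence(confidence: float) -> str:
    """Map a Confidence % value to the band described above (illustrative only)."""
    if confidence == 100:
        return "unanimous: all models agreed without debate"
    if confidence >= 70:
        return "strong: most models agreed, or debate resolved the dispute"
    return "contested: disagreement remained after all debate rounds"

interpret_confidence(85)  # falls in the 70-99% band
```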
Each Key Claim carries a source label showing how consensus was reached:
- consensus — all models agreed from the very first round, with no debate needed.
- debate_resolved — models initially disagreed, but after one or more debate rounds they converged on the same position.
- disputed — models still disagreed after all debate rounds completed. The positions shown under the claim are each model's final stance.
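The labelling rule amounts to two yes/no questions about how the debate went. A hypothetical sketch (Consensable assigns these labels internally; this function is not part of its API):

```python
def source_label(agreed_round_one: bool, converged_in_debate: bool) -> str:
    """Derive a Key Claim's source label from the debate outcome (illustrative)."""
    if agreed_round_one:
        return "consensus"          # all models agreed from the first round
    if converged_in_debate:
        return "debate_resolved"    # disagreement resolved during debate
    return "disputed"               # disagreement survived all rounds
```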
How the Debate Works
Consensable runs a four-step process:
- Step 1 — Query: Your question is sent to all selected models simultaneously. Each answers independently.
- Step 2 — Analysis: The chair model reads all responses and identifies points of consensus and disagreement.
- Step 3 — Debate: For each disputed claim, the models engage in structured rounds. Each model sees the others' positions and can update or defend its own. Disputes run in parallel.
- Step 4 — Synthesis: The chair model produces a final synthesised answer, verdict, confidence scores, and key claims.
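The four steps can be sketched as a pipeline. Everything below is a simplified stand-in with placeholder functions, not Consensable's real implementation:

```python
# Placeholder implementations so the sketch runs end to end.
def ask(model, question):
    return f"{model}'s answer to {question!r}"

def analyse(chair, answers):
    claims = list(answers.values())
    return claims[:1], claims[1:]      # (agreed claims, disputed claims)

def debate(models, claim, max_rounds=3):
    for _ in range(max_rounds):
        pass                           # models see each other's positions and may update
    return claim

def synthesise(chair, agreed, resolved):
    return {"verdict": agreed + resolved, "chair": chair}

def run_consensable(question, models, chair):
    answers = {m: ask(m, question) for m in models}    # Step 1: Query (independent)
    agreed, disputed = analyse(chair, answers)         # Step 2: Analysis
    resolved = [debate(models, c) for c in disputed]   # Step 3: Debate (per dispute)
    return synthesise(chair, agreed, resolved)         # Step 4: Synthesis

result = run_consensable("Is the sky blue?", ["model-a", "model-b"], "chair")
```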
Three safety caps prevent the debate from running too long or costing too much:
- Max Rounds — the maximum number of debate rounds per disputed claim. In each round, every model contributes one response.
- Max Tokens — the total token budget across all model calls. The debate stops early if this limit is hit.
- Max Minutes — a wall-clock time limit. Useful for keeping responses fast on time-sensitive topics.
Synthesis always runs to completion regardless of caps — only the debate phase is cut short.
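The caps combine as a simple "may the debate continue?" check, evaluated before each round. A hypothetical sketch (names and the minute-based clock are assumptions, not the real internals):

```python
import time

def within_caps(round_no, tokens_used, start,
                max_rounds, max_tokens, max_minutes):
    """Return True while the debate phase may continue (illustrative only)."""
    if round_no >= max_rounds:
        return False                                   # Max Rounds hit
    if tokens_used >= max_tokens:
        return False                                   # Max Tokens budget spent
    if (time.monotonic() - start) / 60 >= max_minutes:
        return False                                   # Max Minutes elapsed
    return True
```

Note that this check gates only the debate phase; synthesis runs regardless.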
For questions about recent events, current news, or real-time data, the models' training data may be out of date. Live Web Context fetches fresh information from the web and injects it into all model prompts:
- Auto — a cheap classifier model first decides whether the question needs web search. If yes, Brave Search is used; if not, the web search is skipped.
- Brave — always searches via Brave Search, returning the top 6 results.
- Perplexity — uses Perplexity's AI-powered search to produce a pre-summarised web context.
- None — no web search; models rely entirely on their training data.
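The four modes reduce to a small dispatch. The sketch below uses stub search functions; none of these names belong to Consensable, Brave, or Perplexity:

```python
# Stubs standing in for the real search backends.
def needs_search(q):
    return "today" in q or "latest" in q      # toy classifier, illustrative only

def brave_search(q, top_n):
    return f"top {top_n} Brave results for {q!r}"

def perplexity_summary(q):
    return f"Perplexity summary for {q!r}"

def fetch_web_context(question: str, mode: str):
    """Return web context per the Live Web Context mode (illustrative only)."""
    if mode == "auto":
        # Cheap classifier decides whether search is needed at all.
        mode = "brave" if needs_search(question) else "none"
    if mode == "none":
        return None                           # models rely on training data
    if mode == "brave":
        return brave_search(question, top_n=6)
    if mode == "perplexity":
        return perplexity_summary(question)
    raise ValueError(f"unknown mode: {mode}")
```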
Cost and Billing
Costs are based on the number of tokens (units of text) processed by each AI model. Each model has a different per-token price. The cost shown is the total across all model calls during your query, including the debate and synthesis steps.
The initial cost shown (marked ~) is a real-time estimate based on token counts. The actual cost is confirmed by the model provider a few seconds later, and the display updates automatically.
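The arithmetic behind the estimate is tokens times the model's per-token rate, summed over every call. The prices below are made up for illustration and are not real provider rates:

```python
# Illustrative per-1,000-token prices in dollars (NOT real provider rates).
PRICE_PER_1K = {"model-a": 0.003, "model-b": 0.010, "chair": 0.015}

def estimate_cost(token_counts: dict) -> float:
    """Total cost across all model calls: tokens / 1000 * price per 1k tokens."""
    return sum(tokens / 1000 * PRICE_PER_1K[model]
               for model, tokens in token_counts.items())

# 2.0*0.003 + 1.5*0.010 + 3.0*0.015 = 0.006 + 0.015 + 0.045 = 0.066
cost = estimate_cost({"model-a": 2000, "model-b": 1500, "chair": 3000})
```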