Six advisors. One verdict. Zero hedging.
Run any question, idea, or decision through a structured council that fetches real data, debates it from six angles, peer-reviews itself anonymously, and gives you a Chairman's verdict with kill criteria.
One AI gives you one perspective.
Ask GPT or Claude "should I do X?" and you get a balanced, hedged answer that mostly reflects the model's training average. Useful, but not what you need before a real decision.
The council fixes three failure modes of single-model advice:
- Single perspective: six advisors with deliberately conflicting thinking styles produce tension you can use.
- Hallucinated facts: a recon agent fetches real data first, so advisors aren't inventing market state.
- Sycophancy: advisors review each other anonymously, so they evaluate the argument, not the source.
Six thinking styles. Three natural tensions.
The Contrarian
Looks for the fatal flaw. Names the failure mode in the first 30 days. Tests per-unit economics.
The First Principles Thinker
Asks "what are we really solving?" Strips assumptions. Forces a reframe even if the original survives.
The Expansionist
Names the 10x version. Finds the adjacent market. Maps the long-term moat.
The Customer
Pretends to be the actual end buyer. Surfaces curse-of-knowledge jargon. Names what would stop a sale.
The Executor
"OK, but what do you do Monday morning?" The smallest test. The kill date. The distribution mechanic.
The Operator
Legal entity, liability, refunds, taxes, KYC, regulatory exposure. The parts a strategy memo never mentions.
Tensions: Contrarian vs Expansionist (downside vs upside); First Principles vs Executor (rethink vs ship); Customer + Operator hold the room honest.
Recon before the council deploys.
Most strategy questions depend on current external reality — competitor pricing, regulatory rules, platform behavior. Single-LLM advice often hallucinates these confidently.
Before any advisor sees the question, the facilitator dispatches a recon agent that fetches real data via WebFetch and returns a ≤500-word ground-truth packet — every claim cited with a URL.
```
RECON PACKET
- Stripe per-transaction fee: 2.9% + $0.30 (stripe.com/pricing)
- LLC filing in Delaware: $90 state fee + registered agent
- Competing tool X charges $49/mo (saw landing page X.com)
- Reddit thread, 2026-03: "this category is saturated" (reddit.com/r/X/comments/abc123)
```
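The packet contract (word cap, every claim cited) is easy to enforce mechanically. A minimal sketch of such a check; the function name and the parenthesized-source convention are assumptions for illustration, not the plugin's actual code:

```python
import re

def validate_recon_packet(packet: str, max_words: int = 500) -> list[str]:
    """Check a recon packet against two rules: the word cap and cited claims."""
    problems: list[str] = []
    if len(packet.split()) > max_words:
        problems.append(f"packet exceeds {max_words} words")
    # Every claim line (bullet) must name a source: a domain or URL in parens.
    url_like = re.compile(r"\(([\w.-]+\.\w{2,}[^\s)]*)\)")
    for line in packet.splitlines():
        line = line.strip()
        if line.startswith("-") and not url_like.search(line):
            problems.append(f"uncited claim: {line!r}")
    return problems

packet = """RECON PACKET
- Stripe per-transaction fee: 2.9% + $0.30 (stripe.com/pricing)
- Competing tool X charges $49/mo
"""
print(validate_recon_packet(packet))  # flags the second bullet: no source
```

A check like this runs before any advisor sees the packet, so an uncited claim never becomes "ground truth" downstream.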
Generic answers don't ship.
Each advisor is given an explicit checklist they must address. This is what stops the Contrarian from offering vague, generic pessimism and forces them to identify the actual load-bearing assumption.
Example — The Operator's mandatory checks:
- Legal entity & liability: who is the legal principal, whose name is on the contracts, what's the personal exposure?
- Regulatory exposure: licenses, tax filings, GDPR/CCPA, platform ToS, industry rules?
- Unit economics: what does each transaction cost after fees, refunds, support time, taxes?
- Customer-service shape: what happens with refunds, chargebacks, an angry customer who demands a phone call?
- Failure-mode containment: what's the kill switch and the worst single-incident loss?
If an advisor genuinely has nothing to add on a check (e.g., no legal dimension), they say so explicitly rather than padding.
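The unit-economics check in particular works better as arithmetic than as vibes. A hedged sketch using the Stripe figure from the recon packet example; the refund rate and per-sale support cost are made-up inputs, not claims about any real business:

```python
def net_per_sale(price: float, refund_rate: float = 0.05,
                 support_cost: float = 1.50) -> float:
    """Net revenue per sale after Stripe fees (2.9% + $0.30),
    an assumed refund rate, and an assumed per-sale support cost."""
    stripe_fee = price * 0.029 + 0.30
    expected_refund = price * refund_rate  # assumed: fees not recovered
    return price - stripe_fee - expected_refund - support_cost

# At the competitor's $49 price point from the recon packet:
print(round(net_per_sale(49.0), 2))  # → 43.33
```

Writing it down this way makes the Operator's question answerable: change one assumption and watch the margin move.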
Each advisor reviews the others — blind.
Once all six advisor responses are in, they're anonymized A through F (random map) and re-dispatched to six fresh reviewer agents. Each reviewer answers five questions:
- Which response is the strongest? Why?
- Which has the biggest blind spot? Name the specific thing it's missing.
- What did all six miss? (mandatory — even if you have to dig)
- Which response, if followed alone, would lead to the worst outcome?
- What evidence would change your answer to #1?
Anonymization removes positional bias. Reviewers evaluate the argument, not the brand.
One synthesis. Seven sections. No hedging.
The Chairman gets everything: question, recon packet, six advisor responses (de-anonymized), six peer reviews. The verdict follows a fixed structure:
| Section | What it forces |
|---|---|
| Where the council agrees | High-confidence convergence signals |
| Where the council clashes | Real disagreements, not papered over |
| Blind spots peer review caught | Insights that emerged in round two |
| What the council got wrong | Mandatory: Chairman must disagree on at least one thing |
| Recommendation | What to do, why over alternatives, kill criteria, falsification trigger |
| The one thing to do first | Single concrete next action — not a list of ten |
| What to verify before acting | Pre-action recon to protect against stale reality |
The Chairman can — and should — disagree with the majority if the minority case is stronger. The job is the right answer, not the popular one.
Every session produces three artifacts.
council-report-[timestamp].html
A clean briefing document. Framed question, Chairman's verdict prominent, alignment table, recon packet with URLs, collapsible advisor responses, peer review highlights. Opens immediately after generation.
council-transcript-[timestamp].md
Full record. Original question, framed question, recon packet, all six advisor responses, all six peer reviews with the anonymization map revealed, and the Chairman's synthesis. The artifact from which a future agent can re-enter the conversation.
council-followups-[timestamp].md
Pre-action checklist + post-action checkpoints + re-council triggers. This is the file that turns a one-shot council into a continuous decision tool.
One install. Then invoke per question.
```
/plugin marketplace add Coherence-Daddy/advisory-board
/plugin install advisory-board@advisory-board
/reload-plugins
```
Then any agent can invoke the council via the Skill tool, or you can prefix any question with:
```
Use advisory-board to council this: I'm thinking of [decision].
Audience [profile]. Constraints [budget, time]. What's the best move?
```
The skill loads on demand — its body doesn't bloat your session context until you invoke it.
Knowing when to skip is part of the skill.
A council is overkill for a lot of questions. The facilitator is trained not to deploy the machinery when:
- You've already run this topic through the council recently and the new question is a narrow, specific follow-up within the prior verdict's scope. Just answer it using the prior verdict as context.
- The question is about executing a prior recommendation. Drop into normal assistant mode, not council mode.
- The question is purely factual or mechanical. "How does Stripe webhook signing work?" doesn't need six advisors.
A council is the right tool when:
- The decision is consequential, multi-dimensional, or has significant downside.
- You suspect you have blind spots or you're emotionally invested in one answer.
- External reality (markets, regulations, platforms) materially affects the answer.
One real decision is all it takes.
Pick something you've been chewing on for more than a week — a pricing change, a launch decision, a partnership offer, a pivot. Something with real downside. Run it through the council. Worst case: 10 minutes and confirmation. Best case: the council catches the one thing that would have killed the plan.