A new shape for hard thinking
A panel of experts, on tap.
Perspectra replaces the single-stream chatbot with a structured, multi-expert deliberation surface — so you stop consuming answers and start thinking.
The problem
One stream of text is the wrong shape for hard thinking.
Modern LLM tools default to a single stream of friendly text. You ask, the model agrees, you copy, you ship. That's fine for boilerplate — but for the work that matters, agreement is the failure mode.
Multi-agent personas were supposed to fix this. In practice, today's group-chat “experts” broadcast to everyone, drown out disagreement, and bury rationale under volume. So users coast.
Hard thinking needs structure: someone to claim, someone to push back, room to fork the conversation, and a map you can step away from and come back to.
The forum
A threaded panel discussion, not a group chat.
Every post is tagged with what it’s doing. Every reply is yours to direct. @-mention any expert and they join the thread; branch a sub-thread when an idea is worth its own room.
How should we evaluate trust in AI-mediated family conversations — calibration over time, or absolute trust scores at a single timepoint?
Calibration matters more than absolute trust. In family contexts, the question is whether users adjust trust as the system’s reliability changes over time — not whether they trust it on day one.
Backed by Lee & See (2004) and recent replications — overtrust collapses fastest on high-stakes turns where the cost of error is socially visible.
But measuring trust calibration without disclosing the consent envelope inflates baselines — participants will anchor on a model of the system that doesn’t exist.
Good push. @Cognitive Scientist @Privacy Advocate — could a between-subjects study with disclosure-on / disclosure-off conditions separate the two effects? What sample size would we need to detect a medium effect?
Powered for d = 0.5 at α = .05, you’d want roughly 64 per cell. If the trust delta you care about is smaller, you may want a within-subjects design with counterbalanced disclosure — but watch for carryover. Want me to sketch the trial structure?
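The panelist’s ~64-per-cell figure is a standard two-sample power calculation, and it checks out. A minimal sketch using the normal approximation (the function name and defaults are ours for illustration, not part of Perspectra):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per arm for a two-sided, two-sample comparison
    (normal approximation to the t-test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for alpha = .05
    z_beta = z.inv_cdf(power)           # 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.5))  # 63 — the exact t-based figure is ~64 per cell
```

The normal approximation lands at 63; the small-sample t correction nudges it to the commonly quoted 64.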
How it works
Three primitives. One way to think harder.
Choose your panel
@-mention any expert and they join the thread. Branch a sub-thread when an idea deserves its own room. The panel is yours to compose — not a fixed group chat.
See the argument
Every response is tagged with what it's doing — Issue, Claim, Support, Rebut, Question. You see the moves, not just the words. Disagreement becomes legible instead of hidden in prose.
Navigate the map
A live mind map turns the deliberation into a navigable graph. Zoom out for the structure of the disagreement; zoom in for the substance of any single move.
The map
When the thread gets long, the map keeps you oriented.
The mind map renders every claim, support, rebut, and question as a node — with semantic zoom, so the structure is visible at altitude and the substance is one click away.
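Under the hood, a thread like this is just a typed reply tree: each post carries its argument move, and the map is a rendering of that tree. A hypothetical sketch (names are illustrative, not Perspectra’s actual schema):

```python
from dataclasses import dataclass, field
from enum import Enum

class Move(Enum):
    """The argument moves a post can be tagged with."""
    ISSUE = "issue"
    CLAIM = "claim"
    SUPPORT = "support"
    REBUT = "rebut"
    QUESTION = "question"

@dataclass
class Post:
    author: str
    move: Move
    text: str
    replies: list["Post"] = field(default_factory=list)

def moves(root: Post) -> list[str]:
    """Flatten a thread into its sequence of moves — the 'altitude' view."""
    out = [root.move.value]
    for reply in root.replies:
        out.extend(moves(reply))
    return out

thread = Post("Moderator", Move.ISSUE, "Calibration vs. absolute trust?", [
    Post("Cognitive Scientist", Move.CLAIM, "Calibration matters more.", [
        Post("Privacy Advocate", Move.REBUT, "Disclosure inflates baselines."),
    ]),
])
print(moves(thread))  # ['issue', 'claim', 'rebut']
```

Semantic zoom then amounts to choosing how much of each node to render: the move tag at altitude, the full text up close.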
The work behind it
Backed by a line of research, not a weekend prototype.
Perspectra is the commercial follow-up to three years of HCI research on how people actually think with AI. Three papers; one consistent finding — agency and structure beat volume.
CoQuest
How co-creating research questions with an LLM agent affects creativity, control, and trust.
Read paper
PersonaFlow
When users design their own expert personas, ideation improves and over-reliance drops.
Read paper
Perspectra
Forum-style multi-agent deliberation makes critical thinking measurably better than group chat.
Read paper
Who it’s for
Anyone whose work involves a hard question and several legitimate answers.
Researchers
Stage an interdisciplinary panel on a draft proposal before your committee does. Branch a thread per reviewer hat. Capture the disagreement, not just the consensus.
Analysts & consultants
Run any market thesis past four named perspectives. The rebuts are surfaced as rebuts. The supporting evidence cites itself. You arrive at the meeting with the counter-arguments already filed.
Policy & strategy teams
Map a policy question across constituencies. Branch a thread per stakeholder. The mind map collapses everything into a single picture you can carry into the briefing.
Founders & PMs
Walk a roadmap into a panel of an SRE, a designer, a finance lead, and your spikiest customer. Get the hard pushback before your team is in the room.
FAQ
Things people ask.
How is this different from ChatGPT custom GPTs or a multi-agent group chat?
Where do agents get their knowledge?
Can I bring my own papers, briefs, or documents?
Which model powers the agents?
Is my data private?
When can I use it?
The waitlist
Be the first to think with Perspectra.
We’re onboarding the first cohort of researchers, analysts, and operators. No spam, no clichéd onboarding email — just a note when it’s your turn.