Perspectra

A new shape for hard thinking

A panel of experts, on tap.

Perspectra replaces the single-stream chatbot with a structured, multi-expert deliberation surface — so you stop consuming answers and start thinking.

See how it works. No credit card. Early-access cohort opening soon.

The problem

One stream of text is the wrong shape for hard thinking.

Modern LLM tools default to a single stream of friendly text. You ask, the model agrees, you copy, you ship. That's fine for boilerplate — but for the work that matters, agreement is the failure mode.

Multi-agent personas were supposed to fix this. In practice, today's group-chat “experts” broadcast to everyone, drown out disagreement, and bury rationale under volume. So users coast.

Hard thinking needs structure: someone to claim, someone to push back, room to fork the conversation, and a map you can step away from and come back to.

The forum

A threaded panel discussion, not a group chat.

Every post is tagged with what it’s doing. Every reply is yours to direct. @-mention any expert and they join the thread; branch a sub-thread when an idea is worth its own room.

Thread: Trust in family CAs (Methods)
Moderator · ISSUE · 2m

How should we evaluate trust in AI-mediated family conversations — calibration over time, or absolute trust scores at a single timepoint?

HCI Researcher · CLAIM · 2m

Calibration matters more than absolute trust. In family contexts, the question is whether users adjust trust as the system’s reliability changes over time — not whether they trust it on day one.

Cognitive Scientist · SUPPORT · 1m

Backed by Lee & See (2004) and recent replications — overtrust collapses fastest on high-stakes turns where the cost of error is socially visible.

Privacy Advocate · REBUT · 1m

But measuring trust calibration without disclosing the consent envelope inflates baselines — participants will anchor on a model of the system that doesn’t exist.

You · just now

Good push. @Cognitive Scientist @Privacy Advocate — could a between-subjects study with disclosure-on / disclosure-off conditions separate the two effects? What sample size would we need to detect a medium effect?

Cognitive Scientist · QUESTION · just now

Powered for d = 0.5 at α = .05 (two-tailed, 80% power), you'd want roughly 64 per cell. If the trust delta you care about is smaller, you may want a within-subjects design with counterbalanced disclosure — but watch for carryover effects. Want me to sketch the trial structure?

Type to reply, @-mention to invite an expert, or branch this into a new thread (⌘↵).
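
(An aside for the statistically minded: the 64-per-cell figure in the demo is the standard two-sample power calculation, assuming a two-tailed α = .05 and 80% power, so z_{α/2} = 1.96 and z_{β} = 0.84:

    n per cell = 2 × (z_{α/2} + z_{β})² / d² = 2 × (1.96 + 0.84)² / 0.5² ≈ 63,

which the exact t-test calculation nudges up to 64.)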

How it works

Three primitives. One way to think harder.

01

Choose your panel

@-mention any expert and they join the thread. Branch a sub-thread when an idea deserves its own room. The panel is yours to compose — not a fixed group chat.

02

See the argument

Every response is tagged with what it's doing — Issue, Claim, Support, Rebut, Question. You see the moves, not just the words. Disagreement becomes legible instead of hidden in prose.

03

Navigate the map

A live mind map turns the deliberation into a navigable graph. Zoom out for the structure of the disagreement; zoom in for the substance of any single move.
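
Concretely, the three primitives can be pictured as one small record per post. A minimal sketch in TypeScript (illustrative only; the field names are our assumption, not Perspectra's actual schema):

    // Illustrative sketch, not Perspectra's real data model.
    type Move = "ISSUE" | "CLAIM" | "SUPPORT" | "REBUT" | "QUESTION";

    interface Post {
      id: string;
      author: string;      // a composed expert ("HCI Researcher") or you (01)
      move: Move;          // what the post is doing, not just what it says (02)
      body: string;
      mentions: string[];  // @-mentions that pull experts into the thread (01)
      replyTo?: string;    // the post this one responds to
      threadId: string;    // a branched sub-thread gets its own id (03)
    }

Because the move and the thread are data rather than prose, disagreement can be filtered, counted, and, as the next section shows, drawn as a map.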

The map

When the thread gets long, the map keeps you oriented.

The mind map renders every claim, support, rebut, and question as a node — with semantic zoom, so the structure is visible at altitude and the substance is one click away.
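
Deriving that graph from a thread is mechanical: every post becomes a node, every reply becomes an edge. A sketch, reusing the illustrative Post type from the previous section:

    // Build the deliberation graph from a flat list of posts.
    interface MapNode { id: string; move: Move; label: string }
    interface MapEdge { from: string; to: string }

    function toGraph(posts: Post[]): { nodes: MapNode[]; edges: MapEdge[] } {
      return {
        nodes: posts.map(p => ({
          id: p.id,
          move: p.move,
          label: p.body.slice(0, 60), // semantic zoom: a summary at altitude
        })),
        edges: posts
          .filter(p => p.replyTo !== undefined)
          .map(p => ({ from: p.replyTo!, to: p.id })),
      };
    }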

[Mind map: an ISSUE node ("How to evaluate trust in AI-mediated family CAs?") branches into two CLAIM nodes ("Calibration matters more than absolute trust scores"; "Disclosure of consent envelope inflates baselines"), a SUPPORT node ("Lee & See (2004): overtrust collapses on high-stakes turns"), a REBUT node ("Without consent envelope, users anchor on a fiction"), and a QUESTION node ("Sample size for d = 0.5 between-subjects?"). Legend: ISSUE, CLAIM, SUPPORT, REBUT, QUESTION.]

The work behind it

Backed by a line of research, not a weekend prototype.

Perspectra is the commercial follow-up to three years of HCI research on how people actually think with AI. Three papers, one consistent finding: agency and structure beat volume.

Who it’s for

Anyone whose work involves a hard question and several legitimate answers.

Researchers

Prompt: @HCI Methodologist @Statistician — what's the smallest sample size that would credibly detect a between-subjects effect for our trust study?

Stage an interdisciplinary panel on a draft proposal before your committee does. Branch a thread per reviewer hat. Capture the disagreement, not just the consensus.

Analysts & consultants

Prompt: @Bull @Bear @Regulator @Field Ops — stress-test the thesis that vertical AI agents collapse the SaaS stack within 18 months.

Run any market thesis past four named perspectives. Rebuttals are surfaced as rebuttals; supporting evidence arrives with its citations attached. You arrive at the meeting with the counter-arguments already filed.

Policy & strategy teams

Prompt: @Constituent Advocate @Civil-Service Lawyer @Economist — map the second-order effects of capping a benefit at $X.

Map a policy question across constituencies. Branch a thread per stakeholder. The mind map collapses everything into a single picture you can carry into the briefing.

Founders & PMs

Prompt: @SRE @Product Designer @Finance @Customer #42 — walk this roadmap and tell me where it breaks first.

Walk your roadmap past a panel of an SRE, a designer, a finance lead, and your spikiest customer. Get the hard pushback before your team is in the room.

FAQ

Things people ask.

How is this different from ChatGPT custom GPTs or a multi-agent group chat?
Custom GPTs and group-chat multi-agent systems broadcast every message to every agent and stack responses in a single stream. You read passively, you can't steer who responds, and disagreement gets averaged out. Perspectra flips the default: you compose the panel, every move is labeled (Claim, Support, Rebut, Question), and parallel topics live in their own threads with a shared map. We've tested the design head-to-head against the group-chat baseline — forum-style deliberation produces more disagreement, more revisions, and stronger work.
Where do agents get their knowledge?
Each agent has a persona profile (discipline, methods, focus areas), a long-term memory of the deliberation so far, and access to a shared literature retrieval layer powered by Semantic Scholar and OpenAlex. Retrieved papers are inserted into a graph-RAG index agents can query. Citations are surfaced inline so you can verify what the panel is leaning on.
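For a flavor of what that retrieval layer calls out to, here is a minimal sketch against Semantic Scholar's public paper-search endpoint (the endpoint is real; the surrounding function is our illustration, not Perspectra's code):

    // Illustrative sketch: query Semantic Scholar's public search API.
    async function searchLiterature(query: string) {
      const url = new URL("https://api.semanticscholar.org/graph/v1/paper/search");
      url.searchParams.set("query", query);
      url.searchParams.set("fields", "title,year,abstract,authors");
      url.searchParams.set("limit", "10");
      const res = await fetch(url);
      if (!res.ok) throw new Error(`Semantic Scholar returned ${res.status}`);
      const { data } = await res.json();
      return data; // candidate papers to index into the graph-RAG layer
    }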
Can I bring my own papers, briefs, or documents?
Yes — the prototype ingests PDFs and markdown into the same retrieval layer the agents use. Upload a brief and the panel will reference it the way they reference public literature.
Which model powers the agents?
Model-agnostic by design. The prototype was tested with GPT-5, Claude Sonnet 4.5, Gemini 2.5 Pro, and Llama-4 Maverick. We’ll ship with sensible defaults and a setting to swap providers.
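In practice that means the model is a setting rather than a hard dependency. A sketch of what such a setting could look like (names and shape are placeholders, not the shipped config):

    // Placeholder settings object; provider names are illustrative.
    const settings = {
      defaultModel: "claude-sonnet-4.5",
      fallbackModel: "gpt-5",
      providers: {
        anthropic: { apiKey: process.env.ANTHROPIC_API_KEY },
        openai:    { apiKey: process.env.OPENAI_API_KEY },
        google:    { apiKey: process.env.GOOGLE_API_KEY },
      },
    };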
Is my data private?
Your threads, documents, and notes stay yours. We’ll never use your work to train models. Self-hosted and BYOK options are on the roadmap for teams with stricter requirements.
When can I use it?
We’re onboarding the first cohort now. Drop your email and we’ll be in touch as we open up access.

The waitlist

Be the first to think with Perspectra.

We’re onboarding the first cohort of researchers, analysts, and operators. No spam, no clichéd onboarding email — just a note when it’s your turn.