
Replacing dashboards with inspectable agents

Dashboards answer the questions you had when you built them. Agents answer questions you didn't anticipate — but only if you can see through them.

Dashboards are good at answering questions you anticipated. A portfolio tracker shows the numbers you configured it to show. A chart with pre-set indicators tells you what those indicators display. The knowledge is crystallized at build time, and that's fine — for the questions you knew you'd have.

The problem with market analysis is that the interesting questions arrive after you open the chart. You notice something in the structure, or a level keeps appearing, or the volume profile looks unusual. A dashboard built before you noticed the thing can't answer a question about it.

Agents fill that gap — and create a new one

An agent can answer questions you didn't anticipate. Ask it to find confluence across timeframes, characterize the recent structure, or compare this week's volume profile against last week's — none of that needs to be wired into a static UI in advance.
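What makes those ad-hoc questions answerable is that the agent calls tools rather than relying on a fixed UI. A minimal sketch of what a tool schema might look like (the shape and field names here are assumptions for illustration; only the tool name `detect_key_levels` comes from this post):

```typescript
// Hypothetical tool schema, in the style of common LLM function-calling APIs.
// Field names and parameters are illustrative, not TradeApe's actual definitions.
type ToolSchema = {
  name: string;
  description: string;
  parameters: Record<string, { type: string; description: string }>;
};

const detectKeyLevels: ToolSchema = {
  name: "detect_key_levels",
  description: "Find support/resistance levels for a symbol over a lookback window",
  parameters: {
    symbol: { type: "string", description: "e.g. BTC-USD (illustrative)" },
    timeframe: { type: "string", description: "e.g. 1h, 4h, 1d" },
    lookbackBars: { type: "number", description: "how many bars to scan" },
  },
};
```

Because the question is expressed as a tool call at ask time, nothing about it needs to be wired into the UI in advance.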

But agents introduce opacity that dashboards don't have. A dashboard shows its work through its interface. The chart is visible. The indicators are labeled. The numbers are there to read. When something is wrong, you can usually trace it: the indicator is misconfigured, the date range is off, the bar is mislabeled.

An agent that gives you a confident paragraph about support levels doesn't show its work by default. If the levels are wrong, the sentence pattern looks identical to when they're right. Fluency and accuracy are different properties, and prose alone doesn't distinguish them.

Three things TradeApe makes visible

Inspectability in TradeApe isn't a single feature — it's a few small surfaces that together let you follow the chain from question to claim.

What was queried. Every chat response that required tool calls includes a collapsible indicator showing which queries ran: price stats, key levels, chart drawing, indicator snapshot. During active queries it shows what's running live. If the response names a resistance level, you can verify that detect_key_levels ran. If it didn't, the level wasn't computed — it was generated.
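The check described above can be sketched as a small data model: each response carries a record of the tool calls it triggered, and the UI (or the reader) can interrogate that record. Field names here are assumptions, not TradeApe's internals:

```typescript
// Hypothetical per-response tool-call record backing a collapsible
// "what was queried" indicator. Names are illustrative.
type ToolCall = {
  tool: string;                         // e.g. "detect_key_levels"
  status: "running" | "done" | "error"; // "running" renders during active queries
  startedAt: number;                    // epoch ms
};

type AgentResponse = {
  text: string;
  toolCalls: ToolCall[];
};

// If a response names a resistance level, the record can be checked:
// did detect_key_levels actually run to completion?
function computedKeyLevels(r: AgentResponse): boolean {
  return r.toolCalls.some(
    (c) => c.tool === "detect_key_levels" && c.status === "done"
  );
}

const resp: AgentResponse = {
  text: "Resistance sits near the prior swing high.",
  toolCalls: [{ tool: "detect_key_levels", status: "done", startedAt: 0 }],
};
console.log(computedKeyLevels(resp)); // prints true
```

A response whose `toolCalls` list is empty fails this check: the level was generated, not computed.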

What was drawn and why. The layer controls panel lists each overlay as a named layer with its type, source, creation time, and — for Fibonacci drawings — the anchor selection method. Hovering a layer shows: type: fibonacci · source: agent · fib anchor method: pivot_algorithm. That's the difference between "the agent drew a Fib" and "the agent drew a Fib using pivot-detected anchors, from a specific high and low, which you can verify."
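The hover text quoted above implies a per-layer metadata shape. A sketch of what that could look like (field names and the enum values other than those quoted in this post are assumptions):

```typescript
// Hypothetical layer metadata mirroring the hover text described above.
type Layer = {
  name: string;
  type: "fibonacci" | "trendline" | "zone"; // only "fibonacci" is from the post
  source: "agent" | "user";
  createdAt: string;                        // ISO timestamp
  fibAnchorMethod?: "pivot_algorithm" | "manual"; // present for Fib drawings
};

function hoverText(l: Layer): string {
  const parts = [`type: ${l.type}`, `source: ${l.source}`];
  if (l.fibAnchorMethod !== undefined) {
    parts.push(`fib anchor method: ${l.fibAnchorMethod}`);
  }
  return parts.join(" · ");
}

const fib: Layer = {
  name: "Fib retracement",
  type: "fibonacci",
  source: "agent",
  createdAt: "2024-05-01T12:00:00Z",
  fibAnchorMethod: "pivot_algorithm",
};
console.log(hoverText(fib));
// prints: type: fibonacci · source: agent · fib anchor method: pivot_algorithm
```

The point of the optional `fibAnchorMethod` field is that anchor provenance travels with the drawing itself rather than living only in the chat transcript.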

Where the data came from. The provenance badge in the app header shows the feed, the exchange (or "unknown exchange"), and whether the latest candle is final or still forming. This isn't decorative. "Exchange: unknown" means volume figures are feed-local, not consolidated market depth. "Latest candle live" means the last bar is still building. The analysis has to be read in that light.
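The badge states described above reduce to a small provenance model. A sketch, assuming field names of my own (the displayed strings "unknown exchange" and "latest candle live" are from this post):

```typescript
// Hypothetical provenance model behind the header badge.
type Provenance = {
  feed: string;
  exchange: string | null;    // null renders as "unknown exchange"
  latestCandleFinal: boolean; // false: the last bar is still building
};

function badgeText(p: Provenance): string {
  const exchange = p.exchange ?? "unknown exchange";
  const candle = p.latestCandleFinal
    ? "latest candle final"
    : "latest candle live";
  return `${p.feed} · ${exchange} · ${candle}`;
}

console.log(
  badgeText({ feed: "questdb", exchange: null, latestCandleFinal: false })
);
// prints: questdb · unknown exchange · latest candle live
```

Rendering the limitation explicitly, rather than defaulting it away, is what lets the reader discount feed-local volume figures before acting on them.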

BYOK and AGPL as inspection depth

Inspectability at the UI level is one layer. BYOK and AGPL push it deeper.

BYOK means the API key lives in your browser's localStorage. There's no server-side proxy handling requests on your behalf. The messages going to the model are exactly the ones the app sends — visible in your browser's network tab if you want to look. The conversation isn't routed through a service you can't see into.
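The BYOK flow above can be sketched in a few lines. The storage key name, endpoint, and function names here are assumptions for illustration, not TradeApe's actual values; the store is parameterized so the same code reads from `window.localStorage` in a browser:

```typescript
// Minimal BYOK sketch: key read from a localStorage-style store,
// request sent directly from the browser with no server-side proxy.
type KVStore = { getItem(key: string): string | null };

const API_KEY_SLOT = "tradeape.apiKey"; // hypothetical storage key

function getApiKey(store: KVStore): string | null {
  // The key never leaves the browser except inside the request
  // the app itself sends, which is visible in the network tab.
  return store.getItem(API_KEY_SLOT);
}

async function askModel(
  store: KVStore,
  messages: { role: string; content: string }[]
): Promise<Response> {
  const key = getApiKey(store);
  if (key === null) throw new Error("No API key set");
  // Direct browser-to-provider call; the payload is exactly
  // what appears in the network inspector.
  return fetch("https://api.example-provider.com/v1/chat", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${key}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ messages }),
  });
}
```

In the browser you would call `getApiKey(window.localStorage)`; the design choice is simply that there is no intermediate server whose behavior you would have to trust or audit separately.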

AGPL means the analysis logic is the source code. The system prompt, the tool schemas, the channel parallelism test, the Fibonacci anchor requirement — all of it is readable, forkable, and modifiable. You're not trusting an opaque hosted backend to handle your market questions correctly. You can read exactly what the agent is being told to do, and change it if you disagree.

Together they mean the inspection surface runs from the chat UI down to the deployed code. Nothing about how the analysis works is hidden behind a service boundary.

What this doesn't solve

Inspectability is not a complete answer to the problem of AI trust in financial analysis. Visible tool calls tell you that a query ran; they don't validate that the query returned the right answer for the question you actually had. Named layers tell you the source of an overlay; they don't confirm the overlay is analytically meaningful. Provenance badges show the feed's limitations; they don't substitute for proper exchange-attributed data.

These surfaces reduce the gap between "the agent said it" and "the data supports it" — but they don't close it. The remaining gap is the analyst's job.

The actual design space

The question isn't dashboards or agents. Dashboards earn their place for known questions asked repeatedly. Agents earn their place for questions that arrive at the chart.

The design question is what you build on top of agents to make them usable for anything you might act on. TradeApe's answer is specific: visible queries, named layers, explicit provenance, open source. Not a general solution, but a concrete set of affordances that make the analysis something you can follow rather than just receive.

That's the part dashboards never needed to solve — and the part agentic tooling has to get right.

Try TradeApe locally. Open source, BYOK, QuestDB-first.