Ask a language model to analyze a chart and it will produce something that sounds right. "Support around $82,000." "RSI appears elevated." "A potential ascending channel forming on the daily." These are the sentences that fit — the patterns a model trained on financial text has learned to produce. Whether they correspond to the actual data is a different question, and without structure around it, the model can't answer it.
This is the core design problem in agentic charting. Language models don't read charts. They read descriptions. If the only input is a text prompt asking about BTC price action, the model is generating the most plausible analysis — not computing one.
Tool calls before claims
TradeApe's approach is to run bounded, deterministic tool calls before the model makes any market claim. The tools return structured data. The model's job is to explain what they returned.
The tool list reads like a constraint surface:
- detect_key_levels — support and resistance candidates from sampled candles, scored and clustered
- detect_volume_nodes — price-volume profile nodes: POC, HVN, LVN, value area boundaries
- detect_structure_channels — swing-sequence structure with an explicit parallelism test
- get_indicator_snapshot — actual RSI, MACD, and Bollinger values for a sampled window
- select_swing_annotations — Fibonacci and trendline anchors from pivot detection
The prompt instruction for each tool isn't "you can use this" — it's "call this
before making this kind of claim." Before saying RSI is elevated, call
get_indicator_snapshot and cite the returned value. Before drawing
support, call detect_key_levels and draw only from what it returns.
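The "call this before making this kind of claim" rule can also be encoded outside the prompt, as a check the agent runtime performs. A minimal sketch, assuming a claim-to-tool map (the names and categories here are illustrative, not TradeApe's actual schema):

```typescript
// Hypothetical claim-gating map: each claim category names the tool
// whose output must be present before the model may assert it.
type ClaimKind = "indicator_state" | "support_resistance" | "channel";

const requiredTool: Record<ClaimKind, string> = {
  indicator_state: "get_indicator_snapshot",
  support_resistance: "detect_key_levels",
  channel: "detect_structure_channels",
};

// A claim is allowed only if its gating tool was called this turn.
function claimAllowed(kind: ClaimKind, calledTools: Set<string>): boolean {
  return calledTools.has(requiredTool[kind]);
}
```

Under a scheme like this, a response that mentions support without a `detect_key_levels` call in the transcript can be rejected mechanically, before any language-level rule is consulted.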
Levels with receipts
detect_key_levels runs over sampled candles and scores candidates
across several dimensions: how many times price touched the level, the volume
ratio at those candles compared to the window average, recency, and distance from
current price. Nearby candidates are clustered. The result is a ranked list with
each level's source, reason, confidence, and touch count.
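The scoring and clustering described above might look like the following sketch. The weights, saturation points, and clustering tolerance are assumptions for illustration, not TradeApe's actual coefficients:

```typescript
// Illustrative scoring for a support/resistance candidate.
interface LevelCandidate {
  price: number;
  touches: number;      // times price tested the level
  volumeRatio: number;  // volume at touches vs. window average
  recency: number;      // 0..1, 1 = most recent touch is newest candle
  distance: number;     // |price - current| / current price
}

function scoreLevel(c: LevelCandidate): number {
  const touchScore = Math.min(c.touches / 4, 1);      // saturate at 4 tests
  const volumeScore = Math.min(c.volumeRatio / 2, 1); // 2x average caps out
  const proximity = Math.max(0, 1 - c.distance * 10); // fades past 10% away
  return 0.4 * touchScore + 0.25 * volumeScore + 0.2 * c.recency + 0.15 * proximity;
}

// Cluster candidates whose prices sit within a tolerance of each other,
// keeping the strongest member of each cluster.
function clusterLevels(cands: LevelCandidate[], tol = 0.005): LevelCandidate[] {
  const sorted = [...cands].sort((a, b) => a.price - b.price);
  const out: LevelCandidate[] = [];
  for (const c of sorted) {
    const last = out[out.length - 1];
    if (last && Math.abs(c.price - last.price) / last.price < tol) {
      if (scoreLevel(c) > scoreLevel(last)) out[out.length - 1] = c;
    } else {
      out.push(c);
    }
  }
  return out;
}
```

The clustering step matters as much as the scoring: without it, a level tested at $83,240 and again at $83,300 would surface as two weak candidates instead of one strong one.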
{
price: 83240,
type: "resistance",
source: "swing_high",
reason: "Swing high tested 4 times in the 7d window",
strength: 4,
confidence: 0.81,
touches: 4,
volumeRatio: 1.4
}

The model draws from this object. If it says "resistance at $83,240, tested four times with above-average volume," that claim is directly traceable to the tool output. The prompt forecloses the alternative: Do not invent additional support/resistance levels outside the tool output.
The channel test
The clearest example of why this matters is channel detection.
Without tooling, a model that sees a description mentioning two rising trendlines will call them a channel. That's the plausible thing to say. The word "channel" has a geometric requirement — the rails need to be roughly parallel — but language generation doesn't check geometry.
detect_structure_channels does. It computes the slope of both
trendlines and checks whether they diverge past a threshold. If the slope
difference ratio exceeds 0.35, the tool returns a rejectedChannel
instead of a channel:
if (!sameDirection || slopeDiffRatio > 0.35) {
return {
channel: null,
rejectedChannel: {
support,
resistance,
reason: `Support and resistance pivots are not parallel enough
for a channel: slope difference ratio ${slopeDiffRatio.toFixed(2)}.`,
},
}
}
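The slope-difference ratio in the guard above could be computed along these lines. This is a sketch; the two-point slope fit and the normalization by the larger slope magnitude are assumptions about the implementation:

```typescript
// Compare the slopes of two pivot sequences. The ratio normalizes the
// slope gap by the larger magnitude, so 0 means perfectly parallel and
// values near 1 mean one rail is flat relative to the other.
interface Pivot { x: number; price: number } // x = candle index

function slope(pivots: Pivot[]): number {
  const first = pivots[0];
  const last = pivots[pivots.length - 1];
  return (last.price - first.price) / (last.x - first.x);
}

function slopeDiffRatio(support: Pivot[], resistance: Pivot[]): number {
  const s = slope(support);
  const r = slope(resistance);
  const maxMag = Math.max(Math.abs(s), Math.abs(r));
  return maxMag === 0 ? 0 : Math.abs(s - r) / maxMag;
}
```

Two rails rising at 10 and 5 points per candle give a ratio of 0.5, past the 0.35 threshold, so that pair would come back as a rejectedChannel even though both lines rise.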
The model has a matching hard rule in the prompt: if the tool returns
rejected_channel, do not call the chart a channel.
Describe the separate rails, or the wedge-like structure, instead.
The geometric check happens in code. The language rule enforces the result. The two together close the gap that fluency alone would leave open.
Drawing is not concluding
Models tend to collapse rendering and analysis. Drawing a support line and concluding there is support feel like the same step — the line appeared, so the level is real. TradeApe separates them structurally in the response format:
- Layers rendered — what was drawn: chart focus, overlays, indicators
- Observed technical read — what queried values support, with cited numbers
- Caveats — live candles, provisional anchors, weak evidence, provenance limits
The prompt states this directly: Do not treat chart rendering as analytical evidence. A chart focus or drawing result only tells you that a layer was requested for the UI.
In practice, this means an answer that draws a Fibonacci retracement also has to say how the anchors were chosen — because the anchor selection is where most of the analytical weight sits.
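The rendering/analysis separation above could be modeled as a response type whose fields keep the two apart, so a drawn layer can never stand in for a conclusion. A sketch; the field names are illustrative, not TradeApe's actual contract:

```typescript
// Hypothetical shape of the three-part response: what was rendered,
// what the queried values support, and what weakens the read.
interface ChartResponse {
  layersRendered: string[];  // what was drawn (UI only, not evidence)
  observedRead: { claim: string; citedValue: number; tool: string }[];
  caveats: string[];         // live candles, provisional anchors, ...
}

// An observed read is grounded only if every claim cites a tool
// and a finite value from that tool's output.
function isGrounded(r: ChartResponse): boolean {
  return r.observedRead.every(
    (o) => o.tool.length > 0 && Number.isFinite(o.citedValue)
  );
}
```

Structurally, nothing in `layersRendered` feeds `isGrounded`: rendering a support line contributes no evidence at all.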
Fib anchor provenance
Every Fibonacci draw_on_chart call must include a
fib_anchor object describing how the anchors were selected:
{
selection_method: "pivot_algorithm",
from: { role: "high", basis: "wick" },
to: { role: "low", basis: "wick" }
}

pivot_algorithm means a deterministic swing rule found the anchors.
agent_selected means the model inferred them, and the Fib is
provisional — the response must say so. The schema doesn't prevent agent-selected
Fibs; it makes the selection method visible so the reader knows how much weight to
give the levels.
When select_swing_annotations is called first, it returns
draw-ready anchor objects with timestamps, prices, pivot window, and wick/close
basis. The model passes those through to draw_on_chart without
substituting its own anchors. The pivot rule ran; its output is what gets drawn.
Language as a constraint map
The confidence vocabulary could be mistaken for simple style guidance. It's not; it's a mapping from evidence state to permitted language:
- rendered — a UI layer was requested
- shows — the query returned this value directly
- suggests — indicators lean one direction, confirmation incomplete
- supports — multiple queried values align
- requires validation — anchors are agent-selected, the candle is live, or follow-through is missing
Words like "confirmed," "textbook," and "locked in" are explicitly blocked unless tool confidence is high, the candle is closed, and the answer includes the relevant invalidation. The effect is that strong language has to be earned by the data, not generated by pattern-matching on financial idiom.
Similarly: VWAP can only be called institutional or session-anchored if provenance includes a named venue and explicit session anchor. Volume-node language ("magnetic price," "fair value," "acceptance") is blocked unless the tool output directly supports that claim. RSI cannot be described as defaulting to period 14 unless the draw call explicitly set period 14.
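A gate like this is straightforward to express in code. The preconditions follow the ones named above; the 0.8 confidence threshold and the verb fallbacks are assumptions for illustration:

```typescript
// Evidence state a response must carry before strong language is allowed.
interface EvidenceState {
  toolConfidence: number;     // from tool output, 0..1
  candleClosed: boolean;      // live candles forbid strong language
  invalidationStated: boolean; // the answer names its invalidation level
}

// Strong words ("confirmed", "textbook", "locked in") require every
// precondition; the threshold here is an assumed value.
function strongLanguageAllowed(e: EvidenceState): boolean {
  return e.toolConfidence >= 0.8 && e.candleClosed && e.invalidationStated;
}

// Fall back down the vocabulary ladder as evidence weakens.
function permittedVerb(e: EvidenceState): string {
  if (strongLanguageAllowed(e)) return "supports";
  return e.toolConfidence >= 0.5 ? "suggests" : "requires validation";
}
```

A live candle alone is enough to knock "supports" down to "suggests," no matter how confident the tool output is; that is the point of making the gate conjunctive.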
What this buys
A charting agent without this structure is doing something subtle: producing the most plausible-sounding analysis of a chart it cannot actually see. Fluency and accuracy are different properties, and they diverge most sharply at the moments that matter — at the levels people might act on.
Deterministic tools don't solve that fully. But they give the model something concrete to report, and they give the analysis a receipt. Every claim traces back to a tool call. Every level has a score and a reason. Every Fib has an anchor provenance. Uncertainty is surfaced rather than smoothed over.
That's a narrow goal. In financial analysis, narrow and honest is the right starting point.