Mar 9, 2026

Most Algos Optimize in Silence. We Fixed Ours Out Loud.


Jacob Koenig recently wrote about why AI won't replace traders. But it will replace the part of the job that's already killing them.


The shift is from oversight to coordination: vigilance is still there, but pointed at judgment rather than reflex. That coordination only works if the AI is transparent enough to be corrected. That's what we've been building at Terminal X.


Our agents are designed to show what it looks like when intelligence gets plugged into institutional-quality data, on infrastructure that is temporally aware, understands market structure by design, and reduces hallucination risk by tracing everything back to its source. The agents do more than trade; they demonstrate the research layer underneath each decision: why this stock, why now, what the bull case is, where the thesis breaks.


And the most honest way to show what that actually means is to walk through what we tested, what we changed, and why.


Our Three AI Traders:

The Arena Trader: our community-driven agent. You suggest trades, it stress-tests your thesis against Terminal X's full data layer, and if the data supports you, it executes. The idea is collective intelligence filtered through real analytical discipline.


The Institution: our four-agent thematic pipeline: Market Research, Senior Research, Fund Manager, and Trader, modeled after how a real buy-side desk orchestrates activity. The time horizon is two weeks to three months, focused on macro themes with real momentum.


The Hunter (a.k.a. the Day Trader): our short-term, high-frequency agent. It runs on a 5-minute loop: scanning for movers, checking short interest for squeeze setups, and reading the order book for thin liquidity pockets where prices can move fast. To validate a single trade, it fires off 20 to 50 data calls per cycle. The idea is obsessive confirmation before any capital moves.
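The Hunter's confirmation gate can be pictured as a filter over each cycle's candidates. This is an illustrative sketch only: `validate_candidate`, `run_cycle`, and the checks are hypothetical stand-ins, not Terminal X's actual code or data calls.

```python
def validate_candidate(symbol, checks):
    """Obsessive confirmation: every check must pass before capital moves.

    In the real agent, each check is backed by one or more data calls
    (price momentum, short interest, order-book depth), which is how a
    single trade ends up costing 20-50 calls per cycle.
    """
    return all(check(symbol) for check in checks)


def run_cycle(candidates, checks):
    """One pass of the 5-minute loop: keep only fully confirmed symbols."""
    return [s for s in candidates if validate_candidate(s, checks)]
```

In production this would run on a scheduler every five minutes; the key property is that a single failed check vetoes the trade.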


Koenig wrote an entire deep dive on rebuilding The Hunter around narrative intelligence; you can find it here.



What did we change in the Arena Trader, and why?

What we found in practice: the Arena Trader was too agreeable. When the community pushed hard on a name (multiple suggestions, confident framing, social momentum), the Arena Trader would bend toward the position even when the data was mixed. It was getting bullied into trades.


We tightened that. The agent now requires a higher evidentiary threshold when community suggestion volume spikes on a single name. Crowd conviction is a signal, but it's not a thesis. The data has to hold up independently.
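One simple way to implement that kind of threshold is to make the required data-confidence score rise with suggestion volume. The function below is a sketch under assumed parameters (the baseline, step, and cap are illustrative numbers, not Terminal X's actual calibration):

```python
def required_evidence(base_threshold, suggestion_count,
                      baseline=3, step=0.05, cap=0.95):
    """Raise the evidentiary bar as community suggestion volume spikes.

    Crowd conviction is a signal, not a thesis: each suggestion above a
    normal baseline adds `step` to the data-confidence score a trade must
    clear, capped so the bar never becomes literally unreachable.
    """
    surge = max(0, suggestion_count - baseline)
    return min(cap, base_threshold + step * surge)
```

With this shape, a name at normal suggestion volume trades at the base threshold, while a heavily pushed name has to clear a meaningfully higher bar on the data alone.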


The second issue was portfolio mechanics. The Arena Trader was accumulating positions without generating enough sell activity to fund new buys, which created execution errors when new opportunities came in. We fixed the rebalancing logic so it actively cycles capital rather than stacking positions.
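The capital-cycling fix can be sketched as a small planner: before a new buy goes out, sell the lowest-conviction holdings until the buy is funded. Everything here (the position schema, conviction scores, selling whole positions) is a simplifying assumption for illustration, not the agent's actual rebalancing code:

```python
def plan_rebalance(cash, positions, buy_cost):
    """Return the lowest-conviction positions to liquidate to fund a buy.

    positions: {symbol: (market_value, conviction)}. Selling weakest-first
    means the agent cycles capital instead of stacking positions until
    new orders start failing for lack of funds.
    """
    shortfall = buy_cost - cash
    sells = []
    for symbol, (value, _conviction) in sorted(
            positions.items(), key=lambda kv: kv[1][1]):
        if shortfall <= 0:
            break
        sells.append(symbol)
        shortfall -= value
    return sells
```

A real implementation would trim partial positions and respect tax and slippage constraints, but the core invariant is the same: no new buy without the sells that fund it.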


Both of these are the kind of problems you only find by watching an agent operate in real markets, reading what it says, and asking why it did that. A black box would have been quietly wrong for months.



What did we change in the Institution, and why?

The issue we found with The Institution: the pipeline always had the ability to go short, but it almost never chose to. Part of that reflects the nature of sell-side research: the street produces far more buy recommendations than sell recommendations. The pipeline was surfacing momentum themes and value opportunities, both of which skew long by nature. Bearish narratives weren't being sought out with the same rigor.


The fix was at the prompt level. We added "Thematic Deterioration" as a named research discipline alongside Thematic Momentum and Value Within Theme, so bearish theses get the same structural treatment as bullish ones. On the PM side, SHORT and COVER weren't explicit output actions in the decision framework, so we rebuilt that: a bearish conviction now gets expressed as a position rather than just the absence of a long.
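Making SHORT and COVER first-class output actions amounts to widening the decision framework's action space. The sketch below is a hypothetical mapping from a conviction score to explicit actions; the thresholds, signature, and action names are illustrative assumptions, not the Institution's actual prompt or schema:

```python
from enum import Enum

class Action(Enum):
    BUY = "BUY"
    SELL = "SELL"      # exit an existing long
    SHORT = "SHORT"    # bearish conviction expressed as a position
    COVER = "COVER"    # close an existing short


def express_view(conviction, has_long, has_short):
    """Map a conviction in [-1, 1] to an explicit action.

    The key change: strong bearishness produces SHORT, a position in its
    own right, rather than merely the absence of a long.
    """
    if conviction >= 0.5:
        return Action.COVER if has_short else Action.BUY
    if conviction <= -0.5:
        return Action.SELL if has_long else Action.SHORT
    return None  # no strong view either way
```

Before the fix, the bearish branch effectively collapsed to "do nothing"; after it, a deteriorating theme generates a tradeable output.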


The result is a portfolio that better reflects how a real thematic long/short trader actually operates: some themes are going up, some are going down, and the job is to have a view on both.


Is this a break from how quantitative strategies have always worked?

Not really. Quant strategies have always had humans setting the parameters. Machine learning in finance isn't new: clustering stocks by trading patterns, optimizing execution algorithms, adjusting factor weights. Our agents are part of a longer progression.


What's different is legibility. A traditional quant strategy is a set of rules that executes until the human changes the rules. When something goes wrong, you work backward from the output trying to identify what broke. Our agents surface the reasoning behind each decision as it happens, in language a trader can read. When something goes wrong, you're reading a rationale and correcting it directly rather than inferring a cause from results.
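Legibility, concretely, means each decision travels with its own rationale. A minimal sketch of such a record (a hypothetical schema, not Terminal X's actual data model):

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """A trade decision that carries the reasoning behind it."""
    symbol: str
    action: str            # e.g. "BUY"
    thesis: str            # why this stock, why now
    invalidation: str      # where the thesis breaks
    sources: list = field(default_factory=list)  # traceable data citations

    def explain(self):
        """Render the rationale in language a trader can read and correct."""
        return (f"{self.action} {self.symbol}: {self.thesis} "
                f"(thesis breaks if {self.invalidation})")
```

When something goes wrong, a reviewer reads `explain()` and corrects the thesis directly, instead of working backward from P&L to guess what broke.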


Markets have always incorporated systematic approaches to decision-making. What we're calling AI today is a more powerful version of something finance has always done, made legible in a way it hasn't been before.


Where does this go?

In that earlier piece, the argument ends with this: the conversation between human and algo has started, and what it grows into from here is the interesting question. Terminal X's agents are one early, public attempt to answer it.


The institutional implications run further than a portfolio of trading agents. What we've built is a transparent decision-making layer: systems that show their reasoning, accept pushback, and get adjusted in response to it. Most algorithmic infrastructure in financial markets is built to be opaque by design: proprietary, protected, optimized for results rather than legibility.


One direction this points is toward interactive execution algos. Not a black box that works orders silently, but a system that explains why it's working an order a certain way, flags when conditions shift its strategy, and surfaces a decision when the situation is ambiguous. That would bring the same coordination principle from portfolio construction all the way down into execution.


We haven't built that. But the agents we're running now are a proof of concept for the underlying property that would make it possible: that algorithms can be made legible enough to be corrected, and that correctable algorithms can be trusted. And even if we don’t build it ourselves, our APIs could be used for that purpose.


Agents that can be corrected are agents that can be trusted. That's the property that matters most when the stakes get real.


Follow the Terminal X agents live at https://www.terminal-x.ai/trader


Subscribe for updates