zzc2610 • about 7 hours ago • 27 comments • Read Article on github.com
Some technical context on what we ran into building this.

MCP tools don't really work for financial data at scale. One tool call for five years of daily prices dumps tens of thousands of tokens into the context window. And data vendors pack dozens of tools into a single MCP server; the schemas alone can eat 50k+ tokens before the agent does anything useful. So we auto-generate typed Python modules from the MCP schemas at workspace init and upload them into the sandbox. The agent just imports them like a normal library, and only a one-line summary per server stays in the prompt. We have around 80 tools across our servers, and the prompt cost is the same whether a server has 3 tools or 30. This part isn't finance-specific; it works with any MCP server.
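To illustrate the schema-to-module idea, here is a minimal sketch of rendering one MCP tool schema into a typed Python function. This is not LangAlpha's actual generator; the tool name `get_daily_prices` and the `_call_mcp_tool` transport shim are hypothetical:

```python
import textwrap

# Hypothetical MCP tool schema, shaped like a tools/list entry.
TOOL_SCHEMA = {
    "name": "get_daily_prices",
    "description": "Fetch daily OHLCV prices for a ticker.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "ticker": {"type": "string"},
            "start": {"type": "string"},
            "end": {"type": "string"},
        },
        "required": ["ticker", "start", "end"],
    },
}

TYPE_MAP = {"string": "str", "number": "float", "integer": "int", "boolean": "bool"}

def generate_module(schema: dict) -> str:
    """Render one typed Python function from an MCP tool schema."""
    props = schema["inputSchema"]["properties"]
    required = set(schema["inputSchema"].get("required", []))
    params = []
    for name, spec in props.items():
        py_type = TYPE_MAP.get(spec.get("type"), "object")
        params.append(f"{name}: {py_type}" if name in required
                      else f"{name}: {py_type} | None = None")
    sig = ", ".join(params)
    return textwrap.dedent(f'''
        def {schema["name"]}({sig}) -> dict:
            """{schema["description"]}"""
            args = {{k: v for k, v in locals().items() if v is not None}}
            return _call_mcp_tool("{schema["name"]}", args)  # transport shim, not shown
    ''').strip()

print(generate_module(TOOL_SCHEMA))
```

Run over every tool in a server's schema listing at workspace init, something like this yields one importable module per server, so the schemas themselves never occupy prompt tokens.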

The other big thing was making research actually persist across sessions. Most agents treat a single deliverable (a PDF, a spreadsheet) as the end goal. In investing, that's day one. You update the model when earnings drop, re-run comps when a competitor reports, and keep layering new analysis on old. But try doing that across agent sessions: files don't carry over, and you re-paste context every time. So we built everything around workspaces. Each one maps to a persistent sandbox, one per research goal. The agent maintains its own memory file with findings and a file index that gets re-read before every LLM call. Come back a week later, start a new thread, and it picks up where it left off.

We also wanted the agent to have real domain context the way Claude Code has codebase context: portfolio, watchlist, risk tolerance, and financial data sources, all injected into every call. Existing AI investing platforms have some of that, but nothing close to what a proper agent harness can do. We wanted both and couldn't find it, so we built it and open-sourced the whole thing.



Discussion (27 Comments) • Read Original on HackerNews

neomantra • about 4 hours ago
> MCP tools don't really work for financial data at scale. One tool call for five years of daily prices dumps tens of thousands of tokens into the context window.

I maintain an OSS SDK for Databento market data. A year ago, I naively wrapped the API and certainly felt this pain. Having an API call drop a firehose of structured data into the context window was not very helpful. The tool there was get_range and the data was lost to the context.

Recently I updated the MCP server [1] to download the Databento market data into Parquet files onto the local filesystem and track those with DuckDB. So the MCP tool calls are fetch_range to fill the cache along with list_cache and query_cache to run SQL queries on it.

I haven't promoted it at all, but it would probably pair well with a platform like this. I'd be interested in how people might use this and I'm trying to understand how this approach might generally work with LLMs and DuckLake.

[1] https://github.com/NimbleMarkets/dbn-go/blob/main/cmd/dbn-go...

TeMPOraL • about 3 hours ago
> The other big thing was making research actually persist across sessions. Most agents treat a single deliverable (a PDF, a spreadsheet) as the end goal. In investing that's day one.

This is a problem with pretty much everything beyond easy single-shot tasks. Even day-to-day stuff: I was recently researching a new laptop to buy for my wife, and am now enlisting AI to help pick a good car. In both cases I run into a mismatch between what the non-coding AI tools offer and what is needed:

I need a persistent Excel sheet that evolves over multiple sessions of gathering data, cross-referencing with current needs, and updating as decisions are made and as our own needs get better understood.

All AI tools want to do a single session with a deliverable at the end, which they then cannot read. Or if they can read it, they cannot work on it; at best they can write a new version from scratch.

I think this may be a symptom of the mobile-app thinking that infects the industry: the best non-coding AI tools offered to people all behave like regular apps, thinking in sessions, prescribing a single workflow, and desperately preventing any form of user-controlled interoperability.

I miss when software philosophy put files ahead of apps, when applications were tools to work on documents, not tools that contain documents.

zc2610 • about 3 hours ago
Exactly, this is especially important for agents given the limited effective context window.
altmanaltman • about 2 hours ago
Interesting that you mention non-coding AI apps, because this seems pretty trivial to do with any harness (have a master file, update it over sessions, plus snapshots, etc.).

Most non-coding AI tools are meant for general consumers who normally don't care if they have to do a new search each session, and the hacky memory features try to tackle this over the long term. Also, you can always supply the updated file at each prompt and ask it to return a newly updated version (if you really want to do this with something like ChatGPT).

And I think it's a bit hyperbolic to extrapolate this to "software philosophy is changing". Like, most apps still work on documents/data? Not sure what you meant there.

zc2610 • about 6 hours ago
Hi HN. We built LangAlpha because we wanted something like Claude Code but for investment research.

It's a full stack open-source agent harness (Apache 2.0). Persistent sandboxed workspaces, code execution against financial data, and a complete UI with TradingView charts, live market data, and agent management. Works with any LLM provider, React 19 + FastAPI + Postgres + Redis.


loumaciel • about 4 hours ago
You can make MCP tools work for any type of data by using a proxy like https://github.com/lourencomaciel/sift-gateway/.

It saves the payloads into SQLite, maps them, and exposes tools for the model to run Python against them. Works very well.
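The proxy pattern described here can be sketched with the standard library. The schema, tool names, and data are illustrative assumptions, not sift-gateway's actual design:

```python
import json
import sqlite3

# Sketch: a large tool payload is saved into SQLite instead of being pasted
# into the model's context, and the model is then given a query tool over it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE payloads (tool TEXT, body TEXT)")

# A big (here, tiny) tool response gets cached instead of shown to the model.
response = [{"ticker": "MSFT", "close": 420.1}, {"ticker": "MSFT", "close": 421.7}]
con.execute("INSERT INTO payloads VALUES (?, ?)",
            ("get_prices", json.dumps(response)))

# The model later runs a targeted lookup and sees only the small result.
body = con.execute(
    "SELECT body FROM payloads WHERE tool = ?", ("get_prices",)
).fetchone()[0]
closes = [row["close"] for row in json.loads(body)]
print(max(closes))  # 421.7
```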

esafak • about 5 hours ago
You shouldn't dump raw data into the context, only the result of the query.
zc2610 • about 5 hours ago
Yes, that's the idea and exactly what we did.
grant-ai • 26 minutes ago
The only thing that can work for the finance industry is AI where you can do deterministic recall in milliseconds regardless of the data size.
dataviz1000 • about 2 hours ago
That's awesome!

You might be interested in what I've been working on. I've discovered that taking an auto-research approach, letting Claude write Claude, it will find lots of alpha everywhere, beating SPY buy-and-hold. It will even find alpha filling in gaps by trading gold ETFs as a hedge. [0] What it really is, though, is a bug-squashing agent. LLMs will lie and cheat at every move and can't be trusted. 75% of creating agents and using LLMs with financial data (3/4 of the agents and code is dedicated to this) is hunting and squashing the bugs and lies.

[0] https://github.com/adam-s/alphadidactic

kolinko • about 5 hours ago
Nice!

What I missed from the writeup were some specific cases and how you tested that all this orchestration delivers worthwhile data (actionable and complete/correct).

E.g. you have a screenshot of the AI supply chain; more of these would be useful, along with some info about how you tested that this supply chain agrees with reality.

Unless the goal of the project was just to play with agent architecture, in which case, congrats :)

zc2610 • about 3 hours ago
Great advice!

For demo purposes and to attract attention, I was primarily picking cases with cool visuals (like the screenshot of the AI supply chain you mentioned). We have some internal evals and will try to add more cases to the public repo for reference.

uoaei • about 2 hours ago
More signs of the AI bubble. Completely unprofessional behavior ("cool visuals" not "real results"). And don't give me that "hacker culture" bullshit, these people are targeting Wall Street as paying customers.
mhh__ • about 1 hour ago
> But real investing is Bayesian

Debatable. Making money is more about structure than about being right per se, e.g. short vol is usually right...

The concept overall is basically OK though, I think. E.g. agents are 100% going to be a big thing in finance, but it's about man-machine synthesis.

D_R_Farrell • about 4 hours ago
I've been wondering for a long time about when this more Bayesian approach would become available alongside an AI. Really excited to play around with this!

Is this kind of like a Karpathy 2nd brain for investing then?

zc2610 • about 3 hours ago
We do have something similar to a personal or workspace-level investment wiki on the roadmap.

For now, it's more like how a SWE works on a codebase, building things incrementally commit by commit. We are taking a workspace-centric approach where multiple agent sessions can happen in one workspace and build on previous work.

jskrn • about 3 hours ago
Sounds interesting. The video isn't working, wish I could see the hosted version without creating an account.
zc2610 • about 3 hours ago
Thanks for the feedback. I am working on that already.

It should be easy to self-host with Docker, though.

erdanielsβ€’about 7 hours ago
Then people would lose a lot of money
locusofself • about 5 hours ago
Agreed. Unless this really helps people somehow make better trading decisions than existing tools, the vast majority of them are probably still better off index investing.
zc2610 • about 3 hours ago
There will always be people who lose money regardless; that's part of the stock market. I hope that at least with tools like this, people can make investment decisions more systematically and with discipline, relying on research rather than impulse or memes.
xydac • about 4 hours ago
It's crazy how many similar threads exist today.
mhh__ • about 1 hour ago
> mcp don't work

This is slop; the MCP server could expose a query endpoint.

ForOldHack • about 4 hours ago
Note: Never make angry the gods of code. Never. If you do, they will leave angry on Friday night, and come back with some *amazing* thing like this on Monday:

Obligatory: Brilliant Work. Brilliant.

"We wanted both and couldn't find it, so we built it and open-sourced the whole thing."

\m/ \m/ /m\ /m\