Here is a video of how it works: https://www.loom.com/share/5e558204885e4264a34d2cf6bd488117
I initially built ctx because I wanted to take a workstream that I started in Claude and continue it from Codex. Since then, I’ve added a few quality-of-life improvements, including the ability to search across previous workstreams, manually delete parts of the context, and branch off existing workstreams. I’ve started using ctx instead of the native ‘/resume’ in Claude/Codex because I often have a lot of sessions going at once, and with the lists those apps currently give you, it’s not always obvious which one is the right one to pick back up. ctx gives me a much clearer way to organize and return to the sessions that actually matter.
Installation is one line after you clone the repo: ./setup.sh adds the skill to both Claude Code and Codex. After that, you can use ctx directly in your agent as a skill with ‘/ctx [command]’ in Claude and ‘ctx [command]’ in Codex.
A few things it does:
- Resume an existing workstream from either tool
- Pull existing context into a new workstream
- Keep stable transcript binding, so once a workstream is linked to a Claude or Codex conversation, it keeps following that exact session instead of drifting to whichever transcript file is newest
- Search for relevant workstreams
- Branch from existing context to explore different tasks in parallel
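The "stable transcript binding" point above is the interesting one. A minimal sketch of the idea, assuming nothing about ctx's actual code (the function names, binding dict, and `.jsonl` file names here are illustrative): once a workstream is bound to a specific session log, resume that exact file rather than globbing for whichever transcript was modified most recently.

```python
import os
import tempfile

def newest_transcript(log_dir):
    """Naive approach: pick the most recently modified transcript."""
    paths = [os.path.join(log_dir, f) for f in os.listdir(log_dir)]
    return max(paths, key=os.path.getmtime)

def bound_transcript(binding, log_dir):
    """Stable approach: follow the stored binding while the file exists."""
    path = binding.get("transcript_path")
    if path and os.path.exists(path):
        return path
    return newest_transcript(log_dir)  # fall back only if the file is gone

# Demo: a newer session appears after we bind to the older one.
with tempfile.TemporaryDirectory() as d:
    old = os.path.join(d, "session-aaa.jsonl")
    new = os.path.join(d, "session-bbb.jsonl")
    open(old, "w").close()
    open(new, "w").close()
    os.utime(old, (1000, 1000))  # older mtime
    os.utime(new, (2000, 2000))  # newer mtime
    binding = {"transcript_path": old}  # workstream linked to exact session
    picked_naive = newest_transcript(d)     # drifts to session-bbb
    picked_bound = bound_transcript(binding, d)  # stays on session-aaa
```

The naive picker drifts to the newest file; the bound picker keeps following the linked session.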
It’s intentionally local-first: SQLite, no API keys, and no hosted backend. I built it mainly for myself, but thought it would be cool to share with the HN community.

Discussion (27 Comments)
> Changing thinking mode mid-conversation will increase latency and may reduce quality. For best results, set this at the start of a session.
Neither OpenAI nor Anthropic exposes raw thinking tokens anymore.
Claude Code redacts thinking by default (you can opt in to get Haiku-produced summaries at best), and OpenAI returns encrypted reasoning items.
Either way, first-party CLIs hold opaque thinking blobs that can't be manipulated or ported between providers without dropping them. So cross-agent resume carries an inherent performance penalty: you keep the (visible) transcript but lose the reasoning.
But yeah, after the price hikes, it's inevitable that people will run open-source harnesses.
How does ctx "normalize" things across providers in the context window (e.g. tool/MCP calls, sub-agent results)?
Unless the goal is to move from one provider to another and preserve all context 1:1. And I can’t seem to find a decent reason why you would want everything and not the TLDR + resulting work.
The way this works is that it stores workstreams and session state in a local SQLite DB, and links each ctx session to the exact local Claude Code and/or Codex raw session log it came from (also stored locally).
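To make that description concrete, here is a hypothetical sketch of the kind of local SQLite layout such a tool could use; the table and column names are my assumptions, not ctx's real schema. A workstreams table (with a self-reference for branching) points at per-agent session rows that record the exact raw log file each one came from.

```python
import sqlite3

con = sqlite3.connect(":memory:")  # ctx would use a file-backed DB
con.executescript("""
CREATE TABLE workstreams (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    parent_id INTEGER REFERENCES workstreams(id)  -- branching support
);
CREATE TABLE sessions (
    id            INTEGER PRIMARY KEY,
    workstream_id INTEGER NOT NULL REFERENCES workstreams(id),
    agent         TEXT NOT NULL CHECK (agent IN ('claude', 'codex')),
    log_path      TEXT NOT NULL  -- exact local raw session log
);
""")

# Link a workstream to the Claude session log it came from (path illustrative).
con.execute("INSERT INTO workstreams (name) VALUES ('auth-refactor')")
con.execute(
    "INSERT INTO sessions (workstream_id, agent, log_path) "
    "VALUES (1, 'claude', '/path/to/session.jsonl')"
)
row = con.execute(
    "SELECT agent, log_path FROM sessions WHERE workstream_id = 1"
).fetchone()
```

Resuming from the other tool is then just inserting a second sessions row for the same workstream with `agent = 'codex'`.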
What do you mean by prompt caching?
Obviously, your tool does not provide this. But I think GP is undervaluing the UX advantages of having your conversation history.