ddchu17 · 3 days ago · 27 comments · article on github.com


ctx is a local SQLite-backed skill for Claude Code and Codex that stores context as a persistent workstream that can be continued across agent sessions. Each workstream can contain multiple sessions, notes, decisions, todos, and resume packs. It essentially functions as a /resume that can work across coding agents.
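The post doesn't spell out the database layout, but a minimal sketch of how such a SQLite-backed store might look is below. All table and column names here are my assumptions for illustration, not ctx's actual schema:

```python
import sqlite3

# Hypothetical schema: table/column names are illustrative, not ctx's real ones.
conn = sqlite3.connect(":memory:")  # ctx would use a file on disk instead
conn.executescript("""
CREATE TABLE workstreams (
    id         INTEGER PRIMARY KEY,
    name       TEXT NOT NULL,
    created_at TEXT DEFAULT (datetime('now'))
);
CREATE TABLE sessions (
    id              INTEGER PRIMARY KEY,
    workstream_id   INTEGER REFERENCES workstreams(id),
    agent           TEXT CHECK (agent IN ('claude', 'codex')),
    transcript_path TEXT  -- binds the session to one local log file
);
CREATE TABLE notes (
    id            INTEGER PRIMARY KEY,
    workstream_id INTEGER REFERENCES workstreams(id),
    kind          TEXT CHECK (kind IN ('note', 'decision', 'todo', 'resume_pack')),
    body          TEXT
);
""")

# One workstream can span sessions from both agents plus attached notes.
ws = conn.execute("INSERT INTO workstreams (name) VALUES ('auth-refactor')").lastrowid
conn.execute(
    "INSERT INTO sessions (workstream_id, agent, transcript_path) "
    "VALUES (?, 'claude', '/tmp/session.jsonl')",
    (ws,),
)
conn.execute(
    "INSERT INTO notes (workstream_id, kind, body) VALUES (?, 'decision', 'Use JWT for auth')",
    (ws,),
)

row = conn.execute("SELECT agent FROM sessions WHERE workstream_id = ?", (ws,)).fetchone()
print(row[0])  # -> claude
```

The appeal of this shape is that "resume" becomes a plain query over local data rather than a scan of whichever transcript files happen to exist.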

Here is a video of how it works: https://www.loom.com/share/5e558204885e4264a34d2cf6bd488117

I initially built ctx because I wanted to start a workstream in Claude and continue it from Codex. Since then, I’ve added a few quality-of-life improvements, including the ability to search across previous workstreams, manually delete parts of the context, and branch off existing workstreams. I’ve started using ctx instead of the native ‘/resume’ in Claude/Codex because I often have a lot of sessions going at once, and with the lists those apps currently give you, it’s not always obvious which one is the right one to pick back up. ctx gives me a much clearer way to organize and return to the sessions that actually matter.

Installation is simple: after you clone the repo, run one line, ./setup.sh, which adds the skill to both Claude Code and Codex. After that, you should be able to use ctx directly in your agent as a skill, with ‘/ctx [command]’ in Claude and ‘ctx [command]’ in Codex.

A few things it does:

- Resume an existing workstream from either tool

- Pull existing context into a new workstream

- Keep stable transcript binding, so once a workstream is linked to a Claude or Codex conversation, it keeps following that exact session instead of drifting to whichever transcript file is newest

- Search for relevant workstreams

- Branch from existing context to explore different tasks in parallel

It’s intentionally local-first: SQLite, no API keys, and no hosted backend. I built it mainly for myself, but thought it would be cool to share with the HN community.
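The "stable transcript binding" point above could work roughly like this sketch: record the bound transcript path once, then always resume from it, falling back to the newest log only when nothing is bound. The function names and file layout are my guesses for illustration, not ctx internals:

```python
import os
import tempfile

# Illustrative sketch of transcript binding; ctx's real mechanism may differ.
def bind_transcript(db: dict, workstream: str, path: str) -> None:
    """Record which local transcript file a workstream follows."""
    db[workstream] = path

def resume_path(db: dict, workstream: str, log_dir: str) -> str:
    """Prefer the bound transcript; only an unbound workstream falls back
    to the newest log file (the 'drift' behavior ctx avoids)."""
    if workstream in db:
        return db[workstream]
    logs = [os.path.join(log_dir, f) for f in os.listdir(log_dir)]
    return max(logs, key=os.path.getmtime)

with tempfile.TemporaryDirectory() as d:
    old = os.path.join(d, "old.jsonl")
    new = os.path.join(d, "new.jsonl")
    for p in (old, new):
        with open(p, "w") as f:
            f.write('{"role": "user"}\n')
    # Make new.jsonl unambiguously newer than old.jsonl.
    os.utime(old, (1_000_000, 1_000_000))
    os.utime(new, (2_000_000, 2_000_000))

    db = {}
    bind_transcript(db, "auth-refactor", old)
    bound = resume_path(db, "auth-refactor", d)   # keeps following old.jsonl
    unbound = resume_path(db, "unbound", d)       # newest file wins

print(os.path.basename(bound))    # -> old.jsonl
print(os.path.basename(unbound))  # -> new.jsonl
```

Without the binding, a newer session in the same directory would silently hijack the resume, which is exactly the failure mode described above.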


Discussion (27 Comments)

realdimas1 day ago
Claude Code used to have a warning that toggling thinking within a conversation would decrease performance:

> Changing thinking mode mid-conversation will increase latency and may reduce quality. For best results, set this at the start of a session.

Neither OpenAI nor Anthropic exposes raw thinking tokens anymore.

Claude Code redacts thinking by default (you can opt in to get Haiku-produced summaries at best), and OpenAI returns encrypted reasoning items.

Either way, first-party CLIs hold opaque thinking blobs that can't be manipulated or ported between providers without dropping them. So cross-agent resume carries an inherent performance penalty: you keep the (visible) transcript but lose the reasoning.

LeoPanthera1 day ago
I don't think I've ever /resumed a Claude Code session even once. What do people use that for? The way I use it is to make a change, maybe document the change, and then I'm done. New session.
giancarlostoro1 day ago
Tooling like this is why I really want to build my own harness that can replace Claude Code. I have been building a few different custom tools that would be nice as part of one single harness, so I don't have to tweak configurations across all my different environments, projects, and even OSes; it gets tiresome. And Claude even has separate "memories" on different devices, making the experience even more inconsistent.
StanAngeloff1 day ago
I've actually had the same itch and decided to give it a go ... So far I'm one year into the project, learned a ton, and highly recommend it to anyone who'd listen: try writing your own harness. It can be fun, it can be intoxicating, it can also be boring and mundane. But you'll learn so much along the way, even if you thought you were already well versed.
nextaccountic1 day ago
The problem with this is that you won't get to enjoy the heavy subsidies of Claude subscriptions

But yeah, after the price hikes, it's inevitable that people will run open source harnesses

arcanemachiner1 day ago
Pi is very extensible, and could possibly serve as a good foundation to build on.
giancarlostoro1 day ago
Is it Pi LLM you're referring to? I've heard "Pi" referenced twice now, and now I'm curious, I do have unused Pis, though not Raspberry Pi 5s...
ghm21801 day ago
Interesting. What kind of context usage does it have when switching between the two providers? Like is it smart about using the # tokens when you go from claude -> codex or vice versa for a conversation?

How does ctx "normalize" things across providers in the context window ( e.g. tool/mcp calls, sub-agent results)?

saadn921 day ago
Very cool: did something similar here as well: https://github.com/saadnvd1/hydra
buremba1 day ago
Since prompt caching won't work across different models, how is this approach better than dropping a PR for the other harnesses to review?
ycombinatornews1 day ago
Great callout about the prompt caching; this switch is going to burn subscription limits on Claude real fast.

Unless the goal is to move from one provider to another and preserve all context 1:1. And I can’t seem to find a decent reason why you would want everything and not the TLDR + resulting work.

dchu171 day ago
Sorry, I may be misunderstanding the question.

The way this works is that it stores workstreams and session state in a local SQLite DB, and links each ctx session to the exact local Claude Code and/or Codex raw session log it came from (also stored locally).

What do you mean by prompt caching?

Wowfunhappy1 day ago
Prompt caching is done on the provider side. If you send two requests to a provider in short succession and the beginning of your second request is the same as your first (for example, because your second request is the continuation of an ongoing chat), the repeated tokens are much less expensive the second time.

Obviously, your tool does not provide this. But I think GP is undervaluing the UX advantages of having your conversation history.
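To make the caching point concrete, here is back-of-envelope arithmetic for resuming a long conversation with and without a warm cache. The dollar price and the ~10x cache-read discount below are illustrative placeholders roughly in line with published provider pricing, not current rates; check the providers' pricing pages:

```python
# Illustrative cost math; prices and the discount factor are assumptions.
PRICE_PER_TOKEN = 3.00 / 1_000_000   # $3 per million input tokens (assumed)
CACHE_READ_MULTIPLIER = 0.1          # cached tokens cost ~10% of normal input

def turn_cost(history_tokens: int, new_tokens: int, cached: bool) -> float:
    """Cost of one request that resends the whole history plus new tokens."""
    if cached:
        return history_tokens * PRICE_PER_TOKEN * CACHE_READ_MULTIPLIER \
            + new_tokens * PRICE_PER_TOKEN
    return (history_tokens + new_tokens) * PRICE_PER_TOKEN

# Resuming a 100k-token conversation with a 500-token follow-up:
same_harness = turn_cost(100_000, 500, cached=True)    # warm cache
cross_harness = turn_cost(100_000, 500, cached=False)  # cold cache after switching
print(f"${same_harness:.4f} vs ${cross_harness:.4f}")
```

Under these assumptions the cold-cache turn costs roughly ten times the warm one, which is the penalty being discussed for cross-harness resume.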

buremba1 day ago
Yes, that's it. I actually just ask Codex/Claude Code to look up the session ID when I want to resume sessions cross-harness; it's just JSONL files locally, so it can access the full conversation history when needed.
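Reading those local session logs is straightforward, since JSONL is just one JSON object per line. A sketch is below; the record fields (`role`, `content`) are illustrative, as the exact format varies by harness and version:

```python
import io
import json

# Sample transcript in the JSONL shape both harnesses use: one JSON object
# per line. Field names here are assumptions; inspect your own log files.
sample_log = io.StringIO(
    '{"role": "user", "content": "add retry logic"}\n'
    '{"role": "assistant", "content": "Done, see utils.py"}\n'
)

def load_transcript(fp) -> list[dict]:
    """Parse a JSONL transcript, skipping blank lines."""
    return [json.loads(line) for line in fp if line.strip()]

messages = load_transcript(sample_log)
print(len(messages), messages[0]["role"])  # -> 2 user
```

This is all an agent needs to do when asked to "look up the session" by hand, which is why the cross-harness trick works without any hosted backend.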
t0mas881 day ago
Have you considered making it possible to share a stream/context? As an export/import function.
rkuska1 day ago
I wrote a tool for myself to copy (and archive) the claude/codex conversations github.com/rkuska/carn
t0mas881 day ago
Thanks
dchu171 day ago
That's interesting. I hadn't considered it at this point, but it sounds potentially useful.
phoenixranger1 day ago
really interesting idea! will check it out. and thanks for making it local-first!
ramon1561 day ago
Can we also get a /last? 9/10 times I want to resume my last session. I know it's only one extra tap, but still.