Discussion (14 Comments)
Now your app doesn't have direct access to your Stripe/GitHub/AWS/whatever keys (which is good!), but you still need _some_ authentication against your proxy.
If you use per-app authentication and your app's key leaks, then whoever holds it can reach all the external services your app can, i.e. with one key you lose everything. On the other hand, if you use per-endpoint authentication, you didn't really solve anything: you still have to manage X secrets.
Even worse, from the perspective of the team that owns and runs the proxy, chances are you are going to use per-app AND per-endpoint authentication, because that allows you to revoke bad keys without breaking everyone else, etc.
What this really solves is subscription management for (big?) organisations. Now that you have a proxy, you only need a single key to talk to <external-service>; no need to manage subscriptions, user onboarding and offboarding, etc. You just need to negotiate rate limits.
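The per-app vs. per-endpoint trade-off above can be sketched as a proxy that accepts one per-app token from the client while keeping the per-endpoint upstream secrets internal. All names here (`APP_TOKENS`, `UPSTREAM_KEYS`, the token values) are illustrative, not from any real proxy:

```python
# Minimal sketch: the client authenticates with one per-app token;
# the proxy holds the per-endpoint upstream keys and injects them.
APP_TOKENS = {"app-token-123": "billing-service"}  # per-app auth

UPSTREAM_KEYS = {  # per-endpoint secrets, managed by the proxy team
    "api.stripe.com": "sk_live_example",
    "api.github.com": "ghp_example",
}

def resolve_upstream_key(app_token: str, host: str) -> str:
    """Return the upstream key the proxy would inject, or raise."""
    if app_token not in APP_TOKENS:
        raise PermissionError("unknown app token")
    if host not in UPSTREAM_KEYS:
        raise PermissionError(f"{host} not reachable through this proxy")
    return UPSTREAM_KEYS[host]
```

Note how this makes the failure mode concrete: leak `app-token-123` and the holder reaches every host in `UPSTREAM_KEYS` at once.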
For example, if the proxy runs on localhost you can trust the localhost workload.
Or you can use some other kind of workload identity proof (like cloud metadata servers). If such a key leaks, no other VM can use it, because it's scoped to your VM.
My point was that you don't literally have to run the proxy on localhost in order to scope the request.
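The metadata-server idea above, sketched against GCP's instance identity endpoint (a real endpoint; the `audience` value pointing at a proxy is my assumption for illustration). The required `Metadata-Flavor` header is what keeps the token fetchable only from inside the VM:

```python
# Sketch: instead of a static key, the workload asks its cloud
# metadata server for a short-lived, VM-scoped identity token.
import urllib.request

METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/identity"
)

def identity_request(audience: str) -> urllib.request.Request:
    """Build the metadata-server request that yields a VM-scoped token."""
    return urllib.request.Request(
        f"{METADATA_URL}?audience={audience}",
        headers={"Metadata-Flavor": "Google"},  # required; only reachable from the VM itself
    )

# On a real VM you would then do:
#   token = urllib.request.urlopen(identity_request("https://proxy.internal")).read()
# and send `Authorization: Bearer <token>` to the proxy.
```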
For the integrations that aren't GitHub-style OAuth Apps, where upstream just ships a long-lived API key and someone still has to rotate it, how are you planning to handle the refresh lifecycle on the exe.dev side? Is that declared per-integration, or is the proxy expected to notice 401s and pull a fresh credential from somewhere upstream?
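One shape the "notice 401s" option could take, sketched below. `fetch_secret` is a hypothetical hook into whatever store holds the rotated key; nothing here is an actual exe.dev API:

```python
# Sketch: retry once on 401, pulling a fresh credential from a
# pluggable secret store (hypothetical hook, not a real API).
from typing import Callable

def call_with_refresh(
    do_request: Callable[[str], int],   # performs the call, returns an HTTP status
    fetch_secret: Callable[[], str],    # pulls the current upstream key
) -> int:
    """Try the current key; on 401, re-fetch once and retry."""
    key = fetch_secret()
    status = do_request(key)
    if status == 401:
        key = fetch_secret()   # upstream rotated: pick up the new value
        status = do_request(key)
    return status
```

The design question in the comment remains either way: someone still has to put the fresh key where `fetch_secret` can find it.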
An 'MITM' TLS proxy also gives you much better firewalling capabilities [1], not that firewalls aren't inherently leaky.
Codex's is a 'wildcard'-based one [2], hence "easy" to bypass [3]. GitHub's list is slightly better [4], but YMMV.
[1] Better, that is, than the rudimentary "allow based on nslookup $host" we're seeing on new sandboxes popping up, especially when the backing server may serve other hosts.
[2] https://developers.openai.com/codex/cloud/internet-access#co...
[3] https://embracethered.com/blog/posts/2025/chatgpt-codex-remo...
[4] https://docs.github.com/en/copilot/reference/copilot-allowli...
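The wildcard-vs-exact distinction the comment draws can be illustrated generically. The patterns below are made-up examples of each style, not the actual Codex or Copilot rules:

```python
# Illustration: a wildcard allowlist admits any matching subdomain,
# including attacker-controlled ones; an exact host list does not.
from fnmatch import fnmatch

WILDCARD_ALLOW = ["*.s3.amazonaws.com"]            # wildcard style
EXACT_ALLOW = {"my-bucket.s3.amazonaws.com"}       # exact style

def wildcard_permits(host: str) -> bool:
    return any(fnmatch(host, pat) for pat in WILDCARD_ALLOW)

def exact_permits(host: str) -> bool:
    return host in EXACT_ALLOW

# The wildcard rule also admits an attacker-controlled bucket,
# which is enough of a hole to exfiltrate data out of a sandbox.
```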
(I was interested in this because I was actually working on something similar recently: https://github.com/imbue-ai/latchkey. To avoid the certificates issue, this library uses a gateway approach instead of a proxy, i.e. clients call endpoints like "http(s)://gateway.url:port/gateway/https://api.github.com/..." which can be effectively hidden behind the "latchkey curl" invocation.)
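The gateway-style URL scheme described above can be sketched as a small path parser. This is my reading of the scheme from the comment, not latchkey's actual code:

```python
# Sketch: extract the embedded upstream URL from a gateway-style
# path like "/gateway/https://api.github.com/...".
from urllib.parse import urlsplit

def upstream_from_gateway_path(path: str) -> str:
    """Return the upstream URL embedded in a '/gateway/<url>' path."""
    prefix = "/gateway/"
    if not path.startswith(prefix):
        raise ValueError("not a gateway path")
    upstream = path[len(prefix):]
    scheme = urlsplit(upstream).scheme
    if scheme not in ("http", "https"):
        raise ValueError(f"unsupported upstream scheme: {scheme!r}")
    return upstream
```

Because clients talk plain HTTPS to the gateway's own hostname, no extra CA needs to be installed, which is the certificates issue the comment mentions.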
I think requests is a tricky one: it _should_ support this already based on the PR [2], but it looks like that was merged into the 3.x branch, and idk where that is, release-wise.
There is also native TLS on Linux (idk what exactly it's called); but all languages also seem to have packages that provide cert bundles which get used directly (e.g., certifi [3]), and that does cause some pain [1].
[1] https://github.com/rustls/rustls-native-certs/issues/16#issu...
[2] https://github.com/psf/requests/issues/2899
[3] https://pypi.org/project/certifi/
One example is when Python 3.13 [1] introduced some stricter validations and the CASB-issued certs were not compliant (missing AKI), which broke REQUESTS_CA_BUNDLE/SSL_CERT_FILE for us.
[1] https://discuss.python.org/t/python-3-13-x-ssl-security-chan...
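For context on what those environment variables effectively do: `SSL_CERT_FILE` is honored by OpenSSL's default verify paths, while `REQUESTS_CA_BUNDLE` is a requests-specific override. A rough equivalent in stdlib terms (the bundle path itself is whatever your CASB ships):

```python
# Sketch: point TLS verification at a custom CA bundle, e.g. a
# CASB's MITM root, the way REQUESTS_CA_BUNDLE effectively does.
import os
import ssl

def context_with_corp_ca() -> ssl.SSLContext:
    """Build a verifying TLS context, adding a corporate CA if configured."""
    ctx = ssl.create_default_context()  # honors SSL_CERT_FILE when set
    bundle = os.environ.get("REQUESTS_CA_BUNDLE")
    if bundle:
        ctx.load_verify_locations(cafile=bundle)  # fails on a malformed bundle
    return ctx
```

This is also where the Python 3.13 breakage bites: `load_verify_locations` (and the subsequent chain verification) is what rejects non-compliant CA certs.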
You may not want to be doing this at the edge.