
Discussion (81 Comments)

bob10291 day ago
I don't understand the need for this level of engineering. It appears we are going for an opaque bearer token here. The checksum is pointless because an entire 512 bit token still fits in an x86 cache line. Comparing the whole sequence won't show up in any profiler session you will ever care about.

If you want aspects of the token to be inspectable by intermediaries, then you want json web tokens or a similar technology. You do not want to conflate these ideas. JWTs would solve the stated database concern. All you need to store in a JWT scheme are the private/public keys. Explicit tracking of the session is not required.

notpushkin1 day ago
> The checksum is pointless because an entire 512 bit token still fits in an x86 cache line

I suppose it’s there to avoid a round-trip to the DB. Most of us just need to host the DB on the same machine instead, but given sharding is involved, I assume the product is big enough that this is undesirable.

phire1 day ago
You need to support revocation, so I'm not sure it's ever possible to avoid the need for a round trip to verify the token.
kukkamario1 day ago
The point of the checksum is to just drop obviously wrong keys. No need to handle revocation or do any DB access if checksum is incorrect, the key can just be rejected.
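A minimal sketch of this precheck idea, assuming a hypothetical `sk_<random>_<checksum>` key format (the names and layout here are illustrative, not from the article):

```python
import base64
import zlib

def make_key(random_part: str) -> str:
    # Hypothetical format: "sk_" prefix, random part, CRC32 checksum
    crc = zlib.crc32(random_part.encode()).to_bytes(4, "big")
    checksum = base64.urlsafe_b64encode(crc).rstrip(b"=").decode()
    return f"sk_{random_part}_{checksum}"

def quick_reject(key: str) -> bool:
    # True means "structurally plausible"; False means the key can be
    # dropped without touching the database at all
    try:
        prefix, random_part, checksum = key.split("_")
    except ValueError:
        return False
    if prefix != "sk":
        return False
    crc = zlib.crc32(random_part.encode()).to_bytes(4, "big")
    return base64.urlsafe_b64encode(crc).rstrip(b"=").decode() == checksum
```

Note the checksum adds no security (anyone can recompute a CRC32); it only filters malformed input before the DB lookup.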
rrr_oh_man1 day ago
> I assume the product is big enough

Experience tells otherwise

locknitpicker1 day ago
> I suppose it’s there to avoid round-trip to the DB.

That assumption is false. The article states that the DB is hit either way.

From the article:

> The reason behind having a checksum is that it allows you to verify first whether this API key is even valid before hitting the DB,

This is absurdly redundant. Caching DB calls is cheaper and simpler to implement.

If this were a local validation check, where the API key signature is checked against a secret to avoid a DB round-trip, then I could see the value in it. But that's already well into the territory of an access token, which would be reason enough to reject the whole idea.

If I saw a proposal like that in my org I would reject it on the grounds of being technically unsound.

vjay151 day ago
Hello bob! The checksum is for offline secret scanning and also for rejecting API keys which might have a typo (niche case)

I was just confused about the JWT approach, since from the research I did I saw that it's supposed to be a unique string and that's it!

petterroea1 day ago
I may be naive, but I can't imagine anyone typing an API key by hand. Optimizing for it sounds like premature optimization; surely stopping the less-than-one-in-a-million HTTP request with a hand-typed API key from reaching the DB isn't worth anything
vjay151 day ago
if not for typos, then I can use it for secret scanning :)
bob10291 day ago
The neat thing about JWT is that there are no secrets to scan for. Your secret material ideally lives inside an HSM and never leaves. Scanning for these private keys is a waste of energy if they were generated inside the secure context.
agwa1 day ago
But JWTs are usually used as bearer tokens when doing API authentication. Those are definitely secrets that need to be scanned for.

Or are you suggesting that the API requests are signed with a private key stored in an HSM, and the JWT certifies the public key? Is that common?

vjay151 day ago
Ideally an API key shouldn't contain anything regarding the account or any other info, right? It's meant to be an opaque string, is what I found in most of the other articles I read. Please do let me know if I am wrong about this assumption
arethuza1 day ago
"for rejecting api keys which might have a type" - assuming that is meant to be "typo" - won't they get rejected anyway?
vjay151 day ago
it's just an added benefit, I don't have to make a DB call to verify that :)
Hendrikto1 day ago
JWTs solve some problems but then come with a lot of their own. I do not think they should be the goto solution.
weitendorf1 day ago
Hey OP, sorry for the negativity, I think most of these commenters right now are pretty off-base. My company is building a lot of API infrastructure and I thought this was a great write up!
vjay151 day ago
It is alright, I am learning a lot from them as well, healthy criticism is always useful :) I am very glad that you found this a great write up ^_^
randomint641 day ago
While it's true that API keys are basically prefix + base32Encode(ID + secret), you will want a few more things to make secure API keys: at least versioning and hashing metadata to avoid confused deputy attacks.

Here is a detailed write-up on how to implement production API keys: https://kerkour.com/api-keys
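A minimal sketch of the "prefix + version + base32(ID + secret)" shape described above (the prefix, sizes, and function name are illustrative assumptions, not from the linked write-up):

```python
import base64
import secrets

def generate_api_key(key_id: bytes, version: int = 1) -> str:
    # Version byte up front so the format can evolve without breaking parsers
    secret = secrets.token_bytes(16)  # 128 bits of entropy
    body = base64.b32encode(bytes([version]) + key_id + secret)
    return "sk_" + body.decode().rstrip("=").lower()
```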

9214063141about 20 hours ago
Interesting read, I do have some questions though and hope you could answer them:

1. Why do you use the API key ID AND the organization ID, and not just one of them, to prevent the confused deputy problem?

2. Why is it not necessary to use something like Argon2id for hashing? You say "our secret is already cryptographically-secure", but what does this mean exactly? Is it that the secret already has very high entropy, so that cracking it, even with much faster hash functions like the ones mentioned in your article, would be practically impossible even post-quantum with highly parallelized hardware?

Anyways, very interesting read, thank you!

jeremyloy_wt1 day ago
I don’t understand your explanation on mitigating the confused deputy. If the attacker has access to the database, can’t they just read the IDs for the target row they are overriding first so they can generate the correct hash?
randomint641 day ago
The attack would be like: attacker has read/write access to the database but not to the code of the backend service. Attacker swaps the hash of a targeted API key with the hash of their own API key. Attacker has now access to the resources of the targeted organization when using their own API key.
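One way to mitigate this swap attack, sketched under the assumption (mine, not the commenter's) that the stored hash binds the key's own ID and org ID:

```python
import hashlib
import hmac

def store_hash(api_key_id: str, org_id: str, secret: str) -> str:
    # Bind the key's own ID and the owning org into the stored hash, so a
    # hash copied from another row no longer verifies against this row
    material = f"{api_key_id}:{org_id}:{secret}".encode()
    return hashlib.sha256(material).hexdigest()

def verify(row_key_id: str, row_org_id: str, stored: str, presented_secret: str) -> bool:
    candidate = store_hash(row_key_id, row_org_id, presented_secret)
    return hmac.compare_digest(candidate, stored)
```

Because the row's own IDs go into the digest, a hash lifted from the attacker's row fails verification when checked against the target row.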
vjay151 day ago
Thank you! I will definitely look into it!
Savageman1 day ago
Side note: the slug prefix is not primarily intended for the end-user / developer to figure out which kind of key it is, but for security scanners to detect when they are committed to code / leaked and invalidate them.
vjay151 day ago
Ahhhh I see, I didn't think about it that way too, this could help us a lot yea!!!
ramchip1 day ago
The purpose of the checksum is to help secret scanners avoid false positives, not to optimize the (extremely rare) case where an API key has a typo
matja1 day ago
I suppose there could be two checksums, or two hashes: the public spec that can be used by API key scanners on the client side to detect leaks, and an internal hash with a secret nonce that is used to validate that the API key is potentially valid before needing to look it up in the database.

That lets clients detect leaks, but malicious clients can't generate lots of valid-looking keys to spam your API endpoint and generate database load just from looking up API keys.
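A rough sketch of this two-check idea, with a public CRC32 anyone can recompute plus a server-side keyed tag (all names, sizes, and the key layout here are assumptions for illustration):

```python
import hashlib
import hmac
import zlib

SERVER_SECRET = b"hypothetical-secret"  # would live in a KMS/HSM, not in code

def mint(random_part: str) -> str:
    # Public checksum: anyone, including an offline secret scanner, can recompute it
    crc = format(zlib.crc32(random_part.encode()), "08x")
    # Keyed tag: only the server can recompute it, so bulk-forged keys are
    # rejected before any database lookup
    tag = hmac.new(SERVER_SECRET, random_part.encode(), hashlib.sha256).hexdigest()[:8]
    return f"sk_{random_part}_{crc}_{tag}"

def server_precheck(key: str) -> bool:
    try:
        _, random_part, _, tag = key.split("_")
    except ValueError:
        return False
    expected = hmac.new(SERVER_SECRET, random_part.encode(), hashlib.sha256).hexdigest()[:8]
    return hmac.compare_digest(expected, tag)
```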

ramchipabout 16 hours ago
That second hash is called a Message Authentication Code (MAC), it's what the JWT HS256 algorithm does
vjay151 day ago
thank you so much ram chip :) I didn't know that!
calrain1 day ago
I don't like giving away any information whatsoever in an API key, and would lean towards a UUIDv7 string, just trying to avoid collisions.

Even the random hex with checksum component seems overkill to me, either the API key is correct or it isn't.

andrus1 day ago
GitHub introduced checksums to their tokens to aid offline secret scanning. AFAIK it’s mostly an optimization for that use case. But the checksums also mean you can reveal a token’s prefix and suffix to show a partially redacted token, which has its benefits.
sneak1 day ago
Identifying an opaque value is useful for security analysis. You can use regex to see when they are committed to repos accidentally, for example.
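For example, a scanner regex for a hypothetical key shape (the prefix and field lengths here are made up for illustration):

```python
import re

# Hypothetical key shape: "sk_" prefix, 32 base62 chars, 6-char checksum
KEY_PATTERN = re.compile(r"\bsk_[0-9A-Za-z]{32}_[0-9A-Za-z]{6}\b")

def scan(text: str) -> list:
    # Returns every string in `text` that looks like one of our keys
    return KEY_PATTERN.findall(text)
```

A distinctive prefix keeps the false-positive rate low; the checksum lets the scanner confirm candidates offline.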
vjay154 days ago
Hello everyone this is my third blog, I am still a junior learning stuff ^_^
notpushkin1 day ago
Hey, welcome to HN!

Reading “hex” pointing to a clearly base62-ish string was a bit interesting :-)

Also, could we shard based on a short hash of account_id, and store the same hash in the token? This way we can lose the whole api_key → account_id lookup table in the metashard altogether.

vjay151 day ago
Hello thanks for reading through my blog :D Coming to your question, yes! that is possible I mentioned it in my second approach!

But when I mentioned it to my senior he wanted me to default with the random string approach :)

vjay151 day ago
I NEVER THOUGHT I WOULD BE IN THE MAIN PAGE OF HACKERNEWS THANK YOU SO MUCH GUYS (╥﹏╥)
tlonny1 day ago
Presumably because API keys are n bytes of random data vs. a shitty user-generated password we don’t have to bother using a salt + can use something cheap to compute like SHA256 vs. a multi-round bcrypt-like?
agwa1 day ago
Correct.

Even a million rounds of hashing only adds 20 bits of security. No need if your secret is already 128 bits.
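A minimal sketch of what this means in practice (a plain fast hash for a high-entropy secret; function names are illustrative):

```python
import hashlib
import hmac
import secrets

def issue() -> tuple:
    # 128-bit random secret: no salt or slow KDF needed, the entropy is
    # already far beyond what brute force can cover
    key = secrets.token_urlsafe(16)
    return key, hashlib.sha256(key.encode()).hexdigest()

def check(presented: str, stored_hash: str) -> bool:
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(candidate, stored_hash)
```

Slow KDFs like bcrypt exist to compensate for low-entropy passwords; a 128-bit random secret does not need that compensation.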

vjay151 day ago
I can't understand what you are trying to say :o
numbsafari1 day ago
How are you storing the API key in your database?
vjay151 day ago
hash of the API key just like passwords
tjarjoura1 day ago
I've always been interested in the technical distinction between an API "key" and an API "token". And the terminology of "key" used to confuse me, because I associated that with cryptography, and I thought an API key would be used to sign or encrypt something. But it seems that in many cases it's basically just a long, random password.
vjay15about 12 hours ago
Yes, it's just a random long password used to access public APIs
pdhborges1 day ago
I don't even understand what approach 3 is doing. They ended up hashing the random part of the API key with a hash function that produces a small hash and stored that in the metashard server, is that it?
vjay151 day ago
yea... sorry, I'm still not the best explainer, but that is the approach. I just wanted a shorter hash in the meta shard, that's it. Approach 3 is an attempt by me to generate my own base62/base70 encoder ;-;
petterroea1 day ago
A bit over-engineered, but it was fun to read observations on industry-standard API keys. I agree it would be nice to have more discussion around API keys and the qualities one would want from them.
dhruv30061 day ago
Hey - this was a great blog ! I liked how you used the birthday paradox here.

PS: I too am working on APIs. Take a look here: https://voiden.md/

matja1 day ago
What if the "slug" was a prefix for the API key revocation URL, so the API key was actually a valid URL that revoked itself if fetched/clicked? :)
out_of_protocolabout 6 hours ago
I suspect a lot of tools will try to fetch the URL without explicit user action (e.g. messengers do that kind of crap). Gotta be hard to keep keys non-revoked, which is a nice side-effect
vjay15about 10 hours ago
but API keys aren't meant to be revoked once used, right?
amelius1 day ago
It's a bit confusing that the "Random hex" example contains characters such as "q" and "p".
vjay151 day ago
I don't understand your question :o
onei1 day ago
Hex is 0-9, a-f. P and q are outside that character set.
vjay151 day ago
yes, you are right onei, it is supposed to be a random string instead of hex. I am sorry I made that mistake
vjay151 day ago
fixed it in the blog, thanks for pointing it out amelius ;-;
hk__21 day ago
> I didn't proceed with this approach since I don't want the API keys to have any info regarding the account, but hey it is all just a matter of preference and opinion.

Well I would have done that and saved half the blog post.

usernametaken291 day ago
I know sometimes people just like to try things out, but for the love of god do not implement encryption related functionality yourself. Use JWT tokens and OpenSSL or another established library to sign them. This problem is solved. Not essentially solved, solved. Creating your own API key system has a high likelihood of fucking things up for good!
fabian2k1 day ago
You don't need any encryption or signing for API keys. Using JWTs is probably more dangerous here, and more annoying for people using the API since you now have to handle refreshing tokens.

Plain old API keys are straightforward to implement. Create a long random string and save it in the DB. When someone connects to the API, check if the API key is in your DB and use that to authenticate them. That's it.
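The "plain old API key" flow described above can be sketched like this (a dict stands in for the DB table; names are illustrative):

```python
import hashlib
import secrets

_keys = {}  # sha256(key) -> account id; stands in for a DB table

def create_key(account_id: str) -> str:
    key = "sk_" + secrets.token_urlsafe(24)
    _keys[hashlib.sha256(key.encode()).hexdigest()] = account_id
    return key  # shown to the user once; only the hash is kept

def authenticate(presented):
    # Returns the account id, or None for an unknown key
    return _keys.get(hashlib.sha256(presented.encode()).hexdigest())
```

No signing, no refresh flow; revocation is just deleting the row.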

swiftcoder1 day ago
> Plain old API keys are straightforward to implement

This is pretty much just plain-old-api-keys, at least as far as the auth mechanism is concerned.

The prefix slug and the checksum are just there so your vulnerability scanner can find and revoke all the keys folks accidentally commit to github.

vjay151 day ago
yes this is the approach!
iamflimflam11 day ago
I would add the capability to be able to seamlessly rotate keys.

But otherwise, yes, for love of everything holy - keep it simple.

sabageti1 day ago
We don't store it in plain text, right? Store them hashed, as always.
notpushkin1 day ago
The security here comes from looking the key up in the DB, not from any crypto shenanigans.
sneak1 day ago
This is a very good example of premature optimization.
grugdev421 day ago
Everything about this is over engineered. Just KISS.
codingjoe1 day ago
Is this running in a production environment yet? If so, do you have an email address to disclose a vulnerability?
vjay151 day ago
no this is just a POC, I haven't implemented any of it
codingjoeabout 21 hours ago
Ok, then for everyone: don't save raw tokens in a database. Selects are vulnerable to timing attacks. You want a token to include an ID and a signature. The ID is used to look up the scope or user attached to the token, while the signature is recreated from the ID, the server secret and some salt. The resulting signature is checked against the provided signature with a constant-time comparison.

An attacker will be able to identify valid keys, but won't be able to sign them.

You can either split the values like aws or join them with a separator.

Good idea with the slug though, makes it easier to report leaked tokens to the issuer.
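The ID-plus-signature scheme described above might look roughly like this (the separator, secret handling, and names are my assumptions, not the commenter's exact design):

```python
import hashlib
import hmac

SERVER_SECRET = b"hypothetical-secret"  # would come from a KMS, not source code

def issue_token(token_id: str) -> str:
    sig = hmac.new(SERVER_SECRET, token_id.encode(), hashlib.sha256).hexdigest()
    return f"{token_id}.{sig}"

def verify_token(token):
    try:
        token_id, sig = token.rsplit(".", 1)
    except ValueError:
        return None
    expected = hmac.new(SERVER_SECRET, token_id.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison; the DB lookup by token_id happens only after this
    return token_id if hmac.compare_digest(expected, sig) else None
```

The DB select keys off the non-secret ID, so the timing of the lookup reveals nothing about the secret part.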