
Discussion (16 Comments)
Edit: their blog post (https://subq.ai/how-ssa-makes-long-context-practical) does go pretty in-depth about it
Edit 2: the fact that they're going straight for an end-to-end coding product on day 1 is very ambitious. Other speed/efficiency-oriented AI companies (Cerebras and Inception come to mind) still don't have a first-party coding product after years. IMO this is absolutely the right way to go if they really do have the big breakthrough they're claiming.
- They are admitting that this is built on top of a Chinese model[1]
- They committed a huge chart crime with the Y axis of a chart on their website comparing against Opus, which I can't find anymore (too embarrassing to keep?). It hugely minimized the delta between their score (81%) and Opus's (87%) on SWE-bench
- They named the company Subquadratic, but in places they claim "O(1) linear scaling". At O(1) you could do much more than a 12M-token context window; even at O(log n) you could.
I hope this is real, but I doubt it...
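The complexity comparison above can be made concrete with a back-of-the-envelope sketch. The relative units below are made up (an assumption for illustration only); the point is how the asymptotic classes diverge at the claimed 12M-token context, not any absolute cost:

```python
import math

# Hypothetical relative cost of processing a 12M-token context under
# different asymptotic complexities. Constant factors are ignored, so
# only the ratios between rows are meaningful.
n = 12_000_000  # 12M-token context window

costs = {
    "O(n^2) (standard attention)": n * n,
    "O(n log n)": n * math.log2(n),
    "O(n) (linear)": n,
    "O(log n)": math.log2(n),
    "O(1)": 1,
}

for label, cost in costs.items():
    print(f"{label:>28}: {cost:,.0f} relative units")
```

At these scales, quadratic attention is a factor of n (12 million) more expensive than linear, which is why a genuinely subquadratic or constant-cost mechanism would matter so much; it also shows why "O(1)" and "linear" are very different claims.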
It seems at or above SOTA on the given benchmarks, doesn't have context rot, is orders of magnitude faster, and uses less compute than current transformer models. I suppose it's just an announcement and we can't test it ourselves yet.
I am happy to answer any questions!
Do you anticipate having any kind of publicly accessible chat interface for testing in the near future?
Also, what benefits, if any, are there for smaller context windows? Is there still a material improvement in cost to serve under, say, 256k? I'm curious about the broader implications for the space beyond improvements for very large context windows.
- no published benchmarks
- no paper
- no demonstrations of capabilities