Discussion (7 Comments)
I did cut DeepSeek v4 loose on a decent-sized TypeScript codebase and asked it to focus on a single endpoint and go in depth on it layer by layer (API, DTOs, service, database models), form a complete picture of the types involved and introduced, and ensure no ad-hoc types were being introduced.
It produced a brief but very to-the-point summary of the types being introduced and which of them were redundant, etc.
Then I asked it to simplify it all.
It obviously went through lots of files in both prompts, but the total cost? Just $0.09 on the Pro version.
On Claude Opus, I think (from past experience, before the price hikes) these two prompts alone would easily have burned somewhere between $9 and $13, without much benefit.
Note - I didn't use OpenRouter; I used the DeepSeek API directly, because OpenRouter itself was being rate-limited by DeepSeek.
Those tokens are heavily subsidized, but DeepSeek's API pricing is looking really good. For example, with an agentic coding setup (roughly 85% input, 15% output, and around 90% cache reads) I'd get around 150M tokens per month for the same 100 USD; see the back-of-the-envelope sketch below. Even with more output tokens and worse cache performance, it'd still most likely be upwards of 100M.
Even without the currently discounted pricing, the value is incredible.
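For anyone who wants to sanity-check that estimate, here is a minimal sketch of the blended-cost arithmetic behind it. The price table is a made-up placeholder, not DeepSeek's actual published rates; plug in the current per-million-token prices from their pricing page to reproduce the figure above.

```typescript
// Minimal blended-cost sketch for an agentic coding workload.
// NOTE: the prices below are placeholder values for illustration,
// NOT DeepSeek's published rates -- substitute the current ones.

interface PriceTable {
  inputCacheMiss: number; // USD per 1M input tokens on a cache miss
  inputCacheHit: number;  // USD per 1M input tokens on a cache hit
  output: number;         // USD per 1M output tokens
}

// Mix from the comment above: ~85% input / 15% output,
// with ~90% of input tokens served from the prompt cache.
function tokensForBudget(
  budgetUsd: number,
  prices: PriceTable,
  inputShare = 0.85,
  cacheHitRate = 0.9,
): number {
  // Blended cost in USD per 1M tokens across the whole mix.
  const blendedPerMillion =
    inputShare *
      (cacheHitRate * prices.inputCacheHit +
        (1 - cacheHitRate) * prices.inputCacheMiss) +
    (1 - inputShare) * prices.output;
  return (budgetUsd / blendedPerMillion) * 1_000_000;
}

// Placeholder rates, assumed for illustration only.
const assumedPrices: PriceTable = {
  inputCacheMiss: 0.27,
  inputCacheHit: 0.07,
  output: 1.1,
};

const tokens = tokensForBudget(100, assumedPrices);
console.log(`~${Math.round(tokens / 1e6)}M tokens for $100 at these assumed rates`);
```

Note that the monthly total is just budget divided by the blended dollars-per-million-tokens rate; the 150M-per-$100 figure above implies a blended rate of roughly $0.67 per million tokens.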
It takes about twice as long to finish code reviews on identical context compared to Opus 4.7 / GPT 5.5, but at 1/10 the cost or less, there's just no comparison.
https://twitter.com/aljosa/status/2049176528638902555
To be clear, I'm not doing state-of-the-art stuff. I mostly used it for frontend development, since I'm not great at that and just need a decent-looking prototype.
But for my purposes it's a perfectly good model, and the price is decent.
I can't wait for an open model small enough for me to run locally to come out, though. I hate having to rely on someone else's machines (and getting all my data exfiltrated that way).
Which provider are you using for inference? Opencode or the DeepSeek API?
Keep the pelican, but isn't it time to add something more novel that all current and past models struggle with?