HenryNdubuaku · about 4 hours ago · 61 comments


Hey HN, Henry here from Cactus. We open-sourced Needle, a 26M parameter function-calling (tool use) model. It runs at 6000 tok/s prefill and 1200 tok/s decode on consumer devices.

We had long been frustrated by how little effort goes into agentic models that run on budget phones, so we investigated and arrived at an observation: agentic experiences are built on tool calling, and massive models are overkill for it. Tool calling is fundamentally retrieval-and-assembly (match the query to a tool name, extract argument values, emit JSON), not reasoning. Cross-attention is the right primitive for this, and FFN parameters are wasted at this scale.
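To make the retrieval-and-assembly framing concrete, here is a deliberately naive sketch (not Needle's code; the tool names and keyword-matching heuristic are invented for illustration) of what single-shot tool calling reduces to: match the query to a tool, pull out argument values, emit JSON.

```python
import json

# Illustrative only: single-shot tool calling as retrieval-and-assembly.
# A real model learns this mapping; here we fake it with keyword overlap.
TOOLS = {
    "set_timer": {"keywords": {"timer", "alarm"}, "args": ["minutes"]},
    "send_message": {"keywords": {"message", "text"}, "args": ["recipient"]},
}

def call_tool(query: str) -> str:
    words = set(query.lower().split())
    # Retrieval: pick the tool whose keywords overlap the query most.
    name = max(TOOLS, key=lambda t: len(TOOLS[t]["keywords"] & words))
    # Assembly: naive argument extraction (any digit token -> minutes).
    args = {}
    if "minutes" in TOOLS[name]["args"]:
        nums = [w for w in words if w.isdigit()]
        if nums:
            args["minutes"] = int(nums[0])
    return json.dumps({"tool": name, "arguments": args})

print(call_tool("set a timer for 10 minutes"))
# {"tool": "set_timer", "arguments": {"minutes": 10}}
```

The point is that nothing in this loop requires multi-step reasoning, which is why a 26M model can plausibly cover it.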

Simple Attention Networks: the entire model is just attention and gating, with no MLPs anywhere. Needle is an experimental run at single-shot function calling for consumer devices (phones, watches, glasses...).
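A minimal NumPy sketch of an attention-plus-gating block in this spirit, where a learned sigmoid gate takes the place of the usual FFN. All shapes, the single-head layout, and the gating formulation are assumptions for illustration, not Needle's actual architecture (see the linked writeup for that).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attn_gate_block(x, Wq, Wk, Wv, Wg):
    # Self-attention is the only mixing across token positions.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    attended = scores @ v
    # Gating stands in for the FFN: a sigmoid gate modulates
    # the attention output channel-wise before the residual add.
    gate = 1.0 / (1.0 + np.exp(-(x @ Wg)))
    return x + gate * attended

rng = np.random.default_rng(0)
d = 16
x = rng.standard_normal((8, d))  # 8 tokens, width 16
Ws = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4)]
y = attn_gate_block(x, *Ws)
print(y.shape)  # (8, 16)
```

With no MLP, every parameter in the block goes to attention and gating, which matches the claim that at this scale FFN capacity is wasted on a retrieval-style task.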

Training:
- Pretrained on 200B tokens across 16 TPU v6e (27 hours)
- Post-trained on 2B tokens of synthesized function-calling data (45 minutes)
- Dataset synthesized via Gemini with 15 tool categories (timers, messaging, navigation, smart home, etc.)

You can test it right now and finetune on your Mac/PC: https://github.com/cactus-compute/needle

The full writeup on the architecture is here: https://github.com/cactus-compute/needle/blob/main/docs/simp...

We found that the "no FFN" finding generalizes beyond function calling to any task where the model has access to external structured knowledge (tool use, retrieval-augmented generation). The model doesn't need to memorize facts in FFN weights if the facts are provided in the input. Experimental results to be published.

While it beats FunctionGemma-270M, Qwen-0.6B, Granite-350M, and LFM2.5-350M on single-shot function calling, those models have broader scope and capacity and excel in conversational settings. We encourage you to test on your own tools via the playground and finetune accordingly.

This is part of our broader work on Cactus (https://github.com/cactus-compute/cactus), an inference engine built from scratch for mobile, wearables and custom hardware. We wrote about Cactus here previously: https://news.ycombinator.com/item?id=44524544

Everything is MIT licensed.
Weights: https://huggingface.co/Cactus-Compute/needle
GitHub: https://github.com/cactus-compute/needle



Discussion (61 comments)

alex7o · 11 minutes ago
Of all the models that do tool calls, the only thing I'm confused about is why you picked the worst one. Or maybe they're only bad at agentic work and fine for one-shot tool calls?
HenryNdubuaku · 9 minutes ago
Gemini is pretty solid for one-shot tool calls and affordable as well.
simonw · about 3 hours ago
Suggestion: publish a live demo of the "needle playground". It's small enough that it should be pretty cheap to run this on a little VPS somewhere!
quantumleaper · about 2 hours ago
Should be quick and easy with WebGPU, too.
simonw · about 2 hours ago
That's an even better idea, I bet this could run in Transformers.js.
ilaksh · about 2 hours ago
Good idea. Could you make that?
HenryNdubuaku · about 3 hours ago
Thanks, yeah, the problem is just handling scale; we don't have the infra ready to go, but anyone can do that. It's easy for people to run on their laptops straight up. Will try the VPS route.
benob · about 2 hours ago
Deployed it to a Hugging Face space: https://huggingface.co/spaces/benoitfavre/needle-playground

You can check the very simple Dockerfile there.

simonw · about 1 hour ago
Here's the Dockerfile, it's delightfully simple: https://huggingface.co/spaces/benoitfavre/needle-playground/...
HenryNdubuaku · about 1 hour ago
Thanks!
giancarlostoro · about 3 hours ago
Alternatively, record a video that showcases it.
HenryNdubuaku · about 3 hours ago
OK, will do that now!
roggenbuck · 6 minutes ago
This is some excellent work, Henry! Very excited to try it out.
HenryNdubuaku · 1 minute ago
Thanks, let me know how it goes!
ilaksh · about 3 hours ago
Hmm... this might make it feasible to build something like a command-line program where you can optionally just specify the arguments in natural language. Although I know people will object to including an extra 14 MB and the computation for "parsing", and it could be pretty bad if everyone started doing that.

But it's really interesting to me that that may be possible now. You can include a fine-tuned model that understands how to use your program.

E.g. `> toolcli what can you do` runs `toolcli --help summary`, `toolcli add tom to teamfutz group` = `toolcli --gadd teamfutz tom`

HenryNdubuaku · about 3 hours ago
So Needle is trained for INT4; what you see in the playground is INT4, only 14 MB. Same challenge though.
ilaksh · about 3 hours ago
Oh, gotcha. Fixed my comment.
z3ugma · 18 minutes ago
I don't really understand what this is for... there's a lot of ML-researcher talk on the GitHub page about the model architecture, but how should I use it?

Is it a replacement for Kimi 2.7, Claude Haiku, or Gemini Flash 3.1 lite, a conversational LLM for situations that are mostly tool-calling, like coding and conversational AI?

HenryNdubuaku · 11 minutes ago
It is for building agentic capabilities into very small devices like phones, glasses, watches and more. Does that make sense?
kristopolous · about 2 hours ago
That M versus B is way too subtle. 0.026B is my suggestion.
HenryNdubuaku · about 2 hours ago
Haha, we were trying not to be too hand-wavy :)
dangoodmanUT · 29 minutes ago
Why pick Gemini? It's probably the worst tool-calling model of the major labs.
HenryNdubuaku · 20 minutes ago
Cheaper APIs.
simonw · about 3 hours ago
Looks like you need to open up access to https://huggingface.co/Cactus-Compute/datasets/needle-tokeni... I get this error when trying to run the steps in your README:

> Repository Not Found for url: https://huggingface.co/api/datasets/Cactus-Compute/needle-tokenizer/revision/main.

HenryNdubuaku · about 3 hours ago
Fixed now, apologies!
zamalek · about 1 hour ago
Is the idea here to add function calling to models that don't have it, or even to improve function calling (Qwen quirks)?
HenryNdubuaku · about 1 hour ago
So it's a tiny model capable of function calling that could run locally on cheap devices.
rsolva · about 1 hour ago
Can it summarize text it fetches?

Come to think of it, this could be a nice model to have as the first pass in a more complex agent system, where Needle hands off the results of a tool call to a larger model.

I will definitely play around with this!

HenryNdubuaku · about 1 hour ago
The codebase is fully open, feel free to play around!
logdahl · about 2 hours ago
I find this stuff super fascinating and have been thinking about it myself. Maybe one could bootstrap tiny models on a rather 'pure' procedural dataset. Neglecting [0], of course...

[0]: http://www.incompleteideas.net/IncIdeas/BitterLesson.html

HenryNdubuaku · about 2 hours ago
Sounds interesting, would love to see it too!
bityard · about 1 hour ago
This is pretty much exactly what I want for Home Assistant. I yell out, "Computer! Lights!" and it toggles the lamp in the room on or off. (I mean, I can do that now, I think, but probably with a much larger model.)

I haven't played with it yet, but does it ever return anything other than a tool call? What are the failure modes? What if it doesn't understand the request? Does it ever say it can't find a tool? Does it get confused if there are two similar (but different) tools? Can it chain tools together (e.g. one tool to look up an address and another to get directions to the address)?

I mean, I plan on downloading the model later tonight and finding out for myself, but since I'm stuck at work right now, I figured I'd ask anyway...

HenryNdubuaku · 14 minutes ago
Let me know what you think!
Havoc · about 2 hours ago
Sounds interesting.

Got a bunch of errors trying to run it on CPU, though. Very likely connected to me running this in a container (unprivileged LXC), but I figured for 26M, CPU would suffice.

https://pastebin.com/PYZJKTNk

dakolli · about 2 hours ago
It had better, considering its purpose is to run on devices with no GPU.
quadrature · about 1 hour ago
Does the model have capacity for in-context learning? If we give it examples of patterns, can it follow them?
HenryNdubuaku · about 1 hour ago
Not yet. But it's in the works!
murkt · about 2 hours ago
Can this be a Siri-like core? Set me a timer, tell me what the weather is, etc. Here is the transcribed text and the available list of tools for the model to call, then voice the output.
HenryNdubuaku · about 2 hours ago
That was the goal!
varispeed · 24 minutes ago
What is the use case for this?
HenryNdubuaku · 13 minutes ago
Deploying AI on tiny devices like watches, earphones, glasses, etc.
deepsquirrelnet · about 2 hours ago
This is really cool. Any plans to release the dataset?
HenryNdubuaku · about 2 hours ago
We include the dataset pipeline in the codebase so far; we might release the dataset.
BoredPositron · about 1 hour ago
I source old, defective high-end radios with timeless designs from brands like Grundig or Braun, and replace the original hardware with a Raspberry Pi while using the original audio parts to build custom smart speakers. Reliable hotword detection and voice command recognition have been a persistent challenge over the years, but Whisper and other small models have helped enormously. At the moment I have Ollama running on my server with Qwen 9B, which works fine, but a 26M model that could be deployed on the Pi itself would be amazing.
HenryNdubuaku · 13 minutes ago
Sounds cool, play with it and let us know what you think!
cmrdporcupine · about 3 hours ago
This is very cool. I'm going to try to carve out some time to try building this into my MOO system ( https://codeberg.org/timbran/moor / https://timbran.org/moor.html ) as an alternative command parser front end.
Balinares · about 2 hours ago
Man, I love that there are still people writing new MOO servers in 2026. Any game out there already running on mooR?
cmrdporcupine · about 1 hour ago
Many people tease that they will, and start... but then kinda stop. Mostly I've just been building my own bespoke thing on my own bespoke platform, and kinda running out of steam because I need to make $$ instead.
HenryNdubuaku · about 3 hours ago
Thanks, let us know how it goes!
ac29 · about 2 hours ago
FYI, distilling Gemini is explicitly against the ToS:

"You may not use the Services to develop models that compete with the Services (e.g., Gemini API or Google AI Studio). You also may not attempt to reverse engineer, extract or replicate any component of the Services, including the underlying data or models (e.g., parameter weights)."

Havoc · about 2 hours ago
Yeah, I think Google should shove that somewhere. They effectively distilled all the internet's knowledge into these models... without asking and without permission.
HenryNdubuaku · about 2 hours ago
Thanks. Needle doesn't compete with those tools, though, and the distillation process did not access the weights.
ilaksh · about 2 hours ago
I think GLM 5.1 or Kimi 2.6 could substitute for this type of purpose.
iAMkenough · about 1 hour ago
FYI, Gemini was developed using stolen copyrighted works without author consent. The double standard is striking.
ForHackernews · about 2 hours ago
So is copying all the books in the world.
vablings · about 2 hours ago
Oh no! They stole the model weights! Distillation "attacks" are such bullshit.
xgulfie · about 2 hours ago
This is being downvoted, but it's worth noting if only for the "be careful" aspect.

That said, we need more people distilling models IMO; just be ready for a C&D and a ban.