Discussion (14 Comments)
As someone who is tinkering with a desktop-based inference app in FLTK[0], I wish this used the actual Jinja2 template parser llama.cpp uses (or that there was another C function that did this, since AFAICT for "proper" parsing you need to be able to pass a bunch of data to the template so it knows whether you, e.g., do tool calling). Currently I'm using an adhocky function, but I guess I'll either write a Jinja2 interpreter or copy/paste the one from llama.cpp's code (depending on how I feel at the time :-P).
But yeah, GGUF's "all-in-one" approach is very convenient. And I agree that it feels odd to have the projection models as separate files - I remember when I first downloaded a vision-capable model, I just grabbed whatever GGUF looked appropriate, then llama.cpp told me it couldn't load the model, and it took me a bit to realize that I had to download an extra file. Literally my thought once I did was "wasn't GGUF supposed to contain everything?" :-P
[0] https://i.imgur.com/GiTBE1j.png
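For what it's worth, a minimal sketch of the non-Jinja2 route the parent mentions: llama.cpp exposes llama_chat_apply_template(), which formats a message list by pattern-matching a set of known templates rather than interpreting the Jinja2 text itself. The signature has changed across releases (older builds take a llama_model * as the first argument), so treat this as a sketch against one recent version, not a drop-in:

```c
#include <stdio.h>
#include "llama.h"

int main(void) {
    // Two turns of a conversation; llama_chat_message is just role + content.
    struct llama_chat_message msgs[] = {
        { "system", "You are a helpful assistant." },
        { "user",   "Hello!"                       },
    };

    char buf[4096];
    // "chatml" names one of the built-in templates; you can instead pass the
    // raw template string read from the GGUF metadata (see the next sketch),
    // and llama.cpp will try to recognize which known format it is.
    int32_t n = llama_chat_apply_template("chatml", msgs, 2,
                                          /*add_ass=*/ true,  // append the assistant prefix
                                          buf, (int32_t) sizeof(buf));
    if (n < 0) {
        fprintf(stderr, "template not recognized\n");
        return 1;
    }
    if (n > (int32_t) sizeof(buf)) {
        // The return value is the size actually required; a real app would
        // grow the buffer and call again.
        fprintf(stderr, "buffer too small, need %d bytes\n", n);
        return 1;
    }
    printf("%.*s", n, buf);
    return 0;
}
```

The catch is exactly the one the parent raises: this matcher doesn't execute the template, so anything data-dependent like tool calling is out of scope; for that, llama.cpp vendors a small Jinja2 engine (minja), which is the copy/paste option mentioned above.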
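And on the "all-in-one" point: the chat template itself does live inside the GGUF, under the tokenizer.chat_template metadata key. A rough sketch of pulling it out with ggml's gguf API follows; header locations and some gguf_* details (e.g., the key-id type) have moved between releases, so adjust to your checkout:

```c
#include <stdio.h>
#include "gguf.h"

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s model.gguf\n", argv[0]);
        return 1;
    }

    // no_alloc = true and ctx = NULL: read metadata only, skip tensor data.
    struct gguf_init_params params = { /*no_alloc=*/ true, /*ctx=*/ NULL };
    struct gguf_context * ctx = gguf_init_from_file(argv[1], params);
    if (!ctx) {
        fprintf(stderr, "failed to open %s\n", argv[1]);
        return 1;
    }

    int64_t key = gguf_find_key(ctx, "tokenizer.chat_template");
    if (key >= 0) {
        // The template is stored as a plain string value.
        printf("%s\n", gguf_get_val_str(ctx, key));
    } else {
        fprintf(stderr, "no chat template embedded in this GGUF\n");
    }

    gguf_free(ctx);
    return 0;
}
```

The vision projector is the known exception to "everything in one file": it ships as a separate GGUF that tools like llama-mtmd-cli load via --mmproj alongside the main model, which is the extra download described above.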
It's good at writing and coding, and decently intelligent.
You can try it on NVIDIA NIM.
Try both in LM Studio; they really are surprisingly capable.
Tried all the stuff: BIOS, volting.
I love TheBloke; I wish he still made stuff.
I didn't want to get personal with an LLM unless it was local, which is why I was setting this up. So far, research is all I was looking at.
hmmm...