I built localLLLM: a small community project for running local models.
Live: https://locallllm.fly.dev
The goal is simple: if someone has a given model + OS + GPU + RAM combination, they should get steps that actually work (ideally a one-liner).
I need help populating and validating guides.
If you run local models, please submit one working recipe (or report what failed). I'd love to hear general feedback as well!
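For a sense of what "one-liner recipe" means here: a sketch of the kind of entry a contributor might submit, assuming Ollama as the runtime (the site's actual recipe format, and which runtimes it covers, aren't specified in this post):

```shell
# Hypothetical recipe: macOS (Apple Silicon), 16 GB RAM, Ollama already installed.
# First run pulls the model weights, then opens an interactive prompt.
ollama run llama3.2
```

A useful submission would pair a command like this with the hardware it was verified on, so the site can match it to a visitor's model + OS + GPU + RAM.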
