Dexcom G7 (cloud API)
Tandem t:slim X2 and Mobi pumps (direct BLE)
Nightscout (point it at your existing instance and you're running in minutes)
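For readers wondering what "point it at your existing instance" involves, here is a minimal sketch of pulling recent CGM entries from a Nightscout instance over its standard REST API. This is my own illustration, not code from the GlycemicGPT repo; the URL and token are placeholders.

```python
# Minimal sketch: poll a Nightscout instance for recent CGM entries.
# NIGHTSCOUT_URL and TOKEN are placeholders; illustrative only.
import requests

NIGHTSCOUT_URL = "https://your-nightscout.example.com"
TOKEN = "your-read-token"

def fetch_recent_entries(count: int = 12) -> list[dict]:
    """Fetch the latest sensor glucose values (SGV), newest first."""
    resp = requests.get(
        f"{NIGHTSCOUT_URL}/api/v1/entries.json",
        params={"count": count, "token": TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for entry in fetch_recent_entries():
        # Nightscout reports sgv in mg/dL along with a trend direction.
        print(entry.get("dateString"), entry.get("sgv"), entry.get("direction"))
```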
What the AI layer does:
Daily briefs summarizing overnight and 24-hour patterns
Meal response analysis
Conversational chat with RAG-backed clinical knowledge
Predictive alerting with configurable thresholds and caregiver escalation
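To make "predictive alerting with configurable thresholds and caregiver escalation" concrete, here is a rough sketch of the general idea: project the recent glucose trend forward and escalate if the projection crosses a configured low threshold. This is my illustration, not the project's actual algorithm; the thresholds and escalation rule are example values only.

```python
# Illustrative sketch of threshold-based predictive alerting.
# The real GlycemicGPT logic may differ; values are examples only.
from dataclasses import dataclass

@dataclass
class AlertConfig:
    low_mg_dl: float = 70.0        # urgent-low threshold
    horizon_min: float = 30.0      # how far ahead to project
    escalate_after: int = 2        # unacknowledged alerts before caregiver ping

def projected_glucose(readings_mg_dl: list[float], minutes_apart: float,
                      horizon_min: float) -> float:
    """Linear projection from the last two CGM readings."""
    if len(readings_mg_dl) < 2:
        return readings_mg_dl[-1]
    slope = (readings_mg_dl[-1] - readings_mg_dl[-2]) / minutes_apart
    return readings_mg_dl[-1] + slope * horizon_min

def check_alert(readings: list[float], cfg: AlertConfig, unacked: int) -> str | None:
    future = projected_glucose(readings, minutes_apart=5.0,
                               horizon_min=cfg.horizon_min)
    if future < cfg.low_mg_dl:
        return "notify_caregiver" if unacked >= cfg.escalate_after else "notify_user"
    return None

# Example: 120 -> 100 mg/dL over 5 minutes projects well below 70 within 30 minutes.
print(check_alert([120.0, 100.0], AlertConfig(), unacked=0))  # notify_user
```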
Important: this is monitoring and analysis only. GlycemicGPT does not deliver insulin, does not control your pump, and is not a closed-loop system. It reads your data and gives you insight on top of it. Your clinical decisions stay between you and your care team.

Architecture:
Self-hosted via Docker or K8S — the GlycemicGPT stack runs entirely on your hardware
BYOAI — bring your own AI provider. Use Ollama for fully local operation (no data leaves your hardware), or point it at Claude, OpenAI, or any OpenAI-compatible endpoint if you prefer a hosted model. Data flows directly from your instance to the provider you choose; nothing is routed through any centralized service operated by the project.
GPL-3.0, no subscriptions, no vendor lock-in
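As an illustration of what "any OpenAI-compatible endpoint" means for the BYOAI setup, the sketch below (my own, with placeholder model names) shows the same client code talking either to a local Ollama server or to a hosted provider, differing only in the base URL and API key.

```python
# BYOAI illustration: the same OpenAI-style client can target a local
# Ollama server or a hosted provider. Endpoints and models are examples.
from openai import OpenAI

# Fully local: Ollama exposes an OpenAI-compatible API on localhost.
local_client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Hosted alternative (data leaves your hardware): change only endpoint/key.
# hosted_client = OpenAI(api_key="sk-...")

def daily_brief(client: OpenAI, model: str, summary: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Summarize overnight glucose patterns."},
            {"role": "user", "content": summary},
        ],
    )
    return resp.choices[0].message.content

# Example with a local model (model name is a placeholder):
# print(daily_brief(local_client, "llama3.1", "Overnight readings: ..."))
```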
Stack:
Backend API: FastAPI, Python 3.12, PostgreSQL 16, Redis 7
Web Dashboard: Next.js 15, React 19, Tailwind CSS, shadcn/ui
AI Sidecar: TypeScript, Express, multi-provider proxy
Android App: Kotlin, Jetpack Compose, BLE
Wear OS: Kotlin, Wear Compose, Watch Face Push API
Plugin SDK: Kotlin interfaces, capability-based, sandboxed
Looking for contributors — especially folks with BLE/Android experience or anyone in the diabetes tech space. Plugin SDK is documented if you want to add support for new devices. GitHub: https://github.com/GlycemicGPT/GlycemicGPT

Discussion (36 Comments)
Monitoring and analytics are important, but they are a solved problem. A language model will only be able to hallucinate about the relationship between meals and glycemic response. At best it does no harm; at worst it can directly misinform.
But I will check this algo out. Maybe it has some interesting bits.
We're still debating and trying to understand what impact AI has on software engineering and quality, let alone putting AI into something that's directly linked to a human's well-being.
Is your perspective based on, say, opinionated principle, or on experience?
The benefits are enormous.
The risks? What risks? No diabetic with baseline adult competence is going to drive their insulin-delivery vehicle off a cliff because some app said so.
If you can't trust this thing, then what is it doing? The implication that people who trust this software lack adult competence is also confusing.
> Is your perspective based on, say, opinionated principle, or on experience?
your perspective is solely based on recent trauma so I don't know if it is more reliable in any capacity
Don't do as I say. I'm just a rando from the Internet.
Don't do as the author of the posted software does. Don't do what the software tells you either. But the software can certainly build an informative perspective and suggest patterns and movements in an exquisitely complex disease. Managing T1D with a pump is exhausting.
Second, re. "your perspective is solely based on recent trauma so I don't know if it is more reliable in any capacity"
This kind of statement is far beyond anything bounded by the self-respect of a balanced adult. What the fuck, and who are you?
My ex-fiancée almost died in 2020. We lost an unborn child in IVF due to grave neglect by the healthcare system, which missed the glaringly obvious Type 1 diabetes she had; they never once checked her blood sugar. You know what I did? I read the literature. I read medicine, I read molecular biology, I read neuroimmunobiology, I read about the placenta and fetal development.
I stood by my fiancée and carried her by hand back to health. She recovered faster than the endocrinologists expected. Her pregnancy was exemplary, fully intact placental vitals out to 38.5 weeks. Healthcare is in such a bad state that I was forced to interject and argue coolly and adamantly with doctors on several occasions about potentially severe mistakes they were about to make. EVERY SINGLE TIME I interceded, it was confirmed correct by a second opinion from a senior doctor.
I don't come here speaking from trauma. I come here speaking from grim and serious and confirmed lived experience of stepping in and caring, without any margin for error. Know how you do that? With extreme humility and the utmost care.
Who are you to speak to me like that? I can tell that you know not at all who I am or what I have been tasked with in this life, because then you would not. talk. to me. this way. Okay?
My local physician says otherwise, with respect to Facebook posts about dosages. I'm convinced the same applies to LLM-generated content, with respect to people blindly following the computer.
It is entirely possible to use software like the one in this post beneficially and safely.
Changing parameters on the insulin pump because the LLM said so
Neglecting to seek actual medical advice, believing an LLM replaces it
Misunderstanding medical complexity (e.g. a prescription due to medical history not available to the LLM)
You 1000% don't work with the general public in a tech way.
This is not an app for the general public.
About the risks: managing type 1 diabetes is exhausting, and most people will still sanity-check the output alongside the hundreds of treatment decisions they make every day. That doesn't change the fact that tools like this can nudge you to notice and look into patterns or things that need attention.
And how do you deal with AI hallucinations?
Otherwise, when tuned correctly, oref1 et al. provide amazing results and are safe. Hard to understand where I would use LLMs in this.
I genuinely don't see where I would use an LLM in this process.
How do you protect your life and the life of others using your software against potential lethal errors?
The hardest thing to learn was that an unhealthy lifestyle resulted in diabetes that was harder to manage. Too many carbs, not enough exercise, etc. After adjusting my lifestyle, it became quite easy.
The most pain, in my experience, comes from the discrepancy between the CGM-measured value and the prick-test value, even when accounting for time lag. I've used several CGMs and they've all been wildly off sometimes. I have a few T1D acquaintances who relied on their CGM alone and have significantly improved their HbA1c after accounting for that.
Maybe that information is useful to you.
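The CGM-vs-fingerstick discrepancy this commenter describes is easy to quantify yourself. Below is a quick sketch (the numbers are made up for illustration) of the mean absolute relative difference (MARD) between paired readings; published MARD figures for current sensors are typically under 10%, but individual sessions can be far worse, which matches the experience described above.

```python
# Quick sketch: mean absolute relative difference (MARD) between paired
# CGM and fingerstick readings. Numbers below are made up for illustration.
def mard(cgm_mg_dl: list[float], fingerstick_mg_dl: list[float]) -> float:
    """Average of |CGM - reference| / reference, as a percentage."""
    diffs = [abs(c - r) / r for c, r in zip(cgm_mg_dl, fingerstick_mg_dl)]
    return 100.0 * sum(diffs) / len(diffs)

cgm = [142.0, 95.0, 180.0, 64.0]   # CGM readings, mg/dL
ref = [130.0, 101.0, 165.0, 75.0]  # matched fingerstick readings, mg/dL
print(f"MARD: {mard(cgm, ref):.1f}%")  # prints 9.7% for these made-up values
```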
But if someone dies because this thing hallucinates in its reporting, would you feel any sense of culpability?
“GPL says no warranty”
“People need to double check LLM output”
“You’re holding it wrong”
I really don't know if we, collectively as a civilization, should be willing to accept this kind of hand-waving when it comes to creating things like this. Sure, tools make mistakes and people misinterpret reports without the help of LLMs, but LLMs are on a whole other level, where the mistakes are just part of how these things work at a fundamental level.
I don't even trust AI scribes at my doctor's office to transcribe my appointment, due to errors. There is no way in hell I would ever use something like this that could just straight up lie about something that kills me if I get it wrong.
The data available to the LLM in OP's app is the polar opposite. It's all actionable and real, so I bet it can draw more useful insights than Whoop reminding you that you didn't exercise all week.
On your work:
this is legit
it is appreciated
Hats off, I salute this, thank you
Marvin
It's so helpful to offload some of the thinking about the condition to AI; all these people moaning about 'muh safety' don't get it. T1D sufferers have to think about it all day, all the time. A person doesn't have their own blood glucose data in their head.
Probably something like an SVM for warnings.
Unless the whole purpose is just daily reports.
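For what it's worth, the "SVM for warnings" idea is straightforward to prototype. Here is a toy sketch with scikit-learn on synthetic data, purely to illustrate the commenter's suggestion; it is unrelated to anything GlycemicGPT ships.

```python
# Toy sketch: classify "warning / no warning" from simple trend features
# using an SVM. Data is synthetic and purely illustrative.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Features: [current glucose (mg/dL), rate of change (mg/dL per minute)]
X = np.vstack([
    rng.normal([150.0, 0.0], [30.0, 1.0], size=(200, 2)),   # stable periods
    rng.normal([90.0, -2.5], [15.0, 1.0], size=(200, 2)),   # heading low
])
y = np.array([0] * 200 + [1] * 200)  # 1 = issue a warning

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# A reading of 100 mg/dL dropping ~3 mg/dL/min should trigger a warning.
print(clf.predict([[100.0, -3.0]]))  # expected: [1]
```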
Do you find the analytics actually helps? I.e. a lot of this will depend on what you ate and whether or not you logged it?
It's breaking the golden rule of these tools, which is to have someone with enough knowledge to verify the accuracy of the data they spit out. Patients famously don't. Hell, even the actual staff don't really understand or know how these tools work (or the ways in which you can/can't trust them).