To see what the agent sees, you can load https://getadb.com/new
There are two fun things about how it's implemented:
1. If you curl the home page, it returns the agent content rather than the human content. We do this by detecting the 'Sec-Fetch-Mode' header. It's not perfect, but it gets the job done for Claude Code et al.
2. For an agent to spin up an app, it makes _two_ fetches: (1) getadb.com/guide tells it to generate a UUID, and (2) it fetches getadb.com/provision/<uuid>. We did this because about half of the popular web-based app builders cache URLs globally, even if you return no-store headers. To get around this we just instruct the agent to generate unique URLs.
You may wonder: Why GET requests, rather than POST requests? It's because then you can build in surprising places. For example, we get meta.ai to build an app inside the artifact preview: https://artifacts.meta.ai/share/a/b80c7412-c3af-4088-b430-78efdfe8ea2d
Under the hood, this is possible because the whole infra is multi-tenant from the ground up. We already announced how that works on HN, but if you're curious, here's the essay: https://www.instantdb.com/essays/architecture

"Request methods are considered 'safe' if their defined semantics are essentially read-only; i.e., the client does not request, and does not expect, any state change on the origin server as a result of applying a safe method to a target resource." -RFC 9110 section 9.2.1
https://www.rfc-editor.org/rfc/rfc9110.html#name-safe-method...
In practice many GET requests don't adhere to this spec. For example, when you load a page, your "view" generally changes lots of things on the backend. Those changes flow back to you too: consider view counts on YouTube videos or X posts.
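The point above can be made concrete with a toy handler. This is purely illustrative (not any real site's code): a GET that mutates state, technically violating RFC 9110's "safe" semantics, with the changed state flowing back to the viewer:

```typescript
// In-memory view counts, keyed by video id.
const viewCounts = new Map<string, number>();

// A GET handler that has a side effect: every "read" bumps the count.
function handleGet(videoId: string): number {
  const next = (viewCounts.get(videoId) ?? 0) + 1;
  viewCounts.set(videoId, next); // state change on a "safe" method
  return next; // the mutated state is returned to the requester
}

handleGet("abc");
handleGet("abc");
console.log(handleGet("abc")); // 3
```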
http, not https?
For GETadb, it's a conflicting sell. The people that need "a db solved by AI" and fully abstracted are using app builders, no? lovable, v0, manus. The people that are closer to the code and need an instant db would look to sqlite, render, supabase, neon. I'm all for another option, but then there's the realization that Instant is a new kind of db and I need to research the value-prop vs the initial persona: "just solve my db problem with AI".
disclaimer: I'm a professional developer, doing an honest review. I may play around with it separately, later. So this marketing site did its job!
We hope delightful experiences like that then prod hackers to dive deeper and use Instant for startups.
[1] https://x.com/JoeAverbukh/status/2028544576206860697
When you do want to get closer to the code, we think Instant provides a nicer abstraction for working with agents and getting delightful experiences like a sync engine out of the box.
Problem is supabase rots. And turning that project into anything meaningful is basically undoing everything you got for free up front.
My solution today is sqlite. I'm not diehard typescript, so it turns out traditional backend apps like rails running sqlite on tiny/free hardware are pretty nice.
That said, client-side runtime will always be alluring because it can be deployed statically. So you've got something there that I'll check out.
Is this the kind of use case that is seen as valuable?
I joked a while back that LLM-brain was going to have people building bespoke apps on each HTTP request, and people thought I was exaggerating!
I think it could be. Consider an argument like this:
It's valuable to ask ChatGPT questions and receive text responses. Some of the responses are more valuable when they don't just return text, but some markup: bolding, adding visualizations etc. Why can't some responses be more valuable if they return little apps?
One place where I've wanted this myself is using LLMs for long-running goals I have. For example, I do my blood work about once a year, and I use the results to make changes and track them. For a long time I had a long chat thread with ChatGPT. Now I have a little app instead.
An extreme version of this starts to turn responses into more and more fully-fledged apps. I did an experiment recently with creating a personal finance app. I found customizing the app to my specific needs made it much more valuable to me than generic personal finance apps, which have much more effort put into them, but aren't tailored to my needs [^1]
[^1]: more on this experiment here: https://x.com/stopachka/status/2040982623636607009
Err, no thanks.
But why do we need this? An agent can just have a local DB using SQLite for example.
1. With this, agents can actually deploy a full backend with their credentials [^1].
2. If your agent ever wants to add auth, or real-time presence, or file uploads, or streams, they'll be able to do that too
[^1]: Alas, we don't offer static site hosting, so to push the website you would need to use something like the Vercel CLI.
Why are your database instructions giving instructions about the UI design?
But I was curious and just did an ad hoc eval.
Here's a version with the aesthetic line included
https://with-aes.vercel.app/
Here's a version without the line
https://wo-aes.vercel.app/
Everything else is the same. Will let y'all be the judge which is better.
Both were made in one shot with this prompt:
Create a habit tracking app where users can create habits, mark daily completions, and visualize streaks. Include features for setting habit frequency (daily/weekly), viewing completion calendars, and tracking overall progress percentages.
1. For the users table specifically, we have a default rule that says `"view": "auth.id == data.id"`. This way, even if the user (or AI) did not set access controls, user data is protected by default.
2. In the instructions file given to the agent (https://www.getadb.com/provision/new), we specifically mention permissions and how to push them. We found this prods the agent to push perms.
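For readers unfamiliar with Instant's permissions, here's a sketch of how a rule like the default one above could be expressed. The namespace/allow structure follows Instant's documented permissions shape, but treat the exact file layout as an assumption; only the `"view": "auth.id == data.id"` expression comes from the comment above:

```typescript
// Hypothetical permissions object for the `users` namespace.
const rules = {
  users: {
    allow: {
      // Only the authenticated user can read their own row, even if
      // the agent never pushed any permissions of its own.
      view: "auth.id == data.id",
    },
  },
};

console.log(rules.users.allow.view); // "auth.id == data.id"
```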
> Generate a random UUID yourself and use a different UUID each time.
LLMs are terrible at this. If you are relying on this to prevent collisions, it will fail badly.