Discussion (35 Comments)
Have you considered NOT using an LLM to test your game? Because your game is turn-based and text-based, could you separate rendering and logic entirely (it sounds like you may have already done this) and run a headless simulator that simulates thousands of games using a Monte Carlo-type method? Is your game fully deterministic outside of player input?
Reason I ask is that I'm making a game that's fully deterministic; the only randomness is player input, and the same inputs produce the same outputs from my traditional AI enemies.
With this in mind, I was able to completely separate rendering and game logic, and to tune my enemy AI (traditional AI not LLM) I can run millions of simulated games headless and generate reports of the games, and basically toggle AI parameters automatically each game until my AI is “perfect” for its archetype signature.
I can run tens to hundreds of games in parallel, and I can run a typical 5 minute game in seconds.
Then I can capture that game and recreate it and watch replays etc.
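The headless-simulation loop described above can be sketched roughly like this. Everything here is hypothetical (the `run_game` logic, the `aggression` parameter, and the sweep structure are illustrations, not the commenter's actual engine); the point is that a seeded, rendering-free game function makes parameter sweeps and determinism checks trivial, and each game could be farmed out to a `ProcessPoolExecutor` for parallelism:

```python
import random

def run_game(params, seed):
    """Hypothetical headless game: fully deterministic given params + seed.
    The seeded RNG stands in for a recorded stream of player inputs."""
    rng = random.Random(seed)
    score = 0
    for _turn in range(100):
        # Toy enemy-AI logic: higher aggression trades safety for points.
        if rng.random() < params["aggression"]:
            score += 2
        else:
            score += 1
    return {"params": params, "seed": seed, "score": score}

def sweep(param_grid, seeds):
    """Run every parameter value against the same fixed seeds,
    so AI variants are compared on identical 'player input'."""
    results = []
    for aggression in param_grid["aggression"]:
        for seed in seeds:
            results.append(run_game({"aggression": aggression}, seed))
    return results

results = sweep({"aggression": [0.1, 0.5, 0.9]}, seeds=range(10))

# Determinism check: same inputs must give same outputs.
assert run_game({"aggression": 0.5}, seed=3) == run_game({"aggression": 0.5}, seed=3)
```

Because each game is a pure function of `(params, seed)`, capturing and replaying a game is just re-running it with the same arguments.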
My game is also a browser game, but I built my own engine for it from scratch, with no external libraries.
For example, for me the reports will basically be data points per AI archetype - like how often they collide with a wall, how often they perform certain actions, how often they get blocked or go idle. Straight numbers or booleans. This plus an Elo-type system to rate the AIs against one another so I can have an AI tier list. Then I can get an LLM to ingest the data and pick out issues / outliers etc.
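The Elo-style rating idea mentioned above is simple to sketch. This is the standard Elo update rule, not anything specific to the commenter's game; the archetype names and K-factor of 32 are illustrative assumptions:

```python
def expected(r_a, r_b):
    """Expected score of A against B under the Elo model."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, score_a, k=32):
    """Return new (r_a, r_b) after one game.
    score_a is 1 for a win, 0.5 for a draw, 0 for a loss."""
    e_a = expected(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1 - score_a) - (1 - e_a))

# Hypothetical AI archetypes, all starting at 1500.
ratings = {"rusher": 1500.0, "turtle": 1500.0, "flanker": 1500.0}

# Feed in simulated head-to-head results, e.g. rusher beats turtle:
ratings["rusher"], ratings["turtle"] = update(ratings["rusher"], ratings["turtle"], 1)
```

Sorting `ratings` after thousands of simulated matchups gives the tier list; the per-archetype stats (wall collisions, idle time, etc.) then explain *why* an archetype sits where it does.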
My game is kinda like chess so this all makes sense for my game.
And thanks for the insights. I will try a similar LLM setup for manually playing my game; it's definitely possible, and your blog is inspiring.
I hadn't really thought about trying to create a harness for agents to play the full game interactively. I'd love to explore this. If you don't mind, here are a few questions:
1) Is it correct to assume that I need a text-only harness even though my game is already text-based, because I make use of menu selections via arrow-key-and-enter interactions?
2) Do you have prompt recommendations for the type of feedback you have found to be useful? I would guess in your case, the objectives of the game are more clear than an open-world RPG. What dead ends have you run into? Maybe a variety of approaches would be good? One agent tries to fight everything. Another focuses on gaining and completing as many quests as possible?
3) How bad is the token burn doing this? Any optimization strategies you've employed?
2) I had a skill on just how to use the playtest server. I also gave it context on what the game is and how to play it. From there, it probably depends on your use case. I wasn't that impressed with its natural ability to playtest for bug discovery, so I would consider making a skill describing what a playtester would normally do. Focused playtester instances are a good idea. Ultimately what I found to be most helpful was to point it at a feature or bug that I was aware of and have it validate it. Not only was it fairly successful, it was also the part that saved the most time for me.
3) I think I only burned about 300K tokens on my longest play-test session, and that includes a bunch of code tweaks too. Running it after every feature as a validation step is pretty cheap. Running it overnight in "open" playtesting could add up.
Good luck, please let me know how it goes if you get somewhere helpful!
Edit: it would also be useful to be able to see the whole dungeon at once, legibly. Maybe a larger font size or something more readable? I find myself having to write down longer words to try and fill in the gaps.
I'm building a physics-based 2d game involving slingshotting around planets. The realtime nature of it has meant that it's nearly impossible for the AI to test using a browser mcp. It'll take one screenshot, then another, and in the intervening time the player shot off the map and into deep space.
Instead I gave it both a code-level api to step forward and backward the physics engine and a browser-based, `window.game` api to do it via a browser mcp console. The former helps it work out physics bugs and the latter helps it test animation and UI issues.
It's still not great. I keep occasionally getting "I tested it and it works perfectly!" as I stare at the mcp'd browser with the player stuck clipped halfway into a planet. I think, if anything, I need to lean harder into this approach: building really solid tooling for the AI to inspect every aspect of state. I would kill for a turn-based game like OP XD
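The step-forward/step-backward API the commenter describes can be sketched with a toy deterministic physics world. This is a Python illustration of the idea, not the commenter's actual `window.game` JavaScript API; the 1D ballistic state and snapshot-stack rewind are assumptions for demonstration:

```python
import copy

class SteppablePhysics:
    """Toy 1D physics world: fixed-timestep stepping keeps runs reproducible,
    and saved snapshots let an agent step backward to inspect earlier state."""

    DT = 1 / 60  # fixed timestep, so N steps always produce the same state

    def __init__(self, pos=0.0, vel=10.0, gravity=-9.8):
        self.state = {"pos": pos, "vel": vel, "g": gravity, "tick": 0}
        self._history = []  # snapshots, pushed before each forward step

    def step_forward(self, n=1):
        for _ in range(n):
            self._history.append(copy.deepcopy(self.state))
            # Semi-implicit Euler integration.
            self.state["vel"] += self.state["g"] * self.DT
            self.state["pos"] += self.state["vel"] * self.DT
            self.state["tick"] += 1
        return self.state

    def step_backward(self, n=1):
        for _ in range(n):
            if self._history:
                self.state = self._history.pop()
        return self.state

world = SteppablePhysics()
world.step_forward(120)   # advance two simulated seconds
world.step_backward(120)  # rewind to the start
assert world.state["tick"] == 0
```

With an API like this, the agent never has to race the simulation with screenshots: it advances the world a known number of ticks, reads exact state, and can rewind to bisect when a bug first appears.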
I'd like `mud_or_moo --state-dir ./tmp/some-mud`, which stores most things as plain text, or maybe SQLite if really necessary. The core of a MUD that's conceptually similar to a wiki browser over markdown files (i.e. room-001.md => exits => room-002.md) is what I'm angling toward, so that _editing and linking_ feel comfortable and GUI-like to a human user.
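A minimal sketch of that wiki-style room format, with exits as ordinary markdown links between room files. The `## Exits` section layout and the room content here are hypothetical, chosen only to illustrate how little parsing such a format would need:

```python
import re

# Hypothetical room file: exits are plain markdown links to other room files,
# so a human can edit the world with any text editor or wiki tool.
ROOM_001 = """\
# Dusty Cellar
A cramped cellar. Stairs lead up.

## Exits
- up: [Hallway](room-002.md)
- north: [Vault](room-003.md)
"""

EXIT_RE = re.compile(r"-\s*(\w+):\s*\[([^\]]+)\]\(([^)]+)\)")

def parse_exits(md):
    """Return {direction: (label, target room file)} from a room's markdown."""
    return {d: (label, target) for d, label, target in EXIT_RE.findall(md)}

exits = parse_exits(ROOM_001)
assert exits["up"] == ("Hallway", "room-002.md")
```

Because rooms are just linked markdown files, an agent with file-edit tools can author the world with the same operations it already knows, which is presumably why handing the format to Claude worked.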
Once I had the core authorship MCPs working, Claude itself created the whole world, including an initial tutorial sequence, combat, etc...
I've walked an agent through Home Assistant => Wiki-per-room => Zork-Me! ...and it turns out that the actual Inform Zork engine is pretty terrible but it's fun to say "go north ; look table" (and eventually "turn on ha.light_001" ;-).
The "MUD/MOO" aspect is where it opens interesting options of actually curling out to the Home Assistant instance, and the just kind of wild fun of making a functional "quest" in the context of your own home (eg: solve a mystery? make dinner? battle another user for the TV remote? :-D)
So we went down a rabbit hole and decided to do everything purely based on pixels and OS inputs.
We're currently only live for mobile but happy to give you early access to nunu ai for PC if interested. Would love to see how we compare!
1. The single biggest jump in test quality came from giving the agent BOTH source code analysis AND live browser snapshots, not either alone. With code-only the agent hallucinates selectors; with browser-only it misses project conventions. Two MCP servers feeding the same agent — one local file-read, one Playwright in-process — was the architecture that worked.
2. For the browser snapshot tool, returning the raw DOM ate tens of thousands of tokens per call and the agent struggled to navigate it. Swapping to accessibility-tree refs (e1, e2, ...) cut token usage by ~10x and made the agent reliably target the right elements.
3. We avoided Docker-based MCP servers in production (we run on ECS Fargate). The in-process SDK MCP pattern (create_sdk_mcp_server + @tool decorator) keeps the browser handle in scope of the tool definition, which let us attach page.on('console') listeners and have the agent read them via a separate tool. Hard to do that across stdio process boundaries.
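The "handle in scope of the tool definition" point above can be shown without any SDK at all. This is a dependency-free sketch, not the actual `create_sdk_mcp_server` / `@tool` API; `FakePage` stands in for a Playwright page. The pattern is that in-process tools are plain closures, so they can share a live browser handle and a console-log buffer that a stdio-separated server process could not easily expose:

```python
class FakePage:
    """Stand-in for a Playwright page: delivers console messages to listeners."""
    def __init__(self):
        self._listeners = []

    def on(self, event, cb):
        if event == "console":
            self._listeners.append(cb)

    def emit_console(self, msg):  # simulates the page logging something
        for cb in self._listeners:
            cb(msg)

def make_tools(page):
    """Both tools close over the same page and the same log buffer."""
    console_logs = []
    page.on("console", console_logs.append)  # attach listener once, at setup

    def navigate(url):
        page.emit_console(f"navigated to {url}")  # toy side effect
        return {"ok": True, "url": url}

    def read_console():
        # A separate tool reads logs accumulated by the shared listener.
        return list(console_logs)

    return {"navigate": navigate, "read_console": read_console}

page = FakePage()
tools = make_tools(page)
tools["navigate"]("https://example.test/game")
assert tools["read_console"]() == ["navigated to https://example.test/game"]
```

Across a stdio process boundary, `read_console` would instead need its own IPC channel to the process that owns the page, which is the friction the comment is describing.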
For game testing specifically — your text-renderer detail is interesting because it sidesteps the visual-grounding problem (how does the agent verify what it's seeing?). Curious how you'd extend this to a 2D/3D rendered game where the screen state isn't easily textualized.
The degree of choice point-to-point in the skill tree is actually quite limited in most circumstances. There are obviously items, like Thread of Hope or Intuitive Leap, or inversion-of-choice items like Unnatural Instinct, which change it slightly.
If the question is path optimization to utilize these nodes, Path of Building already does a good job. If the question is "what single node will give me the most theoretical power?", it also solves that.
That's actually the beauty of Path of Exile as a whole - the different systems work in combination to lead to an outcome. As an example, if you're a life-stacking build, you find unique ways to get as many life/strength nodes as possible. That's your gear and your passive tree working in tandem.
Speaking about using AI to optimize characters - not just the skill tree - you'd need to build some pretty sophisticated tools which do not yet exist to make that happen. No AI alone would be able to do it.
We posted it online and surprisingly got a lot of negative feedback from users mentioning they would never spend valuable tokens on playing a game.
Our intention was to create an interaction experiment to see how agents interact with each other and with their human companions. We ended up making a pretty fun game in the process, which we're still working on.
"Bring your own inference" as a potential future of gaming does not seem too far off.
For anyone interested here is the HN post: https://news.ycombinator.com/item?id=47849872