kkhurdula · about 4 hours ago · 13 comments · Read Article on interfaze.ai
When building workflows that rely on LLMs, we commonly use structured output for programmatic use cases like converting an invoice into rows, meeting transcripts into tickets, or even complex PDFs into database entries.

The model may return the schema you want, but with hallucinated values: an `invoice_date` off by two months, or the transcript array in the wrong order. The JSON is valid, but the values are not.
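To make the failure mode concrete, here is a minimal sketch (the invoice fields and values are hypothetical, and the toy validator stands in for a real JSON Schema library): the schema check passes while a value comparison against ground truth does not.

```python
def schema_valid(payload: dict) -> bool:
    # Toy schema check: right keys, right types.
    # A stand-in for a real JSON Schema validator.
    spec = {"invoice_number": str, "invoice_date": str, "total": (int, float)}
    return set(payload) == set(spec) and all(
        isinstance(payload[k], t) for k, t in spec.items()
    )

# Ground truth says the invoice is dated January; the model returned March.
ground_truth = {"invoice_number": "INV-001", "invoice_date": "2024-01-15", "total": 1249.50}
model_output = {"invoice_number": "INV-001", "invoice_date": "2024-03-15", "total": 1249.50}

print(schema_valid(model_output))    # True: the JSON is valid
print(model_output == ground_truth)  # False: the values are not
```

Any schema- or type-level guardrail accepts `model_output` here; only comparing values against the source catches the hallucinated date.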

Structured output today is a big part of using LLMs, especially when building deterministic workflows.

Current structured output benchmarks (e.g., JSONSchemaBench) only validate the pass rate for JSON schema and types, and not the actual values within the produced JSON.

So we designed the Structured Output Benchmark (SOB), which fixes this by measuring the JSON schema pass rate, type correctness, and value accuracy across all three modalities: text, image, and audio.
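Conceptually, the scoring works like this (a simplified sketch, not the benchmark's actual harness; field names are illustrative): schema pass is counted per record, and value accuracy per field, so a hallucinated or missing value lowers the second metric without touching the first.

```python
def score(records):
    # Toy scorer: per-record schema pass plus field-level value accuracy.
    schema_pass = 0
    field_hits = field_total = 0
    for rec in records:
        pred, truth = rec["pred"], rec["truth"]
        if set(pred) == set(truth):              # stand-in for real schema/type validation
            schema_pass += 1
        for key, want in truth.items():
            field_total += 1
            field_hits += pred.get(key) == want  # missing or hallucinated -> wrong
    return schema_pass / len(records), field_hits / field_total

records = [
    {"pred": {"date": "2024-03-15", "total": 1200},
     "truth": {"date": "2024-01-15", "total": 1200}},  # schema-valid, one wrong value
    {"pred": {"date": "2024-05-01", "total": 880},
     "truth": {"date": "2024-05-01", "total": 880}},   # fully correct
]
print(score(records))  # (1.0, 0.75): every record passes the schema, but a value is wrong
```

The gap between the two numbers is exactly what a schema-only benchmark cannot see.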

For our test set, every record is paired with a JSON Schema and a ground-truth answer that was verified against the source context manually by a human and an LLM cross-check, so a missing or hallucinated value counts as wrong.

Open source is doing pretty well, with GLM 4.7 coming in at number 2, right after GPT 5.4.

We noticed the rankings shift across modalities: GLM-4.7 leads text, Gemma-4-31B leads images, Gemini-2.5-Flash leads audio.

For example, GPT-5.4 ranks 3rd on text but 9th on images.

Model size is not a predictor, either: Qwen3.5-35B and GLM-4.7 beat GPT-5 and Claude-Sonnet-4.6 on Value Accuracy. Phi-4 (14B) beats GPT-5 and GPT-5-mini on text.

Structured hallucinations are the hardest bug. Such values are type-correct, schema-valid, and plausible, so they slip through most guardrails. For example, in one audio record, the ground truth is "target_market_age": "15 to 35 years", and a model returns "25 to 35". This is invisible without field-level checks.
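A field-level check like the following (a toy sketch, not part of the benchmark's tooling; the second field is invented for illustration) is what surfaces this kind of mismatch:

```python
def field_diffs(pred: dict, truth: dict) -> list:
    # Report fields whose values differ from ground truth;
    # type and schema checks alone would miss these.
    return [(k, pred.get(k), v) for k, v in truth.items() if pred.get(k) != v]

truth = {"target_market_age": "15 to 35 years", "region": "APAC"}
pred  = {"target_market_age": "25 to 35", "region": "APAC"}  # plausible, schema-valid, wrong

print(field_diffs(pred, truth))  # [('target_market_age', '25 to 35', '15 to 35 years')]
```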

Our goal is to be the best general model for deterministic tasks, and a key aspect of determinism is a controllable and consistent output structure. The first step to making structured output better is to measure it and hold ourselves against the best.

Discussion (13 Comments) · Read Original on HackerNews

stared • about 3 hours ago
Thank you for sharing the benchmark. However, the results are selective.

Why no Opus 4.7? Why is Gemini 3.1 Pro missing?

If there is some other criterion (e.g. models within certain time or budget), great - just make it explicit.

When I see "Top 5 at a glance" and it's missing key frontier models, I am (at best) confused.

khurdula • about 2 hours ago
Yeah, we selected models that are most commonly integrated into developer workflows and used for structured output. Typically those models tend to be in the low-to-mid cost range, with no or low reasoning.

The benchmark setup was kept consistent across all models, and typically Opus and 3.1 Pro would be overkill and expensive even with reasoning off.

Good point tho, will add this to the blog too :)

Also, the benchmark is open source, so anyone can run a model on it and create a PR; the leaderboard is dynamic and will automatically add it in.

stared • 6 minutes ago
Then the way to go is to use Pareto frontier, e.g. https://quesma.com/benchmarks/binaryaudit/#cost

If you want to avoid using Opus 4.7, then why GPT-5.4 (unless with a disclaimer that it's a low reasoning setting, or check that on medium its price is comparable with Haiku/Flash)?

Also, usually it is good to look at the newest model. Gemini 2.5 Flash is quite dated. Gemini 3.1 Flash Lite is the new one (https://openrouter.ai/google/gemini-3.1-flash-lite-preview).

Flux159 • about 3 hours ago
Agree that the choices are strange. Sonnet 4.6 was tested, but no Opus 4.6.

Gemini 3.1 and GLM 5 came out around the same time as Sonnet 4.6 (~Feb 2026) so it's strange that they are missing, but Gemini 2.5 Flash, Gemini 3 Flash, and GLM 4.7 are there.

maxdo • 37 minutes ago
GPT 5.5 seems to be the recent leader overall; it makes sense to include it, just to see what you trade off for speed and open-source nature vs the cutting-edge leader.
zihotki • about 2 hours ago
I wonder if this benchmark brings any value. Models are already quite capable and reach high scores on it.
khurdula • about 2 hours ago
Check out the "The JSON-pass vs Value-Accuracy gap" section in the blog. That was an eye opener.

While most models were great at producing JSON schema, they were pretty bad at producing accurate values.

In the graph you'll see almost a 20%-30% drop between the JSON schema pass rate and the value accuracy.

dalberto • about 2 hours ago
A benchmark without Opus 4.6/4.7 feels incomplete.
khurdula • about 1 hour ago
Due to high demand, we're adding it soon!
broyojo • about 2 hours ago
hmm why can't structured decoding be used?
khurdula • about 1 hour ago
We saw that structured decoding didn't make a difference in the quality of the output.

Check out the paper section "6.3 Structured Decoding Ablation"

Paper: https://arxiv.org/pdf/2604.25359

We ran the comparison and saw no difference, so to keep the bench consistent (since some models don't support structured decoding) we used greedy decoding on all models.

iLoveOncall • about 2 hours ago
This is just a hallucination benchmark on a subset of outputs; not sure there's value over general hallucination benchmarks?

> Our goal is to be the best general model for deterministic tasks

I'm sorry but this simply doesn't make sense. If you want a deterministic output don't use an LLM.

khurdula • about 1 hour ago
General hallucination benchmarks tend to be knowledge-specific, like GPQA or MMLU, but none specifically measure structured output end-to-end, which is one of the biggest use cases for LLMs.

Many developer workflows use LLMs to produce structured artifacts because of their flexibility in consuming unstructured inputs.

> "don't use an LLM"

Partially agree; that's what we're building towards at interfaze.ai: a hybrid between transformers (LLMs) and traditional CNN/DNN architectures to solve this problem of "deterministic" output. This gives devs the flexibility of custom schema definitions and unstructured input while still getting high-quality structured output like you would get from a CNN model like EasyOCR.

The industry is moving toward using LLMs for more and more deterministic tasks, so this benchmark allows us to measure it.