ManyaGhobadi about 5 hours ago | 8 comments

The standard GPU utilization metric reported by nvidia-smi, nvtop, Weights & Biases, Amazon CloudWatch, Google Cloud Monitoring, and Azure Monitor is highly misleading. It reports the fraction of time that any kernel is running on the GPU, which means a GPU can report 100% utilization even if only a small portion of its compute capacity is actually being used. In practice, we've seen workloads with ~1–10% real compute throughput while dashboards show 100%.
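
To see exactly what that number means, you can query it straight from NVML; a minimal sketch using the nvidia-ml-py bindings (illustrative, not part of Utilyze):

```python
# NVML's documented semantics: util.gpu is the percent of time over the
# sample period during which one or more kernels was executing -- it says
# nothing about how much of the chip those kernels actually used.
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

util = pynvml.nvmlDeviceGetUtilizationRates(handle)
print(f"gpu 'utilization': {util.gpu}%")     # time any kernel was running
print(f"memory activity:   {util.memory}%")  # time memory was read/written

pynvml.nvmlShutdown()
```

A kernel that spins on a single SM in a busy loop will pin util.gpu at 100% while leaving almost all of the chip's compute idle.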

This becomes a problem when teams rely on that metric for capacity planning or optimization decisions: it can make underutilized systems look saturated.

We're releasing an open-source (Apache 2.0) tool, Utilyze, to measure GPU utilization differently. It samples hardware performance counters and reports compute and memory throughput relative to the hardware's theoretical limits. It also estimates an attainable utilization ceiling for a given workload.
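
As a rough sketch of the ceiling idea (a simplified roofline-style model, not the exact implementation; peak numbers are illustrative, roughly A100-class):

```python
# Roofline-style estimate of the best compute utilization a workload can
# reach given its arithmetic intensity. Peak specs below are illustrative
# assumptions (roughly A100-class), not values read from the hardware.
PEAK_FLOPS = 19.5e12   # peak FP32 throughput, FLOP/s
PEAK_BW    = 1.555e12  # peak HBM bandwidth, bytes/s

def attainable_ceiling(flops_executed: float, bytes_moved: float) -> float:
    """Fraction of peak compute this workload could reach at best.

    With arithmetic intensity AI = FLOPs/byte, sustained throughput is
    capped at min(PEAK_FLOPS, PEAK_BW * AI); normalize by PEAK_FLOPS.
    """
    ai = flops_executed / bytes_moved
    return min(1.0, (PEAK_BW * ai) / PEAK_FLOPS)

# A memory-bound kernel doing 0.5 FLOPs per byte moved tops out near 4%.
print(f"ceiling: {attainable_ceiling(1e9, 2e9):.1%}")
```

Measured throughput compared against this ceiling tells you whether a memory-bound workload is already near its best, rather than flagging it as broken.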

GitHub link: https://github.com/systalyze/utilyze

We'd love to hear your thoughts!


Discussion (8 Comments)

uberduper about 1 hour ago
There are a few dimensions you can look at for GPU load. Probably the easiest indirect metric to watch is power usage.
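
For reference, reading power takes a couple of NVML calls; a minimal sketch with the nvidia-ml-py bindings:

```python
# Read current draw and the enforced limit via NVML; both are in milliwatts.
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

draw_mw  = pynvml.nvmlDeviceGetPowerUsage(handle)
limit_mw = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle)
print(f"power: {draw_mw / 1000:.0f} W / {limit_mw / 1000:.0f} W "
      f"({100 * draw_mw / limit_mw:.0f}% of limit)")

pynvml.nvmlShutdown()
```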

But if you really care about this, you should actually profile your application. Nsight Systems makes this pretty simple to do. Dunno how many actually care about having a TUI.

ManyaGhobadi about 1 hour ago
Power is useful as a second-order metric and can help catch drastic underutilization, but it has similar problems to SM Active (DCGM) -- it tends to overestimate utilization and doesn't distinguish between useful compute and memory traffic. It's very possible for a memory-bound workload to draw high power even while compute is heavily underutilized. Our goal was to separate these bottlenecks so there's more visibility into where to optimize.
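
As a toy illustration of the separation we're after (hypothetical numbers, not Utilyze's actual logic), compare compute-pipe activity against DRAM activity, e.g. from DCGM's profiling fields:

```python
# Toy bottleneck classifier over per-interval samples, each expressed as a
# fraction of peak. Inputs are hypothetical, of the kind DCGM's profiling
# fields report (e.g. PIPE_FP32_ACTIVE for compute, DRAM_ACTIVE for memory).
def classify(compute_frac: float, dram_frac: float) -> str:
    if compute_frac > 0.7:
        return "compute-bound"
    if dram_frac > 0.7:
        return "memory-bound: power runs high while compute sits mostly idle"
    return "underutilized: likely stalls, launch gaps, or host overhead"

print(classify(compute_frac=0.10, dram_frac=0.90))  # memory-bound sample
```

Power alone would score both the compute-bound and the memory-bound case as "busy".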

On nsys, agreed it's great, but we wanted something that could run continuously instead of an offline analysis tool. We think there's room for both to be useful.

jhgg about 2 hours ago
We just track power utilization.

xtimecrystal about 2 hours ago
One small suggestion: add more GPU stats to your tool.

At the moment (v0.1.3) it is most helpful for compute visualization, but the lack of memory usage/processes/temperature/fan speed/etc. keeps it from becoming a full-on drop-in replacement for `nvidia-smi` for me.

ManyaGhobadi about 1 hour ago
We agree! We are planning a "process" or "advanced" view with temp/power usage and per-process breakdowns. Would a separate full-page view or fitting everything onto one view be more useful for your workflows? We're still working out how to fit everything in, because it's a lot of information.

nawi about 2 hours ago
Hi, many thanks! Can the tool run on NVIDIA Jetson and Orin, or is it just for server GPUs?

ManyaGhobadi about 1 hour ago
Currently just server GPUs, but in theory it should be easy to link against the ARM64 CUDA libraries for Jetson/Orin. The only challenge would be checking whether they support all the metrics we're sampling, though anything Ampere or newer should have reasonable support.

latchkey about 1 hour ago
You mention rocm-smi in your blog post, but you don't actually support AMD GPUs?