Discussion (29 Comments)
Then I have a second level of this, the superpanic. This is the "true" alert, which means "drop everything, fix this now". On every superpanic, there are stricter routines which intentionally cause friction, such as creating tickets about said superpanic, potentially hosting post-mortems, etc. This additional manual labour encourages tweaking the thresholds of the superpanic so that they are sometimes more lax, sometimes stricter, depending on the quality of the deployed services + the current load.
What signals a superpanic? Key valuable functionality being offline. Mostly off-site uptime-checkers verifying that all primary domains resolve + serve traffic, plus cron-scheduled integration tests of core functionality. Stuff like that.
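A minimal sketch of such an off-site check, assuming a hypothetical domain list (a real checker would add retries, paging, and escalation):

```python
import socket
import urllib.error
import urllib.request

# Hypothetical list of primary domains to verify (not from the comment).
PRIMARY_DOMAINS = ["example.com", "api.example.com"]

def domain_is_healthy(domain: str, timeout: float = 5.0) -> bool:
    """Check that the domain resolves in DNS and serves HTTPS traffic."""
    try:
        socket.getaddrinfo(domain, 443)  # DNS must resolve
        with urllib.request.urlopen(f"https://{domain}/", timeout=timeout) as resp:
            return resp.status < 500     # the server answered
    except urllib.error.HTTPError as err:
        return err.code < 500            # a 4xx still means the server is up
    except OSError:
        return False                     # DNS failure, timeout, connection refused

# A superpanic fires only when key functionality is actually offline.
if not all(domain_is_healthy(d) for d in PRIMARY_DOMAINS):
    print("SUPERPANIC: a primary domain is not resolving or not serving traffic")
```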
Each critical and warning alert should link to an "interactive runbook" - a dashboard that combines text instructions with graphs showing real-time data.
Doing this at scale, correctly, requires both alerts-as-code and dashboards-as-code, which almost nobody does because nobody treats higher-level configuration languages (jsonnet, CUE...) with the attention and respect they deserve /cries-in-yaml
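A minimal sketch of that alerts-as-code idea: generate a Prometheus-style alert rule and a matching dashboard panel from one shared definition, so the query, threshold, and runbook link can never drift apart. The service names, thresholds, and runbook URL here are hypothetical:

```python
import json

# One shared definition drives both the alert and the dashboard panel.
# Names and thresholds are made up for illustration.
SERVICES = [
    {"name": "checkout", "latency_slo_seconds": 0.5,
     "runbook": "https://runbooks.example/checkout"},
]

def alert_rule(svc: dict) -> dict:
    """Prometheus-style alerting rule: p99 latency above the SLO."""
    return {
        "alert": f"{svc['name']}_latency_high",
        "expr": (
            f'histogram_quantile(0.99, rate(http_request_duration_seconds_bucket'
            f'{{service="{svc["name"]}"}}[5m])) > {svc["latency_slo_seconds"]}'
        ),
        "for": "10m",
        "annotations": {"runbook_url": svc["runbook"]},
    }

def dashboard_panel(svc: dict) -> dict:
    """Grafana-style panel plotting the same query the alert fires on."""
    return {
        "title": f"{svc['name']} p99 latency",
        "type": "timeseries",
        "targets": [{"expr": alert_rule(svc)["expr"].split(" > ")[0]}],
    }

print(json.dumps({"groups": [{"name": "latency",
                              "rules": [alert_rule(s) for s in SERVICES]}]}, indent=2))
print(json.dumps({"panels": [dashboard_panel(s) for s in SERVICES]}, indent=2))
```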
Lots of metrics are typically available, but almost all of them are noise.
Start with the business: what is important to the business? What kinds of failures are existential threats?
Then work your way down and design your metrics and alerts, instead of just throwing stuff at the wall.
I’ve had to push back so many times with teams whose manager at one point said “we need better monitoring / alerting” and they interpreted that to mean more metrics / alerts.
This is rarely the case.
I personally am really fond of just using a few alerts. The important thing is to know that something went wrong. Not necessarily where / why / how it went wrong.
And yes, inertia is real, and false / valueless alerts need to be killed immediately, without remorse. They are SRE's cancer.
As you say, few is better. And a well chosen few.
I have tons of alerts at work. They go to specialized slack channels that I can look at if I need. We have on call escalation paths for critical ones and housekeeping duties for the ones that require engineers to perform a maintenance task. We have the hell channels that are 99.99% flapping, if you ever need that.
I find that observability in general has an extremely linear marginal reward curve, it basically always justifies the effort you put into setting it up.
https://en.wikipedia.org/wiki/Nelson_rules
https://en.wikipedia.org/wiki/Western_Electric_rules
https://en.wikipedia.org/wiki/Westgard_rules
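A minimal sketch of two of these rules (Western Electric rules 1 and 2) applied to a metric series; the baseline statistics are assumed to come from a known-good period, and all numbers are made up:

```python
from statistics import mean, stdev

def western_electric_violations(samples, baseline):
    """Flag points violating Western Electric rules 1 and 2.

    Rule 1: one point beyond 3 sigma from the mean.
    Rule 2: two of three consecutive points beyond 2 sigma, same side.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    violations = []
    for i, x in enumerate(samples):
        if abs(x - mu) > 3 * sigma:
            violations.append((i, "rule 1: beyond 3 sigma"))
        window = samples[max(0, i - 2): i + 1]
        if len(window) == 3:
            above = sum(1 for w in window if w > mu + 2 * sigma)
            below = sum(1 for w in window if w < mu - 2 * sigma)
            if above >= 2 or below >= 2:
                violations.append((i, "rule 2: 2 of 3 beyond 2 sigma"))
    return violations

# Baseline from a known-good period; samples are new observations.
baseline = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100]
print(western_electric_violations([100, 104, 104, 99, 115], baseline))
```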
Instead I would move up a level and start with an SLO for the various "business level" metrics you might care about. Things like "request latency", "successful requests", etc.
Then use the longer-lookahead "error budget" burndowns to see where your error budget is being spent, and from there decide (1) whether the SLO needs adjusting, and/or (2) whether an alert is appropriate.
To cleanly answer those questions and iterate you'll need metrics, dashboards, traces, and logs. So then you're not just making dashboards because "it's best practice"; you're creating them specifically to help you measure whether you're meeting your stated service objectives.
https://sre.google/sre-book/service-level-objectives/
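A minimal sketch of the error-budget arithmetic behind this, using one common definition of burn rate; the SLO target, window, and request counts are made up:

```python
# Hypothetical SLO: 99.9% of requests succeed over a rolling 30-day window.
SLO_TARGET = 0.999
WINDOW_DAYS = 30

def error_budget_report(total_requests: int, failed_requests: int,
                        days_elapsed: float) -> None:
    """Report burn rate and the fraction of the error budget consumed."""
    allowed_error_rate = 1 - SLO_TARGET                 # 0.1% of requests may fail
    actual_error_rate = failed_requests / total_requests
    burn_rate = actual_error_rate / allowed_error_rate  # 1.0 = exactly on budget
    budget_consumed = burn_rate * (days_elapsed / WINDOW_DAYS)
    print(f"burn rate: {burn_rate:.1f}x, budget consumed: {budget_consumed:.0%}")

# Made-up numbers: 10M requests, 18k failures, 10 days into the window.
error_budget_report(10_000_000, 18_000, 10)
# -> burn rate: 1.8x, budget consumed: 60%
```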
Also, the best alerts come from looking at actual failures you had and not trying to make up "good alerts" from thin air. After you have an outage, figure out what alerts would have caught it, and implement those.
Also look at failures others have had; prior experience from yourself and others contributes to good alerts. You don't have to wait for a failure to implement most of them. Most of that knowledge is also trained into most LLMs nowadays. Just ask, verify the sources, then implement. If you get too many alerts, question whether you needed them or whether it's noise. It's constant trimming until you find the perfect alert setup.
Elasticsearch, for example, can be configured using ILM policies to fill up the disk and then start deleting old records. I don't need to be woken up for a disk filling up on those nodes.
Even worse are CPU/RAM alerts.
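For the Elasticsearch case above, a minimal sketch of such an ILM policy (age- and size-based rather than strictly disk-based deletion; the policy name and thresholds are made up). The dict below would be PUT to the cluster's `_ilm/policy/<name>` endpoint:

```python
import json

# Hypothetical ILM policy: roll indices over as they grow, delete old ones.
# PUT this to /_ilm/policy/logs-cleanup on the cluster.
policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    # Start a new index once the current one gets large or old.
                    "rollover": {"max_primary_shard_size": "50gb", "max_age": "7d"}
                }
            },
            "delete": {
                "min_age": "30d",            # delete indices 30 days after rollover
                "actions": {"delete": {}}
            },
        }
    }
}
print(json.dumps(policy, indent=2))
```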
Yet the article doesn’t tackle at all the hard part: making alerts that are actually meaningful. They handwave it instead of giving actual advice. This post is a good intro, but I didn’t "walk away" with anything useful.
This is why, in this case, AI is important. Someone puts in an effort to write a short article (if a bit wordy) that can be used by e.g. beginners or managers? Good! I’m not the target audience. But if it’s the output of AI, what’s the intent?
“it’s not X it’s Y”
At this point, when I see this pattern in writing, I assume most if not all of it is AI-generated - same with em-dashes.
This is not to discount the idea that alerts are more important than dashboards (I work directly in observability) - but just to say that I personally shut off reading anything else with these patterns because, generally speaking, the rest of the content is just not original or interesting.
It is very frequent to find things about which a majority of the people wrongly believe that they are X, but in fact they are Y.
In such cases, you must point out to them that "it's not X, it's Y".
There are a few alternative ways to formulate this, but the alternatives are typically longer and more complex.
The same happens with em-dashes, which have valid uses and one should not care that there exist some people who are not familiar with the classic ways of using punctuation.
I do not believe that the right solution is to use more convoluted expressions or inappropriate punctuation in order to avoid being accused of being a clanker.
At the first role I ever had 10+ years ago, we had a TV in our team's office space constantly showing our dashboard for our critical services and health. We still had alerting monitors but it felt like those alarms were for important issues (like sev-2 or worse).
In the last couple of roles I've had, we don't constantly look at our dashboards unless our monitors keep ringing us with alerts. We have also had more monitors in general than at the first role I mentioned. Occasionally, if another team asks whether we're affected by something, we'll look at the dashboards we have to make sure we don't have a monitoring gap.
There was a period of time when people were writing alerts for the sake of it (i.e., we have this sensor; when should we alert on it?).
Nowadays we're strictly failure-mode driven, which has meant lots of sensors aren't used in the analytics. They are, however, available for the experts to plot for a more holistic view if required.
I work for a startup; we have what I think is a fairly typical setup: metrics ingested from a variety of sources, fed into industry-standard metrics/dashboard solutions, triggering escalations to humans. It's fine and I'm happy we have it, but...
The highest value source of alerting right now is one of our growth marketers who pays close attention to our CRM and product analytics tool and notices when key product funnels are underperforming.
Our next highest value signals are a handful of ad hoc alerting channels, mostly in Slack, either directly from a partner telling us that something suspicious happened on their side (think: fraud) or from in-product instrumentation sent to a channel for non-engineering visibility. Members of our business/product/operations team pay attention in these places and make decisions based on their business context.
After that, our support team is increasingly able to filter customer issues and differentiate between bugs, missing features, etc.
I know someone is going to argue that these are all a sign that we haven't instrumented the right things. Fair, but also misses the point. The decision makers in these flows don't (and won't) live in traditional alerting systems and wouldn't have helped us understand breakages without these other, ad hoc processes.
My theory is that it's relatively easy to offer a technical product that moves alerts around or that manages escalation paths. It's quite hard to design a product that surfaces detail to a non-technical expert and that makes it easy to build systematic rules.