Discussion (40 comments, originally on HackerNews)
I work with old and new code bases used by many clients in complicated setups, but adding a warning to stderr while stdout was left untouched, and proper exit codes maintained, was hardly, if ever, a problem so far.
Of course, there's always some unpleasant exception, but it's rare.
And of course, I also understand that the author might have found themselves not only in one of those rare-ish instances, but also one where reasoning with the other side was fruitless.
Sounds like you don't use ffmpeg very often. Because ffmpeg is able to send its output to stdout to be piped to other apps, verbose text output can't use stdout as one would expect. Non-error text is sent to stderr instead. So when you want to trap the text output you have to route stderr to text file. It takes some getting used to, but it's now normal for me.
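To make the convention concrete: a minimal sketch, using a hypothetical `fake_ffmpeg` function in place of the real binary (which may not be installed), where the payload goes to stdout for piping and the log text goes to stderr.

```shell
#!/bin/sh
# Hypothetical stand-in for ffmpeg: media bytes on stdout, log text on stderr.
fake_ffmpeg() {
    printf 'MEDIA-BYTES'                      # payload meant for the pipe
    echo 'frame=  100 fps= 25 size=...' >&2   # progress/log text
}

# Pipe the payload onward while trapping the log text in a file.
bytes=$(fake_ffmpeg 2> ffmpeg.log | wc -c)
echo "payload bytes: $bytes"
cat ffmpeg.log
```

The `2> ffmpeg.log` is the "route stderr to a text file" step; the payload still flows cleanly through the pipe.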
So, yeah, I still know stderr vs stdout, but it's not as simple as you might want it to be. In the real world, things are not as clean as they are in school books.
We, uh, reasoned with the other side - we told them to fix their stupid broken setup.
1. If you mess up the command line to the program in a script or pipe, and get a bunch of usage output on stdout, a downstream consumer of that stdout might think it's legit program output and try to parse it.
2. If your user actually calls the program with -h or --help, they might want to pipe it through `less` to read it on a small terminal screen. Output that to stdout.
3. Generally, you can always tell if something is going wrong by grepping a single stream (stderr) for errors or warnings, or by looking for a nonzero exit code.
But your general principle applies: Output expected by the user -> stdout. Diagnostic output or output incidental to the program's operation or errors -> stderr.
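Point 3 becomes mechanical once the split is respected. A sketch with a hypothetical `prog` that follows the convention:

```shell
#!/bin/sh
# Hypothetical program honoring the convention: data on stdout,
# diagnostics on stderr.
prog() {
    echo 'record 1'
    echo 'WARNING: disk nearly full' >&2
    echo 'record 2'
}

# Count warnings without disturbing the data stream:
# 2>&1 1>/dev/null sends stderr down the pipe and discards stdout.
warnings=$(prog 2>&1 1>/dev/null | grep -c WARNING)
echo "warnings: $warnings"   # prints "warnings: 1"

# Or keep both: data to one file, diagnostics to another.
prog 2> diag.log > data.txt
```

The redirection order matters: `2>&1` first points stderr at the pipe, then `1>/dev/null` moves stdout away.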
1: https://pubs.opengroup.org/onlinepubs/9799919799/functions/s...
I'll use ffmpeg as an example of an edge case. It's hard to get ffmpeg to return a nonzero exit code. What is a problem for the user isn't necessarily a problem for the app, so the app considers itself finished and exits with zero. For example, if a corrupted input file stops ffmpeg from reading any further, it will happily close your output file cleanly so it is usable (just shorter than expected) and report success. If all you do is check the exit code, you'll think your file is complete. Much more due diligence is necessary to be sure.
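One form that due diligence can take is comparing input and output durations instead of trusting the exit code. A sketch with stubbed duration values, since ffmpeg/ffprobe may not be available here; the ffprobe invocations shown in the comments are an assumption about how you'd obtain the real numbers:

```shell
#!/bin/sh
# The real queries would look roughly like:
#   in_dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 in.mp4)
#   out_dur=$(ffprobe -v error -show_entries format=duration -of csv=p=0 out.mp4)
in_dur=600.0   # stub: what ffprobe would report for the source
out_dur=312.5  # stub: a truncated transcode that still "succeeded"

# Flag the file if the output is more than a second shorter than the input.
if awk -v a="$in_dur" -v b="$out_dur" 'BEGIN { exit !(a - b > 1) }'; then
    echo "TRUNCATED: output is shorter than input"
fi
```

The one-second tolerance is arbitrary; the point is that a zero exit code alone proves nothing about the output's completeness.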
If one wants to use a pager (like I sometimes do, though most of the time I just scroll up), they'll just use `foo 2>&1 | less`.
1: https://www.gnu.org/prep/standards/html_node/_002d_002dhelp....
In the case of `git diff | grep FOO`, the diff output should go to stdout.
In the case of `git --help | grep FOO` the help output should go to stdout.
In the case of `git --omg-wtf | grep FOO`, it's fine if there is only output on stderr.
Gosh I thought the engineering culture was bad where I work.
Some applications have more trouble with setup and teardown than others. Like I knew a professor who kept sending me C programs that would crash before main() and some systems have a lot of trouble with "crash on shutdown" which might be a problem (corrupted files) or a non-problem.
This really does not need to be an either/or. They have different uses. You can stick in 20 printfs and get a quick feel for where the bug is far quicker than stepping through the code - especially if you set a breakpoint and hit run, only to realise that you've overshot. You can run the program 10 times with different parameters and compare the results with printf much more easily than you could with a debugger. But, once you've found the rough area, a debugger is much better for fine grained inspection, and especially interrogating state with carefully written watches.
I do get your point about the risk of leaving in some trace by accident. But it feels like overkill to throw away such a valuable tool just because of that.
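One cheap mitigation for that risk: route the trace through a guard so it lands on stderr and is off by default. A sketch using a hypothetical `debug` helper gated on a `DEBUG` environment variable:

```shell
#!/bin/sh
# Trace goes to stderr (stdout stays clean for real output) and prints
# only when DEBUG is set, so a forgotten call is harmless by default.
debug() {
    [ -n "${DEBUG:-}" ] && echo "debug: $*" >&2
    return 0
}

debug "entering parse step"   # silent unless DEBUG is set
echo "real output"            # stdout is unaffected either way
```

The same pattern works in most languages: a macro or function you sprinkle freely, compiled or switched out in production.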
There's no good reason an IDE couldn't maintain a text overlay of debugging points, supplied to the debugger solely as breakpoint scripts.
IDEs seem to conk out at click-to-set-breakpoint.
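For what it's worth, command-line debuggers can already consume such an overlay: gdb accepts a plain-text command file. A hypothetical `breakpoints.gdb` sketch:

```
# breakpoints.gdb -- text "overlay" of debugging points (hypothetical file)
break parser.c:120
break main
commands 2
  print argc
  continue
end
```

Loaded with `gdb -x breakpoints.gdb ./prog`, the file lives next to the source and can be version-controlled like any other text, which is most of what the overlay idea asks for.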
Ah yes, Schrödinger's workflow. So important any disruption is a disaster, and simultaneously so unimportant they couldn't possibly spend a single dime on the tools critical to the workflow.
- sometimes you can get the status code, sometimes you can't.
- sometimes you can separate out stdout from stderr, sometimes you can't
- sometimes the program generating the error message identifies itself, sometimes it doesn't
- sometimes you don't know if you have a "good error" (ok to ignore) or a "bad error" (cannot ignore)
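The first point is easy to demonstrate: in a pipeline, only the last command's status is reported by default, so an upstream failure vanishes. A bash-specific sketch (PIPESTATUS and pipefail are bash features):

```shell
#!/bin/bash
# By default the pipeline's status is the last command's status.
false | true
echo "default: $?"                     # prints "default: 0" -- the failure is lost

# bash keeps per-stage statuses in PIPESTATUS.
false | true
echo "first stage: ${PIPESTATUS[0]}"   # prints "first stage: 1"

# `set -o pipefail` makes the pipeline as a whole report the failure.
st=$(set -o pipefail; false | true; echo $?)
echo "pipefail: $st"                   # prints "pipefail: 1"
```

In plain POSIX sh without pipefail, you genuinely can't get that upstream status without contortions, which is exactly the "sometimes you can't" case.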
I am a fan of the HARD FAIL.
I think internal unit tests or things like that should hard fail, then get a human to either fix it, or put in a hard exception.
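In shell terms the HARD FAIL is `set -eu`: abort on the first error or unset variable instead of limping along with bad state. A sketch, run in a subshell so the failure can be observed from outside:

```shell
#!/bin/sh
run_strict() {
    (
        set -eu
        echo "step one ok"
        cat /no/such/file 2>/dev/null   # fails: the script stops here
        echo "never reached"
    )
}

out=$(run_strict) || status=$?
echo "exit status: ${status:-0}"   # nonzero: the failure was not swallowed
echo "$out"                        # only "step one ok" made it through
```

A human then has to either fix the failing step or add an explicit, visible exception; nothing fails silently.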
if it is user-facing... sigh
If it's commercial software, you're paid to make it work, no matter how stupid that may be, and forced stupidity isn't your problem.
If it's FOSS, you can tell the user to deal with it and close the ticket.