Discussion (55 Comments)
I just want to mention that I disagree with the section titled "Rule: Resolve Paths Before Comparing Them". Generally, it is better to make calls to fstat and compare the st_dev and st_ino. However, that was mentioned in the article. A side effect that seems less often considered is the performance impact. Here is an example in practice:
I know people are very unlikely to do something like that in real life. However, GNU software tends to work very hard to avoid arbitrary limits [1]. Also, the larger point still stands, but the article says "The Rust rewrite has shipped zero of these [memory safety bugs], over a comparable window of activity." That is not true [2]. :)
[1] https://www.gnu.org/prep/standards/standards.html#Semantics [2] https://github.com/advisories/GHSA-w9vv-q986-vj7x
So how can I learn from this? (Asking very aggressively, especially for Internet writing, to make the contrast unmistakable. And contrast helps with perceiving differences and mistakes.) (You also don’t owe me any of your time or mental bandwidth, whatsoever.)
So here goes:
Question 1:
How come "speed", "performance", race conditions and st_ino keep getting brought up?
Speed (latency), physically writing things out to storage (sequentially, atomically (ACID), all of HDD NVME SSD ODD FDD tape, "haskell monad", event horizons, finite speed of light and information, whatever) as well as race conditions all seem to boil down to the same thing. For reliable systems like accounting the path seems to be ACID or the highway. And "unreliable" systems forget fast enough that computers don’t seem to really make a difference there.
Question 2:
Does throughput really matter more than latency in everyday applications?
Question 3 (explanation first, this time):
The focus on inode numbers is at least understandable with regards to the history of C and unix-like operating systems and GNU coreutils.
What about this basic example? Just make a USB thumb drive "work" for storing files (ignoring nand flash decay and USB). Without getting tripped up in libc IO buffering, fflush, kernel buffering (Hurd if you prefer it over Linux or FreeBSD), more than one application running on a multi-core and/or time-sliced system (to really weed out single-core CPUs running only a single user-land binary with blocking IO).
EDIT: got it. -bash: cd: a/a/a/....../a/a/: File name too long
You could probably make the loop more efficient, but it works well enough. Also, some shells won't let you enter directories that deep at all; it doesn't work on mksh, for example.
> However, GNU software tends to work very hard to avoid arbitrary limits [1].
They knew how to write Rust, but clearly weren't sufficiently experienced with Unix APIs, semantics, and pitfalls. Most of those mistakes are exceedingly amateur from the perspective of long-time GNU coreutils (or BSD or Solaris base) developers, issues that were identified and largely hashed out decades ago, notwithstanding the continued long tail of fixes--mostly just a trickle these days--to the old codebases.
And, yeah, the Unix syscalls are very prone to mistakes like this. For example, Unix's `rename` syscall takes two paths as arguments; you can't rename a file by handle; and so Rust has a `rename` function that takes two paths rather than an associated function on a `File`. Rust exposes path-based APIs where Unix exposes path-based APIs, and file-handle-based APIs where Unix exposes file-handle-based APIs.
So I agree that Rust's stdlib is somewhat mistake-prone; not so much because it's being opinionated and "nudg[ing] the developer towards using neat APIs", but because it's so low-level that it offers little "safety" in filesystem access over raw syscalls, beyond ensuring that you didn't write a buffer overflow.
[0]: https://doc.rust-lang.org/std/fs/index.html
This can also be a pain on microcontrollers sometimes, but there you're free to pretend you're on Unix if you want to.
We're looking solely at the few things they got wrong, and not the thousands of correct lines around them.
Cloudflare crashed a chunk of the internet with a rust app a month or so ago, deploying a bad config file iirc.
Rust isn’t a panacea, it’s a programming language. It’s ok that it’s flawed, all languages are.
Exactly what is the controversial take here?
> I don’t think brushing the bad parts off with “most of the code was really good!” is a fair way to look at this.
Nope, this is fine.
> Cloudflare crashed a chunk of the internet with a rust app a month or so ago, deploying a bad config file iirc.
Maybe this?
> Rust isn’t a panacea, it’s a programming language. It’s ok that it’s flawed, all languages are.
Nope, this is fine too.
Shows how good Rust is, that even inexperienced Unix devs can write stuff like this and make almost no mistakes.
From what I understand, "assigned" probably isn't the best way to put it. uutils started off back in 2013 as a way to learn Rust [0], well before the present kerfuffle.
[0]: https://github.com/uutils/coreutils/tree/9653ed81a2fbf393f42...
> The trap is that get_user_by_name ends up loading shared libraries from the new root filesystem to resolve the username. An attacker who can plant a file in the chroot gets to run code as uid 0.
To me such a get_user_by_name function is a booby trap, an accident waiting to happen. You need user data, you have this get_user_by_name function, and then it goes and starts loading shared libraries. That smells like a mixing of concerns to me. I'd say either split getting the user data and loading shared libraries into two separate functions, or somehow make clear in the function name what it is doing.
The code gets silently encumbered with those lessons, and unless they are documented, there's a lot of hidden work that needs to be done before you actually reach parity.
TFA is a good list of this exact sort of thing.
Before you call people amateur for it, also consider it's one of the most softwarey things about writing software. It was bound to happen unless coreutils had really good technical docs and included tests for these cases that they ignored.
uutils would be so much better imo if it was GPL and took direct inspiration from the coreutils source code.
...
[List of bugs a diligent person would be mindful of, unix expert or not]
---
Only conclusion I can make is, unfortunately, the people writing these tools are not good software developers.
For comparison, I'm neither a unix neckbeard nor a rust expert, but I'm using rust to write a music player using LLMs, and the number of tokens I've sunk into watching for DoS panics or dropped errors is pretty substantial. Why? Because I don't want my music player to suck! Simple as that.
Pretty shocking to see the lack of basic thought going into writing what is meant to be critical infrastructure. Wow, honestly pathetic. Sorry to be so negative and for this word choice, but I am shocked and disappointed. The title of this article should be "Rust can't stop you from not giving a fuck" or "rust can't stop you from being careless."
It's actually somewhat worse than that, because an attacker with write access to a parent directory can mess with hard links as well... sure, it only affects the regular files themselves, but there are basically no mitigations. See e.g. [0] and other posts on the site.
[0] https://michael.orlitzky.com/articles/posix_hardlink_heartac...
Rust won't catch it, but now the agents will.
Edit: https://gist.github.com/fschutt/cc585703d52a9e1da8a06f9ef93c... for anyone who needs to copy this
That's kind of horrifying. Is there a reliable list somewhere of all the functions that do that? Is that list considered stable?
[0]: https://github.com/uutils/coreutils-tracking/commits/main/?a...
Granted, the uutils authors are experienced in Rust, but that is not enough for a large-scale rewrite like this, and you can't assume the result is "secure" because of memory safety.
In this case, the post tells us that Unix itself has thousands of gotchas: re-implementing the coreutils in Rust is not a silver bullet, and even the bugs Unix (and the POSIX standard itself) has are part of the specification, and can later be revealed as real-world vulnerabilities.
I'm not sure that they were all that experienced in Rust when most of this code was written. uutils has been a bit of a "good first rust issue" playground for a lot of its existence
Which makes it pretty unsurprising that the authors also weren't all that well versed in the details of the low-level POSIX APIs.
I hate to armchair general, but I clicked on this article expecting subtle race conditions or tricky ambiguous corners of the POSIX standard, and instead found that it seems to be amateur hour in uutils.
1. uutils as a project started back in 2013 as a way to learn Rust, by no means by knowledgeable developers or in a mature language
2. uutils didn't even consider becoming a replacement for GNU Coreutils until... roughly 2021, I think? 2021 is when they started running compliance/compatibility tests, anyway
3. The choice of licensing (made in 2013) effectively forbids them from looking at the original source
They're a group of people who want to replace pro-user software (GPL) with pro-business software (MIT).
I don't really want them to achieve their goal.
I'd be interested in a comparison of the number of bugs and CVEs in GNU coreutils at the start of its lifetime with this rewrite, as well as the number of memory bugs that are impossible in (safe) Rust.
Don't just downvote me, tell me how I'm wrong.