Discussion (79 Comments)
But the example seems backwards to me: unless every callsite that locks any item always locks the big global lock first (probably not true, because if you serialize all item access on a global lock then a per-item lock serves no purpose...), aren't you begging for priority inversions by acquiring the big global lock before you acquire the item lock?
My only gripe is missing the obvious opportunity for Ferengi memes ("rules of acquisition") :D :D
A pattern I've definitely both seen and used is
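The comment's code block didn't survive extraction. A plausible sketch of the pattern in plain std Rust, going by the description that follows (the function name `bump` and the lock shapes are mine, not the crate's API):

```rust
use std::sync::Mutex;

// Take the coarse lock first, take the per-item lock second, then drop
// the coarse lock so other threads can reach *other* items while we
// work on this one under the per-item lock alone.
fn bump(big: &Mutex<()>, item: &Mutex<u32>) -> u32 {
    let guard1 = big.lock().unwrap();      // higher-level global lock
    let mut guard2 = item.lock().unwrap(); // lower-level per-item lock
    drop(guard1);                          // release early for parallelism
    *guard2 += 1;                          // long-running work happens here
    *guard2
}

fn main() {
    let big = Mutex::new(());
    let item = Mutex::new(0u32);
    assert_eq!(bump(&big, &item), 1);
    println!("ok");
}
```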
Which works to parallelize work so long as guard2 isn't contended... and at least ensures correctness and forward progress the rest of the time. There’s no priority inversion possible because locks can only ever be held in decreasing orders of priority - you can’t acquire a low priority lock and then a high priority lock since your remaining MutexKey won’t have the right level.
> There’s no priority inversion possible because locks can only ever be held in decreasing orders of priority
...and now any other thread that needs big_lock() spins waiting for T2 to release it, but T2 is spinning waiting for T1 to release the (presumably less critical) small lock. If small_lock is never ever acquired without acquiring big_lock first, small_lock serves no purpose and should be deleted from the program.
Look at the API - if big_lock and small_lock are at the same level, you would need to acquire both locks simultaneously, which is accomplished within the library by sorting* the locks and then acquiring. If you fail to acquire small_lock, big_lock isn’t held (it’s an all-or-nothing situation). This exact scenario is explained in the link, by the way. You can’t bypass the “acquire simultaneously” API because you only have a key for one level.
Your terminology is also off. A lock around a configuration is typically called a fine-grained lock, unless you’re holding that lock for large swathes of the program. “Global”, as it refers to locking, doesn’t refer to the visibility of the lock or to the fact that it does mutual exclusion. For example, a lock on a database that only allows one thread into a hot-path operation at a time is a global lock.
* sorting is based on a global construction-order ID assigned at construction - a singleton atomic counter hands out IDs for each mutex.
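A minimal sketch of that all-or-nothing, sorted acquisition, using memory addresses in place of the crate's construction-order IDs (the function name `lock_pair` is mine, not the library's API):

```rust
use std::sync::{Mutex, MutexGuard};

// Acquire two same-level mutexes "together" by always locking them in a
// globally consistent order (here: ascending address). Because every
// caller sorts the same way, no two threads can hold one of the pair
// while waiting on the other in the opposite order.
// Guards are returned in argument order.
fn lock_pair<'a, T>(a: &'a Mutex<T>, b: &'a Mutex<T>)
    -> (MutexGuard<'a, T>, MutexGuard<'a, T>)
{
    assert!(!std::ptr::eq(a, b), "same mutex locked twice");
    if (a as *const Mutex<T>) < (b as *const Mutex<T>) {
        let ga = a.lock().unwrap();
        let gb = b.lock().unwrap();
        (ga, gb)
    } else {
        let gb = b.lock().unwrap();
        let ga = a.lock().unwrap();
        (ga, gb)
    }
}

fn main() {
    let m1 = Mutex::new(1);
    let m2 = Mutex::new(2);
    let (g1, g2) = lock_pair(&m1, &m2);
    assert_eq!(*g1 + *g2, 3);
    println!("ok");
}
```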
Mutex::new(AppConfig::default()) might very well be a small, leaf mutex.
> Mutex::new(AppConfig::default());
> ...is meant to be acquiring a mutex protecting some global config object, yes? That's what I'm calling a "global lock".
You could certainly have a global lock at the top-most level, but you're not required to. The example is just an example.
In the DB world, we often trade complex locking for deterministic ordering or latch-free structures, but translating those to general-purpose app code (like what this Rust crate tries to do) is where the friction happens. It’s great to see more 'DB-style' rigour (like total ordering for locks) making its way into library design.
https://docs.kernel.org/locking/ww-mutex-design.html
One thing that I think does affect things is that language design discussions tend to be concentrated into their own communities based on the programming language itself, rather than one "programming language discussions" place where everyone can more easily cross-pollinate ideas across languages. Luckily, there are some individuals who move between communities without effort, which does lead to a bit of ideas making it across, but it feels like we're missing out on so much evolution and ideas from various languages across the ecosystem.
Oh, many of these travelers spend a lot of effort!
Supercomputing is another domain with deep insights into scalable systems, yet it is famously so insular that ideas rarely cross over into mainstream scalable systems. My detour through supercomputing probably added as much to my database design knowledge as anything I actually did in databases.
(Speaking from the perspective of someone who simultaneously loves high-performance compute and agentic AI haha)
http://joeduffyblog.com/2010/01/03/a-brief-retrospective-on-...
> Models can be pulled along other axes, however, such as whether memory locations must be tagged in order to be used in a transaction or not, etc. Haskell requires this tagging (via TVars) so that side-effects are evident in the type system as with any other kind of monad. We quickly settled on unbounded transactions.
Snip
> In hindsight, this was a critical decision that had far-reaching implications. And to be honest, I now frequently doubt that it was the right call. We had our hearts in the right places, and the entire industry was trekking down the same path at the same time (with the notable exception of Haskell)
So basically it’s not that TM isn’t workable; rather, unbounded TM is likely a fool’s errand, whereas Haskell’s is bounded TM that requires explicit annotation of the memory that will participate in atomicity.
It's the whole language, not just the TM code. Other languages have no way of opting out of the TM code, whereas Haskell does.
https://dl.acm.org/doi/10.1145/1400214.1400228
Is it easy, or hard?
Does it demand a new paradigm at large, or is it only an inconvenience in the few places it's used?
Because if the answer is "it turns the language into Haskell", then it's a big NOPE!
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n44...
To nicely support TVars, it's good if your language can differentiate between pure code and code with side-effects. Haskell's type system is one way to get there; but eg something like Rust could probably also be coerced to do something appropriate.
Apart from that, you probably don't even need static typing to make it work well enough (though it probably helps). You definitely don't need laziness or Haskell's love of making up new operators or significant whitespace.
Also the fact that it doesn't detect locking the same mutex twice makes no sense: a static order obviously detects that and when locking multiple mutexes at the same level all you need to do is check for equal consecutive addresses after sorting, which is trivial.
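The duplicate check the commenter describes really is trivial once the lock list is sorted; a sketch over raw addresses (the function name is mine):

```rust
// After sorting the addresses of the locks to be acquired, a double-lock
// of the same mutex shows up as two equal consecutive entries.
fn contains_duplicate(addrs: &mut [usize]) -> bool {
    addrs.sort_unstable();
    addrs.windows(2).any(|w| w[0] == w[1])
}

fn main() {
    assert!(contains_duplicate(&mut [0x30, 0x10, 0x30]));
    assert!(!contains_duplicate(&mut [0x30, 0x10, 0x20]));
    println!("ok");
}
```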
Overall it seems like the authors are weirdly both quite competent and very incompetent. This is typical of LLMs, but it doesn't seem LLM-made.
Reentrant mutexes https://en.wikipedia.org/wiki/Reentrant_mutex need interior mutability in Rust, i.e. you'd need something like ReentrantMutex<RefCell<T>>. You can't just lock the mutex and get a &mut T out of it, because then locking the mutex again would get you a second &mut T which would violate Rust's no-aliasing semantics for &mut. The Rust standard library AIUI does not provide this yet.
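A small illustration of why interior mutability is needed there: RefCell moves the aliasing check to runtime, so two live shared borrows can coexist (as under a reentrant lock), while a conflicting mutable borrow is refused rather than handing out a second aliasing `&mut T`:

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(5);
    let a = cell.borrow(); // first shared borrow
    let b = cell.borrow(); // re-borrowing, as a reentrant lock allows, is fine
    assert_eq!(*a + *b, 10);
    // A mutable borrow while shared borrows are live is refused at runtime,
    // instead of creating a second aliasing &mut T:
    assert!(cell.try_borrow_mut().is_err());
    drop((a, b));
    assert!(cell.try_borrow_mut().is_ok());
    println!("ok");
}
```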
While not obviously problematic, that seems weird enough that you would need to validate that it is explicitly safe.
This is an unusually hostile take.
The author's comment about address instability is only a minor point in the article:
> happylock also sorts locks by memory address, which is not stable across Vec reallocations or moves.
…specifically with regard to happylock, which has a bunch of commentary on it (1) around the design.
You're asserting this is a problem that doesn't exist in general, or specifically saying the author doesn't know what they're talking about with regard to happylock and vecs?
Anyway, saying they're not competent feels like a childish slap.
This is a well written article about a well written library.
It's easy to make a comment like this without doing any research or actually understanding what's been done, responding to the title instead of the article.
Specifically in this regard, why do you believe the approach taken here to overcome the limitations of happylock has not been done correctly?
(1) - https://github.com/botahamec/happylock
>This is a deliberate design decision. lock_tree uses a DAG, which lets you declare that branches A and B are independent — neither needs to come before the other. Sounds great, but it has a subtle problem: if thread 1 acquires A then B, and thread 2 acquires B then A, and both orderings are valid in the DAG, you have a deadlock that the compiler happily approved.
Would it be possible to build one at compile time? Static levels seem like they won't let you share code without level-collaboration, so that might be kinda important for larger-scale use.
I don't know enough about Rust's type system to know if that's possible though. Feels like it's pushing into "maybe" territory, like maybe not with just linear types but what about proc macros?
I can definitely see why it's easier to build this way though, and for some contexts that limitation seems entirely fine. Neat library, and nice post :)
IMO compile time locking levels should be preferred whenever possible... but the biggest problem with compile time levels is that they, well, check at compile time. If you need to make mutexes at runtime (e.g. manage exclusive access to documents uploaded to a server by users) then you need to be able to safely acquire those too (provided in surelock with LockSet).
On that note though, I haven't found a whole lot of documentation or blog posts around trying to make better errors in macros or other compile-time checks. Have you looked at that/do you know of any decent detailed sources? I haven't looked too hard yet, but also I just don't have any good place to start, and Google's kind of garbage at the moment.
One thing I didn't see in the post or the repo: does this work with async code?
I couldn't find the "search" button on Codeberg, and tests/integration.rs didn't have any async.
For embedded, I have had my eye on https://github.com/embassy-rs/embassy (which has an async runtime for embedded) and would love a nice locking crate to go with it.
First, lock acquisition seems to be a blocking method. And I don't see a `try_lock` method, so the naive pattern of spinning on `try_lock` and yielding on failure won't work. It'll still work in an async function, you'll just block the executor if the lock is contested and be sad.
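For contrast, here is the naive pattern being ruled out, sketched with std's Mutex (which does have `try_lock`); in async code the yield would go to the executor rather than the OS scheduler, and the helper name is mine:

```rust
use std::sync::Mutex;

// Spin on try_lock, yielding on each failure. This is the workaround
// that the absence of a try_lock method forecloses for the crate
// under discussion.
fn with_lock<T, R>(m: &Mutex<T>, f: impl FnOnce(&mut T) -> R) -> R {
    loop {
        if let Ok(mut guard) = m.try_lock() {
            return f(&mut guard);
        }
        std::thread::yield_now(); // in async: yield back to the executor
    }
}

fn main() {
    let m = Mutex::new(0u32);
    assert_eq!(with_lock(&m, |v| { *v += 1; *v }), 1);
    println!("ok");
}
```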
Second, the key and guard types are not Send, otherwise it would be possible to send a key of a lower level to a thread that has already acquired a lock of a higher level, allowing deadlocks. (Or to pass a mutex guard of a higher level to a thread that has a key of a lower level.)
Therefore, holding a lock or a key across an await point makes your Future not Send.
Technically, this is fine. Nothing about Rust async in general requires that your Futures are Send. But in practice, most of the popular async runtimes require this. So if you want to use this with Tokio, for example, then you have to design your system to not hold locks or keys across await points.
This first restriction seems like it could be improved with the addition of an `AsyncLockable` trait. But the second restriction seems to me to be fundamental to the design.
Also to note, regarding “future not send,” that, in tokio codebases where the general expectation is that futures will be Send, enabling the clippy lint “future_not_send” is extremely helpful in avoiding these kinds of issues and also in keeping the error localized to the offending function, rather than it being miles away somewhere it happens to be getting indirectly spawned or whatever: https://rust-lang.github.io/rust-clippy/stable/index.html?se...
Why? You have java.util.concurrent; you should never see a deadlock. You might see a performance degradation or maybe even livelock, but that's very, very, very rare.
What abjectly idiotic thing is in your Java codebase such that you have deadlocks?
I dunno. I appreciate the opposition to "just be careful". But this feels to me like it's inducing bad design patterns. So it feels like it's wandering down the wrong path.
[0] https://web.mit.edu/6.005/www/fa15/classes/23-locks/#deadloc...
I'd be curious to hear the author's reason to not prefer a LockSet everywhere.
Opting out of lock levels was a design goal. By default all locks are Level1, so the level can be omitted thanks to the default type parameter filling it in for you. Levels have no runtime cost, so sidestepping them is free. This lets you live in an atomic-locks-only world if you want, and if you later find that you need incremental locks, you can add more levels at that time :)
[EDIT: fixing autocorrect typos when I got back to my laptop]
It's things like "perfectly invisible in code review, happy to pass CI a thousand times, then lock your system up at 3am under a request pattern that no one anticipated." which are a dead tell it was written by ChatGPT
I'll bet you 2 beers the LLM you used to proofread the post was indeed ChatGPT.
That's exactly the problem. It sounds like one aggregate person. It's quite unpleasant to read the same turns of phrase again and again and again, especially when it means that the author copped out of writing it themselves.
In fairness I think in this case they mostly did write it themselves.
Email the mods about it rather than replying, subject “Accusation of AI in FP comment” or whatever. It’s a guidelines violation to make the accusation in a comment rather than to them by email, and they have tools to deal with it!
The closest actually human style to LLM writing is obnoxious marketing speak. So that also sucks.
So many people who are not great writers lean on LLMs to write, but aren't good enough to see how bad it is. They should be criticised for this. Either use them and be good enough to make it read as human, or just don't use them. No free lunch.