Yet if John Nash had solved negotiation in the 1950s, why did it seem like nobody was using it today? The issue was that Nash's solution required that each party to the negotiation provide a "utility function", which could take a set of deal terms and produce a utility number. But even experts have trouble producing such functions for non-trivial negotiations.
A few years passed and LLMs appeared, and about a year ago I realized that while LLMs aren’t good at directly producing utility estimates, they are good at doing comparisons, and this can be used to estimate utilities of draft agreements.
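One way to turn pairwise comparisons into utility numbers is a Bradley-Terry style model: ask which of two drafts is preferred, many times, then fit a latent score per draft. This is a sketch under my own assumptions, not necessarily what Mediator.ai actually does; an LLM would supply each "which draft is better?" answer.

```python
import math

def bradley_terry(n_items, comparisons, iters=200, lr=0.1):
    """Fit log-strength scores from (winner, loser) index pairs
    by gradient ascent on the Bradley-Terry log-likelihood."""
    scores = [0.0] * n_items
    for _ in range(iters):
        for w, l in comparisons:
            # probability the current scores assign to the observed outcome
            p = 1.0 / (1.0 + math.exp(scores[l] - scores[w]))
            # nudge winner up and loser down by the surprise (1 - p)
            scores[w] += lr * (1 - p)
            scores[l] -= lr * (1 - p)
    return scores

# Toy comparisons: draft 2 beats 1, 1 beats 0, 2 beats 0
comps = [(2, 1), (1, 0), (2, 0)]
s = bradley_terry(3, comps)
assert s[2] > s[1] > s[0]  # recovered ranking matches the comparisons
```

The fitted scores are only defined up to an additive constant, which is fine for comparing drafts against each other.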
This is the basis for Mediator.ai, which I soft-launched over the weekend. Be interviewed by an LLM to capture your preferences and then invite the other party or parties to do the same. These preferences are then used as the fitness function for a genetic algorithm to find an agreement all parties are likely to agree to.
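A minimal sketch of the search step, with placeholder utility functions and the Nash product (each party's gain over their no-deal payoff, multiplied together) as the fitness. The exact fitness is my assumption; the post doesn't spell it out.

```python
import random

random.seed(0)

# Hypothetical deal with two terms: a profit share x in [0, 1]
# and a non-compete flag. Utilities are toy placeholders, not
# elicited preferences.
def utility_a(x, nc):
    return x + (0.2 if nc else 0.0)

def utility_b(x, nc):
    return (1 - x) + (0.0 if nc else 0.3)

def fitness(ind):
    x, nc = ind
    # Nash product with a disagreement payoff of 0 for both parties
    return max(utility_a(x, nc), 0) * max(utility_b(x, nc), 0)

# Tiny genetic algorithm: elitist selection, averaging crossover,
# Gaussian mutation on the share, flag inherited from a parent.
pop = [(random.random(), random.random() < 0.5) for _ in range(30)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    for _ in range(20):
        (x1, n1), (x2, n2) = random.sample(parents, 2)
        x = min(max((x1 + x2) / 2 + random.gauss(0, 0.05), 0), 1)
        children.append((x, random.choice([n1, n2])))
    pop = parents + children

best = max(pop, key=fitness)
```

Elitism keeps the best draft found so far across generations; in the real product the fitness would presumably come from utilities estimated during the LLM interviews rather than closed-form placeholders.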
An article with more technical detail: https://mediator.ai/blog/ai-negotiation-nash-bargaining/

Discussion (26 Comments)
I have published some research on using LLMs for mediation here: https://arxiv.org/abs/2307.16732 and https://arxiv.org/abs/2410.07053
These papers describe the LLMediator, a platform that uses LLMs to:
a) ensure a discussion maintains a positive tone by flagging and offering reformulated versions of messages that may derail the conversation
b) suggest intervention messages that the mediator can use to intervene in the discussion and guide the parties toward a positive outcome.
Overall, LLMs seem to be very good at these tasks, and even compared favourably to human-written interventions. I'm very excited about the potential of LLMs to lower the barrier to mediation, since it could help resolve many disputes in a positive and collaborative manner.
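The flag-and-reformulate flow in (a) can be sketched like this, with a stub standing in for the real model call (the prompt wording and stub behaviour are mine, not from the papers):

```python
def llm(prompt: str) -> str:
    # Stand-in for a real chat-model call; returns canned answers here.
    if "risk derailing" in prompt:
        return "YES" if "idiot" in prompt else "NO"
    return "I disagree with that approach and would like to discuss it."

def moderate(message: str) -> str:
    # Step 1: ask the model whether the message may derail the discussion
    flag = llm("Does the following message risk derailing the discussion? "
               f"Answer YES or NO.\n\n{message}")
    if flag.strip().upper().startswith("YES"):
        # Step 2: offer a reformulated, de-escalated version instead
        return llm(f"Rewrite this message in a constructive, neutral tone:\n\n{message}")
    return message

print(moderate("That plan is fine."))            # passes through unchanged
print(moderate("Only an idiot would do that."))  # gets reformulated
```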
But when one side is indifferent to something the other side cares deeply about, yet has veto power to spoil it, a Nash agreement isn't going to be "fair" in the usual sense of the word.
One way we solve it in the real world is that the negotiators also have power - including, possibly, the power to force the party most OK with the status quo to come to the negotiating table, and reject exploitative proposals.
That isn't foolproof either, of course. But it beats rhetoric trying to convince the weaker party to submit.
Once you formalize preferences into something comparable, you’re already making a lot of assumptions about how people value outcomes.
Last year I built https://andshake.app to prevent the need for conflict resolution… by getting things clear up front.
I agree that AI has much to offer in low-stakes agreements to help people move forward in cooperation.
Do you want the first link, "How it Works", to really just be a "#" anchor to the front page? It makes the site feel broken if someone clicks it. Also, your blog post about Nash Bargaining is almost more of a "How it Works" page than the actual How it Works page is.
I feel like your landing page very quickly told me what your website does, which is great. If Nash Bargaining is the "wedge" that separates you from the pack, I'd try to explain how it differentiates you from the others as quickly as possible. I know that's easier said than done. Good luck!
We built a way to make contracts enforceable and resolve disputes without the high cost of litigation. Specifically, by adding our arbitration clause to your contracts, or by using our "case by consent" option, you can get AI-driven, court-enforceable arbitration decisions in 7 days for a $500 flat fee - no lawyers required. This compares to the $30k or $40k you would otherwise spend on a lawyer plus JAMS/AAA arbitration fees. For your HOA, I suspect case by consent would be the best approach: two parties come to the website, both agree to use DecisionLayer to resolve the dispute, and then present the issue and each side's argument.
We have a free case simulator on our site. Check it out at https://www.decisionlayer.ai/simulate
Is anyone working on this? Seems like a big win for AI if it can be done.
This thing you point out is exactly why Nash demanded invariance under affine transformations in his solution. The units are completely arbitrary: if I rank everything as having importance 1 million, that's exactly the same as ranking everything as having importance 1, and also the same as ranking everything as having importance 0.
The solution is only sensitive to differences in the utility function, not to its actual values. If you want to weight something very strongly in the Nash version of the game, you also have to weight other things correspondingly weakly.
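A toy check of that invariance, assuming a simple split-the-pie setup: rescaling one party's utility by a positive affine transformation (and shifting their disagreement payoff with it) leaves the Nash-optimal split unchanged.

```python
candidates = [i / 100 for i in range(101)]  # possible splits x in [0, 1]

def nash_argmax(u1, u2, d1, d2):
    # Nash bargaining solution over the candidate grid:
    # maximize the product of gains over the disagreement point.
    return max(candidates, key=lambda x: (u1(x) - d1) * (u2(x) - d2))

u1 = lambda x: x                 # party 1 values their share linearly
u2 = lambda x: (1 - x) ** 0.5    # party 2 is risk-averse
best = nash_argmax(u1, u2, 0.0, 0.0)

# "Importance 1 million" instead of "importance 1": a positive affine
# rescaling of party 1's utility, with their disagreement payoff
# transformed the same way.
v1 = lambda x: 1_000_000 * x + 7
best_scaled = nash_argmax(v1, u2, 7.0, 0.0)
assert best == best_scaled  # same agreement either way
```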
That said, given the fictional example:
Honestly I’m on Daniel’s side - they agreed on a 50/50 split, and they’ve both been working their asses off to make the business work. It’s an arrangement that clearly both of them have been actively participating in, not trying to push back against, for a year and a half.
And the supposed insight this product offers is to… split the difference? Between Maya’s power play for 70/30, and Daniel’s insistence on the original 50/50? 60/40 is the brilliant proposal?
How could they stand to work together afterwards, knowing she thinks she deserves 70% of the profit, but was willing to ‘settle’ for 60%? Why would you want to keep working with someone who screwed you over that way? Their partnership is toast. All the mediation really does is… I don’t know, what? How is this good for Daniel? This ain’t any kind of reconciliation, surely.
Is the argument that it’d be easier for her to get a new baker, than it is for him to get a new business manager?
These characters have both been putting the work in.
I’d be looking for a serpent at his partner’s ear, planting poisonous suggestions that she deserves more of the company they started equally. If this were real.