
Discussion (62 Comments). Read Original on HackerNews

burningion•about 3 hours ago
The main point raised in the article is that these bots may void attorney client privileges.

But the real danger with these IMO is that they're turning casual conversations into a permanent record, and one that will be completely discoverable in court, should the company get into trouble later.

coffeebeqn•about 3 hours ago
Plus they are super inaccurate. Gemini gets one of its three bullets subtly or majorly wrong almost every time. Just a few weeks ago Gemini said we’re rolling out our payment setup in Russia. You know, the place where we have 20+ sanctions packages on? We were talking about France in the meeting.
operation_moose•about 3 hours ago
We've found they're surprisingly good if everyone on the call is using a decent headset.

The problems start when using conference room audio or someone is on their laptop mic. If they miss a word they never do unintelligible, they just start playing madlibs based on the rest of the sentence.

We just went through a round of 100+ (non-sensitive) VoC interviews and they really cut down the workload of compiling all of the feedback. If the audio was a little shaky though, we pretty much had to throw away the transcripts and do them from scratch like we used to.
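The "madlibs" failure mode above can be caught at the transcript level if the speech model exposes per-word confidence scores. A minimal Python sketch, assuming a hypothetical `(token, confidence)` output shape; real ASR APIs differ:

```python
# Sketch: post-process hypothetical ASR output with word-level confidence
# scores, replacing low-confidence words with an explicit marker instead
# of letting a language model guess a plausible-sounding substitute.

def mark_unintelligible(words, threshold=0.5):
    """words: list of (token, confidence) pairs from a transcription pass."""
    out = []
    for token, conf in words:
        out.append(token if conf >= threshold else "[unintelligible]")
    return " ".join(out)

# Example: noisy audio on the last word of the sentence.
asr_output = [("rolling", 0.97), ("out", 0.95), ("payments", 0.91),
              ("in", 0.88), ("France", 0.31)]
print(mark_unintelligible(asr_output))
# → rolling out payments in [unintelligible]
```

Flagging the gap forces a human back to the audio instead of letting a confident-sounding guess stand, which is exactly the behavior the comment wishes these tools had.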

user_7832•about 3 hours ago
> If they miss a word they never do unintelligible, they just start playing madlibs based on the rest of the sentence.

Imo this is the single biggest flaw of LLMs. They're great at a lot of things, but not knowing when they're wrong (or when they don't have enough information to actually work with) is a critical flaw.

IMO there's nothing structural about why they shouldn't be able to spot this and correct themselves - I suspect it's a training issue. But presumably bots that infer context/fill in the dots rank better on what people like... at the cost of accuracy.

pjc50•about 3 hours ago
Given how financial services can impose silent inexplicable lifetime bans for using the wrong words in the "what is this transaction for" field, I'm wondering at what point the AI automatically reports people for sanctions violation based on its mishearing.
yagizdagabak•1 minute ago
my fear exactly. same with something like Meta glasses. and i feel like we have moved quickly from the regulatory problems to "'tis a fact of life"
camdenreslink•9 minutes ago
The AI note summaries in meetings I'm in are frequently totally inaccurate. They are actually inaccurate in two ways: they fabricate things that were never said (but always kind of close to something that was said), and they emphasize the totally wrong thing (e.g. acting like the entire conversation was about one topic when that was just a very small part).

I sincerely hope these aren't used in court.

LanceH•about 2 hours ago
> But the real danger with these IMO is that they're turning casual conversations into a permanent record, and one that will be completely discoverable in court, should the company get into trouble later.

I would add that there is no guarantee they are correct as well.

mock-possum•about 1 hour ago
You’d use a computer generated transcript as a guide, not as proof - the proof is the recording of the person actually saying the thing, not the LLMs best guess of what it imagined the person saying.

“At timestamp X, person Y said Z” says the robot, and then you dutifully scrub the audio to timestamp X to verify.
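The verify-by-timestamp workflow described above is mechanical enough to script. A minimal Python sketch, assuming "HH:MM:SS" transcript timestamps and a fixed-rate PCM recording (both assumptions for illustration):

```python
# Sketch: convert a transcript citation ("at timestamp X, person Y said Z")
# into an audio sample offset, so a reviewer can jump straight to the cited
# moment and verify the quote against the actual recording.

def timestamp_to_seconds(ts):
    """Convert an "HH:MM:SS" timestamp to total seconds."""
    h, m, s = (int(part) for part in ts.split(":"))
    return h * 3600 + m * 60 + s

def seek_offset(ts, sample_rate=16000):
    """Sample index to seek to in a PCM recording at the given rate."""
    return timestamp_to_seconds(ts) * sample_rate

print(seek_offset("00:14:05"))  # 845 seconds * 16000 Hz = 13520000
```

The transcript is only the index; the recording remains the evidence.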

LanceH•21 minutes ago
Is audio always kept in addition to transcripts? (genuine question, I rarely record either)
stego-tech•about 2 hours ago
This. The fact LLMs can also amplify existing closed-set research means even smaller shops can now search through a flood of documents to find smoking guns or critical evidence, much faster.

I’ve been saying it since the mid-10s, but it’s worth repeating: data isn’t gold; it’s more like oxygen in a room. The higher the concentration, the more likely it is to poison the inhabitants or explode at an errant spark (a lawsuit).

Collect only what’s needed to perform the function, and store it only as long as necessary for compliance. Anything else is going to spook counsel.

mock-possum•about 1 hour ago
What are you trying to get away with I wonder?
infecto•40 minutes ago
The nuance here too is that just because someone has concern about materials being discoverable does not mean the company is doing something illegal. Corporate law as it pertains to legislation (US in this perspective) is a dance between company and current administration. When it comes to antitrust and other related legislation the equilibrium is shades of gray that changes between both administration changes but sometimes from the same administration. Companies look to optimize their outcomes and the government is optimizing not so much for legality but what the current administration sets as the main concern.
watwut•about 2 hours ago
Basically, it will be harder to hide illegal and unethical stuff companies routinely engage in.
nz•about 2 hours ago
No, that would be a strict improvement. The AI note-takers can easily "mishear" or "misreport" non-existent illegal and unethical things. It also seems to easily mess up numbers (which is a big problem, because a lot of decisions hinge on precise numbers -- imagine inflating an inventory by an order of magnitude, and then imagine having to pay a tariff on something that never existed).

I have a friend who works at a large-ish company that imports and manufactures things (in one of the clerical/quantitative professions). A few years back, they had the IT department go on a kind of "inquisition", wherein they forced employees to disable the summarization function that came with MS Teams, and threatened to fire them if they did not. The resistance to this demand was surprising -- most people are clueless about the cost of their own convenience. Worst of all, people would zone out of meetings, because the AI was producing summaries, which they would then never read.

The effect of the technology was that it made meetings infinitely more expensive, because the supposed benefit of meetings was nullified by complacency, _and_ it made the meetings a liability (incorrectly summarized meetings, that could be used in the discovery process, sure, but could also be sold by MSFT as a kind of market-research-data to competitors in the space).

Nothing illegal has to happen in these meetings at all, for this tech to cause an infinity of problems for the corporation. Every employee that uses these is effectively an unwitting spy. And if that is the case, then the meetings might as well be recorded and uploaded to YouTube (or whatever people watch these days)[1].

[1]: Maybe this is the future. Which I am okay with, but only if the entire planet has to do it, and the penalties for not doing it are irrecoverably severe.

kjs3•36 minutes ago
"If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him" - Cardinal Richelieu

Be careful what you wish for. Particularly when it involves tech that often gets it very, very wrong.

chvid•about 2 hours ago
Show me the man and I will show you the crime.

Modernized, at industrial AI scale.

triceratops•about 1 hour ago
That's an argument for recording everyone on earth 24/7. Is that what you mean?
sdellis•about 1 hour ago
With the level of surveillance and erosion of privacy, that is essentially what is happening. We all know that we are being watched and surveilled. There is no longer an "argument". Anything you say in public or private could potentially be used against you in the future.
flir•about 1 hour ago
It'll just happen. Can't really fight technological progress.
SecretDreams•about 2 hours ago
Going to also be harder to hide completely legal, but not ideal stuff. Like randomly complaining about your boss to a colleague or casually discussing a feature you're stuck working on that you think is a bad idea.
derektank•44 minutes ago
>casually discussing a feature you're stuck working on that you think is a bad idea.

I’ll be honest, this is something that I hope AI note taking tools capture and incorporate into summaries of the company’s status. Especially if they act as an intermediary without revealing the specific person who said it. There’s a lot of information latent within organizations that doesn’t get properly shared due to concerns of retaliation or simply embarrassment that would benefit everyone by being communicated sooner.

gwbas1c•about 1 hour ago
Back when I was in college, in a fraternity, we always assumed that the phones were tapped. Specifically, we never spoke about alcohol or marijuana (now legal) on the phone.

Even today, I generally assume that my phone could be tapped; even when talking with my trusted work colleagues, friends, and family. I'm extra careful about dirty jokes or "grey morality" in video conferences and email.

The same applies to speaking with lawyers. You never know when some motivated asshole wants to twist your words out of context, and the possibility of a recording just enables that behavior.

---

I know enough about security and encryption to know that unless I've exchanged keys physically with someone else, there really is no guarantee that someone hasn't compromised a certificate somewhere. (I.e., a "secure" connection on the internet is only secure enough for a credit card.)

atonse•25 minutes ago
This is where I think realtime transcription (or just-in-time transcription followed by deleting everything) will be the end state.

Real-time transcription where the AI actually takes notes (instead of recording every word and keeping a dump of it somewhere) is especially appealing. Then there isn't any record of the raw sentences, and things that aren't relevant are immediately discarded without any written record.

OpenAI's realtime whisper and other such models will become the default over time.
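The transcribe-then-discard flow sketched above can be outlined roughly in Python. Here `extract_action_items` is a hypothetical stand-in for whatever real-time model does the note-taking; the point is that raw text never leaves the loop:

```python
# Sketch of the "summarize then discard" pipeline: transcribe a chunk,
# keep only the extracted notes, and never persist the raw words.

def extract_action_items(transcript_chunk):
    # Placeholder heuristic standing in for a model call: keep only
    # lines that read like action items.
    return [line for line in transcript_chunk.splitlines()
            if line.lower().startswith(("todo:", "action:"))]

def process_meeting(chunks):
    notes = []
    for raw in chunks:            # raw text exists only inside this loop
        notes.extend(extract_action_items(raw))
        del raw                   # no raw transcript is ever written out
    return notes

chunks = ["Action: ship the payments setup in France by Q3",
          "We joked about the office coffee for five minutes"]
print(process_meeting(chunks))
# → ['Action: ship the payments setup in France by Q3']
```

Nothing but the distilled notes survives, so there is no verbatim record of offhand remarks to discover later.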

testfoobar•38 minutes ago
I would be concerned about transcription errors (e.g., with non-native speakers) anywhere precision matters: engineering, compliance, regulation, legal, etc.
rpaddock•about 2 hours ago
Some companies want no records at all, see:

"2028 – A Dystopian Story By Jack Ganssle":

http://www.ganssle.com/articles/2028adystopianstory.htm

Known as ’The Rule of 26’, this is sometimes given as a reason NOT to keep engineering notebooks, etc. Under Federal Rule 26 you are expected to volunteer the records before they are requested, including any backups.

From Cornell Law:

LII, Federal Rules of Civil Procedure, Rule 26: Duty to Disclose; General Provisions Governing Discovery

(a) Required Disclosures.

(1) Initial Disclosure.

(A) In General. Except as exempted by Rule 26(a)(1)(B) or as otherwise stipulated or ordered by the court, a party must, without awaiting a discovery request, provide to the other parties:

(i) the name and, if known, the address and telephone number of each individual likely to have discoverable information—along with the subjects of that information—that the disclosing party may use to support its claims or defenses, unless the use would be solely for impeachment;

(ii) a copy—or a description by category and location—of all documents, electronically stored information, and tangible things that the disclosing party has in its possession, custody, or control and may use to support its claims or defenses, unless the use would be solely for impeachment; …

https://www.law.cornell.edu/rules/frcp/rule_26

kjs3•8 minutes ago
Much of my experience with corporate counsel is one of 2 extremes: "keep everything"[1] or "keep nothing". Keep everything, because then you can't be caught out deleting something possibly relevant, which looks very, very bad in court. Keep nothing, because then opposing counsel can't catch you out only keeping things that make you look good in court.

[1] There's actually a subset of this, which includes "...until you are legally allowed to delete it, then delete everything". This is driven by regulation (e.g. SOX in the US).

djoldman•34 minutes ago
This was interesting and sent me down a research hole.

General conclusion:

Corporate litigation is mostly just a series of self-investigations so that both sides can learn what both sides actually know, given that neither side knows much about themselves OR the other side. At the same time both sides are trying to stop the other side from getting the judge to order them to do more investigating.

next_xibalba•about 2 hours ago
See also the OpenAI vs. Musk trial, where Greg Brockman's diary and Sam Altman's texts have taken center stage.
pfortuny•about 3 hours ago
Honest question:

Do these systems not share data with the AI servers? Or are they all local (on-site, not on-computer)?

I am totally baffled by the trust people put on these systems, sharing with them the most obviously private data.

dsr_•about 2 hours ago
Most services have privacy policies that boil down to:

- we promise not to share PII (defined as narrowly as possible)

- we promise not to share payment information except with our payment system

- if you pay us, we promise not to train LLMs on your data

- you agree that everything else can be used for any business purpose, including marketing, intelligence gathering, and "sharing with our 1735 trusted partners".

cj•about 2 hours ago
> I am totally baffled by the trust people put on these systems

The average person doesn't care about online privacy.

sdellis•26 minutes ago
They care, but realize that there is no such thing as privacy anymore. The amount of obsession required to maybe maintain some degree of privacy is not something most people are willing to do.
daft_pink•about 2 hours ago
If you are in a trusted industry like finance or healthcare, the popular ones generally have industry wide privacy certification like HIPAA compliant, SOC 2 Type 2 etc.
sandworm101•about 3 hours ago
>> Executives and corporate boards generally expect conversations with their legal team about legal matters to have attorney-client privilege. They lose that protection if they share the same information with outside parties — and it’s possible that an A.I. note taker could have the same effect.

Total oversimplification. The fact is the privilege is a rule totally in the hands of the court. Every time a new communications technology comes up, someone shouts about privilege, but the courts still accept it. (Telephones, cell phones, emails, IMs, Zoom court: each has had its day in the A-C privilege debate and been accepted.) What matters is that the parties intended and expected communications to be privileged.

As an example: I had a crim law prof who had been a NYC public defender in the 70s/80s. She had regularly interviewed clients at Rikers Island. All interviews were listened to by guards, and she said you could even pay to get a copy of the recording. But these interviews were still covered by attorney-client privilege. No court would allow such evidence, but that doesn't mean that the prison could not use it for jail safety. Why does this matter? Because the mere presence of a third party doesn't mean anything. This isn't magic; an eavesdropper does not nullify the spell. Whether something is or is not privileged depends on the rules followed in the local jurisdiction, and no jurisdiction has ever followed a simplistic "presence of a third party" rule.

Until someone demonstrates an example of an AI actually leaking privileged information, courts are going to chalk it up as just another electronic tool for recording communications.

analogpixel•about 2 hours ago
Unrelated to the article, but how do you make a page that prevents the mouse scroll wheel from working? That's pretty impressive.
bilekas•about 1 hour ago
It's not impressive, it's scummy hiding news behind a paywall. They simply use some CSS trickery to set the height of the content to the size of your viewport, so there is nowhere to scroll to.
vintagedave•about 3 hours ago
Paywall: can anyone share what the issue is?

Inaccuracy in meeting minutes?

Leaking private info, re security of notes?

I have never used them (don't trust them to accurately capture what is important in a meeting vs just noting what's mentioned), but the concept seems very useful to me.

WillAdams•about 3 hours ago
Reminds me of when I worked for a small shop which had the copier maintenance contract at a local college --- when something went wrong and wasn't properly addressed, my bosses found themselves being held to account with their own words from prior phone calls being quoted back to them verbatim --- which they were mystified by until I explained that the administrators had all come up from the clerical pool and knew shorthand.
bearjaws•about 1 hour ago
The main risk is attorney-client privilege, and it's already been tested in New York: if you transcribe a call you need to turn over the transcriptions, and they can subpoena the company doing the transcription for the records if you refuse.
LanceH•about 1 hour ago
They are saying that it could invalidate attorney client privilege because the transcription could technically be available to an outside party.

I suspect what isn't being said by the lawyers is they want to keep attorney client privilege so they can outright lie.

close04•about 3 hours ago
It's in the viewable text on the page.

> A trendy productivity hack, A.I. note takers are capturing every joke and offhand comment in many meetings. They could also potentially waive attorney-client privilege.

By now everyone knows that AI notes that aren't curated by a human will catch every silly thing that was said in the meeting while omitting the context of the tone or body language. Something as simple as "yeah, right" has vastly different meanings depending on how it was said. In a different context it's already been established that using AI breaks attorney-client privilege [0] and this concern has been raised before by law firms [1][2] or the American Bar Association [3] (you can just hit escape before the paywall loads to see the full content). A judge will have to weigh in on this one too.

I don't know what's with the wave of paywalled articles that keep making it to the front page without any workaround included in the submission. Even when you coax the text out of the page source, they're not very insightful to begin with.

[0] https://perkinscoie.com/insights/update/federal-court-rules-...

[1] https://www.smithlaw.com/newsroom/publications/the-silent-gu...

[2] https://natlawreview.com/article/when-ai-takes-notes-protect...

[3] https://www.americanbar.org/groups/gpsolo/resources/ereport/...

vintagedave•about 1 hour ago
> It's in the viewable text on the page.

Not for me - there was no viewable text.

pjc50•about 3 hours ago
People opt in to the panopticon and then discover they have no more secrets. I'm surprised lawyers fall for that as well.
lukewarm707•about 2 hours ago
The doofus lawyer probably didn't realise; I wouldn't call it opt-in.
close04•about 2 hours ago
If a lawyer takes notes and puts them in a computer, or a cloud drive, or sends them over email, they are still covered by attorney-client privilege, right? If they use an AI to do it, it's treated more like a third party no longer covered by the same privilege. If there's no court decision on this, it only takes one bad assumption to screw up by using AI.

To be fair, the attorney-client privilege should be completely technology/medium agnostic. If the intention is to have that info stay between client and attorney, nothing should change this.