
Discussion (47 Comments) · Read Original on HackerNews

dagmx•about 1 hour ago
This is going to be a huge chilling factor for employees. You'd no longer be able to dissent, or discuss anything non-work related, with even the slightest expectation of privacy.

Yes they could have accessed logs before but there’s a difference between directed checking after incidents and active surveillance at scale.

PradeetPatel•10 minutes ago
Tbh that's to be expected; the work machine is the company's property and there shouldn't be any expectation of privacy.

I work at a tech firm in India, and we are encouraged to create skills.md based on the traits of our colleagues, with the intention of reducing key personnel risk. A handful of engineers were let go as the result of a re-alignment, and their AI counterparts are actively maintaining their code.

I wonder if this is where they are going.

Hamuko•7 minutes ago
>we are encouraged to create skills.md based on the traits of our colleagues

Like that "Scott is an asswipe who never agrees to any idea that isn't his" or what?

everdrive•about 1 hour ago
Yes, but I cannot imagine Meta cares about chilling their employees. They're deep into the "extract more value" phase and are no longer bringing in the cutting edge talent.
stringfood•about 1 hour ago
at this point employees should be kept in cold storage to acclimate so as to prevent being shocked from any more chilling announcements. also will cut down on bathroom breaks
layman51•4 minutes ago
Question: I have heard that at some tech companies that use internal chat software, the general practice is for IT to set it so that the messages are automatically deleted at the end of the day. In Google Chat this is a feature called "turn off history", and the idea behind it is that it can reduce a paper trail when there are investigations into the company doing something that's potentially monopolistic or otherwise shady.

If keystrokes are captured, isn't this a double-edged sword where maybe the company might be inadvertently collecting evidence against itself if there's an investigation and the investigators want to collect keystrokes?

simmerup•about 1 hour ago
Yeah, if at any time Mark can ask Meta AI ‘which of my employees insulted me today’ for example, that’s wild
kridsdale1•about 1 hour ago
I insulted him in my mandatory Exit Interview form from HR when I resigned.

It has had no impact on recruiters trying to win me back since then.

LightBug1•4 minutes ago
Should have framed it. Good job.
gambiting•about 1 hour ago
In my experience at other companies, recruiters (and pretty much everyone else) have no idea that someone has been blacklisted until you finish all of your interviews and tell HR to hire that person. That's when they tell you the person is on some kind of shit list and can't be hired. It made for an awkward conversation with someone who had basically been told we'd be making an offer soon.
gwerbin•about 1 hour ago
That's not a bug, that's a feature
jmull•about 2 hours ago
I like to imagine they’ll mostly capture meta employees using AIs to do work.

Then they’ll deploy models trained on this, and begin capturing employees using AIs that are good at using AIs to do work.

Repeat a few times and they’ll start capturing the keystrokes of people mashing their heads into keyboards with despair and exclaiming, “Why can’t these models do anything anymore!!”

arjvik•about 2 hours ago
While it would be a hilarious failure mode to encounter, this is actually a good thing!

These models already have the skills that humans were using them for, so either by training the models to use subagents or simply by inlining the work done by the AI, you have a much easier time training the model to perform tasks from the human distribution. The humans have done the work of making the human distribution look more like an AI distribution.

bwestergard•about 1 hour ago
Doesn't this assume that what humans are currently doing with LLM agents is working out? Isn't it a bit early to bet on that to this degree?
wrs•about 2 hours ago
>data collected would not be used for performance assessments or any other purpose besides model training

And you expect Meta employees, of all people, to believe this?

anonym00se1•about 2 hours ago
In the midst of their 4th straight year of layoffs, with another 20% cut looming, I'm guessing Meta employees are a tiny bit suspicious.
orangecoffee•about 2 hours ago
Does it matter? I think the high compensation will be what drives the compliance.
camjw•about 1 hour ago
I guess this is why they acquired https://www.limitless.ai/ ?
fidotron•about 2 hours ago
Meta going all in on their brand with this.

Someone had to do it, distasteful though it may be. Could be quite hilarious what it learns in the process.

jtemplestein•about 2 hours ago
I wonder if this screen + mouse + keyboard (+ camera + speaker + mic) interface is really the right level of abstraction to model a “digital entity”

Sure, you can do everything a human can, but it also seems VERY inefficient

As an alternative, maybe you could just do network in/out?

evanjrowley•about 1 hour ago
It's the same approach as Windows Recall, but all data remains sovereign to the company generating it.
loeg•about 2 hours ago
For context, when the article says "a list of work-related apps and websites," this includes Google properties like gmail, docs, etc, and social media websites like Facebook and Instagram, with no provision for excluding personal accounts.
tmp10423288442•about 2 hours ago
No one intelligent should be logging into their personal accounts on their work devices in any case - it's always been the case (at least in the US) that companies can do whatever invasive scanning they want on devices they own.
bradlys•about 2 hours ago
Data collection isn’t new. The training is.
shimman•about 1 hour ago
You don't think collecting this type of intimate information about your employees is a major violation of the social contract?
rvz•about 1 hour ago
Meta can even afford to destroy themselves and their own employees.

More proof that they do not care about you at all. This is Meta's way of moving fast and destroying everything at all costs.
