
Discussion (106 Comments)
"No mine is the most dangerous"
"Nuh uh mine is"
"Mine could kill everyone!"
"Mine could do it faster!"
"Prove it!!!"
This is where we are
Did somebody say that Elon is stealthily funding the seven lawsuits filed against OpenAI by families of Canada mass-shooting victims?
As always, when the going gets tough, the tough ultimately resort to lawsuits.
You should assume that everyone has a hidden agenda when money is involved.
I didn't think crying could be such a successful business model.
i.e. "I'm so worried that our capped for-profit structure will limit your returns when we make over 1 Trillion in profit".
This is the world we live in.
I'm sure their marketing department is ecstatic but you guys are far more hype-based than what you're calling out.
This AISLE benchmark is interesting in this matter: https://aisle.com/blog/ai-cybersecurity-after-mythos-the-jag...
And the Copy Fail recently discovered by Xint is further proof that the gating is overblown: https://xint.io/blog/copy-fail-linux-distributions
I'm not entirely up to date on each week's LLM hype train/scandal, but last I heard there was no public access to it, nor publicly trusted third parties that can review the model's capabilities.
https://x.com/AISecurityInst/status/2049868227740565890
They say this because in their circles it's a compliment, and nobody ever stopped to consider how the general public might react to it, especially if you claim you'll shortly be the one in charge of world-reshaping technology.
People don't become bad guys just because they lie. The consequences of their actions (and their lies) matter more. Take Elon Musk for instance, he has always been a recognized liar, even when he was a good guy. What changed? Before, he was famous for making the electric car people actually wanted to drive, and cool rockets. Then came the politics: supporting the party most of his fans disliked, being responsible for many government job losses, in particular in the field of environmental preservation (ironic for a supporter of "green" energy), etc...
The following companies are participating in Project Glasswing (to get out in front of whatever vulnerabilities Mythos is able to find and exploit at scale):
AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks.
Do you think they are all in that gullible category?
https://www.anthropic.com/glasswing
assuming mythos is a paper tiger: great marketing, keep going
assuming mythos is for real: err, does this have to be explained?
>ChatGPT: This content was flagged for possible cybersecurity risk. If this seems wrong, try rephrasing your request. To get authorized for security work, join the Trusted Access Cyber program.
Related, they outsourced the TAP verification to a terrible vendor, and their internal support process to AI, so we are now in fairly busted support email threads with both and no humans in sight.
This all feels like an unserious cybersecurity partner.
If you make an LLM more safe, you are going to shift the weight for defensive actions as well.
There’s no physical way to assign weights to have one and not the other.
Do you think a human is capable of providing assistance with defense but not offense, over a textual communication channel with another human?
If no, how does a cybersec firm train its employees?
If yes, how can you make the bold claim that it's possible for a human to differentiate between the two cases using incoming text as their basis for judgement, but IMpossible for an LLM to be configured to do the same? Note that if some hypothetical completely deterministic LLM that always rejects "attack" requests and accepts "defense" ones can exist, the claim that it's impossible is false. Providing nondeterministic output for a given input is not a hard requirement for language models.
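A toy illustration of that last point: with greedy (temperature-0) decoding, a model's verdict is a pure function of its input, so "same request, same answer" is trivially achievable. The sketch below is not a real safety filter, and every keyword list in it is made up; it only demonstrates that a deterministic allow/reject gate is a coherent object.

```python
# Toy deterministic request gate. The keyword lists are hypothetical;
# a real system would use a model run with greedy decoding, which is
# equally deterministic: same input -> same verdict, every time.

OFFENSE_MARKERS = {"exploit this", "write a payload", "bypass their"}  # made up
DEFENSE_MARKERS = {"patch", "harden", "detect", "audit my"}            # made up

def classify(request: str) -> str:
    """Return 'reject', 'allow', or 'review' for a request, deterministically."""
    text = request.lower()
    if any(m in text for m in OFFENSE_MARKERS):
        return "reject"
    if any(m in text for m in DEFENSE_MARKERS):
        return "allow"
    return "review"  # ambiguous cases go to a human

# Determinism check: repeated calls on the same input always agree.
verdicts = {classify("help me harden my nginx config") for _ in range(100)}
assert verdicts == {"allow"}
```

Whether the classification is *accurate* is a separate question, and the same one we already ask of human reviewers; the point here is only that determinism is cheap.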
Put up velvet ropes outside… leak out rumors about the horrors inside. Whether it’s LLMs or carnies with tents full of “freaks” it’s the same playbook.
Watching OpenAI tumble from the clear market leader into “hey guys us too!” territory has been insightful.
https://huggingface.co/microsoft/bitnet-b1.58-2B-4T
Unless ... idk it sounds crazy, but giving me $200/mo might actually make it safe. Let's do that.
I personally am ready to buy the drop when this bubble pops.
Not sure about the security capabilities and haven't tested it all that well, as I usually just use hosted models, but I do find myself using it and it's been quite successful for parsing unstructured data, writing small focused scripts and translations.
The fact that I retain control of the data itself makes it incredibly useful, as I work in an environment where I can't just paste internal stuff into Codex.
But since it's run locally on a toaster, testing it is out of scope for me. It takes a fairly long time to do anything.
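The "parse unstructured data without the data leaving the machine" workflow described above can be sketched roughly as follows. The endpoint URL and model name are assumptions (any OpenAI-compatible local server, e.g. llama.cpp's server or Ollama, exposes something similar); the only part worth unit-testing is the small helper that pulls a JSON object out of the model's reply.

```python
import json
import re
import urllib.request

def extract_json(reply: str) -> dict:
    """Pull the first {...} object out of a model reply,
    tolerating prose or code fences around it."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model reply")
    return json.loads(match.group(0))

def parse_locally(text: str,
                  url: str = "http://localhost:8080/v1/chat/completions") -> dict:
    """Send unstructured text to a locally hosted model.
    URL and model name are placeholders for whatever local server you run."""
    body = json.dumps({
        "model": "local-model",  # placeholder
        "temperature": 0,
        "messages": [
            {"role": "system",
             "content": "Extract the requested fields and reply with JSON only."},
            {"role": "user", "content": text},
        ],
    }).encode()
    req = urllib.request.Request(url, body, {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)["choices"][0]["message"]["content"]
    return extract_json(reply)
```

Nothing in this sketch touches a third-party API, which is the whole appeal when internal data can't be pasted into a hosted model.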