

Discussion (10 Comments) · Read Original on HackerNews

urbandw311er · 44 minutes ago
> Tan’s website made 169 server requests (Hacker News makes 7). It shipped 28 test files to production users. It loaded 78 JavaScript controllers. Uncompressed 2MB PNGs that could’ve been 300KB. An empty 0-byte file sitting in production. A rich-text editor loaded on a read-only page.

I mean, none of this is great, but if these are the very worst examples they can find, then it feels a bit like scraping the bottom of the barrel.

simskij · about 2 hours ago
This might be the best article I’ve read in months. Thanks for sharing it!
LarsDu88 · 31 minutes ago
I was under the impression that the tokenmaxxing phenomenon was mostly propaganda from the hyperscalers and their vendors to keep the AI funding gravy train alive and prevent demand collapse from popping the bubble.

The big catastrophe for valuations is if supply outruns demand. Videogen has not reached the expected traction and profitability, so now we're talking about 1000x code output as a success metric. But that's not quite the same thing as solving 1000x the number of user problems.

If anything, these huge codebases are just creating new problems and atrophying from context rot due to the sheer amount of noise.

prism56 · about 1 hour ago
I work in a more traditional manufacturing business, and we are poorly implementing AI tools all over just because our customers ask about it...
jdw64 · about 2 hours ago
I would genuinely like to see a version for freelancers: “Your Client May Be Suffering from AI Psychosis.”
baw-bag · about 2 hours ago
Is there a term for something similar?

My boss is injecting hopium. He barely sleeps, compiling massive documents which don't answer anything. We literally have a "(LILR Design Language)" spec which, when you keep slicing through it, doesn't actually say anything.

It takes 70 seconds to fully scroll through it with the fast-scroll mechanism, but the whole thing boils down to "basically, if it looks good then it's good".

And even worse, he is asking for access to the codebase of the product since "I can do 90% of what we need and all the devs need to do is review the PR".

What is this called?

i7l · 22 minutes ago
I call it executive deterministic parroting:

https://ianreppel.org/executive-deterministic-parrots/

razodactyl · about 2 hours ago
It's called: the CEO isn't staying in their lane and is injecting incompetence into the company - look for a new job.
add-sub-mul-div · about 2 hours ago
Okay but didn't we know Tan had issues before the LLM era?
ActorNightly · about 1 hour ago
>Around the same time, Andrej Karpathy (OpenAI cofounder, former Tesla AI lead) told the No Priors podcast he was in a “state of psychosis” over AI agents. He said he hadn’t written a line of code since December. He described tasks that used to take a weekend now finishing in 30 minutes with zero human intervention. Karpathy is a literal genius and one of the most technically accomplished people in the industry. He built a WhatsApp bot called “Dobby the House Elf” to control his home systems (though that naming leans more towards genius than psychosis).

Ah yes, the same guy who said implementing lidar with cameras is hard (as if Kalman filters aren't a thing). The same guy who spoke positively about Musk's engineering talents AFTER he went crazy. That genius...

Basically, I feel like if you are suffering from this psychosis, it's because your talent is measured by how much stuff you have memorized, and how much of it you can type on a keyboard in a given timeframe. And now that LLMs are doing it for you, you feel worthless.

I remember when I first started learning Python, having been in Java/C++ land. It felt like a hack. You could just pip install stuff, import it, dynamically hack things around if you needed to, and make stuff work in much less time. I wrote tools that let me write other tools quicker. For example, back before you could ask LLMs to write code, you basically had to google stuff and search for examples. So one of the first things I wrote was essentially a web-page-to-API converter. Now I had a tool that programmatically let me pull content from the web, including things like code samples.
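The code-sample-pulling part of that tool could look something like this — a minimal sketch using only the standard library. The names `CodeSampleExtractor` and `extract_code_samples` are invented for illustration; the comment doesn't show the real tool's interface.

```python
from html.parser import HTMLParser


class CodeSampleExtractor(HTMLParser):
    """Collects the text inside <pre>/<code> blocks from an HTML page.

    A toy stand-in for the web-page-to-API converter described above;
    the actual tool's behavior is not shown in the comment.
    """

    def __init__(self):
        super().__init__()
        self._depth = 0    # nesting level inside code-bearing tags
        self._chunks = []  # pieces of the sample currently being read
        self.samples = []  # finished code samples

    def handle_starttag(self, tag, attrs):
        if tag in ("pre", "code"):
            self._depth += 1

    def handle_endtag(self, tag):
        if tag in ("pre", "code") and self._depth:
            self._depth -= 1
            # Only emit once we leave the outermost <pre>/<code> wrapper.
            if self._depth == 0 and self._chunks:
                self.samples.append("".join(self._chunks).strip())
                self._chunks = []

    def handle_data(self, data):
        if self._depth:
            self._chunks.append(data)


def extract_code_samples(html: str) -> list[str]:
    """Return every <pre>/<code> snippet found in the given HTML string."""
    parser = CodeSampleExtractor()
    parser.feed(html)
    return parser.samples
```

For example, `extract_code_samples('<p>intro</p><pre><code>x = 1</code></pre>')` returns `['x = 1']`.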

I then wrote a tool to search documentation and GitHub, pull things that were styled as code using my previous tool, and put them into OpenSearch, so when I had a question about something, I could search for a function in OpenSearch and see examples.
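The lookup side of that pipeline can be approximated in memory with a toy inverted index — this is a hypothetical sketch, not the commenter's actual OpenSearch schema or queries, and `build_index`/`search` are invented names.

```python
import re
from collections import defaultdict


def build_index(samples: list[str]) -> dict[str, set[int]]:
    """Map each identifier-like token to the ids of samples containing it.

    An in-memory toy standing in for the OpenSearch index described
    above; a real deployment would index documents via opensearch-py.
    """
    index: dict[str, set[int]] = defaultdict(set)
    for i, code in enumerate(samples):
        for token in re.findall(r"[A-Za-z_]\w*", code):
            index[token.lower()].add(i)
    return index


def search(index: dict[str, set[int]], query: str) -> list[int]:
    """Return ids of samples that contain every token in the query."""
    tokens = [t.lower() for t in re.findall(r"[A-Za-z_]\w*", query)]
    if not tokens:
        return []
    hits = set.intersection(*(index.get(t, set()) for t in tokens))
    return sorted(hits)
```

Searching for a function name then becomes `search(build_index(samples), "requests.get")`, returning the ids of samples that use it.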

Etc., and so on.

Agents these days have replaced a lot of the manual work. But complex tasks, with decision making, repeat loops, and unknown unknowns, are still something that agents can't reliably do. Anyone can put together a UI with agents very quickly. But then, if you leave a lot of stuff to the agents and don't specify how you want the code written, you are going to get locked into code that will quickly degrade performance, introduce edge-case bugs, and so on. Sure, you can have LLMs fix all that, but doing it automatically is something nobody has done yet.

The real skill in the future is going to be writing agentic programs to work on features for you instead of working on features. You invest time up front to do this, and spend minimal time maintaining. Much in the same way that you invested time into writing OOP code with clean separation in packages and classes, build systems with verification, all so that anyone can come in and write code and have a safe way of testing and committing changes.