Discussion (69 Comments) · Read Original on HackerNews
Nevermind that the update cycle seems to be 6-10 months for changes like "You can now reset your radio presets directly from the radio settings menu", while bugs like temperature control resetting to max cool every start-up never get fixed.
I would love to read from the engineers about why that stuff is the way it is; hmm, that might be good spelunking material. I really must be missing something that makes it harder than I think it should be.
CarPlay works great though.
It’s not just American cars
Carbage in, carbage out.
It's really annoying how at some ASIL levels you need 100% code coverage of unit tests. With AI, all you have to do is to get your agent to generate the tests! Likewise with all the MISRA C requirements. Need your cyclomatic complexity to be less than 10? It's just one prompt away! Now your spaghetti code can easily satisfy the safety requirements with much less effort.
If you want 100% coverage, you just autogenerate the test cases. LLMs can't properly check MISRA requirements, so they're really just a layer on top of the existing automated checkers. Same for complexity metrics: if code violates the threshold, it doesn't get merged (or it's a vendor dependency you won't touch anyway).
If you care about the spirit of the rules, they're not that big a difference. If you don't care, there are already ways to do this. In either case they're an incremental change, not what I'd call a godsend.
AI can't be held accountable, so it shouldn't be writing the tests that determine whether car systems function correctly.
I hear this all the time. Why does it matter? Punishing a human for making a mistake does not prevent mistakes, nor does it undo the harm of the mistake. A human saying "my bad, I messed up" and an AI saying "my bad, I messed up" are equally worthless, in a functional sense.
A human also knows they might get punished if they mess up badly enough, which might make them think twice before doing something risky. For an AI there is a reward, but there is no risk.
So while both might lie, only the human will worry about being found out. That makes a difference.
I don't understand these words. Does "AI-native workflow" mean vibe coding?
I am now seeing a lot of roles asking for "AI-enabled engineers". And I am not sure what that means either. I am sort of afraid to ask because the answer will probably confuse me even more. Maybe it's my understanding of what LLMs are and how they work that makes these words mean very little to me.
"...In practical terms, GM is looking for people who know how to build with AI from the ground up — designing the systems, training the models, and engineering the pipelines — not just use AI as a productivity tool."
Cheaper, younger people who don't think vibe coding is bad.
Is this a good idea? Probably not.
It would be like hiring a junior to lead a team. They're the worst choice for that role.
People will demand training, ignore it, and continue to be a drain on the company. There will be those people out there who have one very, very specific skill and that's all they want to do: "I remove people from Active Directory whose names start with A, B, or C," or "I run this Ansible playbook someone else wrote, and that's my entire job."
To say nothing of their cars.
Man, the only advice I can give people is do not sacrifice time with your loved ones for a company that doesn’t give a shit. Your kid is only going to graduate once. Those family vacations are priceless in the long run. Hell, I take time off to hang out with my dogs now and then. The job can wait.
I've been down quite a few rabbit holes like that which made me think that a lot of major 'issues' appear to be meticulously engineered to protect a certain set of interests at the expense of others.
It's like: "Damn, houses are expensive, I'm going to live in a caravan." Then you realize you can't park it on your own land without council approval... Then you find out that the council will never approve due to it "negatively impacting the charm of the area."
Then you become homeless and realize that you can't legally put your tent anywhere and all the camping sites in the wilderness which you used to go to as a child now charge you fees to stay there and have rangers patrolling constantly (paid for by your own tax money you used to pay). Also, you can't get a job without an address and it's a literal catch-22... Then if you lose hope and start doing drugs, bad actors (possibly sponsored by foreign states) put fentanyl in the drug supply to finish you off. Then the media fully covers it up by distracting people with slop.
People are dying and it is covered up in the most targeted, effective way imaginable... They are not only killed, they are blamed on the way out for what is a systemic failure. "Should have gotten a job," or "Shouldn't have done drugs." And the people doing the most blaming and defending the system are passive-income shareholders who have a lot of time on their hands, sit at home all day, and further rig the politics in their favour. It's cooked all the way down.
It's like the dystopian book "Brave New World" is looking pretty good by comparison to where we're heading. At least in BNW, the "savages" had a designated reserve they could escape to.
Firing people with institutional knowledge? So what? It's going to improve profits short-term.