
Discussion (99 Comments)
The battery is in a compartment in the left front wheel well. You have to remove that wheel to access the battery.
I was instantly impressed by the pure creativity and artistic expression the team employed for that design.
All of them have been in Ford (or Saturn).
Most maintainability conflicts come from packaging and design for assembly.
Efficiency more often comes into conflict with durability, and sometimes safety.
Do you optimize an engine for how easy it is to replace a filter once or twice a year (most likely done by someone the average car-owner is already paying to change their oil for them), or do you optimize it for getting better gas mileage over every single mile the car is driven?
We're talking about a hypothetical car and neither of us (I assume) design engines like this, I'm just trying to illustrate a point about tradeoffs existing. To your own point of efficiency being a trade with durability, that's not in a vacuum. If a part is in a different location with a different loading environment, it can be more/less durable (material changes leading to efficiency differences), more/less likely to break (maybe you service the hard-to-service part half as often when it's in a harder to service spot), etc.
Or, we consider that 2mpg across 100,000 cars can save 3,500,000 gallons of gas being burned for the average American driving ~12k miles per year. And maybe things aren't so black and white. Your argument, in this hypothetical, is that a negligent car owner who destroys their car because they choose not to change the oil is worth burning an extra 3.5 million gallons of gasoline.
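As a back-of-the-envelope sanity check of those numbers (the baseline of 25 mpg is an assumption; the comment only states a 2mpg delta), the arithmetic works out to roughly the claimed figure:

```python
# Rough check of the fleet fuel-savings claim above.
# baseline_mpg is assumed; the comment only specifies the 2 mpg improvement.
miles_per_year = 12_000
fleet_size = 100_000
baseline_mpg = 25
improved_mpg = baseline_mpg + 2

gallons_saved_per_car = miles_per_year / baseline_mpg - miles_per_year / improved_mpg
fleet_savings = gallons_saved_per_car * fleet_size
print(f"{fleet_savings:,.0f} gallons saved per year")  # on the order of 3.5 million
```

With a different assumed baseline the total shifts, but it stays in the low millions of gallons per year for any plausible passenger-car mpg.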
There is just no universe in which placing an oil filter in one location or another is going to make such a difference. You'd have to mount it completely outside the engine, say sitting as a cylinder on top of the hood, and even there you are not going to get a 2mpg improvement.
The point that I am making (obviously, I think) is that tradeoffs exist, even if you don't think the right decision was made, your full view into the trade space is likely incomplete, or prioritizes something different than the engineers.
Based on the replies, saying there's a hypothetical 2mpg improvement to be had was a mistake; everyone is latching onto that as if there's some actual engine we're investigating.
Anyone who actually drives their car regularly will be doing an oil change at least twice a year. If an oil change takes more than 30 minutes of actual labour time of an inexperienced mechanic, it's going to be a serious financial burden which will likely outweigh any 2mpg improvement.
We do - they are just a lot bigger.
You should replace the oil filter when it is no longer filtering; replacing it early is a pure waste of money. Unfortunately, testing whether the filter actually needs changing is more expensive than just replacing it, so replacing it before it can possibly be clogged is the right answer. Generally the manufacturer's recommendations are correct, and you should follow them unless you have lab results that say otherwise.
Putting some random number of hypothetical mpg improvement was clearly a mistake, but I assumed people here would be able to get the point I was trying to make, instead of getting riled up about the relationship (or lack thereof) of oil filters and fuel efficiency.
You should be replacing your oil filters based on the manufacturer’s service schedule, there’s no rule of thumb. Look at the service manual, my car has the filter change scheduled every 10,000 miles.
There’s definitely a programming equivalent as well…
Not for a normal car
Fun Fact: Along with the "bees are disappearing" scare, which was just measurement error, there has been an "insects are disappearing" scare, due to the fact that people's windshields are not covered with bugs like they used to be. However, that is because cars have gotten more aerodynamic, so fewer insects are hitting the windshield.
Between rebuilding an engine and disassembling a bumper to replace a lightbulb, most mechanics would genuinely rather be doing the lengthy but interesting work of rebuilding an engine than the lengthy and fucking boring task of disassembling a bumper to fix a lightbulb.
Moreover, even if a mechanic must charge you stupid amounts of labour cost to do a simple repair because it genuinely takes that much time, the customer might not come away from it thinking: "fuck, I bought a dumb car which is expensive to repair"; they might instead come away thinking: "all these mechanics, quoting ridiculous prices to fix a light bulb, they must all be scammers".
ChatGPT, write me a 2010-style Hacker News front page essay about how software maintenance is just like automobile maintenance, and why nobody wants low-value maintenance work to be arduous, failure-prone, and boring.
So, similarly with software design, as in other fields: often a problem goes away when you ask a different question.
Every moving part, especially gears, needs to be oiled, and whenever you are oiling metal-on-metal contact such as gears, you are going to want an oil filter to catch worn metal debris and remove it from the oil.
The difference between EVs and ICE vehicles is not that only one of them uses oil to reduce friction; it's that the oil service intervals on EVs are so long that regular oil maintenance is not needed. You do it every 60,000 miles or whatever the manufacturer recommends, so it's out of mind. But that doesn't mean it doesn't require service.
Once EVs have been around for a while and there is an established market for used EVs, the people who buy them are going to want to change the oil to add more life to the EV. So it's something that is dealt with in the long-life maintenance, not the monthly maintenance. But when you do the oil service, you will curse Tesla for needing to drop the battery in order to do it, and all of a sudden you will care where things are placed and how accessible they are.
Here is a nice video (I follow Sam Crac as one of my favorite automotive YouTubers): he picked up an old Tesla and did an oil service on it. It's a nice watch:
https://www.youtube.com/watch?v=l0ZNHKjHalY
They actually will need oil changes starting anywhere from the 50k to 100k mile mark.
Here's the maintenance guide with pictures walking through changing the oil and filter for the Rear Drive Unit (RDU) in a Tesla Model S:
https://service.tesla.com/docs/ModelS/ServiceManual/Palladiu...
The obvious one is the battery, and you can argue that modern EVs have batteries so expensive that when they are dead the car becomes scrap, and - sure, whatever.
But EVs still have: cabin air filters, coolant, brake fluid, lubricants in various places (although granted, these lubricants will mostly last the service life).
At the end of the day, as long as you have a car which moves, and not a statue, it will have things which wear out and which should be easy to replace.
Engine oil and oil filters are just an example.
It is absolutely astounding how many of them run on code that is:
- very reliable aka it almost never breaks/fails
- written in ways that makes you wonder what series of events led to such awful code
For example:
- A deployment system that used Python to read and respond to raw HTTP requests. If you triggered a deployment, you had to leave the webpage open because the deployment code lived inside the HTTP-serving code
- A workflow manager that had <1000 lines of code but commits from 38 different people as the ownership always got passed to whoever the newest, most junior person on the team was
- Python code written in Java OOP style where every function call had to be traced up and down through four levels of abstraction
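For concreteness, the first anti-pattern might look roughly like this minimal sketch (all names hypothetical, not the actual system described): the deployment steps run inline inside the raw HTTP handler, so the response, and the browser tab that triggered it, has to stay alive for the whole deployment.

```python
# Hypothetical sketch of the anti-pattern: deployment logic lives directly
# inside the HTTP-serving code, so the connection must stay open throughout.

def run_deploy_step(step):
    """Stand-in for the real long-running deployment work."""
    return f"{step} ok"

def handle_request(conn):
    """conn is any socket-like object with recv/sendall/close."""
    request = conn.recv(4096).decode("utf-8", errors="replace")
    if request.startswith("POST /deploy"):
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\n")
        for step in ("pull", "build", "restart"):
            run_deploy_step(step)  # blocks here; closing the tab aborts the deploy
            conn.sendall(f"{step} done\n".encode())
    else:
        conn.sendall(b"HTTP/1.1 404 Not Found\r\n\r\n")
    conn.close()
```

The fix, of course, is to enqueue the deployment in a background worker and return immediately, but the point of the anecdote is that the inline version "worked" for years.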
I mention this only b/c the "LLMs write shitty code" isn't quite the insult/blocker that people think it is. Humans write TONS of awful but working code too.
LLMs regurgitate shitty code. They learned it entirely from people.
This looks like an example of biobackend: defective IT compensated by humans
Your point is very sane; of course, shitty code was not invented just now. But was it ever sold as a revolution? Probably it was, too!
To be fair, the standard library `unittest` and `logging`, along with the historic `distutils`/Setuptools stack, are hardly any better.
It's a continuous object lesson in missing the point. A similar thing happened a few hours ago when an article was posted about a researcher who posted a fake paper about a fake disease to a pre-print server that LLMs picked up via RAG, telling people with vague symptoms that they had this non-existent disease. Lo and behold, commenters go in immediately saying "I'd be fooled too because I trust pre-print medical research." Except the article itself was intentionally ridiculous, opening by telling you it was fake, using obviously fake names, fictional characters from popular television. The only reason it fooled humans on Hacker News is because they don't bother reading the articles and respond only to headlines.
It's just like your code examples. Humans fail because we're lazy. Just like all animals, we have a strong instinct to preserve energy and expend effort only when provoked by fear, desire, or external coercion. The easiest possible code to write that seems to work on a single happy path using stupid workarounds is deemed good enough and allowed through. If your true purpose on a web discussion board is to bloviate and prove how smart you are rather than learn anything, why bother actually reading anything? The faster you comment, the better chance you have of getting noticed and upvoted anyway.
Humans are not actually stupid. We can write great code. We can read an obviously fake paper and understand that it's fake. We know how hierarchy of evidence and trust works if we bother to try. We're just incredibly lazy. LLMs are not lazy. Unlike animals, they have no idea how much energy they're using and don't care. Their human slaves will move heaven and earth and reallocate entire sectors of their national economies and land use policies to feed them as much as they will ever need. LLMs, however, do have far more concrete cognitive limitations brought about by the way they are trained without any grounding in hierarchy of evidence or the factual accuracy of the text they ingest. We've erected quite a bit of ingenious scaffolding with various forms of augmented context, input pre-processing, post-training model fine tuning, and whatever the heck else these brilliant human engineers are doing to create the latest generation of state of the art agents, but the models underneath still have this limitation.
Do we need more? Can the scaffolding alone compensate sufficiently to produce true genius at the level of a human who is actually motivated and trying? I have no idea. Maybe, maybe not, but it's really irritating that we can't even discuss the topic because it immediately drops into the tarpit of "well, you too." It's the discourse of toddlers. Can't we do better than this?
I am afraid that without a major crash or revolution of some sort, users won't matter next to a sufficiently big business. But time will tell.
Companies that have a solid competitive moat have at best gotten lazy about user centricity and at worst turned actively hostile.
How could this have happened? Well, the code was shipped but no customer was running it in production.
This sentence, itself, takes on new meaning in the age of agentic coding. "I'm fine with treating this new feature as greenfield even if it reimplements existing code, because the LLM will handle ensuring the new code meets biz and user expectations" is fine in isolation... but it may mean that the code does not benefit from shared patterns for observability, traffic shaping, debugging, and more.
And if the agent inlines code that itself had a bug, that later proves to be a root cause, the amount of code that needs to be found and fixed in an outage situation is not only larger but more inscrutable.
Using the OP's terminology, where biz > user > ops > dev is ideal, this is a dev > ops style failure that goes far beyond "runs on my machine" towards a notion of "is only maintainable in isolation."
Luckily, we have 1M context windows now! We can choose to say: "Meticulously explore the full codebase for ways we might be able to refactor this prototype to reuse existing functionality, patterns, and services, with an eye towards maintainability by other teams." But that requires discipline, foresight, and clock-time.
Obviously, our regulations aren't perfect or even good enough yet. See DRM. See spyware TVs. See "who actually gets to control your device?". But still...
If that's what the regulators are optimizing for.
"We arrived at a little model that expresses the relative importance of various factors in software development..."
Survivorship bias exists, but look at all the Virgin brands and at places like Google. So for a moment let’s posit he’s correct.
So, then, the problem would seem not to be capitalism generally. It would be the sort of short-term quarterly goals capitalism we see so often in recent years.