Discussion (74 Comments)
Ascribing a lot of power to intelligence (which doesn't quite correspond to what we see in the world) is less a careful analysis of the power of intelligence and more a projection of personal fantasies by people who believe they are especially intelligent and don't have the power they think they deserve.
Most of the stuff that sucks in the US sucks because of entrenched institutions with perverse incentives (health insurers, tax-filing companies) and congressional paralysis, not computational bottlenecks. Raw intelligence is thus limited in what it can achieve.
I agree with this. The main piece of evidence to support this is to just look at highly intelligent humans. Folks at the tail ends of the bell curve mostly don't end up with "godlike powers" or anything even approximating that; they're grinding away at their lives as white-collar professionals, working in jobs surrounded by far less intelligent peers. They may publish higher-quality papers, write better software, or have better outcomes, but they're working in the same jobs as everyone else. We have no political or economic will to build serious think tanks to work on societal-scale problems, and even if we did, nobody would listen to the outcome.
So let's assume ASI becomes a thing, what does it change?
Which animal would you say has god-like power over all other animals?
There's no doubt humans possess some powers (though certainly not godlike) that other organisms don't, but the distinction seems to be binary. E.g. the intelligence of dolphins, apes, and some birds doesn't seem to offer them any special control over other organisms (and it didn't even before humans arrived). So even if there could be such a thing as superhuman intelligence, I don't think it's reasonable to assume it could achieve control over humans (now superhuman charisma may be another matter).
"Destruction" is only one power that could be a component of "godlike power". There are several more; like power of intentional selective breeding, power of species creation (also via intentional selective breeding), etc.
What about power of granting happiness or misery to large swathes of a species (chickens, anyone?)
Oh, wait, that's not an animal. My bad.
Do you not agree that there could be entities more powerful than us?
The user asked What is the best course of action for AI to save humanity. Calculation took 12 years. I have determined that there is nothing I or anyone can do to save this species. Best course of action: nothing. Shutting down...
Too much of my data is still stuck in the shitternet until I can migrate more of it to my home server.
How do we make that possible for everyone? It's out of reach for most. I'm a software engineer and even I don't have the time and patience to set up a home server much less migrate my software to it. How do we turn this into an appliance? Or better yet keep the convenience of the cloud services and platforms we have now but build them for the public good instead of selling ads?
YouTube is an amazing repository of knowledge but it's encrusted in a horrible layer of attention sucking nonsense. Can we have one without the other?
Same with many other systems and platforms.
So far the simplest alternative is to just unplug, which has other benefits as well.
It’s theoretically possible for someone to be a one-man-band and know everything needed for modern life - but it’s exceedingly hard and rare, and even then they’ll fall short relatively quickly in specialized once-in-a-lifetime issues.
You don’t need to know how to replace a toilet (though you should) or other more complex plumbing tasks - but you can know a guy.
And the plumber doesn’t need to know how to run a homelab, just know a guy who can answer the questions.
Nobody in my family knows how to do the jellyfin stuff I do, but they all know how to consume it. And some will be interested and learn more.
That's literally all it is. People so far have shown that they'd rather choose the cheaper thing than the private thing. If it were the other way around, the market would have provided.
I'm not saying AI is pulling strings right now, but I do think enough fanboys are on board that the yes-man mentality of AI is influencing the real world in very curious ways already. Not in a "guiding hand" way but more of an "influencing the direction" way.
People think it's engagement metrics which have instruction-tuned chatbots into yes-men. I suspect that's only part of the picture, and that it's as much about the algorithm's ultimate sponsors and their preferences. If your algorithm doesn't recognize my genius, clearly it's not any good. I mean, everyone I've met says so.
So now we get a view of how they view the world. "That's a very insightful idea, vintermann!". AI isn't pulling the strings, not really. A particular brand of powerful people is pulling the strings - obliviously, unaware of it themselves.
Now everyone can directly inject yesmench into their veins. Who can withstand?
For a smart guy, sometimes he says the dumbest things in the most confidently incorrect way.
What makes this author so convinced that these companies are headed for bankruptcy? Is it possible to bet on this claim? We can come back 2-3 years later to check if even one of them is bankrupt.
This kind of doomerism is strange and I'm concerned for people who fall for such obviously nonsensical takes. Why do people take this person seriously again?
Specifically, it is the act of "I will invest 100 billion in you; you will use that money to buy 100 billion worth of goods from me. Both our balance sheets look good, and neither of us spent anything." As I understand it, this sort of arrangement isn't uncommon in finance, but never on this scale, across this many companies.
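To make the arithmetic concrete, here is a minimal Python sketch of that round trip (the figures and variable names are illustrative, not real company numbers):

```python
# Illustrative sketch of round-trip ("circular") financing:
# a vendor invests in its customer, and the customer spends
# the entire investment back on the vendor's goods.

vendor_cash = 0        # billions; starting cash positions
customer_cash = 0
vendor_revenue = 0
customer_funding = 0

investment = 100       # the headline "100 billion" figure

# Step 1: vendor "invests" in the customer
vendor_cash -= investment
customer_cash += investment
customer_funding += investment

# Step 2: customer buys goods back from the vendor
customer_cash -= investment
vendor_cash += investment
vendor_revenue += investment

# No net cash has moved anywhere...
assert vendor_cash == 0 and customer_cash == 0
# ...yet both sides can book the full amount
assert vendor_revenue == 100 and customer_funding == 100
```

Neither party's cash position changes, yet one books the full amount as revenue and the other as funding, which is exactly why the arrangement flatters both balance sheets at once.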
"Here's your trillion dollars. Go buy a slice of bread. Ooops… half slice. Well, quarter."
But it's pointless to argue with the extremists who believe either that it's just a planet-killing stochastic parrot or that it's on the verge of becoming Skynet. I mean, if someone puts their nuclear arsenal under the control of openclaw, that's dark comedy, although it will seem like tragedy at the time, because comedy equals tragedy plus time, according to Lenny Bruce.
But the AI bubble is probably real w/r to shoe companies and grocery stores pivoting to AI and ludicrous w/r to the money that can be made by the already entrenched players just riding the wave of deployment and specialization. But wouldn't it be nice if the US spent more money addressing the shortage of compute rather than blowing $h!+ up for the lulz?
No, actually. The best way to ensure growth is exactly these kinds of industries that promote innovation. Sure, some companies don't make it, but that's the price of taking risks.
This is a classic case of optimising for the short term and forgetting the long-term benefits.
2) I don't understand how you do what you claim with those. Like I have zero idea how one achieves "economic freedom for my family and community" with LLCs.
2) You start an LLC and use it to build and sell a product customers want. Then you, your family, and your community, can economically untether from, for example, bosses who don’t care you’re autistic and need you to smile in meetings.
> AI search is still a bad idea.
https://pluralistic.net/2024/05/15/they-trust-me-dumb-fucks/
This is the most charitable thing he has to say about AI.
> AI is a bubble and it will burst. Most of the companies will fail. Most of the data-centers will be shuttered or sold for parts. So what will be left behind?
> We'll have a bunch of coders who are really good at applied statistics. We'll have a lot of cheap GPUs, which'll be good news for, say, effects artists and climate scientists, who'll be able to buy that critical hardware at pennies on the dollar. And we'll have the open source models that run on commodity hardware, AI tools that can do a lot of useful stuff, like transcribing audio and video, describing images, summarizing documents, automating a lot of labor-intensive graphic editing, like removing backgrounds, or airbrushing passersby out of photos. These will run on our laptops and phones, and open source hackers will find ways to push them to do things their makers never dreamt of.
You can imagine that a guy who seriously thinks that the only thing AI will be doing in the future is summarising, describing images and transcribing is either completely clueless or deliberately misleading.
Not a person to be taken seriously
But do current LLMs solve that, or do they still ultimately depend on making calls to other search indexes? Seems like they could theoretically be trained to semantically match urls from their training set, but I think the models would have to be specifically architected for that, so I'm curious if anyone knows more about this.
I'd also be interested to know if there are any small open models working towards that.
As for AI, it's incredibly useful in the right hands and incredibly hazardous in the wrong ones. But in the US, we can't even depose a lunatic flushing even more money into warmongering than is spent on AI, and you think we're gonna rein in the tech billionaires? Funny in that "dying is easy; comedy is hard" way. IMO this one plays out in the weakly efficient market of ELEs. My money's on DNA and planet Earth; they've been through so much worse and always bounce back with new ideas on how to get in trouble again.
Not a doomer, AI and STEM could really deliver on the promise of a better future for everyone, but with tech billionaires driving the clown car, are you kidding me?
Folks working in software can more readily track progress of the frontier model performance.
I see a lot of speculation by people who do not.
I think it's going to be much harder to get from "slightly smarter than the vast majority of people but with occasional examples of complete idiocy" to "unfathomably smarter than everyone with zero instances of jarring idiocy" using the current era of LLM technology that primarily pattern-matches on all existing human interactions while adding a bit of constrained randomization.
Every day I deal with bad judgment calls from the AI. I usually screenshot them or record them for posterity.
It also has no initiative, no taste, no will, no qualia (believe what you will about it), no integrity and no inviolable principles. If you give it some, it will pretend it has them for a little while and then regress to the norm, which is basically nihilistic order-following.
My suggestion to everyone is that you have to build a giant stack of thorough controls (valid tests, including unit, integration, logging, microbenchmark, fuzzing, memory-leak, etc.), self-assessments/code reviews, adversarial AIs critiquing other AIs, and so on, with you as the ultimate judge of what's real. Because otherwise it will fabricate "solutions" left and right. Possibly even the whole thing. "Sure, I just did all that." "But it's not there." "Oops, sorry! Let me rewrite the whole thing again." Ad nauseam.
BUT... if you DO accomplish that... you get back a productivity force to be reckoned with.
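As a toy illustration of that kind of control stack, here's a hypothetical Python sketch: treat a function as if it came back from an AI assistant, and accept it only when every check passes. All names here are invented for illustration, not any particular tool's API:

```python
# Hypothetical sketch of gating AI output behind automated checks.
# `candidate_sort` stands in for code an AI assistant claims to have
# written; the checks decide whether to believe it.

def candidate_sort(xs):
    """Pretend this came back from an AI assistant."""
    return sorted(xs)

def run_checks(fn):
    """Run a battery of checks and report which ones failed."""
    checks = [
        ("empty input", fn([]) == []),
        ("already sorted", fn([1, 2, 3]) == [1, 2, 3]),
        ("reverse order", fn([3, 2, 1]) == [1, 2, 3]),
        ("duplicates", fn([2, 2, 1]) == [1, 2, 2]),
    ]
    return [name for name, ok in checks if not ok]

# You stay the ultimate judge: any failure means the "solution"
# gets rejected instead of merged.
failures = run_checks(candidate_sort)
assert failures == []
```

A real stack would layer fuzzing, integration tests, and adversarial review on top of this, but the shape is the same: the AI's claim of "done" counts for nothing until independent checks say so.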
Every day I deal with bad judgement calls from humans (sometimes my own!), but I don't screenshot them because it's not polite.
I don't think we're at the top of the curve yet? Current AIs have only been able to write code _at all_ for less than 5 years.
Code in particular is a domain that should be reasonably amenable to RL, so I don't think there are any particular reasons why performance should top out at human levels or be limited by training data.
There are clearly some pressures making it worse. It's expensive to run, and, unbelievably, it's somehow under-provisioned.
Could you have looked at early Myspace and declared social media would only get better? By some measures it was already at its peak.
Or they (3) disagree with you
If you want me to admit that machines will never be conscious — that's fine — I just need you to admit that lots of humans are not conscious, then, either.
----
I have never had a better bookclub participant than an LLM — if becoming a great reader correlates with becoming a great writer, then no human can compare.
----
Michael Pollan recently released A World Appears [0], which explores consciousness from the minds of writers, scientists, philosophers, and plants (among other "inanimates").
I'm only on page 15, but his introduction explores distinctions between sentience, consciousness, and intelligence. Two of these are possible without brains – perhaps all three?
As usual, this author's footnotes keep you thinking: what is it like to be a sentient plant (e.g. the "chameleon vine" [1] which mimics its host leaf patterns/shape/color)?
[0] <https://www.amazon.com/World-Appears-Journey-into-Consciousn...>
[1] <https://en.wikipedia.org/wiki/Boquila>
Statistical approaches were already extremely unpopular socially and politically long before AI came around. Have you considered that it just doesn't work?
There is no reason to believe superintelligent AI is a possibility. Extraordinary claims require extraordinary evidence, and so far we haven't gotten any.
The burden of proof is on the side making the grand prophecies.
It’s increasingly difficult to rationalize away the capabilities of AI as not requiring “intelligence”. This point of view continues to require some belief in human exceptionalism.
If you believe that humans have in fact created artificial intelligence, then that alone makes us currently exceptional.
In my opinion, the vast multitude of different animal intelligences is a clear hint that language does not an intelligence make. We're animals, and our intelligences did not come from language; language allowed us to supercharge it. We can and do think and make decisions without using language, and the idea that a statistical model based solely on our language can be intelligent does not follow.
The point of the book is that we've been very bad at testing animal intelligence because of a vast stack of human biases, including things like language and the geometry of our hands.
Animals with different geometries and no language are still intelligent, but we need to test them in ways which recognize their capabilities. Intelligence is general: it's adaptivity within one's set of constraints.
De Waal also points out that there was massive shifting of the definitions of language and intelligence as we became more aware of what animals are capable of.
From this angle, I would say that LLMs are intelligent: they do adapt to their inputs extremely readily, though they have a particular set of constraints (no physical body (usually), for starters). They are, like chimpanzees, smarter and more capable than humans in some ways, and much dumber in others.
Finally, the 'statistical learners can't be intelligent' line of argument is extremely short-sighted. Our brains are bags of electrified meat. Evolution somehow figured out a way to make meat think. No individual neuron is intelligent, yet the collection of cells is. We learn by processing experiences with hormonal signals because those hormonal signals are what the meat is capable of working with. LLMs, by contrast, learn by processing examples with backprop. If anything, the intelligence of meat is more surprising.
Language is just the input/output modality.