Discussion (149 Comments)
Imo we don't even have a definition of the word that we agree on.
We might not clearly understand the diff between the two states but we can certainly point to it and go "it's that".
And you’ll find it’s not as clear cut.
Clive Wearing's mind has no time continuity and basically zero memory integration. Is he not conscious? There are interviews with the guy.
Where on the scale [No mind <-> Clive Wearing <-> Healthy human brain] would you put an LLM with a 10M token context window?
They prove no such thing. We can't even prove consciousness in other humans.
https://en.wikipedia.org/wiki/Problem_of_other_minds
I’ve kind of thought this for many years though. A bacterium and a tree are probably conscious. I think it’s a property of life rather than brains. Our brains are conscious because they are alive. They are also intelligent.
The consciousness of a bacterium or a tree might be radically unlike ours. It might not have a sense of self in the same way we do, or experience time the same way, but it probably has some form of experience of existing.
How is that different than a cell?
Trees react to the world around them in many ways.
If a single-cell organism moves towards light and away from a rock, we say it's aware. When a Roomba vacuum does the same, we try to create alternate explanations. Why? By the criteria applied to one, the other is aware. If there is some other criterion, say we find out the Roomba doesn't sense the wall but has a map of the room and is using GPS and a programmed route, then a criterion like "no fixed programs that relate to data outside of the system" would justify saying the Roomba isn't "aware".
Especially confusing when it’s someone who knows how algorithms work.
Barring connectivity issues, when's the last time you messaged an LLM and it just decided to ignore you? Conversely, when has it ever messaged you unprompted?
Never, because they're incapable of doing anything independently; there is no sense of self.
He's had some very strange output on biological gender, where he tries to handwave away the existence of intersex people. And he's a biologist.
As far as the ostensibly controversial topic of AI being conscious, it can be dismissed out of hand. There is no reason that it should be conscious, it was not designed to be, nor does it need to be in order to explain how it functions with respect to its design. It's also unclear how consciousness would even apply to something like an LLM which is a process, not an entity - it has no temporal identity or location in space - inference is a process that could be done by hand given enough time. There is simply no reason to assert LLMs might be conscious without explaining why many other types of complex programs are not.
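(To make the "process that could be done by hand" point concrete, here is a minimal sketch, not any real model's code and with purely illustrative sizes, of a single attention step in plain NumPy. Every operation is ordinary arithmetic that could in principle be carried out with pen and paper.)

```python
import numpy as np

# Illustrative single-head attention step with toy, hypothetical sizes.
# Nothing here is specific to any real model; it only shows that inference
# reduces to ordinary arithmetic on arrays of numbers.

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # shift for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # scores[i, j]: how much token i attends to token j
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V                 # weighted sum of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))  # 4 tokens, dim 8
print(attention(Q, K, V).shape)  # (4, 8)
```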
As you say it’s static, fixed, deterministic, and so on, and if you know how it works it’s more like a lossy compression model of knowledge than a mind. Ultimately it’s a lot of math.
So if it’s conscious, a rock is conscious. A rock can process information in the form of energy flowing through it. It’s a fixed model. It’s non-reflective. Etc.
What makes the argument facile is that the singular focus on LLMs reveals an indulgence in the human tendency to anthropomorphize, rather than a reasoned perspective meant to classify the types of things in the universe which should be conscious and why LLMs should fall into that category.
AI is stochastic, not static and deterministic.
As I said in another post, there is evidence that sensory experience creates the emergent property of awareness in responding to stimulus, and that self-awareness and consciousness are emergent properties of a language that has a concept of the self and others. Rocks, like most of nature, lack both sensory and language systems.
LLMs are deterministic. If you provide the same input to the same GPU, it will produce the same output every time. LLM providers arbitrarily insert a randomised seed into the inference stack so that the input is different every time because that is more useful (and/or because it gives the illusion of dynamic intelligence by not reproducing the same responses verbatim), but it is not an inherent property of the software.
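(A toy sketch of that claim, with the "logits" standing in for a fixed model's forward pass over a fixed prompt; everything here is hypothetical and only illustrates where the randomness enters.)

```python
import numpy as np

# Toy illustration (not any real model): the logits stand in for a
# deterministic forward pass over fixed weights and a fixed prompt.
logits = np.array([2.0, 1.0, 0.5, 0.1])
probs = np.exp(logits) / np.exp(logits).sum()

# Greedy decoding: no randomness anywhere, same token every time.
greedy_token = int(np.argmax(probs))

# Sampled decoding: the only source of variation is the seed fed in.
def sample(seed):
    rng = np.random.default_rng(seed)
    return int(rng.choice(len(probs), p=probs))

assert sample(42) == sample(42)             # same seed, same "response"
print(greedy_token, sample(42), sample(7))  # variation comes from the seed alone
```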
IF current AI is conscious, so are trees, rocks, turbulent flows, etc.
The argument being that LLMs are so simple that if you want to ascribe consciousness to them you have to do the same to a LOT of other stuff.
I’m not sure I believe that consciousness emerges from sensory experience, but if it does, LLMs won’t get it.
Your best argument is that the weights are fixed, since that means it's not a system that can self-reflect and alter the experience. But I don't see why that is necessary to have an experience. It seems that I can sense a light and feel its warmth regardless of whether my neurons change. One experience being identical to another doesn't mean neither was an experience.
LLMs do not have a self. This is like arguing that the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.
Can such an algorithm reason about itself in relation to others?
How do you know other humans do?
And the real secret is in the data, not math. Math (and LLMs running it through billions of weights) is just a tool.
> Since the times GPT-2 was reimplemented inside Minecraft - its quite obvious LLMs are just math.
This has been obvious since LLMs were first invented. They published papers with all the details; you don't need to see something implemented in Minecraft to realize that it's just math. You could simply read the paper or the code and know for certain. [0]
> math is the only area of human knowledge with perfect flawless reductionism, straight to the roots
Incorrect: Kurt Gödel showed with his incompleteness theorems in 1931 [1] that no consistent, effectively axiomatized system rich enough to express arithmetic can be complete. Math is not perfectly reducible, and there is no single set of "roots" for math.
> It was build [sic] that way since the beginning,
This is a serious misunderstanding of what mathematics is. Math is discovered as much as it is built. [2] No one sat down and planned out what we understand as modern mathematics; the math we know is the result of endless amounts of logical reasoning and exploration, from geometric proofs to calculus to linear algebra to everything else that encompasses modern mathematics.
> And because of that flawless reductionism, complexity adds nothings to the nature of math things, this is how math working by design
This sentence means nothing, because math is not reducible in that way.
> so it can be proven there are no anything like consciousness simply because conciousness [sic] was not implented [sic] in the first place, only perfect mimicry.
Even if the previous sentence held, this does not follow. We are conscious, yet the current consensus is that LLMs are not, and most AI experts who are not actively selling a product recognize that LLMs will not lead to human-equivalent general intelligence. [3]
[0] https://github.com/openai/gpt-2
[1] https://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_th...
[2] https://www.cambridge.org/core/journals/think/article/mathem...
[3] https://deepmind.google/research/publications/231971/
Unknown Ptolemy disciple
We do not know how to measure whether consciousness is present in an entity - even other humans - or whether it is just mimicry, nor whether there is a distinction between the two.
What is the evidence for this?
(If you've engaged with the literature here, it's quite hard to give a confident "yes". It's also quite hard to give a confident "no"! So then what the heck do we do?)
And, I don't see how it can be. It is deterministic, when all variables are controlled. You can repeat the output over and over, if you start it with the same seed, same prompt, and same hardware operating in a way that doesn't introduce randomness. At commercial scale, this is difficult, as the floating point math on GPUs/TPUs when running large batches is non-deterministic, as I understand it. But, in a controlled lab, you can make a model repeat itself identically. Unless the random number generator is "conscious", I don't see a place to fit consciousness into our understanding of LLMs.
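(On the batching point: floating-point addition is not associative, so when batched GPU kernels reduce the same numbers in a different order, the low bits of the result can change. A tiny, model-independent illustration:)

```python
# Floating-point addition is not associative, which is why summing the same
# numbers in a different order (as batched GPU kernels may do) can change
# the result.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)  # 1.0
print(a + (b + c))  # 0.0
```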
I.e., the intelligence sits in the weights, and may likewise sit in the synapses in our brains.
When we talk about machines being simple mimicking entities we pay no attention to whether or not we are also simple mimicking entities.
Most other assertions in this topic about what consciousness truly is tend to be stated without evidence and are exceedingly anthropocentric, requiring a higher and higher bar for anything that is not human while offering no justification for what human intelligence really entails.
Assuming your brain and the GPUs are both real physical things, where’s the magic part in your brain that makes you conscious?
(Roger Penrose knows, but no one believes him.)
Why is indeterminism the key to consciousness?
We too are amalgamations of inanimate components - emerged superstructures.
Just cells. Just molecules. Just atoms.
But with LLMs, anyone can simulate an LLM. An LLM can be simulated without any uncertainty with pen and paper and a lot of time. Does that mean that 100 tons of paper plus 100 years of time (the numbers are just examples) spent calculating long formulae makes this pile of paper conscious? IMHO the answer is a definite no.
But on the other hand his thoughts at the end are interesting. Summary:
Maybe our "consciousness" is like an LLM's intelligence. But if not, then it raises the question of why we even have this "extra" consciousness, since it appears that something like a humanoid LLM would be decent at surviving. His suggestions: maybe our extra thing is an evolutionary accident (and maybe there _are_ successful organisms out there with LLM-style non-conscious intelligence), or maybe, as evolved organisms, we need to really feel things like pain, so that mechanisms like pain (and desire for food, sex, etc.) could have strong adaptive benefits.
I would have assumed it would also require ignorance about how they work, but a few people who worked for AI companies have been canaries in the coalmine, falling prey to this kind of thing very early. I would have guessed they would have had enough understanding to know that there isn't a real girl in the computer, it's just matrix math and randomness. But, the first couple/few public bouts of AI psychosis were in nerds who work for AI companies.
Is a sperm conscious? Or an egg? When they come together the eventual brain is not conscious immediately.
"Richard Dawkins and The Claude Delusion: The great skeptic gets taken in" (garymarcus.substack.com)
Or what is the reasoning exactly?
Regardless, Dawkins seems to not have much interesting to add about the topic. A consistent theme for the last few decades, I must say.
Thinking positively, it could just be newsworthy because he is famous and he so misses the mark. Other older famous people might agree with us but that's not news.
To imply it could be conscious requires something else; here the comment uses the word "magic" to fill that gap, since we must agree that a CPU is not conscious on its own (else everything our computer does would be conscious).
At least, that’s certainly not how I got here.
1. passes the Turing test
2. is organic
I'm not saying it's correct or even that I agree with it, but that's what it boils down to.