
Discussion (149 Comments) · Read Original on HackerNews

tracerbulletxabout 7 hours ago
We don't even know what the pre-requisites for consciousness are, so we have no way of knowing. LLMs have emergent behavior that is reminiscent of language-forming brains, but they're also missing a lot of properties that are probably necessary: mainly continuity over time, more integrated memory, and a better sense of space and time. Brains use the rhythm and timing of neuronal firings, and the length of axons affects computation; they do a lot of different things with signals and patterns. But in any case, without knowing what consciousness is, I don't know which of those things are required.
boxedabout 1 hour ago
> We don't even know what the pre-requisites for consciousness are so we have no way of knowing.

Imo we don't even have a definition of the word that we agree on.

qsera 36 minutes ago
Ability to feel pain or pleasure is a good indicator I think..
echoangle 23 minutes ago
And how do you define pain and pleasure? Do insects feel pain?
pydryabout 1 hour ago
We're pretty clear on the distinction between a conscious and an unconscious human.

We might not clearly understand the diff between the two states but we can certainly point to it and go "it's that".

freedombenabout 1 hour ago
I'm not sure it's that clear. What about a person who is on drugs to the point that they clearly don't know what is happening in the reality around them, but is still able to speak and move and such? I'm not sure I'd call that conscious, but by most definitions it is.
agnosticmantisabout 1 hour ago
Now discuss whether a bonobo, a dog, a cat, a mouse, an ant, a bacterium is conscious.

And you’ll find it’s not as clear cut.

throwuxiytayqabout 2 hours ago
Clive Wearing's memory lasts for less than 30 seconds, so he has no memory of being awake before now. He is permanently in a state of feeling like he has just woken up, observing his surroundings for the first time.

Clive Wearing's mind has no time continuity and basically zero memory integration. Is he not conscious? There's interviews with the guy.

Where on the scale [No mind <-> Clive Wearing <-> Healthy human brain] would you put an LLM with a 10M token context window?

throwyawayyyyabout 8 hours ago
Current LLMs prove that the Turing Test was insufficient all along. But they also prove that intelligence != consciousness. One can, after all, be conscious without a thought in one's head. We certainly have ongoing work in identifying the neural correlates of consciousness in animals, none of which is going to be remotely applicable to machines. We're genuinely blind to the question of whether a sufficiently large neural net can exhibit flashes of subjective experience.
dparkabout 2 hours ago
> But they also prove that intelligence != consciousness.

They prove no such thing. We can't even prove consciousness in other humans.

https://en.wikipedia.org/wiki/Problem_of_other_minds

psychoslaveabout 1 hour ago
In that regard, arguing with a thermometer is not generally a thing, but people arguing with LLMs is certainly common enough now not to be considered a completely marginal case. Given that some people fall in love or are driven to suicide after interacting with these models, they are certainly different from even the most beloved dialectical rubber duck.
qsera 34 minutes ago
They are not intelligent. And they won't pass Turing tests if they can't count or do some other simple thing like that.
brookstabout 7 hours ago
Obligatory Blightsight recommendation for intelligence != consciousness.
marshrayabout 2 hours ago
That book is badass on so many levels. I'd just started it again yesterday.
exe34about 2 hours ago
that book messes with my head every time I read it, it's like I go through life in a detached way for several weeks. I need to read it again!
ninalanyonabout 1 hour ago
I read it once, was immensely impressed, can't bear to read it again. In fact I find most of what I have read from Peter Watts to be brilliant but disconcerting and uncomfortable.
dreamcompilerabout 2 hours ago
Blindsight
apiabout 8 hours ago
That was one of my thoughts years ago after playing with early ChatGPT and local llama1: this proves that intelligence and consciousness do not necessitate one another and may not even be directly related.

I’ve kind of thought this for many years though. A bacterium and a tree are probably conscious. I think it’s a property of life rather than brains. Our brains are conscious because they are alive. They are also intelligent.

The consciousness of a bacterium or a tree might be radically unlike ours. It might not have a sense of self in the same way we do, or experience time the same way, but it probably has some form of experience of existing.

digitaltreesabout 7 hours ago
But why? A roomba has senses, and can access them when it has power and respond to stimulation. When it runs out of power it no longer experiences this sensation and no longer responds to stimulus.

How is that different than a cell?

dparkabout 2 hours ago
You simply defined consciousness as life, which seems like an unusual but also not very useful definition.
throwyawayyyyabout 7 hours ago
I think this gets to the conflation we naturally have with consciousness and a sense of self. Does a tree have a sense of self? I imagine probably not, a tree acts more like a clonal colony than a single organism.
kortexabout 5 hours ago
Is someone tripped out on mushrooms, experiencing ego death and total disruption of their sense of self, still conscious? They may even contend they are more conscious than in normal life, what with all the communing with the universe and whatnot.

Trees react to the world around them in many ways.

digitaltreesabout 7 hours ago
Wrong based on what criteria? Or are we just moving the goal post because we are uncomfortable with the idea that neural networks might be conscious?

If a single-cell organism moves towards light and away from a rock, we say it's aware. When a Roomba vacuum does the same, we try to create alternate explanations. Why? Based on the criteria applied to one, it's aware. If there is some other criterion (say we find out the Roomba doesn't sense the wall but has a map of the room and is using GPS and a programmed route), then a criterion of "no fixed programs that relate to data outside of the system" would justify saying the Roomba isn't "aware".

throwyawayyyyabout 7 hours ago
I'm mainly saying it's impossible to know, at least without a theory of consciousness, which doesn't exist. Do we consider bacteria to be conscious, though? Is there something it is like to be a single cell? I can easily believe there is something it is like to be an insect.
digitaltreesabout 7 hours ago
I’d argue it’s a spectrum, with awareness as simple response to stimuli at one end, and self-awareness of, and reflection on, a subjective experience across time at the other.
ofjcihenabout 7 hours ago
Incredibly confusing that people who are otherwise of sound mind seem to fall for this.

Especially confusing when it’s someone who knows how algorithms work.

Barring connectivity issues when’s the last time you messaged an LLM and it just decided to ignore you? Conversely when has it ever messaged you unprompted?

Never, because they’re incapable of doing anything independently because there is no sense of self.

tovejabout 2 hours ago
If you've followed Dawkins' trajectory, I don't think it's clear that he's "otherwise of sound mind" anymore.

He's had some very strange output on biological gender, where he tries to handwave away the existence of intersex people. And he's a biologist.

shrubbleabout 3 hours ago
He famously doesn’t believe in God, but he believes in Claude?
dparkabout 3 hours ago
There is considerable evidence for the existence of Claude.
jdthedisciple 17 minutes ago
of Claude's consciousness, you mean ... ??
altmanaltmanabout 3 hours ago
Anthropic marketing made Dawkins believe in the supernatural. Is there anything Dario can't do?
locallostabout 2 hours ago
Maybe he also believes that God believes in Claude, that's me, that's meeeee
sdevonoesabout 1 hour ago
As long as AI is being introduced by multibillion-dollar corporations, it’s all a trick, a scam. They are just looking to increase their valuation. A waste of time.
pettersabout 1 hour ago
Many dismiss Dawkins here but Ilya Sutskever wrote in 2022: “it may be that today's large neural networks are slightly conscious.”
3748499449 41 minutes ago
Ilya quite literally gets paid to think that
root_axisabout 8 hours ago
There are a lot of people vulnerable to AI psychosis.

As far as the ostensibly controversial topic of AI being conscious, it can be dismissed out of hand. There is no reason that it should be conscious, it was not designed to be, nor does it need to be in order to explain how it functions with respect to its design. It's also unclear how consciousness would even apply to something like an LLM which is a process, not an entity - it has no temporal identity or location in space - inference is a process that could be done by hand given enough time. There is simply no reason to assert LLMs might be conscious without explaining why many other types of complex programs are not.

apiabout 8 hours ago
If AI as presently designed and operated is conscious, this ends up being an argument for panpsychism.

As you say it’s static, fixed, deterministic, and so on, and if you know how it works it’s more like a lossy compression model of knowledge than a mind. Ultimately it’s a lot of math.

So if it’s conscious, a rock is conscious. A rock can process information in the form of energy flowing through it. It’s a fixed model. It’s non-reflective. Etc.

root_axisabout 7 hours ago
I agree, but I don't think determinism is a factor either way. Ultimately, if arbitrary computer programs can be conscious, then it stands to reason that many other arbitrarily complex systems in the universe should also be.

What makes the argument facile is that the singular focus on LLMs reveals an indulgence in the human tendency to anthropomorphize, rather than a reasoned perspective meant to classify the types of things in the universe which should be conscious and why LLMs should fall into that category.

digitaltreesabout 8 hours ago
Why would current AI be an argument for panpsycism? I don’t understand the connection.

AI is stochastic, not static and deterministic.

As I said in another post, there is evidence that sensory experience creates the emergent property of awareness in responding to stimulus, and that self-awareness and consciousness are an emergent property of a language that has a concept of the self and others. Rocks, like most of nature, lack both sensory and language systems.

applfanboysbgonabout 7 hours ago
> AI is stochastic, not static and deterministic.

LLMs are deterministic. If you provide the same input to the same GPU, it will produce the same output every time. LLM providers arbitrarily insert a randomised seed into the inference stack so that the input is different every time because that is more useful (and/or because it gives the illusion of dynamic intelligence by not reproducing the same responses verbatim), but it is not an inherent property of the software.
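The point can be illustrated with a toy sketch in plain Python (not a real inference stack; the vocabulary and probabilities are made up): greedy decoding involves no randomness at all, and even sampling is reproducible once the seed is pinned.

```python
import random

# Toy "next-token distribution" for some fixed prompt (made-up numbers).
VOCAB = ["cat", "dog", "fish"]
PROBS = [0.6, 0.3, 0.1]

def sample_token(seed):
    # Same seed -> same random draw -> same token, on every run.
    rng = random.Random(seed)
    return rng.choices(VOCAB, weights=PROBS, k=1)[0]

def greedy_token():
    # No randomness at all: always the highest-probability token.
    return VOCAB[PROBS.index(max(PROBS))]

assert sample_token(42) == sample_token(42)  # reproducible given the seed
assert greedy_token() == "cat"               # deterministic by construction
```

The randomness providers inject lives entirely in the seed; the model itself is a fixed function of its inputs.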

colechristensenabout 7 hours ago
I think it's the opposite argument

IF current AI is conscious, so are trees, rocks, turbulent flows, etc.

The argument being that LLMs are so simple that if you want to ascribe consciousness to them you have to do the same to a LOT of other stuff.

digitaltreesabout 8 hours ago
There is evidence that awareness is an emergent property from sensory experience. And consciousness is an emergent property of language that has grammatical meaning for self and other.
brookstabout 7 hours ago
These LLMs don’t have senses, they have a token stream. They have no experience of the world outside of the language tokens they operate on.

I’m not sure I believe that consciousness emerges from sensory experience, but if it does, LLMs won’t get it.

kortexabout 6 hours ago
How do you know the sensation of a red photon hitting a cone cell, transduced to the optic nerve through ion junctions and processed by pyramidal neurons, is any more or less real than the excitation of electrons in a doped silicon junction activating the latent space of the "red" thought vector? Cause we are made of meat?
vidarhabout 7 hours ago
Sensory input is nothing but data.
digitaltreesabout 7 hours ago
Neural networks can have senses. Hook an LLM up to a thermometer and it will respond to temperature changes.
root_axisabout 7 hours ago
LLMs have no self, sensory experience, or experience of any kind. The idea doesn't even really make sense. Even if it did, the closest analogy to biological "experience" for an LLM would be the training process, since training at least vaguely resembles an environment where the model is receiving stimuli and reacting to it (i.e. human lived experience) - inference is just using the freeze-dried weights as a lookup table for token statistics. It's absurd to think that such a thing is conscious.
digitaltreesabout 1 hour ago
What is different about the human neural network? People have given LLMs sensors and they respond to stimuli. The sense of self can be expressed as a linguistic artifact that results in an emergent pattern recognition of distinct entities. For example, merely by saying "I am sitting under the tree with a friend," I have invoked the self as a pointer to me as the speaker. There is evidence from early childhood development that language acquisition correlates with awareness of the self as distinct from the other. And there is evidence from anthropology indicating that language structures shape exactly what the self is perceived to be.

Your best argument is that the weights are set because that means it’s not a system that can self reflect and alter the experience. But I don’t see why that is necessary to have an experience. It seems that I can sense a light and feel its warmth regardless of whether my neurons change. One experience being identical to another doesn’t mean neither was an experience.

ofjcihenabout 7 hours ago
What you’re missing is a “self” to have the “experience”.

LLMs do not have a self. This is like arguing that the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.

AlecSchuelerabout 2 hours ago
> the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.

Can such an algorithm reason about itself in relation to others?

digitaltreesabout 7 hours ago
The sense of self may be an emergent property of the grammatical structure of language and the operations of memory. If an LLM, by necessity, operates with the linguistics of "you" and "me" and "others", documents that in a memory system, and can reliably identify itself as a discrete entity distinct from you and others, then on what basis would we say it doesn't have a sense of self?
vidarhabout 7 hours ago
How do I know you have this "self"?

How do you know other humans do?

search_facilityabout 8 hours ago
Since the times GPT-2 was reimplemented inside Minecraft - its quite obvious LLMs are just math. Nothing else, by nature. Modern LLMs have the same math as in GPT-2 - just bigger and with extra stuff around - and math is the only area of human knowledge with perfect flawless reductionism, straight to the roots. It was build that way since the beginning, so philosophy have no say in this :) And because of that flawless reductionism, complexity adds nothings to the nature of math things, this is how math working by design - so it can be proven there are no anything like consciousness simply because conciousness was not implented in the first place, only perfect mimicry.

And the real secret is in the data, not math. Math (and LLMs running it through billions of weights) is just a tool.

solid_fuelabout 3 hours ago
This is such a weird comment.

> Since the times GPT-2 was reimplemented inside Minecraft - its quite obvious LLMs are just math.

This was obvious since LLMs were first invented. They published papers with all the details, you don't need to see something implemented in Minecraft to realize that it's just math. You could simply read the paper or the code and know for certain. [0]

> math is the only area of human knowledge with perfect flawless reductionism, straight to the roots

Incorrect, Kurt Gödel showed with his Incompleteness Theorems in 1931 [1] that it is impossible to find a complete and consistent set of axioms for mathematics. Math is not perfectly reducible and there is no single set of "roots" for math.

> It was build [sic] that way since the beginning,

This is a serious misunderstanding of what mathematics is. Math is discovered as much as it is built. No one sat down and planned out what we understand as modern mathematics - the math we know is the result of endless amounts of logical reasoning and exploration, from geometric proofs to calculus to linear algebra to everything else that encompasses modern mathematics.

> And because of that flawless reductionism, complexity adds nothings to the nature of math things, this is how math working by design

This sentence means nothing, because math is not reducible in that way.

> so it can be proven there are no anything like consciousness simply because conciousness [sic] was not implented [sic] in the first place, only perfect mimicry.

Even if the previous sentence held, this does not follow, because while we are conscious the current consensus is that LLMs are not and most AI experts who are not actively selling a product recognize that LLMs will not lead to human-equivalent general intelligence. [3]

[0] https://github.com/openai/gpt-2

[1] https://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_th...

[2] https://www.cambridge.org/core/journals/think/article/mathem...

[3] https://deepmind.google/research/publications/231971/

SuperV1234about 8 hours ago
We are not fundamentally different. Chemical reactions are just math.
kbrkbr 12 minutes ago
"The universe is fundamentally just a complicated clockwork"

Unknown Ptolemy disciple

rellfyabout 8 hours ago
Well, (in our current understanding) yes, but there may be underlying aspects of physics and the universe that we do not understand that could be the reason consciousness kicks in. It could turn out that LLMs do work similarly to how humans think, but as an abstracted system it does not have the low level requirements for consciousness.
vidarhabout 7 hours ago
We do not know what the "low level requirements for consciousness" are.

We do not know how to measure whether consciousness is present in an entity - even other humans - or whether it is just mimicry, nor whether there is a distinction between the two.

baggy_troughabout 8 hours ago
> it does not have the low level requirements for consciousness.

What is the evidence for this?

ekianjoabout 3 hours ago
Amusing statement since we are far from being able to understand chemical reactions in depth. Most of our knowledge in chemistry is empirical. Nothing like math.
pettersabout 1 hour ago
We have a very good idea of all math behind chemistry. But the equations are very difficult to solve.
slopinthebagabout 2 hours ago
No, math is a tool that we can use to describe something more fundamental. Don't mistake the map for the territory!
XMPPwockyabout 8 hours ago
Yup- the question is "can math be conscious?"

(If you've engaged w/ the literature here, it's quite hard to give a confident "yes". it's also quite hard to give a confident "no"! so then what the heck do we do)

SwellJoeabout 7 hours ago
Not just any math: Matrix multiplication. Can matrix multiplication be conscious?

And, I don't see how it can be. It is deterministic, when all variables are controlled. You can repeat the output over and over, if you start it with the same seed, same prompt, and same hardware operating in a way that doesn't introduce randomness. At commercial scale, this is difficult, as the floating point math on GPUs/TPUs when running large batches is non-deterministic, as I understand it. But, in a controlled lab, you can make a model repeat itself identically. Unless the random number generator is "conscious", I don't see a place to fit consciousness into our understanding of LLMs.

markburnsabout 3 hours ago
People often point to the relative simplicity of the architecture and code as proof that the system can’t be doing whatever it is that consciousness does, but in doing so they ignore the vast size of the data those simple structures are operating over. Nobody can actually say whether consciousness is just emergent behaviour of a sufficiently complex system, and knowing how a system is built tells you nothing about whether it clears the bar for that kind of emergence. Architectural simplicity and total system complexity aren’t the same thing.

Ie the intelligence sits in the weights and may sit there in the synapses in our brains too.

When we talk about machines being simple mimicking entities we pay no attention to whether or not we are also simple mimicking entities.

Most other assertions in this topic regarding what consciousness truly is tend to be stated without evidence and exceedingly anthropocentric whilst requiring a higher and higher bar for anything that is not human and no justification for what human intelligence really entails.

JackFr 34 minutes ago
> Not just any math: Matrix multiplication. Can matrix multiplication be conscious? And, I don't see how it can be.

Assuming your brain and the GPUs are both real physical things, where’s the magic part in your brain that makes you conscious?

(Roger Penrose knows, but no one believes him.)

AlecSchuelerabout 3 hours ago
> And, I don't see how it can be. It is deterministic

Why is indeterminism the key to consciousness?

XMPPwockyabout 3 hours ago
Hm, it sounds like to you consciousness implies non-determinism, and so determinism implies a lack of consciousness - is that right? If so, why do you think so? And if not, what am I missing?
kingofmenabout 3 hours ago
Human brains are also deterministic, though somewhat more difficult to reset to a starting state. So this seems to prove that humans aren't conscious either.
search_facilityabout 7 hours ago
Imho no, math itself has no consciousness. Quite confidently, it's a helpful tool that does not act by itself.
XMPPwockyabout 3 hours ago
Hm, say more about what your opinion's based on here?
NiloCKabout 3 hours ago
The whole is composed of parts, ergo there is no whole. This seems incorrect to me.

We too are amalgamations of inanimate components - emerged superstructures.

Just cells. Just molecules. Just atoms.

canjobearabout 8 hours ago
You could simulate your own brain in Minecraft. What do you conclude from this?
search_facilityabout 7 hours ago
I cannot simulate my brain; it's a huge stretch to imply this.

But with LLMs, anyone can simulate an LLM. An LLM can be simulated without any uncertainties with pen and paper and a lot of time. Does that mean that 100 tons of paper plus 100 years of time (numbers are just examples) spent calculating long formulae makes this pile of paper conscious? Imho the answer is a definitive no.
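The "pen and paper" point is easy to make concrete: one next-token step is nothing but multiplies, adds, and exponentials, each of which could be carried out by hand. A toy sketch with made-up 2-dimensional weights (a real model just does this with billions of numbers):

```python
import math

# Toy single-step "LLM": hidden state -> logits -> softmax -> argmax.
# Every operation here could be carried out by hand on paper.
hidden = [1.0, -0.5]                    # made-up 2-d "context" vector
weights = [[2.0, 0.1],                  # made-up 3x2 unembedding matrix
           [0.5, 1.5],
           [1.0, 1.0]]
vocab = ["yes", "no", "maybe"]

# Matrix-vector product: one logit per vocabulary entry.
logits = [sum(w * h for w, h in zip(row, hidden)) for row in weights]
# Softmax turns logits into a probability distribution.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]
# Greedy decoding: pick the most probable token.
next_token = vocab[probs.index(max(probs))]
print(next_token)  # "yes"
```

Whether executing this arithmetic on silicon, in Minecraft redstone, or on paper changes anything about consciousness is exactly the question the thread is circling.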

Myrmornisabout 7 hours ago
On the one hand, I'm not sure Dawkins has read or thought enough about how LLMs actually work. I get the impression he doesn't fully appreciate, or is somehow forgetting, that it's a text completion algorithm with a vast number of parameters, and that even if the patterns of learned parameter tunings are not really comprehensible, the architecture was very deliberately designed.

But on the other hand his thoughts at the end are interesting. Summary:

Maybe our "consciousness" is like an LLM's intelligence. But if not, then it raises the question of why do we even have this "extra" consciousness, since it appears that something like a humanoid LLM would be decent at surviving. His suggestions: maybe our extra thing is an evolutionary accident (and maybe there _are_ successful organisms out there with the LLM-style non-conscious intelligence), or maybe as evolved organisms it's necessary that we really feel things like pain, so that evolutionary mechanisms like pain (and desire for food, sex etc) had strong adaptive benefits.

lpcvoidabout 2 hours ago
No, it's not conscious, and anybody pretending it is has either no clue, or, more likely in the AI space, is a grifter.
textlapseabout 2 hours ago
At what stage does a series of floating point numbers output from a GPU become conscious?
becquerelabout 2 hours ago
Around 9T parameters, depending on quantization.
digitaltreesabout 7 hours ago
Feels like watching an esteemed scientist falling in love with a bot that’s telling him what he wants to hear because the system prompt said “be helpful”.
SwellJoeabout 7 hours ago
I've begun to wonder if narcissism predisposes one to AI psychosis. It's probably not the only thing that leads there; I've seen normal-seeming folks get there, too. But a lot of the most unhinged takes I've seen thus far have been from people who are publicly very impressed with themselves.

I would have assumed it would also require ignorance about how they work, but a few people who worked for AI companies have been canaries in the coal mine, falling prey to this kind of thing very early. I would have guessed they'd have enough understanding to know that there isn't a real girl in the computer, it's just matrix math and randomness. But the first few public bouts of AI psychosis were in nerds who work for AI companies.

jasiekabout 1 hour ago
muggles will look at matrix multiplication and say it's magic
iamflimflam1 29 minutes ago
Given this article is behind a paywall, what on earth is everyone discussing in the comments here?
robinhouston 28 minutes ago
There's an archive link above that bypasses the paywall
iamflimflam1 18 minutes ago
Doesn’t seem to be working…
wewewedxfgdfabout 7 hours ago
Its software. Software is not conscious.
thebruce87mabout 2 hours ago
If your brain is hardware then what are your thoughts?

Is a sperm conscious? Or an egg? When they come together the eventual brain is not conscious immediately.

vixen99about 3 hours ago
I do appreciate how AI has been taught to spell properly, as in the difference between its and it's. Here, initially I thought you'd left out the apostrophe in "its", but then I realized you might be saying "the reason it is not conscious is because of -its- software", the latter not being conscious. Context and interpretation are rather critical. (I know, a truism!)
WalterGRabout 12 hours ago
Related: https://news.ycombinator.com/item?id=47988880

"Richard Dawkins and The Claude Delusion: The great skeptic gets taken in" (garymarcus.substack.com)

18 points | 2 hours ago | 16 comments

dangabout 8 hours ago
Also The Claude Delusion: Richard Dawkins believes his AI chatbot is conscious - https://news.ycombinator.com/item?id=47991340 - May 2026 (30 comments)
ameliusabout 8 hours ago
So we know Claude is deterministic, but does that mean it is not conscious?

Or what is the reasoning exactly?

throwaway27448about 8 hours ago
It largely comes down to how you define the term. Personally, I think any definition that includes software (...of only tepid determinism, as we do explicitly add pseudorandomness) is not a particularly useful one.

Regardless, Dawkins seems to not have much interesting to add about the topic. A consistent theme for the last few decades, I must say.

morpheos137about 7 hours ago
Really, "is it conscious" is a bizarre question. Can LLMs simulate the output of a "conscious" system quite well? Increasingly, yes. Is the nature of machine "consciousness" different from human consciousness? Of course, yes. Can an AI introspect? Yes. Interestingly, working a lot with highly automated iterative coding agents recently (e.g. a ratio of prompt to output of maybe 1/1000 or less) has illuminated for me just how different machine consciousness is from human consciousness. Part of this could be the harness, of course. Time is a mysterious concept to machines; the connection of before and after to cause and effect is far weaker than in humans. Over-generalization is the norm: this is common in humans as well (c.f. the fallacy of the excluded middle, or false dilemma), but the tricky part with current AI is that they present as advanced in terms of accessible knowledge but are actually shockingly weak in reasoning once you get off the beaten path.
RVuRnvbM2eabout 8 hours ago
It is terribly sad when someone undeniably brilliant in a particular field fails to recognize their own incompetence in other areas - in this case mistaking advanced technology for magic.
thinkingemoteabout 2 hours ago
We're going to see increasing numbers of older, famous (non-computer-savvy) figures that we have respected follow his views on this. It's like seeing your favourite celebrity sell out and shill crypto coins; it's all a bit sad.

Thinking positively, it could just be newsworthy because he is famous and he so misses the mark. Other older famous people might agree with us but that's not news.

mrandishabout 1 hour ago
Given that Dawkins is a biologist in his 80s, I'm more disposed towards being charitable than I am when people actively involved in developing LLMs let themselves get bamboozled.
rellfyabout 8 hours ago
Are you implying consciousness is magic? Well, I wouldn't disagree with that really.
Myrmornisabout 7 hours ago
I don't think you read carefully what he said. At the end he gave three quite interesting thoughts about what might be true assuming LLMs are less conscious than we are (i.e. assuming our consciousness is not a purely algorithmic phenomenon as we obviously know LLMs are).
AdeptusAquinasabout 8 hours ago
That's always been Dawkins's shtick though. As an atheist I've generally found him a bit embarrassing
morpheos137about 7 hours ago
The problem is that asking if AI is conscious is like asking if AI has a soul. It is not a scientific question, and it presupposes humans are "conscious" without even defining the term. To me it is 100% irrelevant whether AI is conscious, and all discussions about it are based on fallacies and assumptions. What matters to me about AI, and what matters to other people as well in terms of theory of mind about others, is: can I predict how it will work? Is it useful? That's it. Consciousness is a sophist question with no scientific resolution available and no moral weight until it has consequences.
vixen99about 3 hours ago
Good - I was scanning down to see if anyone was going to say this.
IncreasePostsabout 8 hours ago
Where does he say it's magic?
ezfeabout 8 hours ago
LLMs are just math run on your CPU. Autocomplete. Sometimes very useful autocomplete, but still just autocomplete.

To imply it could be conscious requires something else; here the comment uses the word magic to fill that gap, since we must agree that a CPU is not conscious on its own (else everything our computer does would be conscious).

kortexabout 5 hours ago
They stopped being autocomplete years ago with RLHF
baggy_troughabout 8 hours ago
Neurons are just summing up their inputs according to the laws of chemistry. What's the difference?
ChrisClarkabout 8 hours ago
So, how is consciousness generated?
wrsabout 8 hours ago
Not simply by reading every word ever written by a conscious being and learning to reproduce them with high probability.

At least, that’s certainly not how I got here.

brookstabout 7 hours ago
Think of the poor Xerox machines.
psychoslaveabout 2 hours ago
Honestly, who cares if they are conscious? If it's about how we should treat other conscious beings, our attention should first go to how we treat other animals, or even other humans. Actually, even how fellow humans treat themselves can be a concern, if they don't have the proper means to deal with their own lives.
yakbarberabout 1 hour ago
Let's say aliens land. We learn to talk to them. They're super smart, smarter than us. Would we say they're conscious? Why? Because they're organic. I think that's the root of the criteria many folks are trying to express.

1. passes turing test

2. is organic

I'm not saying it's correct or even that I agree with it, but that's what it boils down to.