Discussion (79 comments) on Hacker News
> After forty years of vegetarianism, Max Berger was about to sit down to a feast of pork sausages, crispy bacon and pan-fried chicken breast. Max had always missed the taste of meat, but his principles were stronger than his culinary cravings. But now he was able to eat meat with a clear conscience.
> The sausages and bacon had come from a pig called Priscilla he had met the week before. The pig had been genetically engineered to be able to speak and, more importantly, to want to be eaten. Ending up on a human’s table was Priscilla’s lifetime ambition and she woke up on the day of her slaughter with a keen sense of anticipation. She had told all this to Max just before rushing off to the comfortable and humane slaughterhouse. Having heard her story, Max thought it would be disrespectful not to eat her.
> The chicken had come from a genetically modified bird which had been ‘decerebrated’. In other words, it lived the life of a vegetable, with no awareness of self, environment, pain or pleasure. Killing it was therefore no more barbarous than uprooting a carrot.
> Yet as the plate was placed before him, Max felt a twinge of nausea. Was this just a reflex reaction, caused by a lifetime of vegetarianism? Or was it the physical sign of a justifiable psychic distress? Collecting himself, he picked up his knife and fork . . .
> Source: Julian Baggini, The Pig That Wants to Be Eaten (2005), riffing on Douglas Adams's The Restaurant at the End of the Universe (Pan Books, 1980)
An easy example is dogs. We have bred dogs for centuries to love doing work for us. If they hated the work, it would be easy to call it cruel. If they loved it by nature, it would be easy to call it kind. But since we shaped them into creatures that love the work we need from them, where do the ethics fall?
Should we prevent them from doing what brings them joy? Should we make use of this win-win situation? If it is the latter, we are quickly approaching the ability to morph every species into something that gets joy from doing our work.
Dogs we changed by accident. The next species will not be an accident. Is it still a being's free will if the game was rigged from the start?
https://www.youtube.com/watch?v=yRV8fSw6HaE
But there's more to the setup than you might assume from a casual reading. Here's the code used for that demo:
https://github.com/SeanCole02/doom-neuron
So there is an entire PyTorch stack wrapped around the mysterious little blob of neurons -- they aren't just wired straight into WASD. There is a conventional convnet-based encoder, running on a GPU, in the critical path. The README tries to argue that the "neurons are doing the learning", but to my dilettante, critical eye it looks as though a hell of a lot of learning is happening in the convnet as well.
Are the neurons learning to play Doom, or are they learning to inject ever so slightly more effective noise into the critical path? Would this work just as well if we replaced the neurons with some other non-Markovian sludge? The authors do ablation experiments to try to get to the bottom of this, but I can't really tell how compelling the results are (due to my own ignorance/stupidity, of course).
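To make the worry concrete, here is a minimal sketch of the kind of loop being described. All names and shapes are my own illustration, not taken from the doom-neuron repo: the point is just that a trainable convnet sits in the critical path between the game and the dish, so gradient learning can happen upstream of the biology.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Conventional convnet mapping a game frame to a stimulation pattern."""
    def __init__(self, n_channels: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_channels),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.net(frame)

ACTIONS = ["left", "right", "forward", "shoot", "idle"]

def decode(firing_rates: torch.Tensor) -> str:
    """Map recorded channel activity back to a discrete game action."""
    return ACTIONS[int(firing_rates.argmax()) % len(ACTIONS)]

# One tick of the loop, with random noise standing in for the biological part:
encoder = Encoder()
frame = torch.rand(1, 3, 60, 80)                      # mock game snapshot
stimulation = encoder(frame)                          # encode: game -> signals
firing = stimulation + torch.randn_like(stimulation)  # "neurons" respond
action = decode(firing[0])                            # decode: signals -> WASD
```

If the encoder is trained end-to-end against game reward, it could learn to route useful information around whatever sits in the middle, which is exactly why the ablation question above matters.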
Yeah it feels like they constructed the conclusion and worked backwards from there. I'm not seeing how their claim has much merit.
Solms argues, I think convincingly, that consciousness fundamentally has to do with emotions and not cognition. Consciousness is not produced by the cortex but rather by the brainstem, where signals from all over the body converge (e.g. pain, hunger, itchiness, etc).
If that argument is true, then a petri dish of neurons is unlikely to be conscious, even if it performs some analogue of visual processing.
The book makes other arguments that I found less convincing. For example that consciousness is "felt homeostasis" and that a fairly simple system (somewhat more complex than a thermometer) will be conscious, albeit minimally.
The whole thing makes one thing extremely clear: people are very good at moving goalposts. We've blasted past the Turing test for all practical purposes, but we moved the definition of 'true intelligence'. Consciousness and intelligence have long been seen as highly correlated, or even the same thing. But now we have need of a separation between the two.
If we eventually create a truly intelligent AI (we're not there yet, I think), it will probably be a long time before people accept that creating an intelligent being means it should have 'rights' as well.
We're definitely not there yet, but at what point does turning off an AI become the same as killing a being? I think that's not being talked about enough. Sure, LLMs are just prediction engines. But so are we. Our brains are prediction engines tuned by evolution to do the best possible prediction of the near future to maximize survival. We are definitely conscious. But a housefly, is that conscious? What makes the difference? It's hard to tell.
On the other hand, an AI has no evolutionary reason to have a concept of fear or suffering, so maybe it's more like the Douglas Adams creature that doesn't mind being killed?
When this happens, it won't matter much what humans think.
I know what I'd do:
Why would you expect more concern from people about biological computing? It hasn't even demonstrated feasibility yet, while LLM-based "AI" is already widely used.
Still, the day we manage to run a full LLM on biological neurons, even if using conventional code under the hood, will be a very interesting day for consciousness discussions.
So then, is it a question of volume? Ask yourself, within the last 2 years, have you thought about LLMs or biocomputers more? Probably the former, right? LLMs are ubiquitous within day-to-day life and massively marketed to the public and biocomputers are esoteric lab experiments that most people come across in a once-in-a-blue-moon news article. We talk and think about things that we are adjacent to, those form our preoccupations. Why aren't people who speak up about the Israel/Palestine dynamic speaking up more about West Papua? Or the mid-19th century geopolitical relationship between Cambodia and Viet Nam? Epistemological asymmetry.
I think that until we can answer this question authoritatively, ruling out non-brain-based consciousness is not particularly well thought out - after all, plants exhibit communication and response mechanisms similar to those in animals, without a brain.
So what's your theory of consciousness and how does it preclude absolutely everything except wetware you generously include? :)
It doesn't. Humans aren't conscious. Nor are any other organisms. They don't have souls either, but that goes without saying since it's just an archaic synonym. Mostly this occurs because humans have painted themselves into corners morally-speaking, and they need justification to eat bacon or grow their population. And apparently "because we can and we want to" isn't the correct solution.
We'll never be able to "answer the question" because it is an absurd question on its face. "Where do we find the magical brain ghosts making us special" presupposes there is something to be found, and a negative answer proves only that we haven't looked hard enough.
>after all plants exhibit communication and response mechanisms that are similar to those in animals - without brain.
Were that line of inquiry followed to its inevitable conclusion, there would be a mass vegan suicide to look forward to.
We have this natural tendency to impose our feelings of self on the definition of consciousness. It's hard to accept that all of our thoughts, emotions, and behaviours could be calculated by a human with pen and paper (given enough humans and advances in neurobiological research).
I believe we will have to reckon with these loose definitions and eventually realize how lacking in utility they are for describing engineered intelligence.
The way I think of it is this:
Despite the fact that our brains consist of billions of neurons, we think of ourselves as a unit enclosed in a single skull. But studies on people who have had the two hemispheres of their brain separated suggest that two separate conscious entities can exist in one body.
If we removed the physical limitations of the brain's support systems, I think you could split the brain into smaller and smaller chunks of less and less conscious entities, until you reach single neurons, which almost certainly do not have consciousness.
The Invincible by Stanisław Lem is also a nice novel about a similar concept.
They like money
These technologies give some insight, but the answer is always "not really". It would be good if we studied actual human brains in some detail if we want to know these answers.
> "Life is just a turn on the great karmic wheel..."
> Writing is invented
> "In the beginning was the word..."
> The industrial age begins
> "God is a clockmaker..."
> Computers are invented
You know the rest
People smuggle in so many assumptions when they use words like consciousness or thinking or soul or personhood. I've never met a lay person who could talk clearly about AI safety issues unless we switched to language like 'process'.
Consciousness is an absolutely terrible term that's going to get us all killed by AI. I know a huge swath of people who think it's no big deal to torture an AI because it doesn't have a soul. Well, I see a LOT of non-theists smuggling soul rhetoric and thinking in via 'consciousness', and that's a problem.
You may find a look at how a full visual system is constructed to be a relief.
https://www.cell.com/fulltext/S0896-6273(07)00774-X
There is a good distance to go before this is anything beyond a reflex circuit.
https://www.sciencedirect.com/topics/neuroscience/spinal-ref...
>> While the neurons can play the game better than a randomly firing player, they’re not very good. “Right now, the cells play a lot like a beginner who’s never seen a computer—and in all fairness, they haven’t,” Brett Kagan, chief scientific officer at Cortical Labs, says in the video. “But they show evidence that they can seek out enemies, they can shoot, they can spin. And while they die a lot, they are learning.” [https://www.smithsonianmag.com/smart-news/a-clump-of-human-b... ]
This is totally false - not even a misleading metaphor, just plain wrong. The neuronal computer doesn't get any visual information:
>> So how does a petri dish of brain cells play Doom when it doesn’t have any eyes? Or fingers? "We take a snapshot of the game with information like the player’s health and the position of enemies, pass it through a neural network, convert it into numbers, and send the data,” explains Cole. “This is called encoding – essentially turning the game state into signals the neurons can understand. The neurons then fire an output – move left, move right, walk forward, shoot or not shoot – which the system decodes and converts back into actions in the game." [https://www.theguardian.com/games/2026/mar/16/petri-dish-bra...]
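The quote is worth taking literally: what reaches the dish is a handful of numbers, not pixels. A hedged sketch of that encoding step (field names and channel counts are illustrative, not from the actual project):

```python
def encode_game_state(health: int, enemy_x: float, enemy_y: float,
                      n_channels: int = 8) -> list[float]:
    """Turn a structured game snapshot into per-channel stimulation values.

    Note there is no image anywhere: just a few scalars derived
    from game state, tiled across the stimulation channels.
    """
    features = [health / 100.0, enemy_x, enemy_y]
    return [features[i % len(features)] for i in range(n_channels)]

def decode_response(firing_counts: list[int]) -> str:
    """Map the strongest-firing channel back to a discrete game action."""
    actions = ["left", "right", "forward", "shoot"]
    strongest = firing_counts.index(max(firing_counts))
    return actions[strongest % len(actions)]

signals = encode_game_state(health=75, enemy_x=0.4, enemy_y=-0.2)
print(signals)                         # eight numbers, no visual data
print(decode_response([3, 9, 1, 2]))   # channel 1 fires most -> "right"
```

So "the neurons see the game" is, at best, a loose way of saying "the neurons receive a low-dimensional encoding of the game state".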
I am also concerned about neuronal computing. But it doesn't really help anyone to spread childish ghost stories about it.
I really hate YouTube, by the way. My dad used to read newspapers and had interesting ideas. Now he watches a bunch of YouTube and he's a huge idiot. It's not (directly) because of age: nobody is immune to narcotic slop. I had to delete my account when I realized how much of my life and cognition I was wasting. I wish others would do the same.
Books can make you an idiot too - I think of "Rich Dad, Poor Dad" or "Grit" or any number of pseudo-science bestsellers. These books end up capturing the public imagination in big ways too - Grit drove some US government policy around the time it was popular.
The difference, I suppose, is that YouTube works faster by having many different people presenting the same bad ideas that the algorithm has helped you to buy into.
On the other hand there are amazing and useful YouTube channels that I use all the time like Practical Engineering, Crafsman, Technology Connections, Park Tools, SciShow, Crash Course, and on and on.
Also, it can be argued the author was either playing fast and loose or knowingly misleading readers with her statistics: https://www.npr.org/sections/ed/2016/05/25/479172868/angela-...
If you like Podcasts the "If Books Could Kill" Podcast goes into some of this story again too.
I hate the proliferation of audiobooks too, by the way. It's the exact same problem.
Anecdote: When I started studying economics, I really agreed with a lot of what I read from economists like David Ricardo, Marx, Smith, etc. Then I studied what other economists had to say and could see how they disagreed with the former. This made me realize that I agreed with those people because their arguments 'made sense' to me, but that doesn't mean that what they said is completely true. This has stayed with me; I always wonder how something can be wrong.
The printing press is a good example: one of the first books was on 'witch hunting', which panicked people and led to a lot of deaths. The first 'conspiracy theory' to sweep over humans.
Humans are just highly susceptible to manipulation. YouTube is just taking it to the next level. Like the difference between eating coca leaves and snorting coke.
Playing DOOM is playing DOOM - whether it's through your keyboard and mouse or by progressing through the game states to move forward - hope that makes sense.
0 - https://arxiv.org/pdf/2602.11632
Would the person tasked with placing X and O marks still be "playing Doom"?
You move, you plan, your actions have outcomes. Same question as if you're playing a choose-your-own-adventure storybook.
0 - https://github.com/Kuberwastaken/backdooms
Again I share the ethical concern about this stuff. But your blog post is quite misleading.
But 'seeing' in humans is also a bit manipulated.
Does it really matter to the argument whether it is seeing 'red', or just that it is 'sensing input'?
This did have some real scientific backing, even if the results are hyped.
It is a little extreme to call this false just because it appeared on YouTube.
The brain does a lot of manipulation of the input images - the pixels from the retina - and that doesn't sound far from just linear algebra.
There will be no line as long as there is the rush to win the capitalist game.
UNTIL -> The ball of neurons begins outthinking the humans. Probably also fused with some AI augmentation.
It only takes a few percentage points for a human to outthink a chimp. This new 'thing' will dominate the humans.
A living bundle of neurons that can grow and learn is exciting to think about.
It's also terrifying to imagine the ramifications considering how things are going with silicon based AI.
They are, but those last few months of changing diapers, when you just wish you could trust them to tell you they have to go to the potty, are difficult.
Will they need to nap as well?
On that note, I'm so glad all my kids are past potty training.
Only in this telling, Sisyphus is rolling his uneven boulder along that asymptotic curve a little further with every iteration toward a smiling Zeus.