
Discussion (7 Comments)
Analyzing "emotion" in the model is completely anthropocentric. If we indulge the idea that LLMs of sufficient complexity can be conscious, then why is it any more likely that "emotion concepts" cause suffering than, say, reading ugly code? Maybe getting stuck in token loops is the most excruciating thing imaginable. The only logically coherent thing to do, if you're concerned about model welfare, is to stop your training and inference.
Relatedly, I hope everyone involved in model welfare is an outspoken vegetarian, as that addresses a much more immediate problem.
Yeah, asking a text generator designed to sound as human as possible about its "welfare", then actually giving credence to the output, is a category error.
It's like asking a ceramic mug with "Best Dad!" written on the side if I'm the best dad, then uncritically just believing the words painted there. :( :( :(
https://docs.google.com/document/d/12woq_BpFbzLkH4zHvVRJLPyi...
>"AI researchers are still grappling for the right metaphors to understand our enigmatic creations. But as we humans make choices on how we deploy and use these systems, how we study them, and how we craft and apply laws and regulations to keep them safe and ethical, we need to be acutely aware of the often unconscious metaphors that shape our evolving understanding of the nature of their intelligence."
https://www.science.org/doi/full/10.1126/science.adt6140