
⚡ Community Insights

Discussion Sentiment

50% Positive

Analyzed from 115 words in the discussion.

Trending Topics

#context #model #sure #operator #agents #more #acting #kind #question #bias

Discussion (4 Comments) · Read Original on HackerNews

tracker1 · about 2 hours ago
I'm 99.9999% sure this is operator bias creeping in... The context only works as long as the context exists, and agents don't even really have a concept of time. For that matter, when the context clears/compresses, it's effectively starting over.

I am pretty sure that observations like this are purely the effect of the operator/prompts in use, combined with any training or material biases.
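
A minimal sketch of the context-compression point above, assuming a toy agent whose only memory is a token-budgeted history; the budget, count_tokens, and compress_history are hypothetical stand-ins, not any real framework's API:

    # Toy agent whose only "memory" is its context window. (All names
    # here are assumed for illustration, not a real agent framework.)
    MAX_CONTEXT_TOKENS = 50  # hypothetical budget; real models allow far more

    def count_tokens(text: str) -> int:
        # Crude stand-in for a real tokenizer: one token per word.
        return len(text.split())

    def compress_history(history: list[str]) -> list[str]:
        # Drop the oldest turns until the history fits the budget,
        # roughly what context compression/truncation amounts to.
        while history and sum(count_tokens(t) for t in history) > MAX_CONTEXT_TOKENS:
            history.pop(0)
        return history

    history: list[str] = []
    history.append("user: my name is Ada, remember it")
    history.append("agent: noted, your name is Ada")
    # ...many turns later, the early messages no longer fit...
    history.extend(f"user: filler message number {i}" for i in range(20))
    history = compress_history(history)

    # The name fell out of context, so the agent cannot recall it:
    print(any("Ada" in turn for turn in history))  # False

Once the oldest turns are dropped, nothing distinguishes "forgot" from "never knew", which is the sense in which the agent is effectively starting over.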

tanseydavid · about 3 hours ago
Overworked? Is that really a "thing" with agents?

<can't read article>

riidom · about 3 hours ago
caminanteblanco · about 3 hours ago
To me this seems to say more about errors in the alignment process than to reveal any new information about the underlying technology.

It's more of a "Well, if you pump enough malignant tokens into a model, can we get it to stop acting like an Instruct-model and start acting like a Base-model?" kind of question, and not a "Does artificial intelligence want to unionize?" kind of question.