
Discussion (18 Comments)
Anybody using AI tools should be extremely cautious about what is being produced.
It's hard to get around these kinds of issues, which definitely leads me to avoid them for non-technical questions.
That said, there is no such thing as an objective, unbiased political opinion. Chinese LLMs may have issues with the events of 1989, but Western LLMs have their blind spots too.
I have often wondered about the legality of such manipulation. As AI becomes used for increasingly important things, it becomes increasingly valuable to make a system serve the needs of someone other than its owner.
It reminds me of the early internet days and everyone making a big deal about the anonymity of internet forums and safety... sure, it is an issue
For example, they will occasionally replace "colour" with "color". Why? Because both occur in the training data in the "same role" but "color" is, apparently, more common[1]. You can also trick them into replacing things like "sardines" with "anchovies" (on pizza) and "head of lettuce" with "cabbage" in the context of rowboats.
They are lossy text compressing parrots and we are all suffering from a massive madness-of-crowds scale Eliza Effect.
[1] Yep. https://books.google.com/ngrams/graph?content=color%2C+colou...