

Discussion (1 Comment)

jauntywundrkind • about 1 hour ago
I really enjoyed this recent post on prompts to get the LLM to write in different styles.

It's from Mike Caulfield, whom I just love anyway; he has been a committed and eloquent proponent of a high-signal world for a long time:

https://mikecaulfield.substack.com/p/a-300-word-prompt-that-... https://mikecaulfield.substack.com/p/introducing-jamesian-a-...

It was an interesting explanation of why LLMs talk the way they do: paratactic structures that lack subordinate clauses, leaving the reader to connect the ideas together themselves, presenting elements independently, unconnected but still adjacent. Admittedly I think the post's setup/exploration is good, but it's left as a reader exercise to see what Mike actually does in the prompts.

There's some great food for thought in this submission. But I think it risks presenting the idea that these compression artifacts are real, lossy hallmarks; I don't think that's necessarily a limit of the weights. My guess is that many of today's hallmark AI compression artifacts are really decompression artifacts: we haven't yet begun, as Mike has, to explore how to use language to steer our decompression, our expansion of those weights, into the forms we could or should ask for.