Parallels between humans and LLMs
- Both learn through reinforcement
- It's all about prediction
  - LLMs: Always predict the next word
  - Humans: Prediction is a core organizing principle of the nervous system. (See Lisa Feldman Barrett, *How Emotions Are Made*: predictive processing.)
- Autocomplete machines
  - LLMs: Always predict the next word ("stochastic parrots", "autocomplete machines")
  - Humans: We also often don't know what will come out of our mouths before we say it
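To make "always predict the next word" concrete, here is a toy sketch: a bigram frequency model over a made-up two-sentence corpus. It is nothing like a neural network, but the generation loop has the same shape real LLMs use: predict a next token from context, append it, repeat.

```python
import random
from collections import defaultdict, Counter

# Tiny made-up corpus, purely for illustration.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which (a bigram model).
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(prompt, n_words=5, seed=0):
    """Autocomplete: repeatedly predict a next word and append it."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        counter = transitions.get(words[-1])
        if not counter:
            break  # no observed continuation
        # Sample proportionally to observed frequency ("stochastic parrot").
        choices, weights = zip(*counter.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the cat", n_words=3))  # → "the cat sat on the"
```

The point of the sketch is the loop, not the model: at no step does the generator "know" how the sentence will end, which is the parallel the bullet above draws to human speech.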
And this: in a discussion about whether prompt engineering will stay a thing, this came up:
> We prompt engineer people constantly. When people talk about ‘performing class’ they are largely talking about prompt engineering for humans, with different humans responding differently to different prompts, including things like body language and tone of voice and how you look and so on. People will totally vibe off of everything you say and do and are, and the wise person sculpts their actions and communications based on this. — Zvi
Also: Unconscious bias.
These paragraphs include speculation about the internal workings of large artificial neural networks. Such networks are sufficiently complicated that we can't actually look inside and say “ah yes, now it's evolved from reflexes into having goals” or “OK, so there’s the list of drives it has.” Instead, we basically have to do psychology: looking at how it behaves in various settings, running various experiments on it, and trying to piece together the clues. And it’s all terribly controversial and confusing.