LLM Patterns
Various ranges of context
Context is the information that a chat instance of an LLM has available. This includes the general system prompt, specific user prompts (usually set in the settings) and the chat history of the current chat. Recently, this context window has expanded to connected apps like Gmail, Calendar, GitHub, Google Docs, etc. ChatGPT also has a memory feature where information and preferences about the user are saved. On 2025-04-10, ChatGPT gained access to all the previous chats with the user as context, so it will "remember" these things in every new chat.
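To make "context" concrete, here is a minimal sketch of what actually gets sent to the model on every turn. It assumes an OpenAI-style chat-completions message format; the prompts and memory entries are invented for illustration.

```python
# Everything the model "knows" on a given turn is assembled into one request.
# The structure follows an OpenAI-style chat-completions message list;
# the concrete contents here are made up.

system_prompt = "You are a helpful assistant."               # general system prompt
custom_instructions = "The user prefers concise answers."    # user settings
memory = ["User is writing a thesis."]                       # saved memory entries

chat_history = [
    {"role": "user", "content": "Summarize this paper for me."},
    {"role": "assistant", "content": "Here is a summary: ..."},
]

new_user_message = {"role": "user", "content": "Now rewrite it for a lay audience."}

# The context is simply the concatenation of all of the above.
messages = (
    [{"role": "system", "content": system_prompt + "\n" + custom_instructions}]
    + [{"role": "system", "content": "Memory: " + "; ".join(memory)}]
    + chat_history
    + [new_user_message]
)
```

Connected apps and "chat history as context" just add more entries to this bundle; nothing about the mechanism changes.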
This has some obvious pros: for one, the model generally gets more useful the more it knows about the user and their preferences. Users don't have to explain things over and over again.
There are downsides too. Sometimes certain context leads the model in directions you don't want it to go. Simon Willison writes:
I’m an LLM power-user. I’ve spent a couple of years now figuring out the best way to prompt these systems to give them exactly what I want.
The entire game when it comes to prompting LLMs is to carefully control their context—the inputs (and subsequent outputs) that make it into the current conversation with the model.
The previous memory feature—where the model would sometimes take notes on things I’d told it—still kept me in control. I could browse those notes at any time to see exactly what was being recorded, and delete the ones that weren’t helpful for my ongoing prompts.
The new memory feature removes that control completely.
Personal experience
I used to think the more context the better, so I would feed Claude a lot of context in certain projects. That would sometimes backfire. I remember an example where, for my thesis, I didn't like the pseudonyms of my participants and asked Claude to generate 8 new random names. Claude then proceeded to generate the very same 8 names I had in the draft of my thesis, which I had shared as a Google Doc, while being apparently unaware of where that information came from. When I asked Claude about it, it claimed that must have been a coincidence.
User Patterns
One Chat
A lot of people seem to have just one chat instance open and change topics in it all the time. I wonder how common that is. I always start new instances for every little thing. I like the blank slate it gives me (even though that is less and less of a thing, see above).
Switch models during the convo
I often switch models during a conversation, depending on what kind of answer I want. So I might use ChatGPT o3 to do some research on something and then switch to 4o and ask it to write something about the results. o3 is clearly better at research, 4o is clearly better at writing.
Scolding LLMs
Something I gave up on, but I'm sure many people still do: scolding LLMs when they mess something up, or trying some sort of "gotcha". The model will react in predictable ways and learn nothing.
Not only will it not "feel guilty" (it doesn't feel anything); every answer is generated from all the context present, including the chat history, but it's not the same entity. The continuity of a conversation with an LLM is an illusion. That's why different models can pick up the conversation without problems and without any awareness that anything has changed.
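A minimal sketch of why that works, assuming an OpenAI-style chat API (the model names are placeholders): the full history is resent with every request, so nothing ties the conversation to one model, and swapping models mid-chat is just a different string in the next call.

```python
from openai import OpenAI  # assumes the OpenAI Python SDK (>= 1.0)

client = OpenAI()

# The "conversation" is nothing but this list; the server keeps no state between calls.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
]

def ask(model: str, user_message: str) -> str:
    """Append a user turn, send the entire history, append the reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model=model, messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# The same history can be handed to a different model mid-conversation;
# the new model "continues" the chat without any awareness of the switch.
ask("o3", "Research the key papers on this topic.")          # model names are illustrative
ask("gpt-4o", "Now write an intro paragraph based on that.")
```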
Interaction vs tool
There seem to be roughly two types of users. For the first, the actual interaction matters: the quality of how "human" or "alive" the model feels. This includes using a model for "therapeutic" interactions. A lot of people are into that, and I suspect that's why ChatGPT's advanced voice mode is so popular, and why Glaze ChatGPT was so popular.
In the other camp, where I count myself, are the people who are more results-focused and see LLMs as tools: they want something out of the model. How the conversation feels (one might say its vibe) is less important. But the personality of the model matters here too; we just prefer a personality that is aligned with our goals, so it understands what we want, gets to the point, isn't annoying, etc. ChatGPT o3 (at the time of writing) is a good example of that.