Certain names make ChatGPT grind to a halt, and we know why
https://news.ycombinator.com/item?id=42304333
https://arstechnica.com/information-technology/2024/12/certain-names-make-chatgpt-grind-to-a-halt-and-we-know-why/
LLMs (and current ML in general) are essentially statistically compressed, lossy databases; queries statistically decompress them, yielding erroneous random data due to the nature of this lossy compression (I think of it as statistically linked bits in vector form).
https://news.ycombinator.com/item?id=42398560
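One way to make the lossy-compression analogy concrete is a toy quantization sketch (my own illustration, not from the linked thread): store values at low precision ("compression"), reconstruct them ("decompression"), and observe the unavoidable error.

```python
import random

# Toy illustration of the "lossy compressed database" analogy:
# heavy quantization loses information, so every reconstruction
# carries some error, much like a lossy query of compressed data.

def compress(values, levels=8):
    """Quantize floats in [0, 1) down to a few discrete levels (lossy)."""
    return [int(v * levels) for v in values]

def decompress(codes, levels=8):
    """Reconstruct approximate values; the lost precision cannot be recovered."""
    return [(c + 0.5) / levels for c in codes]

random.seed(0)
data = [random.random() for _ in range(5)]
restored = decompress(compress(data))
errors = [abs(a - b) for a, b in zip(data, restored)]
print(max(errors))  # bounded by half a quantization step: <= 1/16
```

The reconstruction is never exact, only statistically close; the parallel is loose, but it captures why answers decompressed from such a store can be plausibly wrong.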
Tips for prompting o1
https://news.ycombinator.com/item?id=42750096
https://www.latent.space/p/o1-skill-issue