October 3, 2025
https://github.com/karpathy/llm.c
A large language model is a function. Given a sequence of characters, it returns a sequence of characters. How does it do that?
It takes training data consisting of “correct” answers and interpolates between them.
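As a rough sketch of that idea (a toy stand-in, not a real model): store a few “correct” (prompt, answer) training pairs and answer a new prompt with the answer of the closest stored prompt. The pairs and the prefix-overlap “distance” below are invented purely for illustration.

```c
/* Toy stand-in for "interpolating" training data: answer a new prompt
 * with the answer of the most similar known prompt. */
#include <stdio.h>
#include <string.h>

struct pair { const char *prompt; const char *answer; };

static const struct pair training[] = {
    { "capital of France", "Paris" },
    { "capital of Italy",  "Rome"  },
};

/* crude similarity: length of the common prefix */
static size_t overlap(const char *a, const char *b) {
    size_t n = 0;
    while (a[n] && b[n] && a[n] == b[n]) n++;
    return n;
}

/* pick the training answer whose prompt matches the new prompt best */
static const char *respond(const char *prompt) {
    size_t best = 0, best_i = 0;
    for (size_t i = 0; i < sizeof training / sizeof training[0]; i++) {
        size_t s = overlap(prompt, training[i].prompt);
        if (s >= best) { best = s; best_i = i; }
    }
    return training[best_i].answer;
}

int main(void) {
    printf("%s\n", respond("capital of Fr"));  /* prints "Paris" */
    return 0;
}
```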
Machine learning began in … with … perceptrons. They were modelled after neurons in the brain. Given a set of inputs, they would either fire or not.
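A minimal perceptron sketch along those lines: form a weighted sum of the inputs and either fire or not depending on a threshold. The weights, inputs, and threshold here are made up for illustration.

```c
/* Perceptron: weighted sum of inputs, fires (1) if above threshold, else 0. */
#include <stdio.h>

#define N_INPUTS 3

static int perceptron(const double x[N_INPUTS],
                      const double w[N_INPUTS],
                      double threshold) {
    double sum = 0.0;
    for (int i = 0; i < N_INPUTS; i++)
        sum += w[i] * x[i];
    return sum > threshold ? 1 : 0;
}

int main(void) {
    double x[N_INPUTS] = { 1.0, 0.0, 1.0 };  /* example inputs  */
    double w[N_INPUTS] = { 0.6, 0.4, 0.3 };  /* example weights */
    printf("fires: %d\n", perceptron(x, w, 0.5));  /* prints "fires: 1" */
    return 0;
}
```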
The best prompt is “yes”. It clues you in to the current context of the LLM you are using. Use the output to detect what it is not telling you.
An LLM is a function from an input string prompt and a context to an output string. So-called “prompt engineering” is really about context engineering.
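A sketch of that signature, with the context simply prepended to the prompt before the model sees either string. The llm_generate stub below is made up and just echoes its input; a real model would go in its place.

```c
/* The LLM as a function of (context, prompt) -> output string. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* stand-in for a real model: string in, string out (caller frees) */
static char *llm_generate(const char *input) {
    char *out = malloc(strlen(input) + 1);
    strcpy(out, input);
    return out;
}

/* context and prompt are concatenated before the model ever sees them,
 * which is why shaping the context matters as much as the prompt */
static char *llm_respond(const char *context, const char *prompt) {
    size_t n = strlen(context) + strlen(prompt) + 2;
    char *input = malloc(n);
    snprintf(input, n, "%s\n%s", context, prompt);
    char *output = llm_generate(input);
    free(input);
    return output;
}

int main(void) {
    char *out = llm_respond("You are a terse assistant.", "yes");
    printf("%s\n", out);
    free(out);
    return 0;
}
```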