How do you know that you know what you know?
What it means to prove knowledge to yourself, and why that's important
February 25, 2026
For the last couple hundred years, the typical answer to this question has been roughly the same.
The most obvious option was school. A combination of grades, exams, and degrees was a pretty good metric for how well you knew a certain topic. Not perfect, but for the most part good enough. Another common option was a person's resume (or today, their LinkedIn). Metrics like X years of experience and referrals were also good enough. More recently, for cases not well covered by school, like tech builders and content creators, a personal portfolio of successful projects could tell a useful story of agency, ambition, and aptitude.
As silicon-based intelligence continues to rapidly grow in ability and adoption, these common proofs of knowledge will no longer work.
AI-driven friction is already fairly common today. One simple example is Cluely, which job applicants use to get AI help at every step of the interview process, even live coding assessments. Another is the declining hireability of CS majors, as most students just use ChatGPT for all their assignments and don't actually learn anything.
These are top-of-mind examples for me, but my point is that you could find 20 more in any context. This is only going to get worse.
The obvious solution is to play the cat-and-mouse game and build better anti-cheat software for standardized tests, job interviews, etc., basically making these existing knowledge proofs harder to fool. This is a pointless game in the age of abundant intelligence, and playing it does nothing to make people learn effectively. My main argument is that we should instead build to support learning in the age of AI.
In asking "how do you know that you know what you know?" there are two different questions depending on the interpretation of the first "you". If "you" an evaluator (employer, teacher, etc), then you're asking "how do we prove knowledge"? If "you" is oneself, then you're asking "how do I know that I know something?
Up until now, the distinction wasn't really important. The two questions were logically equivalent: a result on one would answer the other. But today, if an evaluator is easily cheated, how do you actually know that you know something?
At first, the idea that this is a question people actually care about comes off as overly optimistic. If GPT-5.2 is PhD-level in most subjects, do we really expect people to try even harder to learn things? Especially if there is no actual test for them to take, and they are doing it only for themselves?
Yes. As more people are displaced from jobs by AI and have time to care about the things they're interested in, more and more of them will want to learn. This behavior is probably somewhat rare today, but I fully agree with Karpathy when he says that people will eventually treat learning the way they treat going to the gym today.
If you can agree with that, then you can see why a way to prove to yourself that you know something is going to be super useful.
Now the question is: what does that look like?
If everyone in the world had a personal tutor in the things they cared about, that would probably solve this. So the next best thing is to expect frontier LLMs to act as personal tutors and solve it that way. As of today, this is obviously not even close to working.
What I do think is possible is a system that automates the process of teaching a topic, much like the very popular Feynman technique: you demonstrate understanding by explaining something simply. But to get feedback for yourself, you'd ideally need an AI system you could trust to evaluate how well you can teach it.
More specifically, how to build this is something I'm curious about.
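To make that a bit more concrete, here's a minimal sketch of what the core loop might look like. Everything in it is hypothetical: `call_llm` stands in for whatever chat-model API you'd actually use, and the rubric is just one guess at what "evaluate how well you can teach something" could mean in practice.

```python
# Minimal sketch of a Feynman-style self-evaluation loop.
# `call_llm`, the rubric, and the JSON schema are all assumptions,
# not a real product's API -- swap in your own model provider.

import json

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a chat model and return its text reply."""
    raise NotImplementedError("wire up your model provider here")

RUBRIC = """You are grading how well a student can *teach* a topic.
Score 1-5 on each axis: accuracy, completeness, and clarity (could a
novice follow it?). Then write one follow-up question that probes the
weakest part of the explanation. Reply as JSON with the keys:
accuracy, completeness, clarity, follow_up."""

def evaluate_explanation(topic: str, explanation: str) -> dict:
    # Ask the model to grade the explanation against the rubric.
    prompt = f"{RUBRIC}\n\nTopic: {topic}\n\nStudent's explanation:\n{explanation}"
    return json.loads(call_llm(prompt))

def teach_loop(topic: str) -> None:
    # You "teach" the topic; the model grades it and probes the gaps,
    # like a skeptical student would.
    explanation = input(f"Explain '{topic}' as if to a novice:\n> ")
    report = evaluate_explanation(topic, explanation)
    print(f"accuracy={report['accuracy']} "
          f"completeness={report['completeness']} "
          f"clarity={report['clarity']}")
    print("Follow-up to answer next:", report["follow_up"])

if __name__ == "__main__":
    teach_loop("why the sky is blue")
```

The hard part isn't the loop itself, it's the trust question from above: the grader has to be reliable enough that a good score actually means you know the thing, rather than that you've learned to please the rubric.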