Karpathy’s first frequently asked question is “Does the model ‘understand’ anything?” “That’s a philosophical question,” he answers diplomatically, “but mechanically: no magic is happening.” Do 200 lines of Python code understand anything? My siblings in Christ, I hope it’s clear how utterly bizarre this question is. And it translates directly to the same question for Anthropic’s Claude, which is not doing anything different. If we make the input file bigger, if we make its mathematical processing more efficient, if we prepend a long document describing how we imagine a helpful robot might act to the user’s input, at which of those steps does “understanding” happen?
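To make the mechanics concrete, here is a minimal sketch of that last step, with a random stand-in where the trained model would go (the names `sample_next_token`, `SYSTEM_PROMPT`, and `chat` are my own illustrative inventions, not anyone’s actual code). The entire “chat” pipeline is string concatenation plus repeated next-token sampling:

```python
import random

# Hypothetical stand-in for a trained language model: given the text so
# far, return one next token. A real model replaces this function with a
# neural network; the loop around it stays exactly the same.
def sample_next_token(context: str) -> str:
    vocab = list("abcdefghijklmnopqrstuvwxyz .")
    return random.choice(vocab)

# The "long document describing how we imagine a helpful robot might act."
SYSTEM_PROMPT = "You are a helpful robot. Be polite and concise.\n"

def chat(user_input: str, max_tokens: int = 40) -> str:
    # Step 1: prepend the system prompt to the user's input.
    context = SYSTEM_PROMPT + user_input
    out = []
    # Step 2: repeatedly predict the next token and append it.
    for _ in range(max_tokens):
        out.append(sample_next_token(context + "".join(out)))
    return "".join(out)

print(chat("What is 2 + 2?"))
```

Swap the random stub for a bigger network and a bigger input file and you get Claude; no step along the way adds a new kind of ingredient.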
AI isn't people