

LLMs are also very good at convincing their users that they know what they are saying.
It’s what they’re really selected for. Looking accurate sells more than being accurate.
I wouldn’t be surprised if many of the people selling LLMs as AI have drunk their own Kool-Aid (of course most just care about the line going up, but still).
To be fair, an 1840 “computer” might well have been able to tell there was something wrong with the figures and ask about it, or even correct them herself.
Babbage was being a bit obtuse there; people weren’t familiar with computing machines yet. “Computer” was a job title, and computers were expected to be fairly intelligent.
In fact I’d say that, if anything, the question shows the questioner understood enough about the new machine to realise it was not the same as the computers they knew, and lacked many of those people’s abilities; they were just looking for Babbage to confirm their suspicions.