
Neither can humans with any degree of reliability.
This criticism is crap, but you probably won’t understand why unless you have a pretty firm grounding in philosophy and have thought about this sort of thing for many years.
So as not to write an entire screed (which is my wont): this clownery is either trivially true in a way that also applies to humans (you can't get causality from correlation alone), or false if it's meant in the sense that LLMs cannot do causal reasoning at all, because of course causal reasoning is possible once you supply assumptions and models, which LLMs have access to and make use of.
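If you want to see what "assumptions plus models" buys you in the most boring possible way, here is a toy sketch, entirely mine and with invented numbers, using nothing fancier than textbook backdoor adjustment: correlation alone gives you the wrong effect, but once you assume the confounding structure, the causal effect falls right out of the same observational data.

```python
# Toy illustration (mine, simulated data): correlation alone misleads,
# but assuming the causal structure Z -> X, Z -> Y, X -> Y lets you
# recover the true effect of X on Y by adjusting for the confounder Z.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)                        # confounder
x = 2.0 * z + rng.normal(size=n)              # "treatment", partly driven by Z
y = 1.0 * x + 3.0 * z + rng.normal(size=n)    # outcome; true effect of X is 1.0

def ols(design, target):
    """Least-squares coefficients for target ~ intercept + design."""
    design = np.column_stack([np.ones(len(target)), design])
    coef, *_ = np.linalg.lstsq(design, target, rcond=None)
    return coef

naive = ols(x.reshape(-1, 1), y)               # correlation only
adjusted = ols(np.column_stack([x, z]), y)     # adjustment justified by the assumed DAG

print(f"naive slope on X:    {naive[1]:.2f}")      # ~2.2, badly biased
print(f"adjusted slope on X: {adjusted[1]:.2f}")   # ~1.0, the causal effect
```

The point is not that this snippet is deep; it is that the "can't get causality from correlation" complaint dissolves the moment you grant the modeling assumptions that every human causal inference also quietly relies on.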
Do people actually think? At all? Do they even try? I see no evidence of that. In that sense, LLMs already best 99.9999% of humans I routinely encounter.