Although I agree with you, I have heard the argument in other contexts.
The “It’s just an LLM” card gets played as soon as minor errors or anomalies show up: it’s “just a machine that strings together somewhat logical sentences,” even though that very ability already makes the program more intelligent than some people I know.
Both. Both is good.