What LLMs do, and what they don't
Large language models are statistical prediction systems. Very complex, sure, but the function at the bottom of everything is statistical prediction. Statistics is a very powerful way to model reality. Fairly simple logic, applied to careful observation, led to things like Newton's Laws of Motion. Those kinds of laws embody prediction too, but prediction that focuses on individual instances: if you have one physical object that's moving, it will keep going just as it is until some external force acts on it. Statistical prediction deals with multiple instances. In the case of statistical systems like opinion polling or large language models, the "multiple" can be a very big number indeed.
I think that's why interacting with an LLM can feel so much like conversing with an intelligence. The system has access to (or "has been trained on") far more information than any human could read or otherwise embody, and it can predict what it should "say" based on all that data.
There are some operations, though, that statistical prediction, even prediction based on a vast scope of data, isn't well suited to. Here's one I hadn't thought of before: LLMs are not good at, and may be incapable of, randomizing. Sometimes randomizing is exactly what you want. Like when you want to generate a secure password! Choose your computing tools carefully; LLMs don't fit all purposes, regardless of what the salespeople claim.
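For the password case, the right tool is a cryptographically secure random source, not a text predictor. Here's a minimal sketch in Python using the standard-library `secrets` module, which draws from the operating system's CSPRNG; the function name and the 16-character default are just illustrative choices, not any kind of standard.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a password by drawing each character independently
    from a cryptographically secure random source."""
    # Letters, digits, and punctuation give roughly 94 symbols,
    # so each character contributes about 6.5 bits of entropy.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

The key point is that `secrets.choice` is backed by the OS randomness pool, so every run is genuinely unpredictable; an LLM asked for a "random" password is still just emitting statistically likely tokens.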