Simon Willison writes in “Hallucinations in code are the least dangerous form of LLM mistakes”:

If you’re using an LLM to write code without even running it yourself, what are you doing?

Hallucinated methods are such a tiny roadblock that when people complain about them I assume they’ve spent minimal time learning how to effectively use these systems—they dropped them at the first hurdle.
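The reason a hallucinated method is such a small roadblock is that it fails the moment you run the code. As a hypothetical illustration (the `read_json_lines` helper below is invented, not a real `json` API), the error surfaces on the very first call:

```python
import json

# An LLM might confidently suggest a convenience method that doesn't exist.
# Running the snippet exposes the hallucination immediately with an
# AttributeError, long before any subtler logic bug would show up.
try:
    records = json.read_json_lines("events.jsonl")  # hypothetical, hallucinated API
except AttributeError as err:
    print(f"Caught the hallucination on first run: {err}")
```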

AI is going to lower the barrier for folks getting creative with code, but if you are a seasoned developer you have to be an active participant to get the most out of LLMs.