I defined "not too bad" as "I was expecting it to have no idea what I was talking about"
That is the fundamental issue with tools like ChatGPT right now: you must NEVER use them for anything you couldn't verify yourself.
I actually use ChatGPT productively by letting it write minor scripts, saving me the time of reading man-pages or looking up specific standard libraries I haven't used in ages.
It works amazingly well and saves me a lot of time, but it only works because I notice when the AI messes up. Then I point out the issue and tell it to fix it (usually with precise instructions on how), and that procedure works for me.
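A minimal sketch of the kind of script meant here, assuming the task is "list every file under a directory modified in the last week" (the task, the function name, and the bug noted in the comment are illustrative, not from any particular session):

```python
#!/usr/bin/env python3
"""List files under a directory modified within the last N days."""
import sys
import time
from pathlib import Path

def files_modified_within(root: str, days: float):
    """Yield regular files under `root` modified in the last `days` days."""
    cutoff = time.time() - days * 86400  # convert days to seconds
    for path in Path(root).rglob("*"):
        # A first AI draft might call path.stat() on everything, which
        # raises FileNotFoundError on broken symlinks; checking is_file()
        # first filters those out.
        if path.is_file() and path.stat().st_mtime >= cutoff:
            yield path

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for f in files_modified_within(root, days=7):
        print(f)
```

The broken-symlink edge case is exactly the kind of detail a generated draft tends to miss and an experienced reader catches on review.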
However, that is not an approach any layman could take. Beginner programmers/developers will have a hard time engineering a proper prompt to get what they want, and if the result doesn't work, they have no means to fix it.
The same goes for asking it for scientific knowledge. It will get a lot of the principles right, but many details are likely to be wrong. It usually just makes up numbers that sound right but have no basis in reality.
So generating a helpful text that explains how gravity works is easy with the AI, but unless you yourself know how gravity works, you will not notice the minor mistakes it makes. And it WILL make mistakes.
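To make that concrete with a hypothetical example (the numbers below are mine, chosen only to illustrate the failure mode): Newton's law of gravitation is

$$ F = G \frac{m_1 m_2}{r^2}, \qquad G \approx 6.674 \times 10^{-11}\ \mathrm{N\,m^2\,kg^{-2}} $$

and an AI summary that quotes G as 6.674 × 10⁻¹⁰ reads exactly as authoritatively; unless you already know the constant, nothing in the text flags the error.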
AIs are far from replacing experts, but they are already a great resource for experts to use.