I’ve long appreciated the aphorism: “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it?”
Much the same goes for using a generative AI.
It’s like that one uncle everyone seems to have. You can ask them about any topic, and they will immediately and confidently produce some kind of answer. Unless you already have a grasp of the topic yourself, you’ll have no idea whether the answer is any good.
A lawyer recently got in trouble for submitting a brief that contained confabulated case citations spun up by ChatGPT. The citations looked reasonable but had the unfortunate attribute of not existing. Last I heard, he may face sanctions. Always verify when it’s important.
Coding AIs are particularly troublesome in the hands of the inexperienced. They can be useful as really smart autocomplete, but as soon as one recommends code that isn’t what you would have written anyway, be prepared to dig in and do some verification. Always verify when it’s important. A coder who succumbs to the moral hazard of letting it slide may end up in as rough a spot as the lawyer above.
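To make that concrete, here’s a hypothetical sketch of my own (not from any particular assistant): the kind of one-liner an AI might confidently offer for “remove duplicates from a list, keeping the original order,” next to what a quick verification pass would catch.

```python
# Hypothetical example: plausible-looking suggested code with a subtle flaw.

def dedupe(items):
    # Looks reasonable, and may even appear to work on a casual spot check --
    # but set() makes no ordering guarantee, so this silently drops the
    # "keeping the original order" requirement.
    return list(set(items))

def dedupe_verified(items):
    # What verification turns up: dict preserves insertion order (guaranteed
    # since Python 3.7), so this keeps the first occurrence of each item.
    return list(dict.fromkeys(items))

print(dedupe_verified([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

The flawed version can even pass a quick eyeball test on small inputs, which is exactly why “it looks right” isn’t verification.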
Make no mistake, these are supremely useful tools. It’s just that we’re still figuring out the trade-offs and prerequisites of using them.
Special thanks to Brian Kernighan for the opening quote, taken from _The Elements of Programming Style_.
Are you curious about what new AI tools can do for you and your company? Message me!
Originally posted on LinkedIn. 100% free-range, human-written.