Testing is a poor substitute for reasoning, restated

Just saw a tweet extolling the virtues of some "AI" tool for auto-generating unit tests. It struck a chord because I have spent my afternoon having Copilot author unit tests in order to get our CI to accept my patch.

The tweet's author has it all backwards: the way out of the hellscape of authoring boilerplate, redundant, brain-dead unit tests is not to employ a tool to do it for you (and "AI" is just a tool). The way forward is to get back to reasoning about our code, both in the classical & modern senses of the word.

The first step is: for the love of God, get yourself a type system. A type system that will allow you to render illegal state unrepresentable in your program. You know you're doing it right when you struggle to author negative unit tests that even compile. The reason Python devs are so gung-ho on unit tests is that they've forsworn their best ally in detecting illegal state: types.
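
To make that concrete, here's a minimal sketch in Rust (my go-to for the example, nothing more; `LooseUser`, `Email`, and `User` are made up for illustration). The loose version leaves an illegal state lying around that you then have to police with tests; the enum version makes that state impossible to write down in the first place.

```rust
// A type that lets illegal state exist: nothing stops `email: None`
// paired with `email_verified: true`, so you end up writing unit tests
// to police it.
#[allow(dead_code)]
struct LooseUser {
    email: Option<String>,
    email_verified: bool,
}

// The same idea with illegal state rendered unrepresentable: a verified
// address cannot exist without the address itself.
enum Email {
    None,
    Unverified(String),
    Verified(String),
}

struct User {
    name: String,
    email: Email,
}

fn main() {
    let user = User {
        name: "Ada".into(),
        email: Email::Unverified("ada@example.com".into()),
    };

    // The negative unit test you'd have written against LooseUser
    // ("verified but no address") has no compiling counterpart here:
    // there is no way to build an `Email::Verified` without an address.

    match user.email {
        Email::None => println!("{}: no email on file", user.name),
        Email::Unverified(addr) => println!("{}: {addr} (unverified)", user.name),
        Email::Verified(addr) => println!("{}: {addr} (verified)", user.name),
    }
}
```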

The next step: start thinking about your code in terms of the contract each entity offers, and develop the habit of reasoning about the whole universe of inputs against that contract. If you find that hard… take it as a hint to work in terms of contracts that are easier to reason about.
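
Again a sketch in Rust, with invented names: the precondition lives in the signature via `NonZeroU32`, so the universe of inputs you have to reason about is exactly the universe the function can ever be called with, and the postcondition is written down as a debug assertion rather than scattered across test cases.

```rust
use std::num::NonZeroU32;

/// Contract: split `budget` evenly across `parts` recipients.
///
/// Precondition, carried by the type: `parts` is non-zero, so the division
/// is total over every input this function can ever see.
/// Postcondition, checked in debug builds: nothing is lost or invented.
fn split(budget: u32, parts: NonZeroU32) -> (u32, u32) {
    let share = budget / parts.get();
    let remainder = budget % parts.get();
    debug_assert_eq!(share * parts.get() + remainder, budget);
    (share, remainder)
}

fn main() {
    let parts = NonZeroU32::new(3).expect("3 is non-zero");
    assert_eq!(split(10, parts), (3, 1));

    // The "what if parts is zero?" test has nowhere to live: the illegal
    // input is rejected where the NonZeroU32 is constructed, not here.
    println!("{:?}", split(10, parts));
}
```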

Finally, and this is the part I'm really interested in, start writing down the invariants you want your program to maintain, and hand them off to an SMT solver to prove that they actually hold.
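
Here's the flavor of what I mean, hedged heavily because this is the part I'm still exploring: the sketch below uses Kani, a Rust model checker that hands its verification conditions to a solver under the hood rather than being a raw SMT binding, and `clamp_to` plus its proof harness are invented for illustration. You'd run it with `cargo kani`.

```rust
// Plain library code: the invariant we care about is "the result is
// always within [lo, hi]".
fn clamp_to(x: i32, lo: i32, hi: i32) -> i32 {
    if x < lo {
        lo
    } else if x > hi {
        hi
    } else {
        x
    }
}

// A Kani proof harness: instead of sampling a handful of inputs the way a
// unit test would, the checker considers every i32 triple satisfying the
// assumption and proves the assertion, or hands back a counterexample.
#[cfg(kani)]
#[kani::proof]
fn clamp_stays_in_range() {
    let x: i32 = kani::any();
    let lo: i32 = kani::any();
    let hi: i32 = kani::any();
    kani::assume(lo <= hi); // the contract's precondition
    let y = clamp_to(x, lo, hi);
    assert!(lo <= y && y <= hi); // the invariant, proved for all inputs
}

fn main() {
    // Ordinary use; the proof harness above only exists under `cargo kani`.
    println!("{}", clamp_to(42, 0, 10)); // prints 10
}
```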

The answer to code that is difficult to reason about is not a tool that writes insipid code to exercise it.