Following on from my last post: I wrote that post just before checking the code in.
As a final check before committing, I did some manual testing and found an obvious scenario that didn't work.
I thought I'd created a test case for it. Turns out I hadn't. So I added one, and it failed. Then I added an extra clause to the function and made sure all the tests passed.
Did the Copilot-generated tests cover this scenario?
It turns out they do and they don't.
There is a generated test with a name that implies it covers the scenario, but the body of the test method doesn't do what the name implies.
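To make that concrete, here's a minimal, entirely hypothetical sketch of the pattern (the function and test names are invented for illustration, not taken from my actual project): a generated test whose name promises edge-case coverage, but whose body never exercises that edge case, alongside a test that actually matches its name.

```python
def parse_quantity(text):
    """Parse a quantity string like '3' into an int."""
    if text.strip() == "":  # the extra clause added after the manual check
        return 0
    return int(text)

# Misleading: the name claims empty-input coverage,
# but the body only ever passes a non-empty string.
def test_returns_zero_for_empty_input_misleading():
    assert parse_quantity("3") == 3  # never calls it with empty input!

# Correct: the test body actually matches the name.
def test_returns_zero_for_empty_input():
    assert parse_quantity("") == 0
    assert parse_quantity("   ") == 0

test_returns_zero_for_empty_input_misleading()
test_returns_zero_for_empty_input()
```

The misleading version passes green, so a quick glance at the test run (or just the test names) gives false confidence that the edge case is covered.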
It's not enough to have the test, it's also necessary that it's valid and correct.
Review generated tests more thoroughly than generated code.
If the test is wrong, it leads to code that is wrong.
This also highlights the need to have other people review code (and test plans).
I know I'm not perfect; I make mistakes and omissions.
That's part of the reason for code reviews: to have an extra pair of eyes double-check things. (It's just a shame this was all on a personal project.)
This experience really has reinforced my belief that test cases should be written before code.