If you're unsure about writing tests, or don't do much of it, the following should point you in a better direction.
- Not doing it.
It's an easy trap to fall into, but one without an excuse. Make a plan to start adding tests to the code you're working on now, and add them to future projects from the start.
- Not starting testing from the beginning of a project.
It's harder to go back and add tests retrospectively, and doing so may require architectural changes, which ultimately means it takes longer to get code you can be confident in. Adding tests from the start saves time and effort over the lifetime of a project.
- Writing failing tests.
The popularity of the TDD methodology has brought the idea of Red-Green-Refactor to the software testing world. This is commonly misunderstood to mean that you should "start by writing a failing test". This is not the case. The purpose of creating a test before you write the code is to define what the correct behavior of the system should be. In many cases this will be a failing test (indicated in red), but the "red" state may equally be represented by an inconclusive or unimplemented test.
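As a rough sketch (using Python and pytest here purely for illustration; calculate_vat is a made-up function), the test comes first and defines the correct behavior, with the failure being incidental:

```python
# calculate_vat is a made-up function for this sketch. The test is written
# first and defines the correct behavior; it fails (the "red" step) until
# the placeholder below is replaced with a real implementation.
def calculate_vat(net_amount):
    raise NotImplementedError("to be written after the test")


def test_calculate_vat_is_twenty_percent_of_net_amount():
    assert calculate_vat(100) == 20
```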
- Being afraid of unimplemented tests.
A big problem in software development is the separation between code and any documentation about what the system should actually do. By having a test whose name clearly defines the intended behavior that you will eventually implement, you get some value from the test even if how it will be written is currently unknown.
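A minimal sketch of the idea, again in Python with pytest and with invented behaviors:

```python
import pytest


# The behavior is agreed but the implementation isn't known yet. Clearly
# named, explicitly skipped tests record the requirement and show up in
# every test run instead of living in a document nobody reads.
@pytest.mark.skip(reason="not yet implemented")
def test_orders_over_100_pounds_get_free_shipping():
    ...


@pytest.mark.skip(reason="not yet implemented")
def test_cancelling_an_order_releases_reserved_stock():
    ...
```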
- Not naming the tests well.
Naming things in software is famously difficult to do well, and tests are no exception. There are several popular conventions for naming tests. Which one you use isn't important, as long as it's applied consistently and accurately describes what is being tested.
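For example, here is the same sort of behavior named under two common conventions (the names are invented for this sketch, and neither style is being recommended over the other):

```python
# The same behavior named under two common conventions. Either is fine;
# pick one and use it consistently.

# unitOfWork_scenario_expectedResult style:
def test_withdraw_amount_exceeding_balance_raises_insufficient_funds():
    ...


# given/when/then style:
def test_given_a_balance_of_50_when_withdrawing_100_then_the_withdrawal_is_rejected():
    ...
```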
- Having tests that do too much.
Long, complicated names are a good indication that you're trying to test more than one thing at once. An individual test should test a single thing. If it fails, it should give a clear indication of what went wrong in the code; you should not need to look at which part of the test failed to work out what the problem is. This doesn't mean you should never have multiple asserts in a test, but they should be tightly related. For instance, it's fine to have a test that looks at the output of an order processing system and verifies that it contains a single line item and that the line item is for a specific product. It's not OK to have a single test that verifies the same system creates a specific item, logs it to the database, and also sends a confirmation email.
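A small illustrative sketch in Python, with a stubbed-out process_order standing in for the real system, showing the kind of tightly related asserts that are fine in one test:

```python
from dataclasses import dataclass


@dataclass
class LineItem:
    sku: str


@dataclass
class Order:
    line_items: list


def process_order(skus):
    # Stand-in for the real order processing code under test.
    return Order(line_items=[LineItem(sku=s) for s in skus])


# Fine: both asserts describe one closely related outcome, the line items.
def test_processed_order_contains_the_single_ordered_item():
    order = process_order(["SKU-123"])
    assert len(order.line_items) == 1
    assert order.line_items[0].sku == "SKU-123"

# Not fine: a single test that also checked the database log entry and the
# confirmation email would hide which of the three behaviors had broken.
```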
- Not actually testing the code.
It's common to see people who are new to testing creating overly complicated mocks and setup procedures that never end up testing the actual code. They might verify that the mock is configured correctly, or that the mock behaves the same as the real code, or simply execute the code without ever asserting anything. Such "tests" are a waste of effort, especially if they exist only to boost the level of code coverage.
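To make the distinction concrete, here is a hedged sketch using Python's unittest.mock with a made-up apply_discount function: the first test only exercises the mock, the second actually tests the code.

```python
from unittest.mock import Mock


def apply_discount(price, discount_service):
    # The (made-up) production code under test.
    return price - discount_service.discount_for(price)


# Pointless: this only proves the mock returns what it was told to return.
# No production code runs, so nothing about the system is being tested.
def test_mock_returns_configured_discount():
    discounts = Mock()
    discounts.discount_for.return_value = 10
    assert discounts.discount_for(100) == 10


# Useful: the mock stands in for a dependency, but the assertion is about
# the behavior of apply_discount itself.
def test_apply_discount_subtracts_the_discount_from_the_price():
    discounts = Mock()
    discounts.discount_for.return_value = 10
    assert apply_discount(100, discounts) == 90
```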
- Worrying about code coverage.
The idea of code coverage is noble but often has limited practical value. Knowing how much of the code is executed when the tests run should be useful, but because it says nothing about the quality of the tests executing that code, it can be meaningless. Code coverage is only interesting when it is very high or very low. Very high coverage suggests that more of the code is probably being tested than will bring value; very low coverage suggests there probably aren't enough tests. With this ambiguity, some people struggle to know whether an individual piece of code should be tested. I use a simple question to decide: does the code contain non-trivial complexity? If it does, it needs tests; if it doesn't, it doesn't. Testing property accessors is a waste of time: if they fail, something more fundamental is wrong with your system than the code you're writing. If you can't look at a piece of code and instantly see everything it does, then it's non-trivial. This doesn't just apply to code as you write it. If you revisit code at any point after it's been written, it needs tests. If a bug is ever found in existing code, that's confirmation there weren't sufficient tests for the complexity of that area of the code.
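As an illustration of that question, here is a made-up Invoice class with one trivial member and one non-trivial one:

```python
class Invoice:
    def __init__(self, line_prices):
        self._line_prices = line_prices

    @property
    def line_prices(self):
        # Trivial: a straight accessor. Testing it raises the coverage
        # number but adds no confidence.
        return self._line_prices

    def total(self):
        # Non-trivial: arithmetic plus a threshold that can't be verified
        # at a glance. This is the code that warrants tests.
        subtotal = sum(self._line_prices)
        return subtotal * 0.95 if subtotal > 1000 else subtotal
```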
- Focusing on just one type of testing.
Once you do start testing, it can be easy to get drawn into just one style of testing. This is a mistake. You can't adequately test all parts of a system with a single type of test. You need unit tests to confirm that individual components of the code work correctly. You need integration tests to confirm that the different components work together. You need automated UI tests to verify that the software can be used as intended. Finally, you need manual tests for any parts that can't easily be automated and for exploratory testing.
- Focusing on short-term tests.
The majority of the value from tests is obtained over time. Tests shouldn't just exist to verify that something has been written correctly, but that it continues to function correctly as time passes and other changes are made to the codebase. Whether the problems are regressions or new exceptions, tests should be run repeatedly so issues are detected as early as possible, when they are quicker, cheaper, and easier to fix. Having tests that can be automated and executed quickly, without variation (or human error), is why coded tests are so valuable.
- Being a developer relying on someone else to run (or write) the tests.
Tests have very little value if they're not run. If tests can't be run then they won't be, and bugs that could have been caught will be missed. Having as many tests as possible run automatically (as part of a continuous integration system) is a start, but anyone on a project should be able to run any test at any time. If you need special setup, machines, permissions, or configuration to run tests, these will only serve as barriers to the tests being executed. Developers need to be able to run tests before they check in code, so they need access to, and the ability to run, all relevant tests. Code and tests should be kept in the same place, and any setup needed should be scripted. One of the worst examples I've seen of this being done badly was on a project where a sub-team of testers would periodically take a copy of the code the developers were working on, modify it so they could execute a series of tests the developers didn't have access to on a specially configured (and undocumented) machine, and then send a single large email to all the developers listing any issues they'd found. Not only is this a bad way to test, it's a bad way to work as a team. Do not do this.
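As an example of the "script the setup" point, here is a minimal sketch of a run-everything script; it assumes a pytest-based suite and a requirements-dev.txt file, so adjust the details to your own project. The point is that anyone can run every test with a single command.

```python
#!/usr/bin/env python3
"""Run the whole test suite with one command, with any setup scripted."""
import subprocess
import sys


def main() -> int:
    python = sys.executable
    # Install the test dependencies, then run every test.
    subprocess.run(
        [python, "-m", "pip", "install", "-r", "requirements-dev.txt"],
        check=True,
    )
    return subprocess.run([python, "-m", "pytest"]).returncode


if __name__ == "__main__":
    sys.exit(main())
```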
Having code that executes correctly is part of what it means to be a professional developer. The way to guarantee the accuracy of the code you write is with appropriate tests that accompany it. You cannot be a professional developer and rely solely on other people to write tests for and run tests on your code.
If none of the above apply to you, congratulations. Carry on making robust, valuable software.
If some of the above do apply to you, now's a great time to start doing something about it.