Testing - Where to begin?


Testing is a challenging aspect of software development. Nobody purposefully writes broken software, so at some level we all expect our code to be correct. I feel this overconfidence is the crux of why testing continues to be difficult for software developers. As people gain experience, they protect against it by validating their code more thoroughly. One reason newer devs don't do this is that they don't know what to test or how to test it. This section aims to cover the former. The latter, and the technical details of testing, are better left to more knowledgeable educators.

Is the logic truly being tested?

This question must always be asked. The trick is: do you understand the implementation well enough to know whether it's actually being exercised? One justification for code coverage metrics is that they tell you the code has actually run. But sometimes the test itself can hide the details of what's being run. For example, if you're testing a query that needs to filter some data, but the database only contains data that matches the filter, how can you be sure the filter works?
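To make the filter example concrete, here is a minimal sketch using Python's standard-library sqlite3 module. The `active_users` query and the `users` table are hypothetical; the point is that the test seeds rows that should be excluded, so a broken filter cannot pass silently.

```python
import sqlite3

def active_users(conn):
    """Hypothetical query under test: return names of active users only."""
    rows = conn.execute("SELECT name FROM users WHERE active = 1")
    return [name for (name,) in rows]

def test_filter_excludes_inactive():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, active INTEGER)")
    # Seed BOTH matching and non-matching rows; if every row were
    # active, a query with no WHERE clause at all would still pass.
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [("alice", 1), ("bob", 0)])
    assert active_users(conn) == ["alice"]

test_filter_excludes_inactive()
```

If `bob` ever shows up in the result, the test fails, which is exactly the signal the coverage number alone would not give you.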

Sometimes you may need to use a debugger to confirm that the logic is being exercised properly. At other times, you may need to temporarily restore the code to its broken state and confirm that your new test fails against it. This is typical when writing a test to prevent future regressions.

My shorthand process is to ask the following questions:

Can different datasets cause different results?

Can different datasets produce different results, and are those cases covered? When a query filters data on a number of facets, each of those facets needs to be tested. When a query contains a lot of logic, it becomes more difficult to test completely. This is one reason to write your code more modularly: it makes testing easier.
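A sketch of per-facet coverage, using a made-up `search` function with two hypothetical facets. Each facet gets its own case, plus a combined case, so no single filter can rot unnoticed.

```python
def search(products, *, category=None, in_stock=None):
    """Hypothetical query with two facets; each facet needs its own test case."""
    results = products
    if category is not None:
        results = [p for p in results if p["category"] == category]
    if in_stock is not None:
        results = [p for p in results if p["in_stock"] == in_stock]
    return results

PRODUCTS = [
    {"name": "mug", "category": "kitchen", "in_stock": True},
    {"name": "lamp", "category": "office", "in_stock": False},
]

# One case per facet, plus a combined case.
assert [p["name"] for p in search(PRODUCTS, category="kitchen")] == ["mug"]
assert [p["name"] for p in search(PRODUCTS, in_stock=False)] == ["lamp"]
assert search(PRODUCTS, category="office", in_stock=True) == []
```

In a real suite these would be separate (ideally parametrized) test functions, but the coverage idea is the same.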

Another question to consider: what happens to this component when the amount of data varies? For example, if there's pagination, is it being tested properly? The no-data case should also be considered. When looking at how to test a change, you need to see how the application works in its entirety. Visualize how the data flows from storage to the user and where things can vary. Then review those variations and follow their cascading effects to their conclusions. Finally, decide how much of that can reasonably and efficiently be tested.
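The data-volume cases above can be sketched with a toy `paginate` helper (a stand-in for whatever pagination your framework provides). Note the partial last page, the page past the end, and the no-data case alongside the obvious full page.

```python
def paginate(items, page, per_page):
    """Hypothetical pagination helper; pages are 1-indexed."""
    start = (page - 1) * per_page
    return items[start:start + per_page]

data = list(range(5))
assert paginate(data, 1, 2) == [0, 1]   # full page
assert paginate(data, 3, 2) == [4]      # partial last page
assert paginate(data, 4, 2) == []       # page past the end
assert paginate([], 1, 2) == []         # no data at all
```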

Are exception flows reasonably covered?

An exception flow here is any flow that's not the happy path: invalid data in a submission, an unauthenticated request, or even a literal exception raised at runtime. If you're implementing a tool that has human users, they will try inputs you do not expect, so consider how the solution will handle the unexpected. Keep in mind that tests have diminishing returns; getting the "right" ones in place efficiently is what you should strive for.
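A small sketch of exercising the unhappy paths, using a hypothetical `parse_age` input handler. The loop throws a handful of inputs a real user plausibly would at it and asserts each one is rejected rather than silently accepted.

```python
def parse_age(raw):
    """Hypothetical input handler; rejects anything off the happy path."""
    try:
        age = int(raw)
    except (TypeError, ValueError):
        raise ValueError(f"not a number: {raw!r}")
    if not 0 <= age <= 150:
        raise ValueError(f"out of range: {age}")
    return age

# Happy path, plus a few unexpected inputs users will inevitably send.
assert parse_age("42") == 42
for bad in ("forty", "", None, "-1", "9999"):
    try:
        parse_age(bad)
    except ValueError:
        pass
    else:
        raise AssertionError(f"accepted bad input: {bad!r}")
```

You don't need a test for every conceivable bad input, just enough to cover each distinct failure mode (non-numeric, missing, out of range).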

Do the changes require permission tests?

If the changed component restricts functionality based on the user, then that restriction needs to be confirmed in the tests. This type of logic is critical to test properly: if the wrong user finds a bug in your security, they may exploit it rather than notify you.
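A sketch of the point, with a hypothetical `delete_post` action. The important part is that the denial path is asserted explicitly; a suite that only tests the allowed path proves nothing about the restriction.

```python
def delete_post(user, post):
    """Hypothetical restricted action: only the author or an admin may delete."""
    if user["name"] != post["author"] and not user.get("is_admin", False):
        raise PermissionError("forbidden")
    post["deleted"] = True

post = {"author": "alice", "deleted": False}

# Test the denial path explicitly, not just the allowed path.
try:
    delete_post({"name": "mallory"}, post)
except PermissionError:
    pass
else:
    raise AssertionError("wrong user was allowed to delete")
assert post["deleted"] is False

delete_post({"name": "alice"}, post)
assert post["deleted"] is True
```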

An idea Adam Johnson pointed out to me is to avoid writing the same tests over and over to confirm composed or decorated logic. In Django, this is the challenge of testing that every view has the @login_required decorator. Rather than writing a pair of tests for each view (one for an authenticated user, one for an unauthenticated user), a single test can iterate over all the views and confirm that each has the decorator applied. This is a much better way of handling views that are added in the future: new views are tested automatically, and the developer doesn't have to remember to write the tests as each view is added.
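The meta-test idea can be sketched in plain Python. This is not Django's actual login_required (which doesn't expose a convenient marker); it's a stand-in that leaves a flag the meta-test can look for, and the view names and `ALL_VIEWS` list are invented for illustration. In real Django code you'd iterate the URL resolver instead of a hand-built list.

```python
import functools

def login_required(view):
    """Stand-in for Django's decorator; leaves a marker for the meta-test."""
    @functools.wraps(view)
    def wrapper(request, *args, **kwargs):
        return view(request, *args, **kwargs)
    wrapper.login_required = True
    return wrapper

@login_required
def dashboard(request): ...

@login_required
def settings_page(request): ...

def profile(request): ...  # oops: missing the decorator

ALL_VIEWS = [dashboard, settings_page, profile]

# One check covers every view, including ones added later.
# A real meta-test would assert this list is empty.
unprotected = [v.__name__ for v in ALL_VIEWS
               if not getattr(v, "login_required", False)]
assert unprotected == ["profile"]
```

The meta-test catches `profile` immediately, which is exactly the forgotten-decorator case that per-view tests only catch if someone remembers to write them.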