Testing - It's more than asserts!
Testing is practically a philosophy. As code is written, how do you know it works? How do you know it can be delivered to your customers reliably and successfully? This section asks you to consider higher-level testing topics: topics that go beyond verifying that your code works, because testing is more than asserts.
Some changes are more important than others. Changes to a one-off page can get away with some light testing. Changes to your billing or the core feature of your service require more rigorous testing. Any change that could impact a significant portion of your customers' experience should be well tested. If you don't feel that the testing is adequate, speak up. Explain what you see, why it's falling short, and why it's beneficial to do more testing.
Mocking and patching are extremely powerful, but a bit dangerous to use. They are great for confirming a specific unit of code does exactly what it should do. However, they have a tendency to be brittle because they aren’t easy to keep up to date with the functionality being mocked.
Mocks should be used sparingly: reserve them for cases where the alternative is very costly to implement, or use them to augment integration tests.
When looking at a change, you want to be able to envision what the testing flow looks like. While it's always beneficial to automate your tests, sometimes that's difficult. To some degree, manual tests are always required. Knowing how a change needs to be tested helps you communicate about the change. If you're reviewing code, you can ask what the test plan is. If it doesn't align with your expectations, you need to resolve that misunderstanding. If you're the developer, you should communicate the test plan to the customers and reviewers so they can compare it to what they anticipated. In the end this aligns expectations, reduces surprises, and results in more reliable software.
Software development is subject to trade-offs. Testing is not exempt from that. When you develop software that has a user interface, things become complicated pretty quickly. If you're working with the web you have different devices, operating systems, browsers and display sizes. Testing every combination of those is a very difficult task. Those combinations start to grow exponentially when your application introduces configuration options that vary per group or user.
I won’t argue that you should test every permutation of your application. The cost is prohibitive. Instead, I’d argue that you should make well-thought-out decisions on what combinations don’t need to be tested. Not testing something because you don’t want to or because “it’s hard” is unacceptable. Now, if you explain that writing a specific test for something will take a week, is brittle to changes and/or covers an insignificant user flow, then that may be a good enough reason to exclude the test. You’d still need to explain why it would take a week, is brittle to changes or why that user flow is insignificant, but it’s the beginning of a discussion.
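To make the combinatorial growth concrete, here is a sketch sizing a hypothetical UI test matrix; the dimensions and values are illustrative, not a recommendation:

```python
from itertools import product

browsers = ["chrome", "firefox", "safari", "edge"]
oses = ["windows", "macos", "linux", "ios", "android"]
viewports = ["mobile", "tablet", "desktop"]
feature_flag = [True, False]

# The full cartesian product: every combination of every dimension.
full_matrix = list(product(browsers, oses, viewports, feature_flag))
print(len(full_matrix))  # 4 * 5 * 3 * 2 = 120

# A deliberate subset: cover every browser and every viewport at least
# once (a pairwise-style reduction), instead of every combination.
curated = [
    ("chrome", "windows", "desktop", True),
    ("firefox", "linux", "desktop", False),
    ("safari", "macos", "tablet", True),
    ("safari", "ios", "mobile", False),
    ("edge", "windows", "mobile", True),
]
print(len(curated))  # 5 targeted runs instead of 120
```

Each dimension you add multiplies the matrix, which is exactly why the decision about what *not* to test deserves real thought.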
If your project is not heavily used, you can likely ignore the question of how your change gets deployed. If you have enough users that you strive for 100% uptime, then you need to ask it.
There will be times when the change isn't impacting the functionality of your code at all, but instead is focused on changing the environment in which it runs. Adding a not nullable column to a database is a decent example of this. If you're only changing your database schema and not including any functional changes, it seems like an easy change. There's very little to actually test and the change to your application is minimal, if anything at all. However, the deployment itself could be problematic. Adding a not nullable column requires a lock on the table and will prevent any other queries from making updates to that table.¹ If the table is very small, you can probably get away with this. If the table is large or has a lot of usage, and you're not sure of its impact, you need to test the deployment.
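One common way to sidestep the long lock is to split the migration into smaller steps. This sketch demonstrates the pattern with SQLite (so it's self-contained); the table and column names are hypothetical, and the final step shown in the comment is PostgreSQL-specific:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO accounts (email) VALUES (?)",
                 [("a@example.com",), ("b@example.com",)])

# Step 1: add the column as *nullable* first -- a cheap metadata change
# that doesn't need to touch existing rows.
conn.execute("ALTER TABLE accounts ADD COLUMN plan TEXT")

# Step 2: backfill existing rows. On a large table you'd batch this by
# id range so no single statement holds locks for long.
conn.execute("UPDATE accounts SET plan = 'free' WHERE plan IS NULL")
conn.commit()

# Step 3 (PostgreSQL only, shown for illustration):
#   ALTER TABLE accounts ALTER COLUMN plan SET NOT NULL;
# This validates quickly because every row is already populated.

rows = conn.execute("SELECT plan FROM accounts").fetchall()
print(rows)  # every row now has a value
```

Even with a pattern like this, the point stands: the deployment itself is the thing that needs testing, ideally against a realistically sized copy of the data.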
Another consideration is how your software itself is delivered. Perhaps you're maintaining a library and have a script to automatically publish your package. How do you test that script? If you're using PyPI, there is a separate test repository (TestPyPI) that your CI process can upload the package to. That way, when you make changes to the publishing script, the flow is tested well before it needs to be used to publish the next version. I learned about this the hard way.
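As a sketch of what that CI step could look like: the commands below build a package, upload it to TestPyPI, and install it back from there as a smoke test. The package name `yourpkg` and the `TEST_PYPI_TOKEN` secret are placeholders, and this assumes the `build` and `twine` packages are installed.

```shell
# Build the distribution artifacts into dist/.
python -m build

# Upload to TestPyPI instead of the real index. TWINE_USERNAME/TWINE_PASSWORD
# are environment variables twine reads; the password is a TestPyPI API token
# stored in your CI secrets.
TWINE_USERNAME=__token__ TWINE_PASSWORD="$TEST_PYPI_TOKEN" \
  twine upload --repository-url https://test.pypi.org/legacy/ dist/*

# Smoke-test the publish by installing from TestPyPI (the extra index lets
# pip resolve dependencies that only exist on the real PyPI).
pip install --index-url https://test.pypi.org/simple/ \
  --extra-index-url https://pypi.org/simple/ yourpkg
```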
I’m assuming PostgreSQL for the database here. Your tech stack may differ. ↩